<?xml version="1.0" encoding="UTF-8" standalone="no"?><?xml-stylesheet href="http://www.blogger.com/styles/atom.css" type="text/css"?><rss xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" version="2.0"><channel><title>Robust Football Tracking in Video</title><description></description><managingEditor>noreply@blogger.com (Unknown)</managingEditor><pubDate>Sat, 14 Sep 2024 05:39:50 -0700</pubDate><generator>Blogger http://www.blogger.com</generator><openSearch:totalResults xmlns:openSearch="http://a9.com/-/spec/opensearchrss/1.0/">15</openSearch:totalResults><openSearch:startIndex xmlns:openSearch="http://a9.com/-/spec/opensearchrss/1.0/">1</openSearch:startIndex><openSearch:itemsPerPage xmlns:openSearch="http://a9.com/-/spec/opensearchrss/1.0/">25</openSearch:itemsPerPage><link>http://robustfootballtrackinginvideo.blogspot.com/</link><language>en-us</language><item><title>ROBUST OBJECT TRACKING USING JOINT COLOR-TEXTURE HISTOGRAM</title><link>http://robustfootballtrackinginvideo.blogspot.com/2010/05/robust-object-tracking-using-joint.html</link><category>Journal Used</category><pubDate>Tue, 25 May 2010 08:19:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-1328511248982019707.post-4355584009142310878</guid><description>This paper is written by Jifeng Ning, Lei Zhang, David Zhang and Chengke Wu&lt;br /&gt;&lt;br /&gt;A novel object tracking algorithm is presented in this paper by using the joint color-texture&lt;br /&gt;histogram to represent a target and then applying it to the mean shift framework.&lt;br /&gt;Apart from the conventional color histogram features, the texture features of&lt;br /&gt;the object are also extracted by using the local binary pattern (LBP) technique to&lt;br /&gt;represent the object. The major uniform LBP patterns are exploited to form a mask&lt;br /&gt;for joint color-texture feature selection. 
Compared with the traditional color histogram&lt;br /&gt;based algorithms that use the whole target region for tracking, the proposed algorithm&lt;br /&gt;effectively extracts the edge and corner features in the target region, which characterize&lt;br /&gt;the target better and represent it more robustly. The experimental results validate that&lt;br /&gt;the proposed method greatly improves the tracking accuracy and efficiency with fewer&lt;br /&gt;mean shift iterations than standard mean shift tracking. It can robustly track the target&lt;br /&gt;under complex scenes, such as when the target and background have similar appearance,&lt;br /&gt;where traditional color-based schemes may fail.&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;Introduction :&lt;br /&gt;&lt;br /&gt;Real-time object tracking is a critical task in computer vision applications. Many&lt;br /&gt;tracking algorithms have been proposed to overcome the difficulties arising from&lt;br /&gt;noise, occlusion, clutter and changes in the foreground object or in the background&lt;br /&gt;environment. Among the various tracking algorithms, mean shift tracking algorithms&lt;br /&gt;have recently become popular due to their simplicity and efficiency.&lt;br /&gt;&lt;br /&gt;The mean shift algorithm was originally proposed by Fukunaga and Hostetler&lt;br /&gt;for data clustering. It was later introduced into the image processing community by&lt;br /&gt;Cheng. Bradski modified it and developed the Continuously Adaptive Mean Shift&lt;br /&gt;(CAMSHIFT) algorithm to track a moving face. Comaniciu and Meer successfully&lt;br /&gt;applied the mean shift algorithm to image segmentation and object tracking. Mean&lt;br /&gt;Shift is an iterative kernel-based deterministic procedure which converges to a local&lt;br /&gt;maximum of the measurement function with certain assumptions on the kernel&lt;br /&gt;behaviors. 
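To make the iterative procedure concrete, here is a minimal one-dimensional mean shift sketch in Python. It is an illustrative assumption, not the paper's implementation: the function name, the flat kernel, and the sample data are all invented for this example.

```python
import numpy as np

def mean_shift_1d(samples, start, bandwidth=1.0, tol=1e-4, max_iter=100):
    """Minimal 1-D mean shift: climb toward a local density maximum by
    repeatedly averaging the samples that fall inside the kernel window."""
    x = float(start)
    for _ in range(max_iter):
        window = samples[np.abs(samples - x) <= bandwidth]
        if window.size == 0:
            break
        new_x = window.mean()        # flat-kernel mean shift step
        if abs(new_x - x) < tol:     # converged to a density mode
            return new_x
        x = new_x
    return x

# Samples clustered around 5.0 and 9.0; starting at 4.0 the procedure
# climbs to the nearby mode near 5.0.
data = np.array([4.6, 4.8, 5.0, 5.1, 5.2, 5.3, 9.0, 9.1])
mode = mean_shift_1d(data, start=4.0)
```

In the tracking setting the same hill-climbing step is applied to a similarity surface between the target histogram and candidate-window histograms, rather than to raw 1-D samples.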
Furthermore, mean shift is a low complexity algorithm, which provides&lt;br /&gt;a general and reliable solution to object tracking and is independent of the target&lt;br /&gt;representation.&lt;br /&gt;&lt;br /&gt;The texture patterns, which reflect the spatial structure of the object,&lt;br /&gt;are effective features to represent and recognize targets. Since the texture features&lt;br /&gt;introduce new information that the color histogram does not convey, using the joint&lt;br /&gt;color-texture histogram for target representation is more reliable than using only the&lt;br /&gt;color histogram when tracking complex scenes. The idea of combining color and edge for&lt;br /&gt;target representation has been exploited by researchers.7,10 However, how to utilize&lt;br /&gt;both the color intensity and texture features effectively is still a difficult problem.&lt;br /&gt;&lt;br /&gt;This is because, though many texture analysis methods, such as gray-level co-occurrence&lt;br /&gt;matrices9 and Gabor filtering, have been proposed, they have high computational&lt;br /&gt;complexity and cannot be directly used together with the color histogram.&lt;br /&gt;Currently, a widely used form of target representation is the color histogram,&lt;br /&gt;which can be viewed as the discrete probability density function (PDF) of the&lt;br /&gt;target region. The color histogram is an estimate of the point sample distribution&lt;br /&gt;and is very robust in representing the object appearance. However, using only color&lt;br /&gt;histograms in mean shift tracking has some problems. First, the spatial information&lt;br /&gt;of the target is lost. Second, when the target has a similar appearance to the&lt;br /&gt;background, the color histogram becomes ineffective at distinguishing them. For a better&lt;br /&gt;target representation, gradient or edge features have been used in combination&lt;br /&gt;with the color histogram. 
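The histogram-as-discrete-PDF idea described above can be sketched in a few lines of Python. This is an illustrative assumption, not the paper's code: the helper name, the 8-bins-per-channel quantization, and the synthetic region are all invented for the example.

```python
import numpy as np

def color_histogram_pdf(region, bins=8):
    """Quantize each RGB channel into `bins` levels and build a normalized
    joint histogram -- the discrete PDF of the target region's colors."""
    q = (region // (256 // bins)).astype(int)             # per-channel bin index
    idx = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
    hist = np.bincount(idx.ravel(), minlength=bins ** 3).astype(float)
    return hist / hist.sum()                              # sums to 1: a PDF

# A synthetic 4x4 target region of mostly-red pixels.
rng = np.random.default_rng(0)
region = rng.integers(200, 256, size=(4, 4, 3), dtype=np.uint8)
region[..., 1:] //= 4    # suppress green/blue -> a reddish patch
pdf = color_histogram_pdf(region)
```

Note that `pdf` carries no spatial information at all, which is exactly the first weakness the text points out.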
Several object representations that exploit the spatial&lt;br /&gt;information have been developed by partitioning the tracking region into fixed size&lt;br /&gt;fragments, meaningful patches or the articulations of human objects. For each&lt;br /&gt;subregion, a color or edge feature based target model was presented.&lt;br /&gt;&lt;br /&gt;The local binary pattern (LBP)16,17 technique is very effective at describing&lt;br /&gt;image texture features. LBP has advantages such as fast computation and rotation&lt;br /&gt;invariance, which facilitate its wide usage in the fields of texture analysis,&lt;br /&gt;image retrieval, face recognition, image segmentation, etc. Recently, LBP was successfully applied to the detection of moving objects via background&lt;br /&gt;subtraction. In LBP, each pixel is assigned a texture value, which can be naturally&lt;br /&gt;combined with the color value of the pixel to represent targets. In Ref. , Nguyen&lt;br /&gt;et al. employed the image intensity and the LBP feature to construct a two-dimensional&lt;br /&gt;histogram representation of the target for tracking thermographic and&lt;br /&gt;monochromatic video.&lt;br /&gt;&lt;br /&gt;In this paper, we adopt the LBP scheme to represent the target texture feature&lt;br /&gt;and then propose a joint color-texture histogram method for a more distinctive and&lt;br /&gt;effective target representation. The major uniform LBP patterns are used to identify&lt;br /&gt;the key points in the target region and then form a mask for joint color-texture&lt;br /&gt;feature selection. The proposed target representation scheme eliminates smooth&lt;br /&gt;background regions and reduces noise in the tracking process. 
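A minimal Python sketch of the 8-neighbour LBP code and the "uniform pattern" test mentioned above. The helper names, the bit ordering, and the toy edge patch are illustrative assumptions; the published method uses a specific subset of major uniform patterns that this sketch does not reproduce.

```python
import numpy as np

def lbp_code(patch):
    """8-neighbour LBP of the centre pixel of a 3x3 patch: threshold each
    neighbour against the centre and pack the results into one byte."""
    c = patch[1, 1]
    # clockwise neighbours starting at the top-left corner
    nb = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
          patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum(int(v >= c) << i for i, v in enumerate(nb))

def is_uniform(code):
    """A pattern is 'uniform' if its circular bit string has at most two
    0/1 transitions -- these patterns capture edges, corners and spots."""
    bits = [(code >> i) & 1 for i in range(8)]
    return sum(bits[i] != bits[(i + 1) % 8] for i in range(8)) <= 2

# A vertical edge: dark left column, bright centre and right columns.
edge = np.array([[10, 200, 200],
                 [10, 200, 200],
                 [10, 200, 200]])
code = lbp_code(edge)   # a uniform pattern, as expected for an edge
```

Pixels whose patterns are uniform (edges, corners) would be kept by the mask; non-uniform, noisy patterns would be discarded before building the joint color-texture histogram.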
Compared with the traditional&lt;br /&gt;RGB color space based target representation, it efficiently exploits the target&lt;br /&gt;structural information and hence achieves better tracking performance with fewer&lt;br /&gt;mean shift iterations and higher robustness to various interferences of background&lt;br /&gt;and noise in complex scenes.&lt;br /&gt;&lt;br /&gt;The paper is organized as follows. Section 2 briefly introduces the mean shift&lt;br /&gt;algorithm. Section 3 analyzes LBP and presents the joint color-texture histogram&lt;br /&gt;scheme in detail. Experimental results are presented and discussed in Sec. 4.&lt;br /&gt;Section 5 concludes the paper.&lt;br /&gt;&lt;br /&gt;&lt;a href="http://www.google.com/url?sa=t&amp;source=web&amp;ct=res&amp;cd=1&amp;ved=0CBIQFjAA&amp;url=http%3A%2F%2Fwww4.comp.polyu.edu.hk%2F~cslzhang%2Fpaper%2FIJPRAI_09_Tracking.pdf&amp;ei=GOr7S_btNZTDrAfghqGqAg&amp;usg=AFQjCNHtV_sTlsLO5sVWS9GrHi80CtWmwA&amp;sig2=wt1mbtU8rs8Wo012-nEolA"&gt;Download Here&lt;/a&gt;.</description><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><author>noreply@blogger.com (Muhamad Ikhtiaruddin)</author></item><item><title>Matlab development implementation</title><link>http://robustfootballtrackinginvideo.blogspot.com/2010/03/matlab-development-implementation.html</link><category>Matlab development.</category><pubDate>Mon, 15 Mar 2010 21:30:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-1328511248982019707.post-1854447939394298326</guid><description>&lt;span class="fullpost"&gt;Hello there, it has been a long time since I last updated the content of this blog. Recently I have found it quite a headache to continue this project alone, but I will never give up until I really finish it. The due date to submit the report is probably two weeks from now. 
I have started writing my report now.&lt;br /&gt;&lt;br /&gt;After I finish my report I will tell you guys the outcomes I get from it.&lt;br /&gt;See you later.&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;/span&gt;</description><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><author>noreply@blogger.com (Muhamad Ikhtiaruddin)</author></item><item><title>Learning Matlab Process</title><link>http://robustfootballtrackinginvideo.blogspot.com/2009/12/learning-matlab-process.html</link><category>Matlab</category><pubDate>Wed, 30 Dec 2009 21:53:00 -0800</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-1328511248982019707.post-1660874693508416955</guid><description>Today I just want to inform you that I'm going to learn MATLAB to develop my software. &lt;br /&gt;&lt;br /&gt;The basic MATLAB topics I need to know are:&lt;br /&gt;&lt;br /&gt;1) Basic features of MATLAB&lt;br /&gt;2) MATLAB desktop management&lt;br /&gt;3) Script M-files&lt;br /&gt;4) Arrays and array operations&lt;br /&gt;5) Multidimensional arrays&lt;br /&gt;6) Cell arrays and structures&lt;br /&gt;7) Relational and logical operations&lt;br /&gt;8) Control flow&lt;br /&gt;9) Functions&lt;br /&gt;10) Matrix algebra&lt;br /&gt;11) Fourier analysis&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;The thesis-related MATLAB work will be covered after I finish the above chapters.&lt;br /&gt;&lt;br /&gt;&lt;span class="fullpost"&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;/span&gt;</description><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><author>noreply@blogger.com (Muhamad Ikhtiaruddin)</author></item><item><title>FIR filter</title><link>http://robustfootballtrackinginvideo.blogspot.com/2009/10/fir-filter.html</link><category>Definition</category><category>FIR filter</category><category>Important Term</category><pubDate>Sat, 31 Oct 2009 20:38:00 -0700</pubDate><guid 
isPermaLink="false">tag:blogger.com,1999:blog-1328511248982019707.post-3478036921165286714</guid><description>A finite impulse response (FIR) filter is a type of digital filter. The impulse response, the filter's response to a Kronecker delta input, is finite because it settles to zero in a finite number of sample intervals. This is in contrast to infinite impulse response (IIR) filters, which have internal feedback and may continue to respond indefinitely. The impulse response of an Nth-order FIR filter lasts for N+1 samples, and then dies to zero.&lt;br /&gt;&lt;span class="fullpost"&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;/span&gt;</description><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><author>noreply@blogger.com (Muhamad Ikhtiaruddin)</author></item><item><title>Journal List reference</title><link>http://robustfootballtrackinginvideo.blogspot.com/2009/10/journal-list-reference.html</link><category>Journal List reference</category><pubDate>Fri, 30 Oct 2009 21:11:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-1328511248982019707.post-8031706361160204810</guid><description>Journal List reference:&lt;br /&gt;---------------------------------&lt;br /&gt;&lt;br /&gt;From COMPLEX DISCRETE WAVELET TRANSFORM BASED MOTION ESTIMATION.pdf&lt;br /&gt;&lt;br /&gt;1) An iterative image registration technique with an application to stereo vision [B. Lucas and T. Kanade].pdf&lt;br /&gt;&lt;br /&gt;2) Determining Optical Flow [B. K. P. Horn and B.G. Schunk].pdf&lt;br /&gt;&lt;br /&gt;3) Computation of component image velocity from local phase information [D.J Fleet and A. D. Jepson].pdf&lt;br /&gt;&lt;br /&gt;4) Performance Of Optical Flow [J. L. Barron - D.J Fleet and S.S Beauchemin].pdf&lt;br /&gt;&lt;br /&gt;5) Complex wavelets and shift invariance by N. G. Kingsbury&lt;br /&gt;&lt;br /&gt;6) Mesh-Based Motion Estimation and Compensation in the wavelet domain using a redundant transform [S. Cui - Y. 
Wang and J.E Fowler].pdf&lt;br /&gt;&lt;br /&gt;7) An Overcomplete Discrete Wavelet Transform For Video Compression by N. Sebe.pdf&lt;br /&gt;&lt;br /&gt;8) A New Framework for complex wavelet transform by F.C.A. Fernandes.pdf&lt;br /&gt;&lt;br /&gt;9) Motion Estimation Using a Complex-Valued Wavelet Transform by J. Magarey.pdf&lt;br /&gt;&lt;br /&gt;10) Real-Time Tracking of Non-Rigid Objects using Mean Shift[by D. Comaniciu].pdf&lt;br /&gt;&lt;span class="fullpost"&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;/span&gt;</description><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><author>noreply@blogger.com (Muhamad Ikhtiaruddin)</author></item><item><title>What is MPEG-1?</title><link>http://robustfootballtrackinginvideo.blogspot.com/2009/10/what-is-mpeg-1.html</link><category>Definition</category><category>Important Term</category><category>Mpeg1</category><pubDate>Fri, 30 Oct 2009 11:54:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-1328511248982019707.post-3555360486881531120</guid><description>MPEG-1 is a standard for lossy compression of video and audio. It is designed to compress VHS-quality raw digital video and CD audio down to 1.5 Mbit/s (26:1 and 6:1 compression ratios respectively)[1] without excessive quality loss, making Video CDs, digital cable/satellite TV and digital audio broadcasting (DAB) possible.[2][3]&lt;br /&gt;Today, MPEG-1 has become the most widely compatible lossy audio/video format in the world, and is used in a large number of products and technologies. Perhaps the best-known part of the MPEG-1 standard is the MP3 audio format it introduced.&lt;br /&gt;The MPEG-1 standard is published as ISO/IEC-11172. 
The standard consists of the following five Parts:&lt;br /&gt;Systems (storage and synchronization of video, audio, and other data together)&lt;br /&gt;Video (compressed video content)&lt;br /&gt;Audio (compressed audio content)&lt;br /&gt;Conformance testing (testing the correctness of implementations of the standard)&lt;br /&gt;Reference software (example software showing how to encode and decode according to the standard)&lt;br /&gt;&lt;br /&gt;&lt;span class="fullpost"&gt;&lt;br /&gt;&lt;span style="font-weight: bold;"&gt;History&lt;/span&gt;&lt;br /&gt;&lt;br /&gt;Modeled on the successful collaborative approach and the compression technologies developed by the Joint Photographic Experts Group and CCITT's Experts Group on Telephony (creators of the JPEG image compression standard and the H.261 standard for video conferencing respectively) the Moving Picture Experts Group (MPEG) working group was established in January 1988. MPEG was formed to address the need for standard video and audio formats, and build on H.261 to get better quality through the use of more complex encoding methods.[2][4]&lt;br /&gt;Development of the MPEG-1 standard began in May 1988. 14 video and 14 audio codec proposals were submitted by individual companies and institutions for evaluation. The codecs were extensively tested for computational complexity and subjective (human perceived) quality, at data rates of 1.5 Mbit/s. 
This specific bitrate was chosen for transmission over T-1/E-1 lines and as the approximate data rate of audio CDs.[5] The codecs that excelled in this testing were utilized as the basis for the standard and refined further, with additional features and other improvements being incorporated in the process.[6]&lt;br /&gt;After 20 meetings of the full group in various cities around the world, and 4½ years of development and testing, the final standard (for parts 1-3) was approved in early November 1992 and published a few months later.[7] The reported completion date of the MPEG-1 standard varies greatly: a largely complete draft standard was produced in September 1990, and from that point on, only minor changes were introduced.[2] The draft standard was publicly available for purchase.[8] The standard was finished with the 6 November 1992 meeting.[9] The Berkeley Plateau Multimedia Research Group developed an MPEG-1 decoder in November 1992.[10] In July 1990, before the first draft of the MPEG-1 standard had even been written, work began on a second standard, MPEG-2,[11] intended to extend MPEG-1 technology to provide full broadcast-quality video (as per CCIR 601) at high bitrates (3–15 Mbit/s), and support for interlaced video.[12] Due in part to the similarity between the two codecs, the MPEG-2 standard includes full backwards compatibility with MPEG-1 video, so any MPEG-2 decoder can play MPEG-1 videos.[13]&lt;br /&gt;Notably, the MPEG-1 standard very strictly defines the bitstream and decoder function, but does not define how MPEG-1 encoding is to be performed (although a reference implementation is provided in ISO/IEC-11172-5).[1] This means that MPEG-1 coding efficiency can drastically vary depending on the encoder used, and generally means that newer encoders perform significantly better than their predecessors.[14] The first three parts (Systems, Video and Audio) of ISO/IEC 11172 were published in August 1993.&lt;br /&gt;&lt;br /&gt;&lt;span 
style="font-weight: bold;"&gt;Patents&lt;/span&gt;&lt;br /&gt;&lt;br /&gt;MPEG-1 video and Layer I/II audio may be implementable without payment of license fees.[16][17][18][19][20] The ISO patent database lists one patent for ISO 11172, US 4,472,747, which expired in 2003.[21] The near-complete draft of the MPEG-1 standard was publicly available as ISO CD 11172[8] by December 6, 1991.[22] Due to its age, many of the patents on the technology have expired. Neither the Kuro5hin article "Patent Status of MPEG-1,H.261 and MPEG-2"[23] nor a thread on the gstreamer-devel[24] mailing list were able to list a single unexpired MPEG-1 video and Layer I/II audio patent. However, a full MPEG-1 decoder and encoder cannot be implemented royalty-free, since some companies require patent fees for implementations of MPEG-1 Layer 3 audio.&lt;br /&gt;&lt;br /&gt;&lt;span style="font-weight: bold;"&gt;Applications&lt;/span&gt;&lt;br /&gt;&lt;br /&gt;&lt;ul&gt;&lt;li&gt;&lt;span class="fullpost"&gt;Most popular computer software for video playback includes MPEG-1 decoding, in addition to any other supported formats.&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;&lt;ul&gt;&lt;li&gt;&lt;span class="fullpost"&gt;The popularity of MP3 audio has established a massive installed base of hardware that can play back MPEG-1 Audio (all 3 layers).&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;&lt;ul&gt;&lt;li&gt;&lt;span class="fullpost"&gt;"Virtually all digital audio devices" can play back MPEG-1 Audio.[25] Many millions have been sold to date.&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;&lt;ul&gt;&lt;li&gt;&lt;span class="fullpost"&gt;Before MPEG-2 became widespread, many digital satellite/cable TV services used MPEG-1 exclusively.[4][14]&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;&lt;ul&gt;&lt;li&gt;&lt;span class="fullpost"&gt;The widespread popularity of MPEG-2 with broadcasters means MPEG-1 is playable by most digital cable and satellite set-top boxes, and digital disc and tape players, due to backwards 
compatibility.&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;&lt;ul&gt;&lt;li&gt;&lt;span class="fullpost"&gt;MPEG-1 is the exclusive video and audio format used on Video CD (VCD), the first consumer digital video format, and still a very popular format around the world.&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;&lt;ul&gt;&lt;li&gt;&lt;span class="fullpost"&gt;The Super Video CD standard, based on VCD, uses MPEG-1 Audio exclusively, as well as MPEG-2 video.&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;&lt;ul&gt;&lt;li&gt;&lt;span class="fullpost"&gt;The DVD-Video format uses MPEG-2 video primarily, but MPEG-1 support is explicitly defined in the standard.&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;&lt;ul&gt;&lt;li&gt;&lt;span class="fullpost"&gt;The DVD Video standard originally required MPEG-1 Layer II audio for PAL countries, but was changed to allow AC-3/Dolby Digital-only discs. MPEG-1 Layer II audio is still allowed on DVDs, although newer extensions to the format, like MPEG Multichannel, are rarely supported.&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;&lt;ul&gt;&lt;li&gt;&lt;span class="fullpost"&gt;Most DVD players also support Video CD and MP3 CD playback, which use MPEG-1.&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;&lt;ul&gt;&lt;li&gt;&lt;span class="fullpost"&gt;The international Digital Video Broadcasting (DVB) standard primarily uses MPEG-1 Layer II audio, and MPEG-2 video.&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;&lt;ul&gt;&lt;li&gt;&lt;span class="fullpost"&gt;The international Digital Audio Broadcasting (DAB) standard uses MPEG-1 Layer II audio exclusively, due to MP2's especially high quality, modest decoder performance requirements, and tolerance of errors.&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;/span&gt;</description><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">1</thr:total><author>noreply@blogger.com (Muhamad Ikhtiaruddin)</author></item><item><title>what is .MP4 
file?</title><link>http://robustfootballtrackinginvideo.blogspot.com/2009/10/what-is-mp4-file.html</link><category>Definition</category><category>Important Term</category><category>Mp4</category><category>Mpeg4</category><pubDate>Fri, 30 Oct 2009 07:48:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-1328511248982019707.post-687658635820704758</guid><description>MPEG-4 Part 14, formally ISO/IEC 14496-14:2003, is a multimedia container format standard specified as a part of MPEG-4. It is most commonly used to store digital video and digital audio streams, especially those defined by MPEG, but can also be used to store other data such as subtitles and still images. Like most modern container formats, MPEG-4 Part 14 allows streaming over the Internet. A separate hint track is used to include streaming information in the file. The official filename extension for MPEG-4 Part 14 files is .mp4, thus the container format is often referred to simply as MP4.&lt;br /&gt;Some devices advertised as "MP4 players" are simply MP3 players that also play AMV video and/or some other video format, and do not play the MPEG-4 part 14 format.&lt;br /&gt;&lt;span class="fullpost"&gt;&lt;br /&gt;&lt;span style="font-weight:bold;"&gt;History of MP4&lt;/span&gt;&lt;br /&gt;&lt;br /&gt;MPEG-4 Part 14 is based upon ISO/IEC 14496-12:2004 (MPEG-4 Part 12: ISO base media file format) which is directly based upon Apple’s QuickTime container format.[2][3][4] MPEG-4 Part 14 is essentially identical to the MOV format, but formally specifies support for Initial Object Descriptors (IOD) and other MPEG features.[5] MPEG-4 Part 14 revises and completely replaces Clause 13 of ISO/IEC 14496-1 (MPEG-4 Part 1: Systems), in which the file format for MPEG-4 content was previously specified.[6]&lt;br /&gt;The MPEG-4 file format specification was created on the basis of the QuickTime format specification published in 2001.[7] The MPEG-4 file format, version 1 was published in 2001 as ISO/IEC 
14496-1:2001, which is a revision of the MPEG-4 Part 1: Systems specification published in 1999 (ISO/IEC 14496-1:1999).[8][9][10] In 2003, the first version of the MP4 file format was revised and replaced by MPEG-4 Part 14: MP4 file format (ISO/IEC 14496-14:2003), commonly named MPEG-4 file format version 2.[11] The MP4 file format was generalized into the ISO Base Media File format ISO/IEC 14496-12:2004, which defines a general structure for time-based media files. It in turn is used as the basis for other file formats in the family (for example MP4, 3GP, Motion JPEG 2000).[2][12][13]&lt;br /&gt;The MP4 file format defined some extensions over ISO Base Media File Format to support MPEG-4 visual/audio codecs and various MPEG-4 Systems features such as object descriptors and scene descriptions. Some of these extensions are also used by other formats based on ISO base media file format (e.g. 3GP).[1] A list of all registered extensions for ISO Base Media File Format is published on the official registration authority website www.mp4ra.org. The registration authority for code-points (identifier values) in "MP4 Family" files is Apple Computer Inc. and it is named in Annex D (informative) in MPEG-4 Part 12.[12] Codec designers should register the codes they invent, but the registration is not mandatory[14] and some invented and used code-points are not registered.[15] When someone is creating a new specification derived from the ISO Base Media File Format, all the existing specifications should be used both as examples and a source of definitions and technology. If an existing specification already covers how a particular media type is stored in the file format (e.g. 
MPEG-4 audio or video in MP4), that definition should be used and a new one should not be invented.&lt;br /&gt;&lt;br /&gt;&lt;span style="font-weight:bold;"&gt;.MP4 versus .M4A file extensions&lt;/span&gt;&lt;br /&gt;&lt;br /&gt;The existence of two different file extensions for naming audio-only MP4 files has been a source of confusion among users and multimedia playback software. Since MPEG-4 Part 14 is a container format, MPEG-4 files may contain any number of audio, video, and even subtitle streams, making it impossible to determine the type of streams in an MPEG-4 file based on its filename extension alone. In response, Apple Inc. started using and popularizing the .m4a file extension. Software capable of audio/video playback should recognize files with either .m4a or .mp4 file extensions, as would be expected, as there are no file format differences between the two. Most software capable of creating MPEG-4 audio will allow the user to choose the filename extension of the created MPEG-4 files.&lt;br /&gt;While the only official file extension defined by the standard is .mp4, various file extensions are commonly used to indicate intended content:&lt;br /&gt;MPEG-4 files with audio and video generally use the standard .mp4 extension.&lt;br /&gt;Audio-only MPEG-4 files generally have a .m4a extension. This is especially true of non-protected content.&lt;br /&gt;MPEG-4 files with audio streams encrypted by FairPlay Digital Rights Management as sold through the iTunes Store use the .m4p extension. iTunes Plus tracks are unencrypted and use .m4a accordingly.&lt;br /&gt;Audio book and podcast files, which also contain metadata including chapter markers, images, and hyperlinks, can use the extension .m4a, but more commonly use the .m4b extension. 
An .m4a audio file cannot "bookmark" (remember the last listening spot), whereas .m4b extension files can.&lt;br /&gt;The Apple iPhone uses MPEG-4 audio for its ringtones but uses the .m4r extension rather than the .m4a extension.&lt;br /&gt;Raw MPEG-4 Visual bitstreams are named .m4v but this extension is also sometimes used for video in MP4 container format.[16]&lt;br /&gt;Mobile phones use 3GP, an implementation of MPEG-4 Part 12 (a.k.a MPEG-4/JPEG2000 ISO Base Media file format), similar to MP4. It uses .3gp and .3g2 extensions. These files also store non-MPEG-4 data (H.263, AMR, TX3G).&lt;br /&gt;The common but non-standard use of the extensions .m4a and .m4v is due to the popularity of Apple’s iPod, iPhone, and iTunes Store. With modification, Nintendo's DSi and Sony's PSP can also play M4A.&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;/span&gt;</description><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><author>noreply@blogger.com (Muhamad Ikhtiaruddin)</author></item><item><title>What is B frame?</title><link>http://robustfootballtrackinginvideo.blogspot.com/2009/10/what-is-b-frame.html</link><category>Definition</category><category>Important Term</category><pubDate>Wed, 21 Oct 2009 14:01:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-1328511248982019707.post-3891716753068312256</guid><description>&lt;a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://www.cs.cf.ac.uk/Dave/Multimedia/bframe.gif"&gt;&lt;img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;width: 366px; height: 211px;" src="http://www.cs.cf.ac.uk/Dave/Multimedia/bframe.gif" border="0" alt="" /&gt;&lt;/a&gt;&lt;br /&gt;&lt;span style="font-weight:bold;"&gt;B-Frames&lt;/span&gt;&lt;br /&gt;&lt;br /&gt;The MPEG encoder also has the option of using forward/backward interpolated prediction. 
These frames are commonly referred to as bi-directional interpolated prediction frames, or B frames for short. As an example of the usage of I, P, and B frames, consider a group of pictures that lasts for 6 frames, and is given as I,B,P,B,P,B,I,B,P,B,P,B,… As in the previous I and P only example, I frames are coded spatially only and the P frames are forward predicted based on previous I and P frames. The B frames, however, are coded based on a forward prediction from a previous I or P frame, as well as a backward prediction from a succeeding I or P frame. As such, the example sequence is processed by the encoder such that the first B frame is predicted from the first I frame and first P frame, the second B frame is predicted from the second and third P frames, and the third B frame is predicted from the third P frame and the first I frame of the next group of pictures. From this example, it can be seen that backward prediction requires that the future frames that are to be used for backward prediction be encoded and transmitted first, out of order. This process is summarized in Figure 7.16. There is no defined limit to the number of consecutive B frames that may be used in a group of pictures, and of course the optimal number is application dependent. Most broadcast-quality applications, however, have tended to use 2 consecutive B frames (I,B,B,P,B,B,P,…) as the ideal trade-off between compression efficiency and video quality.&lt;br /&gt;&lt;span class="fullpost"&gt;&lt;br /&gt;&lt;span style="font-weight:bold;"&gt;B-Frame Encoding&lt;/span&gt;&lt;br /&gt;&lt;br /&gt;The main advantage of the usage of B frames is coding efficiency. In most cases, B frames will result in fewer bits being coded overall. Quality can also be improved in the case of moving objects that reveal hidden areas within a video sequence. Backward prediction in this case allows the encoder to make more intelligent decisions on how to encode the video within these areas. 
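The out-of-order transmission described above can be sketched as a small reordering routine in Python. This is a simplified, hypothetical model for illustration only: it assumes each B frame references the next anchor in the same list, and it does not handle B frames that reference the first I frame of the next group of pictures.

```python
def coded_order(display):
    """Reorder a display-order GOP so that each B frame's future anchor
    (I or P) is emitted before the B frames that reference it."""
    out, pending_b = [], []
    for frame in display:
        if frame == 'B':
            pending_b.append(frame)   # hold until the next anchor is sent
        else:
            out.append(frame)         # anchor goes out first...
            out.extend(pending_b)     # ...then the Bs displayed before it
            pending_b = []
    return out + pending_b            # trailing Bs: next-GOP anchor omitted here

gop = ['I', 'B', 'B', 'P', 'B', 'B', 'P']   # display order
sent = coded_order(gop)                      # transmission order
```

For the two-consecutive-B pattern above, the display order I,B,B,P,B,B,P is transmitted as I,P,B,B,P,B,B, which is exactly why the decoder needs buffering for two anchor frames.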
Also, since B frames are not used to predict future frames, errors generated will not be propagated further within the sequence.&lt;br /&gt;One disadvantage is that the frame reconstruction memory buffers within the encoder and decoder must be doubled in size to accommodate the 2 anchor frames. This is almost never an issue for the relatively expensive encoder, and in these days of inexpensive DRAM it has become much less of an issue for the decoder as well. Another disadvantage is that there will necessarily be a delay throughout the system as the frames are delivered out of order, as was shown in Figure . Most one-way systems can tolerate these delays; the delays are more objectionable in two-way applications such as video conferencing systems.&lt;br /&gt;&lt;br /&gt;&lt;/span&gt;</description><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><author>noreply@blogger.com (Muhamad Ikhtiaruddin)</author></item><item><title>What is Luminance?</title><link>http://robustfootballtrackinginvideo.blogspot.com/2009/10/what-is-luminance.html</link><category>Definition</category><category>Important Term</category><pubDate>Wed, 21 Oct 2009 13:24:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-1328511248982019707.post-3566801829925805212</guid><description>&lt;a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://upload.wikimedia.org/math/e/f/c/efcd7066fd6fddc3a354dc50330f4b1b.png"&gt;&lt;img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;width: 145px; height: 43px;" src="http://upload.wikimedia.org/math/e/f/c/efcd7066fd6fddc3a354dc50330f4b1b.png" border="0" alt="" /&gt;&lt;/a&gt;&lt;br /&gt;Luminance is a photometric measure of the luminous intensity per unit area of light travelling in a given direction. It describes the amount of light that passes through or is emitted from a particular area, and falls within a given solid angle. 
The SI unit for luminance is the candela per square metre (cd/m2). A non-SI term for the same unit is the "nit". The CGS unit of luminance is the stilb, which is equal to one candela per square centimetre or 10 kcd/m2.&lt;br /&gt;Luminance is often used to characterize emission or reflection from flat, diffuse surfaces. The luminance indicates how much luminous power will be perceived by an eye looking at the surface from a particular angle of view. Luminance is thus an indicator of how bright the surface will appear. In this case, the solid angle of interest is the solid angle subtended by the eye's pupil. Luminance is used in the video industry to characterize the brightness of displays. A typical computer display emits between 50 and 300 cd/m2. The sun has a luminance of about 1.6×10^9 cd/m2 at noon.[1]&lt;br /&gt;Luminance is invariant in geometric optics. This means that for an ideal optical system, the luminance at the output is the same as the input luminance. For real, passive, optical systems, the output luminance is at most equal to the input. As an example, if you form a demagnified image with a lens, the luminous power is concentrated into a smaller area, meaning that the illuminance is higher at the image. The light at the image plane, however, fills a larger solid angle, so the luminance comes out the same, assuming there is no loss at the lens. 
The image can never be "brighter" than the source.&lt;br /&gt;&lt;br /&gt;Source from &lt;a href="http://en.wikipedia.org/wiki/Luminance"&gt;Wikipedia&lt;/a&gt;.&lt;br /&gt;&lt;span class="fullpost"&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;/span&gt;</description><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><author>noreply@blogger.com (Muhamad Ikhtiaruddin)</author></item><item><title>What is digital Monochrome image?</title><link>http://robustfootballtrackinginvideo.blogspot.com/2009/10/what-is-digital-monochrome-image.html</link><category>Definition</category><category>Important Term</category><pubDate>Wed, 21 Oct 2009 13:10:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-1328511248982019707.post-8708398892368342371</guid><description>&lt;a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://upload.wikimedia.org/wikipedia/en/2/20/Parrot_EGA_monochrome_palette.png"&gt;&lt;img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;width: 150px; height: 200px;" src="http://upload.wikimedia.org/wikipedia/en/2/20/Parrot_EGA_monochrome_palette.png" border="0" alt="" /&gt;&lt;/a&gt;&lt;br /&gt;Monochrome[1] is a term generally used to describe a painting, drawing, design, or photograph in one color or shades of one color.[2] Monochromatic light is light of a single wavelength, though in practice it can refer to light of a narrow wavelength range. 
A monochromatic object or image is one whose range of colors consists of shades of a single color or hue; monochrome images in neutral colors are also known as grayscale or black-and-white.&lt;br /&gt;&lt;span class="fullpost"&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;/span&gt;</description><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><author>noreply@blogger.com (Muhamad Ikhtiaruddin)</author></item><item><title>Summary of Complex Discrete Wavelet Transform Base Motion Estimation</title><link>http://robustfootballtrackinginvideo.blogspot.com/2009/09/summary-of-complex-discrete-wavelet.html</link><category>CDWT</category><category>Complex Discrete Wavelet Transform Base Motion Estimation</category><category>summary</category><pubDate>Sun, 27 Sep 2009 07:04:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-1328511248982019707.post-3546608011779768729</guid><description>For the tracking application, the estimation of the ‘true’ motion vector is crucial. The complex discrete wavelet transform (CDWT) based motion estimation algorithm produces superior results for the estimation of the dense flow field, and it is evaluated here. First, the results of Lucas and Kanade’s (LK) and Horn and Schunck’s (HS) motion estimation algorithms are compared. Second, tracking performance is compared for the CDWT-based and LK-based flow fields. Lastly, the tracking performance of the proposed tracker is evaluated on a number of test sequences and compared to the Correlation and Mean Shift Trackers. It is observed that the tracker can successfully track a variety of targets and is robust to changes in the target signature. &lt;br /&gt;&lt;span class="fullpost"&gt;&lt;br /&gt;Since the plain DWT is shift-variant, it cannot be used directly for motion estimation. Several modifications have been proposed to make the DWT shift-invariant. 
Among the proposed modifications are the Redundant Discrete Wavelet Transform (RDWT) and the Overcomplete Discrete Wavelet Transform (ODWT), but these two methods provide invariance only for integer shifts, so the Double-Density Wavelet Transform (CDDWT) is proposed instead. The CDWT-based motion estimation algorithm is robust and provides sub-pixel accuracy, which is important for tracking. The algorithm has a hierarchical structure and proceeds from the coarse to the fine resolution level. At each level, motion is estimated for each subpel, and the resultant flow field is propagated to the next resolution level by scaling the flow vectors and warping the transform coefficients of the reference image accordingly. For each subpel, a quantity called the subband squared difference is computed from the differences between the subpel values in the six detail subimages. This corresponds to a quadratic surface whose minimum gives the desired displacement. These surfaces are accumulated through the levels to obtain the ‘cumulative squared difference’. The result of the algorithm is a real-valued motion estimate for each pixel in the images. The tracking algorithm is designed to track any kind of target selected by an operator; the target can be rigid or non-rigid and can change pose, size and shape during tracking. The optical-flow estimator is crucial to the success of the tracking algorithm. &lt;br /&gt;Simulations have been performed to test different aspects of the algorithm: first, the quality and suitability of the flow field generated by the CDWT-based motion estimation algorithm are evaluated; second, simulations replacing the CDWT-generated flow in the tracking algorithm with Lucas and Kanade’s flow are performed; lastly, the proposed tracking algorithm is compared with the Correlation Tracker and the Mean Shift Tracker.&lt;br /&gt;The suitability of the flow generated by the CDWT-based motion estimation method is evaluated in three different ways. 
First, the flow field itself is examined; then, the proposed tracking algorithm built on it is evaluated. From these comparisons we find that the CDWT-based tracker is the most accurate and efficient of the motion trackers compared. The CDWT-based tracker can also maintain the track and follow the flow information successfully. Although the CDWT-based method is not always as precise as the others, it can produce a denser and smoother flow field than the other methods, especially in regions where only low-frequency components are present.&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;/span&gt;</description><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><author>noreply@blogger.com (Muhamad Ikhtiaruddin)</author></item><item><title>How C Programming Works</title><link>http://robustfootballtrackinginvideo.blogspot.com/2009/08/how-c-programming-works.html</link><category>Programming</category><pubDate>Fri, 7 Aug 2009 09:13:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-1328511248982019707.post-3345933636029668968</guid><description>&lt;a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsBjelXK5rhekI4e9GqmEK2MQwH1wbNpOHOQtR6HLyQRWiUXcvvs5bgLwJO88VUOhYWv-K-AiLp9QqAK0lARdEftBleb44FvOUQuemWaYkBLwC9HYH2ICQla-1j6DYahdYPh4fdxrovOk/s1600-h/c-exec.gif"&gt;&lt;img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 377px; height: 229px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsBjelXK5rhekI4e9GqmEK2MQwH1wbNpOHOQtR6HLyQRWiUXcvvs5bgLwJO88VUOhYWv-K-AiLp9QqAK0lARdEftBleb44FvOUQuemWaYkBLwC9HYH2ICQla-1j6DYahdYPh4fdxrovOk/s400/c-exec.gif" alt="" id="BLOGGER_PHOTO_ID_5367256694474031634" border="0" /&gt;&lt;/a&gt;&lt;br /&gt;The C programming language is a popular and widely used programming language for creating computer programs. 
Programmers around the world embrace C because it gives maximum control and efficiency to the programmer.&lt;br /&gt;&lt;br /&gt;If you are a programmer, or if you are interested in becoming a programmer, there are a couple of benefits you gain from learning C:&lt;br /&gt;&lt;br /&gt;  * You will be able to read and write code for a large number of platforms -- everything from microcontrollers to the most advanced scientific systems can be written in C, and many modern operating systems are written in C.&lt;br /&gt;&lt;br /&gt;  * The jump to the object oriented C++ language becomes much easier. C++ is an extension of C, and it is nearly impossible to learn C++ without learning C first.&lt;br /&gt;&lt;br /&gt;In this article, we will walk through the entire language and show you how to become a C programmer, starting at the beginning. You will be amazed at all of the different things you can create once you know C!&lt;br /&gt;&lt;br /&gt;&lt;span style="font-weight: bold;"&gt;What is C?&lt;/span&gt;&lt;br /&gt;&lt;br /&gt;C is a computer programming language. That means that you can use C to create lists of instructions for a computer to follow. C is one of thousands of programming languages currently in use. C has been around for several decades and has won widespread acceptance because it gives programmers maximum control and efficiency. C is an easy language to learn. 
It is a bit more cryptic in its style than some other languages, but you get beyond that fairly quickly.&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhyCh7jg7Y5HgirYGhSsnSH85yEoOtmnF8oqOsvjbT831FHOZrrlZ0a_WacWGy1fVbSL6mukvSUEsvinmBgBxsoxLgy5a-SP-YCWuDiHRpfi43KaVcubD-aLfefSWiD-_fwp0gqJvXS9ac/s1600-h/c-compile.gif"&gt;&lt;img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 344px; height: 400px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhyCh7jg7Y5HgirYGhSsnSH85yEoOtmnF8oqOsvjbT831FHOZrrlZ0a_WacWGy1fVbSL6mukvSUEsvinmBgBxsoxLgy5a-SP-YCWuDiHRpfi43KaVcubD-aLfefSWiD-_fwp0gqJvXS9ac/s400/c-compile.gif" alt="" id="BLOGGER_PHOTO_ID_5367286694851004498" border="0" /&gt;&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;C is what is called a compiled language. This means that once you write your C program, you must run it through a C compiler to turn your program into an executable that the computer can run (execute). The C program is the human-readable form, while the executable that comes out of the compiler is the machine-readable and executable form. What this means is that to write and run a C program, you must have access to a C compiler. If you are using a UNIX machine (for example, if you are writing CGI scripts in C on your host's UNIX computer, or if you are a student working on a lab's UNIX machine), the C compiler is available for free. It is called either "cc" or "gcc" and is available on the command line. If you are a student, then the school will likely provide you with a compiler -- find out what the school is using and learn about it. If you are working at home on a Windows machine, you are going to need to download a free C compiler or purchase a commercial compiler. A widely used commercial compiler is Microsoft's Visual C++ environment (it compiles both C and C++ programs). 
Unfortunately, this program costs several hundred dollars. If you do not have hundreds of dollars to spend on a commercial compiler, then you can use one of the free compilers available on the Web. See &lt;a href="http://delorie.com/djgpp/"&gt;http://delorie.com/djgpp/&lt;/a&gt; as a starting point in your search. &lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;span class="fullpost"&gt;&lt;br /&gt;&lt;br /&gt;&lt;/span&gt;</description><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsBjelXK5rhekI4e9GqmEK2MQwH1wbNpOHOQtR6HLyQRWiUXcvvs5bgLwJO88VUOhYWv-K-AiLp9QqAK0lARdEftBleb44FvOUQuemWaYkBLwC9HYH2ICQla-1j6DYahdYPh4fdxrovOk/s72-c/c-exec.gif" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><author>noreply@blogger.com (Muhamad Ikhtiaruddin)</author></item><item><title>Object Tracking</title><link>http://robustfootballtrackinginvideo.blogspot.com/2009/08/object-tracking.html</link><category>Object Tracking</category><pubDate>Tue, 4 Aug 2009 08:59:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-1328511248982019707.post-955680119864174469</guid><description>&lt;a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjmkljWPfQf9VFnhGTZtPePzHO__8N4H2nhzg8XB1rZFgalzE4QiJZpf_0-b56fdf0Y6o7mcLSjW5WGferiUkx-eHxj-YfSjQBgdRLQoeTVLCLcK864ezXz8lZQ7XlJAM3YUyDp2vbQPbo/s1600-h/tracking.jpg"&gt;&lt;img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 309px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjmkljWPfQf9VFnhGTZtPePzHO__8N4H2nhzg8XB1rZFgalzE4QiJZpf_0-b56fdf0Y6o7mcLSjW5WGferiUkx-eHxj-YfSjQBgdRLQoeTVLCLcK864ezXz8lZQ7XlJAM3YUyDp2vbQPbo/s400/tracking.jpg" alt="" id="BLOGGER_PHOTO_ID_5366152310568600738" border="0" /&gt;&lt;/a&gt;&lt;br /&gt;&lt;br&gt;&lt;br /&gt;Before I build this 
kind of software, I need to establish the objectives by reading the many articles that already exist on the internet and compiling them into my own input.&lt;br /&gt;&lt;br /&gt;I have a lot of journal articles on this topic, but none of them shows how to build this kind of software. Hmmm... never mind, I think I should work on the definitions first, then build the flow chart, followed by programming the source code in C++. I might also consider using a MATLAB GUI for this purpose.&lt;br /&gt;&lt;br /&gt;Object tracking can be described as a correspondence problem, and involves finding which&lt;br /&gt;object in a video frame relates to which object in the next frame. Tracking methods can be&lt;br /&gt;classified into four major categories:&lt;br /&gt;&lt;br /&gt;&lt;ul&gt;&lt;li&gt;Model based tracking&lt;/li&gt;&lt;li&gt;Active contour based tracking&lt;/li&gt;&lt;li&gt;Feature based tracking&lt;/li&gt;&lt;li&gt;Region based tracking.&lt;/li&gt;&lt;/ul&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;span class="fullpost"&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;/span&gt;</description><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjmkljWPfQf9VFnhGTZtPePzHO__8N4H2nhzg8XB1rZFgalzE4QiJZpf_0-b56fdf0Y6o7mcLSjW5WGferiUkx-eHxj-YfSjQBgdRLQoeTVLCLcK864ezXz8lZQ7XlJAM3YUyDp2vbQPbo/s72-c/tracking.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><author>noreply@blogger.com (Muhamad Ikhtiaruddin)</author></item><item><title>Contact</title><link>http://robustfootballtrackinginvideo.blogspot.com/2009/08/contact.html</link><category>Contact</category><pubDate>Tue, 4 Aug 
2009 08:37:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-1328511248982019707.post-7679701261980288260</guid><description>&lt;form method="post" action="http://www.emailmeform.com/fid.php?formid=333623" enctype="multipart/form-data" accept-charset="UTF-8"&gt;&lt;table cellpadding="2" cellspacing="0" border="0" bgcolor="#FFFFFF"&gt;&lt;tr&gt;&lt;td&gt;&lt;font face="Verdana" size="1" color="#000000"&gt;&lt;/font&gt; &lt;div style="" id="mainmsg"&gt; &lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;&lt;br&gt;&lt;table cellpadding="2" cellspacing="0" border="0" bgcolor="#FFFFFF"&gt;&lt;tr valign="top"&gt; &lt;td nowrap&gt;&lt;font face="Verdana" size="1" color="#000000"&gt;Your Name&lt;/font&gt;&lt;/td&gt; &lt;td&gt;&lt;input type="text" name="FieldData0" size="30"&gt; &lt;/td&gt;&lt;/tr&gt;&lt;tr valign="top"&gt; &lt;td nowrap&gt;&lt;font face="Verdana" size="1" color="#000000"&gt;Your Email Address&lt;/font&gt;&lt;/td&gt; &lt;td&gt;&lt;input type="text" name="FieldData1" size="30"&gt; &lt;/td&gt;&lt;/tr&gt;&lt;tr valign="top"&gt; &lt;td nowrap&gt;&lt;font face="Verdana" size="1" color="#000000"&gt;Subject&lt;/font&gt;&lt;/td&gt; &lt;td&gt;&lt;input type="text" name="FieldData2" size="30"&gt; &lt;/td&gt;&lt;/tr&gt;&lt;tr valign="top"&gt; &lt;td nowrap&gt;&lt;font face="Verdana" size="1" color="#000000"&gt;Message&lt;/font&gt;&lt;/td&gt; &lt;td&gt;&lt;textarea name="FieldData3" cols="30" rows="10"&gt;&lt;/textarea&gt;&lt;br&gt; &lt;/td&gt;&lt;/tr&gt;&lt;tr&gt; &lt;td colspan="2"&gt;&lt;table cellpadding=5 cellspacing=0 bgcolor="#E4F8E4" width="100%"&gt;&lt;tr bgcolor="#AAD6AA"&gt;&lt;td colspan="2"&gt;&lt;font color="#FFFFFF" face="Verdana" size="2"&gt;&lt;b&gt;Image Verification&lt;/b&gt;&lt;/font&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td style="padding: 2px;" width="10"&gt;&lt;img src="http://www.emailmeform.com/turing.php" id="captcha"&gt;&lt;/td&gt;&lt;td valign="top"&gt;&lt;font color="#000000"&gt;Please enter the text from the 
image&lt;/font&gt;   &lt;br&gt;&lt;input type="text" name="Turing" value="" maxlength="100" size="10"&gt; [ &lt;a href="#" onclick=" document.getElementById('captcha').src = document.getElementById('captcha').src + '?' + (new Date()).getMilliseconds()"&gt;Refresh Image&lt;/a&gt; ] [ &lt;a href="http://www.emailmeform.com/?v=turing&amp;pt=popup" onClick="window.open('http://www.emailmeform.com/?v=turing&amp;pt=popup','_blank','width=400, height=300, left=' + (screen.width-450) + ', top=100');return false;"&gt;What's This?&lt;/a&gt; ]&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt; &lt;td&gt; &lt;/td&gt; &lt;td align="left"&gt;&lt;input type="text" name="hida2" value="" maxlength="100" size="3" style="display : none;"&gt;&lt;input type="submit" class="btn" value="Send email" name="Submit"&gt;    &lt;input type="reset" class="btn" value="  Clear  " name="Clear"&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=2 align="center"&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;&lt;/form&gt;</description><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><author>noreply@blogger.com (Muhamad Ikhtiaruddin)</author></item><item><title>Thesis Title 'Robust Football Tracking in Video'</title><link>http://robustfootballtrackinginvideo.blogspot.com/2009/08/thesis-title-robust-football-tracking.html</link><category>About</category><pubDate>Tue, 4 Aug 2009 03:09:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-1328511248982019707.post-4649987423070860791</guid><description>&lt;a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiuoVGYX0CD7-leS10T6FiTRFS0CKY8eqIz_YBAQ55IKJgSzaVLDrcXReeogZqgxs7HopQaCaUe03Jhc63XUtcz7P2xn9UnR39JnO5lavZD8j9a2XlXfxWz4n57udgTJdECt1w3Q5GRt6A/s1600-h/martinhandtracking.jpg"&gt;&lt;img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer; width: 165px; height: 124px;" 
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiuoVGYX0CD7-leS10T6FiTRFS0CKY8eqIz_YBAQ55IKJgSzaVLDrcXReeogZqgxs7HopQaCaUe03Jhc63XUtcz7P2xn9UnR39JnO5lavZD8j9a2XlXfxWz4n57udgTJdECt1w3Q5GRt6A/s400/martinhandtracking.jpg" alt="" id="BLOGGER_PHOTO_ID_5366128914403240882" border="0" /&gt;&lt;/a&gt;&lt;br /&gt;Hello everyone, I'm Muhamad Ikhtiaruddin. I'm doing my thesis this year on Robust Football Tracking in Video, so I've decided to make a blog to keep track of my thesis progress. Anything about my progress on this project, I will post here. Anyone with good programming skills is welcome to help me, and I really need the help anyway.&lt;br /&gt;&lt;br /&gt;&lt;span class="fullpost"&gt;&lt;br /&gt;&lt;br /&gt;Supervisor : Zatul Saliza Binti Saleh&lt;br /&gt;&lt;br /&gt;Code: ZS01&lt;br /&gt;&lt;br /&gt;Title: Robust Football Tracking in Video&lt;br /&gt;&lt;br /&gt;Objective: To locate and track football&lt;br /&gt;&lt;br /&gt;Synopsis:&lt;br /&gt;&lt;br /&gt;In televised soccer matches, the football is usually the centre of attention. Various types of personalisation, summarisation and packaging of televised soccer require that the football be located, and where possible, tracked through the video frames. You are expected to do preliminary work on football location and tracking. The approach is based on colour and region analysis of the content of video frames in a context-specific manner, and there is scope for extending this approach to frequency information available in the encoded MPEG-1 video stream and to motion analysis over time. 
At the end of this project you will have learned loads about MPEG-1 and digital video analysis and will have sharpened your software development skills.&lt;br /&gt;&lt;br /&gt;&lt;/span&gt;</description><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiuoVGYX0CD7-leS10T6FiTRFS0CKY8eqIz_YBAQ55IKJgSzaVLDrcXReeogZqgxs7HopQaCaUe03Jhc63XUtcz7P2xn9UnR39JnO5lavZD8j9a2XlXfxWz4n57udgTJdECt1w3Q5GRt6A/s72-c/martinhandtracking.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">1</thr:total><author>noreply@blogger.com (Muhamad Ikhtiaruddin)</author></item></channel></rss>