Object Tracking Algorithms
Object Tracking Literature Review
This document reviews the broad categories of object tracking algorithms.
Silhouette Tracking
Objects may have complex shapes, such as hands, heads, and shoulders, that cannot be described well by simple geometric shapes. Silhouette-based methods provide an accurate shape description for these objects. The goal of a silhouette-based object tracker is to find the object region in each frame by means of an object model generated from the previous frames. This model can take the form of a color histogram (Huttenlocher 1993), object edges, or the object contour (Kang 2003).
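As a concrete sketch of the color-histogram variant of such a model, the Python/OpenCV snippet below builds a hue histogram from the object region of a previous frame and back-projects it onto the current frame, keeping the largest connected blob of object-like pixels as the new silhouette. The function names, the hue-only model, and the threshold value are illustrative assumptions, not details of the cited methods.

import cv2
import numpy as np

def build_color_model(prev_frame_bgr, object_mask):
    # Hue histogram of the object region found in a previous frame.
    hsv = cv2.cvtColor(prev_frame_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], object_mask, [180], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hist

def find_object_region(frame_bgr, hist, prob_thresh=50):
    # Back-project the model onto the current frame; pixels with high
    # back-projection values are likely to belong to the object.
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    prob = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    mask = (prob > prob_thresh).astype(np.uint8)
    # Keep the largest connected blob as the object silhouette.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    if n <= 1:
        return np.zeros_like(mask)            # object not found in this frame
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    return (labels == largest).astype(np.uint8) * 255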
Background Subtraction Method
Object detection can be achieved by building a representation of the scene, called the background model, and then finding deviations from this model for each incoming frame. Any significant change in an image region relative to the background model signifies a moving object (Jain 1997). The pixels constituting the regions undergoing change are marked for further processing. Usually, a connected component algorithm is applied to obtain connected regions corresponding to the objects (Salari 1990). This process is referred to as background subtraction.
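A minimal sketch of this pipeline in Python, using OpenCV's Gaussian-mixture background model as a stand-in for the background models of the cited papers; the thresholds and the min_area value are arbitrary illustrative choices.

import cv2

# Background model maintained by OpenCV's Gaussian-mixture subtractor;
# any significant deviation from the model is flagged as foreground.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)

def detect_moving_objects(frame, min_area=200):
    # Return bounding boxes of connected foreground regions in `frame`.
    fg = subtractor.apply(frame)                              # deviation from the model
    fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)[1]    # drop shadow pixels (value 127)
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN,
                          cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)))
    # Connected-component analysis groups the changed pixels into object regions.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(fg, connectivity=8)
    boxes = []
    for i in range(1, n):                                     # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area:                                  # ignore small noise blobs
            boxes.append((x, y, w, h))
    return boxes

The morphological opening and the minimum-area filter stand in for the noise handling that practical systems tune per scene.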
The background subtraction method of (Javed) uses multiple cues to detect objects robustly in adverse conditions. The algorithm consists of three distinct levels: pixel level, region level, and frame level. At the pixel level, statistical models of gradients and color are used separately to classify each pixel as belonging to the background or the foreground. At the region level, foreground pixels obtained from the color-based subtraction are grouped into regions, and gradient-based subtraction is then used to make inferences about the validity of these regions. Pixel-based models are updated based on decisions made at the region level. Finally, frame-level analysis is performed to detect global illumination changes. The method provides solutions to some common problems that most background subtraction algorithms do not address, such as fast illumination changes, repositioning of static background objects, and initialization of the background model while moving objects are present in the scene.
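The sketch below imitates this three-level structure in simplified form (Python/OpenCV). The running-average pixel models, the 50% frame-level trigger, and the region-validation thresholds are placeholders chosen for illustration; they are not the statistical models or parameters of the cited work.

import cv2
import numpy as np

class HierarchicalSubtractor:
    # Pixel level  : running Gaussian models of intensity and gradient magnitude.
    # Region level : intensity-based foreground regions are kept only if the
    #                gradient model also reports change inside them.
    # Frame level  : if most of the frame is flagged, assume a global
    #                illumination change and suppress the detection.
    # Usage: sub = HierarchicalSubtractor(frame.shape[:2]); mask = sub.apply(frame)

    def __init__(self, shape, lr=0.02, k=2.5):
        self.lr, self.k = lr, k
        self.mean_c = np.zeros(shape, np.float32)          # intensity mean
        self.var_c = np.full(shape, 15.0 ** 2, np.float32)
        self.mean_g = np.zeros(shape, np.float32)          # gradient-magnitude mean
        self.var_g = np.full(shape, 15.0 ** 2, np.float32)
        self.initialised = False

    def _update(self, mean, var, x, fg_mask):
        lr = np.where(fg_mask, 0.0, self.lr)               # freeze the model under foreground
        mean += lr * (x - mean)
        var += lr * ((x - mean) ** 2 - var)

    def apply(self, frame_bgr):
        grey = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
        grad = cv2.magnitude(cv2.Sobel(grey, cv2.CV_32F, 1, 0),
                             cv2.Sobel(grey, cv2.CV_32F, 0, 1))
        if not self.initialised:
            self.mean_c[:], self.mean_g[:] = grey, grad
            self.initialised = True

        # Pixel level: independent intensity and gradient tests.
        fg_c = (grey - self.mean_c) ** 2 > (self.k ** 2) * self.var_c
        fg_g = (grad - self.mean_g) ** 2 > (self.k ** 2) * self.var_g

        # Frame level: treat a near-global change as an illumination change.
        if fg_c.mean() > 0.5:
            self._update(self.mean_c, self.var_c, grey, np.zeros_like(fg_c))
            self._update(self.mean_g, self.var_g, grad, np.zeros_like(fg_g))
            return np.zeros(grey.shape, np.uint8)

        # Region level: validate intensity-based regions with gradient support.
        n, labels = cv2.connectedComponents(fg_c.astype(np.uint8))
        validated = np.zeros_like(fg_c)
        for lab in range(1, n):
            region = labels == lab
            if region.sum() < 50:
                continue                                   # ignore tiny blobs
            if fg_g[region].mean() > 0.05:                 # some gradient change as well
                validated |= region

        # Pixel models are updated only where no validated foreground was found.
        self._update(self.mean_c, self.var_c, grey, validated)
        self._update(self.mean_g, self.var_g, grad, validated)
        return (validated * 255).astype(np.uint8)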
Contour Tracking
Contour tracking methods, in contrast to shape matching methods, iteratively evolve an initial contour from its position in the previous frame to a new position in the current frame (Terzopoulos 1992). This contour evolution requires that some part of the object in the current frame overlap with the object region in the previous frame. Tracking by evolving a contour can be performed using two different approaches. The first approach uses state space models to model the contour shape and motion (Isard 1998). The second approach directly evolves the contour by minimizing a contour energy (Yilmaz and Shah 2004), using direct minimization techniques such as gradient descent (Chen 2001).
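The following Python sketch illustrates the second, direct-minimization approach: the contour found in the previous frame serves as the initialization (which is why the overlap assumption above matters) and is evolved by gradient descent on a simple snake-like energy with an edge-attraction term plus elasticity and curvature terms. The energy terms and weights are textbook choices used for illustration, not those of the cited methods.

import numpy as np

def evolve_contour(image, contour, alpha=0.1, beta=0.1, step=1.0, iters=200):
    # image   : 2-D grayscale array of the current frame
    # contour : (N, 2) array of (row, col) points, e.g. the contour from the
    #           previous frame, which must overlap the object in this frame
    # alpha   : weight of the elasticity (spacing) term
    # beta    : weight of the curvature (smoothness) term

    # External energy: negative gradient magnitude, so descending the
    # energy pulls the contour toward strong image edges.
    gy, gx = np.gradient(image.astype(float))
    external_energy = -np.sqrt(gx ** 2 + gy ** 2)
    fy, fx = np.gradient(external_energy)          # gradient of the external energy

    pts = contour.astype(float).copy()
    for _ in range(iters):
        prev = np.roll(pts, 1, axis=0)
        nxt = np.roll(pts, -1, axis=0)
        # Internal forces: elasticity keeps points evenly spaced,
        # curvature keeps the contour smooth.
        elastic = prev + nxt - 2 * pts
        curvature = (np.roll(pts, 2, axis=0) - 4 * prev + 6 * pts
                     - 4 * nxt + np.roll(pts, -2, axis=0))
        # External force: energy gradient sampled at the nearest pixel.
        r = np.clip(pts[:, 0].round().astype(int), 0, image.shape[0] - 1)
        c = np.clip(pts[:, 1].round().astype(int), 0, image.shape[1] - 1)
        external = np.stack([fy[r, c], fx[r, c]], axis=1)
        # One gradient-descent step on the total contour energy.
        pts += step * (alpha * elastic - beta * curvature - external)
    return pts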
A popular approach (Yilmaz 2004) proposes a tracking method that tracks complete object regions, adapts to changing visual features, and handles occlusions. Tracking is achieved by evolving the contour from frame to frame so as to minimize an energy functional evaluated in the contour vicinity, which is defined by a band. The approach has two major components, related to the visual features and the object shape. Visual features (color,