Real-Time Multi-Object Tracking Algorithm Using Improved Object-Image-Completeness and Prediction-Search
|Keywords:||Prediction-Search;multi-object tracking;low cost;real-time|
|Publisher:||Department of Electrical Engineering|
|Abstract:||
Currently, apart from approaches that use a single general camera, most robust multi-object tracking techniques rely on additional sensors such as IR, stereo image sensors, or multiple cameras to acquire extra information (e.g., depth data) for segmenting objects from the background. There is also much research based on a single image sequence; however, most of it cannot accurately segment objects from a complex background, cannot track overlapping objects, and usually relies on complex algorithms.
This thesis presents a multi-object tracking method based on a single image sequence that avoids the complex algorithms commonly used for this task, in order to achieve low cost and real-time operation. We propose a "Prediction-Search" method to lower the computational load and meet the real-time requirement. We also propose an improved "Object-Image-Completeness" method to repair broken images of target objects. In addition to "Prediction-Search", we add distance- and color-comparison algorithms as tracking aids to make the tracking robust. "Prediction-Search" accomplishes the tracking task with very little computation, so tracking is fast: in general, most tracks are completed by "Prediction-Search" alone, a minority require the distance comparison, and only a few of those require the color comparison. Regarding the actual tracking speed, our system sustains a tracking rate of 30 frames/sec on a 30 frames/sec input image sequence on our test platform. As for tracking capability, the proposed algorithm can still track each object even when objects overlap one another. We have verified tracking of 18 objects, of 3 overlapping objects, and of objects with different shapes, different sizes, and irregular paths, as well as walking and running people.
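The abstract describes a three-stage matching cascade: a cheap "Prediction-Search" first, a distance comparison for the tracks it misses, and a color comparison as a last resort. The thesis full text is not included in this record, so the following Python sketch only illustrates such a cascade under assumed details (constant-velocity prediction, a fixed search radius, and L1 histogram distance); every function name and threshold here is hypothetical, not taken from the thesis.

```python
import numpy as np

def predict(track):
    # Constant-velocity prediction: expected position in the next frame.
    # (Assumed model; the thesis's actual predictor may differ.)
    return track["pos"] + track["vel"]

def prediction_search(track, detections, radius=10.0):
    # Stage 1: search only a small window around the predicted position.
    p = predict(track)
    for i, det in enumerate(detections):
        if np.linalg.norm(det["pos"] - p) <= radius:
            return i
    return None

def distance_match(track, detections, max_dist=30.0):
    # Stage 2: nearest-centroid comparison, rejected beyond max_dist.
    best, best_dist = None, max_dist
    for i, det in enumerate(detections):
        d = np.linalg.norm(det["pos"] - track["pos"])
        if d < best_dist:
            best, best_dist = i, d
    return best

def color_match(track, detections, max_diff=0.5):
    # Stage 3: L1 distance between normalized color histograms.
    best, best_diff = None, max_diff
    for i, det in enumerate(detections):
        diff = np.abs(track["hist"] - det["hist"]).sum()
        if diff < best_diff:
            best, best_diff = i, diff
    return best

def match_track(track, detections):
    # Cascade: fall through to a costlier stage only when the cheaper one fails,
    # so most matches are resolved by the low-computation first stage.
    for stage in (prediction_search, distance_match, color_match):
        idx = stage(track, detections)
        if idx is not None:
            return idx
    return None  # track lost this frame
```

Because each stage only runs when the previous one fails, the per-frame cost stays close to that of the first stage for most tracks, which is the property the abstract attributes to the method.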
|Appears in Collections:||Department of Electrical Engineering|