Please use this identifier to cite or link to this item:
Title: An Efficient Background Extraction and Object Segmentation Algorithm for Realtime Applications
Author: Wang, Hsin-Yi
Keywords: Monitoring System; Background Extraction; Moving Object Segmentation; Shadow Removal; Hollow Filling; Region Filling; Noise Reduction; Background Update
Publisher: Department of Electrical Engineering
  This thesis proposes a moving object detection algorithm for real-time video; the system comprises background extraction, moving object segmentation, and background update.
  In the background extraction stage, only a small number of groups are used to classify background pixel values, so the method occupies little memory and can extract the background image accurately and quickly from a continuous input image sequence. In the object segmentation stage, post-processing is applied to detect and remove the noise and hollows produced by moving object segmentation. In particular, this thesis proposes a simplified region-filling algorithm that fills the hollows inside objects without iterative computation; its per-frame computation time is fixed, independent of the number or size of the objects. Finally, in the background update stage, a new background image is obtained as a weighted combination of the current background image and the input image, so that the system adapts to changes in weather and between day and night, reducing detection errors caused by an inaccurate background image.
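A fixed-cost hole-filling step like the one described can be illustrated with a border-seeded flood fill: background pixels reachable from the frame border stay background, and any enclosed background region is marked as a hole and filled. This is a minimal illustrative sketch under that assumption, not the thesis's actual algorithm; the function name `fill_holes` and the BFS formulation are the editor's own.

```python
from collections import deque

def fill_holes(mask):
    """Fill hollows inside objects in a binary mask (list of lists of 0/1).

    Background pixels (0) connected to the frame border stay background;
    any enclosed 0-region is treated as a hole and set to 1.  A single
    border-seeded flood fill visits each pixel at most once, so the cost
    per frame is O(width*height), independent of object count or size.
    """
    h, w = len(mask), len(mask[0])
    reached = [[False] * w for _ in range(h)]
    queue = deque()
    # Seed the flood fill with every background pixel on the frame border.
    for x in range(w):
        for y in (0, h - 1):
            if mask[y][x] == 0 and not reached[y][x]:
                reached[y][x] = True
                queue.append((y, x))
    for y in range(h):
        for x in (0, w - 1):
            if mask[y][x] == 0 and not reached[y][x]:
                reached[y][x] = True
                queue.append((y, x))
    # 4-connected BFS over the background.
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] == 0 and not reached[ny][nx]:
                reached[ny][nx] = True
                queue.append((ny, nx))
    # Any background pixel never reached from the border is an enclosed hole.
    return [[1 if mask[y][x] == 1 or not reached[y][x] else 0
             for x in range(w)] for y in range(h)]

# A ring-shaped object with a one-pixel hollow in the centre:
m = [[0, 0, 0, 0, 0],
     [0, 1, 1, 1, 0],
     [0, 1, 0, 1, 0],
     [0, 1, 1, 1, 0],
     [0, 0, 0, 0, 0]]
filled = fill_holes(m)
```

Because the flood fill is seeded once from the border rather than once per object, the per-frame cost stays constant no matter how many objects the segmentation produces, matching the fixed-time property claimed above.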

  An efficient real-time background extraction and moving object detection algorithm is proposed; the system consists of background extraction, moving object segmentation, and background update.
  In the background extraction stage, only a few groups are used to classify background pixel values, so the background can be extracted accurately and quickly from the input image sequence with little memory usage. With the background accurately extracted, moving objects can be detected correctly and quickly. In the object segmentation stage, post-processing is applied to detect and remove the noise and hollows produced by moving object detection. Moreover, a simplified region-filling algorithm fills the holes in each object with a fixed execution time per frame. Finally, in the background update stage, a new background image is obtained by weighting the current background image and the input image, so the system can adapt to changes in weather and between day and night, reducing detection errors caused by a false background image.
  Experimental results in various environments demonstrate the accuracy and effectiveness of the proposed algorithm.
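The weighted background update described above is commonly realised as a per-pixel running average. The sketch below illustrates that idea with an assumed learning rate `alpha`; the actual weights used in the thesis are not stated in this record, so both the name and the value are the editor's assumptions.

```python
def update_background(background, frame, alpha=0.05):
    """Blend the current background toward the new frame pixel by pixel.

    new_bg = (1 - alpha) * background + alpha * frame
    A small alpha lets the model absorb slow illumination changes
    (weather, day/night transitions) while largely ignoring pixels
    covered only briefly by moving objects.
    """
    return [[round((1 - alpha) * b + alpha * f)
             for b, f in zip(bg_row, fr_row)]
            for bg_row, fr_row in zip(background, frame)]

# Gradual brightening: background pixels drift toward the brighter frame.
bg = [[100, 100], [100, 100]]
frame = [[120, 120], [120, 120]]
new_bg = update_background(bg, frame, alpha=0.05)  # each pixel moves to 101
```

The choice of `alpha` trades adaptation speed against the risk of absorbing slow-moving foreground objects into the background.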
Other Identifier: U0005-0208201218012900
Appears in Collections: Department of Electrical Engineering
