Please use this identifier to cite or link to this item: http://hdl.handle.net/11455/7462
Title: Automatic Human Body Extraction and Posture Analysis in Consecutive Images (連續影像之自動人體擷取及姿態分析)
Author: Wu, Jiuh-Rou (吳聚柔)
Keywords: human body extraction; moving object segmentation; automatic threshold; posture recognition; posture analysis
Publisher: Department of Electrical Engineering (電機工程學系所)
Abstract:
This thesis proposes two methods for human body posture analysis: posture recognition in consecutive images using a recurrent fuzzy neural network, and posture estimation based on silhouette and skin-color information. Before posture analysis, the human body must be segmented from the background. A moving object segmentation algorithm is therefore proposed to separate the human body from the background in an image sequence; it applies an automatic threshold determination method based on Euler numbers to both the frame difference and the background difference, and a series of image-processing operations then yields a complete human silhouette. Posture recognition targets four main body postures: standing, bending, sitting, and lying. The significant Discrete Fourier Transform (DFT) coefficients of the horizontal and vertical projection histograms of the silhouette, together with its length-to-width ratio, serve as features, and the recognizer is built with a recurrent fuzzy neural network. Posture estimation aims to locate significant body points: skin-color information is combined with the convex points of the body contour to locate the head, hands, and feet. Experimental results show that the proposed approach recognizes the four postures and locates these significant body points with good performance.
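The abstract mentions that the frame difference and the background difference are binarized with an automatically determined threshold chosen with the help of Euler numbers. As a rough illustration only, the Python sketch below computes the Euler number of a binary image from Gray's bit-quad counts and evaluates it over a range of candidate thresholds of a difference image; the function names are assumptions, and the final threshold-selection rule is deliberately omitted because the record does not specify the thesis's exact criterion.

```python
import numpy as np

def euler_number(binary: np.ndarray) -> int:
    """Euler number (8-connectivity) of a binary image via Gray's bit-quad counts."""
    b = np.pad((binary > 0).astype(np.int32), 1)
    tl, tr = b[:-1, :-1], b[:-1, 1:]      # top-left / top-right pixel of each 2x2 quad
    bl, br = b[1:, :-1], b[1:, 1:]        # bottom-left / bottom-right pixel
    s = tl + tr + bl + br                 # foreground pixels in each quad
    q1 = np.count_nonzero(s == 1)         # quads with exactly one foreground pixel
    q3 = np.count_nonzero(s == 3)         # quads with exactly three foreground pixels
    qd = np.count_nonzero((s == 2) & (tl == br) & (tl != tr))  # diagonal quads
    return (q1 - q3 - 2 * qd) // 4

def euler_curve(diff: np.ndarray, thresholds=range(1, 256)):
    """Euler number of the binarized difference image at each candidate threshold."""
    return [euler_number(diff >= t) for t in thresholds]
```

A low threshold over-segments the noisy difference image into many small regions, which shows up directly in the Euler number (number of components minus number of holes); a selection rule would then pick the threshold at which this curve settles.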
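For posture recognition, the abstract states that the features are the significant DFT coefficients of the horizontal and vertical projection histograms of the silhouette together with its length-to-width ratio. A minimal NumPy sketch of assembling such a feature vector follows; the function name `posture_features`, the number of retained coefficients, and the area normalization are illustrative assumptions rather than the thesis's exact design.

```python
import numpy as np

def posture_features(silhouette: np.ndarray, n_coeffs: int = 8) -> np.ndarray:
    """Feature vector from a binary silhouette (H x W): DFT magnitudes of the
    horizontal/vertical projections plus the bounding-box length-to-width ratio."""
    sil = (silhouette > 0).astype(np.float64)

    # Horizontal and vertical projection histograms of the silhouette.
    h_proj = sil.sum(axis=1)   # foreground pixels per row
    v_proj = sil.sum(axis=0)   # foreground pixels per column

    # Keep the magnitudes of the first few DFT coefficients; the low
    # frequencies capture the coarse shape of each projection.
    h_dft = np.abs(np.fft.fft(h_proj))[:n_coeffs]
    v_dft = np.abs(np.fft.fft(v_proj))[:n_coeffs]

    # Normalize by silhouette area for some tolerance to scale
    # (an assumption; the thesis may normalize differently).
    area = max(sil.sum(), 1.0)
    h_dft, v_dft = h_dft / area, v_dft / area

    # Length-to-width ratio of the silhouette's bounding box.
    rows, cols = np.flatnonzero(h_proj), np.flatnonzero(v_proj)
    if rows.size == 0:
        ratio = 0.0
    else:
        ratio = (rows[-1] - rows[0] + 1) / (cols[-1] - cols[0] + 1)

    return np.concatenate([h_dft, v_dft, [ratio]])
```

In the thesis's pipeline, a vector of this kind would be the input to the recurrent fuzzy neural network that distinguishes standing, bending, sitting, and lying.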
URI: http://hdl.handle.net/11455/7462
Other identifier: U0005-1107200720383800
Appears in Collections: Department of Electrical Engineering (電機工程學系所)
