Please use this identifier to cite or link to this item: http://hdl.handle.net/11455/9226
Title: Vision-Based Real-time Humanoid Robot Imitation of Human Hand Motion
Author: Chang, Yu-Wei (張佑瑋)
Keywords: robot; imitation; neural fuzzy inference network; SONFIN
Publisher: Department of Electrical Engineering
Abstract:

This thesis proposes a real-time robot imitation system in which the humanoid robot NAO imitates human hand motions using images captured by the robot's camera. For each image captured from a single camera, the human body is segmented from the background in the red-green-blue (RGB) color space, with shadow removal taken into account. Two-dimensional (2D) significant points, including the tips of the hands, the shoulders, and the elbows, are then located on the segmented human body. The tips of the hands are located from convex points of the body contour and geometric characteristics of the body. The shoulders and the elbows are located from the skeleton pixels of the 2D body silhouette. Depth information for these significant points is then estimated using a self-constructing neural fuzzy inference network (SONFIN). The inputs to the SONFIN are the 2D coordinates of the elbow and the tip of the hand, and the outputs are their estimated depth values. Based on inverse kinematics and geometric constraints of the human body, the three-dimensional (3D) coordinates of the significant points are used to find the rotation angles of the upper and lower arms. These angles are then sent to the NAO robot so that it imitates the hand movements in real time. Experiments verify the effectiveness of the proposed robot imitation system.
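The final step described above, recovering arm rotation angles from the 3D significant points, can be sketched with simple vector geometry. This is a minimal illustration only, assuming NumPy and a hypothetical `arm_angles` helper with one common angle convention; the thesis's actual inverse-kinematics formulation for the NAO joints may differ.

```python
import numpy as np

def arm_angles(shoulder, elbow, hand):
    """Sketch: derive shoulder pitch/roll and the elbow bend angle (radians)
    from 3D shoulder, elbow, and hand-tip points. Hypothetical helper, not
    the thesis's actual NAO kinematics."""
    shoulder, elbow, hand = (np.asarray(p, dtype=float) for p in (shoulder, elbow, hand))
    upper = elbow - shoulder   # upper-arm vector
    lower = hand - elbow       # forearm vector
    # Elbow bend: angle between the upper-arm and forearm vectors.
    cos_e = np.dot(upper, lower) / (np.linalg.norm(upper) * np.linalg.norm(lower))
    elbow_angle = np.arccos(np.clip(cos_e, -1.0, 1.0))
    # Shoulder angles from the upper-arm direction (one possible convention:
    # x forward, y left, z up; pitch in the x-z plane, roll toward y).
    pitch = np.arctan2(-upper[2], upper[0])
    roll = np.arcsin(np.clip(upper[1] / np.linalg.norm(upper), -1.0, 1.0))
    return pitch, roll, elbow_angle
```

With a straight arm along the x-axis the elbow angle is zero; bending the forearm perpendicular to the upper arm yields pi/2. On the real robot, such angles would then be clamped to the NAO joint limits before being commanded.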
Other identifier: U0005-1608201317454300
Appears in Collections: Department of Electrical Engineering
