Vision-Based Real-time Humanoid Robot Imitation of Human Hand Motion
Keywords: robot; imitation; neural fuzzy inference network; SONFIN
Publisher: 電機工程學系所 (Department of Electrical Engineering)
Abstract:
This thesis proposes a real-time robot imitation system in which the humanoid robot NAO imitates human hand motions using images captured by the robot's camera. In each image captured from a single camera, the human body is segmented from the background in the red-green-blue (RGB) color space, with shadow removal taken into account. Two-dimensional (2D) significant points, including the tips of the hands, the shoulders, and the elbows, are located based on the segmented body. The tips of the hands are located from convex points of the body contour and geometrical characteristics of the body; the shoulders and elbows are located from the skeleton pixels of the 2D body silhouette. The depth of each located significant point is then estimated using a self-constructing neural fuzzy inference network (SONFIN), whose inputs are the 2D coordinates of the elbow and the tip of the hand and whose outputs are their estimated depth values. Based on inverse kinematics and geometric constraints of the human body, the three-dimensional (3D) coordinates of the significant points are used to find the rotation angles of the upper arm and the forearm. These angles are then sent to the NAO robot so that it imitates the hand movements in real time. Experiments are conducted to verify the effectiveness of the proposed robot imitation system.
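The segmentation step described above combines per-pixel background subtraction with shadow removal in RGB space. The sketch below illustrates one common statistical formulation: a pixel is a cast shadow when its chromaticity matches the background model but its brightness is reduced. The thresholds `tau_fg`, `alpha`, and `beta` are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def segment_foreground(frame, background, tau_fg=30.0, alpha=0.55, beta=0.95):
    """Classify pixels as foreground or non-foreground (hypothetical thresholds).

    frame, background: H x W x 3 float arrays in RGB.
    A pixel whose color differs strongly from the background model is
    foreground; a pixel with similar chromaticity but reduced brightness
    (alpha <= ratio <= beta) is treated as a cast shadow and discarded.
    """
    # Plain color distance between the frame and the background model.
    diff = np.linalg.norm(frame - background, axis=2)
    # Brightness ratio of the current pixel relative to the background pixel.
    bg_norm = np.maximum(np.linalg.norm(background, axis=2), 1e-6)
    ratio = np.sum(frame * background, axis=2) / (bg_norm ** 2)
    # Chromaticity distortion: residual after scaling the background pixel
    # to the current brightness; small residual means same hue, dimmer light.
    chroma = np.linalg.norm(frame - ratio[..., None] * background, axis=2)
    shadow = (chroma < tau_fg) & (ratio >= alpha) & (ratio <= beta)
    return (diff > tau_fg) & ~shadow
```

Pixels flagged by the returned mask form the body silhouette from which the contour and skeleton are later extracted.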
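The final step maps the 3D significant points to joint angles for the robot. A simplified sketch of that mapping for one arm is given below, treating shoulder pitch as the angle between the upper arm and the downward vertical; the actual NAO joint conventions and the geometric constraints used in the thesis are not reproduced here.

```python
import numpy as np

def limb_angles(shoulder, elbow, hand):
    """Angles of a two-link arm from 3D significant points (illustrative only).

    Returns (shoulder_pitch, elbow_flexion) in radians: shoulder_pitch is
    measured between the upper arm and the downward vertical, and
    elbow_flexion is the bend between the upper arm and the forearm.
    """
    upper = np.asarray(elbow) - np.asarray(shoulder)   # upper-arm vector
    lower = np.asarray(hand) - np.asarray(elbow)       # forearm vector

    def angle(u, v):
        # Angle between two vectors; clip guards against round-off outside [-1, 1].
        c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return float(np.arccos(np.clip(c, -1.0, 1.0)))

    shoulder_pitch = angle(upper, np.array([0.0, 0.0, -1.0]))  # vs. vertical
    elbow_flexion = angle(upper, lower)
    return shoulder_pitch, elbow_flexion
```

For example, with the arm hanging straight down and the forearm bent forward at a right angle, the sketch yields a shoulder pitch of 0 and an elbow flexion of π/2; such angles would then be streamed to the robot's arm joints.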
Appears in Collections: 電機工程學系所 (Department of Electrical Engineering)