Please use this identifier to cite or link to this item:
|Title:||Computer Vision-Based Human Body Segmentation and Posture Estimation||Authors:||Juang, C.F.
|Keywords:||Background difference;body posture estimation;Euler number;human body;silhouette;moving object segmentation;posture analysis;face segmentation;classification;tracking;people;images||Project:||IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans||Journal/Report no.:||IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, Volume 39, Issue 1, Page(s) 119-133.||Abstract:||
This paper proposes a new method for vision-based human body posture estimation using body silhouette and skin-color information. A moving object segmentation algorithm is first proposed to distinguish the human body from the background using a sequence of images. This algorithm uses a fast Euler number computation technique to automatically determine the threshold of both frame and background differences. After segmentation, a sequence of image processing approaches then creates a complete silhouette of the human body. The objective of posture estimation is to locate five significant body points, including the head, tips of the feet, and tips of the hands. These significant points are first selected from convex points on a defined distance curve. A number of heuristic rules based on body shape characteristics are used to select the proper points among these convex candidates. These rules use features such as the principal and minor axes of the human body, their intersections with the silhouette contour, the relative distances between convex points, and the curvature of convex points. An auxiliary skin-color feature is used when the silhouette shape features alone are not sufficient to estimate the significant points. Experimental results show that the proposed approach can efficiently and effectively locate the significant body points for most postures.
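The abstract's candidate-selection step can be illustrated with a minimal sketch: compute a distance curve (distance from the body centroid to each silhouette contour point) and take its local maxima as convex candidates for the head and the tips of the hands and feet. The contour below is a synthetic five-lobed shape standing in for a segmented silhouette; the function names and the peak-picking rule are illustrative assumptions, not the paper's exact procedure.

```python
import math

def distance_curve(contour, centroid):
    """Distance from the body centroid to each silhouette contour point."""
    cx, cy = centroid
    return [math.hypot(x - cx, y - cy) for x, y in contour]

def convex_candidates(curve):
    """Indices of local maxima on the circular distance curve --
    candidate positions for the head and the tips of hands and feet.
    (Illustrative rule; the paper applies further heuristic filtering.)"""
    n = len(curve)
    return [i for i in range(n)
            if curve[i] > curve[(i - 1) % n] and curve[i] > curve[(i + 1) % n]]

# Synthetic star-shaped contour: five lobes standing in for head/hands/feet.
contour = []
for k in range(100):
    t = 2 * math.pi * k / 100
    r = 1.0 + 0.5 * math.cos(5 * t)   # five protrusions
    contour.append((r * math.cos(t), r * math.sin(t)))

curve = distance_curve(contour, (0.0, 0.0))
peaks = convex_candidates(curve)
print(len(peaks))  # one convex candidate per lobe -> 5
```

In the paper these convex candidates are then filtered by heuristic shape rules (axes, intersections, distances, curvature) before the five significant points are finally assigned.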
|Appears in Collections:||Department of Electrical Engineering|
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.