Please use this identifier to cite or link to this item: http://hdl.handle.net/11455/8973
Title: Design and Implementation of Facial Expression Recognition and Foreign Object Detection Algorithm for Baby Watch and Care System
Author: Yu, Sheng-Min (游聖民)
Keywords: Face Detection; Facial Expression Recognition; Object Detection; Foreign Object Detection
Publisher: Department of Electrical Engineering
Abstract:
This thesis proposes an intelligent digital monitoring system with infant facial expression recognition and foreign object detection. When a dangerous situation occurs, such as a foreign object covering the infant's mouth or nose, or spitting up or vomiting, the system immediately issues a warning to the caregiver. It can also judge from facial expressions whether the infant is currently in physical discomfort, replacing today's fully manual supervision and reducing the caregiver's burden.

The system therefore consists of two subsystems: facial expression recognition and foreign object detection. The expression recognition subsystem distinguishes three expressions: neutral, smiling, and crying. It first extracts the infant's facial feature points from the image, computes feature distances from these points, and feeds the distances into a neural network as feature values to recognize the infant's expression. The foreign object detection subsystem targets vomiting and situations where the mouth and nose are covered by a blanket or other objects; it detects an intrusion by monitoring color changes in the region around the mouth between consecutive frames of the video.
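The thesis does not specify the network details in this abstract, so the distance-to-expression step can only be sketched. In the sketch below, the landmark points, the distance pairs, the one-hidden-layer topology, and the (untrained) weights are all hypothetical placeholders, not the author's actual configuration.

```python
import numpy as np

def feature_distances(landmarks, pairs):
    """Euclidean distances between selected landmark pairs (pairs are placeholders)."""
    return np.array([np.linalg.norm(landmarks[i] - landmarks[j]) for i, j in pairs])

def mlp_forward(x, W1, b1, W2, b2):
    """One-hidden-layer network; the abstract only says the distances are fed
    to a neural network, so this topology is an assumption."""
    h = np.tanh(W1 @ x + b1)          # hidden layer
    z = W2 @ h + b2                   # scores for the three expressions
    e = np.exp(z - z.max())
    return e / e.sum()                # softmax probabilities

EXPRESSIONS = ["neutral", "smiling", "crying"]

rng = np.random.default_rng(0)
landmarks = rng.random((6, 2))                 # 6 placeholder feature points (x, y)
pairs = [(0, 1), (2, 3), (4, 5), (0, 5)]       # placeholder distance pairs
x = feature_distances(landmarks, pairs)

# Untrained placeholder weights; a real system would learn these from labeled data.
W1, b1 = rng.standard_normal((8, 4)), np.zeros(8)
W2, b2 = rng.standard_normal((3, 8)), np.zeros(3)
probs = mlp_forward(x, W1, b1, W2, b2)
print(EXPRESSIONS[int(np.argmax(probs))])
```

With random weights the predicted label is of course meaningless; the sketch only shows the data flow from feature points to distances to class probabilities.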

Experimental results show that, on a quad-core PC (2.66 GHz), the algorithm locates the eye feature points with about 88% accuracy in roughly 45 ms per frame, and the facial expression recognition accuracy is about 80%.

Finally, the baby monitoring system is implemented on an embedded platform consisting of an ARM926EJ-S CPU and a Xilinx FPGA. Following the concept of hardware/software co-design, the most computationally complex module of the algorithm is extracted and implemented as a hardware acceleration IP. With the ARM CPU operating at 266 MHz and the system clock at 50 MHz, the pure ARM software version achieves a frame rate of about 3.03 frames per second, while the HW/SW co-design version reaches about 1.86 frames per second.

In this study, we present a digital intelligent baby-watch-and-care system that can recognize a baby's facial expressions and detect foreign objects. The system alerts caregivers when it detects an object near the mouth and nose, or a vomiting condition. In addition, it infers from facial expressions whether the baby is in a safe condition. The intelligent video system can thus replace purely manual supervision and reduce the caregiver's burden.

The intelligent baby-watch-and-care system comprises two subsystems: facial expression recognition and foreign object detection. The expression recognition subsystem classifies three conditions: neutral, smiling, and crying. First, the baby's facial feature points are extracted from the image; feature distances are then computed from these points and used as inputs to a neural network, which recognizes the expression. The foreign object detection subsystem focuses on detecting vomiting and objects near the mouth and nose; it observes the color change in the region around the mouth between the current and previous frames of the video sequence. Experimental results show that the algorithm detects the eye features with an accuracy of about 88% in about 45 ms, and the facial expression recognition accuracy is about 80%, measured with C code on a quad-core 2.66 GHz computer.
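The abstract describes the detection as observing color change near the mouth between consecutive frames, but gives no thresholding details. The following is a minimal sketch of that idea under stated assumptions: the mouth region coordinates and the change threshold are chosen arbitrarily for illustration and are not the thesis's actual parameters.

```python
import numpy as np

def object_intrusion(prev_frame, curr_frame, roi, threshold=20.0):
    """Flag a possible foreign object when the mean absolute color change
    inside the mouth region exceeds a (hypothetical) threshold.

    roi = (top, bottom, left, right) bounding the area around the mouth.
    """
    t, b, l, r = roi
    diff = np.abs(curr_frame[t:b, l:r].astype(float) -
                  prev_frame[t:b, l:r].astype(float))
    return diff.mean() > threshold

# Synthetic 100x100 RGB frames: the second frame has a bright patch
# simulating a blanket or object covering the mouth region.
prev = np.zeros((100, 100, 3), dtype=np.uint8)
curr = prev.copy()
curr[60:80, 40:60] = 200
print(object_intrusion(prev, curr, roi=(60, 80, 40, 60)))   # → True
print(object_intrusion(prev, prev, roi=(60, 80, 40, 60)))   # → False
```

A real deployment would locate the mouth ROI from the detected facial feature points and would likely smooth the decision over several frames to avoid false alarms from lighting changes.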

Finally, following the principle of HW/SW co-design, the baby-watch-and-care system is implemented on an embedded platform composed of an ARM926EJ-S CPU and a Xilinx FPGA. We first profile the execution time of each module in the algorithm and select the most computationally complex module for hardware realization. With the ARM CPU operating at 266 MHz and the system clock at 50 MHz, the pure software design on the ARM CPU achieves 3.03 frames per second, while the HW/SW co-design processes 1.86 frames per second.
URI: http://hdl.handle.net/11455/8973
Other identifier: U0005-2707201016585600
Appears in Collections: Department of Electrical Engineering

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.