Please use this identifier to cite or link to this item: http://hdl.handle.net/11455/6838
Title: Design and Embedded System Implementation of Colorless Foreign Object Detection Algorithm and Facial Expression Recognition for Baby Watch and Care System
Author: Ou, Wei-Liang (歐威良)
Keywords: Facial Expression Recognition; Object Detection; Embedded System
Publisher: Department of Electrical Engineering
Abstract: 
This study presents a colorless facial foreign-object detection algorithm and a facial expression recognition technique for a baby watch-and-care video surveillance system, together with an implementation on an embedded platform. When a dangerous situation occurs, such as a foreign object covering the baby's nose or mouth, spitting up milk, or vomiting, the system immediately issues an alert to the guardian; it also analyzes the baby's facial expression to judge whether the baby is crying. Because the algorithm uses no color information, a single near-infrared camera is sufficient for the system to operate properly at night as well as during the day, so round-the-clock monitoring can effectively reduce the burden on caregivers.

The proposed intelligent baby-watch-and-care system is divided into two subsystems: facial foreign-object detection and facial expression recognition. The foreign-object detection subsystem targets vomiting and situations in which the mouth and nose are covered by a quilt or another object. To detect these events, a reference frame and the current frame are compared over the video sequence, and the pixel changes in the region around the mouth indicate whether a foreign object has intruded. The facial expression recognition subsystem considers three expressions: neutral, smiling, and crying. Facial feature points are extracted from the baby's face image, the distances between these feature points are computed, and the resulting feature distances are fed as input vectors to a neural network, which then recognizes the baby's current expression.
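As a rough illustration of the detection step, the following C sketch counts how many pixels inside a rectangular region around the mouth differ from a reference frame and flags a foreign object when the changed fraction is large. It is only a minimal sketch of the frame-differencing idea under stated assumptions; the ROI layout, function names, and thresholds are illustrative and not taken from the thesis.

    #include <stdlib.h>

    /* Hypothetical rectangle around the detected mouth region. */
    typedef struct { int x, y, w, h; } Roi;

    /* Count pixels inside the ROI whose gray level differs from the
     * reference frame by more than diff_thresh, and report a foreign
     * object when the changed fraction exceeds area_thresh.  Frames are
     * 8-bit grayscale with `stride` bytes per row (no color is used). */
    int detect_foreign_object(const unsigned char *ref,
                              const unsigned char *cur,
                              int stride, Roi roi,
                              int diff_thresh,     /* e.g. 25 gray levels  */
                              double area_thresh)  /* e.g. 0.4 of the ROI  */
    {
        int changed = 0;
        for (int r = 0; r < roi.h; ++r) {
            const unsigned char *pr = ref + (roi.y + r) * stride + roi.x;
            const unsigned char *pc = cur + (roi.y + r) * stride + roi.x;
            for (int c = 0; c < roi.w; ++c) {
                if (abs((int)pc[c] - (int)pr[c]) > diff_thresh)
                    ++changed;
            }
        }
        return changed > (int)(area_thresh * roi.w * roi.h);
    }

Likewise, a minimal sketch of the expression recognition step, assuming the facial feature points have already been located: distances between the points are normalized and passed through a small one-hidden-layer feed-forward network. The particular distance set, network size, and weights below are placeholders; the thesis defines its own feature set and training procedure.

    #include <math.h>

    #define N_FEAT   4   /* number of distance features (illustrative)  */
    #define N_HIDDEN 8   /* hidden-layer size (illustrative)            */
    #define N_CLASS  3   /* output classes: neutral, smiling, crying    */

    typedef struct { double x, y; } Point;

    static double dist(Point a, Point b)
    {
        double dx = a.x - b.x, dy = a.y - b.y;
        return sqrt(dx * dx + dy * dy);
    }

    static double sigmoid(double v) { return 1.0 / (1.0 + exp(-v)); }

    /* Forward pass of a one-hidden-layer network; the weights (with a
     * trailing bias term per neuron) would come from offline training. */
    int classify_expression(Point left_eye, Point right_eye, Point nose,
                            Point mouth_left, Point mouth_right,
                            const double w_ih[N_HIDDEN][N_FEAT + 1],
                            const double w_ho[N_CLASS][N_HIDDEN + 1])
    {
        double eye_gap = dist(left_eye, right_eye);   /* normalization  */
        double in[N_FEAT] = {
            dist(mouth_left, mouth_right) / eye_gap,  /* mouth width    */
            dist(left_eye,  mouth_left)   / eye_gap,  /* eye-mouth gaps */
            dist(right_eye, mouth_right)  / eye_gap,
            dist(nose, mouth_left)        / eye_gap
        };

        double hidden[N_HIDDEN], out[N_CLASS];
        for (int h = 0; h < N_HIDDEN; ++h) {
            double s = w_ih[h][N_FEAT];               /* bias */
            for (int i = 0; i < N_FEAT; ++i)
                s += w_ih[h][i] * in[i];
            hidden[h] = sigmoid(s);
        }

        int best = 0;
        for (int o = 0; o < N_CLASS; ++o) {
            double s = w_ho[o][N_HIDDEN];             /* bias */
            for (int h = 0; h < N_HIDDEN; ++h)
                s += w_ho[o][h] * hidden[h];
            out[o] = sigmoid(s);
            if (out[o] > out[best])
                best = o;
        }
        return best;   /* 0 = neutral, 1 = smiling, 2 = crying */
    }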

Experimental results on a quad-core PC (2.50 GHz) running the C implementation show that the eye feature points are detected with about 91% accuracy in roughly 29 ms, the nose feature points with about 87% accuracy in roughly 1 ms, and the mouth feature points with about 87% accuracy in roughly 2.1 ms. The facial expression recognition accuracy is about 80%.

Finally, the baby-watch-and-care system is implemented on the BOLYMIN BEGA220A embedded platform, which is built around an ARM926EJ CPU. Following a software-optimization approach, the most computationally expensive operations of the algorithm are optimized. With the ARM CPU running at 400 MHz under Windows CE, the pure-software version achieves a frame rate of about 1.5 frames per second; with the near-infrared camera attached, the frame rate reaches about 1.02 frames per second.
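The abstract does not say which software optimizations were applied. On an FPU-less core such as the ARM926EJ, one typical example is keeping inner-loop arithmetic in integers, for instance comparing squared distances instead of calling sqrt(); the snippet below is only such a generic illustration, not the thesis's actual code.

    /* The ARM926EJ has no hardware FPU, so floating-point math is emulated
     * and slow.  A common trick is to keep inner-loop tests in integer
     * arithmetic, e.g. testing the squared distance against the squared
     * threshold rather than taking a square root. */
    static int within_radius(int dx, int dy, int radius)
    {
        return (dx * dx + dy * dy) < (radius * radius);
    }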
URI: http://hdl.handle.net/11455/6838
Other identifier: U0005-2007201120130500
Appears in Collections: Department of Electrical Engineering
