Please use this identifier to cite or link to this item: http://hdl.handle.net/11455/6838
DC Field: Value (Language)
dc.contributor: 張雲南 (zh_TW)
dc.contributor: Yun-Nan Chang (en_US)
dc.contributor: 吳崇賓 (zh_TW)
dc.contributor: Chung-Bin Wu (en_US)
dc.contributor.advisor: 范志鵬 (zh_TW)
dc.contributor.advisor: Chih-Peng Fan (en_US)
dc.contributor.author: 歐威良 (zh_TW)
dc.contributor.author: Ou, Wei-Liang (en_US)
dc.contributor.other: 中興大學 (zh_TW)
dc.date: 2012 (zh_TW)
dc.date.accessioned: 2014-06-06T06:39:02Z
dc.date.available: 2014-06-06T06:39:02Z
dc.identifier: U0005-2007201120130500 (zh_TW)
dc.identifier.citation[1] S. Birchfield, “An Elliptical Head Tracker,” Thirty-First Asilomar Conference on Signals, Systems & Computers, vol. 2, pp. 1710-1714, 1997. [2] S. Birchfield, “An Elliptical Head Tracking Using Intensity Gradients and Color Histograms,” IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 232-237, 1998. [3] T. K. Leung, M. C. Burl, and P. Perona, “Finding Faces in Cluttered Scenes Using Random Labeled Graph Matching,” Fifth International Conference on Computer Vision, pp. 637-644, 1995. [4] C. Lin, and K. C. Fan, “Human face detection using geometric triangle relationship,” 15th International Conference on Pattern Recognition, vol. 2, pp. 941-944, 2000. [5] J. M. Lee, J. H. Kim, and Y. S. Moon, “Face Extraction Method Using Edge Orientation and Face Geometric Features,” International Conference on Convergence Information Technology, pp. 1292 – 1297, 2007. [6] Q. Yuan, W. Gao, and H. Yao, “Robust frontal face detection in complex environment,” 16th International Conference on Pattern Recognition, vol. 1, pp. 25-28, 2002. [7] H. A. Rowley, S. Baluja, and T. Kanade, “Rotation Invariant Neural Network-Based Face Detection”, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 963 – 963, 1998. [8] P. Viola and M. J. Jones, “Rapid Object Detection using a Boosted Cascade of Simple Features,” IEEE Computer Society International Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 511-518, 2001. [9] C. Garcia, and M. Delakis, “A neural architecture for fast and robust face detection”, 16th International Conference on Pattern Recognition, vol. 2, pp. 44-47, 2002. [10] C. H. Han, and K. B. Sim, ”Real-time face detection using AdaBoot algorithm”, International Conference on Control, Automation and Systems, pp. 1892 – 1895, 2008. [16] J. Park, J. Seo, D. An, and S. 
Chung, “Detection of Human Faces Using Skin Color and Eyes,” IEEE International Conference on Multimedia and Expo, vol.1, pp. 133-136, 2000. [17] X. Zhang and R. M. Mersereau, “Lip feature extraction towards an automatic speechreading system,” IEEE International Conference. Image Processing, vol.3, pp. 226-229, 2000. [18] S. Bashyal and G. K. Venayagamoorthy, “Recognition of facial expressions using Gabor wavelets and learning vector quantization,” Engineering Applications of Artificial Intelligence, vol.21, pp. 1056-1064, 2008. [19] T. Kanade, J. F. Cohn, and Y. Tian, “Comprehensive database for facial expression analysis,” Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition, pp. 46-53, 2000. [20] Y. Tian, T. Kanade, and J. F. Cohn, “Evaluation of Gabor-Wavelet-based Facial Action Unit Recognition in Image Sequences of Increasing Complexity,” International Conference on Face and Gesture, pp. 218-223, 2002. [21] K. Y. FANG, “An Implementation of Facial Expression Recognition System Based on Automatic Locating Features,” National Cheng Kung University master thesis, Tainan, Taiwan, 2009. [22] S. U. Jung, and J. H. Yoo, “Robust Eye Detection Using Self Quotient image,” International Symposium on Intelligent Signal Processing and Communications, pp. 263-266, 2006. [23] J. H. Hu, “Design and Implement of Facial Features Detection and Facial Expression Recognition Algorithm for Baby Watch and Care System,” National Chung Hsing University master thesis, Taichung, Taiwan, 2009. [24] S. M. Yu “Design and Implementation of Facial Expression Recognition and Foreign Object Detection Algorithm for Baby Watch and Care System,’’ National Chung Hsing University master thesis, Taichung, Taiwan, 2010. [25] BOLYMIN 公司, “BOLYMIN /BEGA220A/ Block Diagram”, “http://www.bolymin.com.tw/Doc/BEGA220A%20VER04.pdf”. [26] BOLYMIN公司, “BOLYMIN /BEGA220A/ Hardware Specifications”,http://rainbow.com.ua/upload/files/LCD/BEGA220A.pdf”. 
[27] BOLYMIN 公司, “BOLYMIN /BEGA220A/ Display Embedded System”, “ http://www.bolymin.com.tw/Embeddeddetail.asp?productid=183”. [28] pudn程序員聯合開發網,“Windows CE下YUV格式視頻撥放原始碼,”http://www.pudn.com/downloads75/sourcecode/embed/detail274994.html . [29] Cyansoft赤岩軟件公司,“http://www.cyansoft.com.cn/zchcam.html”,”www.cyansoft.com.cn/downloads/usbcam/ZC030X_WCE_5.0.0.2.rar”,. [30] 次世代模擬器新聞網,“Microsoft Windows CE 5.0大出擊”, “http://playstation2.idv.tw/iacolumns/jl00022.html” [31] 王進德,“類神經網路與模糊控制理論入門與應用”,全華圖書股份有限公司,2008。 [32]“SOC Consortium Course Material-ARM Based SOC Design Laboratory Course,” National Chiao-Tung University IP Core Design. [33] Ali R, “A Hierarchical and Adaptive Deformable Model for Mouth Boundary Detection, Department of Electrical Engineering, University of Sydney, Sydney, Australia”,1997.zh_TW
dc.identifier.uri: http://hdl.handle.net/11455/6838
dc.description.abstract (zh_TW): 本論文提出一套無色彩資訊之嬰幼兒臉部異物偵測與表情辨識演算法設計與嵌入式系統實作之方法;主要是當嬰幼兒口鼻遇到異物入侵或是吐奶、嘔吐等危險情形發生時,系統便能即時發出警告通知監護者;同時也可經由分析嬰幼兒的表情來判別是否有哭鬧的狀況。此外因本論文為無色彩演算法,所以只要有一部近紅外線攝影機便可在夜晚正常運作,因此可藉由系統於日夜來做監測,以減低人力的負擔。 此外本系統可分為兩個子系統:一為異物偵測系統、二為表情辨識系統。異物偵測方面,主要是針對嬰幼兒發生嘔吐、口鼻被棉被或其他異物遮蓋之偵測,因此我們可以藉由動態影像建立參考影像與當下影像並透過嘴巴附近區域的像素變化,來偵測目前是否有異物的入侵。而表情辨識的部分,主要為辨識無表情、笑與哭三種表情。利用影像中嬰幼兒臉部的特徵點並將其擷取出來,再藉由計算特徵點所得到的特徵距離,將這些特徵距離做為類神經系統的輸入值,便可辨識出嬰幼兒當下的表情。 經實驗結果顯示,使用四核心的PC(2.50GHz)去測試演算法找出人臉眼睛特徵點的精確度及其所花費時間,精確度約為91%、時間約是29ms,而找出鼻子特徵點的精確度為87%、所花費的時間為1ms,找出嘴巴特徵點的精確度為87%、所花費的時間為2.1ms,接著在表情辨識的正確率約為80%。 最後將此嬰幼兒監護系統實現於BOLYMIN公司的BEGA220A之嵌入式系統平台上。而基於軟體最佳化的概念,將演算法裡運算複雜度較高的算式做優化。當ARM CPU操作於400MHz並掛載在Windows CE系統下時,其純軟體程式碼版本的影格率(Frame rate)大約可達每秒1.5張,加上近紅外線攝影後,影格率(Frame rate)也可達每秒1.02張。
dc.description.abstract (en_US): This study presents a colorless (intensity-only) facial foreign-object detection algorithm and a facial expression recognition technique for baby-care video surveillance, together with their implementation on an embedded platform. When a baby encounters a dangerous situation such as foreign-object intrusion, spit-up around the nose and mouth, or vomiting, the system immediately issues an alert to the guardian; it also analyzes the baby's facial expressions to determine whether the baby is crying. Because the video processing algorithm uses no color information, the system requires only a near-infrared camera to operate properly both day and night, effectively reducing the caregiver's burden. The proposed intelligent baby-watch-and-care system consists of two subsystems: facial foreign-object detection and facial expression recognition. The foreign-object detection subsystem focuses on detecting vomit and objects covering the mouth and nose. To meet this demand, we observe pixel changes near the mouth by comparing the current frame against a reference frame built from the dynamic video sequence. The facial expression recognition subsystem considers three expressions: neutral, smiling, and crying. We first extract the baby's facial feature points from the images, then compute feature distances from these points and feed the distances as input vectors to a neural network, which recognizes the baby's current expression. Experimental results on a quad-core 2.50-GHz PC running the C-code implementation show that the eye feature points are detected with about 91% accuracy in 29 ms, the nose feature points with 87% accuracy in 1 ms, and the mouth feature points with 87% accuracy in 2.1 ms; the facial expression recognition accuracy is about 80%. Finally, the baby-watch-and-care system is implemented on the BOLYMIN BEGA220A embedded platform with an ARM926EJ CPU. With software-based optimizations, the most computationally complex operations of the algorithm are streamlined. When the ARM CPU runs at 400 MHz under Windows CE, the pure-software version achieves a frame rate of about 1.5 frames per second, and about 1.02 frames per second with near-infrared capture. (en_US)
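The foreign-object detection idea described in the abstract (comparing the current frame against a reference frame and flagging large pixel changes near the mouth) can be sketched roughly as follows. This is a minimal illustration only: the ROI coordinates, the threshold value, and the function names are assumptions for demonstration, not the thesis's actual implementation.

```python
# Hedged sketch: frame differencing inside a mouth region of interest (ROI).
# A large mean absolute grayscale difference between the reference frame and
# the current frame suggests an occluding object. All names and numeric
# values here are illustrative assumptions.

def roi_mean_abs_diff(reference, current, roi):
    """Mean absolute grayscale difference inside roi = (top, left, bottom, right)."""
    top, left, bottom, right = roi
    total, count = 0, 0
    for y in range(top, bottom):
        for x in range(left, right):
            total += abs(current[y][x] - reference[y][x])
            count += 1
    return total / count

def foreign_object_detected(reference, current, roi, threshold=30):
    """Alert when the ROI changes by more than `threshold` gray levels on average."""
    return roi_mean_abs_diff(reference, current, roi) > threshold

# Toy example: an 8x8 frame where an "object" darkens the mouth region.
ref = [[128] * 8 for _ in range(8)]
cur = [row[:] for row in ref]
for y in range(5, 8):
    for x in range(2, 6):
        cur[y][x] = 20            # simulated occlusion near the mouth

print(foreign_object_detected(ref, cur, roi=(5, 2, 8, 6)))  # large change -> True
```

In a real pipeline the reference frame would be refreshed over time (the thesis builds it from the dynamic video sequence), and the ROI would be derived from the detected mouth feature points rather than fixed coordinates.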
dc.description.tableofcontents (zh_TW):
Acknowledgments i
Chinese Abstract ii
Abstract iii
Table of Contents iv
List of Tables vii
List of Figures viii
Chapter 1 Introduction 1
1.1 Motivation and Objectives 1
1.2 Review of Related Work 2
1.2.1 Face Detection 2
1.2.2 Feature Point Extraction 3
1.2.3 Facial Expression Recognition 4
1.3 System Overview 5
1.4 Thesis Organization 6
Chapter 2 Preliminaries 7
2.1 Relationship between Visible Light and Near-Infrared 7
2.1.1 Applications of Near-Infrared Imaging 7
2.1.2 Near-Infrared Imaging for Baby Face Recognition 7
2.2 Color Space Conversion (RGB to YCbCr) 8
2.3 Smooth Filter 9
2.4 Self Quotient Image 10
2.5 Edge Detection 13
2.6 Histogram Equalization 15
Chapter 3 Previous Related Work 17
3.1 Eye Feature Point Detection 17
3.1.1 Eye Feature Point Detection Flow 17
3.1.2 Fast Self Quotient Image 17
3.1.3 Downscale 17
3.1.4 Eye Filter 18
3.1.5 Eye Identification 18
3.1.6 Eye Region Extraction 21
3.1.7 Eye Correction and Eye Feature Point Detection 22
3.2 Face Extraction 22
3.3 Mouth Feature Point Detection 22
3.3.1 Mouth Feature Point Detection Flow 22
3.3.2 Rough Mouth Region Extraction 22
3.3.3 Mouth Feature Point Detection 22
3.4 Eyebrow Feature Point Detection 23
3.4.1 Eyebrow Feature Point Detection Flow 23
3.4.2 Histogram Equalization 23
3.4.3 Eyebrow Feature Point Detection 23
Chapter 4 Our Facial Feature Point Detection Method 24
4.1 Eye Feature Point Detection 26
4.1.1 Eye Feature Point Detection Flow 26
4.1.2 Fast Self Quotient Image 26
4.1.3 Downscale 28
4.1.4 Eye Filter 28
4.1.5 Cross Filter 30
4.1.6 Eye Correction 31
4.2 Face Extraction 33
4.3 Nose Feature Point Detection 34
4.3.1 Nose Feature Point Detection Flow 34
4.3.2 Relative Position of the Eyes and Nose 34
4.3.3 Nose Feature Point Detection 37
4.4 Mouth Feature Point Detection 39
4.4.1 Mouth Feature Point Detection Flow 39
4.4.2 Rough Mouth Region Extraction 39
4.4.3 Mouth Feature Point Detection 43
4.5 Eyebrow Feature Point Detection 45
4.5.1 Eyebrow Feature Point Detection Flow 45
4.5.2 Image Equalization 45
4.5.3 Eyebrow Feature Point Detection 46
4.6 Feature Point Detection Results 49
Chapter 5 Foreign Object Detection and Facial Expression Recognition 51
5.1 Foreign Object Detection 52
5.1.1 Foreign Object Detection Flow 53
5.1.2 Foreign Object Detection Analysis 54
5.1.3 Foreign Object Detection Algorithm 55
5.2 Foreign Object Detection Results 56
5.3 Facial Expression Recognition 57
5.4 Expression Cues 57
5.5 Feature Value Calculation 58
5.6 Neural Network Architecture 60
5.7 Facial Expression Recognition Results 61
5.7.1 Training Database 61
5.7.2 Testing Patterns 62
5.7.3 Recognition Results 63
Chapter 6 Embedded Hardware and Software Design and Application of the Baby Care System 64
6.1 Development Platform Overview 64
6.2 Windows CE 5.0 Drivers and the USB Camera 66
6.2.1 Windows CE 5.0 and Device Drivers 66
6.2.2 Writing a Camera Driver for Windows CE 67
6.2.3 Integration and Design Flow of the Near-Infrared Camera with Windows CE 5.0 68
6.3 System Implementation and Performance Analysis 71
6.3.1 Physical System Demonstration 71
6.3.2 Overall Performance Analysis 72
Chapter 7 Discussion and Comparison 73
7.1 Experimental Environment 73
7.2 Experimental Results 74
7.2.1 Eye Feature Point Comparison Results 74
7.2.2 Nose Feature Point Comparison Results 76
7.2.3 Mouth Feature Point Comparison Results 77
7.3 System Field Test 79
7.4 Adaptation for Adult Use 79
Chapter 8 Conclusions and Future Work 80
8.1 Conclusions 80
8.2 Future Research Directions 80
References 81
dc.language.iso: en_US (zh_TW)
dc.publisher: 電機工程學系所 (zh_TW)
dc.relation.uri: http://www.airitilibrary.com/Publication/alDetailedMesh1?DocID=U0005-2007201120130500 (en_US)
dc.subject: Facial Expression Recognition (en_US)
dc.subject: 表情辨識 (zh_TW)
dc.subject: Object Detection (en_US)
dc.subject: Embedded System (en_US)
dc.subject: 異物偵測 (zh_TW)
dc.subject: 嵌入式系統 (zh_TW)
dc.title: 無色彩資訊之嬰幼兒臉部異物偵測與表情辨識演算法設計與嵌入式系統實作 (zh_TW)
dc.title: Design and Embedded System Implementation of Colorless Foreign Object Detection Algorithm and Facial Expression Recognition for Baby Watch and Care System (en_US)
dc.type: Thesis and Dissertation (zh_TW)
item.fulltext: no fulltext
item.languageiso639-1: en_US
item.openairetype: Thesis and Dissertation
item.cerifentitytype: Publications
item.openairecristype: http://purl.org/coar/resource_type/c_18cf
item.grantfulltext: none
Appears in Collections: 電機工程學系所
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.