Please use this identifier to cite or link to this item: http://hdl.handle.net/11455/9327
Title: Design and Embedded Implementation of a Bed-Patient Video Monitor and Care System with Facial States Detection
Author: Lin, Che-Li (林哲立)
Keywords: Facial States Detection
Video Monitor and Care System
Embedded System
Publisher: Department of Electrical Engineering
Abstract: This thesis presents a bed-patient video monitor and care system with facial state detection. The system alerts caregivers in real time when a dangerous situation occurs, such as a foreign object covering the patient's mouth and nose, frothing at the mouth, or vomiting. Pain expressions are also used to judge whether the patient is unwell and seeking help, replacing the current fully manual monitoring and easing the caregiver's burden. To operate at night or under poor illumination, a near-infrared (NIR) camera captures the patient's facial images.

The facial state detection system comprises two subsystems: facial foreign-object detection and facial pain-expression recognition. Foreign-object detection targets vomiting, frothing at the mouth, and other objects covering the face: a reference image is built from the video sequence and compared with the current frame, and gray-scale changes in the region around the mouth indicate an intrusion. Pain-expression recognition first extracts facial feature points from the image, computes feature distances from those points, and feeds the distances together with texture information into a neural network, which classifies whether the expression is painful.

Experimental results show that, on a quad-core 2.66 GHz PC, the algorithm locates eye feature points with about 91% accuracy across varied illumination (day and night), pain-expression recognition reaches about 90% accuracy, and the system processes one frame in 25 ms on average.

Finally, the system is implemented on the BOLYMIN BEGA220A embedded platform with an ARM926EJ CPU running Windows CE. Through software-based optimization, the most computationally expensive parts of the algorithm are streamlined, and a user interface is provided for easy operation and real-time status observation. With the ARM9 processor running at 400 MHz under Windows CE, the software-only implementation processes one frame in 1.75 seconds on average; with real-time NIR image capture added, it takes 2 seconds per frame.
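The foreign-object detection described above compares gray-scale values near the mouth between a reference frame and the current frame. A minimal sketch of that idea follows; the ROI layout, the pixel-delta of 30 gray levels, the 15% area ratio, and the function name are illustrative assumptions, not values taken from the thesis.

```python
def mouth_region_changed(reference, current, roi, pixel_delta=30, area_ratio=0.15):
    """Return True if the mouth region differs enough from the reference frame.

    reference, current: 2-D gray-scale frames (lists of rows of 0-255 values)
                        with identical dimensions.
    roi: (top, bottom, left, right) bounds of the mouth region, assumed known
         from a prior face/feature detection step.
    pixel_delta: gray-level change for a pixel to count as "changed" (assumed).
    area_ratio: fraction of ROI pixels that must change to signal an
                intrusion such as vomit or a covering object (assumed).
    """
    top, bottom, left, right = roi
    total = (bottom - top) * (right - left)
    changed = 0
    for y in range(top, bottom):
        for x in range(left, right):
            if abs(int(current[y][x]) - int(reference[y][x])) > pixel_delta:
                changed += 1
    return changed / total > area_ratio
```

In practice the reference frame would be refreshed periodically from the video sequence so that slow lighting drift under the NIR camera is not mistaken for an intrusion.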
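The expression-recognition stage computes distances between extracted facial feature points and feeds them, with texture information, into a neural network. A sketch of the distance-feature step is below; the landmark names, the chosen distance pairs, and the inter-ocular normalization are assumptions for illustration, as the thesis abstract does not specify them.

```python
import math

def feature_distances(landmarks):
    """Compute a few inter-landmark distances, normalized by the inter-ocular
    distance so the feature values are scale-invariant.

    landmarks: dict mapping names such as 'left_eye', 'right_eye',
    'mouth_left', 'mouth_right', 'mouth_top', 'mouth_bottom' to (x, y)
    tuples. The specific pairs measured below are illustrative only.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    eye_span = dist(landmarks['left_eye'], landmarks['right_eye'])
    return [
        dist(landmarks['mouth_left'], landmarks['mouth_right']) / eye_span,  # mouth width
        dist(landmarks['mouth_top'], landmarks['mouth_bottom']) / eye_span,  # mouth opening
        dist(landmarks['left_eye'], landmarks['mouth_top']) / eye_span,      # eye-to-mouth
    ]
```

The resulting vector, concatenated with texture features, would form the input layer of the neural-network classifier that decides whether the expression is painful.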
URI: http://hdl.handle.net/11455/9327
Other Identifiers: U0005-2707201214395400
Article Link: http://www.airitilibrary.com/Publication/alDetailedMesh1?DocID=U0005-2707201214395400
Appears in Collections: Department of Electrical Engineering

Files in This Item:

To obtain the full text, please visit the Airiti Library (華藝線上圖書館).



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.