Design and Embedded Implementation of A Bed-Patient Video Monitor and Care System with Facial States Detection
This study presents a bed-patient video monitoring and care system with facial state detection. The system alerts caregivers when a bed-patient encounters dangerous situations such as an object covering the mouth and nose, frothing at the mouth, or vomiting. In addition, pain expressions are used to judge whether a bed-patient is unwell and seeking aid. To handle uneven illumination and nighttime conditions, an NIR camera captures the bed-patient's facial images under poor lighting. The proposed system is divided into two subsystems: facial foreign-object detection and facial pain-expression recognition. The foreign-object detection subsystem focuses on detecting vomit and objects around the mouth and nose; it observes changes in the gray-scale values near the mouth by comparing the current and previous frames of the video sequence. The expression-recognition subsystem classifies only whether an expression is painful. First, the bed-patient's facial features are extracted from the face images; next, feature distances and skin textures are calculated and fed to a neural network for expression recognition, so that pain expressions are recognized effectively. Experimental results show that the algorithm detects eye features accurately under various illuminations, such as day and night: the accuracy of eye-feature detection and face extraction reaches 91%, and the accuracy of facial pain-expression recognition is about 90%. The system needs only 25 ms per frame on a quad-core 2.66 GHz computer running C code. Finally, the bed-patient video monitoring and care system is implemented on the BOLYMIN BEGA220A Windows CE embedded platform with an ARM926EJ CPU.
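The thesis does not publish its implementation; the following is only a minimal sketch of the frame-differencing idea described in the abstract, assuming a gray-scale mouth region of interest and hypothetical threshold parameters (`thresh`, `ratio` are illustrative values, not the thesis's):

```python
import numpy as np

def mouth_region_change(prev_frame, curr_frame, roi, thresh=25, ratio=0.15):
    """Flag a possible foreign object near the mouth by comparing
    gray-scale values in the mouth ROI between consecutive frames.

    prev_frame, curr_frame: 2-D uint8 gray-scale images.
    roi: (top, bottom, left, right) bounds of the mouth region.
    thresh: per-pixel gray-level difference counted as a change (assumed).
    ratio: fraction of changed ROI pixels that raises an alert (assumed).
    """
    t, b, l, r = roi
    # Widen to int16 so the subtraction cannot wrap around in uint8.
    prev_roi = prev_frame[t:b, l:r].astype(np.int16)
    curr_roi = curr_frame[t:b, l:r].astype(np.int16)
    changed = np.abs(curr_roi - prev_roi) > thresh
    return bool(changed.mean() > ratio)
```

In a real pipeline the ROI would come from the face- and mouth-localization step, and the alert would be debounced over several frames rather than raised from a single difference image.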
Through software-based optimizations, the most computationally complex operations of the algorithm are optimized, and a user interface is provided for convenient operation and real-time status observation. With the ARM9 CPU running at 400 MHz, the pure-software execution time is 1.75 seconds per frame; with NIR image capture added, it is 2 seconds per frame.
Appears in Collections: Department of Electrical Engineering
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.