Please use this identifier to cite or link to this item: http://hdl.handle.net/11455/19939
Title: An Algorithm for Detecting the Bending Degree of a Finger Skeleton Based on Image Depth Information
Author: Chiang, Yi-Kai (蔣亦凱)
Keywords: Finger bending degree detection; Depth image; Finger skeleton; Finger detection; Kinect
Publisher: Department of Computer Science and Engineering
Abstract: As software and hardware technologies have advanced, human-computer interaction has gradually evolved from traditional contact-based graphical interfaces and close-range touch operation to today's non-contact forms of communication with the computer. Non-contact, long-range operation can directly recognize and respond to the user's hand motions, allowing the computer to be operated from a distance, which reduces physical constraints and increases convenience. However, most methods detect and respond only to 2D hand gestures, which limits their applications and makes them less intuitive; directly determining the 3D state of the fingers would enable more applications and make operation more intuitive and convenient. This thesis proposes a cross-platform algorithm that detects the degree of finger bending using only depth images, so any camera that can provide depth images is applicable. The algorithm first acquires the complete information needed to initialize the hand, then detects whether each finger is bent. For bent fingers, it analyzes the depth information to determine the bending state, estimates the three-dimensional positions of the fingertips and joints, and reconstructs the bending of those fingers; finally, these results are merged with the unbent fingers to obtain the bending angles of all fingers. A Microsoft Kinect camera is used to capture the depth information for the experiments.
A non-contact human-computer interaction system provides an alternative way to control devices; it is more convenient and less restrictive than traditional input methods. However, most research performs finger or hand gesture recognition using only a 2D camera. A 3D depth image can provide more intuitive data for finger and hand gesture recognition. In this thesis, a cross-platform algorithm is proposed to detect finger gesture information such as the finger bending degree, finger joint positions, and palm position. The proposed method works on 3D depth data regardless of which camera the images are obtained from. The algorithm first acquires a complete 3D hand image for initialization. Then the bending information of each finger is analyzed. Finally, the bending degrees of all fingers are reconstructed to form a hand skeleton. In this thesis, a Microsoft Kinect was used to acquire the experimental samples, and only the depth data is used by the proposed method. The experimental results show that the reconstructed hand skeletons match the real hand pose for every participant.
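As an illustration of the bending-degree computation summarized above, the following is a minimal Python sketch, not the implementation used in the thesis: it assumes that the 3D positions of a fingertip, the corresponding finger joint, and the palm base have already been recovered from the depth image, and it computes the bending angle at the joint from those three points. The function name and point values are hypothetical.

    import numpy as np

    def bending_angle(fingertip, joint, palm_base):
        # Angle (in degrees) at the joint, formed by the fingertip segment and
        # the palm-side segment of the finger. Inputs are (x, y, z) coordinates,
        # e.g. in millimetres, assumed to have been estimated beforehand from
        # the depth image.
        a = np.asarray(fingertip, dtype=float) - np.asarray(joint, dtype=float)
        b = np.asarray(palm_base, dtype=float) - np.asarray(joint, dtype=float)
        cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

    # Hypothetical example: a finger bent at roughly a right angle.
    print(bending_angle((0, 40, 10), (0, 40, 50), (0, 0, 50)))  # ~90.0

A straight finger gives an angle close to 180 degrees, so the bending degree can be taken as 180 minus this value.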
URI: http://hdl.handle.net/11455/19939
Other Identifiers: U0005-2508201323500400
Article Link: http://www.airitilibrary.com/Publication/alDetailedMesh1?DocID=U0005-2508201323500400
Appears in Collections: Department of Computer Science and Engineering

Files in This Item:

To obtain the full text, please visit the Airiti Library (華藝線上圖書館).