Please use this identifier to cite or link to this item:
Title: SURF (Speeded-Up Robust Features) Based Visual Localization for a Mobile Robot
Author: Chen, Chung-Ying (陳俊穎)
Keywords: monocular visual localization; SURF (Speeded-Up Robust Features)
Publisher: Department of Mechanical Engineering

In this thesis, we consider the application of a vision-based localization and mapping algorithm for a four-Mecanum-wheel omnidirectional mobile robot. Digital images are taken by a single camera at different times and poses. Using basic image processing techniques and the Speeded-Up Robust Features (SURF) detector, the coordinates of feature points in each image are extracted, and feature points in successive images that are projected from the same scene points are then matched. Based on the matched feature points between two successive images, the Structure-from-Motion (SfM) algorithm computes the translation vector and rotation matrix between the camera poses at the two sampling times. The depths of the feature points can then be recovered in absolute scale, with the help of additional constraints, using the obtained translation vector and rotation matrix. Finally, the coordinates of the camera (mounted on the mobile robot) and of the scene objects in the world coordinate frame are found from the translation vector, rotation matrix, and feature-point depths. Experimental results are presented to illustrate the performance of the proposed vision-based localization method.
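The SfM step summarized in the abstract (matched feature points → translation vector and rotation matrix → feature-point depths → absolute scale from an extra constraint) can be sketched in plain NumPy. This is a minimal illustration under assumed conditions, not the thesis' implementation: it takes noiseless, already-matched correspondences in normalized image coordinates (intrinsics removed), estimates the essential matrix with the linear eight-point algorithm, selects the valid (R, t) factorization with a cheirality test, and uses one known point depth as the scale constraint. All names and the synthetic scene below are invented for the example.

```python
import numpy as np

def eight_point_essential(x1, x2):
    """Linear eight-point estimate of E from matched normalized image
    coordinates (N x 2 arrays); each match contributes one row of A."""
    a = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(len(x1))])
    _, _, vt = np.linalg.svd(a)
    e = vt[-1].reshape(3, 3)
    u, _, vt = np.linalg.svd(e)  # project onto the essential manifold
    return u @ np.diag([1.0, 1.0, 0.0]) @ vt

def triangulate(p1, p2, x1, x2):
    """DLT triangulation of each correspondence, in the camera-1 frame."""
    pts = []
    for (u1, v1), (u2, v2) in zip(x1, x2):
        a = np.stack([u1 * p1[2] - p1[0], v1 * p1[2] - p1[1],
                      u2 * p2[2] - p2[0], v2 * p2[2] - p2[1]])
        _, _, vt = np.linalg.svd(a)
        x = vt[-1]
        pts.append(x[:3] / x[3])
    return np.array(pts)

def recover_pose(e, x1, x2):
    """Pick, among the four (R, t) factorizations of E, the one that
    places the triangulated points in front of both cameras."""
    u, _, vt = np.linalg.svd(e)
    if np.linalg.det(u) < 0:
        u = -u
    if np.linalg.det(vt) < 0:
        vt = -vt
    w = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
    p1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    best = None
    for r in (u @ w @ vt, u @ w.T @ vt):
        for t in (u[:, 2], -u[:, 2]):
            pts = triangulate(p1, np.hstack([r, t[:, None]]), x1, x2)
            # Count points with positive depth in both camera frames.
            n_front = np.sum((pts[:, 2] > 0) & ((pts @ r.T + t)[:, 2] > 0))
            if best is None or n_front > best[0]:
                best = (n_front, r, t, pts)
    return best[1], best[2], best[3]

# Synthetic check: 12 scene points, camera 1 at the origin, camera 2
# rotated 5 degrees about the y-axis and translated (values invented).
rng = np.random.default_rng(0)
pts_w = rng.uniform([-1, -1, 4], [1, 1, 8], size=(12, 3))
ang = np.deg2rad(5.0)
r_true = np.array([[np.cos(ang), 0, np.sin(ang)],
                   [0, 1, 0],
                   [-np.sin(ang), 0, np.cos(ang)]])
t_true = np.array([0.5, 0.0, 0.1])
in_cam2 = pts_w @ r_true.T + t_true           # X2 = R X1 + t
x1 = pts_w[:, :2] / pts_w[:, 2:3]             # normalized image coords
x2 = in_cam2[:, :2] / in_cam2[:, 2:3]

e = eight_point_essential(x1, x2)
r, t, pts = recover_pose(e, x1, x2)
# (R, t) and the depths are known only up to a global scale factor; one
# known depth (the thesis uses scene constraints) fixes absolute scale.
scale = pts_w[0, 2] / pts[0, 2]
```

With noiseless matches, `r` recovers `r_true` and `scale * t` recovers `t_true` to numerical precision. Real SURF matches would additionally require outlier rejection (e.g. RANSAC) before the eight-point step, and the normalized coordinates presuppose a calibrated camera.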
Other identifiers: U0005-1508201315544500
Appears in Collections: Department of Mechanical Engineering

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.