Please use this identifier to cite or link to this item: http://hdl.handle.net/11455/10217
Title: Integrating Google Street Views into Panoramic Navigation System (利用谷歌街景影像建立實境導覽系統之研究)
Author: Wu, Hao-Ping (吳浩平)
Keywords: street view images; panoramic images; Google Maps API; real-scene route navigation; point-of-interest positioning; photogrammetric surveying measurement; image-based route planning
Publisher: Department of Civil Engineering
Abstract:
With the evolution of map production technologies, real-scene street view imagery lets general users obtain preliminary geographic and environmental information on computers and smart mobile devices. When used for pre-trip route planning and navigation, however, the street views shown on online maps still require users to click through them manually to establish the relations between landmarks, which often leads to misjudgment and a poor understanding of the route and of how the landmarks along it are connected.
This study applies the Google Maps API to route planning. Using the WGS84 coordinates of the intersections along a route, it estimates the required street view image density and interpolates waypoints; the coordinates of intersections on the same road segment are used to compute azimuths so that the image viewing direction is aligned with the driving direction, while elevation and lane constraints reduce interference from incorrect street view images. The result is a complete image-streaming route preview system. The study further proposes a solution that combines image template matching with spatial forward intersection, enabling real-time spatial positioning of any point of interest in a street view, solving for its three-dimensional coordinates and retrieving the related online geographic information, so as to give users a thorough real-scene route preview experience.
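To make the waypoint densification and heading alignment concrete, the following minimal Python sketch (illustrative only, not code from the thesis) computes the great-circle azimuth between two WGS84 intersection coordinates and interpolates roughly evenly spaced waypoints along the segment; the function names, spacing, and sample coordinates are assumptions for demonstration.

import math
def bearing_deg(lat1, lon1, lat2, lon2):
    # Initial great-circle bearing (azimuth) from point 1 to point 2, in degrees clockwise from north.
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
def interpolate_waypoints(lat1, lon1, lat2, lon2, spacing_m=10.0):
    # Densify one road segment with roughly evenly spaced waypoints; linear interpolation is
    # adequate over the short distances between street view captures.
    r = 6371000.0  # mean Earth radius in meters, used for the haversine distance
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi, dlon = phi2 - phi1, math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2
    dist = 2 * r * math.asin(math.sqrt(a))
    n = max(1, int(dist // spacing_m))
    return [(lat1 + (lat2 - lat1) * i / n, lon1 + (lon2 - lon1) * i / n) for i in range(n + 1)]
wps = interpolate_waypoints(24.1230, 120.6730, 24.1245, 120.6755, spacing_m=15.0)
heading = bearing_deg(*wps[0], *wps[1])  # heading used to face the street view along the driving direction
print(f"{len(wps)} waypoints, initial heading {heading:.1f} deg")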
In the street view image stream built by this system, the azimuth transformation corrects original viewing-direction errors of roughly 20 to 150 degrees, and the planar constraint removes about 11.56% of the incorrect street view points, reducing the error by about 6.01% in the X direction and about 1.25% in the Y direction, yielding an image-based navigation system driven by route planning.
When applied to positioning building points of interest in street views, the system reaches a positioning accuracy of about 73%, and about 56% of the positioning results return the correct store information for that location. Tested against on-campus ground control points, the mean errors are about 2.4535 m in the X component, 4.0037 m in the Y component, and 6.2120 m in the Z component, i.e., within roughly 5 to 10 meters, which is adequate for general street-scene building measurement.

With the development of surveying technologies, panoramic street views allow people to obtain preliminary geographical and spatial information through computers and smart devices. In response to the increasing demand for online route planning, this research establishes an efficient way to present complete street views along a navigated route.
A panoramic-image-based route planning function is built in this research. It enables users to explore geo-spatial information for travel planning by streaming a thousand or more consecutive panoramic images. To improve the quality of this service, several functions, such as spatial coordinate interpolation, azimuth angle transformation, a planar road boundary constraint, and an elevated road boundary constraint, are established in the system. By applying these spatially based functions, a near real-time image alignment technique for route planning is presented without any field investigation. Finally, a video assembled from the streamed panoramic images previews the street views along the moving direction automatically.
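As one way to picture the streaming step, the sketch below builds a per-waypoint image request against Google's Street View Static API (a related public service; the thesis itself works through the Google Maps API), setting each frame's heading to the driving direction computed between consecutive waypoints. The endpoint and the size, location, heading, fov, pitch, and key parameters are the publicly documented ones; the function name, sample coordinates, and API key placeholder are assumptions for illustration.

from urllib.parse import urlencode
STREETVIEW_ENDPOINT = "https://maps.googleapis.com/maps/api/streetview"
def frame_url(lat, lon, heading_deg, api_key, size="640x400", fov=90, pitch=0):
    # One Street View Static API request whose viewing direction follows the route.
    params = {
        "size": size,                      # image size in pixels, "WIDTHxHEIGHT"
        "location": f"{lat},{lon}",        # WGS84 latitude,longitude of the waypoint
        "heading": round(heading_deg, 2),  # 0 = north, 90 = east
        "fov": fov,                        # horizontal field of view in degrees
        "pitch": pitch,
        "key": api_key,
    }
    return f"{STREETVIEW_ENDPOINT}?{urlencode(params)}"
API_KEY = "YOUR_API_KEY"  # placeholder
# In the full system one URL would be generated per interpolated waypoint, each facing the
# bearing toward the next waypoint (see the previous sketch); here a single frame is shown.
print(frame_url(24.1230, 120.6730, 57.3, API_KEY))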
To extract spatial information from a street view automatically, we integrate image processing techniques, such as template matching with NCC (Normalized Cross Correlation), to detect conjugate points in a panoramic image coordinate system built from two or three consecutive panoramic images. We then apply spatial intersection formulas to link the panoramic image coordinate system with the geographical coordinate system. The result is an automatic model for surveying measurement based on close-range photogrammetry.
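To illustrate the conjugate-point step, here is a minimal NumPy sketch of template matching by normalized cross-correlation on synthetic data; the brute-force search and every name in it are illustrative, not the thesis code, and a practical matcher would use an FFT-based or pyramid implementation.

import numpy as np
def ncc_match(image, template):
    # Exhaustive NCC search; returns (row, col) of the best top-left position and the peak score.
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_rc = -1.0, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * t_norm
            if denom == 0.0:
                continue
            score = float((wz * t).sum() / denom)
            if score > best_score:
                best_score, best_rc = score, (r, c)
    return best_rc, best_score
rng = np.random.default_rng(0)
pano_b = rng.random((120, 200))          # stand-in for the second panorama
template = pano_b[40:60, 90:120].copy()  # patch around a POI selected in the first panorama
(row, col), score = ncc_match(pano_b, template)
print(f"conjugate point at ({row}, {col}), NCC = {score:.3f}")

And a sketch of the forward-intersection idea, assuming the viewing rays to the conjugate point have already been turned into unit direction vectors in a local Cartesian frame; deriving those rays from the panoramic image coordinates and vehicle azimuths, as the thesis does, is omitted here.

import numpy as np
def intersect_rays(c1, d1, c2, d2):
    # Least-squares intersection of two 3-D rays: midpoint of their closest approach.
    c1, d1, c2, d2 = (np.asarray(v, dtype=float) for v in (c1, d1, c2, d2))
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    A = np.stack([d1, -d2], axis=1)              # solve d1*t1 - d2*t2 ~= c2 - c1
    t, *_ = np.linalg.lstsq(A, c2 - c1, rcond=None)
    p1, p2 = c1 + t[0] * d1, c2 + t[1] * d2      # closest points on each ray
    return (p1 + p2) / 2.0
c1 = np.array([0.0, 0.0, 2.5])     # first panorama station (local E, N, U in meters)
c2 = np.array([10.0, 0.0, 2.5])    # second panorama station
target = np.array([5.0, 20.0, 8.0])
print(intersect_rays(c1, target - c1, c2, target - c2))  # ~= [ 5. 20.  8.]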
Among the functions adopted in the panoramic image streaming, the azimuth angle transformation corrects horizontal angle errors of roughly 20 to 150 degrees, and the planar road boundary constraint removes about 11.56% of the incorrect street view points, improving the overall performance of the system.
The point-of-interest positioning function reaches approximately 73% positioning accuracy, and about 56% of the positioning results retrieve the correct geographical information. With a planimetric error of roughly 5 to 10 meters, this automatic positioning model is suitable for general streetscape building measurement.
URI: http://hdl.handle.net/11455/10217
Other identifier: U0005-1807201318465600
Appears in Collections: Department of Civil Engineering
