Please use this identifier to cite or link to this item:
DC Field | Value | Language
dc.contributor | Ming-Der Yang | en_US
dc.contributor.author | Wu, Hao-Ping | en_US
dc.identifier.citation |
[1] 何維信, 《測量學》 [Surveying], 宏泰出版社, 5th ed. (2004).
[2] 郭冠宗, 「應用全景影像快速建置街景及路徑預覽」 [Rapid construction of street views and route previews from panoramic images], master's thesis, 國立中興大學土木研究所, Taichung (2009).
[3] 張峻珽, 「谷歌街景影像中興趣點直接地理定位之研究」 [A study on direct geo-referencing of points of interest in Google Street View images], master's thesis, 國立中興大學土木研究所, Taichung (2011).
[4] Altamimi, Z., Sillard, P. and Boucher, C., "ITRF2000: A New Release of the International Terrestrial Reference Frame for Earth Science Applications," Journal of Geophysical Research, Vol. 107, pp. ETG 2-1 to ETG 2-19 (2002).
[5] Anguelov, D., Dulong, C., Filip, D., Frueh, C., Lafon, S., Lyon, R., Ogale, A., Vincent, L. and Weaver, J., "Google Street View: Capturing the World at Street Level," Computer, Vol. 43 (6), pp. 32-38 (2010).
[6] Bay, H., Ess, A., Tuytelaars, T. and Van Gool, L., "Speeded-Up Robust Features (SURF)," Computer Vision and Image Understanding, Vol. 110 (3), pp. 346-359 (2008).
[7] Brown, M. and Lowe, D.G., "Automatic Panoramic Image Stitching Using Invariant Features," International Journal of Computer Vision, Vol. 74 (1), pp. 59-73 (2007).
[8] Chen, B., Neubert, B., Ofek, E., Deussen, O. and Cohen, M., "Integrated Videos and Maps for Driving Directions," Proceedings of the 22nd Annual ACM Symposium on User Interface Software and Technology, pp. 223-232 (2009).
[9] Defense Mapping Agency, "Datums, Ellipsoids, Grids, and Grid Reference Systems," Technical Manual 8358.1 (1990).
[10] El-Sheimy, N., "The Development of VISAT: A Mobile Survey System for GIS Applications," Ph.D. thesis, UCGE Report #20101, Department of Geomatics Engineering, The University of Calgary (1996).
[11] Knopp, J., Sivic, J. and Pajdla, T., "Avoiding Confusing Features in Place Recognition," ECCV '10: Proceedings of the 11th European Conference on Computer Vision, Part I, pp. 748-761 (2010).
[12] Kopf, J., Chen, B., Szeliski, R. and Cohen, M., "Street Slide: Browsing Street Level Imagery," ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH), Vol. 29 (4), Article No. 96 (2010).
[13] Lee, C.H., Su, Y.C. and Chen, L.G., "Accurate Positioning System Based on Street View Recognition," IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2305-2308 (2012).
[14] Lee, H.C., Kange, K.H., Joo, J.H., Kim, E.S. and Hur, G.T., "Development of Fitness Cycle System Using Google Streetview," International Journal of Computer Science and Network Security, Vol. 11 (2), pp. 121-126 (2011).
[15] Lewis, J.P., "Fast Template Matching," Vision Interface, Canadian Image Processing and Pattern Recognition Society, Quebec City, Canada, pp. 120-123 (1995).
[16] Lowe, D.G., "Distinctive Image Features from Scale-Invariant Keypoints," International Journal of Computer Vision, Vol. 60 (2), pp. 91-110 (2004).
[17] Neumann, L. and Matas, J., "Real-Time Scene Text Localization and Recognition," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3538-3545 (2012).
[18] Nodari, A., Vanetti, M. and Gallo, I., "Digital Privacy: Replacing Pedestrians from Google Street View Images," 21st International Conference on Pattern Recognition (ICPR), pp. 2889-2893 (2012).
[19] Peng, C., Chen, B.Y. and Tsai, C.H., "Integrated Google Maps and Smooth Street View Videos for Route Planning," 2010 International Computer Symposium (ICS), pp. 319-324 (2010).
[20] Schroth, G., Al-Nuaimi, A., Huitl, R., Schweiger, F. and Steinbach, E., "Rapid Image Retrieval for Mobile Location Recognition," IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2320-2323 (2011).
[21] Snavely, N., Seitz, S.M. and Szeliski, R., "Photo Tourism: Exploring Photo Collections in 3D," ACM Transactions on Graphics (SIGGRAPH Proceedings), Vol. 25 (3), pp. 835-846 (2006).
[22] Stroila, M., Yalcin, A., Mays, J. and Alwar, N., "Route Visualization in Indoor Panoramic Imagery with Open Area Maps," IEEE International Conference on Multimedia and Expo Workshops (ICMEW), pp. 499-504 (2012).
[23] Szeliski, R. and Shum, H.Y., "Creating Full View Panoramic Image Mosaics and Environment Maps," Computer Graphics, pp. 251-258 (1997).
[24] Tong, G.F. and Gu, J.H., "Locating Objects in Spherical Panoramic Images," IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 818-823 (2011).
[25] Tsai, J.D. and Chang, C.T., "Feature Positioning on Google Street View Panoramas," ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. I-4, pp. 305-309 (2012).
[26] Vincent, L., "Taking Online Maps Down to Street Level," Computer, Vol. 40 (12), pp. 118-120 (2007).
[27] Weir, J. and Yan, W., "Resolution Variant Visual Cryptography for Street View of Google Maps," Proceedings of the 2010 IEEE International Symposium on Circuits and Systems (ISCAS), pp. 1695-1698 (2010).
[28] Yazawa, Z., Uchiyama, H. and Saito, H., "Image Based View Localization System Retrieving from a Panorama Database by SURF," MVA2009 IAPR Conference on Machine Vision Applications, pp. 119-121 (2009).
[29] Zamir, A., Darino, A., Patrick, R. and Shah, M., "Street View Challenge: Identification of Commercial Entities in Street View Imagery," 10th International Conference on Machine Learning and Applications and Workshops (ICMLA), Vol. 2, pp. 380-383 (2011).
Online resources:
[30] Iwane Laboratories Ltd., "The Explanation on CV IMAGE and 3D MAP by all-surrounding-image" (2009).
[31] Mercator, P. (2009).
[32] Osborne, P., "The Mercator Projections" (2013).
[33] Subirana, J.S., Juan Zornoza, J.M. and Hernandez-Pajares, M., "Conventional Terrestrial Reference System," Technical University of Catalonia, Spain (2011).
dc.description.abstract | 隨著地圖產製技術的演進,實境街景影像可讓一般使用者在電腦、智慧型移動裝置環境下獲取初步的地理訊息與環境資訊,然而,應用於行前路徑規劃導覽時,從線上地圖上所展示的街景,仍需透過使用者手工點選建立地標間的關連,往往產生誤判、缺乏對路徑的認識與地標間關連的情形。
本研究應用Google Maps API進行路徑規劃、以路線上各路口對應的WGS84坐標系統坐標,進行街景影像密度推估與路點內插,利用同路段路口點位坐標,推算方位角關係,使影像視角對齊行車方向,透過高程與車道限制關係式,降低錯誤街景影像干擾,建立一組完整路線影像串流預覽系統,並提出一組結合影像樣板比對法與空間前方交會的解算方法,能對街景中任意興趣點執行即時空間定位,解算該點三維坐標並獲取相關線上地理資訊,提供使用者完善的路徑實景預覽體驗。
本系統建立之街景影像串流,透過方位角轉換,可以使視角方向修正約20到150度的原始角度誤差量;透過平面限制式移除約11.56%的錯誤街景點位,有效減少X方向約6.01%的誤差量、Y方向減少約1.25%誤差,是一組基於路徑規劃的影像導覽系統。
應用於街景建物興趣點定位時,可達到約73%的定位準確度,以定位成果回傳該位置的店家資訊,正確率約為56%;以校內地面控制點測試,X分量平均誤差值約為2.4535公尺、Y分量約為4.0037公尺、Z分量約為6.2120公尺,誤差範圍約在5~10公尺內,適合應用於一般的街景建物量測。 | zh_TW
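The abstract's view-alignment step derives an azimuth from the WGS84 coordinates of consecutive intersections on the same road segment and rotates each panorama's view to the driving direction. A minimal sketch of that bearing computation, using the standard great-circle initial-bearing formula (the function name and sample coordinates are ours for illustration; the thesis's actual implementation is not part of this record):

```python
import math

def azimuth_deg(lat1, lon1, lat2, lon2):
    """Initial bearing in degrees, clockwise from true north, from WGS84
    point (lat1, lon1) to point (lat2, lon2); all angles in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    # Standard great-circle initial-bearing formula.
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360.0

# Heading between two hypothetical consecutive intersections (roughly due east):
heading = azimuth_deg(24.120, 120.670, 24.120, 120.675)
print(heading)  # close to 90 degrees
```

Street View headings in the Google Maps API are likewise expressed clockwise from north, so the difference between this bearing and a panorama's stored heading gives the view-angle correction described in the abstract.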
dc.description.abstract | With the development of surveying technologies, panoramic street views allow people to obtain preliminary geographical and spatial information on computers and smart mobile devices. Given the growing demand for online route planning, this research establishes an efficient way to stream complete street views along a planned route. A panoramic-image-based route planning service is built that lets users explore geo-spatial information for travel planning by streaming thousands of consecutive panoramic images. To improve this service, several spatial functions are built into the system: spatial coordinate interpolation, azimuth angle transformation, a planar road-boundary constraint, and an elevated-road boundary constraint. Applying these functions yields a near-real-time image alignment technique for route planning that requires no field investigation; a video assembled from the streamed panoramas automatically previews the street views along the direction of travel. To extract spatial information from a street view automatically, we integrate image-processing techniques, such as template matching with normalized cross-correlation (NCC), to detect conjugate points in a panoramic image coordinate system built from two or three consecutive panoramas. We then apply spatial-intersection formulas to link the panoramic image coordinate system with the geographic coordinate system, providing an automatic measurement model based on close-range photogrammetry. Among the functions applied to the panorama stream, the azimuth angle transformation corrects original heading errors of roughly 20 to 150 degrees.
The planar road-boundary constraint removes 11.56% of incorrect street-view points, improving the performance of the system. Point-of-interest positioning reaches approximately 73% accuracy, and the positioning results retrieve the correct geographical information with about 56% success. With a plane error of roughly 5 to 10 meters, the automatic positioning model is suitable for general streetscape building measurement. | en_US
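The conjugate-point detection described in the abstract pairs template matching with normalized cross-correlation (NCC). A brute-force sketch of the NCC scoring follows; this is our own illustrative code (`ncc_match` is our naming), not the thesis's implementation, and production systems would typically use an FFT-accelerated formulation such as Lewis [15]:

```python
import numpy as np

def ncc_match(image, template):
    """Slide `template` over `image` (2-D grayscale arrays) and return the
    (row, col) of the best-matching window plus its NCC score in [-1, 1]."""
    th, tw = template.shape
    t = template - template.mean()          # zero-mean template
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_rc = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()               # zero-mean window
            denom = np.sqrt((wz ** 2).sum()) * t_norm
            if denom == 0:                  # flat window: correlation undefined
                continue
            score = (wz * t).sum() / denom
            if score > best_score:
                best_score, best_rc = score, (r, c)
    return best_rc, best_score

# Embed a random patch in a blank image and recover its location.
rng = np.random.default_rng(7)
img = np.zeros((30, 30))
patch = rng.random((5, 6))
img[10:15, 12:18] = patch
loc, score = ncc_match(img, patch)
print(loc)  # (10, 12), with a score near 1.0
```

The zero-mean normalization makes the score invariant to brightness and contrast differences between overlapping panoramas, a standard motivation for choosing NCC over raw correlation in this kind of matching.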
dc.description.tableofcontents |
Table of Contents
Abstract (Chinese)
Abstract
Table of Contents
List of Tables
List of Figures
Chapter 1 Introduction
 1.1 Research Motivation and Objectives
 1.2 Problem Description and Expected Results
 1.3 Research Workflow and Framework
 1.4 Thesis Organization
Chapter 2 Literature Review
 2.1 Street-View Imagery Research
  2.1.1 Point-of-Interest Positioning
  2.1.2 Image Streaming
 2.3 Coordinate Reference Frames and Map Projections
  2.3.1 Earth-Centered, Earth-Fixed Coordinate Frames
  2.3.2 WGS84
  2.3.3 The Mercator Projection
  2.3.4 Coordinate Frame of the Google Street View Capture Platform
Chapter 3 Methodology
 3.1 Image Point-of-Interest Positioning
  3.1.1 Template Matching
  3.1.2 Normalized Cross-Correlation
  3.1.3 Close-Range Photogrammetry
  3.1.4 Direct Geo-Referencing
  3.1.5 Linking Image and Geographic Coordinate Systems
 3.2 Image Streaming
  3.2.1 Azimuth Transformation
  3.2.2 Spatial Interpolation of Street-View Images
  3.2.3 Planar Route Constraint for Images
  3.2.4 Elevation Constraint for Navigation Images
  3.2.5 View-Angle Correction
  3.2.6 Dynamic Speed Simulation
Chapter 4 Platform Implementation
 4.1 Development Environment
 4.2 System Architecture
 4.3 Overview of the Google Maps API
 4.4 Development Languages and Browser
  4.4.1 JavaScript
  4.4.2 Ruby on Rails
  4.4.3 Chrome
 4.5 System Functions
Chapter 5 Experimental Results and Analysis
 5.1 System User Interface
 5.2 Point-of-Interest Positioning: Experiment Design and Planning
  5.2.1 Experimental Assumptions
  5.2.2 Study Area
  5.2.3 Positioning Results
  5.2.4 Error Analysis and Correction Methods
 5.3 Image-Streaming Smoothness: Experiment Design and Planning
  5.3.1 Test Routes
  5.3.2 Accuracy of the Cross-Lane Constraint
  5.3.3 Accuracy of the Elevation Constraint
  5.3.4 Accuracy of the View-Alignment Method
  5.3.5 Streaming Error Analysis and Correction Methods
Chapter 6 Conclusions and Suggestions
 6.1 Conclusions
 6.2 Suggestions
References
Appendix: Main Experimental Data Tables
| zh_TW
dc.subject | panoramic images | en_US
dc.subject | Google Maps API | zh_TW
dc.subject | photographical surveying measurement | en_US
dc.subject | image based route planning | en_US
dc.subject | Google Maps API | en_US
dc.title | Integrating Google Street Views into Panoramic Navigation System | en_US
dc.type | Thesis and Dissertation | zh_TW
item.openairetype | Thesis and Dissertation | -
item.fulltext | no fulltext | -
Appears in Collections: 土木工程學系所 (Department of Civil Engineering)