
Author: Chen, Kuan-Ting (陳冠婷)
Title: Mapping a 3D Road Model to 2D Street-View Video Using Spatial-Temporal Coherence and Feature Matching Optimization (基於時空同調性與特徵匹配最佳化之三維道路模型與二維街景影片對位技術)
Advisor: Chu, Hung-Kuo (朱宏國)
Committee members: Yao, Chih-Yuan (姚智原); Wang, Yu-Shuen (王昱舜)
Degree: Master
Department: College of Electrical Engineering and Computer Science, Department of Computer Science
Year of publication: 2019
Academic year of graduation: 107
Language: Chinese
Pages: 38
Keywords: position measurement, satellite navigation, 3D map alignment, position estimation, sensor, GNSS, OpenStreetMap
Abstract

In recent years, self-driving cars have been a popular topic and the field has developed rapidly. Deep learning is widely applied to it, but test data collected in Taiwan's environment is scarce, and road conditions abroad are relatively simple compared with Taiwan's, so there is little data with which to evaluate the accuracy of deep-learning algorithms locally. If virtual test data could be generated with a simulator, data corresponding to Taiwan's environment could be produced. Generating such data requires video, geographic information (GIS), and a ground model for synthesizing vehicle events. An initial model can be constructed from OpenStreetMap (OSM), but because the GIS data behind the model structure is unstable, the imagery and the model often fail to align, which leads to problems such as virtual-vehicle drift and incorrect driving paths. This is the problem we set out to solve. The camera angle and position can be adjusted manually to bring the image and the model into more accurate alignment, but this costs too much manual effort, so we aim to achieve the alignment automatically. We combine the collected data with existing deep-learning models to extract reference features (semantic segmentation, line-segment detection, etc.) for alignment, and exploit the feature similarity between the 3D road model and the 2D street-view images to obtain better results. In addition, we use temporal coherence to increase the accuracy and smoothness of the alignment results, with the goal of improving the alignment quality between the driving video and the road model.
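The abstract sketches a per-frame alignment that scores candidate camera poses by how well features rendered from the 3D road model match the street-view frame (e.g., semantic road coverage) and then adds a temporal-coherence term to keep consecutive poses smooth. The following is a minimal illustrative sketch of how such a weighted combination could be organized; it is not the thesis implementation, and the names (Candidate, feature_score, temporal_score, align_video), the greedy frame-by-frame selection, and the specific weights and decay constants are all assumptions made for illustration.

```python
# Minimal sketch (assumed, not the thesis code): pick one candidate camera pose
# per video frame by combining a feature-matching score with a temporal-coherence
# score. Candidate poses would come from perturbing the GNSS/OSM-derived pose.
from dataclasses import dataclass
from typing import List
import numpy as np


@dataclass
class Candidate:
    """One hypothetical candidate camera pose and the road mask rendered from it."""
    position: np.ndarray   # (x, y) offset in metres around the recorded GNSS fix
    heading: float         # yaw in degrees
    road_mask: np.ndarray  # boolean mask of the road region rendered from the 3D model


def feature_score(frame_road_mask: np.ndarray, cand: Candidate) -> float:
    """Semantic coverage term: IoU between the road segmented in the video frame
    and the road region rendered from the candidate pose."""
    inter = np.logical_and(frame_road_mask, cand.road_mask).sum()
    union = np.logical_or(frame_road_mask, cand.road_mask).sum()
    return float(inter / union) if union else 0.0


def temporal_score(prev: Candidate, cand: Candidate) -> float:
    """Temporal-coherence term: favour small changes in position (distance
    correlation) and heading (orientation consistency) between frames."""
    dist = float(np.linalg.norm(cand.position - prev.position))
    dyaw = abs((cand.heading - prev.heading + 180.0) % 360.0 - 180.0)
    return float(np.exp(-dist / 2.0) * np.exp(-dyaw / 10.0))  # both factors in (0, 1]


def align_video(frame_masks: List[np.ndarray],
                candidates: List[List[Candidate]],
                w_feat: float = 0.7,
                w_temp: float = 0.3) -> List[Candidate]:
    """Greedily choose, for every frame, the candidate that maximizes the
    weighted sum of feature matching and temporal coherence."""
    chosen: List[Candidate] = []
    for mask, cands in zip(frame_masks, candidates):
        def score(c: Candidate) -> float:
            s = w_feat * feature_score(mask, c)
            if chosen:                      # no temporal term for the first frame
                s += w_temp * temporal_score(chosen[-1], c)
            return s
        chosen.append(max(cands, key=score))
    return chosen
```

In the thesis the feature term also draws on line-segment similarity, and the per-frame choices are seeded at initial points and then extended along the sequence (see Chapter 7 in the table of contents below); the greedy loop above is only meant to show where the feature-matching and temporal-coherence weights enter.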

Table of Contents

Chinese Abstract
Abstract
Table of Contents
List of Figures
1 Introduction
2 Related Work
3 System Overview
4 Preprocessing
4.1 Data Collection
4.1.1 Video Recording
4.1.2 Geographic Data Logging
4.1.3 Ground Model Generation
4.2 Data Processing
4.2.1 Semantic Segmentation
4.2.2 Lane Detection
4.2.3 Line Segment Processing
5 View Generation
5.1 Environment Setup
5.2 Candidate Image Generation
5.3 Processing the Generated Images
6 Feature Matching Alignment
6.1 Similarity Measurement
6.1.1 Parameter Settings
6.1.2 Similarity Evaluation Flowchart
6.2 Semantic Coverage Measurement
6.3 Score Computation
6.3.1 Value Normalization
6.3.2 Overall Feature Matching Formula
7 Temporal Coherence
7.1 Score Computation
7.1.1 Orientation Consistency
7.1.2 Distance Correlation
7.1.3 Overall Temporal Coherence Formula
7.2 Implementation Details
7.2.1 Computing Initial Points with a Pincer (夾擊) Method
7.2.2 Extending the Initial Sequence Results
7.3 Timing Measurement
8 Post-processing
9 Experiments and Results
9.1 Weight Settings
9.1.1 Test Dataset
9.1.2 Weights among Feature Matching Terms
9.1.3 Weights between Feature Matching and Temporal Coherence
9.2 Quantitative Results for Feature Matching and Temporal Coherence
9.3 Quantitative Results for Keyframes
9.4 Quantitative Results for Post-processing
9.5 Result Demonstration
10 Conclusion
References


Full-text availability: 2024/08/25 (campus network); not authorized for public release (off-campus network); not authorized for public release (National Central Library, Taiwan NDLTD system).