| Field | Value |
| --- | --- |
| Graduate Student | 吳承恩 Wu, Cheng En |
| Thesis Title | 以Particle-filter為基礎的十字路口車輛即時追蹤技術 (Particle-filter-based Vehicle Tracking within the Urban Intersection in Real-Time) |
| Advisor | 王家祥 Wang, Jia Shung |
| Committee Members | 陳煥宗 Chen, Hwann Tzong; 葉梅珍 Yeh, Mei Chen |
| Degree | Master |
| Department | College of Electrical Engineering and Computer Science, Department of Computer Science |
| Year of Publication | 2016 |
| Graduation Academic Year | 104 (ROC calendar) |
| Language | English |
| Number of Pages | 38 |
| Keywords (Chinese) | 粒子濾波器, 車輛追蹤, 影像監控, 隱馬可夫模型 |
| Keywords (English) | Particle filter, Vehicle tracking, Video surveillance, Hidden Markov model |
The smart city has become an innovation goal for major modern cities, and the intelligent transportation system is a key indicator of it. This thesis targets the urban intersection, where vehicle accidents occur most frequently, and processes footage that simulates a fixed surveillance camera installed at an intersection. In this setting the background is static and vehicle shapes do not change drastically. Exploiting these image characteristics and aiming at real-time computation, we build a particle-filter-based multi-object tracker coupled with a Hidden Markov model (HMM) for trajectory classification and prediction. The tracker obtains the driving paths of vehicles in the intersection camera view in real time, efficiently reducing the number of samples drawn at each step and thus speeding up computation to meet the real-time goal. In the implementation, the center of the tracked vehicle's bounding box in the previous frame, the new target center predicted by the HMM, and the average velocity change over the preceding frames are combined to compute a new target center and sampling range; particles are then scattered around this position to draw samples, and the sample closest to the target template is selected, its location becoming the new target position. In total, this thesis proposes six steps to improve computation speed and tracking accuracy: the main steps focus on single-vehicle tracking, while the auxiliary steps exploit the relationships among neighboring vehicles as additional information. The experimental videos were taken at the intersection of Guangfu Road and Jiangong Road in Hsinchu, and the results show that the proposed multi-vehicle tracker achieves real-time computation of intersection vehicle trajectories with high tracking accuracy under constrained traffic volume.
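To make the per-frame sampling step described above concrete, the following is a minimal Python sketch, not the thesis's exact code. It assumes a grayscale frame, a normalized-histogram appearance template, and hypothetical inputs `prev_center`, `hmm_center`, `recent_centers`, and `template_hist`; the equal weighting of the motion-based and HMM-based predictions is likewise an assumption.

```python
import numpy as np

def bhattacharyya(h1, h2):
    """Distance between two normalized histograms (smaller means more similar)."""
    return np.sqrt(max(0.0, 1.0 - float(np.sum(np.sqrt(h1 * h2)))))

def patch_histogram(frame, center, size=(32, 32), bins=16):
    """Normalized grayscale histogram of the patch centered at `center` (x, y)."""
    h, w = size
    x, y = int(center[0]), int(center[1])
    patch = frame[max(0, y - h // 2): max(0, y + h // 2),
                  max(0, x - w // 2): max(0, x + w // 2)]
    hist, _ = np.histogram(patch, bins=bins, range=(0, 255))
    return hist / max(1, hist.sum())

def track_one_step(frame, prev_center, hmm_center, recent_centers,
                   template_hist, n_particles=50, rng=None):
    rng = rng or np.random.default_rng()

    # Average velocity over the last few frames (the "average speed change").
    diffs = np.diff(np.asarray(recent_centers, dtype=float), axis=0)
    avg_velocity = diffs.mean(axis=0) if len(diffs) else np.zeros(2)

    # New target center: motion-extrapolated previous bounding-box center blended
    # with the HMM-predicted center (equal weighting is an assumption here).
    predicted = 0.5 * (np.asarray(prev_center) + avg_velocity) + 0.5 * np.asarray(hmm_center)

    # Sampling range follows the motion magnitude, keeping the particle count small.
    spread = max(4.0, float(np.linalg.norm(avg_velocity)))
    particles = predicted + rng.normal(scale=spread, size=(n_particles, 2))

    # Keep the particle whose patch best matches the target template.
    dists = [bhattacharyya(patch_histogram(frame, p), template_hist) for p in particles]
    return particles[int(np.argmin(dists))]
```

Because the sampling center already folds in the HMM prediction and the recent motion, far fewer particles are needed than with a blind random walk, which is what makes the per-frame cost low enough for real-time use.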
An intelligent transportation system is an important goal in the urban development vision of a smart city. This thesis therefore focuses on gathering traffic information within urban intersections, where accidents frequently occur. Nowadays, most urban intersections are equipped with surveillance cameras, so these vision-based contents can be exploited and explored, for instance by deploying a vehicle tracker to locate and record trajectories instantaneously. In this thesis, a real-time vehicle tracker for the urban intersection is proposed. The tracking method is based on the particle filter and coupled with a Hidden Markov model (HMM), which provides trajectory classification and tracklet prediction. Tracking all trajectories in real time is computationally challenging; by combining previous records of vehicle movement with the future tracklet predicted by the HMM, the proposed method removes most of the particles. Moreover, several implementation techniques are included, such as (1) utilizing the already-fixed trajectories of surrounding vehicles to boost tracking accuracy, and (2) vehicle ID labeling to identify relationships between surrounding vehicles. The experimental results demonstrate both the computational efficiency and the tracking correctness of the proposed method: the tracker truly executes in real time at an intersection of six traffic lanes, tracking around six vehicles per second.
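To illustrate the HMM's role in trajectory classification and one-step tracklet prediction, here is a hedged sketch. It assumes each vehicle trajectory is quantized into discrete grid-cell observations and that one discrete HMM, given as hypothetical `(start, trans, emit)` arrays, has been trained per route through the intersection; this shows the idea rather than the thesis's exact models.

```python
import numpy as np

def forward_loglik_and_belief(obs, start, trans, emit):
    """Scaled forward algorithm: log P(obs | model) and the final state belief."""
    alpha = start * emit[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        s = alpha.sum()
        loglik += np.log(s)
        alpha = alpha / s
    return loglik, alpha

def classify_and_predict(obs, route_models):
    """Pick the best-matching route HMM, then predict the next observed grid cell."""
    scored = {name: forward_loglik_and_belief(obs, start, trans, emit)
              for name, (start, trans, emit) in route_models.items()}
    best = max(scored, key=lambda name: scored[name][0])
    _, belief = scored[best]
    _, trans, emit = route_models[best]
    # One-step-ahead tracklet prediction: propagate the state belief through the
    # transition matrix and project it onto the observation (grid-cell) space.
    next_cell_dist = (belief @ trans) @ emit
    return best, int(np.argmax(next_cell_dist))
```

One HMM per route through the intersection (e.g., straight-through, left turn, right turn) would be trained offline; classification then amounts to picking the model with the highest forward log-likelihood, and the predicted cell can serve as the HMM-provided target center fed to the particle sampler.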