
Graduate Student: 陳正哲 (Chen, Cheng-Che)
Thesis Title: 利用影像方法於移動平台偵測移動物體
Moving Objects Detection by Image Methods on a Moving Platform
Advisors: 蔡宏營 (Tsai, HungYin); 彭明輝 (Perng, Ming-Hwei)
Committee Members: 蕭德瑛 (Shaw, Dein); 李素瑛 (Lee, SuhYin)
Degree: Master
Department: College of Engineering - Department of Power Mechanical Engineering
Year of Publication: 2012
Academic Year of Graduation: 100 (2011-2012)
Language: Chinese
Pages: 138
Keywords (Chinese): 影像處理、立體視覺、移動物體偵測
Keywords (English): image processing, stereo vision, moving object detection
This study uses imaging as the sensor to detect moving objects in a scene from a moving platform. Because both the scene and the objects are in motion, traditional optical flow or frame-differencing methods cannot isolate the moving objects. Two methods for finding moving objects from a moving platform are proposed: a generalized static-scene 3D model reconstruction method, and a single-camera feature-scale tracking method suited to finding approaching moving objects. Both algorithms are verified on images of real outdoor scenes.
For the static-scene 3D model reconstruction method, the feasibility of the algorithm was first verified indoors in a scene rich in feature points. The method was shown to reconstruct the feature-point displacements that camera motion causes in a static scene, and feature points that disagree with the prediction were attributed to moving objects. Outdoors, because the scene is not deliberately arranged, feature matching is harder than indoors. This study proposes using the closest correspondence within the same image as the normalization parameter of a generalized matching distance, as the criterion for evaluating matches. This criterion achieves a higher matching success rate than directly thresholding the matching distance or simply taking the minimum-distance match. Outdoors, because object distances vary widely, directly reconstructing the transform model deforms the stereo coordinate points. This study proposes computing the rotation matrix from far feature points and the translation vector from near feature points; this runs faster than the existing least-squares rigid-body transform model. With the improved transform-model reconstruction, moving objects were successfully found in outdoor moving scenes.
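The normalized matching criterion can be sketched as follows. This is one plausible reading of "the closest correspondence in the same image as the normalization parameter": a best match is kept only if its descriptor distance is small relative to the closest correspondence in the image pair. The function name and `max_ratio` value are illustrative, not the thesis's.

```python
import numpy as np

def normalized_matches(desc_a, desc_b, max_ratio=2.0):
    """Match descriptors between two images, keeping a best match only if
    its distance is small relative to the closest correspondence found in
    the same image pair (used here as the normalization parameter)."""
    # Pairwise Euclidean distances between all descriptor pairs.
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    d_best = d.min()  # closest correspondence in this image pair
    matches = []
    for i, row in enumerate(d):
        j = int(row.argmin())
        if row[j] <= max_ratio * d_best:  # normalized-distance criterion
            matches.append((i, j))
    return matches

# A clearly good pair is kept; a distant, ambiguous pair is rejected.
desc_a = np.array([[0.0, 0.0], [10.0, 10.0]])
desc_b = np.array([[0.0, 0.1], [50.0, 50.0]])
print(normalized_matches(desc_a, desc_b))  # [(0, 0)]
```

Unlike a fixed distance threshold, the accepted distance scales with how well the best features in the image pair match, which is what makes the criterion usable across differently textured outdoor scenes.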
In addition, to achieve real-time computation, a fast algorithm is proposed for a single backward-facing camera to find approaching moving objects. By tracking the scale of feature points, features with steadily increasing scale are selected, and the result is refined using the relative relationships among feature points. This method successfully found approaching moving objects in outdoor scenes at a processing rate of 4.4 frames per second.


Because the pixels of every captured scene change over time on a moving platform, moving objects cannot be detected by traditional optical flow or frame-differencing methods. Two methods for detecting moving objects from a moving platform are proposed in this study. One is "static scene reconstruction," a generalized method; the other is "single-camera feature scale differentiation," which detects moving objects approaching a backward-facing camera. Both methods are tested on image sets captured from real outdoor scenes.
Static scene reconstruction is first verified on an indoor scene with plentiful, distinct feature points, which makes feature matching robust. The static-scene transform matrix is estimated from sequential feature matches. Feature points on moving objects are defined as those with a large distance between the position predicted by the static-scene transform matrix and the corresponding detected position.
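The residual test described above can be sketched as follows, assuming matched 3D feature points and an already-estimated static-scene transform (R, t). The function name and threshold are illustrative, not the thesis's values.

```python
import numpy as np

def flag_moving_features(prev_pts, curr_pts, R, t, threshold=0.1):
    """Flag 3D feature points whose motion deviates from the static-scene
    rigid transform (R, t) estimated from the camera's ego-motion.

    prev_pts, curr_pts: (N, 3) arrays of matched 3D points.
    Returns a boolean mask: True marks a moving-object candidate."""
    predicted = prev_pts @ R.T + t          # where static points should land
    residual = np.linalg.norm(curr_pts - predicted, axis=1)
    return residual > threshold

# Static scene shifted by a pure translation; one point moves on its own.
R = np.eye(3)
t = np.array([0.0, 0.0, 0.5])
prev_pts = np.array([[0, 0, 5.0], [1, 0, 8.0], [0, 1, 12.0]])
curr_pts = prev_pts + t
curr_pts[1] += np.array([0.4, 0.0, 0.0])    # independently moving point
print(flag_moving_features(prev_pts, curr_pts, R, t))  # [False  True False]
```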
Some modifications are needed when applying static scene reconstruction outdoors. A normalized criterion is proposed to improve matching between the ambiguous features of outdoor scenes. In addition, the spread of object distances is larger outdoors than indoors, so an unconstrained scene transform matrix can improperly deform the shape of the reconstructed static scene. A method is therefore proposed that computes the rotation matrix from far feature points in an image and the translation vector from near feature points. This method is faster than the rigid-body transformation method that estimates the transform matrix by least squares. Moving objects are successfully detected in the outdoor tests.
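The far/near split can be sketched as below, assuming matched 3D feature points with known depths. Far points have negligible translation parallax, so centring them isolates the rotation; near points then fix the translation. The Kabsch/SVD alignment stands in for whatever solver the thesis actually uses, and the depth cut-off is an illustrative parameter.

```python
import numpy as np

def estimate_transform_far_near(prev_pts, curr_pts, depth, far_thresh=30.0):
    """Estimate camera ego-motion in two stages: rotation from far feature
    points, then translation from near points once the rotation is removed.
    `far_thresh` and the Kabsch/SVD solver are illustrative choices."""
    far = depth > far_thresh
    # Rotation: Kabsch alignment of the centred far points (centring
    # cancels the translation, so far points constrain rotation alone).
    P = prev_pts[far] - prev_pts[far].mean(axis=0)
    Q = curr_pts[far] - curr_pts[far].mean(axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    # Translation: average residual of the near points after rotation.
    near = ~far
    t = (curr_pts[near] - prev_pts[near] @ R.T).mean(axis=0)
    return R, t
```

Because each stage solves a smaller, better-conditioned problem than a joint six-degree-of-freedom fit, this decomposition is also cheaper than a full least-squares rigid-body estimate, which is the speed advantage the abstract claims.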
For objects approaching the moving platform from behind, a single-camera feature scale differentiation method is proposed. Feature points with increasing scale are selected as moving-object candidates. Moving objects are then found by refining these candidates using their spatial relationships with adjacent features. Approaching moving objects are successfully detected in the outdoor scene at a processing rate of 4.4 fps, faster than the acquisition rate of 2 fps.
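The scale-growth selection step can be sketched as follows. The tracks, names, and `min_growth` parameter are illustrative, and the refinement by spatial relationships with adjacent features is omitted.

```python
def approaching_candidates(tracks, min_growth=1.05):
    """tracks: dict feature_id -> list of feature scales over consecutive
    frames. A feature whose scale grows monotonically, by at least
    `min_growth` overall, is treated as approaching the backward-facing
    camera (objects getting closer appear larger, so their feature scale
    increases)."""
    candidates = []
    for fid, scales in tracks.items():
        growing = all(b >= a for a, b in zip(scales, scales[1:]))
        if growing and scales[-1] >= min_growth * scales[0]:
            candidates.append(fid)
    return candidates

tracks = {
    "car":  [2.0, 2.3, 2.8, 3.5],   # scale keeps increasing: approaching
    "tree": [1.6, 1.6, 1.5, 1.6],   # roughly constant: static background
}
print(approaching_candidates(tracks))  # ['car']
```

Because this test needs only one camera and per-feature scale values, it avoids stereo reconstruction entirely, which is what makes the reported 4.4 fps rate reachable.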

Table of Contents: Contents; List of Figures; List of Tables; 摘要 (Chinese Abstract); Abstract; Acknowledgements
Chapter 1  Introduction
  1.1 Motivation
  1.2 Background
    1.2.1 Direct distance measurement: 1.2.1.1 Ultrasound; 1.2.1.2 Radar
    1.2.2 Indirect distance measurement: 1.2.2.1 Thermal infrared; 1.2.2.2 Visible-light images
  1.3 Summary
Chapter 2  Literature Review
  2.1 Single-camera systems: 2.1.1 Camera model; 2.1.2 Distance estimation; 2.1.3 Object recognition; 2.1.4 Active frame subtraction; 2.1.5 Summary
  2.2 Two-camera systems: 2.2.1 Distance estimation; 2.2.2 Epipolar geometry; 2.2.3 Generic Obstacle and Lane Detection (GOLD); 2.2.4 Summary
  2.3 Matching techniques: 2.3.1 Dense matching; 2.3.2 Feature-point matching
  2.4 Summary
Chapter 3  Methods
  3.1 System hardware: 3.1.1 Cameras; 3.1.2 System architecture; 3.1.3 Lenses; 3.1.4 Image acquisition program; 3.1.5 Optical axis alignment
  3.2 Algorithms: 3.2.1 Camera calibration; 3.2.2 Feature matching with SIFT (3.2.2.1 Keypoint extraction; 3.2.2.2 Keypoint description; 3.2.2.3 Keypoint matching); 3.2.3 Static scene 3D model reconstruction (3.2.3.1 Online program flow; 3.2.3.2 3D motion model; 3.2.3.3 RANSAC); 3.2.4 Single-camera feature scale tracking (3.2.4.1 Principle and analysis; 3.2.4.2 Algorithm)
Chapter 4  Results and Discussion
  4.1 Static scene 3D model reconstruction: 4.1.1 Experimental setup; 4.1.2 Program flow verification; 4.1.3 Indoor test results
  4.2 Outdoor tests: 4.2.1 Matching tests (4.2.1.1 Preprocessing; 4.2.1.2 Building the ground truth; 4.2.1.3 Correlation between SIFT descriptor distance and correct matches; 4.2.1.4 Minimum-distance matches as candidates; 4.2.1.5 Match evaluation; 4.2.1.6 Mutual matching; 4.2.1.7 Matching results); 4.2.2 Outdoor static scene reconstruction (4.2.2.1 Program flow; 4.2.2.2 Correct-match selection by RANSAC; 4.2.2.3 Computing rotation and translation separately; 4.2.2.4 Least-squares rigid transform; 4.2.2.5 Outdoor test results)
  4.3 Single-camera feature scale tracking: 4.3.1 Matching across consecutive frames; 4.3.2 Refining matches; 4.3.3 Excluding spatially contradictory features; 4.3.4 Computation time
Chapter 5  Conclusions: 5.1 Conclusions; 5.2 Future work
References

[1] M. Stone and J. Broughton, “Getting off your bike: cycling accidents in Great Britain in 1990-1999,” Accident Analysis and Prevention, pp. 549-556, 2003.
[2] 國情統計通報 (National Statistics Bulletin), Directorate-General of Budget, Accounting and Statistics, Executive Yuan, 2010.
[3] 98年公路里程及橋樑總統計資料 (Highway Mileage and Bridge Statistics for 2009), Directorate General of Highways, Ministry of Transportation and Communications.
[4] 公路路線設計規範 (Highway Route Design Specifications), Ministry of Transportation and Communications, 2011.
[5] 李彥鋒, A Vision-Based Embedded Vehicle Detection System (以視覺為基礎之嵌入式車輛偵測系統), Master's thesis, Department of Electronic Engineering, National Chin-Yi University of Technology, 2009.
[6] 鄭凌軒, A Study of a DSP-Based Vehicle Vision System (DSP Based 車路視覺系統之研究), Master's thesis, Department of Electrical Engineering, National Sun Yat-sen University, 2005.
[7] W. Jones, “Keeping cars from crashing,” IEEE Spectrum, vol. 38, no. 9, pp. 40-45, 2001.
    [8] T. Gandhi and M. M. Trivedi, “Pedestrian Collision Avoidance Systems: A Survey of Computer Vision Based Recent Studies,” IEEE Intelligent Transportation Systems Conference, pp. 976-981, 2006.
    [9] J. Ge, Y. Luo and G. Tei, “Real-Time Pedestrian Detection and Tracking at Nighttime for Driver-Assistance Systems,” IEEE Transactions on Intelligent Transportation Systems, vol. 10, pp. 283-298, 2009.
    [10] F. Xu, X. Liu and K. Fujimura, “Pedestrian Detection and Tracking With Night Vision,” IEEE Transactions on Intelligent Transportation Systems, vol. 6, pp. 63-71, 2005.
[11] 鄭惟仁, A Study of a Low-Cost Vehicle Collision Prevention and Warning System (低成本車輛碰撞預防警示系統之研究), Master's thesis, Department of Electronic Engineering, National Kaohsiung University of Applied Sciences, 2008.
[12] Z. Sun, G. Bebis and R. Miller, “On-Road Vehicle Detection: A Review,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, pp. 694-711, 2006.
    [13] H. Ishiguro, M. Yamamoto and S. Tsuji, “Omnidirectional Stereo,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, pp. 257-262, 1992.
    [14] Z. Zhu, K. Rajasekar, E. Riseman and A. Hanson, “Panoramic virtual stereo vision of cooperative mobile robots for localizing 3D moving objects,” IEEE Workshop on Omnidirectional Vision, pp. 29-36, 2000.
    [15] S. Shimizu, K. Yamamoto, C. Wang, Y. Satoh, H. Tanahashi and Y. Niwa, “Moving object detection by mobile Stereo Omni-directional System (SOS) using spherical depth image,” Pattern Analysis and Applications, pp. 113-126, 2006.
    [16] http://www.ionroad.com/.
    [17] http://www.carmate.co.jp/software_en/drivematesafetycam/
    [18] “mobileye C2-270,” http://mobileye.com/products/mobileye-c2-270/.
    [19] H.-Y. Chang, C.-M. Fu and C.-L. Huang, “Real-Time Vision-Based Preceding Vehicle Tracking And Recognition,” Intelligent Vehicles Symposium, pp. 514-519, 2005.
[20] Y.-C. Kuo, N.-S. Pai and Y.-F. Li, “Vision-based vehicle detection for a driver assistance system,” Computers & Mathematics with Applications, vol. 61, no. 8, pp. 2096-2100, 2011.
[21] T. Kalinke, C. Tzomakas and W. von Seelen, “A Texture-Based Object Detection and an Adaptive Model-Based Classification,” IEEE International Conference on Intelligent Vehicles, pp. 341-346, 1998.
[22] A. Bensrhair, M. Bertozzi, A. Broggi, P. Miche, S. Mousset and G. Toulminet, “A Cooperative Approach to Vision-Based Vehicle Detection,” IEEE Intelligent Transportation Systems, pp. 209-214, 2001.
    [23] J. C. McCall and M. M. Trivedi, “Video-Based Lane Estimation and Tracking for Driver Assistance: Survey, System, and Evaluation,” IEEE Transactions on Intelligent Transportation Systems, vol. 7, pp. 20-37, 2006.
    [24] B. Funt, K. Barnard and L. Martin, “Is Machine Colour Constancy Good Enough?,” European Conference on Computer Vision, vol. 1406, pp. 445- 459, 1998.
[25] G. Finlayson, S. Hordley and P. Hubel, “Color by correlation: a simple, unifying framework for color constancy,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, pp. 1209-1221, 2001.
    [26] N. Dalal and B. Triggs, “Histograms of Oriented Gradients for Human Detection,” Computer Vision and Pattern Recognition, vol. 1, pp. 886-893, 2005.
    [27] C.-C. Chang and C.-J. Lin, “LIBSVM: A library for support vector machines,” ACM Transactions on Intelligent Systems and Technology, vol.2, Article No. 27, 2011.
    [28] T. Hashiyama, D. Mochizuki, Y. Yano and S. Okuma, “Active frame subtraction for pedestrian detection from images of moving camera,” IEEE International Conference on Systems, Man and Cybernetics, vol. 1, pp. 480-485, 2003.
    [29] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, Cambridge University Press, 2003.
    [30] J.-Y. Bouguet, “Camera Calibration Toolbox for Matlab,” http://www.vision.caltech.edu/bouguetj/calib_doc/index.html#own_calib.
    [31] M. Bertozzi and A. Broggi, “GOLD: A parallel real-time stereo vision system for generic obstacle and lane detection,” IEEE Transactions on Image Processing, vol. 7, pp. 62-81, 1998.
[32] R. Szeliski, Computer Vision: Algorithms and Applications, New York: Springer, 2010.
[33] C. Harris and M. Stephens, “A combined corner and edge detector,” Alvey Vision Conference, pp. 147-152, 1988.
    [34] D. G. Lowe, “Distinctive Image Features from Scale-Invariant Keypoints,” International Journal of Computer Vision, vol. 60, pp. 91-110, 2004.
[35] A. Vedaldi and B. Fulkerson, “VLFeat: An open and portable library of computer vision algorithms,” http://www.vlfeat.org/.
[36] M. Fischler and R. Bolles, “Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography,” Communications of the ACM, vol. 24, pp. 381-395, 1981.
[37] D. A. Forsyth and J. Ponce, “Estimating Rigid Transformations,” in Computer Vision: A Modern Approach, p. 480, 2003.
    [38] C. Wu, “SiftGPU: A GPU Implementation of Scale Invariant Feature Transform (SIFT),” http://cs.unc.edu/~ccwu/siftgpu/.
    [39] G. Stein, Y. Gdalyahu and A. Shashua, “Stereo-Assist: Top-down Stereo for Driver Assistance Systems in Intelligent Vehicles,” Intelligent Vehicles Symposium (IV), pp. 723-730, 2010.
[40] M. Irani and P. Anandan, “Robust Multi-Sensor Image Alignment,” IEEE International Conference on Computer Vision, pp. 959-966, 1998.

Full-text release date: full text not authorized for public access (campus network)
Full-text release date: full text not authorized for public access (off-campus network)
