
Author: 田鈞豪 (Jun-Hao Tian)
Thesis title: 三維影像之重建與追蹤系統之研究 (Study toward 3D Image Reconstruction and Tracking System)
Advisor: 陳建祥 (Jian-Shiang Chen)
Committee members:
Degree: Master
Department: College of Engineering, Department of Power Mechanical Engineering
Year of publication: 2007
Academic year of graduation: 95 (ROC calendar)
Language: Chinese
Number of pages: 62
Keywords (Chinese): 3D image reconstruction, image tracking, computer vision, image processing
Keywords (foreign language): 3D reconstruction, feature tracking, computer vision, image processing
Abstract (Chinese):
    This study builds a real-time stereo image tracking system on a personal-computer platform, combining image processing methods with stereo vision and capturing images with two cameras that can rotate horizontally (pan) and vertically (tilt). In the experiments, the two cameras capture paired images simultaneously; image processing separates the target object from the background so that the target's position can be framed. Corresponding feature points of the target in the two images are then matched according to epipolar geometry, the offset between the target and the cameras' optical centers is measured, and the cameras are calibrated to obtain their parameters. The stereo image depth of the target can then be computed, and the target is locked onto and tracked in real time.
    The experiments consist of two parts. The first part, "real-time image tracking," aims to keep the target reliably locked within the image frame: moving-edge detection is applied to obtain the target's coordinates and direction of motion, and actuation signals drive the cameras' stepper motors so that the target stays locked at the image center, achieving the tracking effect. The second part, "3D stereo vision," aims to obtain the distance between the target and the cameras: various image processing methods are used to simplify the images, the offset (disparity) between the left and right paired images is measured to compute the target's image depth, and the error between the results and the actual values, along with its possible causes, is discussed.
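
    The abstract above describes computing the target's depth from the horizontal offset (disparity) between matched feature points in the calibrated left and right images. The following is a minimal sketch of that depth-from-disparity step, assuming a rectified stereo pair with parallel optical axes; the focal length, baseline, and pixel coordinates are illustrative placeholders, not values from the thesis.

```python
def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Depth (metres) of a point matched on the same epipolar line of a
    rectified left/right image pair with parallel optical axes."""
    disparity = x_left - x_right          # horizontal offset in pixels
    if disparity <= 0:
        raise ValueError("target must lie in front of both cameras")
    return focal_px * baseline_m / disparity

# Illustrative numbers only: 800 px focal length, 0.10 m baseline,
# 25 px disparity  ->  depth = 800 * 0.10 / 25 = 3.2 m
print(depth_from_disparity(420.0, 395.0, focal_px=800.0, baseline_m=0.10))
```

    In practice the disparity would come from the epipolar-constrained feature matching described above, and the focal length and baseline from the camera calibration step.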


Abstract (English):
    A real-time object tracking system based on stereo vision and image processing was developed on a PC-based platform. The experimental setup combines two webcams that can pan and tilt independently. First, the two webcams capture paired images simultaneously; the system then separates the target object from the background with image processing and locates the target by stereo matching between the paired images. After calibrating the cameras and measuring the offset between the target object and the optical axes, the system computes the depth between the target and the webcams and commands the webcams to track the object in real time.
    Two experimental verifications were performed. The first is the reconstruction of three-dimensional stereo information from 2-D images. Its objective is to compute the distance between the webcams and the target object: the paired images are first simplified with various image processing methods, the disparity between them is measured to compute the target's depth, and the possible causes of the error between the results and the reference values are discussed. The second is a real-time image tracking experiment. Its objective is to command the webcams to rotate toward the target: the moving-target-shifting method is applied to find the target's geometric center, and the webcams are rotated in that direction.
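
    The tracking experiment described above amounts to a loop: detect the moving target, take its geometric center, and steer the pan/tilt cameras so that the target stays near the image center. Below is a minimal sketch of such a loop, assuming OpenCV for capture and simple frame differencing as a stand-in for the thesis's motion-detection method; the 10-pixel dead band and the send_pan_tilt helper are hypothetical, since the actual stepper-motor interface is not described on this page.

```python
import cv2

def target_centroid(prev_gray, gray, thresh=25):
    """Locate the moving target by frame differencing (a simplified stand-in
    for the thesis's moving-edge detection) and return its centroid, if any."""
    diff = cv2.absdiff(gray, prev_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    m = cv2.moments(mask)
    if m["m00"] == 0:                      # no motion detected
        return None
    return m["m10"] / m["m00"], m["m01"] / m["m00"]

def send_pan_tilt(dx, dy):
    """Hypothetical stand-in for the stepper-motor actuation interface."""
    print("rotate", "right" if dx > 0 else "left", "/", "down" if dy > 0 else "up")

cap = cv2.VideoCapture(0)                  # one of the two pan/tilt webcams
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    c = target_centroid(prev, gray)
    if c is not None:
        h, w = gray.shape
        dx, dy = c[0] - w / 2, c[1] - h / 2    # offset from the image centre
        if abs(dx) > 10 or abs(dy) > 10:       # assumed 10-pixel dead band
            send_pan_tilt(dx, dy)              # keep the target centred
    prev = gray
```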

    Table of Contents

    Chapter 1 Introduction
      1-1 Research Motivation
      1-2 Literature Review
      1-3 Organization of the Thesis
    Chapter 2 Problem Description
      2-1 Image Depth Reconstruction
      2-2 Camera Parameter Calibration
      2-3 Stereo Imaging Information
        2-3-1 Lens Imaging System
        2-3-2 Epipolar Geometry
        2-3-3 Depth Calculation Method
      2-4 Target Detection
        2-4-1 Moving Edge Detection
        2-4-2 Moving Target Shifting
        2-4-3 Cross-correlation
      2-5 Summary
    Chapter 3 Experimental System Architecture
      3-1 System Architecture
        3-1-1 Hardware Architecture
        3-1-2 Software Environment
      3-2 Camera Parameter Estimation
        3-2-1 Intrinsic Parameters
        3-2-2 Extrinsic Parameters
        3-2-3 Camera Calibration
      3-3 Stereo Depth Reconstruction
        3-3-1 Fundamental Matrix and Epipolar Constraint
        3-3-2 Image Depth Calculation
      3-4 Target Detection and Tracking
        3-4-1 Grayscale Normalization
        3-4-2 Image Subtraction
        3-4-3 Edge Detection
        3-4-4 Image Multiplication
        3-4-5 Image Reconstruction
        3-4-6 Feature Tracking
      3-5 Summary
    Chapter 4 Experimental Results and Discussion
      4-1 Camera Calibration Results
      4-2 Moving Target Detection and Tracking
        4-2-1 Image Preprocessing
        4-2-2 Moving Target Tracking
      4-3 3D Information Reconstruction Results
        4-3-1 Feature Point Tracking
        4-3-2 3D Information Reconstruction
        4-3-3 Reconstructing the Trajectory of a Ball in Space
      4-4 Discussion
    Chapter 5 Contributions and Future Work
      5-1 Contributions
      5-2 Directions and Suggestions for Future Research
    References
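
    Sections 3-2-3 and 4-1 of the outline above deal with camera calibration, which the abstract says is needed before depth can be computed. The sketch below shows one common way this could be done (a planar checkerboard and OpenCV's calibration routine, which implements Zhang's method); the board geometry, square size, and file naming are assumptions for illustration, not details taken from the thesis.

```python
import glob
import cv2
import numpy as np

# Checkerboard geometry (9x6 inner corners, 25 mm squares) is assumed here;
# the thesis's actual calibration target is not specified on this page.
pattern, square = (9, 6), 0.025
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts, size = [], [], None
for path in glob.glob("calib/left_*.png"):     # assumed file naming
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_pts.append(objp)
    img_pts.append(corners)
    size = gray.shape[::-1]

if not obj_pts:
    raise SystemExit("no usable calibration images found")

# Intrinsic matrix K and lens distortion follow from the planar calibration.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
print("reprojection RMS:", rms)
print("intrinsics K:\n", K)
```

    Running the same procedure for each of the two webcams would give the intrinsic parameters used in the depth calculation; the extrinsic relation between the two cameras would then be estimated separately.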


    Full-text availability: not authorized for public release (on-campus and off-campus networks).
