Graduate Student: 田鈞豪 (Jun-Hao Tian)
Thesis Title: 三維影像之重建與追蹤系統之研究 (Study toward 3D Image Reconstruction and Tracking System)
Advisor: 陳建祥 (Jian-Shiang Chen)
Oral Defense Committee:
Degree: Master (碩士)
Department: College of Engineering, Department of Power Mechanical Engineering (動力機械工程學系)
Year of Publication: 2007
Graduation Academic Year: 95
Language: Chinese
Number of Pages: 62
Chinese Keywords: 3D image reconstruction (三維影像重建), image tracking (影像追蹤), computer vision (電腦視覺), image processing (影像處理)
English Keywords: 3D reconstruction, feature tracking, computer vision, image processing
Chinese Abstract:
This study uses a personal computer as its basic platform and combines image processing methods with stereo vision, capturing images with two cameras that can rotate in both the pan and tilt directions, to construct a real-time stereo image tracking system around which the experiments are designed. In the experiments, the two cameras capture paired images simultaneously, and image processing methods separate the target object from the background so that the target's position can be framed. Corresponding feature points of the target in the two images are then matched according to the principles of epipolar geometry, the offset between the target and the cameras' optical centers is measured, and the cameras are calibrated to obtain the camera parameters. The stereo image depth of the target can then be computed, and the target is locked onto and tracked in real time.
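The depth computation described here follows the standard parallel-axis stereo triangulation relation; a minimal worked form is sketched below, where f, B, and d are assumed symbols for the focal length, the baseline between the two camera centers, and the measured disparity (they are not notation taken from the thesis).

```latex
% Standard parallel-axis stereo triangulation (assumed symbols):
%   (x_L, y_L), (x_R, y_R) : image coordinates of the matched feature point
%   d = x_L - x_R          : disparity, B : baseline, f : focal length in pixels
Z = \frac{fB}{d}, \qquad X = \frac{x_L Z}{f}, \qquad Y = \frac{y_L Z}{f}
```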
The experiments are divided into two parts. The first part, "real-time image tracking," aims to keep the target object reliably locked within the image frame: a moving-edge detection method obtains the target's coordinates and direction of displacement, and actuation signals are sent to rotate the cameras' stepper motors so that the target stays locked at the image center, achieving the tracking effect. The second part, "3D stereo vision," aims to obtain the distance between the target object and the cameras: various image processing methods are used to simplify the images, the offset between the left and right paired images is measured to compute the depth of the target image, and the deviation between the results and the actual values, together with its possible causes, is discussed.
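As an illustration of the tracking part, the sketch below approximates the moving-edge detection and stepper-motor recentering loop with plain frame differencing in Python/OpenCV. It is a minimal sketch under assumed interfaces: `send_step(axis, direction)` is a hypothetical motor command, and the threshold, dead band, and blob-size values are illustrative, not taken from the thesis.

```python
import cv2

def target_offset(prev_gray, gray, min_area=200):
    """Approximate moving-edge detection with frame differencing:
    threshold the difference image, keep the largest blob, and return
    the offset of its centroid from the image center (in pixels)."""
    diff = cv2.absdiff(gray, prev_gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)
    if cv2.contourArea(blob) < min_area:
        return None
    m = cv2.moments(blob)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    h, w = gray.shape
    return cx - w / 2.0, cy - h / 2.0

def track(cap, send_step, dead_band=20):
    """Keep the moving target near the image center by stepping the
    pan/tilt motors whenever its centroid leaves a dead band."""
    ok, frame = cap.read()
    if not ok:
        return
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        offset = target_offset(prev_gray, gray)
        if offset is not None:
            dx, dy = offset
            if abs(dx) > dead_band:
                send_step("pan", 1 if dx > 0 else -1)
            if abs(dy) > dead_band:
                send_step("tilt", 1 if dy > 0 else -1)
        prev_gray = gray
```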
English Abstract:
A real-time object tracking system with stereo vision and an image processing method on a PC-based platform was developed. The experimental setup combines two webcams, forming a binocular pair, each of which can pan and tilt independently. First, the webcams capture paired images simultaneously; the system then separates the target object from the background in the paired images with an image processing method and locates the target by performing stereo matching on the paired images. After calibrating the cameras and measuring the offset between the target object and the optical axes, the system can calculate the depth between the target object and the webcams and command the webcams to track the object in real time.
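The calibration-then-depth pipeline summarized in the abstract can be sketched with OpenCV's stock routines, shown below. This is an illustrative sketch rather than the thesis's implementation: the chessboard pattern size, square size, and the use of block matching (StereoBM) along rectified epipolar lines are assumptions.

```python
import cv2
import numpy as np

# Assumed checkerboard geometry (illustrative values, not from the thesis).
PATTERN = (9, 6)      # inner corners per row and column
SQUARE = 0.025        # square size in metres

def calibrate_pair(left_imgs, right_imgs):
    """Estimate each camera's intrinsics and the relative pose between the
    two cameras from paired chessboard views (grayscale images), then
    compute the rectification transforms and reprojection matrix Q."""
    objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE
    obj_pts, l_pts, r_pts = [], [], []
    for l_img, r_img in zip(left_imgs, right_imgs):
        ok_l, cl = cv2.findChessboardCorners(l_img, PATTERN)
        ok_r, cr = cv2.findChessboardCorners(r_img, PATTERN)
        if ok_l and ok_r:
            obj_pts.append(objp)
            l_pts.append(cl)
            r_pts.append(cr)
    size = left_imgs[0].shape[::-1]
    _, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, l_pts, size, None, None)
    _, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, r_pts, size, None, None)
    _, K1, d1, K2, d2, R, T, _, _ = cv2.stereoCalibrate(
        obj_pts, l_pts, r_pts, K1, d1, K2, d2, size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, d1, K2, d2, size, R, T)
    return (K1, d1, R1, P1), (K2, d2, R2, P2), Q

def depth_from_pair(left_gray, right_gray, Q, num_disp=64, block=15):
    """Block-match disparities along rectified epipolar lines and
    reproject them to metric 3-D coordinates (the Z channel is depth)."""
    sbm = cv2.StereoBM_create(numDisparities=num_disp, blockSize=block)
    disp = sbm.compute(left_gray, right_gray).astype(np.float32) / 16.0
    return cv2.reprojectImageTo3D(disp, Q)
```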
Two experimental verifications were performed. The first experiment is the reconstruction of three-dimensional stereo vision from 2-D images; its objective is to calculate the distance between the webcams and the target object. The paired images are first simplified with various image processing methods, the disparity between the paired images is then measured to calculate the depth of the target object, and finally the possible causes of the error between the results and the correct data are discussed. The second experiment is a real-time image tracking system; its objective is to command the webcams to rotate toward the target object. By applying the moving-target shifting method, the system measures the geometric center of the target object and rotates the webcams toward it.