| Field | Value |
|---|---|
| Graduate student | 陳芝婷 (Chen, Chih-Ting) |
| Thesis title | 利用雙相機於具有移動物體場景估測移動平台的自我運動 (Ego Motion Estimation in a Scene with Moving Objects Using Stereo Cameras) |
| Advisor | 彭明輝 (Perng, Ming-Hwei) |
| Oral defense committee | |
| Degree | Master (碩士) |
| Department | College of Engineering, Department of Power Mechanical Engineering (工學院 動力機械工程學系) |
| Year of publication | 2009 |
| Graduating academic year | 97 |
| Language | Chinese |
| Pages | 81 |
| Keywords (Chinese) | 自我運動、雙相機、移動物體、區域配對、極線幾何 |
| Keywords (English) | ego motion, stereo camera, moving object, region matching, epipolar geometry |
The purpose of this study is to develop a new ego-motion estimation technique. The need arises from intelligent vehicles that must avoid obstacles on the road and use cameras to automatically detect moving objects ahead; applying any existing moving-object detection method on a moving platform requires ego-motion information.

Existing estimation methods can be grouped by camera system into monocular and stereo approaches. A review of the literature shows that depth information is indispensable for the goal set in this study, so the experiments are carried out with a stereo camera pair, which provides depth.
This study proposes a complete ego-motion estimation algorithm. A fast region-matching step performs an initial match; within the matched regions, corresponding points are then located with SSD (Sum of Squared Differences) and their 3D positions are computed. By combining the two methods to obtain depth information, the search range for feature-point matching is greatly reduced, which speeds up the computation.
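To make this step concrete, the following is a minimal sketch of SSD matching restricted to a pre-matched region on a rectified stereo pair, followed by triangulation of the 3D point. The function names, window size, and camera parameters (focal length f, baseline B, principal point) are illustrative assumptions, not the thesis's exact implementation.

```python
import numpy as np

def ssd(patch_a, patch_b):
    """Sum of squared differences between two equal-sized grayscale patches."""
    d = patch_a.astype(np.float64) - patch_b.astype(np.float64)
    return float(np.sum(d * d))

def match_point_in_region(left, right, pt, region, win=5):
    """Locate the correspondence of left-image point pt = (x, y) by an SSD
    search in the right image, restricted to the column span region =
    (x_min, x_max) of a pre-matched region on the same scanline (rectified
    stereo assumed).  Returns the disparity in pixels."""
    x, y = pt
    half = win // 2
    template = left[y - half:y + half + 1, x - half:x + half + 1]
    best_x, best_cost = None, np.inf
    for xr in range(region[0] + half, region[1] - half):
        candidate = right[y - half:y + half + 1, xr - half:xr + half + 1]
        cost = ssd(template, candidate)
        if cost < best_cost:
            best_cost, best_x = cost, xr
    return x - best_x

def triangulate(pt, disparity, f, B, cx, cy):
    """3D position of pt for a rectified pair with focal length f (pixels),
    baseline B (metres) and principal point (cx, cy)."""
    x, y = pt
    Z = f * B / disparity
    return np.array([(x - cx) * Z / f, (y - cy) * Z / f, Z])
```

Restricting the search to the matched region's column span is what shrinks the SSD search range and yields the speed-up described above.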
However, (1) SSD cannot avoid mismatches and (2) the scene contains moving objects, so estimating ego motion from these wrongly matched points gives incorrect results. This study therefore designs a Truncated Method that rejects wrongly matched points statistically; after several iterations, accurate ego-motion parameters are obtained.
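The abstract does not spell out the Truncated Method, so the sketch below shows one plausible reading: estimate the rigid motion from the retained 3D correspondences (here with a Kabsch-style SVD solver), discard points whose residual exceeds a statistical threshold (mean plus k standard deviations is assumed here), and iterate. The names and the specific threshold rule are assumptions for illustration, not the thesis's exact procedure.

```python
import numpy as np

def rigid_motion(P, Q):
    """Least-squares rigid motion (R, t) with Q ~ R @ P + t, via SVD (Kabsch).
    P and Q are 3xN arrays of corresponding 3D points."""
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T          # reflection-corrected rotation
    t = cq - R @ cp
    return R, t

def truncated_ego_motion(P, Q, k=2.0, iters=5):
    """Iteratively estimate ego motion while discarding correspondences whose
    residual exceeds mean + k*std (an assumed truncation rule)."""
    keep = np.ones(P.shape[1], dtype=bool)
    for _ in range(iters):
        R, t = rigid_motion(P[:, keep], Q[:, keep])
        residual = np.linalg.norm(Q - (R @ P + t), axis=0)
        thresh = residual[keep].mean() + k * residual[keep].std()
        keep = residual < thresh
    return R, t, keep
```

Points on moving objects and SSD mismatches produce large residuals against the dominant platform motion, so repeated truncation tends to leave only static-scene correspondences in the final estimate.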
Existing algorithms that do not compensate with depth information are limited to small scenes with little depth variation. In contrast, the proposed algorithm applies to real scenes, indoors and outdoors and with moving objects present, and estimates the ego motion of the moving platform with an error within two pixels; these parameters can subsequently be used for image compensation. Compared with methods that merely enlarge the SSD window size in an attempt to raise the matching rate, experimental results show that the proposed method needs less computation time yet estimates more accurate ego-motion parameters.