Graduate Student: 潘威丞 (Pan, Wei-Cheng)
Thesis Title: 具定位與解碼功能之影像系統與其於高精度影像量測之應用 (Image processing system with positioning and decoding functions and its application to high-precision image measurement)
Advisor: 蔡宏營 (Tsai, Hung-Yin)
Committee Members: 宋震國 (Sung, Cheng-Kuo), 黃衍任 (Hwang, Yean-Ren)
Degree: Master
Department: College of Engineering - Department of Power Mechanical Engineering
Publication Year: 2017
Academic Year of Graduation: 105
Language: Chinese
Pages: 99
Chinese Keywords: 影像處理, 定位, 解碼, 機器視覺, 影像尺
Keywords: image processing, positioning, decoding, machine vision, image scale
Using image processing techniques and optical components, this research builds an image capture and analysis system with positioning and decoding functions. Through a low-cost, non-contact approach, it achieves positioning of a moving target, enabling subsequent applications such as X-Y table positioning, measurement of object length and width, dimensional inspection, and updating the dimension and pose information of drawing files. The main outcome of this research is an image positioning and decoding system based on image processing; however, since no suitable X-Y table was available to verify the positioning accuracy of the system, a high-precision image measurement method was further designed on top of it, incorporating techniques from computer vision and machine learning, and line-segment measurement experiments were conducted. Measuring a 4 mm grade-1 gauge block, the average error over 75 data sets is 11 µm and the maximum error is 37 µm.
During line-segment measurement, obtaining a sufficiently high measurement resolution makes the field of view of the main camera very narrow, so the complete object cannot be seen in a single image and at least two images are needed to complete a measurement. With the image positioning and decoding system built in this research, no positioning feedback from an X-Y table is required: the positioning symbols and coding patterns in the images alone serve as the positioning reference when matching the coordinates of different images. The image coordinates of the two endpoints of the line segment are thus obtained, from which the segment length, its angle with the reference axis, and other information are calculated. Because the camera is calibrated, the real-world geometry of the line segment can easily be deduced.
The image positioning and decoding system involves four steps: (1) setup and calibration of the camera and light source; (2) image scale: positioning symbols and coding patterns are laser-engraved on a stainless steel plate; (3) positioning symbol detection: histogram distributions are used to locate the positioning symbols in the image; (4) decoding: the coding pattern near a positioning symbol is decoded to identify that symbol.
The high-precision image measurement technique builds on the image positioning and decoding system with two additional steps: (1) object edge detection: the edge segments of the object under measurement are extracted with image processing and machine vision techniques; (2) endpoint detection: the intersections of the edge segments are recorded.
Through image processing techniques and optical components, this thesis demonstrates a low-cost, non-contact image capture and analysis system with positioning and decoding functions, which achieves positioning of a moving target. This positioning capability supports many applications, such as X-Y table positioning, measurement of object length and width, dimensional inspection, and updating CAD file dimension and pose information. The main result of this research is an image processing system with positioning and decoding functions; however, because no suitable X-Y table was available to verify the positioning accuracy of this system, a high-precision image measurement method was additionally designed on top of it, based on computer vision and machine learning techniques. In a line-segment measurement test on a 4 mm grade-1 gauge block, 75 sets of data yielded an average measurement error of 11 µm and a maximum error of 37 µm.
In line-segment measurement, obtaining a sufficiently high measurement resolution makes the field of view of the main camera too narrow to see the complete object in a single image, so at least two images are required for a complete measurement. With the image positioning and decoding system built in this research, positioning feedback from an X-Y table is not needed: the positioning symbols and coding patterns in the images serve as the positioning reference when the coordinates of different images are matched. The coordinates of the two endpoints of the line segment in the images are obtained, from which the length of the segment, its angle with the reference axis, and other information are calculated. Because the camera is calibrated, the real-world geometry of the line segment can easily be deduced.
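The coordinate matching described above can be sketched as follows. This is a minimal illustration, not the thesis implementation: the scale factor `MM_PER_PIXEL` and all coordinate values are assumed for the example, and each detected positioning symbol is assumed to have a known global position on the image scale.

```python
import math

# Hypothetical pixel-to-mm scale factor obtained from camera calibration
# (assumed value for illustration only).
MM_PER_PIXEL = 0.001

def to_global(point, symbol_in_image, symbol_global):
    """Map a pixel coordinate into the image-scale (global) frame using
    the known global position of a detected positioning symbol."""
    return (symbol_global[0] + (point[0] - symbol_in_image[0]),
            symbol_global[1] + (point[1] - symbol_in_image[1]))

def segment_length_and_angle(p1, p2):
    """Segment length (mm) and angle to the x reference axis (degrees)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    length_mm = math.hypot(dx, dy) * MM_PER_PIXEL
    angle_deg = math.degrees(math.atan2(dy, dx))
    return length_mm, angle_deg

# Endpoint A is seen in image 1 and endpoint B in image 2; each image also
# contains a decoded positioning symbol whose global position is known.
a_global = to_global((120.0, 300.0), symbol_in_image=(100.0, 100.0),
                     symbol_global=(0.0, 0.0))
b_global = to_global((80.0, 300.0), symbol_in_image=(60.0, 100.0),
                     symbol_global=(4000.0, 0.0))
length, angle = segment_length_and_angle(a_global, b_global)
# length = 4.0 (mm), angle = 0.0 (degrees)
```

The key point is that the two endpoints never need to appear in the same image: each is expressed relative to a positioning symbol, and the decoded symbol identities anchor both images to one shared frame.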
The image processing system with positioning and decoding functions has four steps: (a) camera and light source setup and calibration; (b) image scale: positioning symbols and coding patterns are laser-engraved on a stainless steel plate; (c) positioning symbol detection: histogram distributions are used to locate the "L"-shaped positioning marks in the image; (d) decoding: the coding pattern in the vicinity of a positioning symbol is decoded to identify that symbol.
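Steps (c) and (d) can be sketched on a toy binarized image. This is a simplified stand-in for the thesis method: the 6x6 image, the single-peak histogram heuristic, and the bit-cell locations of the coding pattern are all assumptions made for illustration.

```python
def find_l_symbol(binary):
    """Step (c) sketch: the 'L' mark produces a peak in both the row-sum
    and column-sum histograms; their crossing estimates its corner."""
    row_hist = [sum(row) for row in binary]        # dark pixels per row
    col_hist = [sum(col) for col in zip(*binary)]  # dark pixels per column
    r = max(range(len(row_hist)), key=row_hist.__getitem__)
    c = max(range(len(col_hist)), key=col_hist.__getitem__)
    return r, c

def decode_pattern(binary, cells):
    """Step (d) sketch: read the coding pattern as bits at given
    (row, col) cell positions and pack them into a symbol ID."""
    bits = [binary[r][c] for r, c in cells]
    return sum(b << i for i, b in enumerate(reversed(bits)))

# Toy image (1 = engraved mark): an "L" with its corner at (4, 1) and a
# 3-bit coding pattern 101 placed in the top row.
img = [
    [0, 1, 0, 1, 0, 1],
    [0, 1, 0, 0, 0, 0],
    [0, 1, 0, 0, 0, 0],
    [0, 1, 0, 0, 0, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 0, 0, 0, 0, 0],
]
corner = find_l_symbol(img)                             # → (4, 1)
symbol_id = decode_pattern(img, [(0, 3), (0, 4), (0, 5)])  # → 5
```

In the real system the histogram analysis must tolerate noise and multiple symbols per view, but the principle is the same: projections localize the mark, and the nearby code disambiguates which mark it is.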
The high-precision image measurement technique builds on the image processing system with positioning and decoding functions, adding two steps: (a) object edge detection: the edges of the object are extracted by image processing and machine vision techniques; (b) endpoint detection: the intersections of the edge lines are recorded.
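The endpoint-detection step (b) amounts to intersecting two fitted edge lines. A minimal sketch, assuming each detected edge has already been reduced to two points on it (the thesis extracts edges first via step (a)):

```python
def fit_line(p, q):
    """Line through two points as (a, b, c) with a*x + b*y + c = 0."""
    a = q[1] - p[1]
    b = p[0] - q[0]
    c = -(a * p[0] + b * p[1])
    return a, b, c

def intersect(l1, l2):
    """Intersection of two lines in (a, b, c) form, or None if parallel."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None  # parallel edges: no corner point
    x = (b1 * c2 - b2 * c1) / det
    y = (a2 * c1 - a1 * c2) / det
    return x, y

# A horizontal edge at y = 2 meets a vertical edge at x = 3:
horizontal = fit_line((0.0, 2.0), (5.0, 2.0))
vertical = fit_line((3.0, 0.0), (3.0, 5.0))
corner = intersect(horizontal, vertical)  # → (3.0, 2.0)
```

Intersecting fitted lines, rather than picking a raw edge pixel, gives sub-pixel endpoint coordinates, which is what makes µm-level errors reachable at this magnification.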