| Field | Value |
|---|---|
| Graduate Student | 黃韋中 (Huang, Wei-Chung) |
| Thesis Title | 用於眼妝的眼線偵測 (Eyeline Detection for Automatic Eyeline Makeup) |
| Advisor | 張智星 |
| Committee Members | 黃仲陵, 賴尚宏 |
| Degree | Master (碩士) |
| Department | 資訊工程學系 (Computer Science), 電機資訊學院 (College of Electrical Engineering and Computer Science) |
| Year of Publication | 2012 |
| Academic Year of Graduation | 100 |
| Language | Chinese |
| Number of Pages | 64 |
| Keywords (Chinese) | 眼線偵測、物件偵測、輪廓跟蹤 |
| Keywords (English) | eyeline detection, object detection, contour tracing |
Chinese Abstract:

Eye-line detection for automated makeup application is a novel idea, and this thesis focuses on detecting the eye-line in close-range images. Starting from an analysis of the image characteristics, the overall pipeline consists of two parts: the first is contour tracing and the second is eye-line end-point detection.
In the first part, because of the image-capturing mechanism of the makeup device, the captured image and the real eye are at a scale close to 1:1, noise around the eye is prominent, and the eye-line appears as a region rather than a thin line. We therefore exploit color and eye-line orientation properties, using image shifting and subtraction to highlight the eye-line region before performing contour tracing. Contour tracing shifts a sub-window along the image, computes the pixel intensities inside the window, and uses a vertical projection to locate the eye-line position within the window.
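The following is a minimal sketch of the shift-and-subtract idea described above, assuming the eye-line runs roughly horizontally so that a small vertical shift is enough to make the dark eye-line region stand out against the surrounding skin. The shift amount, channel choice, and Otsu binarization are illustrative assumptions, not settings taken from the thesis.

```python
import cv2
import numpy as np

def highlight_eyeline_region(img_bgr, shift=5):
    """Shift the image a few pixels perpendicular to the (roughly horizontal)
    eye-line and subtract, so that the dark eye-line region stands out."""
    # Work on a single channel; the red channel often gives good skin/eye-line
    # contrast, but this choice is an assumption made for illustration.
    gray = img_bgr[:, :, 2].astype(np.int16)
    # Shift vertically by `shift` pixels, then repair the wrapped-around rows.
    shifted = np.roll(gray, shift, axis=0)
    shifted[:shift, :] = gray[:shift, :]
    # The absolute difference emphasizes regions that change along the shift
    # direction, i.e. the boundary of the eye-line region.
    diff = np.abs(gray - shifted).astype(np.uint8)
    # Binarize with Otsu's threshold to obtain a candidate eye-line mask.
    _, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask
```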
The stopping criterion for contour tracing is provided by the second part, end-point detection. We compare an unsupervised method (k-means) with supervised learning methods as end-point detectors. Three types of features are used: intensity-based features, local binary patterns (LBP), and Haar-like features, paired with three classifiers: the support vector machine (SVM), the nearest-neighbor rule, and cascade adaptive boosting (AdaBoost).
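As an illustration of one of the three feature types, the sketch below computes a local-binary-pattern histogram for a candidate end-point patch. The LBP parameters and the use of scikit-image are assumptions for illustration, not the thesis's exact settings.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(patch, n_points=8, radius=1):
    """Describe a grayscale patch (e.g. a candidate end-point window) by the
    normalized histogram of its uniform LBP codes."""
    codes = local_binary_pattern(patch, n_points, radius, method="uniform")
    # "Uniform" LBP with P sampling points yields P + 2 distinct code values.
    n_bins = n_points + 2
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist
```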
In the experiments, end-point detection accuracy is measured by the Euclidean distance error. The results show that the intensity-based feature combined with the SVM performs best, reaching 96% accuracy within an error tolerance of about 3.8 mm (assuming a real eye-line length of 3.5 cm). For the overall eye-line detection (including contour tracing), the Hausdorff distance is used to evaluate the similarity between the detected and the actual eye-line; the system-level results show that an acceptable accuracy is achieved within a reasonable tolerance. In addition, as a safety mechanism, this thesis uses template matching and the Hough transform for real-time detection of whether the eye is open or closed.
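The sketch below illustrates only the Hough-transform half of the open/closed safety check mentioned above: if an iris-like circle is found, the eye is assumed to be open. The blur size, radius range, and decision rule are illustrative assumptions, not the thesis's actual parameters.

```python
import cv2

def eye_is_open(gray_eye_img, min_radius=20, max_radius=80):
    """Return True if a circular, iris-like shape is detected, which suggests
    the eye is open; all parameters here are illustrative assumptions."""
    blurred = cv2.medianBlur(gray_eye_img, 5)
    circles = cv2.HoughCircles(
        blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
        param1=100, param2=30, minRadius=min_radius, maxRadius=max_radius)
    return circles is not None
```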
English Abstract:

This research develops a novel close-range eye-line detection technique as part of an automatic eye-line makeup system. Based on an inspection of the image characteristics, the proposed eye-line detection method consists of two steps: eye-line contour tracing and eye-line end-point detection.
For the eye-line contour tracing, due to the image-capturing mechanism of the device, the scale between the captured and the real eye-line is close to 1:1. This amplifies the noisy regions (e.g. wrinkles or freckles) around the eye-line and also magnifies the thickness of the eye-line, so we treat it as a region rather than a line. We therefore propose using image shifting and subtraction based on color and eye-line orientation properties, and then perform contour tracing based on pixel-intensity projection and sub-image shifting.
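A simplified sketch of this tracing step, assuming the eye-line is roughly horizontal: a fixed-size sub-window is shifted column block by column block, and within each window a projection of the pixel intensities onto the vertical axis picks the row where the dark eye-line lies. The window size, step, and darkest-row criterion are assumptions for illustration; in the thesis, tracing stops when the end-point detector fires, whereas here the loop simply runs to the image border for brevity.

```python
import numpy as np

def trace_eyeline(gray, win_w=20, win_h=40, start_col=0, start_row=None):
    """Trace a roughly horizontal dark eye-line by shifting a sub-window along
    the image and locating the darkest row via vertical intensity projection."""
    h, w = gray.shape
    row = h // 2 if start_row is None else start_row
    points = []
    for col in range(start_col, w - win_w, win_w):
        top = max(0, row - win_h // 2)
        window = gray[top:top + win_h, col:col + win_w]
        # Sum over the columns of the window to get one value per row; the
        # eye-line (dark) row has the smallest sum.
        projection = window.sum(axis=1)
        row = top + int(np.argmin(projection))
        points.append((col + win_w // 2, row))
    return points  # (x, y) samples along the traced eye-line
```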
We use an eye-line end-point detection method as the stopping criterion for contour tracing. We examine the performance of an unsupervised method (k-means) and three supervised learning methods. Three types of features are used, based on raw intensity, local binary patterns, and Haar-like features, respectively. The supervised classifiers under consideration are the support vector machine, the nearest-neighbor rule, and cascade adaptive boosting.
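Since the intensity-based feature with an SVM is reported below as the best-performing combination, here is a hedged sketch of how such an end-point classifier could be trained. The patch representation, normalization, RBF kernel, and scikit-learn usage are assumptions for illustration, not the thesis's actual implementation.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def intensity_feature(patch):
    """Flatten a fixed-size grayscale patch into a raw-intensity vector."""
    return patch.astype(np.float32).ravel()

def train_endpoint_svm(positive_patches, negative_patches):
    """Train a binary SVM separating eye-line end-point patches (label 1)
    from non-end-point patches (label 0)."""
    X = np.array([intensity_feature(p)
                  for p in positive_patches + negative_patches])
    y = np.array([1] * len(positive_patches) + [0] * len(negative_patches))
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X, y)
    return clf
```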
In the end-point detection experiment, we use the Euclidean distance error to measure accuracy. The results show that the support vector machine with the intensity-based feature achieves the best accuracy, 96%, within a 3.8 mm tolerance (assuming the real eye-line is about 3.5 cm long). For the overall performance of the system, we use the Hausdorff distance to measure the similarity between the detected eye-line and the real eye-line; the results show that an acceptable accuracy is achieved within a reasonable tolerance. In addition, we adopt a safety mechanism that detects in real time whether the eye is open or closed, using template matching and the Hough transform.
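A small sketch of the Hausdorff-distance evaluation, treating the detected and ground-truth eye-lines as point sets. The use of SciPy's directed_hausdorff and the symmetric maximum is a standard formulation assumed here for illustration.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_distance(detected_points, true_points):
    """Symmetric Hausdorff distance between two (N, 2) arrays of (x, y)
    points sampled along the detected and the ground-truth eye-line."""
    a = np.asarray(detected_points, dtype=float)
    b = np.asarray(true_points, dtype=float)
    d_ab = directed_hausdorff(a, b)[0]  # furthest detected point from truth
    d_ba = directed_hausdorff(b, a)[0]  # furthest truth point from detection
    return max(d_ab, d_ba)
```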