
Author: 黃韋中 (Huang, Wei-Chung)
Thesis title: 用於眼妝的眼線偵測 (Eyeline Detection for Automatic Eyeline Makeup)
Advisor: 張智星
Committee members: 黃仲陵, 賴尚宏
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Computer Science
Year of publication: 2012
Academic year of graduation: 100
Language: Chinese
Number of pages: 64
Keywords: eyeline detection, object detection, contour tracing


    This research develops a novel close-range eye-line detection technique as part of an automatic eye-line makeup system. Based on an inspection of the image characteristics, the proposed eye-line detection method consists of two steps: eye-line contour tracing and eye-line end-point detection.
    For eye-line contour tracing, the photo-capturing mechanism of the device makes the scale between the captured and the real eye-line close to 1:1. This amplifies noisy regions (e.g., wrinkles or freckles) around the eye-line, and it also magnifies the thickness of the eye-line, so we treat it as a region rather than a line. We therefore propose image shifting and subtraction, based on the color and orientation properties of the eye-line, to highlight the eye-line region, and then perform contour tracing based on pixel-intensity projection and sub-image shifting.
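The shift-and-subtract step can be sketched as follows. This is an illustrative numpy sketch only, not the thesis's actual implementation: the function names, the vertical shift of 3 px, and the row-wise projection inside the frame are all assumptions made for the example.

```python
import numpy as np

def highlight_eyeline(gray, shift=3):
    # Shift the grayscale image down by `shift` pixels and subtract.
    # Because the eye-line is darker than the surrounding skin, the
    # difference image responds strongly just below the line's lower
    # edge, while flat skin regions cancel out.
    g = gray.astype(np.int16)
    shifted = np.roll(g, shift, axis=0)
    return np.clip(g - shifted, 0, 255).astype(np.uint8)

def eyeline_row_in_frame(diff, x0, x1):
    # Locate the eye-line inside a frame by projecting the difference
    # image onto the vertical axis: sum columns x0..x1 for each row
    # and return the row with the strongest response.
    profile = diff[:, x0:x1].sum(axis=1)
    return int(np.argmax(profile))
```

Sliding the frame horizontally and repeating the projection would then trace the contour column by column, in the spirit of the frame-based tracing described above.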
    We use eye-line end-point detection as the stopping criterion for contour tracing, comparing an unsupervised method (k-means) with three supervised learning methods. Three types of features are used: intensity-based, local binary pattern, and Haar-like. The supervised classifiers under inspection are the support vector machine, the nearest-neighbor rule, and cascade adaptive boosting.
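Of the three features above, the local binary pattern is the easiest to sketch. The following is a minimal, hypothetical numpy implementation of the basic 8-neighbour LBP; in a pipeline like the thesis's, the normalised code histogram of a candidate window would serve as the feature vector fed to a classifier such as an SVM.

```python
import numpy as np

# Neighbour offsets (dy, dx), clockwise from the top-left pixel.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
           (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_8(gray):
    # Basic 8-neighbour LBP (radius 1): each interior pixel becomes an
    # 8-bit code whose bits mark which neighbours are at least as
    # bright as the centre pixel.
    g = gray.astype(np.int16)
    c = g[1:-1, 1:-1]                      # interior (centre) pixels
    code = np.zeros(c.shape, dtype=np.int32)
    for bit, (dy, dx) in enumerate(OFFSETS):
        nb = g[1 + dy : 1 + dy + c.shape[0], 1 + dx : 1 + dx + c.shape[1]]
        code += (nb >= c).astype(np.int32) << bit
    return code.astype(np.uint8)

def lbp_histogram(gray):
    # 256-bin normalised histogram of LBP codes: the texture descriptor.
    hist = np.bincount(lbp_8(gray).ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

The descriptor is invariant to monotonic illumination changes, which is one reason LBP is a common baseline alongside intensity-based and Haar-like features.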
    In the end-point detection experiment, we use the Euclidean distance of the detection error to measure accuracy. The results show that the support vector machine with intensity-based features achieves the best accuracy, 96%, within a tolerance of about 3.8 mm (assuming a real eye-line length of about 3.5 cm). For the overall performance of the system, we use the Hausdorff distance to measure the similarity between the detected eye-line and the real eye-line; the results show that acceptable accuracy is achieved within a reasonable tolerance. In addition, as a safety mechanism, we detect in real time whether the eye is open or closed using template matching and the Hough transform.
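The Hausdorff distance used for the overall evaluation has a compact definition: the largest distance from any point of one curve to the nearest point of the other, taken symmetrically over both directions. A small numpy sketch, assuming the detected and ground-truth eye-lines are sampled as N×2 arrays of (x, y) points (an assumed representation, not the thesis's code):

```python
import numpy as np

def hausdorff(a, b):
    # a: (N, 2) points sampled from the detected eye-line
    # b: (M, 2) points sampled from the ground-truth eye-line
    # Pairwise Euclidean distances via broadcasting -> (N, M) matrix.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    # For each point, the distance to the nearest point of the other
    # set; the symmetric Hausdorff distance is the worst such mismatch
    # in either direction.
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

A small Hausdorff distance means every point of each curve lies close to the other curve, which is why it suits comparing a traced contour against a hand-labelled one.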

    Abstract (Chinese)
    Abstract (English)
    Acknowledgements
    Table of Contents
    List of Tables
    List of Figures
    List of Charts
    Chapter 1: Introduction
      1.1 Research Topic
        1.1.1 Research Overview
      1.2 Related Work
        1.2.1 Image Preprocessing
        1.2.2 Edge Detection
        1.2.3 Hough Transform
        1.2.4 Gabor Filter
        1.2.5 Template Matching
        1.2.6 Region Growing
      1.3 Chapter Overview
    Chapter 2: Methodology
      2.1 System Architecture
        2.1.1 Description of Related Work
      2.2 Eye-line Detection
        2.2.1 Image Shifting and Subtraction
        2.2.2 Region Reconstruction and Growing
        2.2.3 Frame-based Contour Tracing
      2.3 End-point Detection Based on Unsupervised Learning
        2.3.1 Clustering: K-means
        2.3.2 Using K-means to Determine Contour-Tracing Boundaries (Inner and Outer Eye Corners)
      2.4 Object Detection
        2.4.1 Intensity-based Raw-Image Features
        2.4.2 Haar-like Features
        2.4.3 Local Binary Pattern (LBP)
        2.4.4 Support Vector Machine (SVM)
        2.4.5 Adaptive Boosting (AdaBoost)
        2.4.6 Cascade AdaBoost
        2.4.7 Nearest Neighbor Rule (NNR)
      2.5 Eye-line Position Detection
      2.6 Object Detection with a Sliding Window
      2.7 Open/Closed-Eye Detection as a Makeup Safety Mechanism
    Chapter 3: Results and Analysis
      3.1 Eye-line Detection Dataset
        3.1.2 Training Data for Object Detection
        3.1.3 Test Data for Object Detection
      3.2 Recognition Results for Eye-line End Points
        3.2.1 Evaluation Method for Eye-line End Points
        3.2.2 Feature Comparison and Parameter Settings: Intensity-based (Itn)
        3.2.3 Feature Comparison and Parameter Settings: Local Binary Pattern (LBP)
        3.2.4 Feature Comparison and Parameter Settings: Haar-like
        3.2.5 Recognition-Rate Analysis of Feature Combinations
        3.2.6 Intensity-based Features with SVM
        3.2.7 Error Analysis and Discussion of Intensity-based Features with SVM
        3.2.8 End-point Detection Analysis with K-means
        3.2.9 Supervised vs. Unsupervised Learning for End-point Detection
      3.3 Final Performance Evaluation of Eye-line Detection
        3.3.1 Hausdorff Distance
        3.3.2 Influence of End-point Detection on Contour Tracing
      3.4 Discussion and Error Analysis
        3.4.2 Comparison of Detection Results
    Chapter 4: Conclusions and Suggestions
      4.1 Conclusions
      4.2 Future Research Directions
    References


    Full-text release date: not authorized for public release (campus network)
    Full-text release date: not authorized for public release (off-campus network)
