
Graduate Student: Yen-Chang Tu (涂晏彰)
Thesis Title: Facial Feature Extraction under Different Lighting Conditions (在不同光影環境下的臉部特徵萃取)
Advisor: Yung-Chang Chen (陳永昌)
Oral Defense Committee:
Degree: Master
Department: College of Electrical Engineering and Computer Science, Department of Electrical Engineering
Year of Publication: 2006
Academic Year of Graduation: 94
Language: English
Pages: 47
Chinese Keywords: 臉部特徵, 特徵萃取, 邊緣偵測
English Keywords: facial feature, feature extraction, edge detection
  • A virtual video conferencing system allows users in different places to communicate with one another, and joining one is not difficult: all a participant needs is a camera that can be connected to a computer and a computer with Internet access. Most applications involving automatic face detection and face recognition are affected to some degree by varying lighting conditions and background imagery. In a virtual conferencing system, face detection and expression analysis are essential components, and they too are affected by the lighting environment. A user of such a system cannot easily change the lighting conditions of the surroundings, and automatic facial feature detection performed under poor lighting yields correspondingly poor results. We therefore propose a method whose goal is correct facial feature extraction under different lighting conditions.
    First, we locate the desired facial feature blocks using a projection-based method, a simple and fast way to find the position of each feature block in the image. After these blocks are located, we apply the Canny edge detector to each block to obtain a corresponding binarized edge image. Finally, we analyze the information in the binary edge images produced by the two preceding steps to determine the positions of the representative facial feature points we need.
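The projection-based localization described above can be sketched as follows. The thesis builds gray-value "reliefs" by projecting pixel intensities onto one axis; the specific threshold rule below (a cut between the relief's minimum and mean) is an illustrative assumption, not the author's exact criterion.

```python
import numpy as np

def feature_block_rows(gray, frac=0.5):
    """Projection-based block localization sketch: sum the gray values
    of each row into a 1-D relief, then keep the rows whose relief dips
    toward the minimum (dark features such as eyes or mouth project as
    valleys).  `frac` controls how deep a valley must be; it is an
    illustrative parameter, not taken from the thesis."""
    relief = gray.astype(float).sum(axis=1)      # one value per row
    cut = relief.min() + frac * (relief.mean() - relief.min())
    rows = np.flatnonzero(relief < cut)          # rows darker than the cut
    return (rows.min(), rows.max()) if rows.size else None
```

To localize a block horizontally, the same relief is computed over columns (`sum(axis=0)`); combining a row range and a column range yields a feature block.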


    Most facial animation applications, such as face detection and face recognition, are sensitive to lighting conditions and the background environment. Head tracking and expression analysis are important in a virtual conferencing system and are likewise sensitive to lighting conditions.
    A virtual conferencing system enables people in different places to communicate, and it is convenient to join one: all a participant needs is a camera and a computer with an Internet connection. However, several key techniques in such a system are sensitive to lighting conditions, and a participant cannot easily control the lighting of the environment, so poor lighting leads to poor results. Good performance on expression analysis requires better extraction of facial features, and better facial feature extraction is the goal of our approach.
    The first step is to extract the desired facial feature blocks, which we do with a projection-based method. After the blocks are extracted, a Canny edge detector is applied to each block, yielding a binary edge image per block. For each block we then extract points that are meaningful to the feature, determining the positions of the feature points from the information in those binary edge images.
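The edge-detection and feature-point steps can be illustrated with the sketch below. A full Canny detector adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding; the simple gradient-magnitude threshold here is a stand-in that only shows the data flow from block to binary edge image to feature points. The corner-picking rule is likewise an illustrative assumption, not the thesis's own analysis.

```python
import numpy as np

def binary_edge_image(block, thresh=30.0):
    """Stand-in for the Canny stage: mark pixels whose gradient
    magnitude exceeds a fixed threshold.  (Canny additionally smooths,
    thins with non-maximum suppression, and links with hysteresis.)"""
    gy, gx = np.gradient(block.astype(float))   # per-axis derivatives
    return np.hypot(gx, gy) > thresh

def corner_feature_points(edges):
    """Illustrative feature-point rule: return the leftmost and
    rightmost edge pixels of the block (e.g. candidate eye or mouth
    corners), each as a (row, col) pair."""
    ys, xs = np.nonzero(edges)
    left = (int(ys[xs.argmin()]), int(xs.min()))
    right = (int(ys[xs.argmax()]), int(xs.max()))
    return left, right
```

In use, each extracted feature block would be passed through `binary_edge_image`, and the resulting binary image analyzed to select representative points.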

    Abstract  i
    Table of Contents  ii
    List of Figures  iv
    List of Tables  v
    Chapter 1: Introduction  1
      1.1 Background  1
      1.2 Motivation  1
      1.3 Related works  2
      1.4 Thesis organization  3
    Chapter 2: System Overview  4
      2.1 Virtual conferencing system  4
      2.2 Overview of our approach  6
    Chapter 3: Face Detection under Different Lighting Conditions  8
      3.1 Adaptive skin color model  8
        3.1.1 Skin color model  8
        3.1.2 Adaptability of skin color model  10
      3.2 Face detection  14
        3.2.1 Overall scanning and skin mask  14
        3.2.2 Face verification  14
    Chapter 4: Facial Feature Extraction  16
      4.1 Feature block extraction  17
        4.1.1 Gray value reliefs  17
        4.1.2 Feature block extraction based on gray value relief  18
      4.2 Edge detection  23
        4.2.1 Canny edge detector  24
        4.2.2 Edge detection under different lighting conditions  30
        4.2.3 Fine tuning algorithm  31
      4.3 Determining the positions of feature points  33
      4.4 Experimental results  36
    Chapter 5: FAP Mapping  39
      5.1 MPEG-4 facial animation  39
        5.1.1 Facial animation parameter (FAP)  39
        5.1.2 FAP interpolation table  40
        5.1.3 I.S.T model  41
      5.2 FAP mapping  41
      5.3 Discussions  44
    Chapter 6: Conclusions and Future Works  45
    References  46

    [1] V. Berzins, "Accuracy of Laplacian Edge Detectors," Computer Vision, Graphics, and Image Processing, vol. 27, pp. 195-210, 1984.

    [2] R. A. Schowengerdt, Remote Sensing: Models and Methods for Image Processing, 1997.

    [3] J. F. Canny, "A computational approach to edge detection," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 8, pp. 679-698, 1986.

    [4] W. T. Chang, "Fast Multiple Head Pose Estimation under Different Lighting Conditions," Master Thesis, National Tsing Hua University, Jun. 2002.

    [5] A. Yuille, P. Hallinan, and D. Cohen, "Feature Extraction from Faces Using Deformable Templates," International Journal of Computer Vision, vol. 8, no. 2, pp. 99-111, Aug. 1992.

    [6] C. Garcia and G. Tziritas, "Face Detection Using Quantized Skin Color Regions Merging and Wavelet Packet Analysis," IEEE Trans. on Multimedia, vol. 1, no. 3, pp. 264-277, Sept. 1999.

    [7] G. A. Abrantes and F. Pereira, "MPEG-4 Facial Animation Technology: Survey, Implementation, and Results," IEEE Trans. on CSVT, vol. 9, no. 2, pp. 290-305, Mar. 1999.

    [8] T. Cootes, "An introduction to active shape models," in Image Processing and Analysis, pp. 223-248, Oxford Univ. Press, 2000.

    [9] T. Cootes, G. J. Edwards, and C. J. Taylor, "Active appearance models," in Proc. ECCV, vol. 2, pp. 484-498, 1998.

    [10] S. Baskan, M. M. Bulut, and V. Atalay, "Projection based method for segmentation of human face and its evaluation," Pattern Recognition Letters, vol. 23, no. 14, pp. 1623-1629, Dec. 2002.

    [11] "Text of ISO/IEC FDIS 14496-2: Visual," ISO/IEC JTC1/SC29/WG11 N2502, Atlantic City MPEG Meeting, Oct. 1998.

    [12] P. Eisert, T. Wiegand, and B. Girod, "Model-Aided Coding: A New Approach to Incorporate Facial Animation into Motion-Compensated Video Coding," IEEE Trans. CSVT, Special Issue on 3D Video Technology, pp. 1-15, 1999.

    [13] J. C. Chou, "Feature Point Tracking of Human Face and Facial Expression Analysis," Master Thesis, National Tsing Hua University, Jun. 2002.

    [14] A. M. Martinez and R. Benavente, "The AR Face Database," CVC Technical Report #24.

    Full-text release date: not authorized for public access (campus network)
    Full-text release date: not authorized for public access (off-campus network)