
Author: Chien-Chia Chien (簡千佳)
Title: Facial Expression Analysis under Various Head Poses
Advisor: Yung-Chang Chen (陳永昌)
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Year of publication: 2002
Graduating academic year: 90 (ROC calendar)
Language: English
Pages: 57
Keywords: facial expression analysis, feature point tracking, virtual conferencing, head pose estimation

    In a model-based virtual conferencing system, the facial expressions on human faces are the major focus of attention for all users. Many facial expression analysis algorithms for frontal faces have been proposed in the literature. For practical use, we develop a facial expression analysis method that allows users to rotate their heads freely during communication. Furthermore, to support our facial expression analysis method, we propose a pose refinement algorithm based on an error classification approach.
    In our expression analysis method, we transform facial images under different head poses into synthetic frontal facial images, called stabilized views, and track facial feature points in these frontal images. We obtain the initial locations of the feature points with the assistance of a user-customized 3D facial model, and then track their movement using cues such as shape, intensity, temporal correlation, lip color, and lip texture. When the head rotation is so large that some feature points are hidden from view, we adopt a symmetry assumption to estimate the locations of these hidden feature points. Finally, we translate the feature-point tracking results into Facial Animation Parameters (FAPs) that control the animation of the talking head at the client terminal in the virtual conference.
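    The thesis itself contains no code; as a rough illustration of the symmetry assumption described above, the following sketch mirrors a visible feature point across an assumed facial midline to estimate the position of its occluded counterpart. The 2D setup and function names are hypothetical, chosen only for illustration:

```python
import numpy as np

def mirror_point(p, axis_point, axis_dir):
    """Reflect 2D point p across the line through axis_point with
    direction axis_dir (the assumed facial symmetry axis)."""
    d = axis_dir / np.linalg.norm(axis_dir)
    # Project p onto the symmetry axis, then reflect across it.
    foot = axis_point + d * np.dot(p - axis_point, d)
    return 2.0 * foot - p

# Example: a left eye corner is occluded by a large yaw rotation;
# estimate it by mirroring the visible right eye corner across a
# vertical midline passing through the origin.
right_corner = np.array([2.0, 1.0])
estimated_left = mirror_point(right_corner,
                              np.array([0.0, 0.0]),   # point on midline
                              np.array([0.0, 1.0]))   # midline direction
```

    In the stabilized (frontal) view, the midline can be obtained from the fitted 3D facial model, so a simple reflection of this kind yields a plausible placeholder position until the occluded point becomes visible again.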

    To analyze facial expressions under different head poses, we need accurate pose information. In this thesis, we also propose a pose refinement algorithm that can rapidly refine the head pose after coarse pose estimation. We adopt the Fisherface classification method to classify pose error in a 2D difference image, and two classification schemes, both using the Fisherface method, are designed in our system. The first, pose verification, checks whether the estimated head pose is correct. If the pose is not correct, the second, error type classification, determines what kind of pose error has occurred, and the erroneous pose is then corrected. Pose verification and error type classification are applied iteratively to the difference image until the correct head pose is obtained.
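    The Fisherface method used here is, at its core, Fisher's linear discriminant applied to vectorized (typically PCA-reduced) images. As a minimal two-class sketch on synthetic stand-ins for "pose correct" versus "pose error" difference images (all names, dimensions, and data are hypothetical, not the thesis's actual setup), the discriminant direction and a nearest-projected-mean classifier could look like:

```python
import numpy as np

def fisher_direction(X1, X2):
    """Two-class Fisher linear discriminant on row-vectorized images.
    Returns the unit direction maximizing between-class separation
    relative to within-class scatter."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # Within-class scatter matrix.
    Sw = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)
    # Small regularizer: Sw is often singular for raw image data,
    # which is why Fisherface first reduces dimensionality with PCA.
    Sw += 1e-6 * np.eye(Sw.shape[0])
    w = np.linalg.solve(Sw, m1 - m2)
    return w / np.linalg.norm(w)

def classify(x, w, m1, m2):
    """Assign x to the class whose projected mean is closer."""
    p = x @ w
    return 1 if abs(p - m1 @ w) <= abs(p - m2 @ w) else 2

# Synthetic data: two clusters of 8-dimensional "difference images",
# separated along the first dimension.
rng = np.random.default_rng(0)
X1 = rng.normal(0.0, 0.1, (50, 8)); X1[:, 0] += 1.0
X2 = rng.normal(0.0, 0.1, (50, 8)); X2[:, 0] -= 1.0
w = fisher_direction(X1, X2)
label = classify(np.array([0.9, 0, 0, 0, 0, 0, 0, 0]), w,
                 X1.mean(axis=0), X2.mean(axis=0))
```

    Since the thesis alternates two such classifiers (pose verification, then error type classification), the same discriminant machinery can be reused with different training classes for each stage.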

    Abstract
    Table of Contents
    Chapter 1: Introduction
        1.1 Model-based Coding and Virtual Conferencing
        1.2 Overview of Our Work
        1.3 Related Work
        1.4 Thesis Organization
    Chapter 2: Expression Analysis from the Frontal View of the Face (Previous Work)
        2.1 Real-time Facial Feature Point Tracking
            2.1.1 Registration Phase: Feature Extraction
            2.1.2 Tracking Phase: Feature Tracking
        2.2 Mesh-based Facial Feature Point Tracking
            2.2.1 Mesh-based Mouth Model
            2.2.2 Coarse-to-fine Mesh-based Tracking
        2.3 FAP Mapping
        2.4 Limitations
    Chapter 3: Head Pose Estimation and Pose Refinement
        3.1 Robust Head Pose Estimation
            3.1.1 Facial Model Adaptation
            3.1.2 Head Pose Tracking
        3.2 Stabilized View Generation
        3.3 Pose Estimation Refinement
            3.3.1 Motivation
            3.3.2 Observation of the Estimated Pose Error
            3.3.3 Fisherface Method
            3.3.4 Experimental Design, Results, and Discussion
                (a) Experimental Data
                (b) Classification Method
                (c) Pose Verification Problem
                (d) Error Type Classification Problem
                (e) Practical Procedures for Pose Refinement
    Chapter 4: Facial Expression Analysis
        4.1 Feature Tracking under Small Rotation Angles
        4.2 Feature Tracking under Large Rotation Angles
        4.3 Experimental Results and Discussion
            (a) Yaw Rotation
            (b) Pitch Rotation
            (c) Roll Rotation
            4.3.1 Error Analysis
            4.3.2 Computational Complexity
    Chapter 5: Discussion and Future Work
    References

    [1] Y.-L. Tian, T. Kanade, and J. F. Cohn, “Recognizing Action Units for Facial Expression Analysis,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 23, No. 2, February 2001.
    [2] A. Yuille, P. Hallinan, and D. Cohen, “Feature Extraction from Faces Using Deformable Templates,” International Journal of Computer Vision, Vol. 8, No. 2, pp. 99-111, August 1992.
    [3] L. Zhang, “Estimation of the Mouth Features Using Deformable Templates,” IEEE Int. Conf. Image Processing, Vol. III, pp. 328-331, Santa Barbara, CA, October 1997.
    [4] P. L. Rudianto and K. N. Ngan, “Automatic 3D Wireframe Model Fitting to Frontal Facial Image in Model-based Video Coding,” Picture Coding Symposium (PCS’96), pp. 585-588, Melbourne, Australia, March 1996.
    [5] N. Sarris and M. G. Strintzis, “Constructing a Videophone for the Hearing Impaired Using MPEG-4 Tools,” IEEE MultiMedia, July-Sept. 2001.
    [6] S. Valente and J.-L. Dugelay, “Face Tracking and Realistic Animations for Telecommunicant Clones,” IEEE MultiMedia, Vol. 7, No. 1, pp. 34-43, Jan.-Mar. 2000.
    [7] F. Pighin, R. Szeliski, and D. Salesin, “Resynthesizing Facial Animation through 3D Model-Based Tracking,” Proc. Int’l Conf. Computer Vision, pp. 143-150, Los Alamitos, CA, 1999.
    [8] J. Ahlberg, “Using the Active Appearance Algorithm for Face and Facial Feature Tracking,” Proc. 2001 IEEE ICCV Workshop on Recognition, Analysis, and Tracking of Faces and Gestures in Real-Time Systems, 2001.
    [9] J.-C. Chou, “Feature Point Tracking of Human Face and Facial Expression Analysis,” Master’s thesis, National Tsing Hua University, June 2000.
    [10] Y.-J. Chang and Y.-C. Chen, “Robust Head Pose Estimation Using Textured Polygonal Model with Local Correlation Measure,” Proc. Second IEEE Pacific-Rim Conference on Multimedia (IEEE-PCM2001), pp. 245-252, Beijing, China, Oct. 24-26, 2001.
    [11] M. Kampmann and R. Farhoud, “Precise Face Model Adaptation for Semantic Coding of Video Sequences,” Picture Coding Symposium (PCS’97), 1997.
    [12] F. Pighin, J. Hecker, D. Lischinski, R. Szeliski, and D. H. Salesin, “Synthesizing Realistic Facial Expressions from Photographs,” SIGGRAPH ’98, Orlando, Florida, July 1998.
    [13] A. Schödl, A. Haro, and I. Essa, “Head Tracking Using a Textured Polygonal Model,” Workshop on Perceptual User Interfaces, Nov. 1998.
    [14] B. K. P. Horn, Robot Vision, Cambridge, MA: MIT Press, 1986.
    [15] D. F. Rogers and J. A. Adams, Mathematical Elements for Computer Graphics, New York, NY: McGraw-Hill, 1976.
    [16] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, “Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 19, No. 7, pp. 721-732, 1997.
    [17] R. A. Fisher, “The Use of Multiple Measurements in Taxonomic Problems,” Annals of Eugenics, Vol. 7, pp. 179-188, 1936.
    [18] Y.-J. Chang and Y.-C. Chen, “Textured Polygonal Model Assisted Facial Model Estimation from Image Sequence,” Proc. 2001 IEEE International Conference on Image Processing (ICIP-2001), Vol. 3, pp. 106-109, Thessaloniki, Greece, Oct. 7-10, 2001.

    Full text not authorized for public release (campus network, off-campus network, and National Digital Library of Theses and Dissertations in Taiwan).