
Graduate Student: Yu-Kuen Wu (吳育昆)
Thesis Title: Facial Expression Recognition Based on Supervised LLE Analysis of Optical Flow and Ratio Image
利用光流場及倍率影像之監督式LLE分析應用至臉部表情辨識
Advisor: Shang-Hong Lai (賴尚宏)
Oral Defense Committee:
Degree: Master
Department: Department of Computer Science, College of Electrical Engineering and Computer Science
Publication Year: 2006
Graduation Academic Year: 94 (2005-2006)
Language: English
Number of Pages: 50
Keywords: expression recognition, optical flow, ratio image


    In this thesis, we propose a new facial expression recognition algorithm based on supervised locally linear embedding (SLLE) analysis of the optical flow and ratio image. In this algorithm, we first extract the face region from the input image to remove the effect of global head motion. Second, we compute the optical flow and ratio image between the neutral and expression images, and then apply SLLE to extract low-dimensional discriminative features from the expression motion and brightness variation. Third, we compute distances between the low-dimensional feature vectors to recognize the facial expression. Finally, we combine the optical flow and ratio image features to further improve the expression classification. Experimental results on the JAFFE face database show that the proposed algorithm outperforms previous facial expression recognition methods. We also tested the recognition system trained on the JAFFE database with the Yale Face database, where it still achieved a good expression recognition rate. This cross-database result is comparable to testing on the JAFFE database itself, showing that the system performs well not only on the JAFFE database but also on the Yale Face database.
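    As a rough illustration of the pipeline described above, the following Python sketch extracts optical-flow and ratio-image features from a neutral/expression image pair, embeds them in a low-dimensional space, and classifies by minimum class-mean distance. It is a minimal sketch under several assumptions, not the author's implementation: the face crops are assumed to be already normalized and aligned as in the thesis, OpenCV's Farnebäck optical flow and scikit-learn's unsupervised LLE stand in for the brightness-robust flow and the supervised LLE (SLLE) used in the thesis, and all helper names are hypothetical.

    import numpy as np
    import cv2
    from sklearn.manifold import LocallyLinearEmbedding

    def flow_and_ratio_features(neutral, expression, eps=1.0):
        """Feature vector for one aligned neutral/expression grayscale (uint8) pair."""
        # Dense optical flow from the neutral face to the expression face
        # (Farneback flow is a stand-in for the flow method used in the thesis).
        flow = cv2.calcOpticalFlowFarneback(neutral, expression, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        # Ratio image: per-pixel brightness change relative to the neutral face;
        # eps avoids division by zero in dark regions (a sketch-level choice).
        ratio = (expression.astype(np.float64) + eps) / (neutral.astype(np.float64) + eps)
        return np.concatenate([flow.ravel(), ratio.ravel()])

    def fit_embedding(train_features, n_components=10, n_neighbors=12):
        """Embed the training features (n_samples x n_features) in a low-dimensional space."""
        lle = LocallyLinearEmbedding(n_neighbors=n_neighbors, n_components=n_components)
        return lle, lle.fit_transform(train_features)

    def class_means(embedded, labels):
        """Mean embedded vector for each expression class."""
        labels = np.asarray(labels)
        return {c: embedded[labels == c].mean(axis=0) for c in np.unique(labels)}

    def classify(lle, means, feature):
        """Minimum class-mean distance rule in the embedded space."""
        z = lle.transform(feature[None, :])[0]
        return min(means, key=lambda c: np.linalg.norm(z - means[c]))

    The thesis's alternative k-nearest-neighbor rule would replace the class-mean lookup in classify with a vote over the k closest training embeddings, and the flow and ratio-image features could also be embedded separately and their distances combined, mirroring the feature-integration step.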

    Contents
    List of Figures
    List of Tables
    1 Introduction
      1.1 Motivation
      1.2 Thesis Organization
    2 Review of Dimension Reduction
      2.1 Principal Component Analysis
      2.2 Locally Linear Embedding
        2.2.1 Determine neighbors
        2.2.2 Reconstruct with linear combination
        2.2.3 Map to embedded coordinates
      2.3 Supervised Locally Linear Embedding
    3 Review of Facial Expression Recognition
      3.1 Image-based Approach
        3.1.1 Principal Component Analysis
        3.1.2 Supervised Locally Linear Embedding
      3.2 Motion-based Approach
    4 Proposed Facial Expression Recognition Algorithm
      4.1 Overview
        4.1.1 Training stage
        4.1.2 Testing stage
      4.2 Normalization
      4.3 Extraction of Optical Flow and Ratio Image Features
      4.4 Classification
        4.4.1 Minimum class-mean distance
        4.4.2 k-nearest-neighbor classification
      4.5 Feature Integration
        4.5.1 Minimum class-mean distance
        4.5.2 k-nearest-neighbor classification
    5 Experimental Results
      5.1 Comparison with PCA, LLE, SLLE
        5.1.1 Single-subject testing
        5.1.2 Multiple-subject testing
      5.2 Using optical flow and ratio image for expression recognition by SLLE
        5.2.1 Dimensionality selection
        5.2.2 Neighbor K selection
        5.2.3 Comparison with Support Vector Machine
      5.3 Experiments on using a universal neutral face image
      5.4 Yale database
        5.4.1 Dimensionality selection
        5.4.2 Neighbor K selection
    6 Conclusion
    Bibliography


    Full-text release date: Not authorized for public access (campus network)
    Full-text release date: Not authorized for public access (off-campus network)
