
Student: 余儒育
Thesis Title: Frame Based Audio to Visual Conversion Using Line Spectrum Pair (利用音框線頻譜對之多使用者音訊視訊轉換)
Advisor: Yung-Chang Chen (陳永昌)
Degree: Master
Department: College of Electrical Engineering and Computer Science, Department of Electrical Engineering
Year of Publication: 2001
Graduation Academic Year: 89
Language: English
Pages: 44
Keywords: audio to visual conversion, frame based, LSP, model adaptation, GMM
Multimedia is often used as a medium for conveying information, offering users a richer and more vivid experience. Multimedia is not merely a combination of data of various types; it also integrates the interactions among those types. Among the different data types, video and audio are the most important, because they carry the richest perceptual information.

Human speech has both acoustic and visual modalities, in production as well as in perception. Because audio and video are highly correlated, visual parameters can be estimated from audio parameters. In this thesis, we propose a frame-based audio-to-visual conversion system built on a Gaussian mixture model (GMM). Through the GMM, visual parameters are estimated from the corresponding audio features.
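The GMM-based estimation described in the abstract is, in its generic form, a minimum mean-square-error regression: a joint GMM over concatenated audio/visual vectors yields, for each mixture, a linear predictor of the visual part from the audio part, blended by the mixture posteriors. The sketch below illustrates that idea only; the dimensions, mixture parameters, and input are toy values, not the thesis's actual model or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy joint GMM over concatenated [audio; visual] vectors.
# Dimensions and component values here are illustrative, not from the thesis.
DA, DV, K = 2, 1, 3                      # audio dim, visual dim, mixtures

means = rng.normal(size=(K, DA + DV))    # joint means [mu_a | mu_v]
covs = np.stack([np.eye(DA + DV) + 0.1 * np.outer(v, v)
                 for v in rng.normal(size=(K, DA + DV))])
weights = np.full(K, 1.0 / K)

def estimate_visual(a):
    """MMSE estimate E[v | a] under the joint GMM: each mixture contributes a
    linear regression of v on a, weighted by the mixture posterior p(i | a)."""
    post = np.empty(K)
    cond = np.empty((K, DV))
    for i in range(K):
        mu_a, mu_v = means[i, :DA], means[i, DA:]
        Saa = covs[i, :DA, :DA]          # audio-audio covariance block
        Sva = covs[i, DA:, :DA]          # visual-audio cross-covariance block
        diff = a - mu_a
        inv = np.linalg.inv(Saa)
        # Audio marginal likelihood of mixture i (for the posterior weight)
        norm = 1.0 / np.sqrt((2 * np.pi) ** DA * np.linalg.det(Saa))
        post[i] = weights[i] * norm * np.exp(-0.5 * diff @ inv @ diff)
        # Conditional mean of v given a under mixture i
        cond[i] = mu_v + Sva @ inv @ diff
    post /= post.sum()
    return post @ cond                   # posterior-weighted blend

v_hat = estimate_visual(np.zeros(DA))
print(v_hat.shape)                       # one visual vector per audio frame
```

In a frame-based system of this kind, this estimator runs once per audio frame, so the visual trajectory is produced at the audio frame rate.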

Many kinds of speech features have been developed for speech signal processing. The accuracy of visual parameter estimation is nearly the same regardless of which speech feature is used. Here we adopt the line spectrum pair (LSP) as the speech feature, because LSPs are produced automatically during speech coding. Using LSPs also lets the system integrate with LSP-based speech coding standards such as G.723.1 and MPEG-4 CELP and HVXC. In addition, we extend the basic unit from a single frame to multiple frames in an attempt to handle coarticulation.
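For illustration, LSP frequencies are obtained from the LPC polynomial A(z) by forming the symmetric and antisymmetric polynomials P(z) = A(z) + z^-(p+1) A(z^-1) and Q(z) = A(z) - z^-(p+1) A(z^-1); for a stable predictor their roots lie on the unit circle and their angles interleave. A minimal numpy sketch, with an illustrative predictor (not coefficients from the thesis):

```python
import numpy as np

def lpc_to_lsp(a):
    """Convert LPC coefficients a = [1, a1, ..., ap] for
    A(z) = 1 + a1 z^-1 + ... + ap z^-p into LSP frequencies (radians)."""
    # Symmetric and antisymmetric polynomials of degree p+1
    P = np.concatenate([a, [0.0]]) + np.concatenate([[0.0], a[::-1]])
    Q = np.concatenate([a, [0.0]]) - np.concatenate([[0.0], a[::-1]])
    roots = np.concatenate([np.roots(P), np.roots(Q)])
    # Keep one angle per conjugate pair; drop the trivial roots at z = +/-1
    ang = np.angle(roots)
    return np.sort(ang[(ang > 1e-6) & (ang < np.pi - 1e-6)])

# Illustrative stable 4th-order predictor: two resonances at 0.5 and 1.5 rad
# with pole radius 0.9 (chosen for the example, not taken from the thesis).
a = np.convolve([1.0, -2 * 0.9 * np.cos(0.5), 0.81],
                [1.0, -2 * 0.9 * np.cos(1.5), 0.81])
lsp = lpc_to_lsp(a)
print(lsp)  # p ascending frequencies in (0, pi)
```

Because LSP frequencies are bounded, ordered, and quantize well, LSP-based coders such as G.723.1 transmit them directly, which is why they are available essentially for free on the decoder side of an integrated system.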

Audio and visual features vary from person to person. A GMM trained for a specific user is inconvenient in practice, because its estimation accuracy degrades when the model is applied to other users. We therefore propose four model adaptation algorithms that aim to forge a suitable GMM for a new user at low computational cost. With this adaptation, the applicability of the conversion system is greatly extended.
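The four adaptation algorithms themselves are detailed in Chapter 4. As a generic illustration of the idea, the sketch below fits a single MLLR-style affine transform of the mixture means to a small amount of adaptation data from a new user. The simulated data, hard mixture assignments, and single global transform are simplifying assumptions for the example, not the thesis's exact algorithms.

```python
import numpy as np

rng = np.random.default_rng(1)

# Seed model: mixture means trained on a reference speaker (toy values).
D, K = 2, 4
mu = rng.normal(size=(K, D))

# Adaptation data from a "new user": simulated here as the reference means
# pushed through an unknown affine map plus a little noise.
true_A = np.array([[1.1, 0.0], [0.05, 0.9]])
true_b = np.array([0.3, -0.2])
frames = np.vstack([mu[k] @ true_A.T + true_b + 0.01 * rng.normal(size=D)
                    for k in range(K) for _ in range(20)])
labels = np.repeat(np.arange(K), 20)   # hard mixture assignments (simplification)

# MLLR-style global transform: find W = [A | b] minimizing
#   sum_t || x_t - (A mu_{k(t)} + b) ||^2
# shared across all mixtures, so a few frames adapt every mean at once.
X = np.hstack([mu[labels], np.ones((len(labels), 1))])   # extended means
W, *_ = np.linalg.lstsq(X, frames, rcond=None)
mu_adapted = np.hstack([mu, np.ones((K, 1))]) @ W

print(np.abs(mu_adapted - (mu @ true_A.T + true_b)).max())  # small residual
```

Because the transform is shared, its cost grows with the feature dimension rather than the number of mixtures, which is what makes adaptation cheaper than retraining the GMM for each new user.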


Table of Contents
List of Figures
Chapter 1. Introduction
  1.1 Audio to Visual Conversion and Previous Work
  1.2 Thesis Organization
Chapter 2. Background
  2.1 Speech Production Model
    2.1.1 Multitube Lossless Model of Vocal Tract
    2.1.2 The All-Pole Model
  2.2 Linear Prediction Analysis
  2.3 Audio Features
    2.3.1 Line Spectrum Pair
    2.3.2 Cepstrum
  2.4 LSP in Speech Coders
Chapter 3. Single-User Audio to Visual Conversion and Feature Selection
  3.1 Framework
  3.2 Database
  3.3 Gaussian Mixture Model
    3.3.1 Gaussian Mixture Model
    3.3.2 Estimation of Visual Parameters from Gaussian Mixture Model
  3.4 Experimental Results and Error Analysis
    3.4.1 Audio Feature Selection
    3.4.2 Module Number
    3.4.3 Single Frame and Multi-Frames
Chapter 4. Multi-User Audio-Visual Conversion
  4.1 Model Adaptation with Audio and Visual
  4.2 Audio-Only Adaptation
    4.2.1 Audio to Audio Conversion
    4.2.2 Normalization and Adaptation of Audio Features
      4.2.2.1 Cepstral Mean Normalization
      4.2.2.2 Semblable MLLR
Chapter 5. Conclusion and Future Work
References

