
Author: Wu, Shang-Hung (吳尚鴻)
Title: Chinese text-to-speech and roar emotion conversion based on hidden Markov model
(基於隱藏式馬可夫模型之中文語音合成與吼叫情緒轉換)
Advisor: Wang, Hsiao-Chuan (王小川)
Committee members: (not listed)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Electrical Engineering
Year of publication: 2010
Academic year of graduation: 98 (2009-2010)
Language: Chinese
Pages: 66
Chinese keywords: 語音合成 (speech synthesis), 情緒轉換 (emotion conversion), 隱藏式馬可夫模型 (hidden Markov model), 文字轉語音 (text-to-speech)
English keywords: emotion conversion, speech synthesis, HMM
    A Mandarin HMM-based speech synthesis system can synthesize fairly natural speech from only a small corpus. Because the speech is represented parametrically, its characteristics can be converted to those of an arbitrary target speaker or speaking style.

    Model adaptation is a technique originally developed to improve the recognition rate of automatic speech recognition (ASR) systems. By applying model adaptation to the synthesis system, we can modify the source speaker's model parameters so as to mimic the speaking characteristics of an arbitrary target speaker. This thesis uses the CSMAPLR (Constrained Structural Maximum A Posteriori Linear Regression) algorithm for model adaptation: the source model is trained on a newspaper-reading corpus and adapted to a roar speaking style.

    Subjective and objective tests show that the synthetic speech is close to the target speaker's characteristics and can mimic the target emotion.
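
    The adaptation step can be illustrated with a minimal sketch of the constrained linear transform at the heart of CMLLR/CSMAPLR: the same affine transform (A, b) is applied to both the mean and the covariance of every Gaussian in the source model. All numeric values below are invented for illustration, and the structural MAP estimation of the transform itself is omitted:

    ```python
    import numpy as np

    def apply_constrained_transform(mean, cov, A, b):
        """Map a source Gaussian (mean, cov) toward the target speaker
        with a constrained (CMLLR-style) affine transform."""
        adapted_mean = A @ mean + b      # mu'    = A mu + b
        adapted_cov = A @ cov @ A.T      # Sigma' = A Sigma A^T
        return adapted_mean, adapted_cov

    # One 2-D Gaussian from a hypothetical source (newspaper-reading) model.
    mean = np.array([1.0, -0.5])
    cov = np.array([[0.5, 0.1],
                    [0.1, 0.3]])

    # A hypothetical transform estimated from roar-style adaptation data.
    A = np.array([[1.2, 0.0],
                  [0.0, 0.8]])
    b = np.array([0.3, -0.1])

    m2, c2 = apply_constrained_transform(mean, cov, A, b)
    print(m2)  # adapted mean
    print(c2)  # adapted covariance
    ```

    Because one transform is shared by the mean and covariance, it can be estimated from very little adaptation data; CSMAPLR additionally ties transforms across a regression-class tree with MAP priors.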


    Synthesizing natural, emotionally expressive speech has long been a widely pursued goal. A Mandarin text-to-speech (TTS) system based on hidden Markov models (HMMs) can synthesize fairly natural speech from only a small corpus. Thanks to the parametric representation of speech, it can imitate the speaking style and emotional characteristics of an arbitrary target speaker, thereby achieving voice conversion.

    The system consists of three parts: training, adaptation, and synthesis. The training stage resembles that of speech recognition: feature parameters are extracted from the corpus, and speech units with similar or identical features are trained together. After training, the statistical models of the synthesis units fully capture their acoustic characteristics. In the adaptation stage, model adaptation methods originally devised to improve the accuracy of automatic speech recognition are applied to speech synthesis: an adaptation algorithm transforms the model parameters describing the source speaker into parameters describing the target speaker, with the aim of reproducing the characteristics of an arbitrary target speaker's natural speech. This thesis uses the CSMAPLR (Constrained Structural Maximum A Posteriori Linear Regression) adaptation method to adapt a source model trained in a newspaper-broadcasting style into a roar model, achieving emotion conversion. Finally, in the synthesis stage, the input text undergoes text analysis, the corresponding synthesis-unit statistical models are selected and concatenated into a sentence model, a feature parameter sequence is generated by the parameter generation algorithm, and the speech signal is produced by an inverse filter.

    In the experiments, subjective and objective tests evaluate the system's conversion of emotion and speaker characteristics.
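
    The parameter generation step of the synthesis stage can be sketched as follows: given per-frame means and variances for static and delta features picked from the sentence model, solve for the static trajectory that maximizes likelihood under the delta constraint. This assumes a single feature stream with diagonal covariances, and all statistics are toy values:

    ```python
    import numpy as np

    T = 5  # number of frames

    # W maps the static trajectory c (length T) to stacked [static; delta]
    # observations: static row is c_t, delta row is 0.5*(c_{t+1} - c_{t-1}).
    W = np.zeros((2 * T, T))
    for t in range(T):
        W[2 * t, t] = 1.0
        if t > 0:
            W[2 * t + 1, t - 1] = -0.5
        if t < T - 1:
            W[2 * t + 1, t + 1] = 0.5

    # Interleaved static/delta means and diagonal precisions from a
    # hypothetical sentence model (invented values).
    mu = np.array([0.0, 0.2, 1.0, 0.3, 2.0, 0.0, 1.5, -0.2, 0.5, 0.0])
    prec = np.ones(2 * T)

    # ML trajectory: solve (W^T P W) c = W^T P mu.
    A = W.T @ (prec[:, None] * W)
    bvec = W.T @ (prec * mu)
    c = np.linalg.solve(A, bvec)
    print(np.round(c, 3))
    ```

    The delta constraint is what makes the generated trajectory smooth; without it, the solution would simply be the per-frame static means and the synthetic speech would sound discontinuous at state boundaries.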

    Chapter 1: Introduction
      1.1 Motivation and objectives
      1.2 Background of speech synthesis research
        1.2.1 Unit-selection synthesis systems
        1.2.2 Statistical parametric synthesis systems
      1.3 Overview of the proposed framework
    Chapter 2: Methods
      2.1 Characteristics of Mandarin speech
      2.2 Feature parameters for speech synthesis
        2.2.1 Excitation signal
        2.2.2 Spectral parameters
      2.3 Hidden Markov models (HMM) and hidden semi-Markov models (HSMM)
        2.3.1 HMM training
        2.3.2 HMM-based speech synthesis
        2.3.3 Hidden semi-Markov models (HSMM)
      2.4 Context-dependent decision trees
        2.4.1 Context-dependent phone models
        2.4.2 Decision tree construction
      2.5 HSMM model adaptation
        2.5.1 Overview of model adaptation methods
        2.5.2 Constrained maximum likelihood linear regression (CMLLR)
        2.5.3 Context-dependent decision trees for model adaptation
        2.5.4 Constrained structural transforms (CSMAPLR)
    Chapter 3: System implementation
      3.1 System flow overview
      3.2 Corpus description
      3.3 Text analysis and corpus labeling
      3.4 Feature extraction
      3.5 Model training and adaptation
    Chapter 4: Experiments
      4.1 Subjective evaluations
      4.2 Objective evaluation of speaker similarity
    Chapter 5: Conclusions and future work


    Full-text release date: full text not authorized for public access (campus network)
    Full-text release date: full text not authorized for public access (off-campus network)
