Graduate Student: Wu, Shang-Hung (吳尚鴻)
Thesis Title: Chinese Text-to-Speech and Roar Emotion Conversion Based on Hidden Markov Model (基於隱藏式馬可夫模型之中文語音合成與吼叫情緒轉換)
Advisor: Wang, Hsiao-Chuan (王小川)
Committee Members: (not listed)
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2010
Graduation Academic Year: 98 (2009–2010)
Language: Chinese
Number of Pages: 66
Keywords (Chinese): speech synthesis, emotion conversion, hidden Markov model, text-to-speech
Keywords (English): emotion conversion, speech synthesis, HMM
The Mandarin HMM-based speech synthesis system can synthesize fairly natural speech from a small corpus. Because the speech is represented parametrically, its characteristics can be converted to those of an arbitrary target speaker or speaking style.
Model adaptation is a technique for improving the recognition rate of automatic speech recognition systems. By applying model adaptation to the synthesis system, we can modify the source speaker's model parameters to mimic the characteristics of an arbitrary target speaker. This thesis uses the CSMAPLR (Constrained Structural Maximum A Posteriori Linear Regression) algorithm for model adaptation: the source model is trained on a newspaper-reading corpus and adapted to a roaring speaking style.
Subjective and objective tests show that the synthetic speech is close to the target speaker's characteristics and can mimic the target emotion.
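The core of CSMAPLR is a constrained affine transform shared by a cluster of Gaussians and applied to both the mean and the covariance of each Gaussian. A minimal sketch of that transform follows; the function name, dimensions, and toy values are illustrative assumptions, not the thesis implementation:

```python
import numpy as np

def adapt_gaussian(mu, sigma, A, b):
    """Apply a constrained (feature-space) linear transform to one Gaussian.

    Because the same (A, b) transforms both the mean and the covariance,
    the adaptation is "constrained": mu' = A mu + b, Sigma' = A Sigma A^T.
    """
    mu_adapted = A @ mu + b
    sigma_adapted = A @ sigma @ A.T
    return mu_adapted, sigma_adapted

# Toy example: a 3-dimensional source Gaussian and an arbitrary transform.
rng = np.random.default_rng(0)
mu = rng.normal(size=3)
sigma = np.eye(3) * 0.5
A = np.eye(3) * 1.2          # stretch the feature space
b = np.full(3, 0.1)          # shift toward the target style

mu2, sigma2 = adapt_gaussian(mu, sigma, A, b)
```

In CSMAPLR the transforms are estimated with a structural MAP prior over a regression-class tree, so clusters with little adaptation data fall back toward their parent's transform; the sketch above only shows how an already-estimated transform is applied.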
Synthesizing natural, emotionally expressive speech has long been a widely pursued goal. A Mandarin text-to-speech (TTS) system based on hidden Markov models (HMMs) can synthesize fairly natural speech from a small corpus. Exploiting the parametric representation of speech, it can imitate the speaking style and emotional characteristics of an arbitrary target speaker, thereby achieving voice conversion.
The system consists of three parts: training, adaptation, and synthesis. The training stage resembles speech recognition: feature parameters are extracted from the corpus, and speech units with similar or identical features are trained together. After training, the statistical models of the synthesis units adequately capture their acoustic characteristics. In the adaptation stage, model adaptation methods originally developed to improve the accuracy of automatic speech recognition are applied to speech synthesis: an adaptation algorithm transforms the model parameters describing the source speaker into parameters describing the target speaker, with the aim of reproducing the natural speech characteristics of an arbitrary target speaker. This thesis uses the CSMAPLR (Constrained Structural Maximum A Posteriori Linear Regression) adaptation method to adapt a model of news-broadcast speech into a model of roaring speech, achieving emotion conversion. Finally, in the synthesis stage, the input text undergoes text analysis, the corresponding synthesis-unit statistical models are selected and concatenated into a sentence model, a parameter generation algorithm produces the feature parameter sequence, and an inverse filter produces the speech waveform.
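The parameter generation algorithm finds the static feature trajectory c that maximizes the sentence model's likelihood under the constraint that the dynamic (delta) features are linear functions of the statics, o = Wc; the maximum-likelihood solution satisfies (WᵀU⁻¹W)c = WᵀU⁻¹μ. The following one-dimensional sketch illustrates the idea; the window shape and toy values are assumptions for illustration, not the system's actual configuration:

```python
import numpy as np

def generate_trajectory(means, variances):
    """ML parameter generation for one static feature stream.

    means, variances: (T, 2) arrays of per-frame static and delta
    means/variances taken from the sentence model.  Builds the window
    matrix W mapping statics to [static, delta] observations and solves
    (W^T U^-1 W) c = W^T U^-1 mu for the static trajectory c.
    """
    T = means.shape[0]
    W = np.zeros((2 * T, T))
    for t in range(T):
        W[2 * t, t] = 1.0                        # static window
        for k, w in ((-1, -0.5), (1, 0.5)):      # delta window [-0.5, 0, 0.5]
            if 0 <= t + k < T:
                W[2 * t + 1, t + k] = w
    mu = means.reshape(-1)
    u_inv = np.diag(1.0 / variances.reshape(-1))
    return np.linalg.solve(W.T @ u_inv @ W, W.T @ u_inv @ mu)

# Toy sentence model: the static mean steps from 0 to 1, delta means are 0.
means = np.array([[0.0, 0.0], [0.0, 0.0], [1.0, 0.0], [1.0, 0.0]])
tight_delta = np.array([[0.1, 0.01]] * 4)  # strong delta constraint
c = generate_trajectory(means, tight_delta)
```

With a tight delta variance the generated trajectory smooths the abrupt step between state means, which is exactly why dynamic features prevent the frame-by-frame discontinuities a naive mean sequence would produce.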
In the experiments, subjective and objective tests evaluate the system's conversion of emotion and speaker characteristics.
[1] A. Black and K. Lenzo, “Limited domain synthesis,” Proc. ICSLP, pp. 411–414, 2000.
[2] A. Hunt and A. Black, “Unit selection in a concatenative speech synthesis system using a large speech database,” Proc. ICASSP, pp. 373–376, 1996.
[3] K. Tokuda, T. Yoshimura, T. Masuko, T. Kobayashi, and T. Kitamura, “Speech parameter generation algorithms for HMM-based speech synthesis,” Proc. ICASSP, June 2000.
[4] J. Yamagishi, H. Zen, T. Toda, and K. Tokuda, “Speaker-independent HMM-based speech synthesis system — HTS-2007 system for the Blizzard Challenge,” Proc. Blizzard Challenge Workshop, 2007.
[5] S. Imai, “Cepstral analysis synthesis on the mel-frequency scale,” Proc. ICASSP, 1983.
[6] H.-C. Wang (王小川), 語音訊號處理 (Speech Signal Processing).
[7] Wavesurfer, http://www.speech.kth.se/wavesurfer/.
[8] K. Tokuda, T. Kobayashi, T. Masuko, and S. Imai, “Mel-generalized cepstral analysis – a unified approach to speech spectral estimation,” Proc. ICASSP, pp. 1043–1046, 1994.
[9] T. Kobayashi and S. Imai, “Spectral analysis using generalized cepstrum,” IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-32, pp. 1087–1089, Oct. 1984.
[11] H. Zen, K. Tokuda, T. Masuko, T. Kobayashi, and T. Kitamura, “Hidden semi-Markov model based speech synthesis,” Proc. ICSLP, 2004.
[12] T. Yoshimura, T. Masuko, K. Tokuda, T. Kobayashi, and T. Kitamura, “Duration modeling for HMM-based speech synthesis,” Proc. ICSLP, vol. 2, pp. 29–32, Nov. 1998.
[13] K. Tokuda, T. Kobayashi, and S. Imai, “Speech parameter generation from HMM using dynamic features,” Proc. ICASSP, pp. 660–663, 1995.
[14] K. Shinoda and T. Watanabe, “MDL-based context-dependent subword modeling for speech recognition,” J. Acoust. Soc. Japan (E), vol. 21, pp. 79–86, Mar. 2000.
[15] C. Leggetter and P. Woodland, “Maximum likelihood linear regression for speaker adaptation of continuous density hidden Markov models,” Comput. Speech Lang., vol. 9, no. 2, pp. 171–185, 1995.
[16] J. Yamagishi, T. Kobayashi, Y. Nakano, K. Ogata, and J. Isogai, “Analysis of speaker adaptation algorithms for HMM-based speech synthesis and a constrained SMAPLR adaptation algorithm,” IEEE Trans. Audio, Speech, Lang. Process., vol. 17, no. 1, pp. 66–83, Jan. 2009.
[17] K. Shinoda and C. Lee, “A structural Bayes approach to speaker adaptation,” IEEE Trans. Speech Audio Process., vol. 9, no. 3, pp. 276–287, Mar. 2001.
[18] Hidden Markov Model Toolkit (HTK), http://htk.eng.cam.ac.uk/
[19] Speech Signal Processing Toolkit (SPTK), http://sp-tk.sourceforge.net/
[20] HMM-based Speech Synthesis System (HTS), http://hts.sp.nitech.ac.jp/
[21] HTS engine, http://hts-engine.sourceforge.net/