| Field | Value |
|---|---|
| Graduate Student | 呂理鈞 Li-Chun Lu |
| Thesis Title | 基於馬可夫鏈與音樂理論之互動式藍調吉他呼喊與回應即興系統 (An Interactive Call and Response Blues Guitar Jamming System based on Markov Chain and Music Theory) |
| Advisors | 劉奕汶 Liu, Yi-Wen; 蘇郁惠 Su, Yu-Huei |
| Committee Members | 蘇黎 Su, Li; 林宜徵 Lin, Yi-Cheng; 俞韋亘 Yu, Wei-Hsuan |
| Degree | Master |
| Department | College of Arts - Department of Music |
| Year of Publication | 2022 |
| Graduation Academic Year | 111 (ROC calendar) |
| Language | Chinese |
| Number of Pages | 49 |
| Keywords (Chinese) | Markov chain, blues, guitar, call and response, improvisation, interactive system |
| Keywords (English) | Blues, guitar, call and response, jamming, interactive system |
Abstract (Chinese, translated):
Call and response is one of the classic techniques of blues music. In performance, successive melodies and sounds echo one another, much like a conversation between people in which meaning and emotion are exchanged, and it is one of the most enjoyable parts of improvisation for blues guitarists. This study aims to build a system that lets a person jam twelve-bar blues guitar alone with a computer, focusing on the realization of call and response. The system has three parts: drum and electric bass accompaniment, and an electric piano solo that responds to the melody played by the user. The player sets the tempo, key, and chord progression to start the accompaniment, then plays a melodic phrase (the call) on a real electric guitar; the system analyzes it, generates a corresponding response melody, and immediately plays it on the beat, so that the back-and-forth exchange simulates the feel and atmosphere of jamming with other musicians. The system uses Markov chains and music theory to perform statistical and structural analysis of blues improvisation excerpts, explores several ways of defining and generating melodic structure, and, based on experiments, arrives at three algorithms for generating response melodies, which are implemented in a playable interactive jamming program. Finally, guitarists with blues improvisation experience were recruited for hands-on tests, interviews, and questionnaire feedback. The results show that most players considered the system's response melodies consistent with music theory and the blues style, and agreed that the system is very helpful for guitar learning and practice.
Abstract (English):
“Call and response” is an important technique in Blues music: one melody echoes another, much as people converse with each other, and it is among the most fascinating parts of Blues guitar jamming. In this thesis, we propose an interactive 12-bar Blues guitar jamming system that focuses on the realization of call and response. The system consists of three parts: drums, bass, and an electric piano that responds to the user. After the tempo, key, and chord progression are set, the user plays a melody, and the system analyzes it and generates a response melody in time with the beat. The groove flows back and forth, much like improvising with real people. We use Markov chains and music theory to analyze Blues guitar improvisations, define melodic structure, and generate melodies. We propose three algorithms for generating response melodies and develop them into a playable jamming system. Finally, we invited subjects with Blues guitar jamming experience for tests, interviews, and questionnaire feedback. The results show that most players found the system's response melodies consistent with music theory and the Blues style, and agreed that the system is very helpful for guitar learning and practice.
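The abstract describes response melodies generated from a Markov chain trained on blues improvisation excerpts and constrained by music theory. As a rough, minimal sketch of that general idea (not the three algorithms developed in the thesis), the Python snippet below builds a first-order pitch-transition chain from a few toy phrases and samples a response that starts from the last note of the player's call and stays on an A blues scale; the training phrases, scale constraint, and all function names are illustrative assumptions.

```python
import random
from collections import defaultdict

# Illustrative constraint: an A blues scale over two octaves (MIDI note numbers
# for A, C, D, Eb, E, G starting at A3).
BLUES_SCALE_A = {57, 60, 62, 63, 64, 67, 69, 72, 74, 75, 76, 79}


def train_markov(phrases):
    """Count pitch-to-pitch transitions across the training phrases."""
    transitions = defaultdict(list)
    for phrase in phrases:
        for prev, nxt in zip(phrase, phrase[1:]):
            transitions[prev].append(nxt)
    return transitions


def generate_response(call, transitions, length=8):
    """Walk the chain starting from the last note of the call,
    keeping only notes that belong to the blues scale."""
    note = call[-1]
    response = []
    for _ in range(length):
        candidates = [n for n in transitions.get(note, []) if n in BLUES_SCALE_A]
        if not candidates:            # dead end: fall back to the tonic
            candidates = [57]
        note = random.choice(candidates)
        response.append(note)
    return response


if __name__ == "__main__":
    training_phrases = [              # toy licks in A, as MIDI note numbers
        [57, 60, 62, 63, 62, 60, 57],
        [60, 62, 63, 64, 63, 62, 60, 57],
        [64, 67, 69, 67, 64, 63, 62, 60],
    ]
    chain = train_markov(training_phrases)
    call = [57, 60, 62, 63]           # the player's "call" phrase
    print("Response:", generate_response(call, chain))
```

A full system such as the one described would additionally need to transcribe the guitar input, handle rhythm as well as pitch, and schedule the response in time with the drum and bass accompaniment.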