Graduate Student: | 鄭鉅融 Cheng, Jyu-Rong
---|---
Thesis Title: | 使用阿倫·福特的音類集理論及特徵抽取實現音樂之重構與風格轉換 (Music Reconstruction and Style Transfer Using Allen Forte's Pitch-Class Set Theory and Feature Extraction)
Advisor: | 蘇豐文 Soo, Von-Wun
Committee Members: | 胡敏君 Hu, Min-Chun; 劉瑞瓏 Liu, Rey-Long
Degree: | Master
Department: | College of Electrical Engineering and Computer Science - Institute of Information Systems and Applications
Publication Year: | 2020
Graduation Academic Year: | 109
Language: | English
Pages: | 51
Keywords: | Music Generation, Style Transfer, Deep Learning, Pitch-Class Set Theory, Tonal Harmony, Counterpoint
Music generation is a subject widely investigated in deep learning and music information retrieval (MIR) research.
In this thesis, we adopt Allen Forte's pitch-class set theory, presented in his 1973 book The Structure of Atonal Music, and extract additional musical features, including rhythm patterns and melody contours. This not only strengthens the music-theoretic and mathematical foundation of our approach, but also captures how musical elements reference and transform one another across different musical styles during composition.
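To make the pitch-class set machinery concrete: Forte-style analysis reduces any chord or pitch collection to a canonical "prime form" that is invariant under transposition and inversion. The sketch below is an illustration only, not the thesis's actual implementation; it uses Rahn's common left-packing convention, and the lexicographic tie-break in `prime_form` is the usual heuristic rather than a guaranteed match to Forte's published tables in every edge case.

```python
def normal_order(pcs):
    """Return the rotation of a pitch-class set (pitches taken mod 12)
    that is most compactly packed from the left: smallest total span,
    ties broken by smaller intervals closer to the first element."""
    pcs = sorted(set(p % 12 for p in pcs))
    n = len(pcs)
    rotations = [pcs[i:] + [p + 12 for p in pcs[:i]] for i in range(n)]
    # Compare span first, then the interval to the second-to-last note, etc.
    def packing_key(rot):
        return [rot[j] - rot[0] for j in range(n - 1, 0, -1)]
    return [p % 12 for p in min(rotations, key=packing_key)]

def prime_form(pcs):
    """Prime form: the more left-packed of the set's normal order and its
    inversion's normal order, both transposed to begin on 0."""
    no = normal_order(pcs)
    t = [(p - no[0]) % 12 for p in no]
    inv = normal_order([(-p) % 12 for p in pcs])
    ti = [(p - inv[0]) % 12 for p in inv]
    # Lexicographic comparison favors the form packed toward the left.
    return min(t, ti)
```

For example, a C major triad (MIDI 60, 64, 67) reduces to the prime form [0, 3, 7], Forte's set class 3-11, the same class as a minor triad.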
The proposed method is highly flexible: it yields diverse outcomes under various circumstances, and it can operate even with limited training data.
Subjective evaluation indicates that the generated music is pleasant to listen to, and predictions from a pre-trained classifier model show that it was successfully transformed into other styles.
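The melody contours extracted as a feature above can take several forms; one simple, widely used encoding is the Parsons code, shown here purely as an illustration rather than the thesis's exact contour representation. It discards absolute pitch and keeps only the up/down/repeat shape of the line:

```python
def parsons_code(pitches):
    """Encode a melody's contour as Parsons code: '*' start symbol,
    then 'u' (up), 'd' (down), or 'r' (repeat) per successive interval."""
    code = "*"
    for prev, cur in zip(pitches, pitches[1:]):
        code += "u" if cur > prev else "d" if cur < prev else "r"
    return code
```

Because the encoding ignores absolute pitch and interval size, two melodies in different keys or styles can share a contour, which is what makes contour a useful cross-style feature.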