| Field | Value |
|---|---|
| Graduate Student (研究生) | 李奇謀 (Lee, Chi-Mou) |
| Thesis Title (論文名稱) | 基於限制性蒙地卡羅搜尋樹之因果故事生成系統 (Causal Story Generation Systems Based on Constrained Monte Carlo Tree Search) |
| Advisor (指導教授) | 蘇豐文 (Soo, Von-Wun) |
| Committee Members (口試委員) | 陳宜欣 (Chen, Yi-Shin); 陳煥宗 (Chen, Hwann-Tzong) |
| Degree (學位類別) | Master (碩士) |
| Department (系所名稱) | |
| Year of Publication (論文出版年) | 2018 |
| Academic Year of Graduation (畢業學年度) | 106 |
| Language (語文別) | English |
| Pages (論文頁數) | 137 |
| Chinese Keywords (中文關鍵詞, translated) | story generation; Monte Carlo tree search; causal relation; common-sense knowledge base; story model; deep learning |
| English Keywords (外文關鍵詞) | Story generation; Monte Carlo Tree Search; Causal relation; Common-sense ontology; Fabula ontology; Deep learning |
Abstract (translated from the Chinese):

Most existing story generation systems are weak AI: because computers lack semantic understanding and common-sense reasoning, natural language generation is typically built on fill-in templates and large sets of hand-crafted rules. Besides depending heavily on expert-supplied knowledge and therefore resisting flexible extension, the large rule sets also make story generation time-consuming. We therefore built a causal story generation system that, given user-specified generation parameters, can quickly search a large corpus for diverse, reasonable stories that meet the user's requirements. This thesis describes how causal plot structures are extracted automatically from the corpus, and how a constrained Monte Carlo tree search is applied to search story sequences efficiently. In addition, we designed a knowledge-base system that combines several deep learning frameworks to support multiple story templates and the constrained Monte Carlo tree search in generating stories. Finally, we designed a variety of natural language templates to convert the generated story sequences into readable stories. In the experimental results, the generation results for different stories are presented separately, and we compare how varying the parameters of the constrained Monte Carlo tree search affects story generation; for the knowledge-base system, we also demonstrate the functionality and differences of the various designs and their influence on the generated stories.
Abstract (English):

Current story generation systems are in general weak AI systems in the sense that they lack the semantic understanding and common-sense knowledge, which forces them to rely on enormous numbers of story templates and narrative rules to generate stories. Consequently, without experts, computers cannot flexibly augment their knowledge, and considerable time and effort are required to create stories with a complicated rule-based generation system. We therefore construct a causal story generation system capable of quickly searching a large knowledge database for diverse, reasonable, and user-desired stories. In this thesis, we propose an approach that automatically extracts causal knowledge from the existing knowledge base ConceptNet to construct our own database, so that we can efficiently search for story sequences using a Constrained Monte Carlo Tree Search (cMCTS) algorithm. Furthermore, a Knowledge-Based System is built with deep learning techniques to support cMCTS and the story frameworks used in generating stories. To present the generated stories in a human-comprehensible way, we design various translation templates that convert the formal causal story sequences into natural language sentences. The stories generated under different parameter settings of cMCTS are illustrated and compared in several simulation experiments, along with evaluations of the functionality of the Knowledge-Based System.
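The combination of MCTS with user constraints described in the abstract can be illustrated with a minimal sketch. This is not the thesis's implementation: the toy causal graph, the event names, and the constraint scheme (a set of forbidden events pruned at expansion and rollout time, plus a set of goal events rewarded in rollouts) are all illustrative assumptions; the actual system extracts its causal relations from ConceptNet and supports richer user-given constraints.

```python
import math
import random

# A toy causal graph: each event maps to its possible consequent events.
# These event names are illustrative, not taken from the thesis corpus.
CAUSAL_GRAPH = {
    "hero_is_hungry": ["hero_hunts", "hero_steals_food"],
    "hero_hunts": ["hero_finds_prey", "hero_gets_lost"],
    "hero_steals_food": ["hero_is_caught", "hero_escapes"],
    "hero_finds_prey": ["hero_eats"],
    "hero_gets_lost": ["hero_is_rescued"],
    "hero_is_caught": ["hero_is_punished"],
    "hero_escapes": ["hero_eats"],
    "hero_eats": [],
    "hero_is_rescued": [],
    "hero_is_punished": [],
}

FORBIDDEN = {"hero_steals_food"}  # user constraint: prune these branches
GOALS = {"hero_eats"}             # user constraint: reward reaching these

class Node:
    def __init__(self, event, parent=None):
        self.event = event
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0
        # Constrained expansion: forbidden events never enter the tree.
        self.untried = [e for e in CAUSAL_GRAPH[event] if e not in FORBIDDEN]

    def uct_child(self, c=1.4):
        # Standard UCT: exploitation term plus exploration bonus.
        return max(self.children,
                   key=lambda n: n.value / n.visits
                   + c * math.sqrt(math.log(self.visits) / n.visits))

def rollout(event):
    """Random playout through the causal graph; 1.0 if a goal is reached."""
    while True:
        if event in GOALS:
            return 1.0
        nxt = [e for e in CAUSAL_GRAPH[event] if e not in FORBIDDEN]
        if not nxt:
            return 0.0
        event = random.choice(nxt)

def cmcts(root_event, iterations=500):
    root = Node(root_event)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend fully expanded nodes by UCT.
        while not node.untried and node.children:
            node = node.uct_child()
        # 2. Expansion: only constraint-satisfying events were kept.
        if node.untried:
            node = Node(node.untried.pop(), parent=node)
            node.parent.children.append(node)
        # 3. Simulation from the new node.
        reward = rollout(node.event)
        # 4. Backpropagation of the reward to the root.
        while node:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Read off the best story sequence greedily by visit count.
    seq, node = [root.event], root
    while node.children:
        node = max(node.children, key=lambda n: n.visits)
        seq.append(node.event)
    return seq

print(cmcts("hero_is_hungry"))
```

Because forbidden events are excluded during both expansion and rollout, no returned sequence can contain them, while the goal-based reward steers the search toward story lines that satisfy the user's requirements.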