Graduate Student: 郭昱辰 Kuo, Yu-Chen
Thesis Title: 利用增強式學習法來學習漢語片語結構的剖析 Using Reinforcement Learning to Learn Phrase Structure Parsing in Mandarin Chinese
Advisor: 蘇豐文 Soo, Von-Wun
Oral Examination Committee:
Degree: Master (碩士)
Department: Institute of Information Systems and Applications, College of Electrical Engineering and Computer Science
Year of Publication: 2009
Graduation Academic Year: 97
Language: English
Pages: 33
Keywords (Chinese): 自然語言 (natural language), 監督式學習 (supervised learning), 增強式學習 (reinforcement learning), 獎勵給予 (reward giving)
Abstract:

Learning how to parse a sentence correctly has long been a challenging problem in natural language acquisition. Traditional supervised parser-learning methods typically make very strong assumptions about the grammar and rules underlying the correctly parsed training data. Under such assumptions, acquiring a large set of correctly annotated training corpora places a heavy burden on the trainers who must label the correct answers, often making parser learning infeasible. Reinforcement learning (RL) is a powerful learning technique in that rewards need only be given along a successful sequence of actions; compared with traditional supervised learning, this style of reward giving demands far less of the trainers. Yet this feature has rarely been exploited in natural-language parser learning. In this thesis we show that RL is well suited to parser learning if proper data structures are adopted. The effectiveness and robustness of learning a parser with RL are also research foci of this thesis. In particular, we emphasize the reward-giving schemata in RL and compare the performance of the trained parsers under different schemata. We propose two rewarding schemata and compare their advantages and disadvantages through experiments on learning the phrase structures of Chinese sentences. The first is called intermediate-route rewarding (IRR); the second, delayed partial rewarding (DPR). Under the IRR schema, the environment rewards the parser when it reaches a state that would be traversed if the correct actions were taken, and punishes it in all other states. Under the DPR schema, the environment rewards the parser only when it completes a correct sub-parse (i.e., a phrase); apart from a few states that clearly violate common sense, the parser is neither rewarded nor punished elsewhere. Comparing the two schemata, IRR outperforms DPR in F-score, whereas DPR achieves better coverage. We discuss these results in detail in Chapter 4.
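The two rewarding schemata described above can be sketched as reward functions over parser states. This is a minimal illustrative sketch: the state and sub-parse representations, reward magnitudes, and function names are assumptions for exposition, not the thesis's actual implementation.

```python
def irr_reward(state, gold_trajectory):
    """Intermediate-route rewarding (IRR): reward the parser whenever it
    reaches a state that lies on the trajectory traversed by the correct
    action sequence; punish it in every other state."""
    return 1.0 if state in gold_trajectory else -1.0


def dpr_reward(completed_subparse, gold_subparses, is_illegal_state):
    """Delayed partial rewarding (DPR): reward the parser only when it
    finishes a correct sub-parse (a phrase). Apart from clearly illegal
    states, all other states yield neither reward nor punishment."""
    if completed_subparse is not None and completed_subparse in gold_subparses:
        return 1.0   # a correct phrase was completed
    if is_illegal_state:
        return -1.0  # a state that violates common sense
    return 0.0       # otherwise: no feedback (the reward is delayed)
```

Under IRR the learner receives dense feedback at every step, which helps it fit the training trajectories closely (consistent with its higher F-score), while DPR's sparser, delayed signal constrains the parser less and may generalize to more sentences (consistent with its better coverage).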