Master's/Doctoral Thesis etd-0708118-120036: Detailed Information
Title page for etd-0708118-120036
Title
基於主題正規化遞歸神經網路的自動名詞解釋
Automatic Term Explanation based on Topic-regularized Recurrent Neural Network
Department
Year, semester
Language
Degree
Number of pages
38
Author
Advisor
Convenor
Advisory Committee
Date of Exam
2018-07-20
Date of Submission
2018-08-10
Keywords
Recurrent neural network, Automatic sentence generation, Automatic term explanation, Automatic summarization, Nonnegative matrix factorization, Topic model, Long short-term memory
Statistics
The thesis/dissertation has been browsed 5869 times and downloaded 299 times.
Chinese Abstract
In this study, we propose a topic-regularized recurrent neural network model whose goal is to generate a passage of text that explains a given term. RNN-based models usually generate sentences that are syntactically correct but lack semantic coherence, whereas topic models produce topics composed of mutually related keywords. Given the goal of generating sentences, RNN-based models and topic models appear to be complementary in the balance between syntactic correctness and semantic coherence, so we combine them into a new model that enjoys the benefits of both. In our experiments, we trained Long Short-Term Memory (LSTM) models on selected articles and applied nonsmooth nonnegative matrix factorization to the document-term matrix to obtain contextual information. Our experimental results show that the topic-regularized LSTM outperforms the original model in generating readable sentences. Moreover, the topic-regularized LSTM can adopt different topics to describe the given term in detail from multiple aspects, which the original model usually cannot do.
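To make the topic-modeling half of the pipeline described above concrete, the sketch below factorizes a toy document-term matrix with scikit-learn's plain NMF, used here only as a stand-in for nsNMF (which scikit-learn does not provide); the documents, component count, and variable names are illustrative assumptions rather than details from the thesis.

```python
# Minimal sketch, assuming scikit-learn: obtain topic-term weights from a
# document-term matrix. Plain NMF stands in for nsNMF; the toy documents,
# component count, and names are illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import NMF

docs = [
    "recurrent neural networks generate text one token at a time",
    "topic models group related keywords into coherent topics",
    "long short term memory networks capture long range dependencies",
]
vectorizer = CountVectorizer()
dtm = vectorizer.fit_transform(docs)              # documents x terms
nmf = NMF(n_components=2, init="nndsvda", random_state=0)
doc_topic = nmf.fit_transform(dtm)                # documents x topics
topic_term = nmf.components_                      # topics x terms

# The highest-weighted terms of each topic act as the "contextual" keywords
# that can later bias generation toward one aspect of the given term.
terms = vectorizer.get_feature_names_out()
for k, row in enumerate(topic_term):
    top = [terms[i] for i in row.argsort()[::-1][:5]]
    print(f"topic {k}: {top}")
```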
Abstract
In this study, we propose a topic-regularized Recurrent Neural Network (RNN)-based model designed to explain given terms. RNN-based models usually generate text with correct syntax but little coherence, whereas topic models produce topics consisting of coherent keywords. Here we combine them into a new model that takes advantage of both. In our experiments, we trained Long Short-Term Memory (LSTM) models on selected articles that mention the given terms and applied nonsmooth nonnegative matrix factorization (nsNMF) to the document-term matrix to obtain contextual biases. Our empirical results show that the topic-regularized LSTM outperforms the original model in generating readable sentences. Additionally, the topic-regularized LSTM can adopt different topics to generate descriptions of subtle but important aspects of a given field, which the original LSTM usually fails to capture.
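A minimal sketch of how such topic weights might regularize decoding follows: a toy PyTorch LSTM language model whose per-step softmax distribution is interpolated with a topic's term distribution. The architecture, the mixing weight alpha, and the function names are assumptions for illustration, not the thesis's actual formulation.

```python
# Minimal sketch, assuming PyTorch: interpolate an LSTM language model's
# next-word distribution with a topic's term distribution. The architecture,
# the mixing weight alpha, and all names here are illustrative assumptions.
import torch
import torch.nn as nn

VOCAB = 1000  # assumed vocabulary size

class LSTMLM(nn.Module):
    def __init__(self, vocab=VOCAB, emb=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, tokens, state=None):
        h, state = self.lstm(self.embed(tokens), state)
        return self.out(h), state

def topic_regularized_step(logits, topic_term_row, alpha=0.5):
    """Mix the LSTM softmax with a normalized topic-term distribution."""
    lstm_probs = torch.softmax(logits, dim=-1)
    topic_probs = topic_term_row / topic_term_row.sum()
    mixed = (1 - alpha) * lstm_probs + alpha * topic_probs
    return mixed / mixed.sum()

model = LSTMLM()
topic_term_row = torch.rand(VOCAB)        # stand-in for one row of the nsNMF topic-term matrix
token = torch.tensor([[1]])               # arbitrary start token id
logits, state = model(token)
probs = topic_regularized_step(logits[0, -1], topic_term_row)
next_token = torch.multinomial(probs, 1)  # sample the next word id
```

Mixing in probability space keeps the result a valid distribution; an alternative design, closer to TopicRNN (Dieng et al., 2016) in the reference list, injects the topic vector as an additive bias on the logits before the softmax.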
Table of Contents
Thesis Approval Letter i
Chinese Abstract ii
English Abstract iii
1 Introduction 1
2 Background and Related work 4
Language Model 5
Topic Model 9
3 Topic-regularized Recurrent Neural Network for Automatic Term Explanation 11
LSTM 12
Filtering 12
Grouping by First Word 13
Logarithm 14
Softmax 15
Generating Terms 16
4 Experimental Result 20
5 Discussion 26
Hyperparameter 26
Randomness 27
Practicality 28
6 Conclusion 28
Reference 29
References
Arora, S., Ge, R., & Moitra, A. (2012). Learning topic models–going beyond SVD. In Foundations of Computer Science (FOCS), 2012 IEEE 53rd Annual Symposium on (pp. 1–10). IEEE.
Bahdanau, D., Cho, K., & Bengio, Y. (2014). Neural machine translation by jointly learning to align and translate. ArXiv Preprint ArXiv:1409.0473.
Bengio, Y., Simard, P., & Frasconi, P. (1994). Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2), 157–166.
Blei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent Dirichlet allocation. Journal of Machine Learning Research, 3(Jan), 993–1022.
Das, D., & Martins, A. F. (2007). A survey on automatic text summarization. Literature Survey for the Language and Statistics II Course at CMU, 4, 192–195.
Dieng, A. B., Wang, C., Gao, J., & Paisley, J. (2016). TopicRNN: A recurrent neural network with long-range semantic dependency. ArXiv Preprint ArXiv:1611.01702.
Gimpel, K., & Smith, N. A. (2010). Softmax-margin CRFs: Training log-linear models with cost functions. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics (pp. 733–736). Association for Computational Linguistics.
Gu, J., Lu, Z., Li, H., & Li, V. O. (2016). Incorporating copying mechanism in sequence-to-sequence learning. ArXiv Preprint ArXiv:1603.06393.
Karpathy, A. (2015). The unreasonable effectiveness of recurrent neural networks. Andrej Karpathy Blog.
LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444.
Lee, D. D., & Seung, H. S. (2001). Algorithms for non-negative matrix factorization. In Advances in neural information processing systems (pp. 556–562).
Lin, C.-J. (2007). Projected gradient methods for nonnegative matrix factorization. Neural Computation, 19(10), 2756–2779.
Lin, C.-Y. (2004). ROUGE: A package for automatic evaluation of summaries. Text Summarization Branches Out.
Mihalcea, R., & Tarau, P. (2004). TextRank: Bringing order into text. In Proceedings of the 2004 conference on empirical methods in natural language processing.
Nallapati, R., Zhou, B., Gulcehre, C., & Xiang, B. (2016). Abstractive text summarization using sequence-to-sequence RNNs and beyond. ArXiv Preprint ArXiv:1602.06023.
Olah, C. (2015). Understanding LSTM networks. GitHub Blog, posted on August 27, 2015.
Pascual-Montano, A., Carazo, J. M., Kochi, K., Lehmann, D., & Pascual-Marqui, R. D. (2006). Nonsmooth nonnegative matrix factorization (nsNMF). IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(3), 403–415.
Pennington, J., Socher, R., & Manning, C. (2014). GloVe: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP) (pp. 1532–1543).
Statistics. (n.d.). Retrieved August 9, 2018, from https://arxiv.org/archive/stat.ML
Fulltext
This electronic full text is licensed only for personal, non-profit retrieval, reading, and printing for the purpose of academic research. Please comply with the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast it without authorization, so as to avoid violating the law.
Thesis access permission: unrestricted (open both on and off campus)
Available:
Campus: available
Off-campus: available


Printed copies
Availability information for printed copies is relatively complete from academic year 102 (2013–2014) onward. To look up availability information for printed theses from academic year 101 (2012–2013) or earlier, please contact the printed thesis service desk of the Office of Library and Information Services. We apologize for any inconvenience.
Available: available
