Master's/Doctoral Thesis etd-0708118-120036: Detailed Record



Name: 周詠捷 (Yung-Chieh Chou)    E-mail: not publicly available
Department: Information Management (資訊管理學系研究所)
Degree: Master    Graduation term: Second semester, academic year 106 (spring 2018)
Title (Chinese): 基於主題正規化遞歸神經網路的自動名詞解釋
Title (English): Automatic Term Explanation based on Topic-regularized Recurrent Neural Network
Files
  • etd-0708118-120036.pdf
  • This electronic full text is licensed to users only for personal, non-commercial retrieval, reading, and printing for the purpose of academic research.
    Please comply with the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast it without authorization.
    Access permissions

    Print copy: available to the public immediately

    Electronic copy: fully open access on and off campus

    Language / Pages: English / 38
    Statistics: This thesis has been viewed 5,568 times and downloaded 263 times.
    Abstract (Chinese): In this study, we propose a topic-regularized recurrent neural network model whose goal is to generate a passage of text that explains a given term. RNN-based models usually generate sentences with correct syntax but little semantic coherence, whereas topic models produce topics made up of mutually related keywords. With sentence generation as the goal, RNN-based models and topic models appear to complement each other in balancing syntactic correctness against semantic coherence, so we combine them into a new model that retains the benefits of both. In our experiments, we trained Long Short-Term Memory (LSTM) models on selected articles and applied nonsmooth nonnegative matrix factorization to the document-term matrix to obtain contexts. Our results show that the topic-regularized LSTM outperforms the original model in generating readable sentences. In addition, the topic-regularized LSTM can adopt different topics to describe the given term from multiple aspects, which the original model usually cannot do.
    Abstract (English): In this study, we propose a topic-regularized Recurrent Neural Network (RNN)-based model designed to explain given terms. RNN-based models usually generate text that has correct syntax but lacks coherence, whereas topic models produce topics consisting of coherent keywords. We therefore combine the two into a new model that takes advantage of both. In our experiment, we trained Long Short-Term Memory (LSTM) models on selected articles that mention the given terms and applied nonsmooth nonnegative matrix factorization (nsNMF) to the document-term matrix to obtain contextual biases. Our empirical results show that the topic-regularized LSTM outperforms the original model in generating readable sentences. Additionally, the topic-regularized LSTM can adopt different topics to generate descriptions of subtle but important aspects of a given field, which the original LSTM usually fails to capture.
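    The approach described in the abstract has two ingredients: a topic model fitted on a document-term matrix, and an LSTM language model whose next-word scores are biased toward the keywords of a chosen topic. The sketch below is only an illustration of that idea, not the thesis code: it uses scikit-learn's plain NMF as a stand-in for nsNMF, a random vector as a stand-in for real LSTM logits, and hypothetical helper names such as `topic_bias`.

```python
# Minimal sketch: topic keywords from NMF used to bias a language model's
# next-word scores. Plain NMF stands in for nsNMF; the "LSTM logits" are fake.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import NMF

docs = [
    "neural networks learn representations from data",
    "recurrent networks model sequences of words",
    "topic models group related keywords into topics",
    "matrix factorization decomposes a document term matrix",
]

# Document-term matrix and a small NMF topic model (stand-in for nsNMF).
vectorizer = CountVectorizer()
dtm = vectorizer.fit_transform(docs)              # shape: (n_docs, n_terms)
nmf = NMF(n_components=2, init="nndsvda", random_state=0)
doc_topics = nmf.fit_transform(dtm)               # per-document topic weights (not used further here)
topic_terms = nmf.components_                     # per-topic term weights
vocab = vectorizer.get_feature_names_out()

def topic_bias(topic_id, strength=1.0):
    """Log-scaled bias over the vocabulary for one topic (hypothetical helper)."""
    weights = topic_terms[topic_id]
    return strength * np.log1p(weights / (weights.sum() + 1e-12))

def biased_next_word(lstm_logits, topic_id):
    """Add the topic bias to the model's logits and pick the top word
    (a real generator would sample from the softmax distribution)."""
    scores = lstm_logits + topic_bias(topic_id)
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return vocab[int(np.argmax(probs))]

# Placeholder logits: in the thesis these would come from a trained LSTM.
rng = np.random.default_rng(0)
fake_logits = rng.normal(size=len(vocab))
print(biased_next_word(fake_logits, topic_id=0))
```

    Choosing a different `topic_id` steers generation toward a different cluster of keywords, which is the mechanism the abstract credits for describing a term from several aspects.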
    Keywords (Chinese)
  • Nonnegative matrix factorization
  • Recurrent neural network
  • Automatic term explanation
  • Topic model
  • Long short-term memory
  • Automatic sentence generation
  • Automatic summarization
    Keywords (English)
  • Recurrent neural network
  • Automatic sentence generation
  • Automatic term explanation
  • Automatic summarization
  • Nonnegative matrix factorization
  • Topic model
  • Long short-term memory
    Table of Contents
    Thesis approval form i
    Chinese abstract ii
    English abstract iii
    1 Introduction 1
    2 Background and Related work 4
    Language Model 5
    Topic Model 9
    3 Topic-regularized Recurrent Neural Network for Automatic Term Explanation 11
    LSTM 12
    Filtering 12
    Grouping by First Word 13
    Logarithm 14
    Softmax 15
    Generating Terms 16
    4 Experimental Result 20
    5 Discussion 26
    Hyperparameter 26
    Randomness 27
    Practicality 28
    6 Conclusion 28
    Reference 29
    References
    Arora, S., Ge, R., & Moitra, A. (2012). Learning topic models – going beyond SVD. In Foundations of Computer Science (FOCS), 2012 IEEE 53rd Annual Symposium on (pp. 1–10). IEEE.
    Bahdanau, D., Cho, K., & Bengio, Y. (2014). Neural machine translation by jointly learning to align and translate. ArXiv Preprint ArXiv:1409.0473.
    Bengio, Y., Simard, P., & Frasconi, P. (1994). Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2), 157–166.
    Blei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent Dirichlet allocation. Journal of Machine Learning Research, 3(Jan), 993–1022.
    Das, D., & Martins, A. F. (2007). A survey on automatic text summarization. Literature Survey for the Language and Statistics II Course at CMU, 4, 192–195.
    Dieng, A. B., Wang, C., Gao, J., & Paisley, J. (2016). TopicRNN: A recurrent neural network with long-range semantic dependency. ArXiv Preprint ArXiv:1611.01702.
    Gimpel, K., & Smith, N. A. (2010). Softmax-margin CRFs: Training log-linear models with cost functions. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics (pp. 733–736). Association for Computational Linguistics.
    Gu, J., Lu, Z., Li, H., & Li, V. O. (2016). Incorporating copying mechanism in sequence-to-sequence learning. ArXiv Preprint ArXiv:1603.06393.
    Karpathy, A. (2015). The unreasonable effectiveness of recurrent neural networks. Andrej Karpathy Blog.
    LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444.
    Lee, D. D., & Seung, H. S. (2001). Algorithms for non-negative matrix factorization. In Advances in neural information processing systems (pp. 556–562).
    Lin, C.-J. (2007). Projected gradient methods for nonnegative matrix factorization. Neural Computation, 19(10), 2756–2779.
    Lin, C.-Y. (2004). ROUGE: A package for automatic evaluation of summaries. Text Summarization Branches Out.
    Mihalcea, R., & Tarau, P. (2004). TextRank: Bringing order into text. In Proceedings of the 2004 conference on empirical methods in natural language processing.
    Nallapati, R., Zhou, B., Gulcehre, C., & Xiang, B. (2016). Abstractive text summarization using sequence-to-sequence RNNs and beyond. ArXiv Preprint ArXiv:1602.06023.
    Olah, C. (2015). Understanding LSTM networks. GitHub blog, posted on August 27, 2015.
    Pascual-Montano, A., Carazo, J. M., Kochi, K., Lehmann, D., & Pascual-Marqui, R. D. (2006). Nonsmooth nonnegative matrix factorization (nsNMF). IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(3), 403–415.
    Pennington, J., Socher, R., & Manning, C. (2014). GloVe: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP) (pp. 1532–1543).
    Statistics. (n.d.). Retrieved August 9, 2018, from https://arxiv.org/archive/stat.ML
    Oral Defense Committee
  • 林耕霈 - Convener
  • 李珮如 - Committee member
  • 康藝晃 - Advisor
    Defense date: 2018-07-20    Submission date: 2018-08-10


