Title page for etd-0727118-134901
Title: Interpretable Representation Learning Based on Deep Rule Forests (基於深度規則森林的可解釋表徵學習演算法)
Department: Department of Information Management
Year, semester: The spring semester of Academic Year 106
Language: English
Degree: Master
Number of pages: 59
Author: Bo-Wen Kuo (郭博文)
Advisor:
Convenor:
Advisory Committee:
Date of Exam: 2018-07-20
Date of Submission: 2018-08-28
Keywords: Rule Learning, Random Forest, Representation Learning, Interpretability, Deep Rule Forest
Statistics: The thesis/dissertation has been browsed 6072 times and downloaded 68 times.
中文摘要 (Chinese Abstract, translated)

The spirit of tree-based methods lies in learning rules, and many machine learning algorithms are tree-based. More complex tree learners may yield more accurate predictive models, but at the cost of model interpretability. The spirit of representation learning lies in extracting abstract concepts from surface-level data; deep neural networks are the most popular representation learning method, yet the unaccountability of their learned representations has long been a shortcoming. In this thesis, we propose a method called Deep Rule Forest, which learns region representations with random forests in a layer-wise structure. The learned region representations can be combined with other machine learning algorithms: we train CART decision trees on the region representations learned by Deep Rule Forests and find that their prediction accuracy is sometimes even higher than that of ensemble learning methods.

Abstract

The spirit of tree-based methods is to learn rules, and a large number of machine learning techniques are tree-based. More complicated tree learners may produce more accurate predictive models, but may sacrifice model interpretability. The spirit of representation learning, on the other hand, is to extract abstract concepts from manifestations of the data; deep neural networks (DNNs) are the most popular representation learning method. However, unaccountable feature representations are a shortcoming of DNNs. In this thesis, we propose an approach, Deep Rule Forest (DRF), which learns region representations with random forests arranged in a deep, layer-wise structure. The learned region representations, expressed as interpretable rules, can be combined with other machine learning algorithms. We trained CART decision trees on the region representations learned by DRF and found that their prediction accuracy is sometimes better than that of ensemble learning methods.
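The pipeline the abstract describes can be sketched with scikit-learn: each layer is a random forest whose leaf (region) indices serve as the representation fed to the next layer, and a CART decision tree is trained on the final representation. This is a minimal illustration only; the dataset, layer depth, and all hyper-parameter values below are assumptions, not the thesis's actual settings.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)  # illustrative dataset, not from the thesis
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rep_tr, rep_te = X_tr, X_te
for _ in range(2):  # two stacked forest layers (an assumed depth)
    forest = RandomForestClassifier(
        n_estimators=10, max_depth=3, random_state=0
    ).fit(rep_tr, y_tr)
    # apply() maps each sample to the leaf (region) index in every tree;
    # one-hot encoding turns those indices into the next layer's features.
    enc = OneHotEncoder(handle_unknown="ignore").fit(forest.apply(rep_tr))
    rep_tr = enc.transform(forest.apply(rep_tr)).toarray()
    rep_te = enc.transform(forest.apply(rep_te)).toarray()

# Train an interpretable CART tree on the learned region representation.
cart = DecisionTreeClassifier(max_depth=4, random_state=0).fit(rep_tr, y_tr)
accuracy = cart.score(rep_te, y_te)
```

Each region feature corresponds to a conjunction of the split rules along one root-to-leaf path, which is what makes the learned representation readable as rules.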
Table of Contents

論文審定書 (Thesis Approval Form) … i
中文摘要 … ii
Abstract … iii
Table of Contents … iv
1. Introduction … 1
2. Background Review … 3
2.1. Tree-based methods … 3
2.2. Representation learning … 9
2.3. Forward thinking … 11
2.4. Explainable AI … 14
2.5. Information processing … 15
3. Building Deep Rule Forest … 19
3.1. Forward forest structure … 19
3.2. Growing stage and pruning stage … 22
3.3. Interpretability … 24
3.4. Hyper-parameters … 26
4. Experiment … 28
4.1. Prediction accuracy … 30
4.2. Information bottleneck principle … 31
4.3. Influence of hyper-parameters … 34
4.4. Backtracking rules … 38
5. Conclusion … 43
6. Reference … 45
7. Appendix A … 48