URN | etd-1025115-185021
Author | Hung-shyuan Lin
Author's Email Address | xuvuu3143@hotmail.com
Statistics | This thesis has been viewed 5588 times and downloaded 12 times.
Department | Electrical Engineering
Year | 2015
Semester | 1
Degree | Master
Type of Document |
Language | Chinese (zh-TW, Big5)
Title | Applying The Concept of Fuzzy Logic to Inverse Reinforcement Learning
Date of Defense | 2015-11-06
Page Count | 52
Keywords | Inverse reinforcement learning; Reward function; Fuzzy; Reinforcement learning; AdaBoost; Apprenticeship learning
Abstract | This study concerns Reinforcement Learning, in which an agent learns through interaction with a dynamic environment under a reward function R, updating its policy until learning and behavior converge. In complex and difficult cases, however, the reward function R is especially hard to specify. Inverse Reinforcement Learning addresses this problem: a policy π and a reward function R are defined from expert demonstrations, the expert policy π is compared with the policy π′ induced by the learning agent's reward function R′, and R′ is updated until π′ reproduces the expert's behavior. The comparison procedure uses the error to adjust the weights, and uses the degree of dissimilarity to adjust the reward function R′, incorporating a fuzzy-logic concept into Reinforcement Learning to evaluate the policies (π). This method approximates the expert policy faster.
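The compare-and-update loop described in the abstract can be illustrated with a minimal feature-matching sketch in the style of apprenticeship learning. This is a hypothetical toy, not the thesis's actual algorithm: the fuzzy-logic policy evaluation and AdaBoost-style weighting are omitted, the feature-expectation vectors `mu_expert` and `mu_agent` are made-up numbers, and re-solving the RL problem under the updated reward is replaced by a stand-in step that moves the learner partway toward the expert.

```python
import numpy as np

def irl_step(w, mu_expert, mu_agent, lr=1.0):
    """One reward-weight update: the error between expert and learner
    feature expectations (the 'level of dissimilarity') drives the
    adjustment of the reward weights w for R'(s) = w . phi(s)."""
    error = mu_expert - mu_agent
    return w + lr * error, float(np.linalg.norm(error))

# Hypothetical feature expectations over 3 hand-picked state features.
mu_expert = np.array([0.8, 0.1, 0.6])   # from expert demonstrations
mu_agent = np.array([0.2, 0.5, 0.3])    # from the current learner policy pi'
w = np.zeros(3)

history = []
for _ in range(10):
    w, dissim = irl_step(w, mu_expert, mu_agent)
    history.append(dissim)
    # Stand-in for re-solving the RL problem under the new reward:
    # assume the learner's behaviour moves partway toward the expert's.
    mu_agent = mu_agent + 0.5 * (mu_expert - mu_agent)
```

Under these assumptions the dissimilarity shrinks each iteration, mirroring the abstract's claim that updating R′ by the error drives π′ toward the expert policy.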
Advisory Committee | Jin-Ling Lin (chair); Ming-Yi Ju (co-chair); Yu-Jen Chen (co-chair); Kao-Shing Hwang (advisor)
Files | In-campus access in 5 years; off-campus access in 5 years.
Date of Submission | 2015-11-26