Title page for etd-1025115-185021



URN etd-1025115-185021
Author Hung-shyuan Lin
Author's Email Address xuvuu3143@hotmail.com
Statistics This thesis has been viewed 5351 times and downloaded 0 times.
Department Electrical Engineering
Year 2015
Semester 1
Degree Master
Type of Document
Language zh-TW.Big5 Chinese
Title Applying The Concept of Fuzzy Logic to Inverse Reinforcement Learning
Date of Defense 2015-11-06
Page Count 52
Keyword
  • Inverse reinforcement learning
  • Reward function
  • Fuzzy
  • Reinforcement learning
  • AdaBoost
  • Apprenticeship learning
Abstract This study concerns reinforcement learning, in which an agent learns by interacting with a dynamic environment, receives a reward function R, and updates its policy until learning and behavior converge. In complex and difficult tasks, however, the reward function R is especially hard to specify. Inverse reinforcement learning addresses this problem: a policy π and a reward function R are derived from expert demonstrations, π is compared with the policy π′ induced by the learning agent's reward function R′, and R′ is updated until π′ reproduces the expert's behavior. The comparison procedure uses the error to adjust the weights, exploiting the degree of dissimilarity to adjust the reward function R′, and applies a fuzzy-logic formulation of reinforcement learning to verify the policies (π). The proposed method approximates the expert's policy more quickly.
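The loop described in the abstract, in which the learner's reward R′ is repeatedly adjusted by the dissimilarity between the expert's behavior and the policy π′ that R′ induces, can be illustrated with a small sketch. The Python example below is an illustration only, not the thesis's algorithm: it assumes a hypothetical 4x4 grid world with one-hot state features, and it replaces the AdaBoost-style weighting and fuzzy policy verification with a plain feature-expectation-matching update; all names and parameters (transition_model, greedy_policy, the 0.5 step size) are invented for the sketch.

```python
# Minimal apprenticeship-learning-style IRL sketch (assumed 4x4 grid world).
import numpy as np

N_STATES, N_ACTIONS, GAMMA = 16, 4, 0.9

def transition_model():
    """Deterministic grid transitions; actions: up, down, left, right."""
    P = np.zeros((N_STATES, N_ACTIONS, N_STATES))
    for s in range(N_STATES):
        r, c = divmod(s, 4)
        moves = [(max(r - 1, 0), c), (min(r + 1, 3), c),
                 (r, max(c - 1, 0)), (r, min(c + 1, 3))]
        for a, (nr, nc) in enumerate(moves):
            P[s, a, nr * 4 + nc] = 1.0
    return P

def greedy_policy(P, reward, iters=100):
    """Value iteration: the policy induced by the current reward estimate."""
    V = np.zeros(N_STATES)
    for _ in range(iters):
        Q = reward[:, None] + GAMMA * (P @ V)      # shape (S, A)
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

def feature_expectations(P, policy, start=0, horizon=60):
    """Discounted state-visitation counts (one-hot features) from the start state."""
    mu = np.zeros(N_STATES)
    d = np.zeros(N_STATES)
    d[start] = 1.0
    P_pi = P[np.arange(N_STATES), policy]          # (S, S) transitions under the policy
    for t in range(horizon):
        mu += (GAMMA ** t) * d
        d = d @ P_pi
    return mu

P = transition_model()

# Hidden "expert" reward: -1 per step, +10 in the bottom-right corner state.
true_reward = -np.ones(N_STATES)
true_reward[15] = 10.0
expert_policy = greedy_policy(P, true_reward)
mu_expert = feature_expectations(P, expert_policy)  # stands in for the expert demonstration

# Learner: reward R'(s) = w[s]; adjust w by the dissimilarity between behaviors.
w = np.zeros(N_STATES)
for _ in range(30):
    learner_policy = greedy_policy(P, w)             # pi' induced by R'
    mu_learner = feature_expectations(P, learner_policy)
    error = mu_expert - mu_learner                   # level of dissimilarity
    if np.linalg.norm(error) < 1e-3:                 # pi' behaves like the expert
        break
    w += 0.5 * error                                 # push R' toward expert-like behavior

print("remaining behavior mismatch:", round(np.linalg.norm(mu_expert - mu_learner), 4))
```

Each pass computes the policy induced by the current reward estimate, measures how far its discounted state visitation falls from the expert's, and nudges the reward weights in that direction, which is the compare-and-update cycle the abstract describes.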
Advisory Committee
  • Jin-Ling Lin - chair
  • Ming-Yi Ju - co-chair
  • Yu-Jen Chen - co-chair
  • Kao-Shing Hwang - advisor
Files
  • etd-1025115-185021.pdf
  • In-campus access after 5 years; off-campus access after 5 years.
Date of Submission 2015-11-26
