Curiosity-driven algorithm for reinforcement learning

Abstract
A central problem in current Reinforcement Learning algorithms is balancing exploitation of existing knowledge against exploration for new experience. A curiosity-based exploration bonus has been proposed to address this problem, but current implementations are vulnerable to stochastic noise inside the environment. The approach presented in this thesis utilises an exploration bonus based on the predicted novelty of the next state, which protects exploration from noise issues during training. This work also introduces a new way of combining extrinsic and intrinsic rewards. Both improvements help to overcome several problems that have limited Reinforcement Learning until now.
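
The abstract does not give the thesis's exact formulation, so the following is only a minimal sketch of the general family of techniques it builds on: a forward model whose prediction error on the next state serves as the intrinsic novelty bonus, combined additively with the extrinsic reward. All names here (ForwardModel, intrinsic_reward, combined_reward, the coefficient beta) and the additive mixing rule are illustrative assumptions, not the method proposed in the thesis.

import numpy as np

class ForwardModel:
    # Hypothetical linear forward model; its prediction error on the
    # next state acts as the intrinsic (curiosity) reward.
    def __init__(self, state_dim, action_dim, lr=1e-2):
        self.W = np.zeros((state_dim, state_dim + action_dim))
        self.lr = lr

    def intrinsic_reward(self, state, action, next_state):
        # Predict the next state from the current state and action.
        x = np.concatenate([state, action])
        error = next_state - self.W @ x
        # One gradient step on the squared prediction error.
        self.W += self.lr * np.outer(error, x)
        # The remaining error is the novelty bonus: large for transitions
        # the model cannot yet predict, shrinking as they become familiar.
        return float(np.mean(error ** 2))

def combined_reward(r_ext, r_int, beta=0.1):
    # Additive mixing is an assumption; the thesis introduces its own
    # way of combining extrinsic and intrinsic rewards.
    return r_ext + beta * r_int

# Toy usage with random transitions.
rng = np.random.default_rng(0)
model = ForwardModel(state_dim=4, action_dim=2)
s, a, s_next = rng.normal(size=4), rng.normal(size=2), rng.normal(size=4)
print(combined_reward(r_ext=1.0, r_int=model.intrinsic_reward(s, a, s_next)))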
Format
Master's thesis
Published
2019
The permanent address of the publication
https://urn.fi/URN:NBN:fi:jyu-201905292863
Language
English
License
In Copyright / Open Access