Auto-Perceptive Reinforcement Learning (APRiL)
by Allday, Rebecca, Hadfield, Simon and Bowden, Richard
Abstract:
The relationship between the feedback given in Reinforcement Learning (RL) and visual data input is often extremely complex. Given this, expecting a single system trained end-to-end to learn both how to perceive and how to interact with its environment is unrealistic for complex domains. In this paper we propose Auto-Perceptive Reinforcement Learning (APRiL), separating the perception and control elements of the task. This method uses an auto-perceptive network to encode a feature space. The feature space may explicitly encode available knowledge from the semantically understood state space, but the network is also free to encode unanticipated auxiliary data. By decoupling visual perception from the RL process, APRiL can make use of techniques shown to improve the performance and efficiency of RL training, which are often difficult to apply directly with a visual input. We present results showing that APRiL is effective in tasks where the semantically understood state space is known. We also demonstrate that allowing the feature space to learn auxiliary information lets the visual perception system improve performance by approximately 30%. We also show that maintaining some level of semantics in the encoded state, which can then make use of state-of-the-art RL techniques, saves around 75% of the time that would otherwise be spent collecting simulation examples.
Reference:
Auto-Perceptive Reinforcement Learning (APRiL) (Allday, Rebecca, Hadfield, Simon and Bowden, Richard), In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) workshops, IEEE/RSJ, 2019.
Bibtex Entry:
@InProceedings{Allday19,
  Author    = {Allday, Rebecca and Hadfield, Simon and Bowden, Richard},
  Title     = {Auto-Perceptive Reinforcement Learning (APRiL)},
  Booktitle = {Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) workshops},
  Publisher = {IEEE/RSJ},
  Year      = {2019},
  Month     = {10},
  Abstract  = {The relationship between the feedback given in Reinforcement Learning (RL) and visual data input is often extremely complex. Given this, expecting a single system trained end-to-end to learn both how to perceive and how to interact with its environment is unrealistic for complex domains. In this paper we propose Auto-Perceptive Reinforcement Learning (APRiL), separating the perception and control elements of the task. This method uses an auto-perceptive network to encode a feature space. The feature space may explicitly encode available knowledge from the semantically understood state space, but the network is also free to encode unanticipated auxiliary data. By decoupling visual perception from the RL process, APRiL can make use of techniques shown to improve the performance and efficiency of RL training, which are often difficult to apply directly with a visual input. We present results showing that APRiL is effective in tasks where the semantically understood state space is known. We also demonstrate that allowing the feature space to learn auxiliary information lets the visual perception system improve performance by approximately 30\%. We also show that maintaining some level of semantics in the encoded state, which can then make use of state-of-the-art RL techniques, saves around 75\% of the time that would otherwise be spent collecting simulation examples},
  Url       = {http://personalpages.surrey.ac.uk/s.hadfield/papers/Allday19.pdf},
}