Teaching inverse reinforcement learners via features and demonstrations (bibtex)
by Luis Haug, Sebastian Tschiatschek, Adish Singla
Abstract:
Learning near-optimal behaviour from an expert's demonstrations typically relies on the assumption that the learner knows the features that the true reward function depends on. In this paper, we study the problem of learning from demonstrations in the setting where this is not the case, i.e., where there is a mismatch between the worldviews of the learner and the expert. We introduce a natural quantity, the teaching risk, which measures the potential suboptimality of policies that look optimal to the learner in this setting. We show that bounds on the teaching risk guarantee that the learner is able to find a near-optimal policy using standard algorithms based on inverse reinforcement learning. Based on these findings, we suggest a teaching scheme in which the expert can decrease the teaching risk by updating the learner's worldview, and thus ultimately enable her to find a near-optimal policy.
Reference:
Teaching inverse reinforcement learners via features and demonstrations. Luis Haug, Sebastian Tschiatschek, Adish Singla. In Advances in Neural Information Processing Systems (NeurIPS), 2018.
Bibtex Entry:
@inproceedings{haug2018teaching-risk,
  title={Teaching inverse reinforcement learners via features and demonstrations},
  author={Luis Haug and Sebastian Tschiatschek and Adish Singla},
  year={2018},
  booktitle={Advances in Neural Information Processing Systems (NeurIPS)},
  abstract={Learning near-optimal behaviour from an expert's demonstrations typically relies on the assumption that the learner knows the features that the true reward function depends on. In this paper, we study the problem of learning from demonstrations in the setting where this is not the case, i.e., where there is a mismatch between the worldviews of the learner and the expert. We introduce a natural quantity, the teaching risk, which measures the potential suboptimality of policies that look optimal to the learner in this setting. We show that bounds on the teaching risk guarantee that the learner is able to find a near-optimal policy using standard algorithms based on inverse reinforcement learning. Based on these findings, we suggest a teaching scheme in which the expert can decrease the teaching risk by updating the learner's worldview, and thus ultimately enable her to find a near-optimal policy.},
  teaserImage={figures/sdm/neurips2018-teaching-risk.png},
  teaserCaption={Success of teaching for various teaching risks},
  tag={SDM},
  url={https://arxiv.org/abs/1810.08926}
}