Learning to act with recurrent neural networks

Yad Faeq, Cosmo Harrigan, Natalia Díaz Rodríguez, Bogdana Rakova, Dan Girshovich. Learning to act with recurrent neural networks. In: Christine Peterson, Julia Bossmann, Steve Burgess, Allison Duettmann, Maya Lockwood, Miguel Aznar, Marcia Seidler, Jim Lewis (Eds.), AI for Scientific Progress Workshop, 45–47, Foresight Institute, 2016.

Abstract:

In the general reinforcement learning setting, an agent needs to learn an optimal policy for achieving goals in an environment, while general-purpose agents need to exhibit this ability across a diverse range of environments. Our aim is to design a recurrent neural network architecture that uses a hierarchy of LSTM units, an external memory, and learned compositions of modules to achieve transfer learning and avoid catastrophic forgetting. This functionality could be applied to the design and control of nanoscale systems.
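
Illustrative sketch (not from the publication): the abstract describes a hierarchy of LSTM units combined with an external memory for acting in an environment. The minimal PyTorch-style sketch below shows one plausible reading of such an architecture, with a two-level LSTM hierarchy and a content-addressed memory read. All module names, sizes, the slow-timescale update, and the attention-based read are assumptions made for illustration only, not the authors' implementation.

# Minimal, hypothetical sketch of a hierarchical-LSTM agent with an
# external memory, as suggested by the abstract. Not the paper's model.
import torch
import torch.nn as nn
import torch.nn.functional as F


class HierarchicalLSTMAgent(nn.Module):
    def __init__(self, obs_dim, action_dim, hidden=64, mem_slots=16):
        super().__init__()
        # Lower-level LSTM consumes raw observations at every step.
        self.low = nn.LSTMCell(obs_dim, hidden)
        # Higher-level LSTM consumes the lower level's state at a slower rate.
        self.high = nn.LSTMCell(hidden, hidden)
        # External memory: learned slots read by dot-product attention
        # (no write operation in this sketch).
        self.memory = nn.Parameter(torch.randn(mem_slots, hidden) * 0.1)
        self.policy = nn.Linear(hidden * 3, action_dim)

    def init_state(self, batch):
        zeros = lambda: torch.zeros(batch, self.low.hidden_size)
        return (zeros(), zeros()), (zeros(), zeros())

    def forward(self, obs, state, step, high_period=4):
        (h_lo, c_lo), (h_hi, c_hi) = state
        h_lo, c_lo = self.low(obs, (h_lo, c_lo))
        # Update the higher level only every `high_period` steps.
        if step % high_period == 0:
            h_hi, c_hi = self.high(h_lo, (h_hi, c_hi))
        # Content-based read from the external memory.
        attn = F.softmax(h_lo @ self.memory.t(), dim=-1)
        read = attn @ self.memory
        logits = self.policy(torch.cat([h_lo, h_hi, read], dim=-1))
        return logits, ((h_lo, c_lo), (h_hi, c_hi))


# Usage: sample an action for a single environment step.
agent = HierarchicalLSTMAgent(obs_dim=8, action_dim=4)
state = agent.init_state(batch=1)
obs = torch.randn(1, 8)
logits, state = agent(obs, state, step=0)
action = torch.distributions.Categorical(logits=logits).sample()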

BibTeX entry:

@INPROCEEDINGS{inpFaHaDxRaGi16a,
  title = {Learning to act with recurrent neural networks},
  booktitle = {AI for Scientific Progress Workshop},
  author = {Faeq, Yad and Harrigan, Cosmo and Díaz Rodríguez, Natalia and Rakova, Bogdana and Girshovich, Dan},
  editor = {Peterson, Christine and Bossmann, Julia and Burgess, Steve and Duettmann, Allison and Lockwood, Maya and Aznar, Miguel and Seidler, Marcia and Lewis, Jim},
  publisher = {Foresight Institute},
  pages = {45--47},
  year = {2016},
}

Belongs to TUCS Research Unit(s): Embedded Systems Laboratory (ESLAB)
