| Title | Authors | Venue | Cited by | Year |
| --- | --- | --- | --- | --- |
| Continuous control with deep reinforcement learning | TP Lillicrap, JJ Hunt, A Pritzel, N Heess, T Erez, Y Tassa, D Silver, ... | arXiv preprint arXiv:1509.02971, 2015 | 17773 | 2015 |
| Deterministic policy gradient algorithms | D Silver, G Lever, N Heess, T Degris, D Wierstra, M Riedmiller | ICML, 2014 | 5509 | 2014 |
| Recurrent models of visual attention | V Mnih, N Heess, A Graves | Advances in Neural Information Processing Systems, 2204-2212, 2014 | 4934 | 2014 |
| Relational inductive biases, deep learning, and graph networks | PW Battaglia, JB Hamrick, V Bapst, A Sanchez-Gonzalez, V Zambaldi, ... | arXiv preprint arXiv:1806.01261, 2018 | 3919 | 2018 |
| Emergence of locomotion behaviours in rich environments | N Heess, S Sriram, J Lemmon, J Merel, G Wayne, Y Tassa, T Erez, ... | arXiv preprint arXiv:1707.02286, 2017 | 1162 | 2017 |
| Feudal networks for hierarchical reinforcement learning | AS Vezhnevets, S Osindero, T Schaul, N Heess, M Jaderberg, D Silver, ... | Proceedings of the 34th International Conference on Machine Learning - Volume ..., 2017 | 1114 | 2017 |
| Sample efficient actor-critic with experience replay | Z Wang, V Bapst, N Heess, V Mnih, R Munos, K Kavukcuoglu, ... | arXiv preprint arXiv:1611.01224, 2016 | 1042 | 2016 |
| A Generalist Agent | S Reed, K Zolna, E Parisotto, SG Colmenarejo, A Novikov, G Barth-Maron, ... | arXiv preprint arXiv:2205.06175, 2022 | 927 | 2022 |
| Leveraging demonstrations for deep reinforcement learning on robotics problems with sparse rewards | M Večerík, T Hester, J Scholz, F Wang, O Pietquin, B Piot, N Heess, ... | arXiv preprint arXiv:1707.08817, 2017 | 837 | 2017 |
| Graph networks as learnable physics engines for inference and control | A Sanchez-Gonzalez, N Heess, JT Springenberg, J Merel, M Riedmiller, ... | arXiv preprint arXiv:1806.01242, 2018 | 738 | 2018 |
| Imagination-augmented agents for deep reinforcement learning | T Weber, S Racanière, DP Reichert, L Buesing, A Guez, DJ Rezende, ... | arXiv preprint arXiv:1707.06203, 2017 | 732* | 2017 |
| Learning continuous control policies by stochastic value gradients | N Heess, G Wayne, D Silver, T Lillicrap, T Erez, Y Tassa | Advances in Neural Information Processing Systems, 2944-2952, 2015 | 689 | 2015 |
| Distributed distributional deterministic policy gradients | G Barth-Maron, MW Hoffman, D Budden, W Dabney, D Horgan, A Muldal, ... | arXiv preprint arXiv:1804.08617, 2018 | 674 | 2018 |
| Sim-to-real robot learning from pixels with progressive nets | AA Rusu, M Vecerik, T Rothörl, N Heess, R Pascanu, R Hadsell | arXiv preprint arXiv:1610.04286, 2016 | 650 | 2016 |
| Distral: Robust multitask reinforcement learning | Y Teh, V Bapst, WM Czarnecki, J Quan, J Kirkpatrick, R Hadsell, N Heess, ... | Advances in Neural Information Processing Systems, 4496-4506, 2017 | 630 | 2017 |
| Attend, infer, repeat: Fast scene understanding with generative models | SMA Eslami, N Heess, T Weber, Y Tassa, D Szepesvari, GE Hinton | Advances in Neural Information Processing Systems, 3225-3233, 2016 | 609 | 2016 |
| Maximum a posteriori policy optimisation | A Abdolmaleki, JT Springenberg, Y Tassa, R Munos, N Heess, ... | arXiv preprint arXiv:1806.06920, 2018 | 531 | 2018 |
| Learning by playing - solving sparse reward tasks from scratch | M Riedmiller, R Hafner, T Lampe, M Neunert, J Degrave, T Van de Wiele, ... | arXiv preprint arXiv:1802.10567, 2018 | 507 | 2018 |
| Imagination-augmented agents for deep reinforcement learning | S Racanière, T Weber, DP Reichert, L Buesing, A Guez, DJ Rezende, ... | Advances in Neural Information Processing Systems, 5690-5701, 2017 | 483 | 2017 |