Jakob Hollenstein
Verified email at uibk.ac.at
Pink noise is all you need: Colored noise exploration in deep reinforcement learning
O Eberhard, J Hollenstein, C Pinneri, G Martius
The Eleventh International Conference on Learning Representations, 2022
Cited by 32, 2022
Action Noise in Off-Policy Deep Reinforcement Learning: Impact on Exploration and Performance
J Hollenstein, S Auddy, M Saveriano, E Renaudo, J Piater
Transactions on Machine Learning Research, 2022
Cited by 19, 2022
Continual learning from demonstration of robotics skills
S Auddy, J Hollenstein, M Saveriano, A Rodríguez-Sánchez, J Piater
Robotics and Autonomous Systems 165, 104427, 2023
Cited by 17, 2023
A Visual Intelligence Scheme for Hard Drive Disassembly in Automated Recycling Routines
E Yildiz, T Brinker, E Renaudo, JJ Hollenstein, S Haller-Seeber, JH Piater, ...
ROBOVIS, 17-27, 2020
Cited by 9, 2020
Scalable and Efficient Continual Learning from Demonstration via Hypernetwork-generated Stable Dynamics Model
S Auddy, J Hollenstein, M Saveriano, A Rodríguez-Sánchez, J Piater
arXiv preprint arXiv:2311.03600, 2023
Cited by 5, 2023
Hypernetwork-PPO for Continual Reinforcement Learning
P Schöpf, S Auddy, J Hollenstein, A Rodriguez-Sanchez
Deep Reinforcement Learning Workshop NeurIPS 2022, 2022
Cited by 5, 2022
Improving Exploration of Deep Reinforcement Learning using Planning for Policy Search
JJ Hollenstein, E Renaudo, J Piater
Cited by 3, 2019
Visual Control of Hidden-Semi-Markov-Model based Acoustic Speech Synthesis
J Hollenstein, M Pucher, D Schabus
Auditory-Visual Speech Processing (AVSP) 2013, 2013
Cited by 3, 2013
How does the type of exploration-noise affect returns and exploration on Reinforcement Learning benchmarks?
J Hollenstein, M Saveriano, S Auddy, E Renaudo, J Piater
Austrian Robotics Workshop, 22-26, 2021
Cited by 2, 2021
How do Offline Measures for Exploration in Reinforcement Learning behave?
JJ Hollenstein, S Auddy, M Saveriano, E Renaudo, J Piater
arXiv preprint arXiv:2010.15533, 2020
Cited by 2, 2020
Colored Noise in PPO: Improved Exploration and Performance Through Correlated Action Sampling
J Hollenstein, G Martius, J Piater
Proceedings of the AAAI Conference on Artificial Intelligence 38 (11), 12466 …, 2024
Cited by 1, 2024
Improving the Exploration of Deep Reinforcement Learning in Continuous Domains using Planning for Policy Search
JJ Hollenstein, E Renaudo, M Saveriano, J Piater
arXiv preprint arXiv:2010.12974, 2020
Cited by 1, 2020
How Does Explicit Exploration Influence Deep Reinforcement Learning
JJ Hollenstein, E Renaudo, M Saveriano, J Piater
Joint Austrian Computer Vision and Robotics Workshop, 29-30, 2020
Cited by 1, 2020
Evaluating Planning for Policy Search
JJ Hollenstein, J Piater
1st Workshop on Closing the Reality Gap in Sim2real Transfer for …, 2019
Cited by 1, 2019
Unsupervised Learning of Effective Actions in Robotics
M Zaric, J Hollenstein, J Piater, E Renaudo
arXiv preprint arXiv:2404.02728, 2024
2024
Differentiable Forward Kinematics for TensorFlow 2
L Mölschl, JJ Hollenstein, J Piater
arXiv preprint arXiv:2301.09954, 2023
2023
An Extended Visual Intelligence Scheme for Disassembly in Automated Recycling Routines
E Yildiz, E Renaudo, J Hollenstein, J Piater, F Wörgötter
International Conference on Robotics, Computer Vision and Intelligent …, 2020
2020
Pink Noise LQR: How does Colored Noise affect the Optimal Policy in RL?
J Hollenstein, M Zaric, S Tosatto, J Piater
ICML 2024 Workshop: Foundations of Reinforcement Learning and Control …
Pink Noise Is All You Need: Colored Noise Exploration in Deep Reinforcement Learning
O Eberhard, J Hollenstein, C Pinneri, G Martius
Deep Reinforcement Learning Workshop NeurIPS 2022
Can Expressive Posterior Approximations Improve Variational Continual Learning?
S Auddy, J Hollenstein, M Saveriano, A Rodríguez-Sánchez, J Piater