Shayegan Omidshafiei
Field AI (previously: Google DeepMind, Google Research, MIT)
Deep Decentralized Multi-task Multi-Agent Reinforcement Learning under Partial Observability
S Omidshafiei, J Pazis, C Amato, JP How, J Vian
Proceedings of the 34th International Conference on Machine Learning (ICML …, 2017
OpenSpiel: A framework for reinforcement learning in games
M Lanctot, E Lockhart, JB Lespiau, V Zambaldi, S Upadhyay, J Pérolat, ...
arXiv preprint arXiv:1908.09453, 2019
Mastering the game of Stratego with model-free multiagent reinforcement learning
J Perolat, B De Vylder, D Hennes, E Tarassov, F Strub, V de Boer, ...
Science 378 (6623), 990-996, 2022
Learning to Teach in Cooperative Multiagent Reinforcement Learning
S Omidshafiei, DK Kim, M Liu, G Tesauro, M Riemer, C Amato, ...
AAAI 2019, Best Student Paper Honorable Mention, 2019
α-Rank: Multi-Agent Evaluation by Evolution
S Omidshafiei, C Papadimitriou, G Piliouras, K Tuyls, M Rowland, ...
Scientific reports 9 (1), 9937, 2019
From motor control to team play in simulated humanoid football
S Liu, G Lever, Z Wang, J Merel, SMA Eslami, D Hennes, WM Czarnecki, ...
Science Robotics 7 (69), eabo0235, 2022
A generalized training approach for multiagent learning
P Muller, S Omidshafiei, M Rowland, K Tuyls, J Perolat, S Liu, D Hennes, ...
International Conference on Learning Representations (ICLR), 2020
Real World Games Look Like Spinning Tops
WM Czarnecki, G Gidel, B Tracey, K Tuyls, S Omidshafiei, D Balduzzi, ...
Conference on Neural Information Processing Systems (NeurIPS), 2020
Game Plan: What AI can do for Football, and What Football can do for AI
K Tuyls, S Omidshafiei, P Muller, Z Wang, J Connor, D Hennes, I Graham, ...
Journal of Artificial Intelligence Research 71, 41-88, 2021
Decentralized control of multi-robot partially observable Markov decision processes using belief space macro-actions
S Omidshafiei, AA Agha-Mohammadi, C Amato, SY Liu, JP How, J Vian
The International Journal of Robotics Research (IJRR), 0278364917692864, 2017
Neural replicator dynamics: Multiagent learning via hedging policy gradients
D Hennes, D Morrill, S Omidshafiei, R Munos, J Perolat, M Lanctot, ...
Proceedings of the 19th International Conference on Autonomous Agents and …, 2020
From Poincaré recurrence to convergence in imperfect information games: Finding equilibrium via regularization
J Perolat, R Munos, JB Lespiau, S Omidshafiei, M Rowland, P Ortega, ...
International Conference on Machine Learning, 8525-8535, 2021
Decentralized Control of Partially Observable Markov Decision Processes using Belief Space Macro-actions
S Omidshafiei, A Agha-mohammadi, C Amato, JP How
IEEE International Conference on Robotics and Automation (ICRA), 5962-5969, 2015
Navigating the landscape of multiplayer games
S Omidshafiei, K Tuyls, WM Czarnecki, FC Santos, M Rowland, J Connor, ...
Nature communications 11 (1), 5603, 2020
Learning for Multi-robot Cooperation in Partially Observable Stochastic Environments with Macro-actions
M Liu, K Sivakumar, S Omidshafiei, C Amato, JP How
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2017, 2017
Multiagent evaluation under incomplete information
M Rowland, S Omidshafiei, K Tuyls, J Perolat, M Valko, G Piliouras, ...
arXiv preprint arXiv:1909.09849, 2019
Simultaneous mapping and planning by a robot
A Aghamohammadi, SD Spindola, BF Behabadi, C Lott, S Omidshafiei, ...
US Patent 10,093,021, 2018
Learning hierarchical teaching policies for cooperative agents
DK Kim, M Liu, S Omidshafiei, S Lopez-Cot, M Riemer, G Habibi, ...
arXiv preprint arXiv:1903.03216, 2019
MAR-CPS: Measurable augmented reality for prototyping cyber-physical systems
S Omidshafiei, AA Agha-Mohammadi, YF Chen, NK Üre, JP How, JL Vian, ...
AIAA Infotech@Aerospace, 0643, 2015
Policy distillation and value matching in multiagent reinforcement learning
S Wadhwania, DK Kim, S Omidshafiei, JP How
2019 IEEE/RSJ International Conference on Intelligent Robots and Systems …, 2019