Laixi Shi
Computing & Mathematical Sciences, Caltech
Verified email address at - Homepage
Cited by
Pessimistic Q-learning for offline reinforcement learning: Towards optimal sample complexity
L Shi, G Li, Y Wei, Y Chen, Y Chi
International Conference on Machine Learning (ICML), 2022
Micro hand gesture recognition system using ultrasonic active sensing
Y Sang, L Shi, Y Liu
IEEE Access 6, 49339-49347, 2018
Settling the sample complexity of model-based offline reinforcement learning
G Li, L Shi, Y Chen, Y Chi, Y Wei
The Annals of Statistics, DOI: 10.1214/23-AOS2342; arXiv preprint arXiv:2204 …, 2022
Breaking the sample complexity barrier to regret-optimal model-free reinforcement learning
G Li, L Shi, Y Chen, Y Chi
Information and Inference: A Journal of the IMA; Short version as NeurIPS …, 2023
Distributionally robust model-based offline reinforcement learning with near-optimal sample complexity
L Shi, Y Chi
arXiv preprint arXiv:2208.05767, 2022
Manifold gradient descent solves multi-channel sparse blind deconvolution provably and efficiently
L Shi, Y Chi
IEEE Transactions on Information Theory 67 (7), 4784-4811, 2021
Device-free multiple people localization through floor vibration
L Shi, M Mirshekari, J Fagert, Y Chi, HY Noh, P Zhang, S Pan
Proceedings of the 1st ACM International Workshop on Device-Free Human …, 2019
The Curious Price of Distributional Robustness in Reinforcement Learning with a Generative Model
L Shi, G Li, Y Wei, Y Chen, M Geist, Y Chi
Conference on Neural Information Processing Systems (NeurIPS), 2023
Curriculum Reinforcement Learning using Optimal Transport via Gradual Domain Adaptation
P Huang, M Xu, J Zhu, L Shi, F Fang, D Zhao
Conference on Neural Information Processing Systems (NeurIPS), 2022
Seeing is not Believing: Robust Reinforcement Learning against Spurious Correlation
W Ding*, L Shi*, Y Chi, D Zhao
Conference on Neural Information Processing Systems (NeurIPS), 2023
A trajectory is worth three sentences: multimodal transformer for offline reinforcement learning
Y Wang, M Xu, L Shi, Y Chi
Uncertainty in Artificial Intelligence, 2226-2236, 2023
Sample complexity of offline distributionally robust linear Markov decision processes
H Wang, L Shi, Y Chi
arXiv preprint arXiv:2403.12946, 2024
Federated Offline Reinforcement Learning: Collaborative Single-Policy Coverage Suffices
J Woo, L Shi, G Joshi, Y Chi
arXiv preprint arXiv:2402.05876, 2024
Robust Camera Pose Estimation For Image Stitching
L Shi, D Liu, J Thornton
2021 IEEE International Conference on Image Processing (ICIP), 2838-2842, 2021
Fusion-based digital image correlation framework for strain measurement
L Shi, D Liu, M Umeda, N Hana
ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and …, 2021
Latent Goal Allocation for Multi-Agent Goal-Conditioned Self-Supervised Imitation Learning
R Chen*, P Huang*, L Shi*
NeurIPS workshop on Bayesian Deep Learning, 2021
Sample-Efficient Robust Multi-Agent Reinforcement Learning in the Face of Environmental Uncertainty
L Shi, E Mazumdar, Y Chi, A Wierman
arXiv preprint arXiv:2404.18909, 2024
Offline Reinforcement Learning with On-Policy Q-Function Regularization
L Shi, R Dadashi, Y Chi, PS Castro, M Geist
European Conference on Machine Learning (ECML), 2023
Zoom Out: Abstractions for Efficient Radar Algorithms on COTS architectures
TM Low, Y Chi, J Hoe, S Kumar, A Prabhakara, L Shi, U Sridhar, ...
2022 IEEE International Symposium on Phased Array Systems & Technology (PAST …, 2022
Enhancing Efficiency of Safe Reinforcement Learning via Sample Manipulation
S Gu*, L Shi*, Y Ding, A Knoll, C Spanos, A Wierman, M Jin
arXiv preprint arXiv:2405.20860, 2024