Praneeth Netrapalli
Verified email at google.com - Homepage
Title
Cited by
Year
Low-rank matrix completion using alternating minimization
P Jain, P Netrapalli, S Sanghavi
Proceedings of the forty-fifth annual ACM symposium on Theory of computing …, 2013
Cited by: 1228 | Year: 2013
How to escape saddle points efficiently
C Jin, R Ge, P Netrapalli, SM Kakade, MI Jordan
International conference on machine learning, 1724-1732, 2017
Cited by: 994 | Year: 2017
Phase retrieval using alternating minimization
P Netrapalli, P Jain, S Sanghavi
Advances in Neural Information Processing Systems 26, 2013
Cited by: 725 | Year: 2013
Morel: Model-based offline reinforcement learning
R Kidambi, A Rajeswaran, P Netrapalli, T Joachims
Advances in neural information processing systems 33, 21810-21823, 2020
Cited by: 699 | Year: 2020
What is local optimality in nonconvex-nonconcave minimax optimization?
C Jin, P Netrapalli, M Jordan
International conference on machine learning, 4880-4889, 2020
Cited by: 453* | Year: 2020
Non-convex robust PCA
P Netrapalli, UN Niranjan, S Sanghavi, A Anandkumar, P Jain
Advances in neural information processing systems 27, 2014
Cited by: 361 | Year: 2014
The pitfalls of simplicity bias in neural networks
H Shah, K Tamuly, A Raghunathan, P Jain, P Netrapalli
Advances in Neural Information Processing Systems 33, 9573-9585, 2020
Cited by: 358 | Year: 2020
Accelerated gradient descent escapes saddle points faster than gradient descent
C Jin, P Netrapalli, MI Jordan
Conference On Learning Theory, 1042-1085, 2018
Cited by: 288 | Year: 2018
On nonconvex optimization for machine learning: Gradients, stochasticity, and saddle points
C Jin, P Netrapalli, R Ge, SM Kakade, MI Jordan
Journal of the ACM (JACM) 68 (2), 1-29, 2021
Cited by: 239* | Year: 2021
Learning the graph of epidemic cascades
P Netrapalli, S Sanghavi
ACM SIGMETRICS Performance Evaluation Review 40 (1), 211-222, 2012
Cited by: 231 | Year: 2012
Efficient algorithms for smooth minimax optimization
KK Thekumparampil, P Jain, P Netrapalli, S Oh
Advances in Neural Information Processing Systems 32, 2019
Cited by: 216 | Year: 2019
Parallelizing stochastic gradient descent for least squares regression: mini-batching, averaging, and model misspecification
P Jain, SM Kakade, R Kidambi, P Netrapalli, A Sidford
Journal of machine learning research 18 (223), 1-42, 2018
Cited by: 213* | Year: 2018
Efficient domain generalization via common-specific low-rank decomposition
V Piratla, P Netrapalli, S Sarawagi
International conference on machine learning, 7728-7738, 2020
Cited by: 208 | Year: 2020
Learning sparsely used overcomplete dictionaries via alternating minimization
A Agarwal, A Anandkumar, P Jain, P Netrapalli
SIAM Journal on Optimization 26 (4), 2775-2799, 2016
Cited by: 198 | Year: 2016
The step decay schedule: A near optimal, geometrically decaying learning rate procedure for least squares
R Ge, SM Kakade, R Kidambi, P Netrapalli
Advances in neural information processing systems 32, 2019
Cited by: 186 | Year: 2019
A short note on concentration inequalities for random vectors with subgaussian norm
C Jin, P Netrapalli, R Ge, SM Kakade, MI Jordan
arXiv preprint arXiv:1902.03736, 2019
Cited by: 160 | Year: 2019
Streaming PCA: Matching matrix Bernstein and near-optimal finite sample guarantees for Oja's algorithm
P Jain, C Jin, SM Kakade, P Netrapalli, A Sidford
Conference on learning theory, 1147-1164, 2016
Cited by: 154 | Year: 2016
Information-theoretic thresholds for community detection in sparse networks
J Banks, C Moore, J Neeman, P Netrapalli
Conference on Learning Theory, 383-416, 2016
Cited by: 149* | Year: 2016
On the insufficiency of existing momentum schemes for stochastic optimization
R Kidambi, P Netrapalli, P Jain, S Kakade
2018 Information Theory and Applications Workshop (ITA), 1-9, 2018
Cited by: 132 | Year: 2018
Accelerating stochastic gradient descent for least squares regression
P Jain, SM Kakade, R Kidambi, P Netrapalli, A Sidford
Conference On Learning Theory, 545-604, 2018
Cited by: 131 | Year: 2018
Articles 1–20