Haochuan Li
Gradient descent finds global minima of deep neural networks
S Du, J Lee, H Li, L Wang, X Zhai
International Conference on Machine Learning, 1675-1685, 2019
Cited by: 840
Convergence of adversarial training in overparametrized neural networks
R Gao, T Cai, H Li, CJ Hsieh, L Wang, JD Lee
Advances in Neural Information Processing Systems 32, 2019
Cited by: 80
Complexity lower bounds for nonconvex-strongly-concave min-max optimization
H Li, Y Tian, J Zhang, A Jadbabaie
Advances in Neural Information Processing Systems 34, 1792-1804, 2021
Cited by: 14
Byzantine-robust federated linear bandits
A Jadbabaie, H Li, J Qian, Y Tian
arXiv preprint arXiv:2204.01155, 2022
Cited by: 2
Neural Network Weights Do Not Converge to Stationary Points: An Invariant Measure Perspective
J Zhang, H Li, S Sra, A Jadbabaie
International Conference on Machine Learning, 26330-26346, 2022
On Convergence of Gradient Descent Ascent: A Tight Local Analysis
H Li, F Farnia, S Das, A Jadbabaie
International Conference on Machine Learning, 12717-12740, 2022
On the Complexity of Nonconvex-Strongly-Concave Smooth Minimax Optimization Using First-Order Methods
H Li
Massachusetts Institute of Technology, 2021
Randomness in Deconvolutional Networks for Visual Representation
K He, J Wang, H Li, Y Shu, M Zhang, M Zhu, L Wang, JE Hopcroft
arXiv preprint arXiv:1704.00330, 2017
Articles 1–8