MiLeNAS: Efficient neural architecture search via mixed-level reformulation. C He, H Ye, L Shen, T Zhang. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2020.
Stochastic recursive gradient descent ascent for stochastic nonconvex-strongly-concave minimax problems. L Luo, H Ye, Z Huang, T Zhang. Advances in Neural Information Processing Systems 33, 20566-20577, 2020.
Multi-consensus decentralized accelerated gradient descent. H Ye, L Luo, Z Zhou, T Zhang. Journal of Machine Learning Research 24 (306), 1-50, 2023.
Approximate Newton methods. H Ye, L Luo, Z Zhang. Journal of Machine Learning Research 22 (66), 1-41, 2021.
Hessian-aware zeroth-order optimization for black-box adversarial attack. H Ye, Z Huang, C Fang, CJ Li, T Zhang. arXiv preprint arXiv:1812.11377, 2018.
Fast Fisher discriminant analysis with randomized algorithms. H Ye, Y Li, C Chen, Z Zhang. Pattern Recognition 72, 82-92, 2017.
Decentralized accelerated proximal gradient descent. H Ye, Z Zhou, L Luo, T Zhang. Advances in Neural Information Processing Systems 33, 18308-18317, 2020.
Nesterov's acceleration for approximate Newton. H Ye, L Luo, Z Zhang. Journal of Machine Learning Research 21 (142), 1-37, 2020.
DeEPCA: Decentralized exact PCA with linear convergence rate. H Ye, T Zhang. Journal of Machine Learning Research 22 (238), 1-27, 2021.
Explicit convergence rates of greedy and random quasi-Newton methods. D Lin, H Ye, Z Zhang. Journal of Machine Learning Research 23 (162), 1-40, 2022.
Towards explicit superlinear convergence rate for SR1. H Ye, D Lin, X Chang, Z Zhang. Mathematical Programming 199 (1), 1273-1303, 2023.
Greedy and random quasi-Newton methods with faster explicit superlinear convergence. D Lin, H Ye, Z Zhang. Advances in Neural Information Processing Systems 34, 6646-6657, 2021.
PMGT-VR: A decentralized proximal-gradient algorithmic framework with variance reduction. H Ye, W Xiong, T Zhang. arXiv preprint arXiv:2012.15010, 2020.
Explicit superlinear convergence rates of Broyden's methods in nonlinear equations. D Lin, H Ye, Z Zhang. arXiv preprint arXiv:2109.01974, 2021.
An optimal stochastic algorithm for decentralized nonconvex finite-sum optimization. L Luo, H Ye. arXiv preprint arXiv:2210.13931, 2022.
Eigencurve: Optimal learning rate schedule for SGD on quadratic objectives with skewed Hessian spectrums. R Pan, H Ye, T Zhang. arXiv preprint arXiv:2110.14109, 2021.
Greedy and random Broyden's methods with explicit superlinear convergence rates in nonlinear equations. H Ye, D Lin, Z Zhang. arXiv preprint arXiv:2110.08572, 2021.
Accelerated distributed approximate Newton method. H Ye, C He, X Chang. IEEE Transactions on Neural Networks and Learning Systems, 2022.
Decentralized stochastic variance reduced extragradient method. L Luo, H Ye. arXiv preprint arXiv:2202.00509, 2022.
Accelerating random Kaczmarz algorithm based on clustering information. Y Li, K Mo, H Ye. Proceedings of the AAAI Conference on Artificial Intelligence 30 (1), 2016.