Jerry Li
Microsoft Research
Verified email at microsoft.com
Title
Cited by
Year
QSGD: Communication-Efficient SGD via Gradient Quantization and Encoding
D Alistarh, D Grubic, J Li, R Tomioka, M Vojnovic
Advances in Neural Information Processing Systems, 1707-1718, 2017
1944 · 2017
Spectral signatures in backdoor attacks
B Tran, J Li, A Madry
Advances in neural information processing systems 31, 2018
835 · 2018
Provably robust deep learning via adversarially trained smoothed classifiers
H Salman, J Li, I Razenshteyn, P Zhang, H Zhang, S Bubeck, G Yang
Advances in neural information processing systems 32, 2019
582 · 2019
Robust estimators in high-dimensions without the computational intractability
I Diakonikolas, G Kamath, D Kane, J Li, A Moitra, A Stewart
SIAM Journal on Computing 48 (2), 742-864, 2019
521 · 2019
Quantum advantage in learning from experiments
HY Huang, M Broughton, J Cotler, S Chen, J Li, M Mohseni, H Neven, ...
Science 376 (6598), 1182-1186, 2022
468 · 2022
Aligning AI with shared human values
D Hendrycks, C Burns, S Basart, A Critch, J Li, D Song, J Steinhardt
arXiv preprint arXiv:2008.02275, 2020
378 · 2020
Byzantine stochastic gradient descent
D Alistarh, Z Allen-Zhu, J Li
Advances in neural information processing systems 31, 2018
340 · 2018
Sever: A robust meta-algorithm for stochastic optimization
I Diakonikolas, G Kamath, D Kane, J Li, J Steinhardt, A Stewart
International Conference on Machine Learning, 1596-1606, 2019
326 · 2019
Being robust (in high dimensions) can be practical
I Diakonikolas, G Kamath, DM Kane, J Li, A Moitra, A Stewart
International Conference on Machine Learning, 999-1008, 2017
268 · 2017
ZipML: Training linear models with end-to-end low precision, and a little bit of deep learning
H Zhang, J Li, K Kara, D Alistarh, J Liu, C Zhang
International Conference on Machine Learning, 4035-4043, 2017
247* · 2017
Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions
S Chen, S Chewi, J Li, Y Li, A Salim, AR Zhang
arXiv preprint arXiv:2209.11215, 2022
222 · 2022
Randomized smoothing of all shapes and sizes
G Yang, T Duan, JE Hu, H Salman, I Razenshteyn, J Li
International Conference on Machine Learning, 10693-10705, 2020
214 · 2020
Mixture models, robustness, and sum of squares proofs
SB Hopkins, J Li
Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing …, 2018
192 · 2018
Automatic prompt optimization with "gradient descent" and beam search
R Pryzant, D Iter, J Li, YT Lee, C Zhu, M Zeng
arXiv preprint arXiv:2305.03495, 2023
161 · 2023
Privately learning high-dimensional distributions
G Kamath, J Li, V Singhal, J Ullman
Conference on Learning Theory, 1853-1902, 2019
158 · 2019
The spraylist: A scalable relaxed priority queue
D Alistarh, J Kopinsky, J Li, N Shavit
Proceedings of the 20th ACM SIGPLAN Symposium on Principles and Practice of …, 2015
149 · 2015
Robustly learning a gaussian: Getting optimal error, efficiently
I Diakonikolas, G Kamath, DM Kane, J Li, A Moitra, A Stewart
Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete …, 2018
145 · 2018
Computationally efficient robust sparse estimation in high dimensions
S Balakrishnan, SS Du, J Li, A Singh
Conference on Learning Theory, 169-212, 2017
136 · 2017
Exponential separations between learning with and without quantum memory
S Chen, J Cotler, HY Huang, J Li
2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS …, 2022
115 · 2022
On the limitations of first-order approximation in GAN dynamics
J Li, A Madry, J Peebles, L Schmidt
International Conference on Machine Learning, 3005-3013, 2018
110* · 2018