Sanyam Kapoor
Title · Cited by · Year
Multi-agent reinforcement learning: A report on challenges and approaches
S Kapoor
arXiv preprint arXiv:1807.09427, 2018
Cited by 50 · 2018
PAC-Bayes compression bounds so tight that they can explain generalization
S Lotfi, M Finzi, S Kapoor, A Potapczynski, M Goldblum, AG Wilson
Advances in Neural Information Processing Systems 35, 31459-31473, 2022
Cited by 47 · 2022
Backplay: "Man muss immer umkehren"
C Resnick, R Raileanu, S Kapoor, A Peysakhovich, K Cho, J Bruna
arXiv preprint arXiv:1807.06919, 2018
Cited by 46 · 2018
On uncertainty, tempering, and data augmentation in Bayesian classification
S Kapoor, WJ Maddox, P Izmailov, AG Wilson
Advances in Neural Information Processing Systems 35, 18211-18225, 2022
Cited by 39 · 2022
Pre-train your loss: Easy Bayesian transfer learning with informative priors
R Shwartz-Ziv, M Goldblum, H Souri, S Kapoor, C Zhu, Y LeCun, ...
Advances in Neural Information Processing Systems 35, 27706-27715, 2022
Cited by 38 · 2022
Variational auto-regressive Gaussian processes for continual learning
S Kapoor, T Karaletsos, TD Bui
International Conference on Machine Learning, 5290-5300, 2021
Cited by 31 · 2021
Large Language Models Must Be Taught to Know What They Don't Know
S Kapoor, N Gruver, M Roberts, K Collins, A Pal, U Bhatt, A Weller, ...
arXiv preprint arXiv:2406.08391, 2024
Cited by 13* · 2024
Function-space regularization in neural networks: A probabilistic perspective
TGJ Rudner, S Kapoor, S Qiu, AG Wilson
International Conference on Machine Learning, 29275-29290, 2023
Cited by 13 · 2023
Skiing on simplices: Kernel interpolation on the permutohedral lattice for scalable Gaussian processes
S Kapoor, M Finzi, KA Wang, AGG Wilson
International Conference on Machine Learning, 5279-5289, 2021
Cited by 13 · 2021
When are Iterative Gaussian Processes Reliably Accurate?
WJ Maddox, S Kapoor, AG Wilson
arXiv preprint arXiv:2112.15246, 2021
Cited by 11 · 2021
Policy gradients in a nutshell
S Kapoor
Towards Data Science 20, 18, 2018
Cited by 7 · 2018
Should we learn most likely functions or parameters?
S Qiu, TGJ Rudner, S Kapoor, AG Wilson
Advances in Neural Information Processing Systems 36, 2024
Cited by 6 · 2024
A simple and fast baseline for tuning large XGBoost models
S Kapoor, V Perrone
arXiv preprint arXiv:2111.06924, 2021
Cited by 6 · 2021
First-order preconditioning via hypergradient descent
T Moskovitz, R Wang, J Lan, S Kapoor, T Miconi, J Yosinski, A Rawal
arXiv preprint arXiv:1910.08461, 2019
Cited by 6 · 2019