Yongchao Zhou
Verified email at mail.utoronto.ca
Title · Cited by · Year
Large language models are human-level prompt engineers
Y Zhou, AI Muresanu, Z Han, K Paster, S Pitis, H Chan, J Ba
International Conference on Learning Representations (ICLR 2023), 2023
Cited by 476 · 2023
Dataset distillation using neural feature regression
Y Zhou, E Nezhadarya, J Ba
Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022
Cited by 85 · 2022
Transcriptome-wide off-target effects of steric-blocking oligonucleotides
EM Holgersen, S Gandhi, Y Zhou, J Kim, B Vaz, J Bogojeski, M Bugno, ...
Nucleic Acid Therapeutics 31 (6), 392-403, 2021
Cited by 51 · 2021
On-Policy Distillation of Language Models: Learning from Self-Generated Mistakes
R Agarwal, N Vieillard, Y Zhou, P Stanczyk, S Ramos, M Geist, O Bachem
International Conference on Learning Representations (ICLR 2024), 2024
Cited by 34* · 2024
DistillSpec: Improving Speculative Decoding via Knowledge Distillation
Y Zhou, K Lyu, AS Rawat, AK Menon, A Rostamizadeh, S Kumar, JF Kagy, ...
International Conference on Learning Representations (ICLR 2024), 2024
Cited by 18 · 2024
Training on Thin Air: Improve Image Classification with Generated Data
Y Zhou, H Sahak, J Ba
ICML Workshop on Data-centric Machine Learning, 2023
Cited by 18 · 2023
Identifying the Risks of LM Agents with an LM-Emulated Sandbox
Y Ruan, H Dong, A Wang, S Pitis, Y Zhou, J Ba, Y Dubois, CJ Maddison, ...
International Conference on Learning Representations (ICLR 2024), 2024
Cited by 17 · 2024
Transformers Can Achieve Length Generalization But Not Robustly
Y Zhou, U Alon, X Chen, X Wang, R Agarwal, D Zhou
ICLR Workshop on Mathematical and Empirical Understanding of Foundation …, 2024
2024