Yao Fu
Title · Cited by · Year
C-Eval: A multi-level multi-discipline Chinese evaluation suite for foundation models
Y Huang, Y Bai, Z Zhu, J Zhang, J Zhang, T Su, J Liu, C Lv, Y Zhang, Y Fu, ...
Advances in Neural Information Processing Systems 36, 2024
Cited by 295* · 2024
Decomposed prompting: A modular approach for solving complex tasks
T Khot, H Trivedi, M Finlayson, Y Fu, K Richardson, P Clark, A Sabharwal
arXiv preprint arXiv:2210.02406, 2022
Cited by 264 · 2022
Complexity-based prompting for multi-step reasoning
Y Fu, H Peng, A Sabharwal, P Clark, T Khot
The Eleventh International Conference on Learning Representations, 2022
Cited by 252 · 2022
MAmmoTH: Building math generalist models through hybrid instruction tuning
X Yue, X Qu, G Zhang, Y Fu, W Huang, H Sun, Y Su, W Chen
arXiv preprint arXiv:2309.05653, 2023
Cited by 169 · 2023
Specializing smaller language models towards multi-step reasoning
Y Fu, H Peng, L Ou, A Sabharwal, T Khot
International Conference on Machine Learning, 10421-10430, 2023
Cited by 147 · 2023
Paraphrase generation with latent bag of words
Y Fu, Y Feng, JP Cunningham
Advances in Neural Information Processing Systems 32, 2019
Cited by 96 · 2019
Improving language model negotiation with self-play and in-context learning from AI feedback
Y Fu, H Peng, T Khot, M Lapata
arXiv preprint arXiv:2305.10142, 2023
Cited by 95 · 2023
Prototypical representation learning for relation extraction
N Ding, X Wang, Y Fu, G Xu, R Wang, P Xie, Y Shen, F Huang, HT Zheng, ...
arXiv preprint arXiv:2103.11647, 2021
Cited by 63 · 2021
Noisy-labeled NER with confidence estimation
K Liu, Y Fu, C Tan, M Chen, N Zhang, S Huang, S Gao
arXiv preprint arXiv:2104.04318, 2021
Cited by 59 · 2021
How does GPT Obtain its Ability? Tracing Emergent Abilities of Language Models to their Sources
Y Fu, H Peng, T Khot
https://yaofu.notion.site/How-does-GPT-Obtain-its-Ability-Tracing-Emergent …, 2022
Cited by 51 · 2022
Nested named entity recognition with partially-observed TreeCRFs
Y Fu, C Tan, M Chen, S Huang, F Huang
Proceedings of the AAAI Conference on Artificial Intelligence 35 (14), 12839 …, 2021
Cited by 48 · 2021
Probing BERT in hyperbolic spaces
B Chen, Y Fu, G Xu, P Xie, C Tan, M Chen, L Jing
arXiv preprint arXiv:2104.03869, 2021
Cited by 47 · 2021
To repeat or not to repeat: Insights from scaling LLM under token-crisis
F Xue, Y Fu, W Zhou, Z Zheng, Y You
Advances in Neural Information Processing Systems 36, 2024
Cited by 42 · 2024
Chain-of-Thought Hub: A Continuous Effort to Measure Large Language Models' Reasoning Performance
Y Fu, L Ou, M Chen, Y Wan, H Peng, T Khot
arXiv preprint arXiv:2305.17306, 2023
Cited by 41* · 2023
Natural answer generation with heterogeneous memory
Y Fu, Y Feng
Proceedings of the 2018 Conference of the North American Chapter of the …, 2018
Cited by 38 · 2018
Data engineering for scaling language models to 128k context
Y Fu, R Panda, X Niu, X Yue, H Hajishirzi, Y Kim, H Peng
arXiv preprint arXiv:2402.10171, 2024
Cited by 31 · 2024
OpenMoE: An early effort on open mixture-of-experts language models
F Xue, Z Zheng, Y Fu, J Ni, Z Zheng, W Zhou, Y You
arXiv preprint arXiv:2402.01739, 2024
Cited by 31 · 2024
Data-to-text generation with variational sequential planning
R Puduppully, Y Fu, M Lapata
Transactions of the Association for Computational Linguistics 10, 697-715, 2022
Cited by 26 · 2022
Rethinking text attribute transfer: A lexical analysis
Y Fu, H Zhou, J Chen, L Li
arXiv preprint arXiv:1909.12335, 2019
Cited by 20 · 2019
Latent template induction with Gumbel-CRFs
Y Fu, C Tan, B Bi, M Chen, Y Feng, A Rush
Advances in Neural Information Processing Systems 33, 20259-20271, 2020
Cited by 17 · 2020