| Publication | Cited by | Year |
|---|---|---|
| Rethinking the role of demonstrations: What makes in-context learning work? S Min, X Lyu, A Holtzman, M Artetxe, M Lewis, H Hajishirzi, L Zettlemoyer. arXiv preprint arXiv:2202.12837, 2022. | 802 | 2022 |
| FActScore: Fine-grained atomic evaluation of factual precision in long form text generation. S Min, K Krishna, X Lyu, M Lewis, W Yih, PW Koh, M Iyyer, L Zettlemoyer, ... arXiv preprint arXiv:2305.14251, 2023. | 164 | 2023 |
| Prompt waywardness: The curious case of discretized interpretation of continuous prompts. D Khashabi, X Lyu, S Min, L Qin, K Richardson, S Welleck, H Hajishirzi, ... arXiv preprint arXiv:2112.08348, 2021. | 34* | 2021 |
| Z-ICL: Zero-shot in-context learning with pseudo-demonstrations. X Lyu, S Min, I Beltagy, L Zettlemoyer, H Hajishirzi. arXiv preprint arXiv:2212.09865, 2022. | 33 | 2022 |
| Dolma: An open corpus of three trillion tokens for language model pretraining research. L Soldaini, R Kinney, A Bhagia, D Schwenk, D Atkinson, R Authur, ... arXiv preprint arXiv:2402.00159, 2024. | 23* | 2024 |