Pretraining language models with human preferences. T. Korbak, K. Shi, A. Chen, R. V. Bhalerao, C. Buckley, J. Phang, S. R. Bowman, et al. International Conference on Machine Learning, pp. 17506-17533, 2023. Cited by 89.

On Learning to Summarize with Large Language Models as References. Y. Liu, K. Shi, K. S. He, L. Ye, A. R. Fabbri, P. Liu, D. Radev, A. Cohan. NAACL 2024. Cited by 26.

Automatic Error Analysis for Document-level Information Extraction. A. Das, X. Du, B. Wang, K. Shi, J. Gu, T. Porter, C. Cardie. ACL 2023. Cited by 9.

Dynamic queue-jump lane for emergency vehicles under partially connected settings: A multi-agent deep reinforcement learning approach. H. Su, K. Shi, J. Chow, L. Jin. arXiv preprint arXiv:2003.01025, 2020. Cited by 7.

V2I connectivity-based dynamic queue-jump lane for emergency vehicles: A deep reinforcement learning approach. H. Su, K. Shi, L. Jin, J. Y. J. Chow. arXiv preprint arXiv:2008.00335, 2020. Cited by 4.

ODSum: New Benchmarks for Open Domain Multi-Document Summarization. Y. Zhou, K. Shi, W. Zhang, Y. Liu, Y. Zhao, A. Cohan. arXiv preprint arXiv:2309.08960, 2023. Cited by 2.

Medical Text Simplification: Optimizing for Readability with Unlikelihood Training and Reranked Beam Search Decoding. L. J. Y. Flores, H. Huang, K. Shi, S. Chheang, A. Cohan. Findings of EMNLP 2023.

Social Bias in Masked-LMs Pretrained on Scientific Corpora. K. Shi, L. Yan, C. Xu. SciNLP Workshop at AKBC 2021.