PAL: Sample-Efficient Personalized Reward Modeling for Pluralistic Alignment. D Chen, Y Chen, A Rege, Z Wang, RK Vinayak. The Thirteenth International Conference on Learning Representations (ICLR), 2025.
Unraveling the Impact of Training Samples. D Chen, J Zhang, RK Vinayak. The Third Blogpost Track at ICLR 2024, 2024.
Learning Capacity: A Measure of the Effective Dimensionality of a Model. D Chen, WK Chang, P Chaudhari. arXiv preprint arXiv:2305.17332, 2023.
Modeling the Plurality of Human Preferences via Ideal Points. D Chen, Y Chen, A Rege, RK Vinayak. ICML 2024 Workshop on Models of Human Feedback for AI Alignment, 2024.