Daiwei Chen
PAL: Sample-Efficient Personalized Reward Modeling for Pluralistic Alignment
D Chen, Y Chen, A Rege, Z Wang, RK Vinayak
The Thirteenth International Conference on Learning Representations, 2025
Cited by 13*

Unraveling The Impact of Training Samples
D Chen, J Zhang, RK Vinayak
The Third Blogpost Track at ICLR 2024, 2024
Cited by 1

Learning Capacity: A Measure of the Effective Dimensionality of a Model
D Chen, WK Chang, P Chaudhari
arXiv preprint arXiv:2305.17332, 2023
Cited by 1

Modeling the Plurality of Human Preferences via Ideal Points
D Chen, Y Chen, A Rege, RK Vinayak
ICML 2024 Workshop on Models of Human Feedback for AI Alignment, 2024