IHEval: Evaluating Language Models on Following the Instruction Hierarchy • Paper • arXiv:2502.08745
Preference Datasets for DPO • Collection • Curated preference datasets for DPO fine-tuning aimed at intent alignment of LLMs • 7 items • Updated Dec 11, 2024