WPO Collection
Models and datasets from the paper "WPO: Enhancing RLHF with Weighted Preference Optimization".
This is the mistral-7b-sft-beta model fine-tuned with hybrid WPO (GPT-4-turbo outputs combined with on-policy sampling on UltraFeedback). Details are in the paper "WPO: Enhancing RLHF with Weighted Preference Optimization". The training data is wzhouad/zephyr-ultrafeedback-hybrid.
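As a starting point for inspecting the preference data named above, the sketch below loads it with the Hugging Face `datasets` library. The dataset name comes from this card; the split name and the expectation of chosen/rejected-style preference columns are assumptions based on typical UltraFeedback-style datasets, so verify the actual schema after loading.

```python
from datasets import load_dataset

# Load the preference data used for WPO training (dataset id taken from this card).
# split="train" and the column layout are assumptions; inspect the result to confirm.
ds = load_dataset("wzhouad/zephyr-ultrafeedback-hybrid", split="train")

# Print the available columns and one example to see the actual schema,
# e.g. whether it uses "chosen"/"rejected" preference pairs.
print(ds.column_names)
print(ds[0])
```

This requires network access to the Hugging Face Hub on first use; subsequent calls read from the local cache.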
This model is licensed under the Zoom software license and may be used only for noncommercial, educational, or academic research purposes.