---
library_name: transformers
license: mit
datasets:
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
---

<!-- This is a model released from the preprint: *[Bootstrapping Language Models with DPO Implicit Rewards](https://arxiv.org/abs/2406.09760)*. Please refer to our [repository](https://github.com/sail-sg/dice) for more details. -->

# Llama-3-Base-8B-DICE-Iter2

This model was developed at iteration 2 of [Bootstrapping Language Models with DPO Implicit Rewards](https://arxiv.org/abs/2406.09760) (DICE), using [princeton-nlp/Llama-3-Base-8B-SFT-DPO](https://huggingface.co/princeton-nlp/Llama-3-Base-8B-SFT-DPO) as the starting point.
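
The model can be loaded with 🤗 Transformers for standard text generation. Below is a minimal inference sketch; the prompt handling and sampling settings are illustrative assumptions, not the generation setup used in the paper.

```python
# Minimal inference sketch. The chat-template fallback and sampling settings
# below are illustrative assumptions, not the paper's configuration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sail/Llama-3-Base-8B-DICE-Iter2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Use the tokenizer's chat template if one ships with the checkpoint;
# otherwise fall back to the raw instruction text.
messages = [{"role": "user", "content": "What are the benefits of unit testing?"}]
if tokenizer.chat_template is not None:
    prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
else:
    prompt = messages[0]["content"]

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.9)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```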

<!-- We utilized the prompt sets extracted from [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized). -->

## Links to Other Models

- [Llama-3-Base-8B-DICE-Iter1](https://huggingface.co/sail/Llama-3-Base-8B-DICE-Iter1)
- [Llama-3-Base-8B-DICE-Iter2](https://huggingface.co/sail/Llama-3-Base-8B-DICE-Iter2)

## Model Description

- Model type: An 8B parameter GPT-like model fine-tuned on synthetic datasets.
- Language(s) (NLP): Primarily English
- License: MIT
- Fine-tuned from model: princeton-nlp/Llama-3-Base-8B-SFT-DPO

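As described above, DICE bootstraps new preference data from the DPO implicit reward, i.e. the scaled log-probability ratio between the DPO-trained policy and its reference model. The sketch below illustrates that score for a single prompt/response pair; the policy/reference pairing, the value of `beta`, and the absence of length regularization are assumptions made here for illustration, so please refer to the paper and the [sail-sg/dice](https://github.com/sail-sg/dice) repository for the actual construction.

```python
# Sketch of the DPO implicit reward that DICE uses to score responses (illustrative only).
# r(x, y) = beta * (log pi_policy(y | x) - log pi_ref(y | x)); the model pairing, beta,
# and the lack of length regularization below are assumptions, not the paper's setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

policy_id = "sail/Llama-3-Base-8B-DICE-Iter2"            # assumed policy checkpoint
reference_id = "princeton-nlp/Llama-3-Base-8B-SFT-DPO"   # assumed reference checkpoint
beta = 0.1                                               # assumed scaling coefficient

tokenizer = AutoTokenizer.from_pretrained(policy_id)
policy = AutoModelForCausalLM.from_pretrained(policy_id, torch_dtype=torch.bfloat16, device_map="auto")
reference = AutoModelForCausalLM.from_pretrained(reference_id, torch_dtype=torch.bfloat16, device_map="auto")

@torch.no_grad()
def response_logprob(model, prompt: str, response: str) -> torch.Tensor:
    """Sum of log-probabilities the model assigns to the response tokens given the prompt.

    Assumes tokenizing the prompt alone yields a prefix of tokenizing prompt + response.
    """
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(prompt + response, return_tensors="pt").input_ids.to(model.device)
    logits = model(full_ids).logits[:, :-1]  # position t predicts token t + 1
    token_logprobs = logits.log_softmax(-1).gather(2, full_ids[:, 1:, None]).squeeze(-1)
    return token_logprobs[:, prompt_len - 1:].sum()  # keep only the response positions

def implicit_reward(prompt: str, response: str) -> float:
    """DPO implicit reward: scaled log-probability ratio of policy vs. reference."""
    return (beta * (response_logprob(policy, prompt, response)
                    - response_logprob(reference, prompt, response))).item()

# DICE ranks sampled responses by this score to construct new preference pairs.
print(implicit_reward("What is the capital of France?", " The capital of France is Paris."))
```
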
## [AlpacaEval Leaderboard Evaluation Results](https://tatsu-lab.github.io/alpaca_eval/)

| Model | LC Win Rate (%) | Win Rate (%) |
|-------------------------------------------|:------------:|:--------:|
| [Llama-3-Base-8B-SFT-DPO](https://huggingface.co/princeton-nlp/Llama-3-Base-8B-SFT-DPO) | 18.20 | 15.50 |
| [Llama-3-Base-8B-DICE-Iter1](https://huggingface.co/sail/Llama-3-Base-8B-DICE-Iter1) | 25.08 | 25.77 |
| [Llama-3-Base-8B-DICE-Iter2](https://huggingface.co/sail/Llama-3-Base-8B-DICE-Iter2) | **27.55** | **30.99** |

## Citation

```bibtex
@article{chen2024bootstrapping,
  title={Bootstrapping Language Models with DPO Implicit Rewards},
  author={Chen, Changyu and Liu, Zichen and Du, Chao and Pang, Tianyu and Liu, Qian and Sinha, Arunesh and Varakantham, Pradeep and Lin, Min},
  journal={arXiv preprint arXiv:2406.09760},
  year={2024}
}
```