Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


gemma-2-9b-it-DPO - GGUF
- Model creator: https://huggingface.co/princeton-nlp/
- Original model: https://huggingface.co/princeton-nlp/gemma-2-9b-it-DPO/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [gemma-2-9b-it-DPO.Q2_K.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-DPO-gguf/blob/main/gemma-2-9b-it-DPO.Q2_K.gguf) | Q2_K | 3.54GB |
| [gemma-2-9b-it-DPO.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-DPO-gguf/blob/main/gemma-2-9b-it-DPO.IQ3_XS.gguf) | IQ3_XS | 3.86GB |
| [gemma-2-9b-it-DPO.IQ3_S.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-DPO-gguf/blob/main/gemma-2-9b-it-DPO.IQ3_S.gguf) | IQ3_S | 4.04GB |
| [gemma-2-9b-it-DPO.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-DPO-gguf/blob/main/gemma-2-9b-it-DPO.Q3_K_S.gguf) | Q3_K_S | 4.04GB |
| [gemma-2-9b-it-DPO.IQ3_M.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-DPO-gguf/blob/main/gemma-2-9b-it-DPO.IQ3_M.gguf) | IQ3_M | 4.19GB |
| [gemma-2-9b-it-DPO.Q3_K.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-DPO-gguf/blob/main/gemma-2-9b-it-DPO.Q3_K.gguf) | Q3_K | 4.43GB |
| [gemma-2-9b-it-DPO.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-DPO-gguf/blob/main/gemma-2-9b-it-DPO.Q3_K_M.gguf) | Q3_K_M | 4.43GB |
| [gemma-2-9b-it-DPO.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-DPO-gguf/blob/main/gemma-2-9b-it-DPO.Q3_K_L.gguf) | Q3_K_L | 4.78GB |
| [gemma-2-9b-it-DPO.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-DPO-gguf/blob/main/gemma-2-9b-it-DPO.IQ4_XS.gguf) | IQ4_XS | 4.86GB |
| [gemma-2-9b-it-DPO.Q4_0.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-DPO-gguf/blob/main/gemma-2-9b-it-DPO.Q4_0.gguf) | Q4_0 | 5.07GB |
| [gemma-2-9b-it-DPO.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-DPO-gguf/blob/main/gemma-2-9b-it-DPO.IQ4_NL.gguf) | IQ4_NL | 5.1GB |
| [gemma-2-9b-it-DPO.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-DPO-gguf/blob/main/gemma-2-9b-it-DPO.Q4_K_S.gguf) | Q4_K_S | 5.1GB |
| [gemma-2-9b-it-DPO.Q4_K.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-DPO-gguf/blob/main/gemma-2-9b-it-DPO.Q4_K.gguf) | Q4_K | 5.37GB |
| [gemma-2-9b-it-DPO.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-DPO-gguf/blob/main/gemma-2-9b-it-DPO.Q4_K_M.gguf) | Q4_K_M | 5.37GB |
| [gemma-2-9b-it-DPO.Q4_1.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-DPO-gguf/blob/main/gemma-2-9b-it-DPO.Q4_1.gguf) | Q4_1 | 5.55GB |
| [gemma-2-9b-it-DPO.Q5_0.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-DPO-gguf/blob/main/gemma-2-9b-it-DPO.Q5_0.gguf) | Q5_0 | 6.04GB |
| [gemma-2-9b-it-DPO.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-DPO-gguf/blob/main/gemma-2-9b-it-DPO.Q5_K_S.gguf) | Q5_K_S | 6.04GB |
| [gemma-2-9b-it-DPO.Q5_K.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-DPO-gguf/blob/main/gemma-2-9b-it-DPO.Q5_K.gguf) | Q5_K | 6.19GB |
| [gemma-2-9b-it-DPO.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-DPO-gguf/blob/main/gemma-2-9b-it-DPO.Q5_K_M.gguf) | Q5_K_M | 6.19GB |
| [gemma-2-9b-it-DPO.Q5_1.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-DPO-gguf/blob/main/gemma-2-9b-it-DPO.Q5_1.gguf) | Q5_1 | 6.52GB |
| [gemma-2-9b-it-DPO.Q6_K.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-DPO-gguf/blob/main/gemma-2-9b-it-DPO.Q6_K.gguf) | Q6_K | 7.07GB |
| [gemma-2-9b-it-DPO.Q8_0.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-DPO-gguf/blob/main/gemma-2-9b-it-DPO.Q8_0.gguf) | Q8_0 | 9.15GB |
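
These files work with any llama.cpp-compatible runtime. Below is a minimal sketch of downloading and running one quant locally, assuming `huggingface_hub` and `llama-cpp-python` are installed; the chosen file, context size, and token limit are illustrative, not recommendations.

```python
# Hedged sketch: download one quant from this repo and run a single chat turn.
# Assumes: pip install llama-cpp-python huggingface_hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="RichardErkhov/princeton-nlp_-_gemma-2-9b-it-DPO-gguf",
    filename="gemma-2-9b-it-DPO.Q4_K_M.gguf",  # any file name from the table above
)

llm = Llama(model_path=gguf_path, n_ctx=4096)  # 4096 is an illustrative context size
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What's the difference between llamas and alpacas?"}],
    max_tokens=200,
)
print(out["choices"][0]["message"]["content"])
```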

Original model description:
---
base_model: google/gemma-2-9b-it
tags:
- alignment-handbook
- generated_from_trainer
datasets:
- princeton-nlp/gemma2-ultrafeedback-armorm
model-index:
- name: princeton-nlp/gemma-2-9b-it-DPO
  results: []
---

# gemma-2-9b-it-DPO Model Card

This model was trained under the same setup as [gemma-2-9b-it-SimPO](https://huggingface.co/princeton-nlp/gemma-2-9b-it-SimPO), with the DPO objective.

SimPO (Simple Preference Optimization) is an offline preference optimization algorithm designed to enhance the training of large language models (LLMs) with preference optimization datasets. SimPO aligns the reward function with the generation likelihood, eliminating the need for a reference model and incorporating a target reward margin to boost performance. Please refer to our [preprint](https://arxiv.org/pdf/2405.14734) and [github repo](https://github.com/princeton-nlp/SimPO) for more details.
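
In the notation of the preprint, the reference-free SimPO objective is (with $y_w$ / $y_l$ the chosen and rejected responses, $\beta$ the reward scale, and $\gamma$ the target reward margin):

$$
\mathcal{L}_{\mathrm{SimPO}}(\theta) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\left[\log \sigma\!\left(\frac{\beta}{|y_w|}\log \pi_\theta(y_w \mid x) - \frac{\beta}{|y_l|}\log \pi_\theta(y_l \mid x) - \gamma\right)\right]
$$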
61 |
+
|
62 |
+
## Model Details
|
63 |
+
|
64 |
+
### Model Description
|
65 |
+
|
66 |
+
We fine-tuned [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it) on [princeton-nlp/gemma2-ultrafeedback-armorm](https://huggingface.co/datasets/princeton-nlp/gemma2-ultrafeedback-armorm) with the DPO objective.
|
67 |
+
|
68 |
+
- **Developed by:** Yu Meng, Mengzhou Xia, Danqi Chen
|
69 |
+
- **Model type:** Causal Language Model
|
70 |
+
- **License:** gemma
|
71 |
+
- **Finetuned from model:** [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it)
|
72 |
+
|
73 |
+
### Model Sources
|
74 |
+
|
75 |
+
<!-- Provide the basic links for the model. -->
|
76 |
+
|
77 |
+
- **Repository:** https://github.com/princeton-nlp/SimPO
|
78 |
+
- **Paper:** https://arxiv.org/pdf/2405.14734
|
79 |
+
|
80 |
+
|
81 |
+
## How to Get Started with the Model
|
82 |
+
```
|
83 |
+
import torch
|
84 |
+
from transformers import pipeline
|
85 |
+
|
86 |
+
model_id = "princeton-nlp/gemma-2-9b-it-DPO"
|
87 |
+
|
88 |
+
generator = pipeline(
|
89 |
+
"text-generation",
|
90 |
+
model=model_id,
|
91 |
+
model_kwargs={"torch_dtype": torch.bfloat16},
|
92 |
+
device="cuda",
|
93 |
+
)
|
94 |
+
outputs = generator([{"role": "user", "content": "What's the difference between llamas and alpacas?"}], do_sample=False, max_new_tokens=200)
|
95 |
+
print(outputs[0]['generated_text'])
|
96 |
+
```
|
97 |
+
|
98 |
+
## Training Details
|
99 |
+
|
100 |
+
### Training Data
|
101 |
+
|
102 |
+
We use [princeton-nlp/gemma2-ultrafeedback-armorm](https://huggingface.co/datasets/princeton-nlp/gemma2-ultrafeedback-armorm) as the preference optimization dataset.
|
103 |
+
|
104 |
+
#### Training Hyperparameters
|
105 |
+
|
106 |
+
We used the following hyperparameters:
|
107 |
+
- learning rate: 5e-7
|
108 |
+
- batch size: 128
|
109 |
+
- beta: 0.01
|
110 |
+
|
111 |
+
The other hyperparameters are kept the same with our [SimPO recipe](https://github.com/princeton-nlp/SimPO/blob/main/training_configs/gemma-2-9b-it-simpo.yaml).
|
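
Purely as an illustration of where those three values plug in (the authors trained with the [alignment-handbook](https://github.com/huggingface/alignment-handbook) library, not this code), a hedged sketch using `trl`'s `DPOConfig` could look like the following; every setting other than the learning rate and beta is an assumption:

```python
# Illustrative sketch only, not the authors' recipe: the reported DPO hyperparameters
# mapped onto a trl DPOConfig. Batch-size split and remaining settings are assumptions.
from trl import DPOConfig

config = DPOConfig(
    output_dir="gemma-2-9b-it-dpo",     # hypothetical output path
    learning_rate=5e-7,                 # reported learning rate
    beta=0.01,                          # reported DPO beta
    per_device_train_batch_size=2,      # assumption: 2 per GPU x 8 GPUs x 8 accumulation = 128 global
    gradient_accumulation_steps=8,
    bf16=True,                          # assumption
)
```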

#### Speeds, Sizes, Times

Fine-tuning [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it) on [princeton-nlp/gemma2-ultrafeedback-armorm](https://huggingface.co/datasets/princeton-nlp/gemma2-ultrafeedback-armorm) takes around 150 minutes on 8xH100 GPUs.

## Evaluation Results

| models | AE2 LC | AE2 WR | AE2 Length | AH | AH Length | GSM | GSM Length | MMLU | MMLU Length |
|-----------------------------------|:------:|:------:|:----------:|:----:|:---------:|:----:|:----------:|:----:|:-----------:|
| [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it) | 51.1 | 38.1 | 1571 | 40.8 | 545 | 87.4 | 395 | 72.7 | 515 |
| [princeton-nlp/gemma-2-9b-it-DPO](https://huggingface.co/princeton-nlp/gemma-2-9b-it-DPO) | 67.8 | 65.4 | 2016 | 58.9 | 717 | 88.5 | 392 | 72.2 | 624 |
| [princeton-nlp/gemma-2-9b-it-SimPO](https://huggingface.co/princeton-nlp/gemma-2-9b-it-SimPO) | 72.4 | 65.9 | 1833 | 59.1 | 693 | 88.0 | 341 | 72.2 | 441 |

(AE2 = AlpacaEval 2, with LC / WR the length-controlled and raw win rates; AH = Arena-Hard; GSM = GSM8K.)


## Technical Specifications

### Model Architecture and Objective

The model architecture is based on [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it). We use the DPO training objective.
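
Concretely, this is the standard DPO loss of Rafailov et al. (cited below), with $\pi_{\mathrm{ref}}$ a frozen reference policy and $\beta = 0.01$ as listed in the training hyperparameters:

$$
\mathcal{L}_{\mathrm{DPO}}(\theta) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]
$$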

#### Hardware

We used 8xH100 GPUs for model training.

#### Software

Training was done using the [alignment-handbook](https://github.com/huggingface/alignment-handbook) library.

## Citation

Gemma model:
```bibtex
@article{gemma_2024,
  title={Gemma},
  url={https://www.kaggle.com/m/3301},
  DOI={10.34740/KAGGLE/M/3301},
  publisher={Kaggle},
  author={Gemma Team},
  year={2024}
}
```

DPO paper:
```bibtex
@article{rafailov2024direct,
  title={Direct Preference Optimization: Your language model is secretly a reward model},
  author={Rafailov, Rafael and Sharma, Archit and Mitchell, Eric and Manning, Christopher D and Ermon, Stefano and Finn, Chelsea},
  journal={Advances in Neural Information Processing Systems},
  volume={36},
  year={2024}
}
```

SimPO paper:
```bibtex
@article{meng2024simpo,
  title={{SimPO}: Simple preference optimization with a reference-free reward},
  author={Meng, Yu and Xia, Mengzhou and Chen, Danqi},
  journal={arXiv preprint arXiv:2405.14734},
  year={2024}
}
```

UltraFeedback paper:
```bibtex
@article{cui2023ultrafeedback,
  title={{UltraFeedback}: Boosting language models with high-quality feedback},
  author={Cui, Ganqu and Yuan, Lifan and Ding, Ning and Yao, Guanming and Zhu, Wei and Ni, Yuan and Xie, Guotong and Liu, Zhiyuan and Sun, Maosong},
  journal={arXiv preprint arXiv:2310.01377},
  year={2023}
}
```

ArmoRM paper:
```bibtex
@article{wang2024interpretable,
  title={Interpretable Preferences via Multi-Objective Reward Modeling and Mixture-of-Experts},
  author={Wang, Haoxiang and Xiong, Wei and Xie, Tengyang and Zhao, Han and Zhang, Tong},
  journal={arXiv preprint arXiv:2406.12845},
  year={2024}
}
```