Quantization made by Richard Erkhov.

[GitHub](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

gemma-2-9b-it-SimPO - GGUF
- Model creator: https://huggingface.co/princeton-nlp/
- Original model: https://huggingface.co/princeton-nlp/gemma-2-9b-it-SimPO/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [gemma-2-9b-it-SimPO.Q2_K.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-SimPO-gguf/blob/main/gemma-2-9b-it-SimPO.Q2_K.gguf) | Q2_K | 3.54GB |
| [gemma-2-9b-it-SimPO.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-SimPO-gguf/blob/main/gemma-2-9b-it-SimPO.IQ3_XS.gguf) | IQ3_XS | 3.86GB |
| [gemma-2-9b-it-SimPO.IQ3_S.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-SimPO-gguf/blob/main/gemma-2-9b-it-SimPO.IQ3_S.gguf) | IQ3_S | 4.04GB |
| [gemma-2-9b-it-SimPO.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-SimPO-gguf/blob/main/gemma-2-9b-it-SimPO.Q3_K_S.gguf) | Q3_K_S | 4.04GB |
| [gemma-2-9b-it-SimPO.IQ3_M.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-SimPO-gguf/blob/main/gemma-2-9b-it-SimPO.IQ3_M.gguf) | IQ3_M | 4.19GB |
| [gemma-2-9b-it-SimPO.Q3_K.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-SimPO-gguf/blob/main/gemma-2-9b-it-SimPO.Q3_K.gguf) | Q3_K | 4.43GB |
| [gemma-2-9b-it-SimPO.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-SimPO-gguf/blob/main/gemma-2-9b-it-SimPO.Q3_K_M.gguf) | Q3_K_M | 4.43GB |
| [gemma-2-9b-it-SimPO.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-SimPO-gguf/blob/main/gemma-2-9b-it-SimPO.Q3_K_L.gguf) | Q3_K_L | 4.78GB |
| [gemma-2-9b-it-SimPO.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-SimPO-gguf/blob/main/gemma-2-9b-it-SimPO.IQ4_XS.gguf) | IQ4_XS | 4.86GB |
| [gemma-2-9b-it-SimPO.Q4_0.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-SimPO-gguf/blob/main/gemma-2-9b-it-SimPO.Q4_0.gguf) | Q4_0 | 5.07GB |
| [gemma-2-9b-it-SimPO.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-SimPO-gguf/blob/main/gemma-2-9b-it-SimPO.IQ4_NL.gguf) | IQ4_NL | 5.1GB |
| [gemma-2-9b-it-SimPO.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-SimPO-gguf/blob/main/gemma-2-9b-it-SimPO.Q4_K_S.gguf) | Q4_K_S | 5.1GB |
| [gemma-2-9b-it-SimPO.Q4_K.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-SimPO-gguf/blob/main/gemma-2-9b-it-SimPO.Q4_K.gguf) | Q4_K | 5.37GB |
| [gemma-2-9b-it-SimPO.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-SimPO-gguf/blob/main/gemma-2-9b-it-SimPO.Q4_K_M.gguf) | Q4_K_M | 5.37GB |
| [gemma-2-9b-it-SimPO.Q4_1.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-SimPO-gguf/blob/main/gemma-2-9b-it-SimPO.Q4_1.gguf) | Q4_1 | 5.55GB |
| [gemma-2-9b-it-SimPO.Q5_0.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-SimPO-gguf/blob/main/gemma-2-9b-it-SimPO.Q5_0.gguf) | Q5_0 | 6.04GB |
| [gemma-2-9b-it-SimPO.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-SimPO-gguf/blob/main/gemma-2-9b-it-SimPO.Q5_K_S.gguf) | Q5_K_S | 6.04GB |
| [gemma-2-9b-it-SimPO.Q5_K.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-SimPO-gguf/blob/main/gemma-2-9b-it-SimPO.Q5_K.gguf) | Q5_K | 6.19GB |
| [gemma-2-9b-it-SimPO.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-SimPO-gguf/blob/main/gemma-2-9b-it-SimPO.Q5_K_M.gguf) | Q5_K_M | 6.19GB |
| [gemma-2-9b-it-SimPO.Q5_1.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-SimPO-gguf/blob/main/gemma-2-9b-it-SimPO.Q5_1.gguf) | Q5_1 | 6.52GB |
| [gemma-2-9b-it-SimPO.Q6_K.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-SimPO-gguf/blob/main/gemma-2-9b-it-SimPO.Q6_K.gguf) | Q6_K | 7.07GB |
| [gemma-2-9b-it-SimPO.Q8_0.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_gemma-2-9b-it-SimPO-gguf/blob/main/gemma-2-9b-it-SimPO.Q8_0.gguf) | Q8_0 | 9.15GB |
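
These GGUF files can be run with any llama.cpp-based runtime. As a minimal sketch (assuming `llama-cpp-python` and `huggingface_hub` are installed; the chosen quant and `n_gpu_layers` are placeholders to adjust for your hardware):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant from this repo (Q4_K_M picked here as an example).
path = hf_hub_download(
    repo_id="RichardErkhov/princeton-nlp_-_gemma-2-9b-it-SimPO-gguf",
    filename="gemma-2-9b-it-SimPO.Q4_K_M.gguf",
)

# n_gpu_layers=-1 offloads all layers to the GPU; use 0 for CPU-only.
llm = Llama(model_path=path, n_ctx=4096, n_gpu_layers=-1)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What's the difference between llamas and alpacas?"}],
    max_tokens=200,
)
print(out["choices"][0]["message"]["content"])
```

Lower quants (Q2_K, IQ3_XS) trade output quality for memory; Q8_0 stays closest to the original weights.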

Original model description:
---
base_model: google/gemma-2-9b-it
tags:
- alignment-handbook
- generated_from_trainer
datasets:
- princeton-nlp/gemma2-ultrafeedback-armorm
model-index:
- name: princeton-nlp/gemma-2-9b-it-SimPO
  results: []
license: mit
---

# gemma-2-9b-it-SimPO Model Card

SimPO (Simple Preference Optimization) is an offline preference optimization algorithm designed to enhance the training of large language models (LLMs) with preference optimization datasets. SimPO aligns the reward function with the generation likelihood, eliminating the need for a reference model and incorporating a target reward margin to boost performance. Please refer to our [preprint](https://arxiv.org/pdf/2405.14734) and [GitHub repo](https://github.com/princeton-nlp/SimPO) for more details.
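
For intuition, here is a minimal PyTorch sketch of the SimPO objective as described in the preprint. It is an illustration, not the authors' training code; `beta` (reward scaling) and `gamma` (target reward margin) are placeholder values, and the actual ones are in the training config linked below.

```python
import torch.nn.functional as F

def simpo_loss(chosen_logps, rejected_logps, chosen_lens, rejected_lens,
               beta=10.0, gamma=5.0):
    """Reference-free preference loss: the length-normalized log-likelihood
    of a response serves as its implicit reward."""
    r_chosen = beta * chosen_logps / chosen_lens        # reward of preferred response
    r_rejected = beta * rejected_logps / rejected_lens  # reward of dispreferred response
    # Bradley-Terry-style loss that pushes the reward gap past the margin gamma.
    return -F.logsigmoid(r_chosen - r_rejected - gamma).mean()
```

Because the reward is the policy's own (length-normalized) likelihood, no frozen reference model is needed, unlike DPO.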
60
+
61
+
62
+ ## Model Details
63
+
64
+ ### Model Description
65
+
66
+ We fine-tuned [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it) on [princeton-nlp/gemma2-ultrafeedback-armorm](https://huggingface.co/datasets/princeton-nlp/gemma2-ultrafeedback-armorm) with the SimPO objective.
67
+
68
+ - **Developed by:** Yu Meng, Mengzhou Xia, Danqi Chen
69
+ - **Model type:** Causal Language Model
70
+ - **License:** gemma
71
+ - **Finetuned from model:** [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it)
72
+

### Model Sources

- **Repository:** https://github.com/princeton-nlp/SimPO
- **Paper:** https://arxiv.org/pdf/2405.14734

## How to Get Started with the Model
```python
import torch
from transformers import pipeline

model_id = "princeton-nlp/gemma-2-9b-it-SimPO"

# Load the model in bfloat16 on the GPU.
generator = pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",
)
outputs = generator(
    [{"role": "user", "content": "What's the difference between llamas and alpacas?"}],
    do_sample=False,
    max_new_tokens=200,
)
print(outputs[0]["generated_text"])
```
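
Passing a list of chat messages (rather than a raw string) makes the pipeline apply the model's chat template automatically, and `do_sample=False` selects deterministic greedy decoding.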

## Training Details

### Training Data

We use [princeton-nlp/gemma2-ultrafeedback-armorm](https://huggingface.co/datasets/princeton-nlp/gemma2-ultrafeedback-armorm) as the preference optimization dataset.
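
To inspect the preference pairs, a minimal sketch (the split and field names here are assumptions following common preference-dataset layouts; check the dataset card for the actual schema):

```python
from datasets import load_dataset

# "train" is an assumed split name; the dataset card lists the available splits.
ds = load_dataset("princeton-nlp/gemma2-ultrafeedback-armorm", split="train")
example = ds[0]
print(example.keys())  # expect prompt / chosen / rejected style fields
```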

#### Training Hyperparameters

The hyperparameters used can be found in the [training config](https://github.com/princeton-nlp/SimPO/blob/main/training_configs/gemma-2-9b-it-simpo.yaml).

#### Speeds, Sizes, Times

Fine-tuning [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it) on [princeton-nlp/gemma2-ultrafeedback-armorm](https://huggingface.co/datasets/princeton-nlp/gemma2-ultrafeedback-armorm) takes around 100 minutes on 8xH100 GPUs.

## Evaluation Results

| models | AE2 LC | AE2 WR | AE2 Length | AH | AH Length | GSM | GSM Length | MMLU | MMLU Length |
|-----------------------------------|:------:|:------:|:----------:|:----:|:---------:|:----:|:----------:|:----:|:-----------:|
| [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it) | 51.1 | 38.1 | 1571 | 40.8 | 545 | 87.4 | 395 | 72.7 | 515 |
| [princeton-nlp/gemma-2-9b-it-DPO](https://huggingface.co/princeton-nlp/gemma-2-9b-it-DPO) | 67.8 | 65.4 | 2016 | 58.9 | 717 | 88.5 | 392 | 72.2 | 624 |
| [princeton-nlp/gemma-2-9b-it-SimPO](https://huggingface.co/princeton-nlp/gemma-2-9b-it-SimPO) | 72.4 | 65.9 | 1833 | 59.1 | 693 | 88.0 | 341 | 72.2 | 441 |

AE2 = AlpacaEval 2 (LC = length-controlled win rate, WR = raw win rate); AH = Arena-Hard; GSM = GSM8K. The Length columns report average response length.

## Technical Specifications

### Model Architecture and Objective

The model architecture is based on [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it). We use the SimPO training objective proposed in our [preprint](https://arxiv.org/pdf/2405.14734).

#### Hardware

We used 8xH100 GPUs for model training.

#### Software

Training was done using the [alignment-handbook](https://github.com/huggingface/alignment-handbook) library.

## Citation

gemma model:
```bibtex
@article{gemma_2024,
  title={Gemma},
  url={https://www.kaggle.com/m/3301},
  DOI={10.34740/KAGGLE/M/3301},
  publisher={Kaggle},
  author={Gemma Team},
  year={2024}
}
```

SimPO paper:
```bibtex
@article{meng2024simpo,
  title={{SimPO}: Simple preference optimization with a reference-free reward},
  author={Meng, Yu and Xia, Mengzhou and Chen, Danqi},
  journal={arXiv preprint arXiv:2405.14734},
  year={2024}
}
```

UltraFeedback paper:
```bibtex
@article{cui2023ultrafeedback,
  title={{UltraFeedback}: Boosting language models with high-quality feedback},
  author={Cui, Ganqu and Yuan, Lifan and Ding, Ning and Yao, Guanming and Zhu, Wei and Ni, Yuan and Xie, Guotong and Liu, Zhiyuan and Sun, Maosong},
  journal={arXiv preprint arXiv:2310.01377},
  year={2023}
}
```

ArmoRM paper:
```bibtex
@article{wang2024interpretable,
  title={Interpretable Preferences via Multi-Objective Reward Modeling and Mixture-of-Experts},
  author={Wang, Haoxiang and Xiong, Wei and Xie, Tengyang and Zhao, Han and Zhang, Tong},
  journal={arXiv preprint arXiv:2406.12845},
  year={2024}
}
```