Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


xpo-qwen2 - AWQ
- Model creator: https://huggingface.co/qgallouedec/
- Original model: https://huggingface.co/qgallouedec/xpo-qwen2/
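
This repository provides an AWQ-quantized build of the model. As a minimal sketch, an AWQ checkpoint can be loaded directly with Transformers when the `autoawq` package is installed; the repository id below is an assumption for illustration, not confirmed by this card:

```python
# Minimal sketch: loading an AWQ-quantized checkpoint with Transformers.
# Requires `pip install autoawq`; the repo id is a hypothetical placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/qgallouedec_-_xpo-qwen2-awq"  # hypothetical repository id
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)
```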
Original model description:
---
base_model: Qwen/Qwen2-0.5B-Instruct
datasets: trl-lib/ultrafeedback-prompt
library_name: transformers
model_name: xpo-qwen2
tags:
- trl
- generated_from_trainer
- xpo
licence: license
---

# Model Card for xpo-qwen2

This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on the [trl-lib/ultrafeedback-prompt](https://huggingface.co/datasets/trl-lib/ultrafeedback-prompt) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
# Load the fine-tuned model into a chat-style text-generation pipeline
# (set device="cpu" if no GPU is available)
generator = pipeline("text-generation", model="qgallouedec/xpo-qwen2", device="cuda")
# Generate a reply to the chat-formatted prompt; the model's answer is the second message
output = generator([{"role": "user", "content": question}], max_new_tokens=500)[0]
print(output["generated_text"][1]["content"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/huggingface/huggingface/runs/bg6y6mom)

This model was trained with XPO, a method introduced in [Exploratory Preference Optimization: Harnessing Implicit Q*-Approximation for Sample-Efficient RLHF](https://huggingface.co/papers/2405.21046).
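
For reference, the sketch below shows what an XPO training run looks like with TRL's `XPOTrainer`, following the example in the TRL documentation. The judge and hyperparameters are illustrative assumptions, not the exact recipe used for this checkpoint:

```python
# Illustrative XPO training loop with TRL (not the exact recipe for this model).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import PairRMJudge, XPOConfig, XPOTrainer

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
judge = PairRMJudge()  # assumed preference judge; requires the llm-blender package
train_dataset = load_dataset("trl-lib/ultrafeedback-prompt", split="train")

training_args = XPOConfig(output_dir="xpo-qwen2", logging_steps=10)
trainer = XPOTrainer(
    model=model,
    judge=judge,
    args=training_args,
    processing_class=tokenizer,
    train_dataset=train_dataset,
)
trainer.train()
```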

### Framework versions

- TRL: 0.12.0.dev0
- Transformers: 4.45.0.dev0
- Pytorch: 2.4.1
- Datasets: 3.0.0
- Tokenizers: 0.19.1

## Citations

Cite XPO as:
    
```bibtex
@article{xie2024exploratory,
    title        = {{Exploratory Preference Optimization: Harnessing Implicit Q*-Approximation for Sample-Efficient RLHF}},
    author       = {Tengyang Xie and Dylan J. Foster and Akshay Krishnamurthy and Corby Rosset and Ahmed Awadallah and Alexander Rakhlin},
    year         = 2024,
    eprint       = {arXiv:2405.21046}
}
```

Cite TRL as:
    
```bibtex
@misc{vonwerra2022trl,
	title        = {{TRL: Transformer Reinforcement Learning}},
	author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
	year         = 2020,
	journal      = {GitHub repository},
	publisher    = {GitHub},
	howpublished = {\url{https://github.com/huggingface/trl}}
}
```