---
base_model:
- Qwen/Qwen2-72B-Instruct
datasets:
- mlabonne/orpo-dpo-mix-40k
language:
- en
library_name: transformers
tags:
- orpo
- qwen2
- rlhf
- sft
license: other
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen2-72B-Instruct/blob/main/LICENSE
---
# dfurman/Qwen2-72B-Orpo-v0.1
## Introduction
Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains a finetune of the instruction-tuned 72B Qwen2 model.

Compared with state-of-the-art open-source language models, including the previously released Qwen1.5, Qwen2 has generally surpassed most open-source models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc.

Qwen2-72B-Instruct supports a context length of up to 131,072 tokens, enabling the processing of extensive inputs. Please refer to the upstream model card for detailed instructions on how to deploy Qwen2 for handling long texts.
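As a minimal sketch of what the long-context setup looks like in `transformers`, the upstream instructions describe enabling YaRN rope scaling for inputs beyond 32,768 tokens. The exact `factor` and `original_max_position_embeddings` values below are assumptions to verify against the upstream Qwen2-72B-Instruct card:

```python
# Hedged sketch: enable YaRN rope scaling for long (>32k-token) inputs.
# The factor and original_max_position_embeddings values are assumptions;
# verify them against the upstream Qwen2-72B-Instruct instructions.
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("Qwen/Qwen2-72B-Instruct")
config.rope_scaling = {
    "type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
}
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2-72B-Instruct", config=config, device_map="auto"
)
```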
For more details, please refer to the Qwen2 blog, GitHub repository, and documentation.
## This finetune
Qwen2-72B-Orpo-v0.1 is a QLoRA finetune of [Qwen/Qwen2-72B-Instruct](https://huggingface.co/Qwen/Qwen2-72B-Instruct) on 1.5k rows of [mlabonne/orpo-dpo-mix-40k](https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k).

You can find the experiment on W&B at this address.
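For readers curious what such a run involves, here is a minimal, hypothetical sketch of an ORPO + QLoRA setup using `trl` and `peft`. All hyperparameters (rank, alpha, beta, lengths, target modules) are illustrative assumptions, not the actual training config; see the W&B experiment for the real values.

```python
# Hypothetical sketch of an ORPO + QLoRA setup with trl and peft.
# Hyperparameters below are illustrative assumptions, not this model's config.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from trl import ORPOConfig, ORPOTrainer

base = "Qwen/Qwen2-72B-Instruct"

# 4-bit NF4 quantization of the frozen base model (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base)

# Trainable low-rank adapters; rank/alpha/targets are assumptions
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# A 1.5k-row slice, matching the row count mentioned above; assumes the
# rows carry the prompt/chosen/rejected fields ORPOTrainer expects
dataset = load_dataset("mlabonne/orpo-dpo-mix-40k", split="train[:1500]")

trainer = ORPOTrainer(
    model=model,
    args=ORPOConfig(output_dir="qwen2-72b-orpo", beta=0.1, max_length=2048),
    train_dataset=dataset,
    tokenizer=tokenizer,  # `processing_class` in newer trl versions
    peft_config=peft_config,
)
trainer.train()
```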
## 💻 Usage
### Setup
```python
!pip install -qU transformers accelerate bitsandbytes
!huggingface-cli download dfurman/Qwen2-72B-Orpo-v0.1
```
```python
from transformers import AutoTokenizer, BitsAndBytesConfig
import transformers
import torch

# Use Flash Attention 2 and bf16 on GPUs with compute capability >= 8 (Ampere+)
if torch.cuda.get_device_capability()[0] >= 8:
    !pip install -qqq flash-attn
    attn_implementation = "flash_attention_2"
    torch_dtype = torch.bfloat16
else:
    attn_implementation = "eager"
    torch_dtype = torch.float16

# quantize if necessary (uncomment and pass via `quantization_config` below)
# bnb_config = BitsAndBytesConfig(
#     load_in_4bit=True,
#     bnb_4bit_quant_type="nf4",
#     bnb_4bit_compute_dtype=torch_dtype,
#     bnb_4bit_use_double_quant=True,
# )

model = "dfurman/Qwen2-72B-Orpo-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={
        "torch_dtype": torch_dtype,
        # "quantization_config": bnb_config,
        "device_map": "auto",
        "attn_implementation": attn_implementation,
    },
)
```
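Note that in bf16 the 72B weights alone occupy roughly 144 GB (72B parameters × 2 bytes), so on a single GPU you will likely need to uncomment the 4-bit `bnb_config` above, which brings the weights down to roughly 36-40 GB (a rough estimate), and pass it via `quantization_config`.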
### Run
question = """The bakers at the Beverly Hills Bakery baked 200 loaves of bread on Monday morning.
They sold 93 loaves in the morning and 39 loaves in the afternoon.
A grocery store then returned 6 unsold loaves back to the bakery.
How many loaves of bread did the bakery have left?
Respond as succinctly as possible. Format the response as a completion of this table:
|step|subquestion|procedure|result|
|:---|:----------|:--------|:-----:|"""
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": question},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# print("***Prompt:\n", prompt)
outputs = pipeline(prompt, max_new_tokens=1000, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print("***Generation:")
print(outputs[0]["generated_text"][len(prompt):])
```
***Generation:
|1|Initial loaves|Start with total loaves|200|
|2|Sold in morning|Subtract morning sales|200 - 93 = 107|
|3|Sold in afternoon|Subtract afternoon sales|107 - 39 = 68|
|4|Returned loaves|Add returned loaves|68 + 6 = 74|
```
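For reference, Qwen2's chat template follows the ChatML convention, so the prompt built by `apply_chat_template` above should look roughly like the following (reconstructed from the ChatML convention rather than copied from the tokenizer, so treat it as an approximation):

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
The bakers at the Beverly Hills Bakery baked 200 loaves of bread on Monday morning.
...<|im_end|>
<|im_start|>assistant
```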