---
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- dpo
- rlhf
- trl
- autoquant
- gguf
---
# Llama3-8B-SuperNova-Spectrum-Hermes-DPO
This model is a DPO fine-tuned version of my DARE_TIES-merged model `yuvraj17/Llama3-8B-SuperNova-Spectrum-dare_ties`, trained on the `yuvraj17/chatml-OpenHermes2.5-dpo-binarized-alpha-2k` dataset.
## DPO (Direct Preference Optimization)

Direct Preference Optimization (DPO) is a fine-tuning technique that aligns a model's responses with human preference (ranking) data directly, without the separate reward model and reinforcement learning loop required by RLHF.
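For reference, the standard DPO objective from Rafailov et al. (2023) is shown below; this is the general formulation, not anything specific to this training run. Given a prompt \\(x\\) with a chosen response \\(y_w\\) and a rejected response \\(y_l\\), DPO trains the policy \\(\pi_\theta\\) against a frozen reference model \\(\pi_{\mathrm{ref}}\\):

$$
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]
$$

Here \\(\beta\\) (0.1 in the training params below) controls how strongly the policy is kept close to the reference model.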
## Training

- Trained on 1x A40 (48 GB VRAM) using the Hugging Face TRL library.
- QLoRA (4-bit precision) for 1 epoch, with the LoRA configuration below (an illustrative quantization sketch follows it).

```python
from peft import LoraConfig

# LoRA configuration
peft_config = LoraConfig(
    r=32,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']
)
```
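The exact 4-bit quantization settings are not recorded in this card, so the snippet below is only a minimal sketch of a typical QLoRA setup with `bitsandbytes`; the `bnb_config` name and every value in it are assumptions, not the author's actual configuration.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Assumed 4-bit NF4 quantization config (typical QLoRA defaults, not taken from this card)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

# Load the DARE_TIES-merged base model in 4-bit before attaching the LoRA adapters
model = AutoModelForCausalLM.from_pretrained(
    "yuvraj17/Llama3-8B-SuperNova-Spectrum-dare_ties",
    quantization_config=bnb_config,
    device_map="auto",
)
```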
### Training Params
The following hyperparameters were used during training:
- learning_rate: 5e-05
- beta: 0.1
- num_devices: 1
- gradient_accumulation_steps: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
Training time: about 1 hour 57 minutes.
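For readers who want to reproduce a similar run, the sketch below shows one way these hyperparameters could be wired into TRL's `DPOTrainer` (using the older TRL API that accepts `beta` directly). It reuses `model` and `peft_config` from the snippets above; the batch size, sequence lengths, and output directory are assumptions, not the author's actual training script.

```python
from datasets import load_dataset
from transformers import AutoTokenizer, TrainingArguments
from trl import DPOTrainer

# Preference dataset with "prompt", "chosen" and "rejected" columns (assumed layout)
dataset = load_dataset("yuvraj17/chatml-OpenHermes2.5-dpo-binarized-alpha-2k", split="train")
tokenizer = AutoTokenizer.from_pretrained("yuvraj17/Llama3-8B-SuperNova-Spectrum-dare_ties")
tokenizer.pad_token = tokenizer.eos_token

training_args = TrainingArguments(
    output_dir="Llama3-8B-SuperNova-Spectrum-Hermes-DPO",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=2,   # assumed; the card only lists gradient accumulation
    gradient_accumulation_steps=4,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_steps=100,
)

trainer = DPOTrainer(
    model,                    # 4-bit base model from the sketch above
    ref_model=None,           # with a PEFT config, TRL derives the frozen reference model internally
    args=training_args,
    beta=0.1,
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,  # LoRA configuration shown earlier
    max_prompt_length=1024,   # assumed
    max_length=1536,          # assumed
)
trainer.train()
```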
## Weights & Biases Report
## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "yuvraj17/Llama3-8B-SuperNova-Spectrum-Hermes-DPO"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
## Evaluation Scores
Coming Soon