
Visualize in Weights & Biases

Gemma 2B Zephyr DPO

The Zephyr DPO recipe applied on top of the SFT-finetuned Gemma 2B

Model description

  • Model type: A 2.5B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
  • Language(s) (NLP): Primarily English
  • Finetuned from model: wandb/gemma-2b-zephyr-sft
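
Below is a minimal inference sketch with 🤗 Transformers. Only the model name comes from this card; the prompt, sampling parameters, and device placement are illustrative, and we assume the tokenizer ships a chat template:

```python
# Minimal inference sketch; everything beyond the model name is an assumed example.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="wandb/gemma-2b-zephyr-dpo",
    torch_dtype=torch.bfloat16,  # checkpoint is stored in BF16
    device_map="auto",
)

# Build a chat-formatted prompt via the tokenizer's chat template (assumed present).
messages = [{"role": "user", "content": "Explain DPO in one sentence."}]
prompt = pipe.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

out = pipe(prompt, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.9)
print(out[0]["generated_text"])
```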

Recipe

We trained using the DPO script from the Alignment Handbook recipe, logging to W&B

Visit the W&B workspace here
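
For orientation, here is a minimal sketch of what a DPO run with trl's `DPOTrainer` looks like, assuming a recent trl release. The dataset choice and hyperparameters below are illustrative assumptions; the authoritative configuration is the Alignment Handbook DPO recipe:

```python
# Illustrative DPO sketch with trl's DPOTrainer; the actual run used the
# Alignment Handbook DPO script, so dataset and hyperparameters are assumptions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "wandb/gemma-2b-zephyr-sft"  # DPO starts from the SFT checkpoint
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# A preference dataset with "prompt"/"chosen"/"rejected" columns (assumed choice).
train_dataset = load_dataset(
    "HuggingFaceH4/ultrafeedback_binarized", split="train_prefs"
)

config = DPOConfig(
    output_dir="gemma-2b-zephyr-dpo",
    beta=0.1,                        # strength of the KL penalty; illustrative value
    per_device_train_batch_size=2,
    learning_rate=5e-7,
    report_to="wandb",               # stream metrics to Weights & Biases
)

trainer = DPOTrainer(
    model=model,                     # reference model is cloned internally when omitted
    args=config,
    train_dataset=train_dataset,
    processing_class=tokenizer,      # `tokenizer=` in older trl releases
)
trainer.train()
```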

License

This model has the same license as the original Gemma model collection.

Compute was provided by Lambda Labs: a single 8xA100 80GB node, used for around 13 hours of training.

