---
library_name: transformers
datasets:
  - HuggingFaceH4/ultrafeedback_binarized
base_model: tcapelle/gemma-7b-zephyr-sft
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
---


# Gemma 7B Zephyr DPO

The Zephyr DPO recipe applied on top of the SFT fine-tuned Gemma 7B.

## Model description

- **Model type:** An 8.5B-parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
- **Language(s) (NLP):** Primarily English
- **Finetuned from model:** [tcapelle/gemma-7b-zephyr-sft](https://huggingface.co/tcapelle/gemma-7b-zephyr-sft)
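
For quick orientation, here is a minimal inference sketch with 🤗 Transformers. The repository id `tcapelle/gemma-7b-zephyr-dpo` is inferred from this model card, and the prompt and generation settings are illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tcapelle/gemma-7b-zephyr-dpo"  # this repository

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# The model is chat-tuned, so format the prompt with the chat template.
messages = [{"role": "user", "content": "Explain DPO in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```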

## Recipe

We trained using the DPO script from the Alignment Handbook recipes, logging to W&B.
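
The sketch below shows the general shape of such a run using TRL's `DPOTrainer`, which the Alignment Handbook's DPO script builds on. All hyperparameters are illustrative placeholders rather than the recipe's actual values; see the W&B workspace for the real configuration.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

sft_model_id = "tcapelle/gemma-7b-zephyr-sft"

tokenizer = AutoTokenizer.from_pretrained(sft_model_id)
model = AutoModelForCausalLM.from_pretrained(sft_model_id, torch_dtype="auto")

# Preference pairs (prompt / chosen / rejected); recent TRL versions render
# these conversational fields with the tokenizer's chat template automatically.
train_dataset = load_dataset(
    "HuggingFaceH4/ultrafeedback_binarized", split="train_prefs"
)

# Placeholder hyperparameters, not the values used for this model.
args = DPOConfig(
    output_dir="gemma-7b-zephyr-dpo",
    beta=0.01,
    learning_rate=5e-7,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    bf16=True,
    report_to="wandb",
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,  # TRL keeps a frozen copy of the policy as the reference
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,  # called `processing_class` in newer TRL releases
)
trainer.train()
```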

Visit the W&B workspace here

## License

This model has the same license as the original Gemma model collection.

Compute provided by Lambda Labs: an 8xA100 (80 GB) node.