---
library_name: transformers
datasets:
- HuggingFaceH4/ultrafeedback_binarized
base_model: tcapelle/gemma-7b-zephyr-sft
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
---
|
|
|
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/llm_surgery/gemma-zephyr)
|
|
|
# Gemma 7B Zephyr DPO
|
|
|
The [Zephyr](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) DPO recipe applied on top of an SFT-finetuned Gemma 7B.
|
|
|
## Model description
|
|
|
- **Model type:** An 8.5B-parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.

- **Language(s) (NLP):** Primarily English

- **Finetuned from model:** [tcapelle/gemma-7b-zephyr-sft](https://huggingface.co/tcapelle/gemma-7b-zephyr-sft/)
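
For reference, here is a minimal sketch of chat-style inference with this model using `transformers`. The repo id and the generation settings are assumptions for illustration, not values stated in this card.

```python
# Minimal inference sketch; the repo id below is assumed from this card's
# naming and may differ, and the generation settings are illustrative.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="tcapelle/gemma-7b-zephyr-dpo",  # assumed repo id
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain DPO in one paragraph."}]
# Render the conversation with the model's chat template before generating.
prompt = pipe.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7)
print(outputs[0]["generated_text"])
```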
|
|
|
## Recipe
|
|
|
We trained using the DPO script from the [Alignment Handbook](https://github.com/huggingface/alignment-handbook/blob/main/scripts/run_dpo.py), logging to W&B.
|
|
|
Visit the [W&B workspace here](https://wandb.ai/llm_surgery/gemma-zephyr?nw=nwusercapecape).
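
For readers who want the gist of that script without the full handbook setup, below is a minimal sketch of the DPO stage using `trl`'s `DPOTrainer`, which `run_dpo.py` builds on. The hyperparameters are illustrative assumptions, not the exact values of this run, and the argument names follow recent `trl` releases.

```python
# Minimal DPO sketch with trl; hyperparameters are illustrative, not the
# exact values used to train this model.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "tcapelle/gemma-7b-zephyr-sft"  # the SFT base this card starts from
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# ultrafeedback_binarized provides prompt/chosen/rejected preference pairs.
train_dataset = load_dataset(
    "HuggingFaceH4/ultrafeedback_binarized", split="train_prefs"
)

args = DPOConfig(
    output_dir="gemma-7b-zephyr-dpo",
    beta=0.01,                       # DPO temperature; illustrative
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    learning_rate=5e-7,
    num_train_epochs=1,
    report_to="wandb",               # the card logs runs to W&B
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,      # `tokenizer=` in older trl releases
)
trainer.train()
```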
|
|
|
|
|
## License
|
This model has the same license as the [original Gemma model collection](https://ai.google.dev/gemma/terms).
|
|
|
## Compute

Training ran on an 8xA100 80GB node provided by [Lambda Labs](https://lambdalabs.com/).
|
|
|
|