---
library_name: transformers
datasets:
- HuggingFaceH4/ultrachat_200k
finetuned_from: google/gemma-7b
---
# Gemma 7B Zephyr SFT

The [Zephyr](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) SFT recipe applied on top of Gemma 7B.
## Recipe
We trained using the [alignment handbook SFT script](https://github.com/huggingface/alignment-handbook/blob/main/scripts/run_sft.py), logging metrics to Weights & Biases.
You can inspect the training runs in the [W&B workspace](https://wandb.ai/llm_surgery/gemma-zephyr?nw=nwusercapecape).
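
For orientation, here is a minimal SFT config sketch in the alignment-handbook style. The field names follow the handbook's recipe files; the hyperparameter values below are illustrative assumptions, not the exact settings used for this run (consult the W&B workspace for the actual values):

```yaml
# Illustrative alignment-handbook style SFT config.
# Base model and dataset come from this card's metadata;
# all hyperparameter values are assumptions, not the run's settings.
model_name_or_path: google/gemma-7b
torch_dtype: bfloat16

# Data: Zephyr-style SFT on UltraChat
dataset_mixer:
  HuggingFaceH4/ultrachat_200k: 1.0
dataset_splits:
- train_sft
- test_sft

# Trainer settings (illustrative)
max_seq_length: 2048
num_train_epochs: 1
learning_rate: 2.0e-05
lr_scheduler_type: cosine
warmup_ratio: 0.1
per_device_train_batch_size: 4
gradient_accumulation_steps: 4
gradient_checkpointing: true
bf16: true
report_to:
- wandb
```

A config like this would be passed to the handbook's `run_sft.py` script referenced above.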