---
library_name: transformers
datasets:
- HuggingFaceH4/ultrachat_200k
base_model: google/gemma-7b
---

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/llm_surgery/gemma-zephyr)

# Gemma 7B Zephyr SFT

This model applies the [Zephyr](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) SFT recipe on top of Gemma 7B.

## Model description

- **Model type:** An 8.5B-parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
- **Language(s) (NLP):** Primarily English
- **Finetuned from model:** [google/gemma-7b](https://huggingface.co/google/gemma-7b)
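
For reference, a minimal generation sketch with 🤗 Transformers. The repo id below is a placeholder for this model's actual id, and the snippet assumes the tokenizer ships a chat template (the Zephyr recipe sets one):

```python
# Minimal generation sketch; "wandb/gemma-7b-zephyr-sft" is a placeholder repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "wandb/gemma-7b-zephyr-sft"  # replace with this model's actual repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain supervised fine-tuning in two sentences."}]
# apply_chat_template formats the conversation with the model's chat template.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```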

## Recipe

We trained using the [alignment handbook SFT recipe](https://github.com/huggingface/alignment-handbook/blob/main/scripts/run_sft.py), logging metrics to W&B.

Visit the [W&B workspace here](https://wandb.ai/llm_surgery/gemma-zephyr?nw=nwusercapecape) to browse the training runs and metrics.
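
The handbook's `run_sft.py` is built on TRL's `SFTTrainer`. Below is a condensed sketch of the same pattern, not the exact run config: the hyperparameters are illustrative and the `SFTTrainer` signature varies across TRL versions.

```python
# Condensed SFT sketch in the style of the alignment handbook's run_sft.py.
# Hyperparameters are illustrative; the exact run config lives in the handbook repo.
from datasets import load_dataset
from transformers import AutoTokenizer, TrainingArguments
from trl import SFTTrainer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")

# Flatten each multi-turn UltraChat conversation into one training string
# (assumes a chat template is set on the tokenizer; the recipe config defines one).
def to_text(example):
    return {"text": tokenizer.apply_chat_template(example["messages"], tokenize=False)}

dataset = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")
dataset = dataset.map(to_text, remove_columns=dataset.column_names)

trainer = SFTTrainer(
    model="google/gemma-7b",
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="gemma-7b-zephyr-sft",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,
        learning_rate=2e-5,
        num_train_epochs=1,
        bf16=True,
        report_to="wandb",  # stream loss curves to Weights & Biases
    ),
)
trainer.train()
```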

## Compute

Training ran on a single 8xA100 (80 GB) node provided by Lambda Labs.