---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- Anthropic/hh-rlhf
---
|
|
|
|
|
# Info
|
|
|
Pythia-1b supervised fine-tuned on the Anthropic hh-rlhf dataset for 1 epoch (SFT model), then trained with DPO [(paper)](https://arxiv.org/abs/2305.18290) on the same dataset for 1 epoch.
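For reference, the DPO objective optimized here scores a preference pair by how much more the policy favors the chosen response over the rejected one, relative to the SFT reference model. A minimal sketch of the per-example loss in plain Python (the function name and `beta` default are illustrative, not taken from this training run):

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Per-example DPO loss from summed token log-probs (natural log).

    loss = -log sigmoid(beta * ((pi_c - ref_c) - (pi_r - ref_r)))
    """
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_ratio - rejected_ratio)
    # -log sigmoid(logits), written stably as log1p(exp(-logits))
    return math.log1p(math.exp(-logits)) if logits > -30 else -logits

# At parity with the reference model the loss is log(2); it falls below
# that once the policy prefers the chosen response more than the reference does.
print(round(dpo_loss(-10.0, -12.0, -11.0, -11.0), 4))
```

`beta` controls how strongly the policy is pulled away from the reference: smaller values keep it closer to the SFT model.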
|
|
|
[wandb log](https://wandb.ai/pythia_dpo/Pythia_DPO_new/runs/jk09pzqb) |
|
|
|
See [Pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) for model details [(paper)](https://arxiv.org/abs/2304.01373).
|
|
|
|
|
# Benchmark results
|
|
|
## Zero shot |
|
|
|
Results for the base model are taken from the [Pythia paper](https://arxiv.org/abs/2304.01373).
|
|
|
|
|
|
|
## Five shot |
|
|