---
base_model:
  - EleutherAI/pythia-160m
---

# Model Card

Pythia-160M is a model designed for research on language model behavior and interpretability, trained on the Pile dataset. Here we evaluate it on HellaSwag; it can be fine-tuned for further experimentation.

## HellaSwag Eval

Evaluated with the EleutherAI LM Evaluation Harness on the checkpoint revision at 100,000 training steps.

| Tasks     | Version | Filter | n-shot | Metric   | Value  | Stderr   |
|-----------|---------|--------|--------|----------|--------|----------|
| hellaswag | 1       | none   | 0      | acc      | 0.2872 | ± 0.0045 |
|           |         | none   | 0      | acc_norm | 0.3082 | ± 0.0046 |
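
The sketch below shows how a comparable 0-shot HellaSwag run could be reproduced with the harness's Python API. It assumes lm-evaluation-harness 0.4+ and a checkpoint revision named `step100000`; adjust the model ID and revision to the checkpoint you actually want to evaluate.

```python
# Sketch: 0-shot HellaSwag evaluation with the EleutherAI LM Evaluation Harness
# (pip install lm-eval). The revision name "step100000" is an assumption;
# check the model repository for the exact tag.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-160m,revision=step100000",
    tasks=["hellaswag"],
    num_fewshot=0,
)

# Prints the acc / acc_norm values summarized in the table above.
print(results["results"]["hellaswag"])
```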

## How to Use

This model was produced purely as a fine-tuning exercise and is not intended for deployment or human-facing interactions.
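
For local experimentation, the checkpoint can be loaded with the Hugging Face `transformers` library. This is a minimal sketch; `MODEL_ID` is a placeholder and should be replaced with this repository's actual Hub path (the base model ID is used here for illustration).

```python
# Minimal sketch: load the checkpoint and generate a short continuation.
# Replace MODEL_ID with this repository's actual path on the Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "EleutherAI/pythia-160m"  # placeholder; use the fine-tuned repo ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

inputs = tokenizer("The Pile is a dataset that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```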