Metrics

| PPL | arc_easy | arc_challenge | piqa | winogrande | hellaswag | mmlu | QA Avg |
|---|---|---|---|---|---|---|---|
| 74461.16 | 25.29 ± 0.89 | 23.38 ± 1.24 | 52.45 ± 1.17 | 50.99 ± 1.40 | 25.54 ± 0.44 | - | 35.53 |
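The QA Avg column is the unweighted mean of the five reported QA accuracies (mmlu is excluded, as it was not evaluated). A minimal sketch of that calculation, assuming simple averaging:

```python
# Unweighted mean of the reported QA accuracies; mmlu is excluded
# because no score was reported for it.
qa_scores = {
    "arc_easy": 25.29,
    "arc_challenge": 23.38,
    "piqa": 52.45,
    "winogrande": 50.99,
    "hellaswag": 25.54,
}
qa_avg = sum(qa_scores.values()) / len(qa_scores)
print(f"QA Avg: {qa_avg:.2f}")  # -> 35.53
```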

The training method is based on the BitDistiller paper.
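BitDistiller pairs quantization-aware training with self-distillation from the full-precision model. As a rough illustration only, here is a generic temperature-scaled KL distillation loss in PyTorch; this is an assumption for clarity, not the paper's exact objective, which uses its own distillation formulation.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """Generic KL-based distillation loss (teacher -> student).

    A standard formulation for illustration; BitDistiller's actual
    objective may differ in detail.
    """
    t = temperature
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    # batchmean KL, scaled by T^2 as is conventional in distillation
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * (t * t)
```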

  • License: mit
  • Finetuned from: TinyLlama/TinyLlama_v1.1