Metrics
| PPL | arc_easy | arc_challenge | piqa | winogrande | hellaswag | mmlu | QA Avg |
|---|---|---|---|---|---|---|---|
| 74461.16 | 25.29 ± 0.89 | 23.38 ± 1.24 | 52.45 ± 1.17 | 50.99 ± 1.40 | 25.54 ± 0.44 | - | 35.53 |

QA columns report accuracy (%) ± standard error; PPL is perplexity. MMLU was not evaluated, so QA Avg is the mean of the five remaining task accuracies.
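The benchmark columns correspond to standard lm-evaluation-harness tasks. The exact harness version and evaluation settings behind the table are not stated on this card, so the following is only a minimal sketch, assuming EleutherAI's lm-evaluation-harness (`pip install lm-eval`) and its Python entry point, with the repo id taken from the model tree below.

```python
# Sketch: re-running the QA benchmarks with EleutherAI's lm-evaluation-harness.
# Assumes `pip install lm-eval`; few-shot and batch settings used for the table
# are not stated on this card, so harness defaults are used here.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=BrownianNotion/Llama-2-7b-hf_random_baseline,dtype=float16",
    tasks=["arc_easy", "arc_challenge", "piqa", "winogrande", "hellaswag", "mmlu"],
    batch_size=8,
)

# The harness reports accuracies as fractions; multiply by 100 to compare with the table.
for task, metrics in results["results"].items():
    print(task, metrics.get("acc,none"), metrics.get("acc_stderr,none"))
```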
The training method is based on the BitDistiller paper.
- License: MIT
- Finetuned from: TinyLlama/TinyLlama_v1.1
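The card lists the base model but no usage snippet. Below is a minimal loading sketch, assuming the checkpoint works with the standard transformers API; the corpus behind the reported PPL is not stated here, so WikiText-2 is used purely as an illustrative stand-in.

```python
# Sketch: loading the checkpoint with transformers and measuring perplexity.
# WikiText-2 is an assumed stand-in corpus; the card does not name the PPL dataset.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "BrownianNotion/Llama-2-7b-hf_random_baseline"  # repo id from the model tree below
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")
model.eval()

text = "\n\n".join(load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"])
ids = tok(text, return_tensors="pt").input_ids[:, :2048].to(model.device)  # single window

with torch.no_grad():
    loss = model(ids, labels=ids).loss  # mean token-level cross-entropy
print("perplexity:", torch.exp(loss).item())
```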
Model tree for BrownianNotion/Llama-2-7b-hf_random_baseline
- Base model: TinyLlama/TinyLlama_v1.1