Exllamav2 quant (exl2 / 5.0 bpw) made with ExLlamaV2 v0.1.1
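To run this quant locally, here is a minimal sketch using the ExLlamaV2 Python API; the model directory, prompt, and sampling values are placeholders, not settings prescribed by this card:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Placeholder path to the downloaded 5.0 bpw EXL2 weights.
model_dir = "./L3-SnowStorm-v1.15-4x8B-exl2-5.0bpw"

config = ExLlamaV2Config()
config.model_dir = model_dir
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # cache tensors allocated as layers load
model.load_autosplit(cache)                # split layers across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8                 # example sampling values only
settings.top_p = 0.9

print(generator.generate_simple("Once upon a time", settings, 200))
```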
Other EXL2 quants of this model are also available.

An experimental RP-oriented MoE; the idea was to get a model equal to or better than Mixtral 8x7B and its finetunes at RP/ERP tasks.

Llama 3 SnowStorm v1.15B 4x8B config:
base_model: Sao10K_L3-8B-Stheno-v3.1
gate_mode: random
dtype: bfloat16
experts_per_token: 2
experts:
- source_model: Nitral-AI_Poppy_Porpoise-1.0-L3-8B
- source_model: NeverSleep_Llama-3-Lumimaid-8B-v0.1-OAS
- source_model: openlynn_Llama-3-Soliloquy-8B-v2
- source_model: Sao10K_L3-8B-Stheno-v3.1
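The block above is mergekit's MoE config format. As an illustrative sketch (not part of the original card; the file name, output path, and the `mergekit-moe` invocation in the comment are assumptions), the same config could be written out programmatically before merging:

```python
# Sketch: dump the MoE merge config shown above to a YAML file that a tool such
# as mergekit-moe could consume (e.g. `mergekit-moe moe-config.yaml ./out`).
# Model names mirror the card; file name and output path are placeholders.
import yaml  # PyYAML

moe_config = {
    "base_model": "Sao10K_L3-8B-Stheno-v3.1",
    "gate_mode": "random",        # random routing, no prompt-based gate tuning
    "dtype": "bfloat16",
    "experts_per_token": 2,       # two of the four experts are active per token
    "experts": [
        {"source_model": "Nitral-AI_Poppy_Porpoise-1.0-L3-8B"},
        {"source_model": "NeverSleep_Llama-3-Lumimaid-8B-v0.1-OAS"},
        {"source_model": "openlynn_Llama-3-Soliloquy-8B-v2"},
        {"source_model": "Sao10K_L3-8B-Stheno-v3.1"},
    ],
}

with open("moe-config.yaml", "w") as f:
    yaml.safe_dump(moe_config, f, sort_keys=False)
```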
Models used
- Nitral-AI/Poppy_Porpoise-1.0-L3-8B
- NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
- openlynn/Llama-3-Soliloquy-8B-v2
- Sao10K/L3-8B-Stheno-v3.1
Differences from SnowStorm v1.0
- Updated Poppy_Porpoise from ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B to Nitral-AI/Poppy_Porpoise-1.0-L3-8B
- Changed the base model from NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS to Sao10K/L3-8B-Stheno-v3.1
Vision
Prompt format: Llama 3
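As a quick reference, here is a sketch of how a Llama 3 Instruct prompt is laid out; the system and user strings are invented examples, not wording from this card:

```python
# Llama 3 Instruct prompt layout; generation should stop at <|eot_id|>.
system = "You are a creative roleplay assistant."   # placeholder system prompt
user = "Describe the snowstorm outside the cabin."  # placeholder user turn

prompt = (
    "<|begin_of_text|>"
    "<|start_header_id|>system<|end_header_id|>\n\n" f"{system}<|eot_id|>"
    "<|start_header_id|>user<|end_header_id|>\n\n" f"{user}<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
)
print(prompt)
```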
Open LLM Leaderboard Evaluation Results
Detailed results can be found on the Open LLM Leaderboard.
Metric | Value |
---|---|
Avg. | 68.01 |
AI2 Reasoning Challenge (25-Shot) | 60.67 |
HellaSwag (10-Shot) | 81.60 |
MMLU (5-Shot) | 68.12 |
TruthfulQA (0-shot) | 51.69 |
Winogrande (5-shot) | 76.56 |
GSM8k (5-shot) | 69.45 |
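As a sanity check on the table (not part of the original card), the reported average is simply the mean of the six benchmark scores:

```python
# Mean of the six Open LLM Leaderboard scores listed above.
scores = [60.67, 81.60, 68.12, 51.69, 76.56, 69.45]
print(sum(scores) / len(scores))  # ≈ 68.015, reported as Avg. 68.01
```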