🦙 Meta-Llama-3.1-8B-Instruct-abliterated


🦙 Llama 3.1 8B Instruct abliterated

This is an uncensored version of Llama 3.1 8B Instruct created with abliteration (see this article to learn more about it).

Special thanks to @FailSpy for the original code and technique. Please follow him if you're interested in abliterated models.
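For readers who don't want to open the article, the core idea can be sketched in a few lines: collect residual-stream activations on refused and non-refused prompts, take the normalized mean difference as a "refusal direction", and orthogonalize the model's weights against it so the layers can no longer write that direction. The sketch below is a minimal, hypothetical illustration of that idea in plain PyTorch (the activation tensors are assumed to already be collected); it is not FailSpy's actual code.

```python
import torch

def refusal_direction(harmful_acts: torch.Tensor, harmless_acts: torch.Tensor) -> torch.Tensor:
    """Normalized mean-difference direction between residual-stream activations.

    Both inputs are [n_prompts, d_model] tensors collected at a chosen layer and
    token position (hypothetical names; how you collect them is up to you).
    """
    direction = harmful_acts.mean(dim=0) - harmless_acts.mean(dim=0)
    return direction / direction.norm()

def orthogonalize(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Remove the component of `weight`'s output that lies along `direction`.

    `weight` is a [d_model, d_in] matrix that writes into the residual stream;
    subtracting the projection d (d^T W) means the edited layer can no longer
    emit the refusal direction, which is the core of abliteration.
    """
    proj = torch.outer(direction, direction @ weight)
    return weight - proj
```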

⚡️ Quantization

Thanks to ZeroWw and Apel-sin for the quants.
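If you want to try one of the quants locally, a minimal sketch using huggingface_hub and llama-cpp-python is shown below. The exact GGUF filename is an assumption, so check this repo's file listing for the quant level you want.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one GGUF quant from this repo (filename is illustrative -- pick the
# real one from the repo's "Files" tab).
model_path = hf_hub_download(
    repo_id="mav23/Meta-Llama-3.1-8B-Instruct-abliterated-GGUF",
    filename="meta-llama-3.1-8b-instruct-abliterated.Q4_K_M.gguf",
)

# Load the quantized model and run a simple chat completion.
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain abliteration in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```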

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric              | Value |
|---------------------|-------|
| Avg.                | 23.13 |
| IFEval (0-shot)     | 73.29 |
| BBH (3-shot)        | 27.13 |
| MATH Lvl 5 (4-shot) | 6.42  |
| GPQA (0-shot)       | 0.89  |
| MuSR (0-shot)       | 3.21  |
| MMLU-PRO (5-shot)   | 27.81 |
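If you want to rerun these evaluations locally, something like the following lm-evaluation-harness call is a starting point. The "leaderboard_*" task names, the dtype, and the upstream full-precision repo id are assumptions and may not match the leaderboard's exact configuration, so treat this as a sketch rather than a faithful reproduction.

```python
import lm_eval

# Hypothetical reproduction sketch: evaluates the (non-GGUF) upstream model on
# a subset of the Open LLM Leaderboard task groups via the harness API.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated,dtype=bfloat16",
    tasks=["leaderboard_ifeval", "leaderboard_bbh", "leaderboard_mmlu_pro"],
)
print(results["results"])
```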
Available GGUF quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit variants of the 8.03B-parameter llama-architecture model.
