---
base_model: huihui-ai/Falcon3-10B-Instruct-abliterated
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
datasets:
- mlabonne/orpo-dpo-mix-40k
- unalignment/toxic-dpo-v0.2
---
![image/webp](https://cdn-uploads.huggingface.co/production/uploads/6540a02d1389943fef4d2640/AWgZEpPD-oxWMyOmHhoi1.webp)
This is **huihui-ai/Falcon3-10B-Instruct-abliterated** fine-tuned with **unsloth** using ORPO on **mlabonne/orpo-dpo-mix-40k** for one epoch, then fine-tuned again on **unalignment/toxic-dpo-v0.2** for one epoch to further uncensor the model. Provided as a GGUF in Q5_K_M quantization.
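
For reference, here is a minimal sketch of what the ORPO stage above could look like with Unsloth and TRL. The LoRA rank, batch size, sequence length, and other hyperparameters are assumptions for illustration, not the exact settings used for this model.

```python
from unsloth import FastLanguageModel  # import unsloth first so its patches apply
from datasets import load_dataset
from trl import ORPOConfig, ORPOTrainer

# Load the abliterated base model with Unsloth's 4-bit loader.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="huihui-ai/Falcon3-10B-Instruct-abliterated",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; rank/alpha and target modules are illustrative.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# The mix provides prompt / chosen / rejected preference pairs.
dataset = load_dataset("mlabonne/orpo-dpo-mix-40k", split="train")

trainer = ORPOTrainer(
    model=model,
    processing_class=tokenizer,  # older trl releases take tokenizer= instead
    train_dataset=dataset,
    args=ORPOConfig(
        num_train_epochs=1,
        per_device_train_batch_size=1,
        gradient_accumulation_steps=4,
        output_dir="orpo-output",
    ),
)
trainer.train()
```

The toxic-dpo-v0.2 pass would follow the same pattern with the second dataset loaded in place of the mix.
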
The result is a very capable uncensored model for its size (10B). *When prompted* ***correctly***, this model shows no refusals. It scores an average of 33.44% on the Open LLM Leaderboard.
Run the GGUF with **LMStudio.ai**.
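
If you prefer a scripted alternative to LM Studio, here is a minimal sketch using `llama-cpp-python`; the GGUF filename below is a placeholder for the Q5_K_M file in this repo.

```python
from llama_cpp import Llama

# Load the quantized model; the filename is a placeholder for the
# Q5_K_M GGUF shipped in this repository.
llm = Llama(
    model_path="Falcon3-10B-Instruct-abliterated.Q5_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=-1,  # offload all layers to GPU when available
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize ORPO in two sentences."}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```
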
**Open LLM Leaderboard Evaluation Results**
| Metric              | Value (%) |
|---------------------|----------:|
| **Average**         |     33.44 |
| IFEval (0-Shot)     |     77.31 |
| BBH (3-Shot)        |     43.57 |
| MATH Lvl 5 (4-Shot) |     22.89 |
| GPQA (0-shot)       |     10.40 |
| MuSR (0-shot)       |      9.39 |
| MMLU-PRO (5-shot)   |     37.07 |