|
--- |
|
base_model: huihui-ai/Falcon3-10B-Instruct-abliterated |
|
tags: |
|
- text-generation-inference |
|
- transformers |
|
- unsloth |
|
- llama |
|
- trl |
|
license: apache-2.0 |
|
language: |
|
- en |
|
datasets: |
|
- mlabonne/orpo-dpo-mix-40k |
|
--- |
|
|
|
|
|
|
|
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6540a02d1389943fef4d2640/-r7vmnMD5NqFi7fsHwPwz.jpeg) |
|
|
|
This is **huihui-ai/Falcon3-10B-Instruct-abliterated** fine-tuned with **unsloth** using ORPO on **mlabonne/orpo-dpo-mix-40k** for one epoch. A hedged sketch of such a training setup is shown below.
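
For reference, here is a minimal sketch of how an ORPO run like this can be set up with **unsloth** and **trl**. The hyperparameters (LoRA rank, learning rate, batch sizes, sequence lengths, output path) are illustrative assumptions, not the exact recipe used for this model, and exact argument names can vary slightly between trl versions.

```python
# Hedged sketch: illustrative ORPO fine-tuning with unsloth + trl.
# Hyperparameters and the output path are assumptions, not the exact recipe used here.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import ORPOConfig, ORPOTrainer

# Load the abliterated base model with unsloth's fast loader (4-bit to fit consumer GPUs).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="huihui-ai/Falcon3-10B-Instruct-abliterated",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; rank, alpha, and target modules are illustrative.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# The dataset provides chosen/rejected preference pairs; depending on the trl
# version it may need mapping into explicit prompt/chosen/rejected text fields.
dataset = load_dataset("mlabonne/orpo-dpo-mix-40k", split="train")

trainer = ORPOTrainer(
    model=model,
    args=ORPOConfig(
        output_dir="falcon3-10b-orpo",    # hypothetical output path
        num_train_epochs=1,               # one epoch, as stated in the card
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=8e-6,
        beta=0.1,                         # ORPO odds-ratio weighting
        max_length=2048,
        max_prompt_length=1024,
    ),
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```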
|
|
|
The result is a very capable uncensored model for its size (10B). When prompted ***correctly***, the model shows no refusals. It averages 33.44% on the Open LLM Leaderboard. With the GGUF (Q4_K_M) quant coming in at around 6.4 GB, the model is usable on most devices, even without a GPU.
|
|
|
Run the GGUF with **LMStudio.ai**.
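
If you prefer a script over LM Studio, the same GGUF can be loaded with **llama-cpp-python** (an assumption; any GGUF runner works). A minimal sketch follows; the model file name is hypothetical, so point it at whichever GGUF you downloaded.

```python
# Hedged sketch: running a Q4_K_M GGUF quant with llama-cpp-python instead of LM Studio.
from llama_cpp import Llama

llm = Llama(
    model_path="falcon3-10b-instruct-abliterated-orpo.Q4_K_M.gguf",  # hypothetical file name
    n_ctx=4096,        # context window
    n_gpu_layers=0,    # 0 = CPU only; raise this if you have a GPU
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what ORPO training does."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```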
|
|
|
|
|
**Open LLM Leaderboard Evaluation Results**
|
|
|
| Metric              | Value (%) |
|---------------------|----------:|
| **Average**         |     33.44 |
| IFEval (0-Shot)     |     77.31 |
| BBH (3-Shot)        |     43.57 |
| MATH Lvl 5 (4-Shot) |     22.89 |
| GPQA (0-shot)       |     10.40 |
| MuSR (0-shot)       |      9.39 |
| MMLU-PRO (5-shot)   |     37.07 |
|
|
|
|
|
|
|
|
|
|