---
library_name: transformers
license: other
pipeline_tag: text-generation
base_model: mlabonne/Daredevil-8B-abliterated
---

# Daredevil-8B-abliterated-GGUF

This is a quantized version of [mlabonne/Daredevil-8B-abliterated](https://huggingface.co/mlabonne/Daredevil-8B-abliterated) created using llama.cpp.
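
To run the GGUF files directly, here is a minimal sketch using [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); the filename below is a placeholder, so substitute whichever quant you download from this repo:

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Placeholder filename: use the quant level you actually downloaded.
llm = Llama(
    model_path="./Daredevil-8B-abliterated.Q4_K_M.gguf",
    n_ctx=8192,       # context length
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is a large language model?"}],
    max_tokens=256,
    temperature=0.7,
)
print(output["choices"][0]["message"]["content"])
```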

# Model Description

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/gFEhcIDSKa3AWpkNfH91q.jpeg)

Abliterated version of [mlabonne/Daredevil-8B](https://huggingface.co/mlabonne/Daredevil-8B) using [failspy](https://huggingface.co/failspy)'s notebook.

It is based on the technique described in the blog post "[Refusal in LLMs is mediated by a single direction](https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction)": refusal behavior is mediated by a single direction in the model's residual stream, and ablating that direction removes the model's tendency to refuse requests without any fine-tuning.
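
As a rough illustration of the core operation (a minimal sketch, not failspy's actual notebook), directional ablation projects the estimated "refusal direction" out of the residual-stream activations; `refusal_dir` below is assumed to be a direction estimated from contrasting harmful and harmless prompts:

```python
import torch

def ablate_direction(hidden: torch.Tensor, refusal_dir: torch.Tensor) -> torch.Tensor:
    """Remove the component of `hidden` that lies along `refusal_dir`.

    hidden:      residual-stream activations, shape (..., d_model)
    refusal_dir: estimated refusal direction, shape (d_model,)
    """
    r = refusal_dir / refusal_dir.norm()     # unit vector r_hat
    coeff = hidden @ r                       # projection coefficient <h, r_hat>
    return hidden - coeff.unsqueeze(-1) * r  # h' = h - <h, r_hat> * r_hat

# Example: ablate a (random, illustrative) direction from a batch of activations.
h = torch.randn(4, 16, 4096)  # (batch, seq, d_model)
refusal_dir = torch.randn(4096)
h_ablated = ablate_direction(h, refusal_dir)
```

The same edit can typically be baked into the weights ("orthogonalization") by subtracting the refusal-direction component from each matrix that writes to the residual stream, so the released checkpoint needs no runtime hooks.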

Thanks to Andy Arditi, Oscar Balcells Obeso, Aaquib111, Wes Gurnee, Neel Nanda, and failspy.

## 🔎 Applications

This is an uncensored model. You can use it for any application that doesn't require alignment, like role-playing.

Tested on LM Studio using the "Llama 3" preset.

## 🏆 Evaluation

### Open LLM Leaderboard

Daredevil-8B-abliterated is the second-best-performing 8B model on the Open LLM Leaderboard in terms of MMLU score (27 May 2024).

![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/ekwRGgnjzEOyprT8sEBFt.png)

### Nous

Evaluation performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval). See the entire leaderboard [here](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard).

| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---:|---:|---:|---:|---:|
| [mlabonne/Daredevil-8B](https://huggingface.co/mlabonne/Daredevil-8B) [📄](https://gist.github.com/mlabonne/080f9c5f153ea57a7ab7d932cf896f21) | 55.87 | 44.13 | 73.52 | 59.05 | 46.77 |
| [**mlabonne/Daredevil-8B-abliterated**](https://huggingface.co/mlabonne/Daredevil-8B-abliterated) [📄](https://gist.github.com/mlabonne/32cdd8460804662c856bcb2a20acd49e) | **55.06** | **43.29** | **73.33** | **57.47** | **46.17** |
| [mlabonne/Llama-3-8B-Instruct-abliterated-dpomix](https://huggingface.co/mlabonne/Llama-3-8B-Instruct-abliterated-dpomix) [📄](https://gist.github.com/mlabonne/d711548df70e2c04771cc68ab33fe2b9) | 52.26 | 41.60 | 69.95 | 54.22 | 43.26 |
| [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) [📄](https://gist.github.com/mlabonne/8329284d86035e6019edb11eb0933628) | 51.34 | 41.22 | 69.86 | 51.65 | 42.64 |
| [failspy/Meta-Llama-3-8B-Instruct-abliterated-v3](https://huggingface.co/failspy/Meta-Llama-3-8B-Instruct-abliterated-v3) [📄](https://gist.github.com/mlabonne/f46cce0262443365e4cce2b6fa7507fc) | 51.21 | 40.23 | 69.50 | 52.44 | 42.69 |
| [mlabonne/OrpoLlama-3-8B](https://huggingface.co/mlabonne/OrpoLlama-3-8B) [📄](https://gist.github.com/mlabonne/22896a1ae164859931cc8f4858c97f6f) | 48.63 | 34.17 | 70.59 | 52.39 | 37.36 |
| [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) [📄](https://gist.github.com/mlabonne/616b6245137a9cfc4ea80e4c6e55d847) | 45.42 | 31.10 | 69.95 | 43.91 | 36.70 |

## 🌳 Model family tree

![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/ekwRGgnjzEOyprT8sEBFt.png)

## 💻 Usage

The snippet below runs the original, unquantized [mlabonne/Daredevil-8B-abliterated](https://huggingface.co/mlabonne/Daredevil-8B-abliterated) with 🤗 Transformers:

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "mlabonne/Daredevil-8B-abliterated"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the conversation with Llama 3's chat template and append the
# assistant header so the model starts generating a reply.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
# Note: `generated_text` contains the prompt followed by the model's reply.
print(outputs[0]["generated_text"])
```