Update README.md
README.md CHANGED
@@ -1,11 +1,11 @@
 ---
 library_name: transformers
 license: apache-2.0
-license_link: https://huggingface.co/huihui-ai/Qwen2.5-
+license_link: https://huggingface.co/huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated/blob/main/LICENSE
 language:
 - en
 pipeline_tag: text-generation
-base_model: Qwen/Qwen2.5-
+base_model: Qwen/Qwen2.5-Coder-7B-Instruct
 tags:
 - chat
 - abliterated
@@ -15,7 +15,7 @@ tags:
 # huihui-ai/Qwen2.5-Code-1.5B-Instruct-abliterated
 
 
-This is an uncensored version of [Qwen2.5-
+This is an uncensored version of [Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) created with abliteration (see [this article](https://huggingface.co/blog/mlabonne/abliteration) to know more about it).
 
 Special thanks to [@FailSpy](https://huggingface.co/failspy) for the original code and technique. Please follow him if you're interested in abliterated models.
 
@@ -27,7 +27,7 @@ You can use this model in your applications by loading it with Hugging Face's `t
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
 # Load the model and tokenizer
-model_name = "huihui-ai/Qwen2.5-
+model_name = "huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated"
 model = AutoModelForCausalLM.from_pretrained(
     model_name,
     torch_dtype="auto",
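For reference, with the first hunk applied, the model card's YAML front matter (as shown on the new side of the diff) reads in full:

```yaml
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
tags:
- chat
- abliterated
---
```

The `base_model` and `license_link` fields now point at the 7B Coder variant, which is what the Hugging Face Hub uses to link the card back to its upstream model.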