---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2
tags:
- chat
- abliterated
- uncensored
---
# Qwen2.5-14B-Instruct-abliterated-v2-exl2
Model: [Qwen2.5-14B-Instruct-abliterated-v2](https://huggingface.co/huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2)
Made by: [huihui-ai](https://huggingface.co/huihui-ai)
## Quants
- [4bpw h6 (main)](https://huggingface.co/cgus/Qwen2.5-14B-Instruct-abliterated-v2-exl2/tree/main)
- [4.5bpw h6](https://huggingface.co/cgus/Qwen2.5-14B-Instruct-abliterated-v2-exl2/tree/4.5bpw-h6)
- [5bpw h6](https://huggingface.co/cgus/Qwen2.5-14B-Instruct-abliterated-v2-exl2/tree/5bpw-h6)
- [6bpw h6](https://huggingface.co/cgus/Qwen2.5-14B-Instruct-abliterated-v2-exl2/tree/6bpw-h6)
- [8bpw h8](https://huggingface.co/cgus/Qwen2.5-14B-Instruct-abliterated-v2-exl2/tree/8bpw-h8)
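Each quant above lives in its own branch, so to fetch a specific one you can download the matching revision, for example with `huggingface_hub` (a minimal sketch; the local directory path is a placeholder):

```python
from huggingface_hub import snapshot_download

# Download the 5bpw h6 quant branch (local_dir is a placeholder path)
snapshot_download(
    repo_id="cgus/Qwen2.5-14B-Instruct-abliterated-v2-exl2",
    revision="5bpw-h6",
    local_dir="models/Qwen2.5-14B-Instruct-abliterated-v2-exl2",
)
```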
## Quantization notes
Quantized with exllamav2 0.2.3 using the default calibration dataset.
Exl2 quants can be used with Nvidia RTX 2000 series or newer GPUs on Windows/Linux, or AMD GPUs on Linux.
This format works best when the model fully fits in your GPU's VRAM; otherwise it's better to use GGUF versions.
For example, on an RTX 3060 (12GB) I could fit the 4.5bpw or 5bpw quant with Q6 cache and 16k context.
Use with Text-Generation-WebUI, TabbyAPI or other apps that have an exllamav2 loader.
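If you'd rather load a quant directly from Python, the sketch below shows roughly how it might look with the exllamav2 API (an illustration only, assuming a locally downloaded quant at a placeholder path; the apps above handle all of this for you):

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache_Q6, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

# Point the config at a downloaded quant branch (placeholder path)
config = ExLlamaV2Config("models/Qwen2.5-14B-Instruct-abliterated-v2-exl2")
model = ExLlamaV2(config)

# Q6 quantized KV cache with 16k context, matching the RTX 3060 example above
cache = ExLlamaV2Cache_Q6(model, max_seq_len=16384, lazy=True)
model.load_autosplit(cache)

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Hello, how are you?", max_new_tokens=200))
```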
# Original model card
# huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2
This is an uncensored version of [Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) created with abliteration (see [this article](https://huggingface.co/blog/mlabonne/abliteration) to know more about it).
Special thanks to [@FailSpy](https://huggingface.co/failspy) for the original code and technique. Please follow him if you're interested in abliterated models.
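For context, abliteration works by finding a "refusal direction" in the model's activations and removing its influence from the weights. A toy sketch of the core weight edit (not huihui-ai's actual script; estimating the direction from contrastive harmful/harmless prompts is omitted) might look like:

```python
import torch

def ablate_direction(weight: torch.Tensor, refusal_dir: torch.Tensor) -> torch.Tensor:
    """Project the refusal direction out of a weight matrix's outputs.

    Illustrative only: computes (I - d d^T) W so the layer can no longer
    write along the refusal direction d.
    """
    d = refusal_dir / refusal_dir.norm()        # unit refusal direction
    return weight - torch.outer(d, d) @ weight  # remove the d-component
```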
**Important Note:** This version is an improvement over the previous release, [Qwen2.5-14B-Instruct-abliterated](https://huggingface.co/huihui-ai/Qwen2.5-14B-Instruct-abliterated).
## Usage
You can use this model in your applications by loading it with Hugging Face's `transformers` library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer
model_name = "huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Initialize conversation context
initial_messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."}
]
messages = initial_messages.copy()  # Copy the initial conversation context

# Enter conversation loop
while True:
    # Get user input
    user_input = input("User: ").strip()  # Strip leading and trailing spaces

    # If the user types '/exit', end the conversation
    if user_input.lower() == "/exit":
        print("Exiting chat.")
        break

    # If the user types '/clean', reset the conversation context
    if user_input.lower() == "/clean":
        messages = initial_messages.copy()  # Reset conversation context
        print("Chat history cleared. Starting a new conversation.")
        continue

    # If input is empty, prompt the user and continue
    if not user_input:
        print("Input cannot be empty. Please enter something.")
        continue

    # Add user input to the conversation
    messages.append({"role": "user", "content": user_input})

    # Build the chat template
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True
    )

    # Tokenize input and prepare it for the model
    model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

    # Generate a response from the model
    generated_ids = model.generate(
        **model_inputs,
        max_new_tokens=8192
    )

    # Extract model output, removing special tokens
    generated_ids = [
        output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
    ]
    response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

    # Add the model's response to the conversation
    messages.append({"role": "assistant", "content": response})

    # Print the model's response
    print(f"Qwen: {response}")
```
## Evaluations
Evaluations are ongoing; results will be added later.