---
license: apache-2.0
license_link: https://huggingface.co/pipihand01/QwQ-32B-Preview-abliterated-linear50/blob/main/LICENSE
language:
- en
base_model:
- Qwen/QwQ-32B-Preview
- huihui-ai/QwQ-32B-Preview-abliterated
tags:
- chat
- abliterated
- uncensored
- mergekit
- merge
library_name: transformers
---
This is a 50% abliterated model obtained by linear-weighted merging of [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview) (weight: 0.5) and [huihui-ai/QwQ-32B-Preview-abliterated](https://huggingface.co/huihui-ai/QwQ-32B-Preview-abliterated) (weight: 0.5), using [mergekit](https://github.com/arcee-ai/mergekit).
This is an experimental model. In my preliminary experiments, it gives more natural results than Qwen's original model for sensitive content while still maintaining its refusal capability.
Based on some of my experiments with "sensitive content" generation, the higher the percentage of abliteration, the more peaceful but direct the results tend to be, with less "conflict and disagreement".
For the best "uncensoring" effect with a low-percentage abliteration mixture like this one, use it for RP or story writing without official or other "AI assistant" prompts. For example, use chat mode instead of instruct mode in [Text generation web UI](https://github.com/oobabooga/text-generation-webui), and avoid prompting it as an AI assistant.
In my experiments, this model drops some of the artificial wording of the "censorship" when prompted correctly for RP or story writing.
I also offer other abliteration percentages, so you can find the one that best suits your use case.
Alternatively, you may use [this LoRA](https://huggingface.co/pipihand01/QwQ-32B-Preview-abliterated-lora-rank32) if you know how to apply a LoRA and adjust its weight in the app you use.
**NOTE: I bear no responsibility for any output of this model. When prompted accordingly, this model may generate content that is not suitable in some situations. Use it at your own risk.**
---
# pipihand01/QwQ-32B-Preview-abliterated-linear50
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview)
* [huihui-ai/QwQ-32B-Preview-abliterated](https://huggingface.co/huihui-ai/QwQ-32B-Preview-abliterated)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Qwen/QwQ-32B-Preview
parameters:
weight: 0.5
- model: huihui-ai/QwQ-32B-Preview-abliterated
parameters:
weight: 0.5
merge_method: linear
dtype: bfloat16
```
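To reproduce the merge, the config above can be fed to mergekit's CLI. A minimal sketch, assuming a recent mergekit install and sufficient disk and GPU memory (the output directory name is illustrative):

```shell
# Save the YAML above as config.yaml, then run mergekit's CLI.
pip install mergekit
mergekit-yaml config.yaml ./QwQ-32B-Preview-abliterated-linear50 --cuda
```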