---
base_model:
- JackCloudman/QwQ-56B-Ghost
library_name: transformers
tags:
- mergekit
- merge
- uncensored
- abliterated
- chat
license: apache-2.0
language:
- en
pipeline_tag: text-generation
---
# JackCloudman/QwQ-56B-Ghost-GGUF
**QwQ-56B-Ghost** is an extended model derived from QwQ-32B-Preview-jackterated using the _passthrough merging_ technique, inspired by models like [miqu-1-120b](https://huggingface.co/wolfram/miqu-1-120b). The result is a **~56B parameter model** built by stacking overlapping 16-layer slices of the 64-layer base model.
## Merge Details
### Merge Method
This model was merged using the passthrough merge method, which concatenates layer ranges from the source model into a deeper stack without averaging any weights. The layer arithmetic behind the ~56B figure is sketched after the configuration below.
### Models Merged
The following models were included in the merge:
* JackCloudman/QwQ-32B-Preview-jackterated
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: float16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 16]
    model: JackCloudman/QwQ-32B-Preview-jackterated
- sources:
  - layer_range: [8, 24]
    model: JackCloudman/QwQ-32B-Preview-jackterated
- sources:
  - layer_range: [16, 32]
    model: JackCloudman/QwQ-32B-Preview-jackterated
- sources:
  - layer_range: [24, 40]
    model: JackCloudman/QwQ-32B-Preview-jackterated
- sources:
  - layer_range: [32, 48]
    model: JackCloudman/QwQ-32B-Preview-jackterated
- sources:
  - layer_range: [40, 56]
    model: JackCloudman/QwQ-32B-Preview-jackterated
- sources:
  - layer_range: [48, 64]
    model: JackCloudman/QwQ-32B-Preview-jackterated
```
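For intuition, the seven overlapping 16-layer slices above expand the 64-layer base model into a 112-layer stack, which is where the ~56B figure comes from. A minimal sketch of that arithmetic:

```python
# Sketch: layer count implied by the passthrough slices above.
# Each (start, end) pair mirrors a layer_range entry from the YAML config.
slices = [(0, 16), (8, 24), (16, 32), (24, 40), (32, 48), (40, 56), (48, 64)]

merged_layers = sum(end - start for start, end in slices)
print(merged_layers)  # 112 layers, vs. 64 in the base model

# Transformer blocks dominate the parameter count, so the merge scales
# roughly as 32B * (112 / 64) ≈ 56B, matching the model's name.
```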
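Since this repository hosts GGUF quantizations, the weights can be run with llama.cpp-compatible tooling. Below is a minimal sketch using llama-cpp-python; the quantization filename is an illustrative placeholder, not a confirmed artifact of this repo:

```python
# Hypothetical usage sketch, assuming llama-cpp-python is installed and a
# GGUF file from this repo has been downloaded locally. The filename is a
# placeholder; substitute whichever quantization you actually fetched.
from llama_cpp import Llama

llm = Llama(model_path="QwQ-56B-Ghost-Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly explain what a passthrough merge is."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```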
## Credits
- The Qwen team, for QwQ-32B-Preview
- @FailSpy's [notebook](https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb) on the abliteration technique
- [Mergekit](https://github.com/cg123/mergekit)
- [Archive](https://youtu.be/FzmaF5p96pc?si=RqMjDgEh5J_090Js&t=2015)
- You :D