---
base_model:
- JackCloudman/QwQ-56B-Ghost
library_name: transformers
tags:
- mergekit
- merge
- uncensored
- abliterated
- chat
license: apache-2.0
language:
- en
pipeline_tag: text-generation
---

# JackCloudman/QwQ-56B-Ghost-GGUF

**QwQ-56B-Ghost** is an extended model derived from JackCloudman/QwQ-32B-Preview-jackterated using the _passthrough merging_ technique, inspired by models such as [miqu-1-120b](https://huggingface.co/wolfram/miqu-1-120b). The result is a **~56B-parameter model**.

## Merge Details

### Merge Method

This model was merged using the passthrough merge method.

### Models Merged

The following model was included in the merge:

* JackCloudman/QwQ-32B-Preview-jackterated

### Configuration

The following YAML configuration was used to produce this model:

```yaml
dtype: float16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 16]
    model: JackCloudman/QwQ-32B-Preview-jackterated
- sources:
  - layer_range: [8, 24]
    model: JackCloudman/QwQ-32B-Preview-jackterated
- sources:
  - layer_range: [16, 32]
    model: JackCloudman/QwQ-32B-Preview-jackterated
- sources:
  - layer_range: [24, 40]
    model: JackCloudman/QwQ-32B-Preview-jackterated
- sources:
  - layer_range: [32, 48]
    model: JackCloudman/QwQ-32B-Preview-jackterated
- sources:
  - layer_range: [40, 56]
    model: JackCloudman/QwQ-32B-Preview-jackterated
- sources:
  - layer_range: [48, 64]
    model: JackCloudman/QwQ-32B-Preview-jackterated
```

## Credits

- Qwen and QwQ-32B-Preview
- @FailSpy's [notebook](https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb) demonstrating the abliteration technique
- [Mergekit](https://github.com/cg123/mergekit)
- [Archive](https://youtu.be/FzmaF5p96pc?si=RqMjDgEh5J_090Js&t=2015)
- You :D
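
As a sanity check on the "~56B" figure, here is a minimal sketch of how the overlapping passthrough slices add up. It assumes the base model has 64 transformer layers and roughly 32B parameters concentrated in those layers (embeddings and the LM head are ignored, so the result is only an approximation):

```python
# Layer slices from the passthrough config above: (start, end), end-exclusive.
slices = [(0, 16), (8, 24), (16, 32), (24, 40), (32, 48), (40, 56), (48, 64)]

# Passthrough stacks every slice, so overlapping ranges are duplicated.
total_layers = sum(end - start for start, end in slices)
print(total_layers)  # 112

# Rough parameter estimate: scale the base parameter count by the layer ratio.
base_params_b = 32   # assumed ~32B parameters in the base model
base_layers = 64     # assumed layer count of QwQ-32B-Preview
approx_params_b = base_params_b * total_layers / base_layers
print(approx_params_b)  # 56.0
```

Since adjacent slices overlap by 8 layers, each interior layer block is reused twice, which is what grows 64 source layers into a 112-layer merged model.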