---
base_model:
- cognitivecomputations/dolphin-2.9.3-qwen2-1.5b
- trollek/Qwen2-1.5B-Instruct-Abliterated
- M4-ai/Hercules-5.0-Qwen2-1.5B
- Replete-AI/Replete-Coder-Qwen2-1.5b
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
language:
- en
---
# CleverQwen2-1.5B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
It has grown by about 300M parameters and I don't know why, though I would like to. It works as expected - **amazing** - I just can't see any reason for the Qwen2 models to gain parameters when merged.
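One unconfirmed guess: Qwen2-1.5B ties its input and output embeddings, and if the merge was saved with an untied `lm_head`, that alone would account for roughly vocab_size × hidden_size ≈ 233M of the growth. A minimal sketch to check the counts (assumes the merged repo id is `trollek/CleverQwen2-1.5B` and that both models fit in RAM):
```python
from transformers import AutoModelForCausalLM

# Compare total parameter counts of the base model and the merge.
for repo in ("trollek/Qwen2-1.5B-Instruct-Abliterated", "trollek/CleverQwen2-1.5B"):
    model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto")
    print(repo, sum(p.numel() for p in model.parameters()))
    # If the output embedding got untied, this flag will differ between the two:
    print("tied embeddings:", model.config.tie_word_embeddings)

# An untied lm_head for Qwen2-1.5B would add vocab_size * hidden_size parameters:
print(151936 * 1536)  # 233,373,696 - in the ballpark of the observed ~300M
```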
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [trollek/Qwen2-1.5B-Instruct-Abliterated](https://huggingface.co/trollek/Qwen2-1.5B-Instruct-Abliterated) as the base.
### Models Merged
The following models were included in the merge:
* [cognitivecomputations/dolphin-2.9.3-qwen2-1.5b](https://huggingface.co/cognitivecomputations/dolphin-2.9.3-qwen2-1.5b)
* [M4-ai/Hercules-5.0-Qwen2-1.5B](https://huggingface.co/M4-ai/Hercules-5.0-Qwen2-1.5B)
* [Replete-AI/Replete-Coder-Qwen2-1.5b](https://huggingface.co/Replete-AI/Replete-Coder-Qwen2-1.5b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Replete-AI/Replete-Coder-Qwen2-1.5b
- model: M4-ai/Hercules-5.0-Qwen2-1.5B
- model: cognitivecomputations/dolphin-2.9.3-qwen2-1.5b
merge_method: model_stock
base_model: trollek/Qwen2-1.5B-Instruct-Abliterated
architecture: qwen2
dtype: bfloat16
```
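To reproduce the merge, the config above can be saved as `config.yaml` and fed to mergekit, either via the CLI (`mergekit-yaml config.yaml ./merged`) or through its Python entry point. A sketch of the latter, based on mergekit's documented example (option names may differ between mergekit versions; check its README):
```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML config shown above (assumed saved as config.yaml).
with open("config.yaml", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Write the merged model to ./merged; set cuda=True if a GPU is available.
run_merge(
    merge_config,
    out_path="./merged",
    options=MergeOptions(cuda=False, copy_tokenizer=True),
)
```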
## Quants
- [trollek/CleverQwen2-1.5B-GGUF](https://huggingface.co/trollek/CleverQwen2-1.5B-GGUF)
### Ollama
```bash
ollama pull trollek/cleverqwen2:1.5b-q4_k_s
ollama pull trollek/cleverqwen2:1.5b-q5_k_s
ollama pull trollek/cleverqwen2:1.5b-q6_k
```
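Once pulled, a quant can be run interactively with e.g. `ollama run trollek/cleverqwen2:1.5b-q6_k`.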