---
base_model:
- cognitivecomputations/dolphin-2.9.3-qwen2-1.5b
- macadeliccc/Samantha-Qwen2-1.5B
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- cognitivecomputations/dolphin-2.9.3-qwen2-1.5b
- macadeliccc/Samantha-Qwen2-1.5B
---
# Qwen2-2x1.5B
Qwen2-2x1.5B is a Mixture of Experts (MoE) model built from the following experts using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [cognitivecomputations/dolphin-2.9.3-qwen2-1.5b](https://huggingface.co/cognitivecomputations/dolphin-2.9.3-qwen2-1.5b)
* [macadeliccc/Samantha-Qwen2-1.5B](https://huggingface.co/macadeliccc/Samantha-Qwen2-1.5B)
## 🧩 Configuration
```yaml
base_model: cognitivecomputations/dolphin-2.9.3-qwen2-1.5b
gate_mode: hidden
architecture: qwen
dtype: bfloat16
experts_per_token: 2
experts:
  - source_model: cognitivecomputations/dolphin-2.9.3-qwen2-1.5b
    positive_prompts:
      - "chat"
      - "assistant"
      - "explain"
      - "describe"
      - "define"
      - "what is"
      - "tell me"
      - "help me"
      - "show me"
      - "can you"
  - source_model: macadeliccc/Samantha-Qwen2-1.5B
    positive_prompts:
      - "characters"
      - "scene"
      - "roleplay"
      - "writing"
      - "creative"
      - "you are"
      - "act as"
shared_experts:
  - source_model: cognitivecomputations/dolphin-2.9.3-qwen2-1.5b
    positive_prompts: # required by Qwen MoE for "hidden" gate mode, otherwise not allowed
      - "chat"
      - "assistant"
    # (optional, but recommended:)
    residual_scale: 0.1
```
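
With `gate_mode: hidden`, mergekit initializes the router weights from the base model's hidden-state representations of each expert's `positive_prompts`, so prompts resembling "roleplay" or "scene" steer tokens toward the Samantha expert. At inference time, `experts_per_token: 2` means each token's hidden state is scored by the router and both routed experts contribute, weighted by the softmaxed router scores, while the `shared_experts` entry always runs and its output is downweighted by `residual_scale`. The minimal sketch below illustrates that routing pattern only; the dimensions and module names are invented for the example, and it is not mergekit's or Qwen2-MoE's actual code.

```python
import torch
import torch.nn.functional as F

# Toy dimensions for illustration; the real model uses Qwen2 hidden sizes and MLP experts.
hidden = torch.randn(1, 4, 64)                              # (batch, seq_len, hidden_size)
expert_ffns = [torch.nn.Linear(64, 64) for _ in range(2)]   # the two routed experts (dolphin, Samantha)
shared_ffn = torch.nn.Linear(64, 64)                        # the shared expert (dolphin)
router = torch.nn.Linear(64, 2, bias=False)                 # one logit per routed expert
residual_scale = 0.1                                        # downweights the shared expert's output

logits = router(hidden)                                     # (1, 4, 2)
weights = F.softmax(logits, dim=-1)
topk_w, topk_idx = weights.topk(k=2, dim=-1)                # experts_per_token: 2 (here, all experts)

out = torch.zeros_like(hidden)
for slot in range(2):
    for e, ffn in enumerate(expert_ffns):
        mask = (topk_idx[..., slot] == e).unsqueeze(-1)     # tokens routed to expert e in this slot
        out = out + mask * topk_w[..., slot:slot + 1] * ffn(hidden)

out = out + residual_scale * shared_ffn(hidden)             # shared expert always contributes, scaled down
print(out.shape)                                            # torch.Size([1, 4, 64])
```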
## 💻 Usage
```python
# Install dependencies first: pip install -qU transformers bitsandbytes accelerate
import torch
import transformers
from transformers import AutoTokenizer

model = "djuna/Qwen2-2x1.5B"
tokenizer = AutoTokenizer.from_pretrained(model)

# Load the merged MoE in 4-bit to keep memory usage low
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

# Format the conversation with the model's chat template before generating
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
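
Passing `load_in_4bit` through `model_kwargs` is deprecated in recent `transformers` releases. A minimal alternative sketch, assuming a recent `transformers` and `bitsandbytes`, is to pass an explicit `BitsAndBytesConfig` and call `generate` directly:

```python
# Alternative 4-bit loading with an explicit quantization config
# (same intent as load_in_4bit=True above; adjust to your transformers version).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "djuna/Qwen2-2x1.5B"
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```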