---
tags:
- merge
- mergekit
- lazymergekit
- abideen/AlphaMonarch-dora
- cstr/Spaetzle-v68-7b
base_model:
- abideen/AlphaMonarch-dora
- cstr/Spaetzle-v68-7b
license: cc-by-nc-4.0
---
# Spaetzle-v69-7b
This is a progressive merge (mostly DARE-TIES, but also SLERP), intended as a suitable compromise for English and German local tasks.
There is also an [unquantized](https://huggingface.co/cstr/Spaetzle-v69-7b) version.
Running quantized, it achieves:
- German EQ Bench: Score (v2_de): 62.59 (Parseable: 171.0)
- English EQ Bench: Score (v2): 76.43 (Parseable: 171.0)
It should work sufficiently well with the ChatML prompt template, since all merged models should have seen ChatML prompts at least in the DPO stage.
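For illustration, a single-turn prompt rendered in ChatML format looks roughly like this (the exact string is produced by the tokenizer's chat template, as in the usage example below):
```
<|im_start|>user
What is a large language model?<|im_end|>
<|im_start|>assistant
```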
Spaetzle-v69-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [abideen/AlphaMonarch-dora](https://huggingface.co/abideen/AlphaMonarch-dora)
* [cstr/Spaetzle-v68-7b](https://huggingface.co/cstr/Spaetzle-v68-7b)
The merge tree in total involves the following original models:
- [abideen/AlphaMonarch-dora](https://huggingface.co/abideen/AlphaMonarch-dora)
- [mayflowergmbh/Wiedervereinigung-7b-dpo](https://huggingface.co/mayflowergmbh/Wiedervereinigung-7b-dpo)
- [flemmingmiguel/NeuDist-Ro-7B](https://huggingface.co/flemmingmiguel/NeuDist-Ro-7B)
- [ResplendentAI/Flora_DPO_7B](https://huggingface.co/ResplendentAI/Flora_DPO_7B)
- [yleo/EmertonMonarch-7B](https://huggingface.co/yleo/EmertonMonarch-7B)
- [occiglot/occiglot-7b-de-en-instruct](https://huggingface.co/occiglot/occiglot-7b-de-en-instruct)
- [OpenPipe/mistral-ft-optimized-1227](https://huggingface.co/OpenPipe/mistral-ft-optimized-1227)
- [DiscoResearch/DiscoLM_German_7b_v1](https://huggingface.co/DiscoResearch/DiscoLM_German_7b_v1)
- [LeoLM/leo-mistral-hessianai-7b](https://huggingface.co/LeoLM/leo-mistral-hessianai-7b)
- [DRXD1000/Phoenix](https://huggingface.co/DRXD1000/Phoenix)
- [VAGOsolutions/SauerkrautLM-7b-v1-mistral](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-v1-mistral)
- [malteos/hermeo-7b](https://huggingface.co/malteos/hermeo-7b)
- [FelixChao/WestSeverus-7B-DPO-v2](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2)
- [cognitivecomputations/openchat-3.5-0106-laser](https://huggingface.co/cognitivecomputations/openchat-3.5-0106-laser)
## 🧩 Configuration
```yaml
models:
  - model: cstr/Spaetzle-v68-7b
    # no parameters necessary for base model
  - model: abideen/AlphaMonarch-dora
    parameters:
      density: 0.60
      weight: 0.30
merge_method: dare_ties
base_model: cstr/Spaetzle-v68-7b
parameters:
  int8_mask: true
dtype: bfloat16
random_seed: 0
tokenizer_source: base
```
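To reproduce the merge, the configuration above can be saved as `config.yaml` and passed to mergekit's CLI (a minimal sketch, assuming a standard mergekit installation; the output directory is arbitrary):

```bash
pip install mergekit
mergekit-yaml config.yaml ./Spaetzle-v69-7b --cuda
```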
## 💻 Usage
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "cstr/Spaetzle-v69-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Render the messages into a ChatML prompt via the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
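Recent transformers versions can also apply the chat template inside the pipeline, so the messages list can be passed directly (a minimal sketch, assuming a chat-aware transformers release, roughly >= 4.40; it reuses the `pipeline` and `messages` objects from above):

```python
# The pipeline applies the chat template itself and returns the full
# conversation; the last message holds the assistant's reply.
outputs = pipeline(messages, max_new_tokens=256, do_sample=True, temperature=0.7)
print(outputs[0]["generated_text"][-1]["content"])
```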