---
tags:
- merge
- mergekit
- lazymergekit
- DiscoResearch/DiscoLM_German_7b_v1
- DRXD1000/Phoenix
- VAGOsolutions/SauerkrautLM-7b-v1-mistral
- malteos/hermeo-7b
base_model:
- DiscoResearch/DiscoLM_German_7b_v1
- DRXD1000/Phoenix
- VAGOsolutions/SauerkrautLM-7b-v1-mistral
- malteos/hermeo-7b
license: apache-2.0
language:
- de
- en
---

# Wiedervereinigung-7b-dpo
![image/png](https://huggingface.co/mayflowergmbh/Wiedervereinigung-7b/resolve/main/Wiedervereinigung-7b.png)

This is a DPO-aligned merge of our favourite German models, scoring 7.11 on average on MT-Bench-DE.
Since the original models are based on Mistral, three of them on the brilliant German LeoLM/leo-mistral-hessianai-7b, they are reunited in this merged model.
Hence the name; no nationalist ideas involved :-).

To improve result quality, the merge was DPO-trained on a German translation of the SlimOrca DPO dataset, using hermeo-7b to generate the rejected responses.

If you are GPU-poor like me, you can now use [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) to train with German datasets.

Kudos to the authors of the original models at [DiscoResearch](https://huggingface.co/DiscoResearch) and [VAGOsolutions](https://huggingface.co/VAGOsolutions), as well as [Malte Ostendorff](https://huggingface.co/malteos) and [Matthias Uhlig](https://huggingface.co/DRXD1000). We are your fan club.

This model was brought to you, and the NVIDIA bill was paid, by [Mayflower GmbH](https://mayflower.de/).

## Benchmark results: MT-Bench-DE

Is the merged model alone already good? Well, of course. But it gets even better with some DPO tuning.

```json
{
    "first_turn": 7.3,
    "second_turn": 6.925,
    "categories": {
        "writing": 8.425,
        "roleplay": 8.6,
        "reasoning": 5.4,
        "math": 4.35,
        "coding": 4.3,
        "extraction": 7.975,
        "stem": 8.5,
        "humanities": 9.35
    },
    "average": 7.1125
}
```
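
For what it's worth, the reported average is consistent both with the two turn scores and with the eight category scores; a quick sanity check:

```python
# Verify the reported MT-Bench-DE average from the per-turn and
# per-category scores above.
turns = {"first_turn": 7.3, "second_turn": 6.925}
categories = {
    "writing": 8.425, "roleplay": 8.6, "reasoning": 5.4, "math": 4.35,
    "coding": 4.3, "extraction": 7.975, "stem": 8.5, "humanities": 9.35,
}

turn_avg = sum(turns.values()) / len(turns)
cat_avg = sum(categories.values()) / len(categories)
print(round(turn_avg, 4), round(cat_avg, 4))  # 7.1125 7.1125
```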

## Other Versions

A big thank you to [LoneStriker](https://huggingface.co/LoneStriker) for the quantized models.

| Name | Quant method | Bits |
| ---- | ---- | ---- |
| [Wiedervereinigung-7b-dpo](https://huggingface.co/mayflowergmbh/Wiedervereinigung-7b-dpo) | Unquantized | 16 |
| [Wiedervereinigung-7b-dpo-GPTQ](https://huggingface.co/LoneStriker/Wiedervereinigung-7b-dpo-GPTQ) | GPTQ | 4 |
| [Wiedervereinigung-7b-dpo-AWQ](https://huggingface.co/LoneStriker/Wiedervereinigung-7b-dpo-AWQ) | AWQ | 4 |
| [Wiedervereinigung-7b-dpo-GGUF](https://huggingface.co/LoneStriker/Wiedervereinigung-7b-dpo-GGUF) | GGUF | 3-8 |
| [Wiedervereinigung-7b-dpo-8.0bpw-h8-exl2](https://huggingface.co/LoneStriker/Wiedervereinigung-7b-dpo-8.0bpw-h8-exl2) | EXL2 | 8 |
| [Wiedervereinigung-7b-dpo-6.0bpw-h6-exl2](https://huggingface.co/LoneStriker/Wiedervereinigung-7b-dpo-6.0bpw-h6-exl2) | EXL2 | 6 |
| [Wiedervereinigung-7b-dpo-5.0bpw-h6-exl2](https://huggingface.co/LoneStriker/Wiedervereinigung-7b-dpo-5.0bpw-h6-exl2) | EXL2 | 5 |
| [Wiedervereinigung-7b-dpo-4.0bpw-h6-exl2](https://huggingface.co/LoneStriker/Wiedervereinigung-7b-dpo-4.0bpw-h6-exl2) | EXL2 | 4 |
| [Wiedervereinigung-7b-dpo-3.0bpw-h6-exl2](https://huggingface.co/LoneStriker/Wiedervereinigung-7b-dpo-3.0bpw-h6-exl2) | EXL2 | 3 |
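
As a rough rule of thumb, the download size of a quantized model scales with its bits per weight. A quick estimate (assuming Mistral-7B's roughly 7.24B parameters and ignoring quantization overhead, so real file sizes differ somewhat):

```python
def approx_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough file-size estimate: parameters * bits / 8, in gigabytes.

    Ignores embedding tables and per-block quantization overhead,
    so actual files are a bit larger.
    """
    return n_params * bits_per_weight / 8 / 1e9

# ~7.24B parameters is an assumption based on the Mistral-7B architecture.
for bpw in (16, 8, 6, 5, 4, 3):
    print(f"{bpw} bpw: ~{approx_size_gb(7.24e9, bpw):.1f} GB")
```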

Wiedervereinigung-7b is a [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing) merge of:
* [DiscoResearch/DiscoLM_German_7b_v1](https://huggingface.co/DiscoResearch/DiscoLM_German_7b_v1)
* [DRXD1000/Phoenix](https://huggingface.co/DRXD1000/Phoenix)
* [VAGOsolutions/SauerkrautLM-7b-v1-mistral](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-v1-mistral)
* [malteos/hermeo-7b](https://huggingface.co/malteos/hermeo-7b)


## 🧩 Configuration

```yaml
models:
  - model: LeoLM/leo-mistral-hessianai-7b
    # No parameters necessary for base model
  - model: DiscoResearch/DiscoLM_German_7b_v1
    parameters:
      density: 0.6
      weight: 0.25
  - model: DRXD1000/Phoenix
    parameters:
      density: 0.6
      weight: 0.25
  - model: VAGOsolutions/SauerkrautLM-7b-v1-mistral
    parameters:
      density: 0.6
      weight: 0.25
  - model: malteos/hermeo-7b
    parameters:
      density: 0.6
      weight: 0.25
merge_method: dare_ties
base_model: LeoLM/leo-mistral-hessianai-7b
parameters:
  int8_mask: true
dtype: bfloat16
```
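
The `dare_ties` method combines DARE pruning with TIES sign election: each model's delta from the base is randomly pruned to the given `density` and rescaled, sign conflicts across models are resolved by majority, and the surviving weighted deltas are added back to the base. A minimal NumPy sketch of that idea (an illustration, not mergekit's actual implementation):

```python
import numpy as np

def dare_prune(delta, density, rng):
    # DARE: keep each delta parameter with probability `density`,
    # rescale survivors by 1/density to preserve the expected value.
    mask = rng.random(delta.shape) < density
    return np.where(mask, delta / density, 0.0)

def ties_merge(base, deltas, weights, density, rng):
    # Prune and weight each model's delta from the base.
    pruned = [dare_prune(d, density, rng) for d in deltas]
    weighted = [w * d for w, d in zip(weights, pruned)]
    # TIES sign election: keep only contributions whose sign matches
    # the sign of the weighted sum at each parameter.
    total = np.sum(weighted, axis=0)
    sign = np.sign(total)
    agree = [np.where(np.sign(d) == sign, d, 0.0) for d in weighted]
    return base + np.sum(agree, axis=0)

# With density=1.0 nothing is pruned, so the result is deterministic:
rng = np.random.default_rng(0)
merged = ties_merge(np.zeros(4),
                    [np.array([1.0, -1.0, 2.0, 0.0]),
                     np.array([1.0, 1.0, -2.0, 0.0])],
                    weights=[0.5, 0.5], density=1.0, rng=rng)
print(merged)  # [1. 0. 0. 0.] — conflicting signs cancel, agreeing ones survive
```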


## 💻 Usage

```python
# Install dependencies first: pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "mayflowergmbh/Wiedervereinigung-7b-dpo"
messages = [{"role": "user", "content": "Was ist ein deutsches Large Language Model?"}]

# Build the prompt with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```