---
license: other
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen1.5-72B-Chat/blob/main/LICENSE
tags:
- merge
- mergekit
- qwen2
- chat
- conversational
language:
- en
- zh
library_name: transformers
---
# Qwen1.5-22B-Chat-Merge
**This is a 22B frankenmerge of [Qwen1.5-14B-Chat](https://huggingface.co/Qwen/Qwen1.5-14B-Chat), created by interleaving layers of [Qwen1.5-14B-Chat](https://huggingface.co/Qwen/Qwen1.5-14B-Chat) with itself using [mergekit](https://github.com/arcee-ai/mergekit).**

**Since the Qwen1.5 series currently has no intermediate-sized models between 14B and 72B, I am trying to create some mid-sized models, in the 20B+ and 30B+ range, through merging. The goal is to let more individual users make the most of their hardware.**

**-Quantization**

GGUF quants are available here: [Qwen1.5-22B-Chat-Merge-GGUF](https://huggingface.co/DisOOM/Qwen1.5-22B-Chat-Merge-GGUF/tree/main)
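
If you want to run one of the GGUF quants locally, a minimal sketch using llama-cpp-python is below. The quant file name is an assumption; use whichever file you actually download from the GGUF repo.

```python
# Minimal sketch: run a GGUF quant of this merge with llama-cpp-python.
# The model_path below is hypothetical -- point it at your downloaded file.
from llama_cpp import Llama

llm = Llama(
    model_path="./qwen1.5-22b-chat-merge.Q4_K_M.gguf",  # assumed local file name
    n_ctx=4096,        # context window to allocate
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello, who are you?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```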

**-Merge Configuration**

The merge was produced with the following mergekit YAML configuration:
```yaml
dtype: float16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 5]
    model: Qwen/Qwen1.5-14B-Chat
- sources:
  - layer_range: [5, 15]
    model: Qwen/Qwen1.5-14B-Chat
- sources:
  - layer_range: [10, 20]
    model: Qwen/Qwen1.5-14B-Chat
- sources:
  - layer_range: [15, 25]
    model: Qwen/Qwen1.5-14B-Chat
- sources:
  - layer_range: [20, 30]
    model: Qwen/Qwen1.5-14B-Chat
- sources:
  - layer_range: [25, 35]
    model: Qwen/Qwen1.5-14B-Chat
- sources:
  - layer_range: [30, 40]
    model: Qwen/Qwen1.5-14B-Chat
```
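
To try the merged full-precision weights with 🤗 Transformers, a minimal inference sketch is below. The repo id is assumed from this card's name; substitute a local path (for example, your own mergekit output directory) if needed.

```python
# Minimal inference sketch (repo id assumed from this card's name).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DisOOM/Qwen1.5-22B-Chat-Merge"  # hypothetical repo id or local path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # float16 weights, matching the merge dtype
    device_map="auto",    # spread layers across available GPUs
)

messages = [{"role": "user", "content": "Give me a short introduction to large language models."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```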
**-Performance**

* Note: I am not able to run formal benchmarks, and I have not used the model extensively, so the impressions below may not be accurate.

In my own (subjective) tests it performs better than the 14B version in most areas, including comprehension, reasoning, coherence, and writing. Feel free to try it yourself and share your results; any tests or evaluations are welcome.