|
---
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
tags:
- merge
- mergekit
- Yi
- chat
- conversational
language:
- en
- zh
library_name: transformers
---
|
# Qwen1.5-22B-Chat-Merge |
|
**This is a frankenmerge of [Yi-34B-200K-RPMerge](https://huggingface.co/brucethemoose/Yi-34B-200K-RPMerge), created by interleaving its layers with itself using [mergekit](https://github.com/arcee-ai/mergekit).**
|
|
|
**By merging Yi-34B (specifically RPMerge, which I consider a better-performing variant) with itself to create a 70B-level Yi, what surprised me was that it did not exhibit the increased logical confusion and linguistic errors that afflict many models scaled to more than double their original parameter count. It simply seemed to get stronger as the parameter count grew. I also tried this with several other fine-tuned versions of Yi, and the results were satisfactory.**
|
|
|
**-Quantization**
|
|
|
GGUF quantizations are available here: [gguf](https://huggingface.co/DisOOM/Qwen1.5-22B-Chat-Merge-GGUF/tree/main)
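
If you want to try a GGUF quantization locally, a minimal sketch using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) bindings might look like the following. The file name and settings are placeholders rather than anything shipped in this repo; substitute whichever quantization you downloaded.

```python
from llama_cpp import Llama

# Hypothetical file name: use the quantized file you actually downloaded
# from the GGUF repository linked above.
llm = Llama(
    model_path="./model-Q4_K_M.gguf",
    n_ctx=4096,       # context window; raise it if you have the memory
    n_gpu_layers=-1,  # offload all layers to GPU when one is available
)

output = llm("Write a short scene between two rivals.", max_tokens=256)
print(output["choices"][0]["text"])
```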
|
|
|
**-Merge Configuration** |
|
|
|
The merge was produced with the YAML configuration below:
|
```yaml
dtype: float16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 4]
    model: brucethemoose/Yi-34B-200K-RPMerge
- sources:
  - layer_range: [4, 14]
    model: brucethemoose/Yi-34B-200K-RPMerge
- sources:
  - layer_range: [8, 18]
    model: brucethemoose/Yi-34B-200K-RPMerge
- sources:
  - layer_range: [12, 22]
    model: brucethemoose/Yi-34B-200K-RPMerge
- sources:
  - layer_range: [16, 26]
    model: brucethemoose/Yi-34B-200K-RPMerge
- sources:
  - layer_range: [20, 30]
    model: brucethemoose/Yi-34B-200K-RPMerge
- sources:
  - layer_range: [24, 34]
    model: brucethemoose/Yi-34B-200K-RPMerge
- sources:
  - layer_range: [28, 38]
    model: brucethemoose/Yi-34B-200K-RPMerge
- sources:
  - layer_range: [32, 42]
    model: brucethemoose/Yi-34B-200K-RPMerge
- sources:
  - layer_range: [36, 46]
    model: brucethemoose/Yi-34B-200K-RPMerge
- sources:
  - layer_range: [40, 50]
    model: brucethemoose/Yi-34B-200K-RPMerge
- sources:
  - layer_range: [44, 54]
    model: brucethemoose/Yi-34B-200K-RPMerge
- sources:
  - layer_range: [48, 60]
    model: brucethemoose/Yi-34B-200K-RPMerge
```
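
To reproduce the merge, the config above can be fed to mergekit's `mergekit-yaml` CLI, or driven from Python. The sketch below assumes mergekit's library interface (`MergeConfiguration`, `run_merge`, `MergeOptions`) as exposed at the time of writing; treat it as illustrative rather than definitive.

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the passthrough config shown above, saved locally as config.yaml.
with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Write the interleaved ~70B-scale model to an arbitrary output directory.
run_merge(
    merge_config,
    out_path="./merged-model",
    options=MergeOptions(
        cuda=True,            # set False to merge on CPU
        copy_tokenizer=True,  # carry the source model's tokenizer over
        lazy_unpickle=True,   # reduce peak memory while reading shards
    ),
)
```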
|
**-Performance** |
|
|
|
* Note: I don't have the means to run benchmarks, nor have I been able to use the model extensively, so my test results may not be accurate.
|
|
|
In most of my own (admittedly subjective) tests, it performs better than the 34B version in comprehension, reasoning, coherence, and writing. If you find this model promising, feel free to test it out or offer evaluations; everyone's tests and feedback are welcome.
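
If you would like to evaluate it yourself with Transformers, a minimal loading sketch follows. The repo id is a placeholder (replace it with this model's actual Hub path), and `device_map="auto"` assumes you have enough combined GPU/CPU memory for a model of this size.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/this-model"  # placeholder: substitute this repo's Hub id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the dtype used in the merge
    device_map="auto",          # shard the weights across available devices
)

inputs = tokenizer("Tell me a story about a lighthouse keeper.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```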