---
base_model:
- grimjim/magnum-twilight-12b
- Nohobby/MN-12B-Siskin-v0.2
- RozGrov/NemoDori-v0.2.2-12B-MN-ties
- spow12/ChatWaifu_v1.4
- ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.3
- GalrionSoftworks/Canidori-12B-v1
library_name: transformers
tags:
- mergekit
- merge

---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.3](https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.3) as the base.
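
For intuition: Model Stock merges each weight tensor by interpolating between the base model's weights and the average of the fine-tuned models' weights, with a ratio derived from the angle between the fine-tuned models' task vectors. Below is a minimal per-tensor sketch in Python, assuming the paper's closed-form ratio t = k·cosθ / (1 + (k − 1)·cosθ); the function is illustrative and is not mergekit's actual API.

```python
import torch
import torch.nn.functional as F

def model_stock_tensor(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    """Illustrative Model Stock merge for one weight tensor (assumes len(finetuned) >= 2)."""
    k = len(finetuned)
    # Task vectors: each fine-tuned model's offset from the base weights.
    deltas = [(w - base).flatten().float() for w in finetuned]
    # Mean pairwise cosine similarity between task vectors (cos(theta) in the paper).
    sims = [F.cosine_similarity(deltas[i], deltas[j], dim=0)
            for i in range(k) for j in range(i + 1, k)]
    cos_theta = torch.stack(sims).mean().clamp(min=0.0)
    # Closed-form interpolation ratio from the Model Stock paper.
    t = k * cos_theta / (1 + (k - 1) * cos_theta)
    # Pull the average of the fine-tuned weights back toward the base.
    w_avg = torch.stack([w.float() for w in finetuned]).mean(dim=0)
    return (t * w_avg + (1 - t) * base.float()).to(base.dtype)
```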

### Models Merged

The following models were included in the merge:
* [grimjim/magnum-twilight-12b](https://huggingface.co/grimjim/magnum-twilight-12b)
* [Nohobby/MN-12B-Siskin-v0.2](https://huggingface.co/Nohobby/MN-12B-Siskin-v0.2)
* [RozGrov/NemoDori-v0.2.2-12B-MN-ties](https://huggingface.co/RozGrov/NemoDori-v0.2.2-12B-MN-ties)
* [spow12/ChatWaifu_v1.4](https://huggingface.co/spow12/ChatWaifu_v1.4)
* [GalrionSoftworks/Canidori-12B-v1](https://huggingface.co/GalrionSoftworks/Canidori-12B-v1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Nohobby/MN-12B-Siskin-v0.2
  - model: spow12/ChatWaifu_v1.4
  - model: grimjim/magnum-twilight-12b
  - model: RozGrov/NemoDori-v0.2.2-12B-MN-ties
  - model: GalrionSoftworks/Canidori-12B-v1
merge_method: model_stock
base_model: ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.3
dtype: bfloat16
```
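
A merge like this can typically be reproduced by saving the configuration to a file and running mergekit's CLI, e.g. `mergekit-yaml config.yaml ./merged-model`. The output is a standard Mistral-Nemo-architecture causal LM; a minimal inference sketch using the transformers API follows (the local path is a placeholder for wherever the merged weights were written):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: point this at the merged output directory or its Hub repo id.
model_path = "./merged-model"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

prompt = "Write a short scene set in a rainy harbor town."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```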