Triangle104 committed
Commit f7ab3d3
1 Parent(s): 404a4d1

Update README.md

Files changed (1)
  1. README.md +49 -0
README.md CHANGED
@@ -15,6 +15,55 @@ license_link: https://mistral.ai/licenses/MRL-0.1.md
  This model was converted to GGUF format from [`SvdH/RPLament-22B`](https://huggingface.co/SvdH/RPLament-22B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/SvdH/RPLament-22B) for more details on the model.
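The GGUF-my-repo space performs this conversion automatically. As a rough sketch of the equivalent local workflow with llama.cpp (not the exact commands the space runs; script and binary names depend on your llama.cpp version, and the output file names below are only illustrative):

```bash
# Fetch the original safetensors weights from the Hub.
huggingface-cli download SvdH/RPLament-22B --local-dir RPLament-22B

# Convert to a full-precision GGUF using llama.cpp's conversion script.
python convert_hf_to_gguf.py RPLament-22B --outtype f16 --outfile rplament-22b-f16.gguf

# Quantize to a smaller format (e.g. Q4_K_M) for local inference.
./llama-quantize rplament-22b-f16.gguf rplament-22b-q4_k_m.gguf Q4_K_M
```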
+ ---
+ Model details:
+ -
+ This is a merge of pre-trained language models created using mergekit.
+
+ Merge Method
+ -
+ This model was merged using the DARE TIES merge method using ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1 as a base.
+ Models Merged
+
+ The following models were included in the merge:
+
+ allura-org/MS-Meadowlark-22B
+ Gryphe/Pantheon-RP-1.6.2-22b-Small
+ rAIfle/Acolyte-22B
+ anthracite-org/magnum-v4-22b
+
+ Configuration
+ -
+ The following YAML configuration was used to produce this model:
+
+ merge_method: dare_ties
+ base_model: ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
+ parameters:
+   int8_mask: true
+ dtype: bfloat16
+ models:
+   - model: ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
+     parameters:
+       weight: 0.30
+       density: 0.78
+   - model: anthracite-org/magnum-v4-22b
+     parameters:
+       weight: 0.25
+       density: 0.66
+   - model: allura-org/MS-Meadowlark-22B
+     parameters:
+       weight: 0.20
+       density: 0.54
+   - model: rAIfle/Acolyte-22B
+     parameters:
+       weight: 0.15
+       density: 0.42
+   - model: Gryphe/Pantheon-RP-1.6.2-22b-Small
+     parameters:
+       weight: 0.10
+       density: 0.42
+
+ ---
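To reproduce a merge from a recipe like the configuration above, mergekit exposes a `mergekit-yaml` command. A minimal sketch, assuming the YAML is saved as `rplament.yml` (a made-up file name) and that you have the disk space and memory for five 22B checkpoints:

```bash
pip install mergekit

# Run the merge described by the YAML recipe; --cuda uses the GPU for the tensor math.
# The output directory name is only a placeholder.
mergekit-yaml rplament.yml ./RPLament-22B-merged --cuda
```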
  ## Use with llama.cpp
  Install llama.cpp through brew (works on Mac and Linux)
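For example, a quantized GGUF can be pulled straight from the Hub and run with llama-cli; the repo and file names below are placeholders rather than values taken from this card:

```bash
brew install llama.cpp

# Download the GGUF from the Hub on first use, then run a short prompt.
llama-cli --hf-repo <your-user>/RPLament-22B-Q4_K_M-GGUF \
  --hf-file rplament-22b-q4_k_m.gguf \
  -p "Write a one-sentence character introduction."
```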