DavidAU committed
Commit bb065cb
1 Parent(s): 3046921

Update README.md

Files changed (1): README.md +51 -48
README.md CHANGED
@@ -1,48 +1,51 @@
- ---
- base_model: []
- library_name: transformers
- tags:
- - mergekit
- - merge
-
- ---
- # MN-Three-RCM-Instruct1-2j
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ## Merge Details
- ### Merge Method
-
- This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with E:/MN-Rocinante-12B-v1.1-Instruct as the base.
-
- ### Models Merged
-
- The following models were included in the merge:
- * E:/MN-12B-Celeste-V1.9-Instruct
- * E:/MN-magnum-v2.5-12b-kto-Instruct
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
- # Config 1
- # E:/MN-Rocinante-12B-v1.1-Instruct
- # E:/MN-12B-Celeste-V1.9-Instruct
- # E:/MN-magnum-v2.5-12b-kto-Instruct
-
- models:
-   - model: E:/MN-Rocinante-12B-v1.1-Instruct
-   - model: E:/MN-magnum-v2.5-12b-kto-Instruct
-     parameters:
-       weight: .6
-       density: .8
-   - model: E:/MN-12B-Celeste-V1.9-Instruct
-     parameters:
-       weight: .38
-       density: .6
- merge_method: dare_ties
- tokenizer_source: union
- base_model: E:/MN-Rocinante-12B-v1.1-Instruct
- dtype: bfloat16
- ```
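For context, a config like the one above is normally executed with mergekit itself. Below is a minimal sketch using mergekit's published Python API (`MergeConfiguration` / `run_merge`); the output path and options are illustrative placeholders, not the author's actual invocation:

```python
# Minimal sketch of running the YAML config above with mergekit.
# Assumes `pip install mergekit` and the config saved locally as config.yaml;
# the output directory and options are illustrative placeholders.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    "./merged-model",            # where the merged weights are written
    options=MergeOptions(
        cuda=False,              # set True to run the merge on GPU
        copy_tokenizer=True,     # write a tokenizer into the output dir
        lazy_unpickle=True,      # reduce peak RAM while reading shards
    ),
)
```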
 
 
 
 
+ ---
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
+ base_model: []
+ ---
+
+ <h2>MN-WORDSTORM-pt9-RCM-Preamble-Bang-18.5B-Instruct</h2>
+
+ This is part 9 of a 10-part series.
+
+ This version's highlights (relative to others in the series):
+
+ "Will develop a preamble before the main action; likes cliffhangers, and makes sure your character(s) have a tough time."
+
+ (PPL = 7.7838 +/- 0.12667 @ Q4KM)
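For reference, perplexity is the exponential of the average per-token negative log-likelihood over a test text (lower is better); the figure above was measured on the Q4KM (Q4_K_M) GGUF quant. A minimal sketch of the computation with transformers follows; the repo id and evaluation text are placeholders, and for simplicity the text is assumed to fit in the context window:

```python
# Perplexity sketch: PPL = exp(mean per-token negative log-likelihood).
# Repo id and evaluation text are placeholders, not the author's setup.
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "DavidAU/MN-WORDSTORM-pt9-RCM-Preamble-Bang-18.5B-Instruct"  # assumed id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)
model.eval()

text = open("eval.txt", encoding="utf-8").read()   # placeholder test corpus
enc = tokenizer(text, return_tensors="pt")         # assumes text fits in context

with torch.no_grad():
    # Passing labels=input_ids makes the model return the mean cross-entropy loss.
    loss = model(**enc, labels=enc["input_ids"]).loss

print(f"PPL = {math.exp(loss.item()):.4f}")
```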
+
+ This repo contains the full-precision source model, in "safetensors" format, which can be used to generate GGUF, GPTQ, EXL2, AWQ, HQQ and other quantized formats.
+ The source model can also be used directly.
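As a minimal sketch of direct use with transformers (the repo id and sampling settings are illustrative, not this model's recommended settings; see the pages linked below for those):

```python
# Minimal sketch of loading the safetensors weights directly with transformers.
# Repo id and sampling settings are illustrative, not recommended settings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "DavidAU/MN-WORDSTORM-pt9-RCM-Preamble-Bang-18.5B-Instruct"  # assumed id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Write the opening scene of a thriller, ending on a cliffhanger."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```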
+
+ For full information about this model, see one or more of the series pages linked below. They cover:
+
+ - Details about this model and its use case(s).
+ - Context limits.
+ - Special usage notes / settings.
+ - Any model(s) used to create this model.
+ - Template(s) used to access/use this model.
+ - Example generation(s).
+ - GGUF quants of this model.
+
+ GGUF quants of this version will follow shortly and appear on this page.
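In the meantime, GGUF files can typically be generated from the safetensors weights with llama.cpp's converter script. A minimal sketch follows, assuming a local llama.cpp checkout; the script name and flags should be verified against your checkout, and all paths are placeholders:

```python
# Sketch: convert the safetensors model to GGUF with llama.cpp's converter.
# Paths are placeholders; verify the script name and flags in your checkout.
import subprocess

subprocess.run(
    [
        "python",
        "llama.cpp/convert_hf_to_gguf.py",  # converter shipped with llama.cpp
        "./MN-WORDSTORM-pt9-RCM-Preamble-Bang-18.5B-Instruct",  # local model dir
        "--outfile", "mn-wordstorm-pt9-f16.gguf",
        "--outtype", "f16",  # quantize further (e.g. Q4_K_M) with llama-quantize
    ],
    check=True,
)
```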
+
+
+ Settings, templates, context limits, etc. are the same for all 10 models in the series; you can view
+ any of the pages below for part 9's information.
+
+ Also, each of the 5 pages below includes example generations that indicate, in part, this part's
+ generation abilities. However, there will be variations, which is what this 10-part series is really all about.
+
+ For more information on this 10-part series, see one or more of these versions:
+
+ [ https://huggingface.co/DavidAU/MN-WORDSTORM-pt1-RCM-Kiss-of-Madness-18.5B-Instruct-GGUF ]
+
+ [ https://huggingface.co/DavidAU/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct-GGUF ]
+
+ [ https://huggingface.co/DavidAU/MN-WORDSTORM-pt3-RCM-POV-Nightmare-18.5B-Instruct-GGUF ]
+
+ [ https://huggingface.co/DavidAU/MN-WORDSTORM-pt4-RCM-Cliffhanger-18.5B-Instruct-GGUF ]
+
+ [ https://huggingface.co/DavidAU/MN-WORDSTORM-pt5-RCM-Extra-Intense-18.5B-Instruct-gguf ]