ryzen88 committed
Commit 7bc900b
1 Parent(s): f624b3b

Update README.md

Files changed (1)
  1. README.md +54 -54
README.md CHANGED
@@ -1,54 +1,54 @@
- ---
- base_model: []
- library_name: transformers
- tags:
- - mergekit
- - merge
-
- ---
- # model
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ## Merge Details
- ### Merge Method
-
- This model was merged using the breadcrumbs_ties merge method using I:\Llama-3-70B-Instruct-Gradient-262k as a base.
-
- ### Models Merged
-
- The following models were included in the merge:
- * E:\Llama-3-Lumimaid-70B-v0.1-OAS
- * Z:\peter\LLM's\Smaug-Llama-3-70B-Instruct
- * Z:\peter\LLM's\Llama-3-70B-Instruct-abliterated-v3
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
- models:
-   - model: I:\Llama-3-70B-Instruct-Gradient-262k
-     parameters:
-       weight: 0.40
-       density: 0.90
-       gamma: 0.01
-   - model: Z:\peter\LLM's\Llama-3-70B-Instruct-abliterated-v3
-     parameters:
-       weight: 0.20
-       density: 0.90
-       gamma: 0.01
-   - model: Z:\peter\LLM's\Smaug-Llama-3-70B-Instruct
-     parameters:
-       weight: 0.40
-       density: 0.90
-       gamma: 0.01
-   - model: E:\Llama-3-Lumimaid-70B-v0.1-OAS
-     parameters:
-       weight: 0.20
-       density: 0.90
-       gamma: 0.01
- merge_method: breadcrumbs_ties
- base_model: I:\Llama-3-70B-Instruct-Gradient-262k
- dtype: bfloat16
- ```

+ ---
+ base_model: []
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
+
+ ---
+ # model
+
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+ ## Merge Details
+ ### Merge Method
+
+ This model was merged using the breadcrumbs_ties merge method using \Llama-3-70B-Instruct-Gradient-262k as a base.
+
+ ### Models Merged
+
+ The following models were included in the merge:
+ * \Llama-3-Lumimaid-70B-v0.1-OAS
+ * \Smaug-Llama-3-70B-Instruct
+ * \Llama-3-70B-Instruct-abliterated-v3
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+ models:
+   - model: \Llama-3-70B-Instruct-Gradient-262k
+     parameters:
+       weight: 0.40
+       density: 0.90
+       gamma: 0.01
+   - model: \Llama-3-70B-Instruct-abliterated-v3
+     parameters:
+       weight: 0.20
+       density: 0.90
+       gamma: 0.01
+   - model: \Smaug-Llama-3-70B-Instruct
+     parameters:
+       weight: 0.40
+       density: 0.90
+       gamma: 0.01
+   - model: \Llama-3-Lumimaid-70B-v0.1-OAS
+     parameters:
+       weight: 0.20
+       density: 0.90
+       gamma: 0.01
+ merge_method: breadcrumbs_ties
+ base_model: \Llama-3-70B-Instruct-Gradient-262k
+ dtype: bfloat16
+ ```
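For context on the `density` and `gamma` parameters in the config above: the breadcrumbs methods sparsify each model's delta from the base by dropping the largest-magnitude outliers (the `gamma` fraction) and enough of the smallest-magnitude entries that only `density` of the parameters survive, then combine the surviving deltas using the per-model weights. The following NumPy sketch illustrates that masking step on a toy tensor — it is an illustration of the idea, not mergekit's actual implementation, and `breadcrumbs_mask` plus the random tensors are invented here:

```python
import numpy as np

def breadcrumbs_mask(delta: np.ndarray, density: float = 0.90, gamma: float = 0.01) -> np.ndarray:
    """Boolean mask for the breadcrumbs sparsification step (illustrative sketch).

    Drops the top `gamma` fraction of entries by magnitude (outliers) and
    enough of the smallest-magnitude entries that only `density` of the
    total survive.
    """
    n = delta.size
    order = np.argsort(np.abs(delta))   # indices, ascending by magnitude
    n_top = int(round(gamma * n))       # largest-magnitude outliers to drop
    n_keep = int(round(density * n))    # entries that survive overall
    n_bottom = n - n_keep - n_top       # smallest-magnitude entries to drop
    mask = np.zeros(n, dtype=bool)
    mask[order[n_bottom : n - n_top]] = True  # keep the middle band of magnitudes
    return mask

# Toy merge of one flattened tensor, using the per-model weights from the YAML above.
rng = np.random.default_rng(0)
base = rng.normal(size=1000)
merged = base.copy()
for weight in (0.40, 0.20, 0.40, 0.20):   # weights from the config
    delta = rng.normal(size=base.size)    # stand-in for (model - base)
    merged += weight * delta * breadcrumbs_mask(delta)
```

With `density: 0.90` and `gamma: 0.01`, as in this config, 90% of each delta's entries survive and the top 1% by magnitude are always discarded.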