Commit 86bbc6c by schonsense (parent: b2d8295): Update README.md
---
base_model:
- flammenai/Llama3.1-Flammades-70B
- huihui-ai/Llama-3.3-70B-Instruct-abliterated
library_name: transformers
tags:
- mergekit
- merge
license: llama3.3
---

# lammerged

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the SLERP merge method.
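SLERP (spherical linear interpolation) blends two weight tensors along an arc on the hypersphere rather than along a straight line, so intermediate points keep a sensible magnitude instead of collapsing toward the origin. The following is a minimal numeric sketch of the formula over flattened weight vectors, not mergekit's actual implementation:

```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between flat weight vectors a and b.

    Falls back to plain linear interpolation when the vectors are nearly
    colinear, where the spherical formula is numerically unstable.
    """
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    a_unit = a / np.linalg.norm(a)
    b_unit = b / np.linalg.norm(b)
    dot = np.clip(np.dot(a_unit, b_unit), -1.0, 1.0)
    if 1.0 - abs(dot) < eps:            # nearly parallel: lerp is fine
        return (1.0 - t) * a + t * b
    theta = np.arccos(dot)              # angle between the two directions
    sin_theta = np.sin(theta)
    # Weighted combination; t=0 returns a, t=1 returns b.
    return (np.sin((1.0 - t) * theta) / sin_theta) * a + \
           (np.sin(t * theta) / sin_theta) * b
```

At `t=0.5` between two orthogonal unit vectors this yields the point midway along the arc, e.g. `slerp(0.5, [1, 0], [0, 1])` gives roughly `[0.707, 0.707]` rather than linear interpolation's `[0.5, 0.5]`.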
### Models Merged

The following models were included in the merge:

* [flammenai/Llama3.1-Flammades-70B](https://huggingface.co/flammenai/Llama3.1-Flammades-70B)
* [huihui-ai/Llama-3.3-70B-Instruct-abliterated](https://huggingface.co/huihui-ai/Llama-3.3-70B-Instruct-abliterated)
### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: huihui-ai/Llama-3.3-70B-Instruct-abliterated
  - model: flammenai/Llama3.1-Flammades-70B
merge_method: slerp
base_model: huihui-ai/Llama-3.3-70B-Instruct-abliterated
dtype: bfloat16
parameters:
  t: [
    0, 0.018, 0.073, 0.091, 0.109, 0.127, 0.145, 0.164, 0.182, 0.2,
    0.218, 0.236, 0.255, 0.273, 0.291, 0.309, 0.327, 0.345, 0.364, 0.382, 0.4,
    0.418, 0.436, 0.455, 0.473, 0.491, 0.509, 0.527, 0.545, 0.564, 0.582, 0.6,
    0.588, 0.576, 0.564, 0.552, 0.54, 0.527, 0.515, 0.503, 0.491, 0.479, 0.467,
    0.455, 0.442, 0.43, 0.418, 0.406, 0.394, 0.382, 0.369, 0.357, 0.345, 0.333,
    0.321, 0.309, 0.297, 0.285, 0.273, 0.26, 0.248, 0.236, 0.224, 0.212, 0.2,
    0.188, 0.176, 0.164, 0.151, 0.139, 0.127, 0.115, 0.103, 0.091, 0.079, 0.067,
    0.055, 0.028, 0, 0
  ]
```
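The `t` list is mergekit's per-layer interpolation gradient: 0 keeps the base model's weights for that layer, and larger values blend in more of Flammades. Assuming the 80 values map one-to-one onto the 80 transformer layers of a Llama-70B, the schedule ramps from 0 at the embedding-adjacent layers to a peak of 0.6 just past the middle of the stack, then tapers back to 0 at the output end, so the secondary model's influence is concentrated in the middle layers. A short check of the schedule's shape:

```python
# Per-layer SLERP weights copied from the configuration above:
# 0 keeps the base model's layer, larger values blend in more of the
# secondary model.
t = [
    0, 0.018, 0.073, 0.091, 0.109, 0.127, 0.145, 0.164, 0.182, 0.2,
    0.218, 0.236, 0.255, 0.273, 0.291, 0.309, 0.327, 0.345, 0.364, 0.382, 0.4,
    0.418, 0.436, 0.455, 0.473, 0.491, 0.509, 0.527, 0.545, 0.564, 0.582, 0.6,
    0.588, 0.576, 0.564, 0.552, 0.54, 0.527, 0.515, 0.503, 0.491, 0.479, 0.467,
    0.455, 0.442, 0.43, 0.418, 0.406, 0.394, 0.382, 0.369, 0.357, 0.345, 0.333,
    0.321, 0.309, 0.297, 0.285, 0.273, 0.26, 0.248, 0.236, 0.224, 0.212, 0.2,
    0.188, 0.176, 0.164, 0.151, 0.139, 0.127, 0.115, 0.103, 0.091, 0.079, 0.067,
    0.055, 0.028, 0, 0,
]

assert len(t) == 80           # matches the 80 hidden layers of a Llama-70B
assert max(t) == 0.6          # peak blend of the secondary model
assert t.index(max(t)) == 31  # peak lands just past the stack's midpoint
assert t[0] == 0 and t[-1] == 0  # endpoints stay pure base model
```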