Update README.md
README.md
CHANGED
@@ -1,6 +1,10 @@
 ---
 base_model:
 - v000000/L3.1-8B-RP-Test-003-Task_Arithmetic
+- v000000/L3.1-Niitorm-8B-t0.0001
+- Sao10K/L3.1-8B-Niitama-v1.1
+- arcee-ai/Llama-3.1-SuperNova-Lite
+- akjindal53244/Llama-3.1-Storm-8B
 - v000000/L3.1-8B-RP-Test-002-Task_Arithmetic
 - grimjim/Llama-3-Instruct-abliteration-LoRA-8B
 library_name: transformers
@@ -24,6 +28,10 @@ This model was merged using the SLERP merge method.
 ### Models Merged
 
 The following models were included in the merge:
+* [v000000/L3.1-Niitorm-8B-t0.0001](https://huggingface.co/v000000/L3.1-Niitorm-8B-t0.0001)
+* [Sao10K/L3.1-8B-Niitama-v1.1](https://huggingface.co/Sao10K/L3.1-8B-Niitama-v1.1) + [grimjim/Llama-3-Instruct-abliteration-LoRA-8B](https://huggingface.co/grimjim/Llama-3-Instruct-abliteration-LoRA-8B)
+* [akjindal53244/Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B)
+* [arcee-ai/Llama-3.1-SuperNova-Lite](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite)
 * [v000000/L3.1-8B-RP-Test-003-Task_Arithmetic](https://huggingface.co/v000000/L3.1-8B-RP-Test-003-Task_Arithmetic)
 * [v000000/L3.1-8B-RP-Test-002-Task_Arithmetic](https://huggingface.co/v000000/L3.1-8B-RP-Test-002-Task_Arithmetic) + [grimjim/Llama-3-Instruct-abliteration-LoRA-8B](https://huggingface.co/grimjim/Llama-3-Instruct-abliteration-LoRA-8B)
 
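The hunk above keeps the card's note that this model was merged with SLERP (spherical linear interpolation): instead of averaging two checkpoints along a straight line, SLERP walks the arc between their weight vectors, which preserves the scale of the interpolated weights. A minimal NumPy sketch of the idea; the function below is illustrative and is not mergekit's actual implementation:

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Interpolate between flattened weight tensors a and b along the arc between them."""
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))  # angle between the vectors
    if omega < eps:  # near-parallel vectors: fall back to plain linear interpolation
        return (1.0 - t) * a + t * b
    so = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / so) * a + (np.sin(t * omega) / so) * b

# t = 0.0 returns the first model's weights, t = 1.0 the second's.
w = slerp(0.5, np.random.randn(4096), np.random.randn(4096))
```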
@@ -32,6 +40,7 @@ The following models were included in the merge:
 The following YAML configuration was used to produce this model:
 
 ```yaml
+#Step 1 - Add smarts to Niitama with alchemonaut's algorithm.
 slices:
 - sources:
   - model: Sao10K/L3.1-8B-Niitama-v1.1+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
@@ -45,6 +54,7 @@ parameters:
     - value: 0.0001
 dtype: bfloat16
 out_type: float16 #oops
+#Step 2 - Learn vectors onto Supernova 0.4
 models:
 - model: arcee-ai/Llama-3.1-SuperNova-Lite
   parameters:
@@ -57,6 +67,7 @@ base_model: arcee-ai/Llama-3.1-SuperNova-Lite
 parameters:
   normalize: false
 dtype: float16
+#Step 3 - Fully learn vectors onto Supernova 1.255
 models:
 - model: arcee-ai/Llama-3.1-SuperNova-Lite
   parameters:
@@ -69,6 +80,8 @@ base_model: arcee-ai/Llama-3.1-SuperNova-Lite
 parameters:
   normalize: false
 dtype: float16
+
+#Step 4 - Merge checkpoints and keep output/input Supernova heavy with a triangular slerp from sophosympatheia.
 models:
 - model: v000000/L3.1-8B-RP-Test-003-Task_Arithmetic
 merge_method: slerp
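The step 2 and step 3 comments added above ("learn vectors onto Supernova" at 0.4 and 1.255) describe task arithmetic: each donor model contributes its weight delta from the base, scaled by a coefficient, with no normalization (`normalize: false`). A minimal sketch under that reading; the helper name and the random arrays are illustrative stand-ins for real checkpoints:

```python
import numpy as np

def task_arithmetic(base: np.ndarray, donors: list[np.ndarray], weights: list[float]) -> np.ndarray:
    """result = base + sum_i w_i * (donor_i - base), without normalization."""
    merged = base.copy()
    for donor, w in zip(donors, weights):
        merged += w * (donor - base)  # add the scaled "task vector"
    return merged

base = np.random.randn(4096)   # stands in for SuperNova-Lite weights
donor = np.random.randn(4096)  # stands in for the step-1 checkpoint

partial = task_arithmetic(base, [donor], [0.4])    # step 2: stay close to the base
full = task_arithmetic(base, [donor], [1.255])     # step 3: overshoot past the donor
```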