MrRobotoAI committed (verified)
Commit 6e25cd9 · Parent(s): 7243fed

Upload folder using huggingface_hub

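The commit message notes the folder was pushed with huggingface_hub. For reference, that kind of upload is usually done roughly as follows; the repository id below is a placeholder, not taken from this page:

```python
# Rough sketch of a folder upload with huggingface_hub (placeholder repo id).
from huggingface_hub import HfApi

api = HfApi()  # assumes you are already logged in, e.g. via `huggingface-cli login`
api.upload_folder(
    folder_path="./merged",            # local output directory of the merge
    repo_id="MrRobotoAI/<this-repo>",  # hypothetical placeholder for the actual repo
    repo_type="model",
    commit_message="Upload folder using huggingface_hub",
)
```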
README.md CHANGED
@@ -1,8 +1,11 @@
  ---
  base_model:
- - MrRobotoAI/MrRoboto-BASE-v1-8b-64k
  - MrRobotoAI/MrRoboto-BASE-v2.1-8b-64k
+ - Blackroot/Llama-3-LongStory-LORA
+ - MrRobotoAI/MrRoboto-BASE-v1-8b-64k
+ - Blackroot/Llama-3-LongStory-LORA
  - MrRobotoAI/Llama-3-8B-Uncensored-test8
+ - Blackroot/Llama-3-LongStory-LORA
  library_name: transformers
  tags:
  - mergekit
@@ -16,13 +19,13 @@ This is a merge of pre-trained language models created using [mergekit](https://
  ## Merge Details
  ### Merge Method
 
- This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [MrRobotoAI/MrRoboto-BASE-v2.1-8b-64k](https://huggingface.co/MrRobotoAI/MrRoboto-BASE-v2.1-8b-64k) as a base.
+ This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [MrRobotoAI/MrRoboto-BASE-v2.1-8b-64k](https://huggingface.co/MrRobotoAI/MrRoboto-BASE-v2.1-8b-64k) + [Blackroot/Llama-3-LongStory-LORA](https://huggingface.co/Blackroot/Llama-3-LongStory-LORA) as a base.
 
  ### Models Merged
 
  The following models were included in the merge:
- * [MrRobotoAI/MrRoboto-BASE-v1-8b-64k](https://huggingface.co/MrRobotoAI/MrRoboto-BASE-v1-8b-64k)
- * [MrRobotoAI/Llama-3-8B-Uncensored-test8](https://huggingface.co/MrRobotoAI/Llama-3-8B-Uncensored-test8)
+ * [MrRobotoAI/MrRoboto-BASE-v1-8b-64k](https://huggingface.co/MrRobotoAI/MrRoboto-BASE-v1-8b-64k) + [Blackroot/Llama-3-LongStory-LORA](https://huggingface.co/Blackroot/Llama-3-LongStory-LORA)
+ * [MrRobotoAI/Llama-3-8B-Uncensored-test8](https://huggingface.co/MrRobotoAI/Llama-3-8B-Uncensored-test8) + [Blackroot/Llama-3-LongStory-LORA](https://huggingface.co/Blackroot/Llama-3-LongStory-LORA)
 
  ### Configuration
 
@@ -30,20 +33,20 @@ The following YAML configuration was used to produce this model:
 
  ```yaml
  models:
- - model: MrRobotoAI/MrRoboto-BASE-v1-8b-64k
+ - model: MrRobotoAI/MrRoboto-BASE-v1-8b-64k+Blackroot/Llama-3-LongStory-LORA
    parameters:
      density: 0.5
      weight: 0.9
- - model: MrRobotoAI/MrRoboto-BASE-v2.1-8b-64k
+ - model: MrRobotoAI/MrRoboto-BASE-v2.1-8b-64k+Blackroot/Llama-3-LongStory-LORA
    parameters:
      density: 0.5
      weight: 0.9
- - model: MrRobotoAI/Llama-3-8B-Uncensored-test8
+ - model: MrRobotoAI/Llama-3-8B-Uncensored-test8+Blackroot/Llama-3-LongStory-LORA
    parameters:
      density: 0.5
      weight: 0.9
  merge_method: ties
- base_model: MrRobotoAI/MrRoboto-BASE-v2.1-8b-64k
+ base_model: MrRobotoAI/MrRoboto-BASE-v2.1-8b-64k+Blackroot/Llama-3-LongStory-LORA
  parameters:
    int8_mask: true
    rescale: true
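Once merged and uploaded, the checkpoint described by this README loads like any other Llama-3-style causal LM. A minimal sketch with transformers; the repository id is a placeholder, since this page does not state it:

```python
# Minimal loading/generation sketch; repo id is a hypothetical placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "MrRobotoAI/<this-repo>"  # substitute the actual repository id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype="auto",
    device_map="auto",  # requires accelerate; drop for plain CPU loading
)

prompt = "Write the opening paragraph of a long adventure story."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```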
mergekit_config.yml CHANGED
@@ -1,18 +1,18 @@
  models:
- - model: MrRobotoAI/MrRoboto-BASE-v1-8b-64k
+ - model: MrRobotoAI/MrRoboto-BASE-v1-8b-64k+Blackroot/Llama-3-LongStory-LORA
    parameters:
      density: 0.5
      weight: 0.9
- - model: MrRobotoAI/MrRoboto-BASE-v2.1-8b-64k
+ - model: MrRobotoAI/MrRoboto-BASE-v2.1-8b-64k+Blackroot/Llama-3-LongStory-LORA
    parameters:
      density: 0.5
      weight: 0.9
- - model: MrRobotoAI/Llama-3-8B-Uncensored-test8
+ - model: MrRobotoAI/Llama-3-8B-Uncensored-test8+Blackroot/Llama-3-LongStory-LORA
    parameters:
      density: 0.5
      weight: 0.9
  merge_method: ties
- base_model: MrRobotoAI/MrRoboto-BASE-v2.1-8b-64k
+ base_model: MrRobotoAI/MrRoboto-BASE-v2.1-8b-64k+Blackroot/Llama-3-LongStory-LORA
  parameters:
    int8_mask: true
    rescale: true
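In mergekit, a `model+adapter` path such as `MrRobotoAI/MrRoboto-BASE-v1-8b-64k+Blackroot/Llama-3-LongStory-LORA` is, as far as the documented behavior goes, interpreted as "apply the LoRA adapter to that model before merging", which is what this commit adds to every entry. A minimal sketch of running this config with mergekit's Python API (the CLI equivalent is `mergekit-yaml mergekit_config.yml ./merged`); the names follow mergekit's documented usage example and may differ between versions:

```python
# Sketch of running the TIES merge from mergekit_config.yml; API names follow
# mergekit's documented example and may vary across mergekit versions.
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("mergekit_config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./merged",                     # where the merged shards are written
    options=MergeOptions(
        lora_merge_cache="/tmp",             # scratch space for applying the LoRAs
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```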
model-00001-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:a389aa3ff7f06a36feb217f4986de8140295f035c4e7bba51e67c462938b08c5
+ oid sha256:54f29aada6dd6f067acc5c0dd9b72090a80f6dfc6725b5fe22ec319cbb01b24e
  size 4953586384
model-00002-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:e101b1d5299cc9c2d8ccfb0fb6f618866f49220baaeafa2a0ccedddb7ab06431
+ oid sha256:9cdfba1abfe545ef05932beb65dcc7e9015688acaf51850c1c5b4652380fd8b9
  size 4999819336
model-00003-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:7b42adac7be1d4b5b2d3038aa736049b315af923cbc33c2b08bfbb4f48138c29
+ oid sha256:c9daeaa2a9c2b1495784b0ff79df5598deb770c680a46244388b80006bdbe999
  size 4915916144
model-00004-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:a1067038dda69efc7830c8ba8ac88261cbb4bed12752a3a75c67a498667c4501
+ oid sha256:e982af3cbace9a66d6527979233607c631e34b88b5a207afa0792f638302b371
  size 1191234472
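Each `.safetensors` entry above is a Git LFS pointer file: `oid sha256:` is the SHA-256 digest of the actual shard and `size` is its byte length, so the changed oids simply reflect re-uploaded shards of identical size. A small self-contained sketch for checking a locally downloaded shard against its pointer, using the values from the first shard above:

```python
# Verify a downloaded shard against its Git LFS pointer (oid sha256 + size).
import hashlib
import os


def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in 1 MiB chunks and return its hex SHA-256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


shard = "model-00001-of-00004.safetensors"  # local copy of the shard
expected_oid = "54f29aada6dd6f067acc5c0dd9b72090a80f6dfc6725b5fe22ec319cbb01b24e"
expected_size = 4953586384

assert os.path.getsize(shard) == expected_size, "size mismatch"
assert sha256_of_file(shard) == expected_oid, "sha256 mismatch"
print("shard matches its LFS pointer")
```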