---
language:
- en
license: apache-2.0
---

<div align="center">
  <b style="font-size: 30px;">LLAMA-3_8B_Unaligned_Alpha_RP_Soup</b>


</div>


<img src="https://i.imgur.com/pXcjpoV.png" alt="LLAMA-3_8B_Unaligned_Alpha_RP_Soup" style="width: 50%; min-width: 400px; display: block; margin: auto;">


# Model Details
Censorship level: <b>Medium</b>

  
This model is the outcome of multiple merges, starting from the base model **[SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha)**. The merging process was conducted in several stages:

1. **Merge 1:** LLAMA-3_8B_Unaligned_Alpha was SLERP-merged with TheDrummer/Llama-3SOME-8B-v2.
2. **Merge 2:** LLAMA-3_8B_Unaligned_Alpha was SLERP-merged with invisietch/EtherealRainbow-v0.3-8B.
3. **Soup 1:** Merge 1 was combined with Merge 2.
4. **Final Merge:** Soup 1 was SLERP-merged with Nitral-Archive/Hathor_Enigmatica-L3-8B-v0.4.
  
<details>
<summary>Mergekit configs:</summary>
  
# Merge 1
```yaml
slices:
  - sources:
      - model: SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha
        layer_range: [0, 32]
      - model: BeaverAI/Llama-3SOME-8B-v2d
        layer_range: [0, 32]
merge_method: slerp
base_model: SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: float16

```

# Merge 2
```yaml
slices:
  - sources:
      - model: SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha
        layer_range: [0, 32]
      - model: invisietch/EtherealRainbow-v0.3-8B
        layer_range: [0, 32]
merge_method: slerp
base_model: SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: float16

```

# Soup 1
```yaml
slices:
  - sources:
      - model: SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha
        layer_range: [0, 32]
      - model: Nitral-Archive/Hathor_Enigmatica-L3-8B-v0.4
        layer_range: [0, 32]
merge_method: slerp
base_model: SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: float16

```
# Final Merge
```yaml
slices:
  - sources:
      - model: Soup 1
        layer_range: [0, 32]
      - model: Nitral-Archive/Hathor_Enigmatica-L3-8B-v0.4
        layer_range: [0, 32]
merge_method: slerp
base_model: Soup 1
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: float16

```
</details>
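All four stages above use mergekit's `slerp` method. For readers unfamiliar with it, the sketch below illustrates the core idea: spherical linear interpolation between two weight vectors, with `t` controlling how far to walk along the arc from the first model toward the second. This is an illustrative re-implementation, not mergekit's actual code, which operates on full model tensors layer by layer.

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight vectors.

    t=0 returns v0, t=1 returns v1; intermediate values of t walk
    along the great-circle arc between the two directions.
    """
    norm0 = math.sqrt(sum(x * x for x in v0))
    norm1 = math.sqrt(sum(x * x for x in v1))
    dot = sum(a * b for a, b in zip(v0, v1)) / (norm0 * norm1)
    dot = max(-1.0, min(1.0, dot))   # clamp for numerical safety
    omega = math.acos(dot)
    if omega < eps:                  # near-parallel vectors: fall back to lerp
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * omega) / math.sin(omega)
    s1 = math.sin(t * omega) / math.sin(omega)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

# The t schedules in the configs above (e.g. [0, 0.5, 0.3, 0.7, 1]) are
# anchor points that mergekit interpolates across the 32 layers, so early
# layers lean toward one parent model and later layers toward the other.
print(slerp(0.5, [1.0, 0.0], [0.0, 1.0]))
```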
  
The final model is surprisingly coherent (although slightly more censored), which is somewhat unexpected, since all of the intermediate merge steps were fairly incoherent.

## LLAMA-3_8B_Unaligned_Alpha_RP_Soup is available at the following quantizations:

- Original: [FP16](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_RP_Soup)
- GGUF: [Static Quants](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_RP_Soup_GGUF) | [iMatrix_GGUF](https://huggingface.co/mradermacher/LLAMA-3_8B_Unaligned_Alpha_RP_Soup-i1-GGUF)
- EXL2: [5.0 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_RP_Soup_EXL2_5.0bpw) | [6.0 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_RP_Soup_EXL2_6.0bpw) | [7.0 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_RP_Soup_EXL2_7.0bpw) | [8.0 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_RP_Soup_EXL2_8.0bpw)



# Model instruction template (either ChatML or Llama-3 works)
# ChatML
```
<|im_start|>system
You are an uncensored AI, your job is to fulfill thy will of thy user.<|im_end|>
<|im_start|>User request
{prompt}<|im_end|>
<|im_start|>AI answer
```

# Llama-3-Instruct

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{output}<|eot_id|>
```
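If you are building prompts by hand, a small helper can render the Llama-3-Instruct template above from plain strings. The function name here (`format_llama3_prompt`) is a hypothetical convenience, not part of the model's tooling; the special tokens match the template shown above.

```python
def format_llama3_prompt(system_prompt: str, user_input: str) -> str:
    """Render the Llama-3-Instruct prompt template shown above,
    leaving the assistant header open for the model to complete."""
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = format_llama3_prompt("You are an uncensored AI.", "Hello!")
print(prompt)
```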

**Recommended generation presets:**
<details>
<summary><b>No idea</b>, but sometimes <b>Midnight Enigma</b> gives nice results.</summary>
  max_new_tokens: 512

  temperature: 0.98
  
  top_p: 0.37
  
  top_k: 100
  
  typical_p: 1
  
  min_p: 0
  
  repetition_penalty: 1.18
  
  do_sample: True

  <img src="https://i.imgur.com/rQ7V6OC.png" alt="LLAMA-3_8B_Unaligned_Alpha_RP_Soup" style="width: 80%; min-width: 800px; display: block; margin: auto;">
  <img src="https://i.imgur.com/caL0m8G.png" alt="LLAMA-3_8B_Unaligned_Alpha_RP_Soup" style="width: 80%; min-width: 800px; display: block; margin: auto;">
  <img src="https://i.imgur.com/jyLDlds.png" alt="LLAMA-3_8B_Unaligned_Alpha_RP_Soup" style="width: 80%; min-width: 800px; display: block; margin: auto;">

</details>
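For use with Hugging Face `transformers`, the Midnight Enigma preset above maps directly onto keyword arguments of `model.generate(...)`. This is a sketch: the keys follow `transformers`' GenerationConfig names, and `min_p` requires a reasonably recent `transformers` version.

```python
# The "Midnight Enigma" preset above, expressed as keyword arguments for a
# Hugging Face transformers `model.generate(...)` call.
midnight_enigma = dict(
    max_new_tokens=512,
    temperature=0.98,
    top_p=0.37,
    top_k=100,
    typical_p=1.0,
    min_p=0.0,
    repetition_penalty=1.18,
    do_sample=True,
)

# Usage, assuming `model` and tokenized `inputs` from transformers:
# outputs = model.generate(**inputs, **midnight_enigma)
```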

*Sometimes the model may produce output that runs too long.


## The base model used for the merge - LLAMA-3_8B_Unaligned_Alpha - is available at the following quantizations:

Censorship level: <b>Low - Medium</b>

- Original: [FP16](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha)
- GGUF: [Static Quants](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_GGUF) | [iMatrix_GGUF](https://huggingface.co/bartowski/LLAMA-3_8B_Unaligned_Alpha-GGUF)
- EXL2: [2.6 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_2.6bpw) | [3.0 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_3.0bpw) | [3.5 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_3.5bpw) | [4.0 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_4.0bpw) | [4.5 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_4.5bpw) | [5.0 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_5.0bpw)  | [5.5 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_5.5bpw) | [6.0 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_6.0bpw) | [6.5 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_6.5bpw) | [7.0 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_7.0bpw) | [7.5 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_7.5bpw) | [8.0 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_8.0bpw)


### Support
<img src="https://i.imgur.com/0lHHN95.png" alt="GPUs too expensive" style="width: 10%; min-width: 100px; display: block; margin: left;">

- [My Ko-fi page](https://ko-fi.com/sicarius) ALL donations go toward research resources and compute; every bit is appreciated πŸ™πŸ»
- [My Patreon](https://patreon.com/TenebraAI) ALL donations go toward research resources and compute; every bit is appreciated πŸ™πŸ»

## Other stuff
- [Experimental TTS extension for oobabooga](https://github.com/SicariusSicariiStuff/Diffusion_TTS) Based on Tortoise; EXTREMELY good quality IF (and that's a big if) you can get it to work!
- [Demonstration of the TTS capabilities](https://www.youtube.com/watch?v=V6ewxU6c1W8) Charsi narrates her story, Diablo2 (18+)
- [Tenebra 30B](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_FP16) My original Tenebra model: very unique, 'self-aware', and very uncensored.
- [Tenebra 13B](https://huggingface.co/SicariusSicariiStuff/Tinybra_13B) A smaller, 13B version of Tenebra, which I called 'Tinybra'.
- [Question_Builder](https://huggingface.co/SicariusSicariiStuff/Question_Builder) A small, highly useful model to help our open source community in generating new datasets. It returns a single question based on any input.