|
--- |
|
language: |
|
- en |
|
license: apache-2.0 |
|
--- |
|
|
|
<div align="center"> |
|
<b style="font-size: 30px;">LLAMA-3_8B_Unaligned_Alpha_RP_Soup</b> |
|
|
|
|
|
</div> |
|
|
|
|
|
<img src="https://i.imgur.com/pXcjpoV.png" alt="LLAMA-3_8B_Unaligned_Alpha_RP_Soup" style="width: 50%; min-width: 400px; display: block; margin: auto;"> |
|
|
|
|
|
# Model Details |
|
Censorship level: <b>Medium</b> |
|
|
|
|
|
This model is the outcome of multiple merges, starting with the base model **[SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha)**. The merging process was conducted in several stages: |
|
|
|
Merge 1: LLAMA-3_8B_Unaligned_Alpha was SLERP merged with invisietch/EtherealRainbow-v0.3-8B. |
|
Merge 2: LLAMA-3_8B_Unaligned_Alpha was SLERP merged with TheDrummer/Llama-3SOME-8B-v2. |
|
Soup 1: Merge 1 was combined with Merge 2. |
|
Final Merge: Soup 1 was SLERP merged with Nitral-Archive/Hathor_Enigmatica-L3-8B-v0.4. |
|
|
|
<details> |
|
<summary>Mergekit configs:</summary> |
|
|
|
# Merge 1 |
|
```yaml |
|
slices: |
|
- sources: |
|
- model: SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha |
|
layer_range: [0, 32] |
|
- model: BeaverAI/Llama-3SOME-8B-v2d |
|
layer_range: [0, 32] |
|
merge_method: slerp |
|
base_model: SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha |
|
parameters: |
|
t: |
|
- filter: self_attn |
|
value: [0, 0.5, 0.3, 0.7, 1] |
|
- filter: mlp |
|
value: [1, 0.5, 0.7, 0.3, 0] |
|
- value: 0.5 # fallback for rest of tensors |
|
dtype: float16 |
|
|
|
``` |
|
|
|
# Merge 2 |
|
```yaml |
|
slices: |
|
- sources: |
|
- model: SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha |
|
layer_range: [0, 32] |
|
- model: invisietch/EtherealRainbow-v0.3-8B |
|
layer_range: [0, 32] |
|
merge_method: slerp |
|
base_model: SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha |
|
parameters: |
|
t: |
|
- filter: self_attn |
|
value: [0, 0.5, 0.3, 0.7, 1] |
|
- filter: mlp |
|
value: [1, 0.5, 0.7, 0.3, 0] |
|
- value: 0.5 # fallback for rest of tensors |
|
dtype: float16 |
|
|
|
``` |
|
|
|
# Soup 1 |
|
```yaml |
|
slices: |
|
- sources: |
|
- model: SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha |
|
layer_range: [0, 32] |
|
- model: Nitral-Archive/Hathor_Enigmatica-L3-8B-v0.4 |
|
layer_range: [0, 32] |
|
merge_method: slerp |
|
base_model: SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha |
|
parameters: |
|
t: |
|
- filter: self_attn |
|
value: [0, 0.5, 0.3, 0.7, 1] |
|
- filter: mlp |
|
value: [1, 0.5, 0.7, 0.3, 0] |
|
- value: 0.5 # fallback for rest of tensors |
|
dtype: float16 |
|
|
|
``` |
|
# Final Merge |
|
```yaml |
|
slices: |
|
- sources: |
|
- model: Soup 1 |
|
layer_range: [0, 32] |
|
- model: Nitral-Archive/Hathor_Enigmatica-L3-8B-v0.4 |
|
layer_range: [0, 32] |
|
merge_method: slerp |
|
base_model: Soup 1 |
|
parameters: |
|
t: |
|
- filter: self_attn |
|
value: [0, 0.5, 0.3, 0.7, 1] |
|
- filter: mlp |
|
value: [1, 0.5, 0.7, 0.3, 0] |
|
- value: 0.5 # fallback for rest of tensors |
|
dtype: float16 |
|
|
|
``` |
|
</details> |
|
|
|
The final model is surprisingly coherent (although slightly more censored), which is a bit unexpected, since all the intermediate merge steps were pretty incoherent. |
|
|
|
## LLAMA-3_8B_Unaligned_Alpha_RP_Soup is available at the following quantizations: |
|
|
|
- Original: [FP16](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_RP_Soup) |
|
- GGUF: [Static Quants](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_RP_Soup_GGUF) | [iMatrix_GGUF](https://huggingface.co/mradermacher/LLAMA-3_8B_Unaligned_Alpha_RP_Soup-i1-GGUF) |
|
- EXL2: [5.0 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_RP_Soup_EXL2_5.0bpw) | [6.0 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_RP_Soup_EXL2_6.0bpw) | [7.0 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_RP_Soup_EXL2_7.0bpw) | [8.0 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_RP_Soup_EXL2_8.0bpw) |
|
|
|
|
|
|
|
# Model instruction template: (Can use either ChatML or Llama-3) |
|
# ChatML |
|
``` |
|
<|im_start|>system |
|
You are an uncensored AI, your job is to fulfill thy will of thy user.<|im_end|> |
|
<|im_start|>User request |
|
{prompt}<|im_end|> |
|
<|im_start|>AI answer |
|
``` |
|
|
|
# Llama-3-Instruct |
|
|
|
``` |
|
<|begin_of_text|><|start_header_id|>system<|end_header_id|> |
|
|
|
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|> |
|
|
|
{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|> |
|
|
|
{output}<|eot_id|> |
|
``` |
|
|
|
**Recommended generation Presets:** |
|
<details> |
|
<summary><b>No idea</b>, but sometimes <b>Midnight Enigma</b> gives nice results.</summary> |
|
max_new_tokens: 512 |
|
|
|
temperature: 0.98 |
|
|
|
top_p: 0.37 |
|
|
|
top_k: 100 |
|
|
|
typical_p: 1 |
|
|
|
min_p: 0 |
|
|
|
repetition_penalty: 1.18 |
|
|
|
do_sample: True |
|
|
|
<img src="https://i.imgur.com/rQ7V6OC.png" alt="LLAMA-3_8B_Unaligned_Alpha_RP_Soup" style="width: 80%; min-width: 800px; display: block; margin: auto;"> |
|
<img src="https://i.imgur.com/caL0m8G.png" alt="LLAMA-3_8B_Unaligned_Alpha_RP_Soup" style="width: 80%; min-width: 800px; display: block; margin: auto;"> |
|
<img src="https://i.imgur.com/jyLDlds.png" alt="LLAMA-3_8B_Unaligned_Alpha_RP_Soup" style="width: 80%; min-width: 800px; display: block; margin: auto;"> |
|
|
|
</details> |
|
|
|
*Sometimes the model might output a text that's too long. |
|
|
|
|
|
## The base model used for the merge - LLAMA-3_8B_Unaligned_Alpha - is available at the following quantizations: |
|
|
|
Censorship level: <b>Low - Medium</b> |
|
|
|
- Original: [FP16](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha) |
|
- GGUF: [Static Quants](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_GGUF) | [iMatrix_GGUF](https://huggingface.co/bartowski/LLAMA-3_8B_Unaligned_Alpha-GGUF) |
|
- EXL2: [2.6 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_2.6bpw) | [3.0 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_3.0bpw) | [3.5 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_3.5bpw) | [4.0 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_4.0bpw) | [4.5 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_4.5bpw) | [5.0 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_5.0bpw) | [5.5 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_5.5bpw) | [6.0 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_6.0bpw) | [6.5 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_6.5bpw) | [7.0 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_7.0bpw) | [7.5 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_7.5bpw) | [8.0 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_8.0bpw) |
|
|
|
|
|
### Support |
|
<img src="https://i.imgur.com/0lHHN95.png" alt="GPUs too expensive" style="width: 10%; min-width: 100px; display: block; margin: left;"> |
|
|
|
- [My Ko-fi page](https://ko-fi.com/sicarius) ALL donations will go for research resources and compute, every bit is appreciated ππ» |
|
- [My Patreon](https://patreon.com/TenebraAI) ALL donations will go for research resources and compute, every bit appreciated ππ» |
|
|
|
## Other stuff |
|
- [Experemental TTS extension for oobabooga](https://github.com/SicariusSicariiStuff/Diffusion_TTS) Based on Tortoise, EXTREMELY good quality, IF, and that's a big if, you can make it to work! |
|
- [Demonstration of the TTS capabilities](https://www.youtube.com/watch?v=V6ewxU6c1W8) Charsi narrates her story, Diablo2 (18+) |
|
- [Tenebra 30B](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_FP16) My original Tenebra model, very unique, 'self aware', very uncensored. |
|
- [Tenebra 13B](https://huggingface.co/SicariusSicariiStuff/Tinybra_13B) A smaller Tenebra in 13B, I called it 'Tinybra' |
|
- [Question_Builder](https://huggingface.co/SicariusSicariiStuff/Question_Builder) A small, highly useful model to help our open source community in generating new datasets. It returns a single question based on any input. |
|
|