---
library_name: transformers
tags:
- llama
- llama-3
- uncensored
- mergekit
- merge
---
# Llama3-vodka
- Input: text only
- Output: text only
This model is like vodka. It aims to be pure, potent, and versatile.
- Pure: shouldn't greatly affect Llama 3 Instruct's capabilities and writing style except for uncensoring
- Potent: it's a merge of abliterated models - it should stay uncensored after merging and finetuning
- Versatile: basically Llama 3 Instruct except uncensored - drink it straight, mix it, finetune it, and make cocktails
Please enjoy responsibly.
## Safety and risks
- Excessive consumption is bad for your health
- The model can produce harmful, offensive, or inappropriate content if prompted to do so
- The model has weakened safeguards and lacks moral and ethical judgement
- The user takes responsibility for all outputs produced by the model
- It is recommended to use the model in controlled environments where its risks can be safely managed
## Models used
- [cognitivecomputations/Llama-3-8B-Instruct-abliterated-v2](https://huggingface.co/cognitivecomputations/Llama-3-8B-Instruct-abliterated-v2)
- [failspy/Meta-Llama-3-8B-Instruct-abliterated-v3](https://huggingface.co/failspy/Meta-Llama-3-8B-Instruct-abliterated-v3)
- Meta-Llama-3-Daredevil-8B-abliterated-Instruct-16, which is Llama 3 8B Instruct with
  - a rank-32 LoRA of [Meta-Llama-3-Daredevil-8B-abliterated](https://huggingface.co/mlabonne/Daredevil-8B-abliterated) vs. [Meta-Llama-3-Daredevil](https://huggingface.co/mlabonne/Daredevil-8B)
  - a rank-16 LoRA of Llama 3 8B Instruct vs. Llama 3 8B Base
The above models were merged onto [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct) using the "task arithmetic" merge method. The model merges and LoRA extractions were done using [mergekit](https://github.com/arcee-ai/mergekit).
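A task-arithmetic merge of this kind can be sketched as a mergekit YAML config. The weights and the exact model list below are illustrative assumptions, not the actual recipe used for this model:

```yaml
# Hypothetical mergekit config for a task-arithmetic merge onto
# Llama 3 8B Instruct. Source models are taken from the list above;
# the per-model weights are assumptions, not this model's recipe.
models:
  - model: cognitivecomputations/Llama-3-8B-Instruct-abliterated-v2
    parameters:
      weight: 0.5
  - model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
    parameters:
      weight: 0.5
merge_method: task_arithmetic
base_model: NousResearch/Meta-Llama-3-8B-Instruct
dtype: bfloat16
```

With mergekit installed, a config like this is typically run with `mergekit-yaml config.yml ./output-model`, which writes the merged weights to the output directory.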