---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- stephenlzc/dolphin-llama3-zh-cn-uncensored
- georgesung/llama3_8b_chat_uncensored
- aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K
---

# Testing

Testing is a [mergekit](https://github.com/cg123/mergekit) merge of the following models, combined with the `dare_ties` method on top of the base model Orenguteng/Llama-3-8B-Lexi-Uncensored:
* [stephenlzc/dolphin-llama3-zh-cn-uncensored](https://huggingface.co/stephenlzc/dolphin-llama3-zh-cn-uncensored)
* [georgesung/llama3_8b_chat_uncensored](https://huggingface.co/georgesung/llama3_8b_chat_uncensored)
* [aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K](https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K)

## 🧩 Configuration

```yaml
models:
  - model: Orenguteng/Llama-3-8B-Lexi-Uncensored
  - model: stephenlzc/dolphin-llama3-zh-cn-uncensored
    parameters:
      density: 0.53
      weight: 0.4
  - model: georgesung/llama3_8b_chat_uncensored
    parameters:
      density: 0.53
      weight: 0.3
  - model: aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K
    parameters:
      density: 0.53
      weight: 0.3
merge_method: dare_ties
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
parameters:
  int8_mask: true
dtype: bfloat16
```
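
## 💻 Usage

The snippet below is a minimal sketch of how the merged model could be loaded with 🤗 Transformers once it is pushed to the Hub. The repo id `your-username/Testing`, the prompt, and the sampling parameters are placeholders, not part of the merge configuration above.

```python
# Minimal usage sketch, assuming the merge has been uploaded to the Hub
# under a placeholder repo id ("your-username/Testing").
import torch
from transformers import AutoTokenizer, pipeline

model_id = "your-username/Testing"  # hypothetical repo id; change to the actual location

tokenizer = AutoTokenizer.from_pretrained(model_id)

# Text-generation pipeline in bfloat16, matching the dtype used for the merge.
generator = pipeline(
    "text-generation",
    model=model_id,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Llama-3-style chat formatting via the tokenizer's chat template.
messages = [{"role": "user", "content": "What is a large language model?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

outputs = generator(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```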