---
base_model:
- x0000001/mergekit-task_arithmetic-vlehhex
- grimjim/Llama-3-Instruct-abliteration-LoRA-8B
library_name: transformers
tags:
- mergekit
- merge
---
# SwallowMaid-8B
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f74b6e6389380c77562762/0vhS2LvbcQm6dwaFkC_HK.png)
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was produced with a three-stage merge. First, a linear merge combined several roleplay- and writing-oriented Llama-3 models with Llama-3-Swallow (rpmix-part1). Second, a task-arithmetic pass infused 35% of that mix into UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3 (sppo-rpmix-part2). Finally, a linear pass applied the grimjim/Llama-3-Instruct-abliteration-LoRA-8B adapter to produce SwallowMaid-8B.
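For intuition, here is a minimal sketch of the two operations the configurations below rely on: linear weight averaging and task arithmetic (adding weighted deltas from a base model). The function names and toy tensors are illustrative only, not mergekit's actual implementation.

```python
import torch

def linear_merge(state_dicts, weights, normalize=True):
    """Weighted combination of parameter tensors across models (mergekit 'linear')."""
    total = sum(weights)
    merged = {}
    for name in state_dicts[0]:
        acc = sum(w * sd[name] for sd, w in zip(state_dicts, weights))
        merged[name] = acc / total if normalize else acc
    return merged

def task_arithmetic_merge(base_sd, tuned_sds, weights):
    """Add weighted task vectors (tuned - base) onto the base model (mergekit 'task_arithmetic')."""
    return {
        name: base_t + sum(w * (sd[name] - base_t) for sd, w in zip(tuned_sds, weights))
        for name, base_t in base_sd.items()
    }

# Toy example with a single 2x2 "layer":
a = {"layer.weight": torch.ones(2, 2)}
b = {"layer.weight": torch.zeros(2, 2)}
print(linear_merge([a, b], weights=[0.6, 0.4]))        # 0.6 everywhere
print(task_arithmetic_merge(a, [b], weights=[0.35]))   # 1 + 0.35 * (0 - 1) = 0.65
```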
### Models Merged
The following models were included in the merge:
* [grimjim/Llama-3-Instruct-abliteration-LoRA-8B](https://huggingface.co/grimjim/Llama-3-Instruct-abliteration-LoRA-8B)
* [UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3](https://huggingface.co/UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3)
* [NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS)
* [maldv/llama-3-fantasy-writer-8b](https://huggingface.co/maldv/llama-3-fantasy-writer-8b)
* [tokyotech-llm/Llama-3-Swallow-8B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-8B-v0.1)
* [Nitral-AI/Hathor_Respawn-L3-8B-v0.8](https://huggingface.co/Nitral-AI/Hathor_Respawn-L3-8B-v0.8)
### Configuration
The following three mergekit configurations were used to produce this model, run in order; each pass writes its output to a local directory that the next configuration references by name:
```yaml
# Part 1, linear merge rpmix (rpmix-part1)
models:
  - model: Nitral-AI/Hathor_Respawn-L3-8B-v0.8
    parameters:
      weight: 0.6
  - model: maldv/llama-3-fantasy-writer-8b
    parameters:
      weight: 0.1
  - model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
    parameters:
      weight: 0.4
  - model: tokyotech-llm/Llama-3-Swallow-8B-v0.1
    parameters:
      weight: 0.15
merge_method: linear
dtype: float32
---
# Part 2, infuse 35% of swallow+rpmix into SPPO-Iter3 (sppo-rpmix-part2)
models:
  - model: UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3
    parameters:
      weight: 1.0
  - model: rpmix-part1
    parameters:
      weight: 0.35
merge_method: task_arithmetic
base_model: UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3
parameters:
  normalize: false
dtype: float32
---
# Part 3, apply abliteration (SwallowMaid-8B)
models:
  - model: sppo-rpmix-part2+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
    parameters:
      weight: 1.0
merge_method: linear
dtype: float32
```
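Because these are three separate mergekit passes, each configuration has to be run on its own, with the intermediate outputs saved under the directory names that the later configurations reference (rpmix-part1, sppo-rpmix-part2). A rough sketch using mergekit's Python API; the config file names and option values below are assumptions, not the exact setup used:

```python
import torch
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Hypothetical file names: each YAML document above saved to its own file.
passes = [
    ("rpmix-part1.yml", "rpmix-part1"),
    ("sppo-rpmix-part2.yml", "sppo-rpmix-part2"),
    ("swallowmaid-8b.yml", "SwallowMaid-8B"),
]

for config_path, out_dir in passes:
    with open(config_path, "r", encoding="utf-8") as fp:
        config = MergeConfiguration.model_validate(yaml.safe_load(fp))
    # Each pass writes a full model directory that the next config refers to by name.
    run_merge(
        config,
        out_path=out_dir,
        options=MergeOptions(cuda=torch.cuda.is_available(), copy_tokenizer=True),
    )
```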