---
base_model:
- failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
- NousResearch/Meta-Llama-3-8B-Instruct
- UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3
library_name: transformers
tags:
- mergekit
- merge
license: llama3
---
# SPPO-abliterated
This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method
This model was merged using the task arithmetic merge method, with NousResearch/Meta-Llama-3-8B-Instruct as the base model.
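Task arithmetic builds the merged weights by adding the weighted parameter differences ("task vectors") of each fine-tuned model relative to the shared base. The snippet below is a minimal, hypothetical per-tensor sketch of that computation; the function name and arguments are illustrative and not mergekit's actual API.

```python
import torch

def task_arithmetic_merge(base_sd, tuned_sds, weights):
    """Add weighted task vectors (tuned - base) onto the base parameters."""
    merged = {}
    for name, base_param in base_sd.items():
        delta = torch.zeros_like(base_param, dtype=torch.float32)
        for tuned_sd, w in zip(tuned_sds, weights):
            # Task vector: parameter-wise difference from the shared base model.
            delta += w * (tuned_sd[name].float() - base_param.float())
        merged[name] = base_param.float() + delta
    return merged
```

With both weights set to 1.0, as in the configuration below, each model's task vector is added to the base in full.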
### Models Merged
The following models were included in the merge:

* UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3
* failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: NousResearch/Meta-Llama-3-8B-Instruct
dtype: float32
merge_method: task_arithmetic
slices:
- sources:
  - layer_range: [0, 32]
    model: UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3
    parameters:
      weight: 1.0
  - layer_range: [0, 32]
    model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
    parameters:
      weight: 1.0
  - layer_range: [0, 32]
    model: NousResearch/Meta-Llama-3-8B-Instruct
```
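The resulting checkpoint can be loaded like any other Llama 3 Instruct model with transformers. The repository id below is a placeholder; replace it with wherever the merged model is hosted (or a local path).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-username/SPPO-abliterated"  # placeholder, not an actual repository

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```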