This is a merge of pre-trained language models created using mergekit.
This model was merged using the linear merge method using Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2 as a base.
The following models were included in the merge:

* mergekit-community/L3-Boshima-a
The following YAML configuration was used to produce this model:
```yaml
merge_method: linear
models:
  - model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
    parameters:
      weight:
        - filter: v_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: o_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: up_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: gate_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: down_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - value: 1
  - model: mergekit-community/L3-Boshima-a
    parameters:
      weight:
        - filter: v_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: o_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: up_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: gate_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: down_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - value: 0
base_model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
tokenizer_source: base
dtype: float32
out_dtype: bfloat16
```
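At its core, the linear method computes a per-tensor weighted sum of the source models' parameters; the bracketed lists above are per-layer weight gradients, so the complementary 1/0 patterns hand entire layer ranges to one model or the other. The sketch below illustrates the arithmetic only (the function name and example tensors are hypothetical, not mergekit's API):

```python
import numpy as np

def linear_merge(tensors, weights, normalize=True):
    """Weighted sum of corresponding tensors from each model.

    When normalize is True, weights are rescaled to sum to 1,
    so the result is a weighted average of the inputs.
    """
    weights = np.asarray(weights, dtype=np.float64)
    total = weights.sum()
    if normalize and total != 0:
        weights = weights / total
    return sum(w * t for w, t in zip(weights, tensors))

# Complementary 1/0 weights, as in the config above: the merged
# tensor is taken wholesale from whichever model has weight 1.
a = np.full((2, 2), 10.0)  # stand-in tensor from the base model
b = np.full((2, 2), 20.0)  # stand-in tensor from L3-Boshima-a
print(linear_merge([a, b], [1, 0]))  # all 10.0: base model's tensor
print(linear_merge([a, b], [0, 1]))  # all 20.0: Boshima's tensor
print(linear_merge([a, b], [1, 1]))  # all 15.0: equal blend
```

With fractional weights the same formula blends tensors smoothly, which is why intermediate gradient values (between 0 and 1) produce a gradual hand-off across layers rather than a hard cut.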