This is a merge of pre-trained language models created using mergekit.
This model was merged using the linear merge method.
The following models were included in the merge:
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: DreadPoor/Aspire_1.3-8B_model-stock+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
    parameters:
      weight: 1.0
  - model: DreadPoor/LemonP_ALT-8B-Model_Stock
    parameters:
      weight: 1.0
  - model: DreadPoor/Heart_Stolen-8B-Model_Stock
    parameters:
      weight: 1.0
merge_method: linear
normalize: false
int8_mask: true
dtype: bfloat16
```
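As a usage sketch, the merged model can be loaded like any other Llama-3-family checkpoint with the transformers library. The repository id below is a placeholder, not the actual repo name.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repository id; substitute the actual merged-model repo.
model_id = "your-username/your-merged-model"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",           # requires the accelerate package
)

prompt = "Explain model merging in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```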