This is a merge of pre-trained language models created using mergekit.
This model was merged using the della merge method, with win10/Mistral-Nemo-abliterated-Nemo-Pro-v2 as the base model.
The following models were included in the merge:

- elinas/Chronos-Gold-12B-1.0
- Gryphe/Pantheon-RP-1.5-12b-Nemo
- DavidAU/MN-Dark-Planet-TITAN-12B
- DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS
- Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: elinas/Chronos-Gold-12B-1.0
    parameters:
      density: 0.8
      weight: 1.0
  - model: Gryphe/Pantheon-RP-1.5-12b-Nemo
    parameters:
      density: 0.8
      weight: 1.0
  - model: DavidAU/MN-Dark-Planet-TITAN-12B
    parameters:
      density: 0.8
      weight: 1.0
  - model: DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS
    parameters:
      density: 0.8
      weight: 1.0
  - model: Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24
    parameters:
      density: 0.8
      weight: 1.0
merge_method: della
base_model: win10/Mistral-Nemo-abliterated-Nemo-Pro-v2
parameters:
  epsilon: 0.10
  lambda: 1.00
  int8_mask: true
dtype: float16
```
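
To reproduce the merge, a minimal sketch using mergekit's Python API is shown below. It assumes mergekit is installed (`pip install mergekit`), that the YAML above is saved as `config.yaml`, and that `./merged` is the desired output directory; both file names are hypothetical, not part of this model card.

```python
# A minimal sketch of running the merge with mergekit's Python API.
# Assumes: `pip install mergekit`, the YAML above saved as config.yaml,
# and enough disk/RAM (or a GPU) to materialize six 12B checkpoints.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./merged",  # hypothetical output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # run the merge on GPU if available
        copy_tokenizer=True,             # copy the base model's tokenizer
    ),
)
```

The equivalent CLI invocation would be `mergekit-yaml config.yaml ./merged --cuda`.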
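
Once merged, the result loads like any other Mistral-Nemo-based causal LM. A minimal inference sketch with transformers follows; the model path is the hypothetical output directory from the sketch above (swap in the Hub repo id once uploaded), and the chat template is assumed to be inherited from the Mistral-Nemo base.

```python
# A minimal inference sketch, assuming the merged weights sit in ./merged.
# Requires `pip install transformers torch accelerate`.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./merged"  # hypothetical local path from the merge sketch above

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # matches `dtype: float16` in the config
    device_map="auto",
)

# Assumes the tokenizer carries a Mistral-Nemo-style chat template, so
# apply_chat_template builds the [INST]-formatted prompt for us.
messages = [{"role": "user", "content": "Write a short scene set on a night train."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```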