---
base_model:
- mergekit-community/MN-Sappho-g2-12B
- mergekit-community/MN-Sappho-e-12B
- mergekit-community/MN-Sappho-j-12B
- mergekit-community/MN-Sappho-k-12B
- jtatman/mistral_nemo_12b_reasoning_psychology_lora
- mistralai/Mistral-Nemo-Instruct-2407
- XeroCodes/aurora-12b
- Khetterman/AbominationScience-12B-v4
- LatitudeGames/Wayfarer-12B
- jtatman/mistral_nemo_12b_reasoning_psychology_lora
- mistralai/Mistral-Nemo-Base-2407
library_name: transformers
tags:
- mergekit
- merge
---

# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details

### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [Khetterman/AbominationScience-12B-v4](https://huggingface.co/Khetterman/AbominationScience-12B-v4) as a base.
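For reference, a sketch of the interpolation Model Stock performs, assuming the paper's N-model formulation (here $w_H$ is the average of the fine-tuned weights, $w_0$ the pretrained base, and $\theta$ the average angle between task vectors; the symbols are the paper's, not values measured from this merge):

```latex
% Model Stock merge ratio for N fine-tuned checkpoints (arXiv:2403.19522)
t = \frac{N \cos\theta}{1 + (N - 1)\cos\theta}, \qquad
w_{\text{merged}} = t \, w_H + (1 - t) \, w_0
```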
### Models Merged
The following models were included in the merge (entries joined with `+` pair a checkpoint with a LoRA adapter; see the sketch after this list):
- [mergekit-community/MN-Sappho-g2-12B](https://huggingface.co/mergekit-community/MN-Sappho-g2-12B)
- [mergekit-community/MN-Sappho-e-12B](https://huggingface.co/mergekit-community/MN-Sappho-e-12B)
- [mergekit-community/MN-Sappho-j-12B](https://huggingface.co/mergekit-community/MN-Sappho-j-12B)
- [mergekit-community/MN-Sappho-k-12B](https://huggingface.co/mergekit-community/MN-Sappho-k-12B) + [jtatman/mistral_nemo_12b_reasoning_psychology_lora](https://huggingface.co/jtatman/mistral_nemo_12b_reasoning_psychology_lora)
- [mistralai/Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407) + [XeroCodes/aurora-12b](https://huggingface.co/XeroCodes/aurora-12b)
- [LatitudeGames/Wayfarer-12B](https://huggingface.co/LatitudeGames/Wayfarer-12B) + [jtatman/mistral_nemo_12b_reasoning_psychology_lora](https://huggingface.co/jtatman/mistral_nemo_12b_reasoning_psychology_lora)
- [mistralai/Mistral-Nemo-Base-2407](https://huggingface.co/mistralai/Mistral-Nemo-Base-2407)
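A minimal sketch of that LoRA-baking step using PEFT, purely for illustration (mergekit resolves `model+lora` entries internally; the repo IDs are taken from the list above):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load one listed checkpoint, then fold its LoRA adapter into the weights,
# mirroring what a "model+lora" entry asks mergekit to do before merging.
base = AutoModelForCausalLM.from_pretrained("LatitudeGames/Wayfarer-12B")
with_lora = PeftModel.from_pretrained(
    base, "jtatman/mistral_nemo_12b_reasoning_psychology_lora"
)
merged = with_lora.merge_and_unload()  # plain transformers model, adapter folded in
```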
### Configuration

The following YAML configuration was used to produce this model:
```yaml
dtype: float32
out_dtype: bfloat16
merge_method: model_stock
base_model: Khetterman/AbominationScience-12B-v4
models:
  - model: mergekit-community/MN-Sappho-k-12B+jtatman/mistral_nemo_12b_reasoning_psychology_lora
    parameters:
      weight: 0.5
  - model: mergekit-community/MN-Sappho-j-12B
    parameters:
      weight: 1.3
  - model: mergekit-community/MN-Sappho-g2-12B
    parameters:
      weight: 1.3
  - model: mergekit-community/MN-Sappho-e-12B
  - model: LatitudeGames/Wayfarer-12B+jtatman/mistral_nemo_12b_reasoning_psychology_lora
  - model: mistralai/Mistral-Nemo-Instruct-2407+XeroCodes/aurora-12b
  - model: mistralai/Mistral-Nemo-Base-2407
    parameters:
      weight: 1.1
```
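To reproduce the merge, this config can be passed to the `mergekit-yaml` CLI (`mergekit-yaml config.yaml ./merged-model`) or to mergekit's Python API. A minimal sketch of the latter, with placeholder paths:

```python
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Placeholder paths: point these at the YAML above and an output directory.
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./merged-model",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU when one is available
        copy_tokenizer=True,             # carry the tokenizer into the output
        lazy_unpickle=True,              # lower peak memory while loading shards
    ),
)
```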