# merge

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the Model Stock merge method, with mistralai/Mistral-Nemo-Instruct-2407 as the base model.
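Model Stock (arXiv:2403.19522) builds on the observation that fine-tuned checkpoints cluster on a thin shell around the base model: it measures the angle between the models' task vectors (fine-tuned weights minus base weights) and derives a per-layer interpolation ratio back toward the base. The sketch below illustrates the two-model case for intuition only; it is not mergekit's implementation, and the function names are made up.

```python
import torch

def model_stock_ratio(w1: torch.Tensor, w2: torch.Tensor, w_base: torch.Tensor) -> float:
    """Ratio t = 2*cos(theta) / (1 + cos(theta)) from the Model Stock paper,
    where theta is the angle between the two task vectors
    (fine-tuned weights minus base weights)."""
    d1 = (w1 - w_base).flatten()
    d2 = (w2 - w_base).flatten()
    cos_t = torch.dot(d1, d2) / (d1.norm() * d2.norm())
    cos_t = cos_t.clamp(-1.0, 1.0).item()
    return 2.0 * cos_t / (1.0 + cos_t)

def merge_layer(w1: torch.Tensor, w2: torch.Tensor, w_base: torch.Tensor) -> torch.Tensor:
    # Average the fine-tuned weights, then pull the average back toward
    # the base by the angle-derived ratio, independently for each layer.
    t = model_stock_ratio(w1, w2, w_base)
    return t * (w1 + w2) / 2.0 + (1.0 - t) * w_base
```

In the configuration below, the per-model `weight` values (rescaled to sum to 1 when `normalize: true` is set) additionally bias how strongly each checkpoint contributes.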
### Models Merged

The following models were included in the merge:
- LatitudeGames/Wayfarer-12B
- mergekit-community/MN-Sappho-j-12B
- Khetterman/AbominationScience-12B-v4
- mergekit-community/MN-Sappho-g2-12B
- mistralai/Mistral-Nemo-Base-2407
- nbeerbower/mistral-nemo-wissenschaft-12B
- anthracite-org/magnum-v2.5-12b-kto
- mergekit-community/MN-Sappho-l-12B
- PygmalionAI/Eleusis-12B
- Khetterman/DarkAtom-12B-v3
- DavidAU/MN-Dark-Planet-TITAN-12B
- inflatebot/MN-12B-Mag-Mell-R1
### Configuration

The following YAML configuration was used to produce this model:
```yaml
dtype: float32
out_dtype: bfloat16
merge_method: model_stock
base_model: mistralai/Mistral-Nemo-Instruct-2407
models:
  - model: mistralai/Mistral-Nemo-Base-2407
    parameters:
      weight: 1.2
  - model: mergekit-community/MN-Sappho-j-12B
    parameters:
      weight: 1.2
  - model: mergekit-community/MN-Sappho-g2-12B
    parameters:
      weight: 1.2
  - model: Khetterman/AbominationScience-12B-v4
    parameters:
      weight: 1.2
  - model: DavidAU/MN-Dark-Planet-TITAN-12B
    parameters:
      weight: 1.2
  - model: LatitudeGames/Wayfarer-12B
    parameters:
      weight: 1.2
  - model: mergekit-community/MN-Sappho-l-12B
    parameters:
      weight: 1.1
  - model: Khetterman/DarkAtom-12B-v3
    parameters:
      weight: 1.1
  - model: inflatebot/MN-12B-Mag-Mell-R1
  - model: PygmalionAI/Eleusis-12B
  - model: anthracite-org/magnum-v2.5-12b-kto
    parameters:
      weight: 0.8
  - model: nbeerbower/mistral-nemo-wissenschaft-12B
    parameters:
      weight: 0.8
parameters:
  normalize: true
tokenizer:
  source: Khetterman/AbominationScience-12B-v4
```
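To reproduce the merge, save the YAML above to a file and run it through mergekit, either with the `mergekit-yaml` CLI or via the Python API. A minimal sketch of the latter is below; the file paths are placeholders, and the available `MergeOptions` fields can differ between mergekit versions.

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Placeholder paths: point them at the YAML above and an output directory.
CONFIG_PATH = "merge-config.yaml"
OUTPUT_PATH = "./MN-Sappho-g3-12B"

with open(CONFIG_PATH, "r", encoding="utf-8") as fp:
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    config,
    OUTPUT_PATH,
    options=MergeOptions(
        copy_tokenizer=True,  # also write out the tokenizer named in `tokenizer.source`
        lazy_unpickle=True,   # stream weights instead of loading whole checkpoints
        low_cpu_memory=False,
    ),
)
```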
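Once merged (or when pulling this repository directly from the Hub), the model loads like any other Mistral-Nemo-family checkpoint. A minimal transformers sketch, with the prompt and generation settings chosen purely for illustration:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "mergekit-community/MN-Sappho-g3-12B"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,  # matches the card's out_dtype
    device_map="auto",           # requires the `accelerate` package
)

inputs = tokenizer("Once upon a time,", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```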