# final

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

## Merge Details

### Merge Method

This model was merged using the SLERP (spherical linear interpolation) merge method, with `./multi_model_merged/step3` as the base model.
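SLERP interpolates between two weight tensors along the arc between them rather than along a straight line, which preserves tensor magnitude better than plain linear averaging when the tensors point in different directions. The sketch below is a minimal illustration of the idea, not mergekit's exact implementation; mergekit additionally handles edge cases such as near-parallel tensors by falling back to linear interpolation.

```python
# Minimal sketch of SLERP between two weight tensors (illustrative only).
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation: t=0 returns v0, t=1 returns v1."""
    # Normalized copies are used only to measure the angle between the tensors.
    a = v0.ravel() / (np.linalg.norm(v0) + eps)
    b = v1.ravel() / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(a, b), -1.0, 1.0)
    theta = np.arccos(dot)            # angle between the two tensors
    if theta < eps:                   # nearly parallel: fall back to lerp
        return (1 - t) * v0 + t * v1
    s0 = np.sin((1 - t) * theta) / np.sin(theta)
    s1 = np.sin(t * theta) / np.sin(theta)
    return (s0 * v0.ravel() + s1 * v1.ravel()).reshape(v0.shape)
```

Under the usual mergekit convention that `t=0` returns the first listed model and `t=1` the second, the default `t: 0.75` in the configuration below pulls most tensors toward Nitral-AI/Wayfarer_Eris_Noctis-12B while keeping a share of the base model.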

### Models Merged

The following models were included in the merge:

* ./multi_model_merged/step3
* Nitral-AI/Wayfarer_Eris_Noctis-12B

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: slerp
models:
  - model: ./multi_model_merged/step3
    layer_range: [0, 40]
  - model: Nitral-AI/Wayfarer_Eris_Noctis-12B
    layer_range: [0, 40]
base_model: ./multi_model_merged/step3
parameters:
  t:
    - filter: self_attn
      value: [0.8, 0.8, 0.7, 0.7, 0.6]
    - filter: mlp
      value: [0.7, 0.7, 0.7, 0.7, 0.8]
    - filter: "layer=[0:10]"
      value: 0.7
    - filter: "layer=[30:40]"
      value: 0.8
    - value: 0.75
dtype: bfloat16
output_dir: ./multi_model_merged/final
```
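The `t` parameter sets the interpolation weight per tensor. mergekit resolves it by matching filters against each parameter name: self-attention and MLP tensors get their own layer-wise gradients, and the bare `value: 0.75` acts as the fallback for everything else. A list of values is treated as anchor points spread evenly across the layer range and interpolated to one `t` per layer. The helper below is a hypothetical sketch of that expansion, assuming linear interpolation between anchors; it is not mergekit's internal API.

```python
# Hypothetical sketch: expand a gradient list such as [0.8, 0.8, 0.7, 0.7, 0.6]
# into one interpolation weight per layer. The function name is illustrative.
import numpy as np

def per_layer_t(anchors: list[float], num_layers: int) -> np.ndarray:
    """Linearly interpolate anchor values across `num_layers` layers."""
    xs = np.linspace(0, 1, num=len(anchors))       # evenly spaced anchor positions
    layer_pos = np.linspace(0, 1, num=num_layers)  # one position per layer
    return np.interp(layer_pos, xs, anchors)

attn_t = per_layer_t([0.8, 0.8, 0.7, 0.7, 0.6], num_layers=40)
print(attn_t[0], attn_t[19], attn_t[39])  # 0.8 at layer 0, ~0.7 mid-stack, 0.6 at layer 39
```

Given such a configuration file, the merge can be reproduced with mergekit's CLI, e.g. `mergekit-yaml config.yaml ./multi_model_merged/final`.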