# merge

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the Model Stock merge method, with KidIkaros/Llama-3.2-1B-Instruct-abliterated as the base model.
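
Model Stock works by averaging the fine-tuned checkpoints and then interpolating that average back toward the base model, with the interpolation ratio derived from the geometry of the task vectors (fine-tuned weights minus base weights). The snippet below is a rough per-tensor sketch of that idea, not mergekit's actual implementation; the function name is hypothetical, and the ratio formula is the one given in the Model Stock paper.

```python
import torch


def model_stock_merge(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    """Toy per-tensor Model Stock merge (illustrative only)."""
    k = len(finetuned)
    # Task vectors: each fine-tuned weight relative to the base weight.
    deltas = [(w - base).flatten() for w in finetuned]

    # Average pairwise cosine similarity between the task vectors.
    cos_vals = []
    for i in range(k):
        for j in range(i + 1, k):
            cos_vals.append(
                torch.nn.functional.cosine_similarity(deltas[i], deltas[j], dim=0)
            )
    cos_theta = torch.stack(cos_vals).mean()

    # Interpolation ratio from the Model Stock paper: t = k*cos / ((k-1)*cos + 1).
    t = (k * cos_theta) / ((k - 1) * cos_theta + 1)

    # Pull the simple average of the fine-tuned weights toward the base by (1 - t).
    w_avg = torch.stack(finetuned).mean(dim=0)
    return t * w_avg + (1 - t) * base
```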

### Models Merged

The following models were included in the merge:

* Nexesenex/Llama_3.2_1b_Dolto_0.1
* Nexesenex/Llama_3.2_1b_OpenTree_R1_0.1

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: model_stock
models:
  - model: Nexesenex/Llama_3.2_1b_Dolto_0.1
    parameters:
      weight: 1.0
  - model: Nexesenex/Llama_3.2_1b_OpenTree_R1_0.1
    parameters:
      weight: 1.0
base_model: KidIkaros/Llama-3.2-1B-Instruct-abliterated
dtype: bfloat16
normalize: true
```
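
A merge like this can also be run programmatically. The snippet below is a minimal sketch assuming mergekit's documented Python entry points (`MergeConfiguration`, `MergeOptions`, `run_merge`); it expects the YAML above to be saved locally as `config.yaml`, and the exact API may differ between mergekit versions.

```python
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the merge configuration shown above (saved locally as config.yaml).
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the Model Stock merge and write the merged model to ./merged.
run_merge(
    merge_config,
    out_path="./merged",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,
    ),
)
```

The resulting folder can then be loaded as usual with `AutoModelForCausalLM.from_pretrained("./merged")` from transformers.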

## Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 12.81 |
| IFEval (0-Shot)     | 55.43 |
| BBH (3-Shot)        |  7.94 |
| MATH Lvl 5 (4-Shot) |  5.66 |
| GPQA (0-shot)       |  0.00 |
| MuSR (0-shot)       |  1.58 |
| MMLU-PRO (5-shot)   |  6.26 |