
Saul-Instruct-Clown-7b

Saul-Instruct-Clown-7b is a merge of the following models using mergekit:

- CorticalStack/pastiche-crown-clown-7b-dare-dpo
- Equall/Saul-Instruct-v1
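
Loading the merged checkpoint works like any other 7B causal LM; below is a minimal sketch using Hugging Face Transformers. The prompt and generation settings are illustrative only, since this card does not document a chat template.

  # Minimal inference sketch; generation settings are illustrative, not tuned.
  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

  model_id = "arcee-ai/Saul-Instruct-Clown-7b"
  tokenizer = AutoTokenizer.from_pretrained(model_id)
  model = AutoModelForCausalLM.from_pretrained(
      model_id,
      torch_dtype=torch.bfloat16,  # the merge was produced in bfloat16
      device_map="auto",
  )

  prompt = "What are the key elements of a valid contract?"
  inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
  outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
  print(tokenizer.decode(outputs[0], skip_special_tokens=True))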

🏆 Evaluation

OpenLLM

Results for Saul-Instruct-Clown-7b on the Open LLM Leaderboard benchmark suite:

Model                              Average   ARC     HellaSwag   MMLU   TruthfulQA   GSM8K
arcee-ai/Saul-Instruct-Clown-7b    72.79     68.26   86.28       63.12  64.68        83.43
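
These scores correspond to the Open LLM Leaderboard task suite (ARC, HellaSwag, MMLU, TruthfulQA, GSM8K). A rough local reproduction sketch with lm-evaluation-harness is shown below; the leaderboard applies specific few-shot settings per task (e.g. 25-shot ARC, 10-shot HellaSwag), which this sketch does not, so numbers will differ slightly.

  # Sketch of local evaluation with lm-evaluation-harness (v0.4+ Python API);
  # the leaderboard's per-task few-shot settings are not reproduced here.
  import lm_eval

  results = lm_eval.simple_evaluate(
      model="hf",
      model_args="pretrained=arcee-ai/Saul-Instruct-Clown-7b,dtype=bfloat16",
      tasks=["arc_challenge", "hellaswag", "gsm8k"],
      batch_size=8,
  )
  print(results["results"])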

🧩 Configuration

  slices:
    - sources:
        - model: CorticalStack/pastiche-crown-clown-7b-dare-dpo
          layer_range: [0, 32]
        - model: Equall/Saul-Instruct-v1
          layer_range: [0, 32]
  merge_method: slerp
  base_model: CorticalStack/pastiche-crown-clown-7b-dare-dpo
  parameters:
    t:
      # slerp interpolation factor between the two source models, scheduled
      # across the 32 layers separately for attention and MLP weights
      - filter: self_attn
        value: [0, 0.5, 0.3, 0.7, 1]
      - filter: mlp
        value: [1, 0.5, 0.7, 0.3, 0]
      # default interpolation for all tensors not matched by a filter
      - value: 0.5
  dtype: bfloat16
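
Assuming the YAML above is saved locally as config.yml, the merge can be re-run with mergekit, either via the mergekit-yaml CLI or its Python entry points. The sketch below takes the Python route; exact module paths and option names may vary between mergekit versions.

  # Sketch of reproducing the merge with mergekit's Python API.
  import yaml
  from mergekit.config import MergeConfiguration
  from mergekit.merge import MergeOptions, run_merge

  with open("config.yml", encoding="utf-8") as fp:
      merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

  run_merge(
      merge_config,
      out_path="./Saul-Instruct-Clown-7b",  # any local output directory
      options=MergeOptions(cuda=True),      # set cuda=False to merge on CPU
  )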