Llama 3.1 Daredevilish

  • This model is an experimental Llama 3.1-based merge, inspired by mlabonne/Daredevil-8B.
  • It combines the top-performing Llama 3.1 8B models on the MMLU-Pro task as of January 21, 2025.

Model Details

  • Architecture: Llama 3.1 (8.03B parameters)
  • Training: Merged from top MMLU-Pro models, with additional supervised fine-tuning (SFT)
  • Release Date: January 21, 2025

The model fails to end replies properly when used with some system prompts. If this is a problem, consider using agentlans/Llama3.1-Daredevilish-Instruct in instruct mode.
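
For quick local testing, a minimal text-completion sketch with Hugging Face Transformers is shown below. It uses plain completion rather than a chat template, since the termination issue above is tied to system prompts; the dtype, device placement, and generation settings are illustrative, not the configuration used for the evaluations reported later.

```python
# Minimal text-completion sketch; dtype, device_map, and generation settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "agentlans/Llama3.1-Daredevilish"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires the accelerate package
)

prompt = "Model merging combines several fine-tuned checkpoints by"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```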

Key Features

  1. Merged Architecture: Combines high-performing MMLU-Pro models to enhance overall capabilities.
  2. Llama 3 Compatibility: Additional supervised fine-tuning (SFT) ensures adherence to the Llama 3 prompt format.
  3. SFT Dataset: agentlans/crash-course dataset (1,200-row configuration) for supervised fine-tuning in LLaMA-Factory.
  4. Fine-Tuning Approach (a rough peft equivalent is sketched after this list):
    • 1 epoch of training
    • LoRA rank 4
    • LoRA alpha 4
    • Rank-stabilized LoRA (rsLoRA)
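
The card lists the adapter hyperparameters but not a full training recipe (the run used LLaMA-Factory with the agentlans/crash-course dataset). A rough peft equivalent of those settings is sketched below; the target modules are an assumption, since the card does not list them.

```python
# Rough peft equivalent of the adapter settings above (the actual run used LLaMA-Factory).
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B")  # stand-in for the merged checkpoint

lora_config = LoraConfig(
    r=4,                 # "LoRA rank 4"
    lora_alpha=4,        # "LoRA alpha 4"
    use_rslora=True,     # rank-stabilized LoRA scaling
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption: not specified in the card
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```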

Merge Configuration

The model was created using mergekit with the following merge configuration:

models:
  - model: DreadPoor/LemonP-8B-Model_Stock
    parameters:
      density: 0.6
      weight: 0.16
  - model: Youlln/1PARAMMYL-8B-ModelStock
    parameters:
      density: 0.6
      weight: 0.13
  - model: jaspionjader/f-2-8b
    parameters:
      density: 0.6
      weight: 0.10
  - model: Etherll/SuperHermes
    parameters:
      density: 0.6
      weight: 0.08
merge_method: dare_ties
base_model: meta-llama/Llama-3.1-8B
dtype: bfloat16
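
The configuration above can be reproduced by saving it to a YAML file and running it through mergekit. The sketch below assumes mergekit is installed and uses its `mergekit-yaml` command-line entry point; the file name and output directory are illustrative.

```python
# Sketch: run the merge configuration above with mergekit's CLI.
# Assumes `pip install mergekit`; the config file contains the YAML shown above verbatim.
import subprocess

config_path = "daredevilish.yaml"     # merge configuration saved from this card
output_dir = "Llama3.1-Daredevilish"  # where the merged weights will be written

# Add "--cuda" to run the merge on GPU if one is available.
subprocess.run(["mergekit-yaml", config_path, output_dir], check=True)
```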

Usage and Limitations

This experimental model is designed for research and development purposes. Users should be aware of potential biases and limitations inherent in language models. Always validate outputs and use the model responsibly.

Future Work

Further evaluation and fine-tuning may be necessary to optimize performance across various tasks. Researchers are encouraged to build upon this experimental merge to advance the capabilities of Llama-based models.

Open LLM Leaderboard Evaluation Results

Detailed and summarized results for this model are available on the Open LLM Leaderboard.

| Metric              | Value (%) |
|---------------------|-----------|
| Average             | 25.54     |
| IFEval (0-shot)     | 62.92     |
| BBH (3-shot)        | 29.20     |
| MATH Lvl 5 (4-shot) | 12.76     |
| GPQA (0-shot)       | 6.82      |
| MuSR (0-shot)       | 11.60     |
| MMLU-PRO (5-shot)   | 29.96     |
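
These scores come from the Open LLM Leaderboard, which runs EleutherAI's lm-evaluation-harness with pinned settings. A rough local-reproduction sketch is below; the `leaderboard` task group name is an assumption about the installed harness version, and a local run will not necessarily match the leaderboard's exact configuration.

```python
# Rough sketch of re-running the leaderboard task group locally with lm-evaluation-harness.
# Assumption: the "leaderboard" task group is available in the installed lm_eval version;
# exact scores depend on harness version, batch size, and hardware.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=agentlans/Llama3.1-Daredevilish,dtype=bfloat16",
    tasks=["leaderboard"],
    batch_size=4,
)
print(results["results"])
```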