
Qwen2.5-Gutenberg-Doppel-14B

Qwen/Qwen2.5-14B-Instruct finetuned on jondurbin/gutenberg-dpo-v0.1 and nbeerbower/gutenberg2-dpo.
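This repository hosts an EXL2 4.0 bpw quantization of that fine-tune (see the model tree below). For the full-precision weights, a minimal inference sketch with Hugging Face transformers looks like the following; the repo id nbeerbower/Qwen2.5-Gutenberg-Doppel-14B and the sampling settings are assumptions, not values taken from this card.

```python
# Minimal sketch: running the unquantized fine-tune with transformers.
# The repo id and sampling settings are assumptions; adjust to your setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nbeerbower/Qwen2.5-Gutenberg-Doppel-14B"  # assumed full-precision source repo
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful writing assistant."},
    {"role": "user", "content": "Write the opening paragraph of a gothic short story."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids, max_new_tokens=256, do_sample=True, temperature=0.8, top_p=0.95
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```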

Method

ORPO-tuned for 3 epochs on 4x NVIDIA A40 GPUs.

Thank you @ParasiticRogue for sponsoring.
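
For reference, a minimal sketch of this kind of ORPO run using TRL's ORPOTrainer is below. Only the 3 epochs and the two datasets come from this card; the batch size, learning rate, beta, and sequence lengths are assumptions, not the exact recipe used here.

```python
# Illustrative ORPO sketch with TRL (not the exact training script for this model).
# Hyperparameters marked "assumption" are placeholders; the card only states 3 epochs on 4x A40.
import torch
from datasets import concatenate_datasets, load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_id = "Qwen/Qwen2.5-14B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Both preference datasets expose prompt/chosen/rejected columns, which ORPOTrainer expects.
columns = ["prompt", "chosen", "rejected"]
train_ds = concatenate_datasets([
    load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train").select_columns(columns),
    load_dataset("nbeerbower/gutenberg2-dpo", split="train").select_columns(columns),
])

args = ORPOConfig(
    output_dir="qwen2.5-gutenberg-doppel-14b",
    num_train_epochs=3,               # stated in the card
    per_device_train_batch_size=1,    # assumption
    gradient_accumulation_steps=8,    # assumption
    learning_rate=5e-6,               # assumption
    beta=0.1,                         # ORPO lambda; assumption
    max_length=2048,                  # assumption
    max_prompt_length=1024,           # assumption
    bf16=True,
)

# Older TRL versions take tokenizer= instead of processing_class=.
trainer = ORPOTrainer(model=model, args=args, train_dataset=train_ds, processing_class=tokenizer)
trainer.train()
```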

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|---|---|
| Avg. | 32.30 |
| IFEval (0-shot) | 80.91 |
| BBH (3-shot) | 48.24 |
| MATH Lvl 5 (4-shot) | 0.00 |
| GPQA (0-shot) | 11.07 |
| MuSR (0-shot) | 10.02 |
| MMLU-PRO (5-shot) | 43.57 |
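
The Avg. row is the unweighted mean of the six benchmark scores above; a one-line check:

```python
# Consistency check: the reported Avg. equals the mean of the six benchmark scores.
scores = [80.91, 48.24, 0.00, 11.07, 10.02, 43.57]  # IFEval, BBH, MATH Lvl 5, GPQA, MuSR, MMLU-PRO
print(round(sum(scores) / len(scores), 2))  # 32.3, reported as 32.30
```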

Model tree for async0x42/Qwen2.5-Gutenberg-Doppel-14B-exl2_4.0bpw

Base model: Qwen/Qwen2.5-14B
This model: EXL2 quantization at 4.0 bits per weight.
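
A minimal sketch for loading this EXL2 quant with the exllamav2 Python API follows the library's example scripts; the local path and sampling settings are placeholders, and for best results the prompt should be formatted with Qwen's chat template.

```python
# Minimal sketch: loading the EXL2 4.0 bpw quant with exllamav2.
# Model path and sampling settings are placeholders; API usage follows exllamav2's examples.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/models/Qwen2.5-Gutenberg-Doppel-14B-exl2_4.0bpw"  # local download path
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.95

prompt = "Write the opening paragraph of a gothic short story."
print(generator.generate_simple(prompt, settings, 256))
```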
