DPO finetune of CultriX/NeuralTrix-7B-dpo, trained on argilla/OpenHermes2.5-dpo-binarized-alpha.

The argilla DPO binarized pairs dataset is built on top of https://huggingface.co/datasets/teknium/OpenHermes-2.5 using https://github.com/argilla-io/distilabel, if you want to look into the details.

Thanks for the great data sources.
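For reference, below is a minimal DPO finetuning sketch with trl's DPOTrainer on the same base model and dataset. This is an illustrative setup under assumptions, not the exact recipe used for this model: it assumes the trl 0.7-style DPOTrainer API, the hyperparameters (beta, learning rate, sequence lengths, batch size) are placeholders, and the dataset may need remapping into the prompt/chosen/rejected string columns DPOTrainer expects.

```python
# Minimal DPO finetuning sketch (illustrative only, not the exact recipe used here).
# Assumes the trl 0.7-style DPOTrainer API; hyperparameters below are placeholders.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "CultriX/NeuralTrix-7B-dpo"
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # Mistral-style tokenizers often lack a pad token

# Preference pairs distilled from OpenHermes-2.5 with distilabel.
# Depending on its schema, this may need preprocessing into the
# "prompt"/"chosen"/"rejected" string columns DPOTrainer expects.
dataset = load_dataset("argilla/OpenHermes2.5-dpo-binarized-alpha", split="train")

trainer = DPOTrainer(
    model=model,
    ref_model=None,          # with None, trl builds a frozen reference copy of the policy
    beta=0.1,                # assumed DPO temperature; the real value is not documented here
    train_dataset=dataset,
    tokenizer=tokenizer,
    max_length=1024,
    max_prompt_length=512,
    args=TrainingArguments(
        output_dir="dpo-binarized-NeuralTrix-7B",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        learning_rate=5e-6,
        num_train_epochs=1,
        bf16=True,
    ),
)
trainer.train()
```

Passing ref_model=None lets trl create a frozen copy of the policy model to serve as the DPO reference, which keeps the script short at the cost of extra memory.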

Open LLM Leaderboard Evaluation Results

Detailed results are available on the Open LLM Leaderboard.

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 76.17 |
| AI2 Reasoning Challenge (25-shot) | 72.35 |
| HellaSwag (10-shot)               | 88.89 |
| MMLU (5-shot)                     | 64.09 |
| TruthfulQA (0-shot)               | 79.07 |
| Winogrande (5-shot)               | 84.61 |
| GSM8k (5-shot)                    | 68.01 |
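
A rough local reproduction of these numbers with EleutherAI's lm-evaluation-harness might look like the sketch below. The task names, few-shot counts, and harness version are assumptions on my part; the official leaderboard pins a specific harness revision and configuration, so locally obtained scores can differ slightly.

```python
# Sketch: re-run the leaderboard-style benchmarks with lm-evaluation-harness
# (pip install lm-eval). Task names and few-shot counts are assumptions.
import lm_eval

MODEL_ARGS = "pretrained=eren23/dpo-binarized-NeuralTrix-7B,dtype=float16"

# (harness task name, few-shot count used above)
TASKS = [
    ("arc_challenge", 25),
    ("hellaswag", 10),
    ("mmlu", 5),
    ("truthfulqa_mc2", 0),
    ("winogrande", 5),
    ("gsm8k", 5),
]

for task, shots in TASKS:
    results = lm_eval.simple_evaluate(
        model="hf",
        model_args=MODEL_ARGS,
        tasks=[task],
        num_fewshot=shots,
        batch_size="auto",
    )
    print(task, results["results"][task])
```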
Model size: 7.24B params (Safetensors, FP16)
Inference Examples
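A minimal inference sketch with transformers is shown below. It assumes the tokenizer ships a chat template (OpenHermes-derived models typically use ChatML); check tokenizer.chat_template and format prompts manually if it does not. The sampling settings are arbitrary examples.

```python
# Minimal inference sketch, assuming the tokenizer provides a chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "eren23/dpo-binarized-NeuralTrix-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain DPO in one paragraph."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```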

Model tree for eren23/dpo-binarized-NeuralTrix-7B: 1 finetune, 4 merges, 3 quantizations.

Dataset used to train eren23/dpo-binarized-NeuralTrix-7B: argilla/OpenHermes2.5-dpo-binarized-alpha

