abacusai/MetaMath-bagel-34b-v0.2-c1500 is a finetune of the pre-DPO Bagel model (https://huggingface.co/jondurbin/bagel-34b-v0.2) on the MetaMathFewshot dataset (https://huggingface.co/datasets/abacusai/MetaMathFewshot).
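The card does not include a usage snippet; the following is a minimal sketch of loading the model with Hugging Face transformers and prompting it on a grade-school math question. The prompt format and generation settings are assumptions for illustration only, as the card does not specify a chat template.

```python
# Minimal usage sketch (assumption: standard transformers loading; the card does
# not specify a prompt format, so the plain question below is illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abacusai/MetaMath-bagel-34b-v0.2-c1500"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # weights are stored in FP16
    device_map="auto",          # requires `accelerate`; shards the 34B model across available GPUs
)

prompt = "Natalia sold clips to 48 friends in April and half as many in May. How many clips did she sell in total?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```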

Evaluation Results

| Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| ------- | --- | --------- | ---- | ---------- | ---------- | ----- |

For comparison, the GSM8K score for the original metamath/MetaMath-Mistral-7B was 46.17, and its average score was 69.7.
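The card does not state how the scores above were produced. These six benchmarks make up the Open LLM Leaderboard suite, which is commonly run with EleutherAI's lm-evaluation-harness; the sketch below assumes lm-eval 0.4's Python API and the standard 5-shot GSM8K setting, neither of which is confirmed by the card.

```python
# Sketch of reproducing the GSM8K score with EleutherAI's lm-evaluation-harness.
# Assumptions: lm-eval >= 0.4 installed (`pip install lm-eval`) and leaderboard-style
# 5-shot evaluation; the card does not confirm these settings.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=abacusai/MetaMath-bagel-34b-v0.2-c1500,dtype=float16",
    tasks=["gsm8k"],
    num_fewshot=5,
    batch_size=4,
)
print(results["results"]["gsm8k"])
```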

