Model Card for LEMMA
The LEMMA series models are trained on the LEMMA Dataset, which is built from the training sets of MATH and GSM8K by generating error-corrective reasoning trajectories. For each question, the student model (LLaMA3-8B) produces its own erroneous solutions, and the teacher model (GPT-4o) deliberately introduces additional errors following the student's error-type distribution. Two correction strategies, "Fix & Continue" and "Fresh & Restart", are then applied to these errors to create error-corrective revision trajectories, and trajectories whose final answers are incorrect are filtered out. Fine-tuning LLaMA3-8B on the resulting dataset (fewer than 90k synthesized examples) yields up to a 13.3% average accuracy improvement. For more details, please refer to our paper LEMMA: Learning from Errors for MatheMatical Advancement in LLMs.
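The construction recipe above can be sketched as a small Python loop. This is an illustrative stand-in, not the actual LEMMA code: all function names, stub solutions, and the single error type below are hypothetical, and the real pipeline samples trajectories from LLaMA3-8B and GPT-4o rather than returning canned strings.

```python
# Hedged sketch of a LEMMA-style data pipeline. Every function here is a
# hypothetical stub standing in for an LLM call in the real system.

def student_solve(question):
    # Stand-in for LLaMA3-8B sampling a (possibly erroneous) solution.
    return {"steps": ["6 * 7 = 41  (erroneous step)"], "answer": "41"}

def teacher_inject_error(question, error_type):
    # Stand-in for GPT-4o deliberately introducing an error of a type drawn
    # from the student's error-type distribution.
    return {"steps": [f"6 * 7 = 40  ({error_type})"], "answer": "40"}

def fix_and_continue(trajectory):
    # "Fix & Continue": correct the erroneous step and continue the solution.
    steps = trajectory["steps"] + [
        "Wait, that step is wrong. Fixing it: 6 * 7 = 42.",
    ]
    return steps, "42"

def fresh_and_restart(trajectory):
    # "Fresh & Restart": discard the flawed attempt and solve from scratch.
    steps = trajectory["steps"] + [
        "This approach went wrong. Restarting: 6 * 7 = 42.",
    ]
    return steps, "42"

def build_trajectories(question, gold_answer, error_types):
    # Collect error-corrective revision trajectories for one question,
    # keeping only those whose final answer matches the gold answer.
    data = []
    erroneous = [student_solve(question)] + [
        teacher_inject_error(question, t) for t in error_types
    ]
    for traj in erroneous:
        for correct in (fix_and_continue, fresh_and_restart):
            steps, answer = correct(traj)
            if answer == gold_answer:  # filter out incorrect final answers
                data.append(
                    {"question": question, "steps": steps, "answer": answer}
                )
    return data
```

With one student error, one injected error type, and both correction strategies, this sketch yields four revision trajectories per question; the real pipeline's yield depends on sampling and on how many corrected trajectories survive the answer filter.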
Model Details
Model Description
- Finetuned from model: Llama-3-8B
Model Sources
- Repository: https://github.com/pzs19/LEMMA/
- Paper: https://arxiv.org/abs/2503.17439
Direct Use
Usage is the same as for Llama-3-8B.
Recommendations
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model.
Training Details
The LEMMA series models are trained on the LEMMA Dataset using LLaMA-Factory. For more details, please refer to our paper.
Results
| Model | Checkpoint | Paper | GSM8K | MATH | License |
| --- | --- | --- | --- | --- | --- |
| LEMMA-LLAMA-3-8B | 🤗 HF Link | 📃 [LEMMA] | 79.2 | 38.3 | Llama 3 |
| LEMMA-LLAMA-3-70B | 🤗 HF Link | 📃 [LEMMA] | 91.5 | 51.8 | Llama 3 |
Citation
Please cite the paper if you refer to our model, code, data, or paper.
@article{LEMMA,
  title={LEMMA: Learning from Errors for MatheMatical Advancement in LLMs},
  author={Zhuoshi Pan and Yu Li and Honglin Lin and Qizhi Pei and Zinan Tang and Wei Wu and Chenlin Ming and H. Vicky Zhao and Conghui He and Lijun Wu},
  journal={arXiv preprint arXiv:2503.17439},
  year={2025}
}