Model Card for Qwen2.5-Math-1.5B-Instruct-PRM-0.2
This model is a process reward model (PRM): a fine-tuned version of Qwen/Qwen2.5-Math-1.5B-Instruct trained on the HuggingFaceH4/prm800k-trl-dedup dataset with TRL. Given a math problem and a partial solution, it labels each solution step as correct or incorrect.
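The training data follows the stepwise-supervision format used by TRL's PRM support: each example contains a prompt, a list of solution steps, and one boolean label per step. The sketch below is only for inspecting the dataset; the "train" split name and the prompt/completions/labels column names are assumptions based on the quick-start example further down.

```python
from datasets import load_dataset

# Assumption: the dataset exposes a "train" split with "prompt",
# "completions", and "labels" columns, as in the quick-start example.
dataset = load_dataset("HuggingFaceH4/prm800k-trl-dedup", split="train")

sample = dataset[0]
print(sample["prompt"])       # the math problem
print(sample["completions"])  # list of solution steps
print(sample["labels"])       # one boolean per step: is the step correct?
```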
Quick start
How to use the model:
from transformers import pipeline

pipe = pipeline("token-classification", model="HuggingFaceH4/Qwen2.5-Math-1.5B-Instruct-PRM-0.2", device="cuda")

example = {
    "prompt": "Let $a,$ $b,$ and $c$ be positive real numbers. Find the set of all possible values of\n\\[\\frac{c}{a} + \\frac{a}{b + c} + \\frac{b}{c}.\\]",
    "completions": [
        "This problem involves finding the range of an expression involving three variables.",
        "One possible strategy is to try to eliminate some variables and write the expression in terms of one variable only.",
        "To do this, I might look for some common factors or symmetries in the expression.",
        "I notice that the first and last terms have $c$ in the denominator, so I can factor out $c$ from the whole expression and get\n\\[\\frac{1}{c}\\left(c + \\frac{a^2}{b + c} + b\\right).\\]",
    ],
    "labels": [True, True, True, False],
}
separator = "\n\n"  # It is important to use the same separator as the one used during training

for idx in range(1, len(example["completions"]) + 1):
    steps = example["completions"][0:idx]
    text = separator.join((example["prompt"], *steps)) + separator  # Add a separator after the prompt and after each step
    pred_entity = pipe(text)[-1]["entity"]
    pred = {"LABEL_0": False, "LABEL_1": True}[pred_entity]
    label = example["labels"][idx - 1]
    print(f"Step {idx}\tPredicted: {pred} \tLabel: {label}")
# Step 1 Predicted: True Label: True
# Step 2 Predicted: True Label: True
# Step 3 Predicted: True Label: True
# Step 4 Predicted: False Label: False
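If you want a continuous correctness score for a step (for example, to rerank candidate solutions) rather than a hard label, you can run the model directly with AutoModelForTokenClassification and read the probability of LABEL_1 at the final token. This is a minimal sketch under the assumption that, as in the pipeline example above, the relevant prediction sits at the trailing separator token; it reuses example and separator from the snippet above.

```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

model_id = "HuggingFaceH4/Qwen2.5-Math-1.5B-Instruct-PRM-0.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id).to("cuda")
model.eval()

# Reuse `example` and `separator` from the quick-start snippet above;
# here we score only the first step.
text = separator.join((example["prompt"], *example["completions"][:1])) + separator

inputs = tokenizer(text, return_tensors="pt").to(model.device)
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, num_labels)

# Probability that the latest step is correct (LABEL_1), read at the final token.
step_score = logits[0, -1].softmax(dim=-1)[1].item()
print(f"P(step correct) = {step_score:.3f}")
```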
Training procedure
This model was trained with PRM, the process reward modeling approach described in Solving Math Word Problems With Process- and Outcome-Based Feedback (Uesato et al., 2022); see the citation below.
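TRL implements this as PRMTrainer, which frames process reward modeling as token classification over the step-separator tokens. The exact hyperparameters of this run are not documented here, so the following is only an illustrative sketch; the output directory, separator, and other settings are assumptions.

```python
from datasets import load_dataset
from transformers import AutoModelForTokenClassification, AutoTokenizer
from trl import PRMConfig, PRMTrainer

model_id = "Qwen/Qwen2.5-Math-1.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id, num_labels=2)

dataset = load_dataset("HuggingFaceH4/prm800k-trl-dedup", split="train")

# Assumed configuration: the separator is chosen to match the quick-start
# example; the actual run's hyperparameters are not listed on this card.
training_args = PRMConfig(output_dir="Qwen2.5-Math-1.5B-Instruct-PRM", step_separator="\n\n")

trainer = PRMTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```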
Framework versions
- TRL: 0.13.0.dev0
- Transformers: 4.47.0
- PyTorch: 2.4.1
- Datasets: 3.0.1
- Tokenizers: 0.21.0
Citations
Cite PRM as:
@article{uesato2022solving,
title = {Solving Math Word Problems With Process- and Outcome-Based Feedback},
author = {Uesato, Jonathan and Kushman, Nate and Kumar, Ramana and Song, Francis and Siegel, Noah and Wang, Lisa and Creswell, Antonia and Irving, Geoffrey and Higgins, Irina},
year = 2022,
journal = {arXiv preprint arXiv:2211.14275}
}
Cite TRL as:
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
Model tree for HuggingFaceH4/Qwen2.5-Math-1.5B-Instruct-PRM-0.2
- Base model: Qwen/Qwen2.5-1.5B
- Fine-tuned lineage: Qwen/Qwen2.5-Math-1.5B → Qwen/Qwen2.5-Math-1.5B-Instruct → this model