Update README.md
README.md

@@ -15,25 +15,25 @@ pipeline_tag: text-generation
 
 
 model-index:
-- name:
+- name: NuminaMath-7B-TIR
   results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-<img src="https://huggingface.co/AI-MO/
+<img src="https://huggingface.co/AI-MO/NuminaMath-7B-TIR/resolve/main/thumbnail.png" alt="Numina Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
 
 
-# Model Card for NuminaMath 7B
+# Model Card for NuminaMath 7B TIR
 
-NuminaMath is a series of language models that are trained to solve math problems using tool-integrated reasoning. NuminaMath 7B won the first progress prize of the [AI Math Olympiad (AIMO)](https://aimoprize.com), with a score of 29/50 on the public and private tests sets.
+NuminaMath is a series of language models that are trained to solve math problems using tool-integrated reasoning (TIR). NuminaMath 7B TIR won the first progress prize of the [AI Math Olympiad (AIMO)](https://aimoprize.com), with a score of 29/50 on the public and private test sets.
 
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6200d0a443eb0913fa2df7cc/NyhBs_gzg40iwL995DO9L.png)
 
 This model is a fine-tuned version of [deepseek-ai/deepseek-math-7b-base](https://huggingface.co/deepseek-ai/deepseek-math-7b-base) with two stages of supervised fine-tuning:
 
-* **Stage 1:** fine-tune the base model on a large, diverse dataset of natural language math problems and solutions, where each solution is templated with Chain of Thought (CoT) to facilitate
+* **Stage 1:** fine-tune the base model on a large, diverse dataset of natural language math problems and solutions, where each solution is templated with Chain of Thought (CoT) to facilitate reasoning.
 * **Stage 2:** fine-tune the model from Stage 1 on a synthetic dataset of tool-integrated reasoning, where each math problem is decomposed into a sequence of rationales, Python programs, and their outputs. Here we followed [Microsoft’s ToRA paper](https://arxiv.org/abs/2309.17452) and prompted GPT-4 to produce solutions in the ToRA format with code execution feedback. Fine-tuning on this data produces a reasoning agent that can solve mathematical problems via a mix of natural language reasoning and use of the Python REPL to compute intermediate results.
 
 
@@ -49,7 +49,8 @@ This model is a fine-tuned version of [deepseek-ai/deepseek-math-7b-base](https:
 
 <!-- Provide the basic links for the model. -->
 
-- **Repository:**
+- **Repository:** https://github.com/project-numina/aimo-progress-prize/tree/main
+- **Demo:** https://huggingface.co/spaces/AI-MO/math-olympiad-solver
 
 ## Intended uses & limitations
 
@@ -60,7 +61,7 @@ import re
 import torch
 from transformers import pipeline
 
-pipe = pipeline("text-generation", model="AI-MO/
+pipe = pipeline("text-generation", model="AI-MO/NuminaMath-7B-TIR", torch_dtype=torch.bfloat16, device_map="auto")
 
 messages = [
     {"role": "user", "content": "For how many values of the constant $k$ will the polynomial $x^{2}+kx+36$ have two distinct integer roots?"},
@@ -90,7 +91,7 @@ The above executes a single step of Python code - for more complex problems, you
 
 <!-- This section is meant to convey both technical and sociotechnical limitations. -->
 
-NuminaMath 7B was created to solve problems in the narrow domain of competition-level mathematics. As a result, the model should not be used for general chat applications. With greedy decoding, we find the model is capable of solving problems at the level of [AMC 12](https://artofproblemsolving.com/wiki/index.php/2023_AMC_12A_Problems), but often struggles generate a valid solution on harder problems at the AIME and Math Olympiad level.
+NuminaMath 7B TIR was created to solve problems in the narrow domain of competition-level mathematics. As a result, the model should not be used for general chat applications. With greedy decoding, we find the model is capable of solving problems at the level of [AMC 12](https://artofproblemsolving.com/wiki/index.php/2023_AMC_12A_Problems), but often struggles to generate a valid solution on harder problems at the AIME and Math Olympiad level. The model also struggles to solve geometry problems, likely due to its limited capacity and lack of other modalities like vision.
 
 
 ## Training procedure
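The card describes a tool-integrated reasoning loop: the model emits a rationale plus a Python program, the program is executed, and its output is fed back as context for the next generation step. A minimal sketch of the execute-and-feed-back step, using illustrative helper names (`extract_program`, `run_program`) that are our assumptions, not the harness in the linked competition repository:

```python
import contextlib
import io
import re

FENCE = "`" * 3  # a markdown code fence, built up to keep this snippet self-contained

def extract_program(generation: str):
    """Return the last fenced python block in a model generation, or None."""
    blocks = re.findall(FENCE + r"python\n(.*?)" + FENCE, generation, re.DOTALL)
    return blocks[-1] if blocks else None

def run_program(program: str) -> str:
    """Execute the program and capture its stdout as code-execution feedback."""
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        exec(program, {})  # toy example only; use a real sandbox in practice
    return buffer.getvalue()

# A toy generation standing in for one model step:
generation = (
    "First we count the positive divisors of 36.\n"
    + FENCE + "python\nprint(len([d for d in range(1, 37) if 36 % d == 0]))\n" + FENCE
)
program = extract_program(generation)
feedback = run_program(program)
print(feedback.strip())  # 9 -- 36 has nine positive divisors
```

In the full loop, this captured output would be appended to the conversation and generation resumed, repeating until the model produces a final answer.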
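The prompt in the usage snippet (counting the constants $k$ for which $x^{2}+kx+36$ has two distinct integer roots) is exactly the kind of problem the Stage 2 data targets. A hand-written program in that spirit (our sketch, not a captured model output):

```python
# If x^2 + kx + 36 has integer roots r and s, then r * s = 36 and r + s = -k,
# so each ordered divisor pair (r, s) of 36 with r != s yields a value of k.
ks = set()
for r in range(-36, 37):
    if r != 0 and 36 % r == 0:
        s = 36 // r
        if r != s:  # the roots must be distinct
            ks.add(-(r + s))
print(len(ks))  # prints 8
```

The admissible values are $k = \pm 13, \pm 15, \pm 20, \pm 37$, so the answer is 8.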