---
license: odc-by
language:
- en
tags:
- math
- education
---

# Dataset Card for MathFish Tasks

<!-- Provide a quick summary of the dataset. -->

This dataset is a derivative of [MathFish](https://huggingface.co/datasets/allenai/mathfish), where dev set examples are inserted into prompts to assess models' abilities to verify and tag standards in math problems.

See [MathFish](https://huggingface.co/datasets/allenai/mathfish) for more details on the sources, creation, and uses of this data.

This data can be used in conjunction with our model API wrapper, included in this [GitHub repository](https://github.com/allenai/mathfish/tree/main).

## Dataset Details

### Dataset Description

- **Curated by:** Lucy Li, Tal August, Rose E Wang, Luca Soldaini, Courtney Allison, Kyle Lo
- **Funded by:** The Gates Foundation
- **Language(s) (NLP):** English
- **License:** ODC-By 1.0

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

Files are named in the following manner:

```
data_{task format}-{mathfish data split}_{other parameters}_{prompt number}_{table format}.jsonl
```
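A filename following this convention can be split back into its fields with a small regex. This is a minimal sketch, not part of the repo's tooling; the example filename and the assumption that each field is free of the delimiter characters are ours:

```python
import re

# Hypothetical parser for the naming convention above. The example
# filename passed to it below is illustrative, not an actual file.
PATTERN = re.compile(
    r"data_(?P<task_format>[^-]+)-(?P<split>[^_]+)"
    r"_(?P<params>.+)_(?P<prompt_number>\d+)_(?P<table_format>[^_.]+)\.jsonl"
)

def parse_name(filename: str) -> dict:
    """Return the naming-convention fields of a data file, or raise."""
    m = PATTERN.match(filename)
    if m is None:
        raise ValueError(f"unrecognized filename: {filename}")
    return m.groupdict()

# Illustrative usage with a made-up filename:
fields = parse_name("data_tagging-dev_allstandards_3_markdown.jsonl")
```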

Each line in a tagging file is formatted as follows:

```
{
  "id": unique instance ID,
  "dataset": some grouping of instances within a given task format,
  "messages": [
    {
      "role": "user",
      "prompt_template": "",
      "options": [
        # a list of tagging options
      ],
      "problem_activity": "",
    },
    {
      "role": "assistant",
      "response_template": "{option}",
      "response_format": "", # e.g. json or comma-separated list
      "correct_option_index": [
        # integer indices here that correspond to "options" above
      ]
    }
  ]
}
```
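The schema above can be consumed with a few lines of standard-library Python. This is a minimal sketch under the field names documented here; the helper names and file path are ours, not part of the repo's model API wrapper:

```python
import json

def load_instances(path):
    """Yield one instance dict per line of a tagging .jsonl file."""
    with open(path) as f:
        for line in f:
            yield json.loads(line)

def correct_options(instance):
    """Resolve "correct_option_index" to the option strings it indexes."""
    user, assistant = instance["messages"]
    return [user["options"][i] for i in assistant["correct_option_index"]]
```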

Each instance may also include keys indicating few-shot exemplars.

Note that files labeled with `entailment` are inputs for the task we call "verification" in our paper. Verification files are formatted similarly to tagging files, but instead of an `options` key there is a `standards_description` key containing a natural language description of a math standard, and the assistant's dictionary includes a yes/no entry for whether the given problem `aligns` with the described standard.
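For illustration, a verification line has roughly the following shape. This sketch is inferred from the description above, not copied from the files, so treat field placement as approximate:

```
{
  "id": unique instance ID,
  "dataset": some grouping of instances within a given task format,
  "messages": [
    {
      "role": "user",
      "prompt_template": "",
      "standards_description": "", # natural language description of a math standard
      "problem_activity": "",
    },
    {
      "role": "assistant",
      "response_template": "",
      "aligns": "" # yes/no
    }
  ]
}
```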

## Dataset Creation

The prompts in this repository were filtered by testing 15 candidate prompts from [this file](https://github.com/allenai/mathfish/blob/main/mathfish/datasets/prompts.json) across three models: Llama 2 70B, Mixtral 8x7B, and GPT-4-turbo. This repo includes each model's top three performing prompts on the tagging and verification tasks, to facilitate reproducibility of the findings in our paper (link TBD).

## Citation

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

BibTeX TBD

## Dataset Card Contact