Dataset card metadata: Modalities: Tabular, Text · Formats: csv · Libraries: Datasets, pandas
MaksimSTW committed (verified)
Commit c74bbcb · Parent: 81192e5

Update README.md

Files changed (1): README.md (+4, -0)

README.md CHANGED
@@ -12,6 +12,7 @@ Difficulty scores are estimated using the Qwen 2.5-MATH-7B model with the follow
 
 - `temperature = 0.6`
 - `top_p = 0.9`
+- `max_tokens=4096`
 - Inference performed via [vLLM](https://github.com/vllm-project/vllm)
 - Each problem is attempted **128 times**
 
@@ -20,3 +21,6 @@ The difficulty score for each problem is computed as:
 d_i = 100 × (1 - (# successes / 128))
 
 This scoring approach ensures a balanced estimation: a strong model would trivially succeed on all problems, undermining difficulty measurement, while a weak model would fail uniformly, limiting the usefulness of the signal. Qwen 2.5-MATH-7B was chosen for its **mid-range capabilities**, providing **informative gradients** in problem difficulty across the dataset.
+
+## Contact
+Feel free to contact Taiwei Shi ([email protected]) if you have any questions.
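
For readers who want to reproduce the scoring described in the README above, here is a minimal sketch of the pipeline using vLLM. It assumes the Hugging Face Hub model id `Qwen/Qwen2.5-Math-7B` and a hypothetical `is_correct` answer checker; neither is specified by this commit, so treat both as placeholders rather than the authors' exact implementation.

```python
from vllm import LLM, SamplingParams

# Sampling settings listed in the README.
sampling_params = SamplingParams(
    n=128,              # 128 attempts per problem
    temperature=0.6,
    top_p=0.9,
    max_tokens=4096,
)

# Assumed Hub id for Qwen 2.5-MATH-7B (not stated in this commit).
llm = LLM(model="Qwen/Qwen2.5-Math-7B")


def is_correct(solution: str, reference_answer: str) -> bool:
    """Hypothetical answer checker (placeholder): a real pipeline would parse
    the model's final answer and compare it to the reference exactly or symbolically."""
    return reference_answer.strip() in solution


def difficulty_scores(problems: list[str], answers: list[str]) -> list[float]:
    """Compute d_i = 100 * (1 - successes / 128) for each problem."""
    outputs = llm.generate(problems, sampling_params)
    scores = []
    for request_output, answer in zip(outputs, answers):
        successes = sum(is_correct(c.text, answer) for c in request_output.outputs)
        scores.append(100 * (1 - successes / 128))
    return scores
```

Calling `difficulty_scores(problems, answers)` returns one score per problem in the 0-100 range, matching the d_i formula above: 0 when all 128 attempts succeed, 100 when none do.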