Add files using upload-large-folder tool
This view is limited to 50 files because it contains too many changes. See raw diff.
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/afrimgsm/en_cot/afrimgsm_en_cot_eng.yaml +12 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/afrimgsm/en_cot/afrimgsm_en_cot_hau.yaml +12 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/afrimgsm/en_cot/afrimgsm_en_cot_ibo.yaml +12 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/afrimgsm/en_cot/afrimgsm_en_cot_lug.yaml +12 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/afrimgsm/en_cot/afrimgsm_en_cot_sot.yaml +12 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/afrimgsm/en_cot/afrimgsm_en_cot_swa.yaml +12 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/afrimgsm/en_cot/afrimgsm_en_cot_twi.yaml +12 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/afrimgsm/en_cot/cot_yaml +37 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/README.md +40 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/_arabicmmlu.yaml +12 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/_arabicmmlu_humanities.yaml +9 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/_arabicmmlu_language.yaml +9 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/_arabicmmlu_other.yaml +9 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/_arabicmmlu_social_science.yaml +9 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/_arabicmmlu_stem.yaml +9 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/_default_arabicmmlu_template_yaml +15 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/_generate_configs.py +118 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_arabic_language_grammar.yaml +5 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_driving_test.yaml +5 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_general_knowledge.yaml +5 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_high_arabic_language.yaml +5 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_high_biology.yaml +5 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_high_civics.yaml +5 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_high_computer_science.yaml +5 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_high_geography.yaml +5 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_high_history.yaml +5 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_high_islamic_studies.yaml +5 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_high_philosophy.yaml +5 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_high_physics.yaml +5 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_islamic_studies.yaml +5 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_middle_arabic_language.yaml +5 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_middle_computer_science.yaml +5 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_middle_economics.yaml +5 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_middle_general_knowledge.yaml +5 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_middle_geography.yaml +5 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_middle_natural_science.yaml +5 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_middle_social_science.yaml +5 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_primary_computer_science.yaml +5 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_primary_history.yaml +5 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_primary_math.yaml +5 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_primary_natural_science.yaml +5 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_univ_accounting.yaml +5 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_univ_computer_science.yaml +5 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_univ_economics.yaml +5 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/gsm8k/README.md +59 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/gsm8k/gsm8k-cot-self-consistency.yaml +34 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/gsm8k/gsm8k-cot-zeroshot.yaml +44 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/gsm8k/gsm8k-cot.yaml +83 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/gsm8k/gsm8k.yaml +45 -0
- scripts/yans/lm-evaluation-harness/lm_eval/tasks/gsm_plus/README.md +48 -0
scripts/yans/lm-evaluation-harness/lm_eval/tasks/afrimgsm/en_cot/afrimgsm_en_cot_eng.yaml
ADDED
@@ -0,0 +1,12 @@
+# Generated by utils.py
+dataset_name: eng
+doc_to_target: '{% if answer is not none %}{{answer[21:]}}{% else %}{{answer_number|string}}{% endif %}'
+doc_to_text: '{% if answer is not none %}{{question+"\nStep-by-Step Answer:"}}{% else %}{{"Question: "+question+"\nStep-by-Step Answer:"}}{% endif %}'
+generation_kwargs:
+  do_sample: false
+  until:
+  - 'Question:'
+  - </s>
+  - <|im_end|>
+include: cot_yaml
+task: afrimgsm_en_cot_eng
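Each per-language config above carries only the fields that differ (`dataset_name`, `task`, the prompt templates) and pulls everything else from the shared `cot_yaml` base via `include:`. A minimal sketch of the merge semantics, as a hypothetical helper rather than the harness's own loader:

```python
# Hypothetical sketch of how `include:` resolution can work: load the shared
# base config, then let the language-specific keys override it. The harness
# has its own loader; this only illustrates the merge semantics.
import yaml

def load_task_config(path: str) -> dict:
    with open(path, encoding="utf-8") as f:
        cfg = yaml.safe_load(f)
    base_name = cfg.pop("include", None)
    if base_name is None:
        return cfg
    with open(base_name, encoding="utf-8") as f:
        merged = yaml.safe_load(f)
    merged.update(cfg)  # child keys (dataset_name, task, ...) win
    return merged

# e.g. load_task_config("afrimgsm_en_cot_eng.yaml") yields the cot_yaml
# defaults with dataset_name == "eng" and task == "afrimgsm_en_cot_eng".
```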
scripts/yans/lm-evaluation-harness/lm_eval/tasks/afrimgsm/en_cot/afrimgsm_en_cot_hau.yaml
ADDED
@@ -0,0 +1,12 @@
+# Generated by utils.py
+dataset_name: hau
+doc_to_target: '{% if answer is not none %}{{answer[21:]}}{% else %}{{answer_number|string}}{% endif %}'
+doc_to_text: '{% if answer is not none %}{{question+"\nStep-by-Step Answer:"}}{% else %}{{"Question: "+question+"\nStep-by-Step Answer:"}}{% endif %}'
+generation_kwargs:
+  do_sample: false
+  until:
+  - 'Question:'
+  - </s>
+  - <|im_end|>
+include: cot_yaml
+task: afrimgsm_en_cot_hau
scripts/yans/lm-evaluation-harness/lm_eval/tasks/afrimgsm/en_cot/afrimgsm_en_cot_ibo.yaml
ADDED
@@ -0,0 +1,12 @@
+# Generated by utils.py
+dataset_name: ibo
+doc_to_target: '{% if answer is not none %}{{answer[21:]}}{% else %}{{answer_number|string}}{% endif %}'
+doc_to_text: '{% if answer is not none %}{{question+"\nStep-by-Step Answer:"}}{% else %}{{"Question: "+question+"\nStep-by-Step Answer:"}}{% endif %}'
+generation_kwargs:
+  do_sample: false
+  until:
+  - 'Question:'
+  - </s>
+  - <|im_end|>
+include: cot_yaml
+task: afrimgsm_en_cot_ibo
scripts/yans/lm-evaluation-harness/lm_eval/tasks/afrimgsm/en_cot/afrimgsm_en_cot_lug.yaml
ADDED
@@ -0,0 +1,12 @@
+# Generated by utils.py
+dataset_name: lug
+doc_to_target: '{% if answer is not none %}{{answer[21:]}}{% else %}{{answer_number|string}}{% endif %}'
+doc_to_text: '{% if answer is not none %}{{question+"\nStep-by-Step Answer:"}}{% else %}{{"Question: "+question+"\nStep-by-Step Answer:"}}{% endif %}'
+generation_kwargs:
+  do_sample: false
+  until:
+  - 'Question:'
+  - </s>
+  - <|im_end|>
+include: cot_yaml
+task: afrimgsm_en_cot_lug
scripts/yans/lm-evaluation-harness/lm_eval/tasks/afrimgsm/en_cot/afrimgsm_en_cot_sot.yaml
ADDED
@@ -0,0 +1,12 @@
+# Generated by utils.py
+dataset_name: sot
+doc_to_target: '{% if answer is not none %}{{answer[21:]}}{% else %}{{answer_number|string}}{% endif %}'
+doc_to_text: '{% if answer is not none %}{{question+"\nStep-by-Step Answer:"}}{% else %}{{"Question: "+question+"\nStep-by-Step Answer:"}}{% endif %}'
+generation_kwargs:
+  do_sample: false
+  until:
+  - 'Question:'
+  - </s>
+  - <|im_end|>
+include: cot_yaml
+task: afrimgsm_en_cot_sot
scripts/yans/lm-evaluation-harness/lm_eval/tasks/afrimgsm/en_cot/afrimgsm_en_cot_swa.yaml
ADDED
@@ -0,0 +1,12 @@
+# Generated by utils.py
+dataset_name: swa
+doc_to_target: '{% if answer is not none %}{{answer[21:]}}{% else %}{{answer_number|string}}{% endif %}'
+doc_to_text: '{% if answer is not none %}{{question+"\nStep-by-Step Answer:"}}{% else %}{{"Question: "+question+"\nStep-by-Step Answer:"}}{% endif %}'
+generation_kwargs:
+  do_sample: false
+  until:
+  - 'Question:'
+  - </s>
+  - <|im_end|>
+include: cot_yaml
+task: afrimgsm_en_cot_swa
scripts/yans/lm-evaluation-harness/lm_eval/tasks/afrimgsm/en_cot/afrimgsm_en_cot_twi.yaml
ADDED
@@ -0,0 +1,12 @@
+# Generated by utils.py
+dataset_name: twi
+doc_to_target: '{% if answer is not none %}{{answer[21:]}}{% else %}{{answer_number|string}}{% endif %}'
+doc_to_text: '{% if answer is not none %}{{question+"\nStep-by-Step Answer:"}}{% else %}{{"Question: "+question+"\nStep-by-Step Answer:"}}{% endif %}'
+generation_kwargs:
+  do_sample: false
+  until:
+  - 'Question:'
+  - </s>
+  - <|im_end|>
+include: cot_yaml
+task: afrimgsm_en_cot_twi
scripts/yans/lm-evaluation-harness/lm_eval/tasks/afrimgsm/en_cot/cot_yaml
ADDED
@@ -0,0 +1,37 @@
+# This file will be included in the generated language-specific task configs.
+# It doesn't have a yaml file extension as it is not meant to be imported directly by the harness.
+group:
+- afrimgsm
+- afrimgsm_en_cot
+dataset_path: masakhane/afrimgsm
+dataset_name: null # Overridden by language-specific config.
+output_type: generate_until
+training_split: train
+test_split: test
+generation_kwargs:
+  until:
+  - "\n\n"
+  - "\n"
+  do_sample: false
+  temperature: 0.0
+target_delimiter: " "
+metric_list:
+- metric: exact_match
+  aggregation: mean
+  higher_is_better: true
+  ignore_case: true
+  ignore_punctuation: true
+filter_list:
+- name: "strict-match"
+  filter:
+  - function: "regex"
+    regex_pattern: "The answer is (\\-?[0-9\\.\\,]+)"
+  - function: "take_first"
+- filter:
+  - function: regex
+    group_select: -1
+    regex_pattern: (-?[$0-9.,]{2,})|(-?[0-9]+)
+  - function: take_first
+  name: flexible-extract
+metadata:
+  version: 2.0
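The two filters above define the standard GSM-style answer extraction: "strict-match" only accepts the canonical "The answer is N" phrasing, while "flexible-extract" falls back to the last number-like span in the generation. An illustration with plain `re` (not harness code):

```python
# Illustration of the two answer filters defined above. "strict-match"
# requires the canonical phrasing; "flexible-extract" takes the last
# number-like match (group_select: -1).
import re

completion = "So there must have been 21 - 15 = 6. The answer is 6"

strict = re.findall(r"The answer is (\-?[0-9\.\,]+)", completion)
flexible = re.findall(r"(-?[$0-9.,]{2,})|(-?[0-9]+)", completion)

print(strict[0] if strict else "[invalid]")  # "6"
last = flexible[-1]                          # last match, as a group tuple
print(last[0] or last[1])                    # "6"
```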
scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/README.md
ADDED
@@ -0,0 +1,40 @@
+# ArabicMMLU
+
+### Paper
+
+Title: ArabicMMLU: Assessing Massive Multitask Language Understanding in Arabic
+
+Abstract: https://arxiv.org/abs/2402.12840
+
+The focus of language model evaluation has transitioned towards reasoning and knowledge-intensive tasks, driven by advancements in pretraining large models. While state-of-the-art models are partially trained on large Arabic texts, evaluating their performance in Arabic remains challenging due to the limited availability of relevant datasets. To bridge this gap, we present ArabicMMLU, the first multi-task language understanding benchmark for the Arabic language, sourced from school exams across diverse educational levels in different countries spanning North Africa, the Levant, and the Gulf regions. Our data comprises 40 tasks and 14,575 multiple-choice questions in Modern Standard Arabic (MSA), and is carefully constructed by collaborating with native speakers in the region. Our comprehensive evaluations of 35 models reveal substantial room for improvement, particularly among the best open-source models. Notably, BLOOMZ, mT0, LLaMA2, and Falcon struggle to achieve a score of 50%, while even the top-performing Arabic-centric model only achieves a score of 62.3%.
+
+The authors of the paper conducted studies by varying the language of the initial prompt and answer keys between English and Arabic. However, they set English initial prompts and answer keys as the standard, which is the version implemented in this task.
+
+Homepage: https://github.com/mbzuai-nlp/ArabicMMLU
+
+
+### Citation
+
+```
+@misc{koto2024arabicmmlu,
+      title={ArabicMMLU: Assessing Massive Multitask Language Understanding in Arabic},
+      author={Fajri Koto and Haonan Li and Sara Shatnawi and Jad Doughman and Abdelrahman Boda Sadallah and Aisha Alraeesi and Khalid Almubarak and Zaid Alyafeai and Neha Sengupta and Shady Shehata and Nizar Habash and Preslav Nakov and Timothy Baldwin},
+      year={2024},
+      eprint={2402.12840},
+      archivePrefix={arXiv},
+      primaryClass={cs.CL}
+}
+```
+
+### Groups and Tasks
+
+#### Groups
+
+* `arabicmmlu`: evaluates all ArabicMMLU tasks.
+
+* `arabicmmlu_stem`: evaluates STEM ArabicMMLU tasks.
+* `arabicmmlu_social_science`: evaluates social science ArabicMMLU tasks.
+* `arabicmmlu_humanities`: evaluates humanities ArabicMMLU tasks.
+* `arabicmmlu_language`: evaluates Arabic language ArabicMMLU tasks.
+* `arabicmmlu_other`: evaluates other ArabicMMLU tasks.
scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/_arabicmmlu.yaml
ADDED
@@ -0,0 +1,12 @@
+group: arabicmmlu
+task:
+- arabicmmlu_other
+- arabicmmlu_social_science
+- arabicmmlu_humanities
+- arabicmmlu_stem
+- arabicmmlu_language
+aggregate_metric_list:
+- metric: acc
+  weight_by_size: True
+metadata:
+  version: 0
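With `weight_by_size: True`, the group's accuracy is a document-count-weighted mean of the subtask accuracies rather than a plain average. A sketch of the arithmetic with hypothetical numbers:

```python
# Size-weighted group accuracy, as implied by `weight_by_size: True`:
# each subtask contributes in proportion to its number of documents.
subtasks = {  # hypothetical (acc, n_docs) per subtask
    "arabicmmlu_stem": (0.55, 3000),
    "arabicmmlu_language": (0.60, 1500),
}
total = sum(n for _, n in subtasks.values())
group_acc = sum(acc * n for acc, n in subtasks.values()) / total
print(round(group_acc, 4))  # 0.5667 -- pulled toward the larger subtask
```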
scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/_arabicmmlu_humanities.yaml
ADDED
@@ -0,0 +1,9 @@
+group: arabicmmlu_humanities
+group_alias: Humanities
+task:
+- arabicmmlu_humanities_tasks
+aggregate_metric_list:
+- metric: acc
+  weight_by_size: True
+metadata:
+  version: 0
scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/_arabicmmlu_language.yaml
ADDED
@@ -0,0 +1,9 @@
+group: arabicmmlu_language
+group_alias: Language
+task:
+- arabicmmlu_language_tasks
+aggregate_metric_list:
+- metric: acc
+  weight_by_size: True
+metadata:
+  version: 0
scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/_arabicmmlu_other.yaml
ADDED
@@ -0,0 +1,9 @@
+group: arabicmmlu_other
+group_alias: Other
+task:
+- arabicmmlu_other_tasks
+aggregate_metric_list:
+- metric: acc
+  weight_by_size: True
+metadata:
+  version: 0
scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/_arabicmmlu_social_science.yaml
ADDED
@@ -0,0 +1,9 @@
+group: arabicmmlu_social_science
+group_alias: Social Science
+task:
+- arabicmmlu_social_science_tasks
+aggregate_metric_list:
+- metric: acc
+  weight_by_size: True
+metadata:
+  version: 0
scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/_arabicmmlu_stem.yaml
ADDED
@@ -0,0 +1,9 @@
+group: arabicmmlu_stem
+group_alias: STEM
+task:
+- arabicmmlu_stem_tasks
+aggregate_metric_list:
+- metric: acc
+  weight_by_size: True
+metadata:
+  version: 0
scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/_default_arabicmmlu_template_yaml
ADDED
@@ -0,0 +1,15 @@
+dataset_path: yazeed7/ArabicMMLU
+test_split: test
+fewshot_split: dev
+fewshot_config:
+  sampler: first_n
+output_type: multiple_choice
+doc_to_text: !function utils.doc_to_text
+doc_to_choice: !function utils.doc_to_choice
+doc_to_target: "Answer Key"
+metric_list:
+- metric: acc
+  aggregation: mean
+  higher_is_better: true
+metadata:
+  version: 0.0
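The `!function` tags point `doc_to_text` and `doc_to_choice` at Python callables in a `utils.py` alongside these configs (that file falls outside this 50-file view). A hypothetical sketch of the expected shape; the field names are illustrative assumptions, not the actual ArabicMMLU schema:

```python
# Hypothetical sketch of the utils.py callables referenced by `!function`.
# Column names below are assumptions for illustration; the real dataset
# schema may differ.
OPTION_COLS = ["Option 1", "Option 2", "Option 3", "Option 4", "Option 5"]

def doc_to_text(doc: dict) -> str:
    # Build the multiple-choice prompt from one dataset row.
    options = "\n".join(
        f"{letter}. {doc[col]}"
        for letter, col in zip("ABCDE", OPTION_COLS)
        if doc.get(col)
    )
    return f"Question: {doc['Question']}\n{options}\nAnswer:"

def doc_to_choice(doc: dict) -> list[str]:
    # One answer letter per populated option column.
    return [letter for letter, col in zip("ABCDE", OPTION_COLS) if doc.get(col)]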
scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/_generate_configs.py
ADDED
@@ -0,0 +1,118 @@
+"""
+Take in a YAML, and output all "other" splits with this YAML
+"""
+
+import argparse
+import logging
+import os
+
+import yaml
+from tqdm import tqdm
+
+
+eval_logger = logging.getLogger("lm-eval")
+
+
+SUBJECTS = {
+    "Driving Test": "other",
+    "High Geography": "social_science",
+    "High History": "humanities",
+    "Islamic Studies": "humanities",
+    "Univ Accounting": "social_science",
+    "Primary General Knowledge": "other",
+    "Univ Political Science": "social_science",
+    "Primary Math": "stem",
+    "Middle General Knowledge": "other",
+    "High Biology": "stem",
+    "Primary Natural Science": "stem",
+    "High Economics": "social_science",
+    "Middle Natural Science": "stem",
+    "Middle Geography": "social_science",
+    "Primary Social Science": "social_science",
+    "Middle Computer Science": "stem",
+    "Middle Islamic Studies": "humanities",
+    "Primary Computer Science": "stem",
+    "High Physics": "stem",
+    "Middle Social Science": "social_science",
+    "Middle Civics": "social_science",
+    "High Computer Science": "stem",
+    "General Knowledge": "other",
+    "High Civics": "social_science",
+    "Prof Law": "humanities",
+    "High Islamic Studies": "humanities",
+    "Primary Arabic Language": "language",
+    "High Arabic Language": "language",
+    "Arabic Language (Grammar)": "language",
+    "Primary History": "humanities",
+    "Middle History": "humanities",
+    "Univ Economics": "social_science",
+    "Arabic Language (General)": "language",
+    "Univ Computer Science": "stem",
+    "Primary Islamic Studies": "humanities",
+    "Primary Geography": "social_science",
+    "High Philosophy": "humanities",
+    "Middle Arabic Language": "language",
+    "Middle Economics": "social_science",
+    "Univ Management": "other",
+}
+
+
+def parse_args():
+    parser = argparse.ArgumentParser()
+    parser.add_argument("--base_yaml_path", default="_default_arabicmmlu_template_yaml")
+    parser.add_argument("--save_prefix_path", default="arabicmmlu")
+    return parser.parse_args()
+
+
+if __name__ == "__main__":
+    args = parse_args()
+
+    # get filename of base_yaml so we can `"include": ` it in our "other" YAMLs.
+    base_yaml_name = os.path.split(args.base_yaml_path)[-1]
+    with open(args.base_yaml_path, encoding="utf-8") as f:
+        base_yaml = yaml.full_load(f)
+
+    ALL_CATEGORIES = []
+    for subject, category in tqdm(SUBJECTS.items()):
+        if category not in ALL_CATEGORIES:
+            ALL_CATEGORIES.append(category)
+
+        # description = f"The following are multiple choice questions (with answers) about {' '.join(subject.split('_'))}.\n\n"
+
+        yaml_dict = {
+            "include": base_yaml_name,
+            "tag": f"arabicmmlu_{category}",
+            "task": f"arabicmmlu_{subject.lower().replace(' ', '_')}",
+            "task_alias": subject,
+            "dataset_name": subject,
+            # "description": description,
+        }
+
+        file_save_path = (
+            args.save_prefix_path
+            + f"_{subject.lower().replace(' ', '_').replace('(', '').replace(')', '')}.yaml"
+        )
+        eval_logger.info(f"Saving yaml for subset {subject} to {file_save_path}")
+        with open(file_save_path, "w", encoding="utf-8") as yaml_file:
+            yaml.dump(
+                yaml_dict,
+                yaml_file,
+                allow_unicode=True,
+                default_style='"',
+            )
+
+    arabicmmlu_subcategories = [f"arabicmmlu_{category}" for category in ALL_CATEGORIES]
+
+    file_save_path = args.save_prefix_path + ".yaml"
+
+    eval_logger.info(f"Saving benchmark config to {file_save_path}")
+    with open(file_save_path, "w", encoding="utf-8") as yaml_file:
+        yaml.dump(
+            {
+                "group": "arabicmmlu",
+                "task": arabicmmlu_subcategories,
+            },
+            yaml_file,
+            indent=4,
+            default_flow_style=False,
+        )
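Note the asymmetry in how the script normalizes subject names: the task name only lowercases and replaces spaces, while the file name additionally strips parentheses. That is why the config that follows is named `arabicmmlu_arabic_language_grammar.yaml` yet defines `task: arabicmmlu_arabic_language_(grammar)`. The two normalizations side by side:

```python
# The two normalizations applied by _generate_configs.py, side by side.
subject = "Arabic Language (Grammar)"
task_name = f"arabicmmlu_{subject.lower().replace(' ', '_')}"
file_stem = "arabicmmlu" + (
    f"_{subject.lower().replace(' ', '_').replace('(', '').replace(')', '')}"
)
print(task_name)  # arabicmmlu_arabic_language_(grammar)
print(file_stem)  # arabicmmlu_arabic_language_grammar
```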
scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_arabic_language_grammar.yaml
ADDED
@@ -0,0 +1,5 @@
+"dataset_name": "Arabic Language (Grammar)"
+"tag": "arabicmmlu_language_tasks"
+"include": "_default_arabicmmlu_template_yaml"
+"task": "arabicmmlu_arabic_language_(grammar)"
+"task_alias": "Arabic Language (Grammar)"
scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_driving_test.yaml
ADDED
@@ -0,0 +1,5 @@
+"dataset_name": "Driving Test"
+"tag": "arabicmmlu_other_tasks"
+"include": "_default_arabicmmlu_template_yaml"
+"task": "arabicmmlu_driving_test"
+"task_alias": "Driving Test"
scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_general_knowledge.yaml
ADDED
@@ -0,0 +1,5 @@
+"dataset_name": "General Knowledge"
+"tag": "arabicmmlu_other_tasks"
+"include": "_default_arabicmmlu_template_yaml"
+"task": "arabicmmlu_general_knowledge"
+"task_alias": "General Knowledge"
scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_high_arabic_language.yaml
ADDED
@@ -0,0 +1,5 @@
+"dataset_name": "High Arabic Language"
+"tag": "arabicmmlu_language_tasks"
+"include": "_default_arabicmmlu_template_yaml"
+"task": "arabicmmlu_high_arabic_language"
+"task_alias": "High Arabic Language"
scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_high_biology.yaml
ADDED
@@ -0,0 +1,5 @@
+"dataset_name": "High Biology"
+"tag": "arabicmmlu_stem_tasks"
+"include": "_default_arabicmmlu_template_yaml"
+"task": "arabicmmlu_high_biology"
+"task_alias": "High Biology"
scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_high_civics.yaml
ADDED
@@ -0,0 +1,5 @@
+"dataset_name": "High Civics"
+"tag": "arabicmmlu_social_science_tasks"
+"include": "_default_arabicmmlu_template_yaml"
+"task": "arabicmmlu_high_civics"
+"task_alias": "High Civics"
scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_high_computer_science.yaml
ADDED
@@ -0,0 +1,5 @@
+"dataset_name": "High Computer Science"
+"tag": "arabicmmlu_stem_tasks"
+"include": "_default_arabicmmlu_template_yaml"
+"task": "arabicmmlu_high_computer_science"
+"task_alias": "High Computer Science"
scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_high_geography.yaml
ADDED
@@ -0,0 +1,5 @@
+"dataset_name": "High Geography"
+"tag": "arabicmmlu_social_science_tasks"
+"include": "_default_arabicmmlu_template_yaml"
+"task": "arabicmmlu_high_geography"
+"task_alias": "High Geography"
scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_high_history.yaml
ADDED
@@ -0,0 +1,5 @@
+"dataset_name": "High History"
+"tag": "arabicmmlu_humanities_tasks"
+"include": "_default_arabicmmlu_template_yaml"
+"task": "arabicmmlu_high_history"
+"task_alias": "High History"
scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_high_islamic_studies.yaml
ADDED
@@ -0,0 +1,5 @@
+"dataset_name": "High Islamic Studies"
+"tag": "arabicmmlu_humanities_tasks"
+"include": "_default_arabicmmlu_template_yaml"
+"task": "arabicmmlu_high_islamic_studies"
+"task_alias": "High Islamic Studies"
scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_high_philosophy.yaml
ADDED
@@ -0,0 +1,5 @@
+"dataset_name": "High Philosophy"
+"tag": "arabicmmlu_humanities_tasks"
+"include": "_default_arabicmmlu_template_yaml"
+"task": "arabicmmlu_high_philosophy"
+"task_alias": "High Philosophy"
scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_high_physics.yaml
ADDED
@@ -0,0 +1,5 @@
+"dataset_name": "High Physics"
+"tag": "arabicmmlu_stem_tasks"
+"include": "_default_arabicmmlu_template_yaml"
+"task": "arabicmmlu_high_physics"
+"task_alias": "High Physics"
scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_islamic_studies.yaml
ADDED
@@ -0,0 +1,5 @@
+"dataset_name": "Islamic Studies"
+"tag": "arabicmmlu_humanities_tasks"
+"include": "_default_arabicmmlu_template_yaml"
+"task": "arabicmmlu_islamic_studies"
+"task_alias": "Islamic Studies"
scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_middle_arabic_language.yaml
ADDED
@@ -0,0 +1,5 @@
+"dataset_name": "Middle Arabic Language"
+"tag": "arabicmmlu_language_tasks"
+"include": "_default_arabicmmlu_template_yaml"
+"task": "arabicmmlu_middle_arabic_language"
+"task_alias": "Middle Arabic Language"
scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_middle_computer_science.yaml
ADDED
@@ -0,0 +1,5 @@
+"dataset_name": "Middle Computer Science"
+"tag": "arabicmmlu_stem_tasks"
+"include": "_default_arabicmmlu_template_yaml"
+"task": "arabicmmlu_middle_computer_science"
+"task_alias": "Middle Computer Science"
scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_middle_economics.yaml
ADDED
@@ -0,0 +1,5 @@
+"dataset_name": "Middle Economics"
+"tag": "arabicmmlu_social_science_tasks"
+"include": "_default_arabicmmlu_template_yaml"
+"task": "arabicmmlu_middle_economics"
+"task_alias": "Middle Economics"
scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_middle_general_knowledge.yaml
ADDED
@@ -0,0 +1,5 @@
+"dataset_name": "Middle General Knowledge"
+"tag": "arabicmmlu_other_tasks"
+"include": "_default_arabicmmlu_template_yaml"
+"task": "arabicmmlu_middle_general_knowledge"
+"task_alias": "Middle General Knowledge"
scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_middle_geography.yaml
ADDED
@@ -0,0 +1,5 @@
+"dataset_name": "Middle Geography"
+"tag": "arabicmmlu_social_science_tasks"
+"include": "_default_arabicmmlu_template_yaml"
+"task": "arabicmmlu_middle_geography"
+"task_alias": "Middle Geography"
scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_middle_natural_science.yaml
ADDED
@@ -0,0 +1,5 @@
+"dataset_name": "Middle Natural Science"
+"tag": "arabicmmlu_stem_tasks"
+"include": "_default_arabicmmlu_template_yaml"
+"task": "arabicmmlu_middle_natural_science"
+"task_alias": "Middle Natural Science"
scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_middle_social_science.yaml
ADDED
@@ -0,0 +1,5 @@
+"dataset_name": "Middle Social Science"
+"tag": "arabicmmlu_social_science_tasks"
+"include": "_default_arabicmmlu_template_yaml"
+"task": "arabicmmlu_middle_social_science"
+"task_alias": "Middle Social Science"
scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_primary_computer_science.yaml
ADDED
@@ -0,0 +1,5 @@
+"dataset_name": "Primary Computer Science"
+"tag": "arabicmmlu_stem_tasks"
+"include": "_default_arabicmmlu_template_yaml"
+"task": "arabicmmlu_primary_computer_science"
+"task_alias": "Primary Computer Science"
scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_primary_history.yaml
ADDED
@@ -0,0 +1,5 @@
+"dataset_name": "Primary History"
+"tag": "arabicmmlu_humanities_tasks"
+"include": "_default_arabicmmlu_template_yaml"
+"task": "arabicmmlu_primary_history"
+"task_alias": "Primary History"
scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_primary_math.yaml
ADDED
@@ -0,0 +1,5 @@
+"dataset_name": "Primary Math"
+"tag": "arabicmmlu_stem_tasks"
+"include": "_default_arabicmmlu_template_yaml"
+"task": "arabicmmlu_primary_math"
+"task_alias": "Primary Math"
scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_primary_natural_science.yaml
ADDED
@@ -0,0 +1,5 @@
+"dataset_name": "Primary Natural Science"
+"tag": "arabicmmlu_stem_tasks"
+"include": "_default_arabicmmlu_template_yaml"
+"task": "arabicmmlu_primary_natural_science"
+"task_alias": "Primary Natural Science"
scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_univ_accounting.yaml
ADDED
@@ -0,0 +1,5 @@
+"dataset_name": "Univ Accounting"
+"tag": "arabicmmlu_social_science_tasks"
+"include": "_default_arabicmmlu_template_yaml"
+"task": "arabicmmlu_univ_accounting"
+"task_alias": "Univ Accounting"
scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_univ_computer_science.yaml
ADDED
@@ -0,0 +1,5 @@
+"dataset_name": "Univ Computer Science"
+"tag": "arabicmmlu_stem_tasks"
+"include": "_default_arabicmmlu_template_yaml"
+"task": "arabicmmlu_univ_computer_science"
+"task_alias": "Univ Computer Science"
scripts/yans/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/arabicmmlu_univ_economics.yaml
ADDED
@@ -0,0 +1,5 @@
+"dataset_name": "Univ Economics"
+"tag": "arabicmmlu_social_science_tasks"
+"include": "_default_arabicmmlu_template_yaml"
+"task": "arabicmmlu_univ_economics"
+"task_alias": "Univ Economics"
scripts/yans/lm-evaluation-harness/lm_eval/tasks/gsm8k/README.md
ADDED
@@ -0,0 +1,59 @@
+# GSM8k
+
+## Paper
+Training Verifiers to Solve Math Word Problems
+https://arxiv.org/abs/2110.14168
+
+State-of-the-art language models can match human performance on many tasks, but
+they still struggle to robustly perform multi-step mathematical reasoning. To
+diagnose the failures of current models and support research, we introduce GSM8K,
+a dataset of 8.5K high-quality, linguistically diverse grade school math word problems.
+We find that even the largest transformer models fail to achieve high test performance,
+despite the conceptual simplicity of this problem distribution.
+
+NOTE: See the official implementation of the task:
+https://github.com/openai/grade-school-math/blob/master/grade_school_math/calculator.py
+for how to make use of the dataset's calculator annotations in your language
+model's sample/generation function.
+
+Homepage: https://github.com/openai/grade-school-math
+
+
+## Citation
+```
+@misc{cobbe2021training,
+      title={Training Verifiers to Solve Math Word Problems},
+      author={Karl Cobbe and Vineet Kosaraju and Mohammad Bavarian and Jacob Hilton and Reiichiro Nakano and Christopher Hesse and John Schulman},
+      year={2021},
+      eprint={2110.14168},
+      archivePrefix={arXiv},
+      primaryClass={cs.LG}
+}
+```
+
+### Groups and Tasks
+
+#### Groups
+
+- `math_word_problems`
+- `chain_of_thought`
+- `self_consistency`
+
+#### Tasks
+
+- `gsm8k_yaml`
+- `gsm8k_cot`: GSM8K with Chain-of-Thought
+- `gsm8k_cot_self_consistency`: GSM8K with Chain-of-Thought and Self-Consistency
+
+### Checklist
+
+- [x] Is in Eval-harness v1.0?
+- [ ] Has been checked for regression from v1.0?
+- [ ] Has been checked for equivalence with original paper methodology?
+- [ ] "Main" checked variant clearly denoted?
+
+### Variant Wishlist
+
+- [ ] Variant with Calculator (see https://github.com/openai/grade-school-math/blob/master/grade_school_math/calculator.py for an example implementation)
+- [ ] Using Verifiers
+- [ ] Majority voting "without CoT"
scripts/yans/lm-evaluation-harness/lm_eval/tasks/gsm8k/gsm8k-cot-self-consistency.yaml
ADDED
@@ -0,0 +1,34 @@
+include: gsm8k-cot.yaml
+tag:
+- chain_of_thought
+- self_consistency
+task: gsm8k_cot_self_consistency
+generation_kwargs:
+  until:
+  - "Q:"
+  - "\n\n"
+  do_sample: true
+  temperature: 0.2
+repeats: 64
+filter_list:
+- name: "score-first" # pick only the first response, and report metrics on that
+  filter:
+  - function: "regex"
+    regex_pattern: "The answer is (\\-?[0-9\\.\\,]*[0-9]+)"
+  - function: "take_first"
+- name: "maj@64"
+  filter:
+  - function: "regex"
+    regex_pattern: "The answer is (\\-?[0-9\\.\\,]*[0-9]+)"
+  - function: "majority_vote"
+  - function: "take_first"
+- name: "maj@8" # get maj@8 via the first 8 responses; a better estimator would be optimal
+  filter:
+  - function: "take_first_k"
+    k: 8
+  - function: "regex"
+    regex_pattern: "The answer is (\\-?[0-9\\.\\,]*[0-9]+)"
+  - function: "majority_vote"
+  - function: "take_first"
+metadata:
+  version: 2.0
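The `maj@64` and `maj@8` filters first regex-extract one answer per sampled completion (64 samples via `repeats: 64`, or only the first 8 via `take_first_k`), then keep the most common value. A minimal sketch of that vote, assuming extraction has already produced one string per sample (illustrative, not the harness's filter implementation):

```python
# Minimal sketch of majority voting over extracted answers.
from collections import Counter

def majority_vote(answers: list[str]) -> str:
    # Most common extracted answer wins; ties resolve by first occurrence.
    return Counter(answers).most_common(1)[0][0]

extracted = ["72", "72", "68", "72", "[invalid]", "72", "68", "72"]  # maj@8
print(majority_vote(extracted))  # "72"
```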
scripts/yans/lm-evaluation-harness/lm_eval/tasks/gsm8k/gsm8k-cot-zeroshot.yaml
ADDED
@@ -0,0 +1,44 @@
+tag:
+- math_word_problems
+task: gsm8k_cot_zeroshot
+dataset_path: gsm8k
+dataset_name: main
+output_type: generate_until
+training_split: train
+fewshot_split: train
+test_split: test
+doc_to_text: "Q: {{question}}\nA: Let's think step by step."
+doc_to_target: "{{answer}}" #" {{answer.split('### ')[-1].rstrip()}}"
+metric_list:
+- metric: exact_match
+  aggregation: mean
+  higher_is_better: true
+  ignore_case: true
+  ignore_punctuation: false
+  regexes_to_ignore:
+  - ","
+  - "\\$"
+  - "(?s).*#### "
+  - "\\.$"
+generation_kwargs:
+  until:
+  - "Q:"
+  - "</s>"
+  - "<|im_end|>"
+  do_sample: false
+repeats: 1
+num_fewshot: 0
+filter_list:
+- name: "strict-match"
+  filter:
+  - function: "regex"
+    regex_pattern: "The answer is (\\-?[0-9\\.\\,]+)."
+  - function: "take_first"
+- name: "flexible-extract"
+  filter:
+  - function: "regex"
+    group_select: -1
+    regex_pattern: "(-?[$0-9.,]{2,})|(-?[0-9]+)"
+  - function: "take_first"
+metadata:
+  version: 3.0
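Here `exact_match` compares the filtered model answer against the gold answer after applying `regexes_to_ignore` to both; in particular `(?s).*#### ` strips everything up to the final `#### ` marker in GSM8K references, so only the final number is compared. An illustration of that normalization:

```python
# How `regexes_to_ignore` reduces a GSM8K gold answer to its final number
# before the exact-match comparison (illustrative).
import re

gold = "Natalia sold 48/2 = 24 clips in May.\n#### 72"
for pat in [",", r"\$", r"(?s).*#### ", r"\.$"]:
    gold = re.sub(pat, "", gold)
print(gold)  # "72"
```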
scripts/yans/lm-evaluation-harness/lm_eval/tasks/gsm8k/gsm8k-cot.yaml
ADDED
@@ -0,0 +1,83 @@
+dataset_name: main
+dataset_path: gsm8k
+doc_to_target: '{{answer.split(''####'')[-1].strip() if answer is defined else target}}'
+doc_to_text: 'Q: {{question}}
+
+  A:'
+fewshot_config:
+  sampler: first_n
+  samples:
+  - question: There are 15 trees in the grove. Grove workers will plant trees in the
+      grove today. After they are done, there will be 21 trees. How many trees did
+      the grove workers plant today?
+    target: There are 15 trees originally. Then there were 21 trees after some more
+      were planted. So there must have been 21 - 15 = 6. The answer is 6.
+  - question: If there are 3 cars in the parking lot and 2 more cars arrive, how many
+      cars are in the parking lot?
+    target: There are originally 3 cars. 2 more cars arrive. 3 + 2 = 5. The answer
+      is 5.
+  - question: Leah had 32 chocolates and her sister had 42. If they ate 35, how many
+      pieces do they have left in total?
+    target: Originally, Leah had 32 chocolates. Her sister had 42. So in total they
+      had 32 + 42 = 74. After eating 35, they had 74 - 35 = 39. The answer is 39.
+  - question: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12
+      lollipops. How many lollipops did Jason give to Denny?
+    target: Jason started with 20 lollipops. Then he had 12 after giving some to Denny.
+      So he gave Denny 20 - 12 = 8. The answer is 8.
+  - question: Shawn has five toys. For Christmas, he got two toys each from his mom and
+      dad. How many toys does he have now?
+    target: Shawn started with 5 toys. If he got 2 toys each from his mom and dad,
+      then that is 4 more toys. 5 + 4 = 9. The answer is 9.
+  - question: There were nine computers in the server room. Five more computers were
+      installed each day, from monday to thursday. How many computers are now in the
+      server room?
+    target: There were originally 9 computers. For each of 4 days, 5 more computers
+      were added. So 5 * 4 = 20 computers were added. 9 + 20 is 29. The answer is
+      29.
+  - question: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday,
+      he lost 2 more. How many golf balls did he have at the end of wednesday?
+    target: Michael started with 58 golf balls. After losing 23 on tuesday, he had
+      58 - 23 = 35. After losing 2 more, he had 35 - 2 = 33 golf balls. The answer
+      is 33.
+  - question: Olivia has $23. She bought five bagels for $3 each. How much money does
+      she have left?
+    target: Olivia had 23 dollars. 5 bagels for 3 dollars each will be 5 x 3 = 15
+      dollars. So she has 23 - 15 dollars left. 23 - 15 is 8. The answer is 8.
+filter_list:
+- filter:
+  - function: regex
+    regex_pattern: The answer is (\-?[0-9\.\,]+).
+  - function: take_first
+  name: strict-match
+- filter:
+  - function: regex
+    group_select: -1
+    regex_pattern: (-?[$0-9.,]{2,})|(-?[0-9]+)
+  - function: take_first
+  name: flexible-extract
+generation_kwargs:
+  do_sample: false
+  until:
+  - 'Q:'
+  - </s>
+  - <|im_end|>
+tag:
+- chain_of_thought
+metadata:
+  version: 3.0
+metric_list:
+- aggregation: mean
+  higher_is_better: true
+  ignore_case: true
+  ignore_punctuation: false
+  metric: exact_match
+  regexes_to_ignore:
+  - ','
+  - \$
+  - '(?s).*#### '
+  - \.$
+num_fewshot: 8
+output_type: generate_until
+repeats: 1
+task: gsm8k_cot
+test_split: test
scripts/yans/lm-evaluation-harness/lm_eval/tasks/gsm8k/gsm8k.yaml
ADDED
@@ -0,0 +1,45 @@
+tag:
+- math_word_problems
+task: gsm8k
+dataset_path: gsm8k
+dataset_name: main
+output_type: generate_until
+training_split: train
+fewshot_split: train
+test_split: test
+doc_to_text: "Question: {{question}}\nAnswer:"
+doc_to_target: "{{answer}}" #" {{answer.split('### ')[-1].rstrip()}}"
+metric_list:
+- metric: exact_match
+  aggregation: mean
+  higher_is_better: true
+  ignore_case: true
+  ignore_punctuation: false
+  regexes_to_ignore:
+  - ","
+  - "\\$"
+  - "(?s).*#### "
+  - "\\.$"
+generation_kwargs:
+  until:
+  - "Question:"
+  - "</s>"
+  - "<|im_end|>"
+  do_sample: false
+  temperature: 0.0
+repeats: 1
+num_fewshot: 5
+filter_list:
+- name: "strict-match"
+  filter:
+  - function: "regex"
+    regex_pattern: "#### (\\-?[0-9\\.\\,]+)"
+  - function: "take_first"
+- name: "flexible-extract"
+  filter:
+  - function: "regex"
+    group_select: -1
+    regex_pattern: "(-?[$0-9.,]{2,})|(-?[0-9]+)"
+  - function: "take_first"
+metadata:
+  version: 3.0
scripts/yans/lm-evaluation-harness/lm_eval/tasks/gsm_plus/README.md
ADDED
@@ -0,0 +1,48 @@
+# gsm_plus
+
+### Paper
+
+Title: `GSM-PLUS: A Comprehensive Benchmark for Evaluating the Robustness of LLMs as Mathematical Problem Solvers`
+
+Abstract: `Large language models (LLMs) have achieved impressive performance across various mathematical reasoning benchmarks. However, there are increasing debates regarding whether these models truly understand and apply mathematical knowledge or merely rely on shortcuts for mathematical reasoning. One essential and frequently occurring evidence is that when the math questions are slightly changed, LLMs can behave incorrectly. This motivates us to evaluate the robustness of LLMs' math reasoning capability by testing a wide range of question variations. We introduce the adversarial grade school math (GSM-PLUS) dataset, an extension of GSM8K augmented with various mathematical perturbations. Our experiments on 25 LLMs and 4 prompting techniques show that while LLMs exhibit different levels of math reasoning abilities, their performances are far from robust. In particular, even for problems that have been solved in GSM8K, LLMs can make mistakes when new statements are added or the question targets are altered. We also explore whether more robust performance can be achieved by composing existing prompting methods, in which we try an iterative method that generates and verifies each intermediate thought based on its reasoning goal and calculation result.`
+
+Homepage: https://huggingface.co/datasets/qintongli/GSM-Plus
+
+### Citation
+
+```bibtex
+@misc{li2024gsmpluscomprehensivebenchmarkevaluating,
+      title={GSM-Plus: A Comprehensive Benchmark for Evaluating the Robustness of LLMs as Mathematical Problem Solvers},
+      author={Qintong Li and Leyang Cui and Xueliang Zhao and Lingpeng Kong and Wei Bi},
+      year={2024},
+      eprint={2402.19255},
+      archivePrefix={arXiv},
+      primaryClass={cs.CL},
+      url={https://arxiv.org/abs/2402.19255},
+}
+```
+
+### Groups and Tasks
+
+#### Groups
+
+* Not part of a group yet
+
+#### Tasks
+
+The following tasks evaluate subjects in the gsm_plus dataset:
+- `gsm_plus`
+- `gsm_plus_mini`
+
+### Checklist
+
+For adding novel benchmarks/datasets to the library:
+* [x] Is the task an existing benchmark in the literature?
+* [x] Have you referenced the original paper that introduced the task?
+* [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
+
+
+If other tasks on this dataset are already supported:
+* [ ] Is the "Main" variant of this task clearly denoted?
+* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
+* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
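Once these configs are on the harness's task path, the tasks can be run by name. A hedged sketch using the harness's Python entry point, assuming a recent lm-evaluation-harness install; the model checkpoint is a placeholder:

```python
# Sketch: evaluating the tasks defined above via lm-evaluation-harness's
# Python API. The pretrained model name is a placeholder; swap in whatever
# checkpoint you actually evaluate.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                    # HuggingFace backend
    model_args="pretrained=EleutherAI/pythia-1.4b",
    tasks=["gsm8k", "gsm_plus"],
)
print(results["results"])  # per-task metrics, incl. both filter variants
```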