ifujisawa committed
Commit 182e513 · verified · 1 Parent(s): 1903229

Update README.md

Files changed (1):
  1. README.md +59 -1

README.md CHANGED
language:
- en
size_categories:
- 10K<n<100K
---

# Dataset Card for ProcBench

## Dataset Overview

### Dataset Description
ProcBench is a benchmark designed to evaluate the multi-step reasoning abilities of large language models (LLMs). It focuses on instruction followability, requiring models to solve problems by following explicit, step-by-step procedures. The tasks require no complex implicit knowledge, only strict adherence to the provided instructions; they are straightforward for humans yet become challenging for LLMs as the number of steps increases.

- **Curated by:** Araya, AI Alignment Network, AutoRes
- **Language(s) (NLP):** English
- **License:** CC-BY-4.0

### Dataset Sources
- **Repository:** [https://huggingface.co/datasets/ifujisawa/procbench](https://huggingface.co/datasets/ifujisawa/procbench)
- **Paper:** [https://arxiv.org/abs/2410.03117](https://arxiv.org/abs/2410.03117)
- **GitHub Repository:** [https://github.com/ifujisawa/proc-bench](https://github.com/ifujisawa/proc-bench)

## Uses

### Direct Use
ProcBench is intended for evaluating the instruction-following capability of LLMs on multi-step procedural tasks: it assesses how well models follow sequential instructions across a variety of task types.

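The dataset can be loaded straight from the Hub with the `datasets` library. The sketch below is illustrative only; the split name is an assumption, so check the dataset viewer for the actual splits and fields.

```python
# Minimal loading sketch. The split name ("test") is an assumption for
# illustration; consult the dataset viewer for the real configuration.
from datasets import load_dataset

ds = load_dataset("ifujisawa/procbench", split="test")
print(len(ds))  # number of examples in the split
print(ds[0])    # inspect one raw example
```
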
## Dataset Structure

### Overview
The dataset consists of 23 task types, with a total of 5,520 examples. Tasks involve operations such as string manipulation, list processing, and numeric computation. Each task is paired with explicit instructions, requiring the model to output intermediate states along with the final result.

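Because models must report intermediate states, a graded prediction is a sequence of states rather than a single answer. Below is a toy illustration of that shape with hypothetical field names; it is not an actual ProcBench example.

```python
# Toy illustration of the target structure (hypothetical field names,
# not a real ProcBench example): the state after every step is graded,
# not just the final answer.
example = {
    "question": "Start with 'procbench'. Repeat twice: remove the first character.",
    "target": {
        "intermediate": ["rocbench", "ocbench"],  # state after each step
        "final": "ocbench",
    },
}
```
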
### Difficulty Levels
Tasks are categorized into three difficulty levels by step count:
- **Short**: 2-6 steps
- **Medium**: 7-16 steps
- **Long**: 17-25 steps

The difficulty levels can be obtained by running the preprocessing script `preprocess.py` provided in the GitHub repository; an illustrative binning is sketched below.

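A minimal sketch of that binning, mirroring the thresholds listed above; it is illustrative only, not the actual logic of `preprocess.py`.

```python
# Illustrative binning of examples into difficulty levels by step count.
# Mirrors the thresholds listed above; not the actual preprocess.py logic.
def difficulty(num_steps: int) -> str:
    if 2 <= num_steps <= 6:
        return "short"
    if 7 <= num_steps <= 16:
        return "medium"
    if 17 <= num_steps <= 25:
        return "long"
    raise ValueError(f"unexpected step count: {num_steps}")
```
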
## Dataset Creation

### Curation Rationale
ProcBench was created to evaluate LLMs’ ability to follow instructions in a procedural manner. The goal is to isolate and test instruction followability without reliance on complex implicit knowledge, offering a unique perspective on procedural reasoning.

### Source Data

#### Data Collection and Processing
Each example combines a template with a question. Each task has a fixed template that spells out the procedure for solving the question. All templates, along with the generators used to create the questions, are available in the GitHub repository.

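Conceptually, the model input is simply the task's fixed template followed by the generated question. A minimal sketch, with hypothetical names and placeholder text:

```python
# Minimal sketch of assembling a model input from a task's fixed template
# (the procedure) and a generated question. Names and text are placeholders.
def build_prompt(template: str, question: str) -> str:
    return f"{template}\n\n{question}"

prompt = build_prompt(
    "Follow the steps below exactly, writing down the state after every step.",
    "Step 1: ...",  # the generated, task-specific question
)
```
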
## Citation

**BibTeX:**
```bibtex
@misc{fujisawa2024procbench,
      title={ProcBench: Benchmark for Multi-Step Reasoning and Following Procedure},
      author={Ippei Fujisawa and Sensho Nobe and Hiroki Seto and Rina Onda and Yoshiaki Uchida and Hiroki Ikoma and Pei-Chun Chien and Ryota Kanai},
      year={2024},
      eprint={2410.03117},
      archivePrefix={arXiv},
      primaryClass={cs.AI}
}
```