---
dataset_info:
  features:
    - name: task_id
      dtype: string
    - name: prompt
      dtype: string
    - name: entry_point
      dtype: string
    - name: canonical_solution
      dtype: string
    - name: test
      dtype: string
  splits:
    - name: train
      num_bytes: 233557
      num_examples: 82
  download_size: 140824
  dataset_size: 233557
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

This is the dataset accompanying the paper https://arxiv.org/abs/2503.05860.

# HumanEvalNext

HumanEvalNext is an improved version of the HumanEval code generation benchmark. The improvements are based on the BenchFrame framework, which aims to enhance benchmark quality through a rigorous improvement process combined with peer review. An overview of the approach and its application to HumanEval is illustrated below.

*Figure 1. The process of improving HumanEval through the BenchFrame framework.*
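
Each example follows the HumanEval schema listed in the metadata above (`task_id`, `prompt`, `entry_point`, `canonical_solution`, `test`). A minimal loading sketch with the `datasets` library follows; the repo ID is a placeholder, so substitute this dataset's actual Hub path:

```python
from datasets import load_dataset

# Placeholder repo ID: replace with this dataset's actual Hugging Face Hub path.
ds = load_dataset("user/HumanEvalNext", split="train")

print(ds)                       # 82 examples
example = ds[0]
print(example["task_id"])       # task identifier
print(example["prompt"])        # function signature and docstring given to the model
print(example["entry_point"])   # name of the function under test
```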

The evaluation results of the benchmark can be found here. A more detailed description of HumanEvalNext can be found here. We find the following results when benchmarking 10 state-of-the-art open-weight models.

*Figure 2. Comparison of HumanEval with two enhanced versions (HumanEvalNext and EvalPlus).*
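
For reference, a single task can be checked by executing the prompt, a solution, and the tests together. The sketch below assumes HumanEvalNext keeps HumanEval's convention that the `test` field defines a `check(candidate)` function that raises on failure; it uses `ds` from the loading snippet above, and since it runs generated code with `exec`, it should be sandboxed in practice:

```python
def run_task(example: dict) -> None:
    """Execute the canonical solution against the task's tests.

    Assumes HumanEval's convention: `test` defines check(candidate),
    which raises AssertionError on failure.
    """
    program = example["prompt"] + example["canonical_solution"] + "\n" + example["test"]
    namespace: dict = {}
    exec(program, namespace)                               # define the solution and check()
    namespace["check"](namespace[example["entry_point"]])  # raises if a test fails

run_task(ds[0])
print("canonical solution passed")
```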