---
dataset_info:
  features:
  - name: qid
    dtype: string
  - name: ground_truth_solution
    dtype: string
  - name: image_description
    dtype: string
  - name: test_script
    dtype: string
  - name: function_signature
    dtype: string
  - name: image
    dtype: image
  splits:
  - name: test
    num_bytes: 12840101
    num_examples: 108
  download_size: 12571814
  dataset_size: 12840101
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
license: apache-2.0
task_categories:
- image-to-text
language:
- en
tags:
- code
pretty_name: humanevalv
---

## HumanEval-V: Evaluating Visual Understanding and Reasoning Abilities of LMMs Through Coding Tasks
<p align="left">
    <a href="https://arxiv.org/abs/2410.12381">πŸ“„ Paper </a> β€’
    <a href="https://humaneval-v.github.io">🏠 Home Page</a> β€’
    <a href="https://github.com/HumanEval-V/HumanEval-V-Benchmark">πŸ’» GitHub Repository </a> β€’
    <a href="https://humaneval-v.github.io/#leaderboard">πŸ† Leaderboard</a> β€’
    <a href="https://huggingface.co/spaces/HumanEval-V/HumanEval-V-Benchmark-Viewer">πŸ€— Dataset Viewer</a> 
</p>

HumanEval-V includes 108 carefully crafted, entry-level Python coding tasks. Large Multimodal Models (LMMs) are required to complete the code solution based on the provided visual context and a predefined Python function signature outlining the task requirements. Every task is equipped with meticulously handcrafted test cases for execution-based pass@k evaluation.

## Dataset Structure
Each task in the dataset consists of the following fields:

- **qid**: Unique identifier for each coding task (e.g., _q1_, with mutated versions like _q1-2_, _q1-3_).
- **image**: A single image containing the essential visual context necessary to solve the task.
- **function_signature**: Includes the problem description, necessary imports, and the function signature that the LMMs must complete.
- **test_script**: Test cases used to validate the correctness of the generated code.
- **ground_truth_solution**: Expert-crafted solutions provided for reference but not used during the evaluation process.
- **image_description**: Human-labeled descriptions of the images, used for experimental analysis (not part of the benchmark evaluation).

## Prompt Format
Each task is formatted with a clear instruction and the provided function signature to guide the model in generating its code solution:

````markdown
**Instructions:**
You are an exceptionally intelligent coding assistant that consistently delivers accurate and reliable responses to user instructions.
Please complete the function based on the provided image and code context. Return the complete solution, including the function signature, in a single response, formatted within a Python code block.

**Code Context:**
```python
{code_context}
```
````
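For reference, here is a minimal sketch of how this prompt might be assembled from a task's `function_signature` field. The `build_prompt` helper is illustrative and not part of the official benchmark code:

```python
def build_prompt(code_context: str) -> str:
    """Fill the prompt template above with a task's function_signature."""
    return (
        "**Instructions:**\n"
        "You are an exceptionally intelligent coding assistant that consistently "
        "delivers accurate and reliable responses to user instructions.\n"
        "Please complete the function based on the provided image and code context. "
        "Return the complete solution, including the function signature, in a single "
        "response, formatted within a Python code block.\n\n"
        "**Code Context:**\n"
        f"```python\n{code_context}\n```"
    )
```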

After the LMM generates a response, the code solution is extracted and validated using the following process:
- Extracting the content of the Python code block from the response.
- Parsing the extracted code with an Abstract Syntax Tree (AST) parser to detect imports, class definitions, and functions (a sketch of this step follows the list).
- Concatenating these components into the final predicted solution, which is then tested for correctness.
- Evaluating the predicted solution with an execution-based metric, specifically **pass@k**.
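The following is a rough sketch of the extraction step, assuming a standard AST-based approach; it is not the official evaluation code, and error handling is omitted for brevity:

```python
import ast
import re

def extract_solution(response: str) -> str:
    """Illustrative sketch: pull the code block out of the model response,
    then keep only imports, class definitions, and function definitions
    found by the AST parser."""
    # Grab the first ``` code block; fall back to the raw response.
    match = re.search(r"```(?:python)?\n(.*?)```", response, re.DOTALL)
    code = match.group(1) if match else response

    kept_nodes = []
    for node in ast.parse(code).body:
        if isinstance(node, (ast.Import, ast.ImportFrom, ast.ClassDef,
                             ast.FunctionDef, ast.AsyncFunctionDef)):
            kept_nodes.append(node)

    # Concatenate the kept components into the final predicted solution.
    return "\n\n".join(ast.unparse(node) for node in kept_nodes)
```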


## Usage
You can easily load the dataset using the Hugging Face `datasets` library.

```python
from datasets import load_dataset
humaneval_v = load_dataset("HumanEval-V/HumanEval-V-Benchmark", split="test")
```
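Once loaded, each example exposes the fields described in the Dataset Structure section. A quick way to inspect a single task (the printed fields follow the dataset schema; the file name is illustrative):

```python
task = humaneval_v[0]                  # one coding task
print(task["qid"])                     # task identifier, e.g. "q1"
print(task["function_signature"])      # problem description + signature to complete
print(task["test_script"])             # test cases for execution-based evaluation
task["image"].save("task_image.png")   # the visual context as a PIL image
```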

## Citation
```bibtex
@article{zhang2024humanevalv,
  title={HumanEval-V: Evaluating Visual Understanding and Reasoning Abilities of Large Multimodal Models Through Coding Tasks}, 
  author={Zhang, Fengji and Wu, Linquan and Bai, Huiyu and Lin, Guancheng and Li, Xiao and Yu, Xiao and Wang, Yue and Chen, Bei and Keung, Jacky},
  journal={arXiv preprint arXiv:2410.12381},
  year={2024},
}
```