---
dataset_info:
  features:
  - name: qid
    dtype: string
  - name: ground_truth_solution
    dtype: string
  - name: image_description
    dtype: string
  - name: test_script
    dtype: string
  - name: function_signature
    dtype: string
  - name: image
    dtype: image
  splits:
  - name: test
    num_bytes: 12840101
    num_examples: 108
  download_size: 12571814
  dataset_size: 12840101
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
license: apache-2.0
task_categories:
- image-to-text
language:
- en
tags:
- code
pretty_name: humanevalv
---

## HumanEval-V: Evaluating Visual Understanding and Reasoning Abilities of LMMs Through Coding Tasks

<p align="left">
<a href="https://humaneval-v.github.io">🏠 Home Page</a> •
<a href="https://github.com/HumanEval-V/HumanEval-V-Benchmark">💻 GitHub Repository</a> •
<a href="https://humaneval-v.github.io/#leaderboard">🏆 Leaderboard</a> •
<a href="">🤗 Dataset Viewer</a> •
<a href="">📄 Paper</a>
</p>

HumanEval-V includes 108 carefully crafted, entry-level Python coding tasks. LMMs are required to complete a code solution based on the provided visual context and a predefined Python function signature that outlines the task requirements. Every task is equipped with meticulously handcrafted test cases for execution-based pass@k evaluation.

## Dataset Structure

Each task in the dataset consists of the following fields (a short inspection snippet follows the list):

- **qid**: Unique identifier for each coding task (e.g., _q1_, with mutated versions like _q1-2_, _q1-3_).
- **image**: A single image containing the essential visual context necessary to solve the task.
- **function_signature**: The problem description, necessary imports, and the function signature that the LMM must complete.
- **test_script**: Test cases used to validate the correctness of the generated code.
- **ground_truth_solution**: Expert-crafted solutions provided for reference but not used during the evaluation process.
- **image_description**: Human-labeled descriptions of the images, used for experimental analysis (not part of the benchmark evaluation).
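
To make the schema concrete, here is a minimal inspection sketch. It mirrors the loading call from the Usage section below; the comments simply restate what each field holds.

```python
from datasets import load_dataset

# Load the single "test" split (108 tasks) and look at one record.
humaneval_v = load_dataset("HumanEval-V/HumanEval-V-Benchmark", split="test")

task = humaneval_v[0]
print(task["qid"])                    # unique task identifier, e.g. "q1"
print(task["function_signature"])     # problem description + signature to complete
print(task["test_script"])            # handcrafted test cases for pass@k evaluation
print(task["ground_truth_solution"])  # expert reference solution (not used in evaluation)
print(task["image_description"])      # human-labeled image description (analysis only)
image = task["image"]                 # PIL.Image with the essential visual context
```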

## Prompt Format

Each task is formatted with a clear instruction and the provided function signature to guide the model in generating the code solution:

````markdown
**Instructions:**
You are an exceptionally intelligent coding assistant that consistently delivers accurate and reliable responses to user instructions.
Please complete the function based on the provided image and code context. Return the complete solution, including the function signature, in a single response, formatted within a Python code block.

**Code Context:**
```python
{code_context}
```
````
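
As an illustration, the template above can be filled from a task's `function_signature` field. The helper below is only a sketch: the name `build_prompt` is ours, not part of the benchmark code, and the image itself is passed to the LMM separately as visual input.

````python
# Minimal sketch: fill the prompt template from one task's `function_signature`.
# `build_prompt` is an illustrative helper, not part of the official evaluation code.
PROMPT_TEMPLATE = """**Instructions:**
You are an exceptionally intelligent coding assistant that consistently delivers accurate and reliable responses to user instructions.
Please complete the function based on the provided image and code context. Return the complete solution, including the function signature, in a single response, formatted within a Python code block.

**Code Context:**
```python
{code_context}
```"""

def build_prompt(task: dict) -> str:
    # The task's image is sent to the LMM as a separate visual input;
    # only the textual part of the prompt is constructed here.
    return PROMPT_TEMPLATE.format(code_context=task["function_signature"])
````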

After the LMM generates a response, the code solution is extracted and validated through the following steps (illustrative sketches follow the list):

- Extracting the content of the Python code block from the response.
- Parsing the extracted code with an Abstract Syntax Tree (AST) parser to detect imports, class definitions, and functions.
- Concatenating these components to form the final predicted solution.
- Evaluating the predicted solution against the test script using an execution-based metric, specifically **pass@k**.
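
The following is a rough sketch of such an extraction step; the helper names (`extract_code_block`, `build_solution`) are illustrative and not taken from the benchmark's released code. It assumes Python 3.9+ for `ast.unparse`.

````python
import ast
import re

def extract_code_block(response: str) -> str:
    """Pull the contents of the first Python code block out of an LMM response."""
    match = re.search(r"```python\n(.*?)```", response, re.DOTALL)
    return match.group(1) if match else response

def build_solution(code: str) -> str:
    """Keep imports, class definitions, and functions found by the AST parser,
    and concatenate them into the final predicted solution."""
    kept_types = (ast.Import, ast.ImportFrom, ast.ClassDef,
                  ast.FunctionDef, ast.AsyncFunctionDef)
    tree = ast.parse(code)  # may raise SyntaxError on malformed generations
    parts = [ast.unparse(node) for node in tree.body if isinstance(node, kept_types)]
    return "\n\n".join(parts)
````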
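For reference, pass@k is typically computed with the unbiased estimator introduced alongside the original HumanEval benchmark. The sketch below assumes `n` solutions are sampled per task and `c` of them pass the test script; it is not taken from the benchmark's released code.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: n samples drawn for a task, c of them correct."""
    if n - c < k:
        return 1.0
    return float(1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))
```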

## Usage

You can load the dataset with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

humaneval_v = load_dataset("HumanEval-V/HumanEval-V-Benchmark", split="test")
```