pretty_name: humanevalv
---

## HumanEval-V: Evaluating Visual Understanding and Reasoning Abilities of Large Multimodal Models Through Coding Tasks
<p align="left">
<a href="https://humaneval-v.github.io">🏠 Home Page</a> •
<a href="https://github.com/HumanEval-V/HumanEval-V-Benchmark">💻 GitHub Repository</a> •
<a href="https://humaneval-v.github.io/#leaderboard">🏆 Leaderboard</a> •
<a href="">🤗 Dataset Viewer</a> •
<a href="">📄 Paper</a>
</p>

HumanEval-V includes 108 carefully crafted, entry-level Python coding tasks. LMMs are required to complete the code solution based on the provided visual context and a predefined Python function signature outlining the task requirements. Every task is equipped with meticulously handcrafted test cases for execution-based pass@k evaluation.

## Dataset Structure
Each task in the dataset consists of the following fields:

- **qid**: Unique identifier for each coding task (e.g., _q1_, with mutated versions like _q1-2_, _q1-3_).
- **image**: A single image containing the essential visual context necessary to solve the task.
- **function_signature**: Includes the problem description, necessary imports, and the function signature that the LMMs must complete.
- **test_script**: Test cases used to validate the correctness of the generated code.
- **ground_truth_solution**: Expert-crafted solutions provided for reference but not used during the evaluation process.
- **image_description**: Human-labeled descriptions of the images, used for experimental analysis (not part of the benchmark evaluation).

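To see how these fields look in practice, here is a minimal sketch that loads the benchmark (with the same call shown in the Usage section below) and inspects a single task. Treating `image` as a PIL image that can be saved to disk is an assumption about how the image feature is stored.

```python
from datasets import load_dataset

# Load the benchmark split (same call as in the Usage section below).
humaneval_v = load_dataset("HumanEval-V/HumanEval-V-Benchmark", split="test")

# Inspect a single task; field names follow the list above.
task = humaneval_v[0]
print(task["qid"])                 # unique task identifier, e.g. "q1"
print(task["function_signature"])  # problem description, imports, signature to complete
print(task["test_script"])         # handcrafted test cases for pass@k evaluation

# Assumption: the image feature is decoded as a PIL image.
task["image"].save("visual_context.png")
```
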
## Prompt Format
Each task is formatted with a clear instruction and the provided function signature to guide the model in generating the code solution:

````markdown
**Instructions:**
You are an exceptionally intelligent coding assistant that consistently delivers accurate and reliable responses to user instructions.
Please complete the function based on the provided image and code context. Return the complete solution, including the function signature, in a single response, formatted within a Python code block.

**Code Context:**
```python
{code_context}
```
````

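As an illustration, the sketch below assembles this prompt for one task. It assumes that the `function_signature` field is what fills the `{code_context}` placeholder; that mapping is inferred from the field descriptions above rather than stated explicitly, and `build_prompt` is a hypothetical helper.

````python
# Hypothetical helper: fill the prompt template above for one task.
INSTRUCTIONS = (
    "**Instructions:**\n"
    "You are an exceptionally intelligent coding assistant that consistently "
    "delivers accurate and reliable responses to user instructions.\n"
    "Please complete the function based on the provided image and code context. "
    "Return the complete solution, including the function signature, in a single "
    "response, formatted within a Python code block.\n"
    "\n"
    "**Code Context:**\n"
)

def build_prompt(task: dict) -> str:
    # Assumption: `function_signature` supplies the {code_context} placeholder.
    return f"{INSTRUCTIONS}```python\n{task['function_signature']}\n```"
````

The resulting text prompt is sent to the LMM together with `task["image"]` as the visual input.
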
After the LMM generates a response, the code solution is extracted and validated using the following process (sketched in code after this list):
- Extraction of the content within the code block.
- Parsing of the generated code to detect imports, class definitions, and functions using an Abstract Syntax Tree (AST) parser.
- Concatenation of these components to form the final predicted solution, which is then tested for correctness.
- Evaluation of the predicted solution with an execution-based metric, specifically **pass@k**.

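The sketch below approximates this post-processing under simple assumptions: the first Python code block is pulled out with a regular expression, and Python's built-in `ast` module keeps only imports, class definitions, and functions before concatenating them. It illustrates the described steps rather than reproducing the benchmark's official evaluation code.

```python
import ast
import re

def extract_predicted_solution(response: str) -> str:
    """Approximate the extraction / parsing / concatenation steps above."""
    # 1. Extract the content of the first ```python block (fall back to the raw text).
    match = re.search(r"```python\n(.*?)```", response, re.DOTALL)
    code = match.group(1) if match else response

    # 2. Keep only imports, class definitions, and function definitions via the AST.
    kept_nodes = [
        node
        for node in ast.parse(code).body
        if isinstance(node, (ast.Import, ast.ImportFrom, ast.ClassDef,
                             ast.FunctionDef, ast.AsyncFunctionDef))
    ]

    # 3. Concatenate the components into the final predicted solution,
    #    which is then executed against the task's test_script.
    #    (ast.unparse requires Python 3.9+.)
    return "\n\n".join(ast.unparse(node) for node in kept_nodes)
```
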
## Usage
You can easily load the dataset using the Hugging Face `datasets` library.

```python
from datasets import load_dataset
humaneval_v = load_dataset("HumanEval-V/HumanEval-V-Benchmark", split="test")
```
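
For reference, pass@k is commonly reported with the unbiased estimator introduced alongside the original HumanEval benchmark: with n generated samples per task, of which c pass the tests, pass@k = 1 - C(n-c, k) / C(n, k), averaged over tasks. The sketch below implements that standard formula; the exact n and k settings used for HumanEval-V are not specified here.

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: n samples per task, c of them passing."""
    if n - c < k:
        return 1.0
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

# Example: 20 samples for one task, 5 pass its test_script -> pass@1 estimate of 0.25.
print(pass_at_k(n=20, c=5, k=1))
```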