Update README.md
README.md CHANGED
@@ -6,5 +6,13 @@ colorTo: red
 sdk: streamlit
 pinned: false
 ---
+### HumanEval-V: A Lightweight Visual Understanding and Reasoning Benchmark for Evaluating LMMs through Coding Tasks
 
-
+<p align="center"> <a href="https://humaneval-v.github.io">🏠 Home Page</a> • <a href="https://github.com/HumanEval-V/HumanEval-V-Benchmark">💻 GitHub Repository</a> • <a href="https://humaneval-v.github.io/#leaderboard">🏆 Leaderboard</a> • <a href="">🤗 Dataset Viewer</a> • <a href="">📄 Paper</a> </p>
+
+**HumanEval-V** is a novel and lightweight benchmark designed to evaluate the visual understanding and reasoning capabilities of Large Multimodal Models (LMMs) through coding tasks. The dataset comprises **108 entry-level Python programming challenges**, adapted from platforms such as CodeForces and Stack Overflow. Each task includes **visual context that is indispensable to the problem**, requiring models to perceive, reason, and generate Python code solutions accordingly.
+
+Key features:
+- **Visual coding tasks** that require understanding images to solve.
+- **Entry-level difficulty**, making it ideal for assessing the baseline performance of foundational LMMs.
+- **Handcrafted test cases** for evaluating code correctness through the execution-based **pass@k** metric.
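
For context on the **pass@k** metric named in the last bullet: it is conventionally computed with the unbiased estimator introduced alongside the original HumanEval benchmark (Chen et al., 2021), where n code samples are generated per task and c of them pass all handcrafted test cases. The sketch below is a minimal illustration of that standard estimator, not necessarily the exact implementation used by the HumanEval-V evaluation harness.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).

    n: total code samples generated for a task
    c: number of samples that pass all handcrafted test cases
    k: evaluation budget (e.g., 1 or 10)
    """
    if n - c < k:
        # Fewer than k failing samples: every size-k subset contains a passing one.
        return 1.0
    # 1 minus the probability that a random size-k subset contains no passing sample.
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Example: 20 samples generated for a task, 3 of which pass the tests.
print(round(pass_at_k(n=20, c=3, k=1), 3))   # 0.15
print(round(pass_at_k(n=20, c=3, k=10), 3))  # ~0.895
```

Per-task scores computed this way are then averaged over all 108 tasks to give the benchmark-level pass@k reported on the leaderboard.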