---
title: README
emoji: πŸ’»
colorFrom: green
colorTo: red
sdk: streamlit
pinned: false
---
### HumanEval-V: A Lightweight Visual Understanding and Reasoning Benchmark for Evaluating LMMs through Coding Tasks

<p align="center"> <a href="">πŸ“„ Paper</a> β€’ <a href="https://humaneval-v.github.io">🏠 Home Page</a> β€’ <a href="https://github.com/HumanEval-V/HumanEval-V-Benchmark">πŸ’» GitHub Repository</a> β€’ <a href="https://humaneval-v.github.io/#leaderboard">πŸ† Leaderboard</a> β€’ <a href="https://huggingface.co/datasets/HumanEval-V/HumanEval-V-Benchmark">πŸ€— Dataset</a> β€’ <a href="https://huggingface.co/spaces/HumanEval-V/HumanEval-V-Benchmark-Viewer">πŸ€— Dataset Viewer</a>  </p> 

**HumanEval-V** is a novel and lightweight benchmark designed to evaluate the visual understanding and reasoning capabilities of Large Multimodal Models (LMMs) through coding tasks. The dataset comprises **108 entry-level Python programming challenges**, adapted from platforms like CodeForces and Stack Overflow. Each task includes **visual context that is indispensable to the problem**, requiring models to perceive, reason, and generate Python code solutions accordingly.
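
A minimal sketch of loading the benchmark with the πŸ€— `datasets` library. The split name and field access below are assumptions for illustration; consult the dataset card for the exact schema.

```python
from datasets import load_dataset

# Load HumanEval-V from the Hugging Face Hub.
# NOTE: split="test" and the printed fields are assumptions; check the
# dataset card for the actual split names and column schema.
ds = load_dataset("HumanEval-V/HumanEval-V-Benchmark", split="test")

example = ds[0]
print(example.keys())  # inspect available fields (e.g., task id, image, function signature)
```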

Key features:
- **Visual coding tasks** that cannot be solved without understanding the accompanying image.
- **Entry-level difficulty**, making it ideal for assessing the baseline performance of foundational LMMs.
- **Handcrafted test cases** for evaluating code correctness through the execution-based **pass@k** metric (a minimal estimator sketch is shown below).
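
For reference, here is a minimal sketch of the standard unbiased pass@k estimator (in the style of Chen et al., 2021). The function name and use of NumPy are illustrative; this is not the benchmark's official evaluation code.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples drawn
    from n generations (of which c are correct) passes the test cases."""
    if n - c < k:
        # Fewer than k incorrect samples exist, so any k-subset contains a correct one.
        return 1.0
    return float(1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Example: 3 correct solutions out of 10 generations, evaluated at k=1.
print(pass_at_k(n=10, c=3, k=1))  # 0.3
```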