---
title: README
emoji: 💻
colorFrom: green
colorTo: red
sdk: streamlit
pinned: false
---
### HumanEval-V: A Lightweight Visual Understanding and Reasoning Benchmark for Evaluating LMMs through Coding Tasks
<p align="center"> <a href="https://humaneval-v.github.io">🏠 Home Page</a> • <a href="https://github.com/HumanEval-V/HumanEval-V-Benchmark">💻 GitHub Repository</a> • <a href="https://humaneval-v.github.io/#leaderboard">🏆 Leaderboard</a> • <a href="">🤗 Dataset Viewer</a> • <a href="">📄 Paper</a> </p>
**HumanEval-V** is a novel and lightweight benchmark designed to evaluate the visual understanding and reasoning capabilities of Large Multimodal Models (LMMs) through coding tasks. The dataset comprises **108 entry-level Python programming challenges**, adapted from platforms like CodeForces and Stack Overflow. Each task includes **visual context that is indispensable to the problem**, requiring models to perceive, reason, and generate Python code solutions accordingly.
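
A minimal sketch of loading the benchmark with the Hugging Face `datasets` library is shown below; the dataset ID, split name, and field access are assumptions based on the GitHub organization above, not details confirmed by this card.

```python
# Sketch only: the dataset ID and split are assumptions, not taken from this card.
from datasets import load_dataset

dataset = load_dataset("HumanEval-V/HumanEval-V-Benchmark", split="test")

# Each entry pairs a coding task with the image that is indispensable to solving it.
example = dataset[0]
print(example.keys())  # actual field names depend on the released schema
```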
Key features: | |
- **Visual coding tasks** that require understanding images to solve. | |
- **Entry-level difficulty**, making it ideal for assessing the baseline performance of foundational LMMs. | |
- **Handcrafted test cases** for evaluating code correctness through the execution-based **pass@k** metric (see the estimator sketch below).
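
The pass@k scores are conventionally computed with the unbiased estimator introduced alongside the original HumanEval benchmark. The snippet below is a small illustrative sketch of that estimator, not the benchmark's official evaluation harness; `n`, `c`, and `k` denote the number of sampled solutions, the number that pass the tests, and the budget being scored.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples,
    drawn from n generated solutions with c correct, passes the tests."""
    if n - c < k:
        return 1.0
    # 1 - C(n-c, k) / C(n, k), computed as a numerically stable running product
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Example: 20 sampled solutions per task, 5 of which pass the handcrafted tests
print(pass_at_k(n=20, c=5, k=1))   # 0.25
print(pass_at_k(n=20, c=5, k=10))  # ~0.98
```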