zfj1998 committed
Commit a4473b4 • 1 Parent(s): 9ccddbe

Update README.md

Files changed (1): README.md (+9 −1)

README.md CHANGED
@@ -6,5 +6,13 @@ colorTo: red
 sdk: streamlit
 pinned: false
 ---
+### HumanEval-V: A Lightweight Visual Understanding and Reasoning Benchmark for Evaluating LMMs through Coding Tasks
 
-Edit this `README.md` markdown file to author your organization card.
+<p align="center"> <a href="https://humaneval-v.github.io">🏠 Home Page</a> • <a href="https://github.com/HumanEval-V/HumanEval-V-Benchmark">💻 GitHub Repository</a> • <a href="https://humaneval-v.github.io/#leaderboard">🏆 Leaderboard</a> • <a href="">🤗 Dataset Viewer</a> • <a href="">📄 Paper</a> </p>
+
+**HumanEval-V** is a novel and lightweight benchmark designed to evaluate the visual understanding and reasoning capabilities of Large Multimodal Models (LMMs) through coding tasks. The dataset comprises **108 entry-level Python programming challenges**, adapted from platforms such as CodeForces and Stack Overflow. Each task includes **visual context that is indispensable to the problem**, requiring models to perceive the image, reason over it, and generate a Python code solution accordingly.
+
+Key features:
+- **Visual coding tasks** that require understanding images to solve.
+- **Entry-level difficulty**, making it ideal for assessing the baseline performance of foundational LMMs.
+- **Handcrafted test cases** for evaluating code correctness through the execution-based **pass@k** metric (a minimal estimator sketch follows the diff).
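
For reference, pass@k is conventionally computed with the unbiased estimator introduced in the original HumanEval paper (Chen et al., 2021). The sketch below is a minimal implementation of that standard estimator, assuming `n` solutions are sampled per task and `c` of them pass all of the task's test cases; it is illustrative only and is not taken from the HumanEval-V evaluation harness.

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021):
    pass@k = 1 - C(n - c, k) / C(n, k).

    n: number of solutions sampled for a task
    c: number of those solutions that pass all test cases
    k: sample budget being scored
    """
    if n - c < k:
        # Fewer than k failing samples exist, so every size-k draw
        # contains at least one correct solution.
        return 1.0
    # Numerically stable product form of 1 - C(n - c, k) / C(n, k).
    return 1.0 - math.prod(1.0 - k / i for i in range(n - c + 1, n + 1))

# Example: 20 samples per task, 3 of which pass the test cases.
print(pass_at_k(20, 3, 1))   # ≈ 0.15
print(pass_at_k(20, 3, 10))  # ≈ 0.8947
```

Here `c` would come from executing each generated solution against the task's handcrafted test cases (typically in a sandboxed interpreter), with per-task scores then averaged over the benchmark's 108 tasks.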