Generative Evaluation of Complex Reasoning in Large Language Models
Abstract
With powerful large language models (LLMs) demonstrating superhuman reasoning capabilities, a critical question arises: Do LLMs genuinely reason, or do they merely recall answers from their extensive, web-scraped training datasets? Publicly released benchmarks inevitably become contaminated once incorporated into subsequent LLM training sets, undermining their reliability as faithful assessments. To address this, we introduce KUMO, a generative evaluation framework designed specifically for assessing reasoning in LLMs. KUMO combines LLMs with symbolic engines to dynamically produce diverse, multi-turn reasoning tasks that are partially observable and adjustable in difficulty. Through an automated pipeline, KUMO continuously generates novel tasks across open-ended domains, compelling models to demonstrate genuine generalization rather than memorization. We evaluated 23 state-of-the-art LLMs on 5,000 tasks across 100 domains created by KUMO, benchmarking their reasoning abilities against university students. Our findings reveal that many LLMs surpass university-level performance on easy reasoning tasks, while reasoning-scaled LLMs reach university-level performance on complex reasoning challenges. Moreover, LLM performance on KUMO tasks correlates strongly with results on newly released real-world reasoning benchmarks, underscoring KUMO's value as a robust, enduring assessment tool for genuine LLM reasoning capabilities.
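To make the setup concrete, below is a minimal, hypothetical sketch of a generative evaluation loop in the spirit described above: a symbolic generator samples a fresh, partially observable task with a tunable difficulty knob, and a model interacts with it over multiple turns before committing to an answer. All names here (`Task`, `generate_task`, `run_episode`) are illustrative assumptions, not the released KUMO code.

```python
# Hypothetical sketch of a generative, multi-turn, partially observable evaluation loop.
# NOT the actual KUMO implementation; names and structures are assumptions for illustration.
import random
from dataclasses import dataclass

@dataclass
class Task:
    """A partially observable deduction task over a finite candidate set."""
    candidates: list    # possible answers (e.g., diagnoses, culprits)
    truth: str          # hidden ground-truth answer
    evidence: dict      # action name -> observation revealed by taking that action
    max_turns: int = 6  # difficulty knob: fewer turns = harder

def generate_task(domain_terms, difficulty, seed):
    """Symbolic generator: samples a fresh task so answers cannot be memorized."""
    rng = random.Random(seed)
    k = min(4 + difficulty, len(domain_terms))
    candidates = rng.sample(domain_terms, k)
    truth = rng.choice(candidates)
    evidence = {f"test_{c}": ("positive" if c == truth else "negative")
                for c in candidates}
    return Task(candidates, truth, evidence, max_turns=max(2, 8 - difficulty))

def run_episode(task, agent_step):
    """agent_step(history, candidates) -> action string. Returns True on a correct answer."""
    history = []
    for _ in range(task.max_turns):
        action = agent_step(history, task.candidates)
        if action.startswith("answer:"):           # the model commits to a final answer
            return action.split(":", 1)[1].strip() == task.truth
        obs = task.evidence.get(action, "invalid action")  # partial observation only
        history.append((action, obs))
    return False  # turn budget exhausted without an answer

# Trivial usage: a baseline "agent" that immediately guesses the first candidate.
task = generate_task(["flu", "cold", "allergy", "migraine", "covid"], difficulty=1, seed=0)
print(run_episode(task, lambda history, cands: "answer: " + cands[0]))
```

In a real evaluation, `agent_step` would wrap an LLM call, and accuracy would be averaged over many freshly generated tasks per domain and difficulty level.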
Community
Excited to present KUMO, a generative evaluation benchmark for LLMs. Unlike static benchmarks, KUMO dynamically generates diverse, multi-turn reasoning tasks with controllable difficulty—avoiding data leakage and ensuring trustworthy evaluation.
📄 Paper: https://arxiv.org/pdf/2504.02810
Why KUMO?
✅ 95%+ correlation with SOTA reasoning benchmarks—synthetic but realistic!
✅ Avoids test-set contamination (no risk of pre-training data leaks).
✅ Controllable difficulty & domain diversity for fine-grained evaluation.
Key Findings:
1️⃣ Simple vs. Complex Reasoning: LLMs outperform undergrads on easy tasks, but only deep-thinking models match humans on hard problems.
2️⃣ Universal Difficulty Metric: KUMO can standardize difficulty across benchmarks (LiveBench-Reason ≈ KUMO-Hard).
3️⃣ Domain Matters! Model performance varies widely across fields (medical, gaming, etc.)—knowledge structure is key.
4️⃣ Generalization Challenge: Fine-tuning on expert trajectories fails when KUMO’s tasks evolve, demanding strong OOD/domain/difficulty generalization.
🌐 Beyond KUMO: Generative evaluation is the future! Our earlier work on agent evaluation (https://arxiv.org/pdf/2310.08367) also shows how dynamic benchmarks can transform evaluation into a science.
💡 Join Us! KUMO is open-source with RL-friendly reward signals.
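For readers wondering what "RL-friendly reward signals" could look like in practice, here is a hypothetical, gym-style wrapper around the `Task` structure sketched earlier; the interface below is an assumption for illustration, not the released KUMO API.

```python
# Illustrative only: a gym-style environment exposing a sparse terminal reward,
# assuming the hypothetical Task structure from the sketch above.
class TaskEnv:
    def __init__(self, task):
        self.task, self.turn = task, 0

    def step(self, action: str):
        """One model turn -> (observation, reward, done)."""
        self.turn += 1
        if action.startswith("answer:"):
            correct = action.split(":", 1)[1].strip() == self.task.truth
            return "episode over", (1.0 if correct else 0.0), True
        obs = self.task.evidence.get(action, "invalid action")
        return obs, 0.0, self.turn >= self.task.max_turns  # sparse reward, no shaping assumed
```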
Fantastic benchmark!
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- MastermindEval: A Simple But Scalable Reasoning Benchmark (2025)
- Towards Reasoning Ability of Small Language Models (2025)
- AutoLogi: Automated Generation of Logic Puzzles for Evaluating Reasoning Abilities of Large Language Models (2025)
- Think Like Human Developers: Harnessing Community Knowledge for Structured Code Reasoning (2025)
- Large Language Models Meet Symbolic Provers for Logical Reasoning Evaluation (2025)
- TextGames: Learning to Self-Play Text-Based Puzzle Games via Language Model Reasoning (2025)
- LogiDynamics: Unraveling the Dynamics of Logical Inference in Large Language Model Reasoning (2025)