Challenging the Boundaries of Reasoning: An Olympiad-Level Math Benchmark for Large Language Models
Abstract
In recent years, the rapid development of large reasoning models has resulted in the saturation of existing benchmarks for evaluating mathematical reasoning, highlighting the urgent need for more challenging and rigorous evaluation frameworks. To address this gap, we introduce OlymMATH, a novel Olympiad-level mathematical benchmark designed to rigorously test the complex reasoning capabilities of LLMs. OlymMATH features 200 meticulously curated problems, each manually verified and available in parallel English and Chinese versions. The problems are systematically organized into two distinct difficulty tiers: (1) AIME-level problems (easy) that establish a baseline for mathematical reasoning assessment, and (2) significantly more challenging problems (hard) designed to push the boundaries of current state-of-the-art models. The problems span four core mathematical fields, and each includes a verifiable numerical solution to enable objective, rule-based evaluation. Empirical results underscore the significant challenge presented by OlymMATH, with state-of-the-art models including DeepSeek-R1 and OpenAI's o3-mini demonstrating notably limited accuracy on the hard subset. Furthermore, the benchmark facilitates comprehensive bilingual assessment of mathematical reasoning abilities, a critical dimension that remains largely unaddressed in mainstream mathematical reasoning benchmarks. We release the OlymMATH benchmark at the STILL project: https://github.com/RUCAIBox/Slow_Thinking_with_LLMs.
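Because each problem ships a verifiable numerical answer for objective, rule-based evaluation, a grader can in principle reduce to a numeric comparison. The sketch below illustrates one possible checker; the helper names, the Fraction-based parsing, and the 1e-6 tolerance are illustrative assumptions, not the benchmark's actual grading script.

```python
from fractions import Fraction

def parse_number(text: str):
    """Parse a final-answer string into a rational number (illustrative helper).

    Handles plain integers, decimals, and simple fractions such as "3/7".
    Returns None if the string cannot be interpreted numerically.
    """
    text = text.strip().replace(",", "")
    try:
        return Fraction(text)  # accepts "42", "-3", "3/7", "0.125"
    except (ValueError, ZeroDivisionError):
        return None

def is_correct(model_answer: str, reference: str, tol: float = 1e-6) -> bool:
    """Rule-based check: correct if the parsed answer matches the reference
    numerically within a small tolerance (the tolerance is an assumption)."""
    pred, gold = parse_number(model_answer), parse_number(reference)
    if pred is None or gold is None:
        return False
    return abs(float(pred) - float(gold)) <= tol

# Example usage with hypothetical answers:
print(is_correct("3/7", "0.4285714286"))  # True: within tolerance
print(is_correct("12", "13"))             # False
```

In practice a grader would also need to extract the final answer from the model's full response and may normalize richer answer formats, so this sketch only captures the numeric-comparison step implied by the abstract.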
Community
A great new math benchmark for top-tier reasoning models!
We are happy to share our new mathematical benchmark, which is really challenging for reasoning models: even o3-mini shows notably limited accuracy on it (~30%).
We hope this benchmark can better assess the reasoning abilities of LLMs.
So basically these guys took matharena and wrote a paper on top?
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Unveiling the Mathematical Reasoning in DeepSeek Models: A Comparative Study of Large Language Models (2025)
- PromptCoT: Synthesizing Olympiad-level Problems for Mathematical Reasoning in Large Language Models (2025)
- UGPhysics: A Comprehensive Benchmark for Undergraduate Physics Reasoning with Large Language Models (2025)
- PhysReason: A Comprehensive Benchmark towards Physics-Based Reasoning (2025)
- AutoLogi: Automated Generation of Logic Puzzles for Evaluating Reasoning Abilities of Large Language Models (2025)
- ProBench: Benchmarking Large Language Models in Competitive Programming (2025)
- MMLU-ProX: A Multilingual Benchmark for Advanced Large Language Model Evaluation (2025)