Thoughts Are All Over the Place: On the Underthinking of o1-Like LLMs
Abstract
Large language models (LLMs) such as OpenAI's o1 have demonstrated remarkable abilities in complex reasoning tasks by scaling test-time compute and exhibiting human-like deep thinking. However, we identify a phenomenon we term underthinking, where o1-like LLMs frequently switch between different reasoning thoughts without sufficiently exploring promising paths to reach a correct solution. This behavior leads to inadequate depth of reasoning and decreased performance, particularly on challenging mathematical problems. To systematically analyze this issue, we conduct experiments on three challenging test sets and two representative open-source o1-like models, revealing that frequent thought switching correlates with incorrect responses. We introduce a novel metric to quantify underthinking by measuring token efficiency in incorrect answers. To address underthinking, we propose a decoding strategy with a thought switching penalty (TIP) that discourages premature transitions between thoughts, encouraging deeper exploration of each reasoning path. Experimental results demonstrate that our approach improves accuracy across challenging datasets without requiring model fine-tuning. Our findings contribute to understanding reasoning inefficiencies in o1-like LLMs and offer a practical solution to enhance their problem-solving capabilities.
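To give some intuition for how a thought-switching penalty could work in practice, here is a minimal sketch: at each decoding step, the logits of tokens that mark the start of a new thought (e.g., "Alternatively") are reduced while the current thought is still young, biasing the model toward deeper exploration without forbidding switches. The marker tokens, the penalty strength `alpha`, and the window `beta` below are illustrative assumptions, not the paper's exact configuration.

```python
import torch

def apply_thought_switch_penalty(
    logits: torch.Tensor,         # (vocab_size,) next-token logits
    switch_token_ids: list[int],  # ids of tokens that open a switch marker, e.g. "Alternatively"
    steps_since_switch: int,      # tokens generated since the last thought switch
    alpha: float = 3.0,           # penalty strength (assumed value)
    beta: int = 600,              # penalty window in tokens (assumed value)
) -> torch.Tensor:
    """Subtract a fixed penalty from thought-switch tokens while the
    current thought is fewer than `beta` tokens old."""
    if steps_since_switch < beta:
        logits = logits.clone()
        logits[switch_token_ids] -= alpha
    return logits
```

In a sampling loop one would track `steps_since_switch` per sequence, resetting it whenever a switch-marker token is actually emitted. Because the penalty only reshapes logits at decode time, it requires no fine-tuning, consistent with the training-free claim in the abstract.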
Community
Interesting insights. Thanks for sharing.
Good job: promising metric, nice insights. If the same procedure were applied to DeepSeek's equivalent model(s), that would be crucial information about DeepSeek's merits.
This is an automated message from Librarian Bot. I found the following papers similar to this one, as recommended by the Semantic Scholar API:
- REL: Working out is all you need (2024)
- Forest-of-Thought: Scaling Test-Time Compute for Enhancing LLM Reasoning (2024)
- Do NOT Think That Much for 2+3=? On the Overthinking of o1-Like LLMs (2024)
- Virgo: A Preliminary Exploration on Reproducing o1-like MLLM (2025)
- O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning (2025)
- LlamaV-o1: Rethinking Step-by-step Visual Reasoning in LLMs (2025)
- Imitate, Explore, and Self-Improve: A Reproduction Report on Slow-thinking Reasoning Systems (2024)