Landscape of Thoughts: Visualizing the Reasoning Process of Large Language Models
Abstract
Numerous applications of large language models (LLMs) rely on their ability to perform step-by-step reasoning. However, the reasoning behavior of LLMs remains poorly understood, posing challenges to research, development, and safety. To address this gap, we introduce landscape of thoughts, the first visualization tool that lets users inspect the reasoning paths of chain-of-thought and its derivatives on any multiple-choice dataset. Specifically, we represent the states in a reasoning path as feature vectors that quantify their distances to all answer choices. These features are then visualized in two-dimensional plots using t-SNE. Qualitative and quantitative analyses with the landscape of thoughts effectively distinguish between strong and weak models, correct and incorrect answers, and different reasoning tasks. The tool also uncovers undesirable reasoning patterns, such as low consistency and high uncertainty. Additionally, users can adapt it into a model that predicts the property they observe; we showcase this advantage by adapting the tool into a lightweight verifier that evaluates the correctness of reasoning paths. The code is publicly available at: https://github.com/tmlr-group/landscape-of-thoughts.
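The featurization described in the abstract can be sketched in a few lines: embed each intermediate reasoning state, represent it by its distances to the embeddings of all answer choices, and project the resulting feature vectors to 2-D with t-SNE. This is a minimal illustration, not the paper's implementation; the `embed` function below is a hypothetical stand-in for whatever sentence encoder or LLM-based distance measure the actual tool uses.

```python
import numpy as np
from sklearn.manifold import TSNE


def embed(text: str) -> np.ndarray:
    """Hypothetical sentence embedder (placeholder for a real encoder)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(16)


def state_features(states: list[str], choices: list[str]) -> np.ndarray:
    """One feature vector per reasoning state: its distance to every answer choice."""
    choice_vecs = np.stack([embed(c) for c in choices])
    feats = [np.linalg.norm(choice_vecs - embed(s), axis=1) for s in states]
    return np.stack(feats)  # shape: (num_states, num_choices)


# Toy reasoning path with four answer choices.
states = [f"reasoning step {i}" for i in range(10)]
choices = ["(A) 12", "(B) 15", "(C) 18", "(D) 21"]

feats = state_features(states, choices)           # (10, 4) distance features
coords = TSNE(n_components=2, perplexity=3,
              random_state=0).fit_transform(feats)  # (10, 2) landscape points
print(coords.shape)
```

Plotting `coords` for many sampled paths, colored by correctness or by reasoning step, would yield a landscape-style view in the spirit of the paper's figures.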
Community
We introduce Landscape of Thoughts, the first visualization tool that lets users inspect language models' chain-of-thought reasoning paths (and those of its derivatives) on any multiple-choice dataset.
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- SoftCoT: Soft Chain-of-Thought for Efficient Reasoning with LLMs (2025)
- IAO Prompting: Making Knowledge Flow Explicit in LLMs through Structured Reasoning Templates (2025)
- Thinking Machines: A Survey of LLM based Reasoning Strategies (2025)
- A Survey on Feedback-based Multi-step Reasoning for Large Language Models on Mathematics (2025)
- Chain of Draft: Thinking Faster by Writing Less (2025)
- Boosting Multimodal Reasoning with MCTS-Automated Structured Thinking (2025)
- CER: Confidence Enhanced Reasoning in LLMs (2025)