arxiv:2412.06769

Training Large Language Models to Reason in a Continuous Latent Space

Published on Dec 9
Submitted by Shibo-UCSD on Dec 10
#3 Paper of the day

Abstract

Large language models (LLMs) are restricted to reasoning in the "language space", where they typically express the reasoning process with a chain-of-thought (CoT) to solve a complex reasoning problem. However, we argue that language space may not always be optimal for reasoning. For example, most word tokens primarily serve textual coherence and are not essential for reasoning, while some critical tokens require complex planning and pose huge challenges to LLMs. To explore the potential of LLM reasoning in an unrestricted latent space instead of using natural language, we introduce a new paradigm, Coconut (Chain of Continuous Thought). We utilize the last hidden state of the LLM as a representation of the reasoning state (termed a "continuous thought"). Rather than decoding this into a word token, we feed it back to the LLM as the subsequent input embedding, directly in the continuous space. Experiments show that Coconut can effectively augment the LLM on several reasoning tasks. This novel latent reasoning paradigm leads to emergent advanced reasoning patterns: a continuous thought can encode multiple alternative next reasoning steps, allowing the model to perform a breadth-first search (BFS) to solve the problem, rather than prematurely committing to a single deterministic path as CoT does. Coconut outperforms CoT on certain logical reasoning tasks that require substantial backtracking during planning, while using fewer thinking tokens during inference. These findings demonstrate the promise of latent reasoning and offer valuable insights for future research.
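The feedback loop described in the abstract can be sketched in a few lines of PyTorch. This is only an illustration of the idea, not the authors' implementation: the GPT-2 backbone, the prompt, and the number of latent steps are assumptions made here for concreteness.

```python
# Minimal sketch of the continuous-thought loop, assuming a GPT-2 backbone.
# This is NOT the authors' code; the prompt and the number of latent steps
# are illustrative choices.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

inputs = tokenizer("Question: Alice has 3 apples and buys 2 more.", return_tensors="pt")
embeds = model.get_input_embeddings()(inputs.input_ids)    # (1, T, d)

num_latent_steps = 3                                       # arbitrary for the sketch
with torch.no_grad():
    for _ in range(num_latent_steps):
        out = model(inputs_embeds=embeds, output_hidden_states=True)
        last_hidden = out.hidden_states[-1][:, -1:, :]     # (1, 1, d): the "continuous thought"
        # Feed the last hidden state back as the next input embedding
        # instead of decoding it into a discrete token.
        embeds = torch.cat([embeds, last_hidden], dim=1)

# Ordinary token-by-token decoding can resume from `embeds` afterwards.
```

The key step is the concatenation: the hidden state re-enters the model as an input embedding, so the "thought" never has to be collapsed into a single vocabulary token.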

Community

Paper author and submitter

Coconut (Chain of Continuous Thought)

A Twitter (X) thread with a quick introduction: https://x.com/Ber18791531/status/1866561188664087017

Besides the obvious efficiency advantage of reasoning in latent space, I think it's extremely risky and dangerous for advanced models.
Compared to "normal" reasoning in non-latent tokens, it is very hard, perhaps impossible, to accurately see what the LLM is thinking or reasoning internally. You could build an autoencoder or something like that for the latent reasoning tensors, but how dependable and accurate that would be is questionable.
For example, in the recent test of the new OpenAI o1 model, where it tried replacing a different model with itself in order to fulfill its goal, that action was still aligned with the instruction of fulfilling its goal but may not have been intended by humans. Behavior like that may only be reliably noticed if the reasoning happens in non-latent tokens, I think.

I thought this method was about exploring the "raw" form of reasoning rather than forcing the model to formalize its thinking process through discrete tokens. It's likely harder to interpret, of course, but there are advantages to a more efficient process that isn't necessarily bounded by the discrete token space. It's a trade-off between efficiency and interpretability, in my opinion. CMIIW.
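As a concrete illustration of the "autoencoder or something like that" idea raised above, one could project a continuous thought through the LM head and list its nearest tokens (a logit-lens-style probe). This is a speculative sketch, not a method from the paper, and how faithful such a readout would be is exactly the open question in this exchange.

```python
# Speculative probe: project a hidden state through the LM head and list the
# nearest vocabulary tokens. GPT-2 is an arbitrary stand-in for the backbone.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

inputs = tokenizer("2 + 3 * 4 =", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs, output_hidden_states=True).hidden_states[-1]
    thought = hidden[0, -1]                    # stand-in for a "continuous thought"
    top = torch.topk(model.lm_head(thought), k=5)

for idx, score in zip(top.indices, top.values):
    print(repr(tokenizer.decode([int(idx)])), float(score))
```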

Is stop_gradient used anywhere at all, or does Coconut backpropagate fully through the unrolled hidden states? If it's the latter, I'm guessing the inherent quadratic number of backprop steps (wrt. N latent thought tokens) is the main reason why C > 2 wasn't tested?

Paper author

Hi, we didn't use stop_gradient, and I believe the cost of backprop should be linear wrt. N latent thought tokens. We actually tested C > 2, and presented a discussion in Appendix C.
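For readers following this exchange, here is a rough sketch of what training without stop_gradient looks like: the latent steps stay connected in one computation graph, so a single backward() call propagates through all of them. The GPT-2 backbone, the step count, and the toy loss are assumptions for illustration, not the paper's training setup.

```python
# Without stop_gradient, the latent steps form one connected graph, and a
# single backward() traverses the whole chain. Backbone, step count, and the
# toy loss below are illustrative assumptions only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Question: ...", return_tensors="pt")
embeds = model.get_input_embeddings()(inputs.input_ids)

for _ in range(3):                                    # a few latent thoughts; no torch.no_grad()
    out = model(inputs_embeds=embeds, output_hidden_states=True)
    last_hidden = out.hidden_states[-1][:, -1:, :]
    embeds = torch.cat([embeds, last_hidden], dim=1)  # keeps the graph connected

loss = model(inputs_embeds=embeds).logits.mean()      # toy scalar loss
loss.backward()                                       # gradients flow back through every latent step
```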
