arxiv:2601.08763

Rewarding the Rare: Uniqueness-Aware RL for Creative Problem Solving in LLMs

Published on Jan 13 · Submitted by Zhiyuan Hu on Jan 16
#2 Paper of the day
Abstract

AI-generated summary

Reinforcement learning for large language models is enhanced by a rollout-level objective that rewards rare high-level reasoning strategies, improving diverse solution discovery without sacrificing initial performance.

Reinforcement learning (RL) has become a central paradigm for post-training large language models (LLMs), particularly for complex reasoning tasks, yet it often suffers from exploration collapse: policies prematurely concentrate on a small set of dominant reasoning patterns, improving pass@1 while limiting rollout-level diversity and gains in pass@k. We argue that this failure stems from regularizing local token behavior rather than diversity over sets of solutions. To address this, we propose Uniqueness-Aware Reinforcement Learning, a rollout-level objective that explicitly rewards correct solutions that exhibit rare high-level strategies. Our method uses an LLM-based judge to cluster rollouts for the same problem according to their high-level solution strategies, ignoring superficial variations, and reweights policy advantages inversely with cluster size. As a result, correct but novel strategies receive higher rewards than redundant ones. Across mathematics, physics, and medical reasoning benchmarks, our approach consistently improves pass@k across large sampling budgets and increases the area under the pass@k curve (AUC@K) without sacrificing pass@1, while sustaining exploration and uncovering more diverse solution strategies at scale.
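
The advantage-reweighting step described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes strategy-cluster labels have already been assigned by the LLM judge, that advantages are group-baselined over the rollouts of one problem (GRPO-style), and that only positive advantages are downweighted by cluster size; the function and variable names are hypothetical.

```python
import numpy as np

def uniqueness_aware_advantages(rewards, cluster_ids):
    """Reweight rollout advantages inversely with strategy-cluster size.

    rewards:     scalar reward per rollout for one problem
                 (e.g. 1.0 if the solution is correct, else 0.0).
    cluster_ids: one integer per rollout; rollouts that an LLM judge
                 deems to use the same high-level strategy share an id.
    """
    rewards = np.asarray(rewards, dtype=float)
    cluster_ids = np.asarray(cluster_ids)

    # Group-baselined advantages over the K rollouts of this problem.
    advantages = rewards - rewards.mean()

    # Size of the strategy cluster each rollout belongs to.
    _, inverse, counts = np.unique(cluster_ids, return_inverse=True,
                                   return_counts=True)
    cluster_sizes = counts[inverse]

    # Downweight crowded strategies: a correct solution in a cluster of
    # n look-alikes keeps only 1/n of its advantage, so a correct but
    # rare strategy earns relatively more credit than a redundant one.
    scale = np.where(advantages > 0, 1.0 / cluster_sizes, 1.0)
    return advantages * scale

# Six rollouts of one problem: three correct ones share strategy 0,
# one correct rollout uses a rare strategy (cluster 2), two are wrong.
rewards  = [1.0, 1.0, 1.0, 0.0, 1.0, 0.0]
clusters = [0, 0, 0, 1, 2, 1]
print(uniqueness_aware_advantages(rewards, clusters))
```

In a full RL loop, these reweighted advantages would stand in for the standard group-relative advantages in the policy-gradient update, steering probability mass toward correct solutions that use under-represented strategies.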

Community


Great paper.

I'll take a proper look at it when I have time 罒▽罒

