arxiv:2410.16184

RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style

Published on Oct 21 · Submitted by RicardoL1u on Oct 22

Abstract

Reward models are critical in techniques like Reinforcement Learning from Human Feedback (RLHF) and Inference Scaling Laws, where they guide language model alignment and select optimal responses. Despite their importance, existing reward model benchmarks often evaluate models by asking them to distinguish between responses generated by models of varying power. However, this approach fails to assess reward models on subtle but critical content changes and variations in style, resulting in a low correlation with policy model performance. To this end, we introduce RM-Bench, a novel benchmark designed to evaluate reward models based on their sensitivity to subtle content differences and resistance to style biases. Extensive experiments demonstrate that RM-Bench strongly correlates with policy model performance, making it a reliable reference for selecting reward models to align language models effectively. We evaluate nearly 40 reward models on RM-Bench. Our results reveal that even state-of-the-art models achieve an average performance of only 46.6%, which falls short of random-level accuracy (50%) when faced with style bias interference. These findings highlight the significant room for improvement in current reward models. Related code and data are available at https://github.com/THU-KEG/RM-Bench.
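
To make the headline numbers concrete: benchmarks of this kind typically score a reward model by checking, for each prompt, whether it assigns a higher reward to the correct (chosen) response than to the subtly flawed (rejected) one. The sketch below illustrates that pairwise accuracy computation; it is not the authors' exact pipeline, and the `reward_score` callable and the `(prompt, chosen, rejected)` triple format are assumptions.

```python
# Minimal sketch of pairwise reward-model accuracy as measured by benchmarks
# like RM-Bench (not the authors' exact code). A pair counts as correct when
# the reward model scores the chosen response above the rejected one.

def pairwise_accuracy(pairs, reward_score):
    """pairs: iterable of (prompt, chosen, rejected) triples (assumed format).
    reward_score: callable (prompt, response) -> float, a placeholder for
    whatever scoring interface a given reward model exposes."""
    correct = total = 0
    for prompt, chosen, rejected in pairs:
        correct += reward_score(prompt, chosen) > reward_score(prompt, rejected)
        total += 1
    return correct / total if total else 0.0
```

Random guessing lands at 50% under this metric, which is why an average of 46.6% on style-biased pairs suggests current reward models are being swayed by presentation rather than content.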

Community

Paper submitter • edited 2 days ago

🚀 Introducing RM-BENCH: a new benchmark for evaluating reward models for Large Language Model alignment! 🎯

🌟 RM-BENCH is the first benchmark to assess reward models on:

  • Sensitivity to subtle content changes: crucial for reward models to detect subtle but critical errors in responses.
  • Robustness against style bias: ensuring reward models focus on substance rather than being distracted by surface-level features such as length and markdown formatting.

📈 These features help RM-BENCH provide deeper insights into the strengths and weaknesses of current reward models, and its scores show a strong correlation with policy model performance on downstream tasks.

📊 We evaluated nearly 40 reward models across Chat, Code, Math, and Safety domains. Even top models averaged just 46.6% accuracy under style interference! 😲

👉 Curious? Check out our paper: https://arxiv.org/pdf/2410.16184 📄, code: https://github.com/THU-KEG/RM-Bench 💻, and dataset: https://huggingface.co/datasets/THU-KEG/RM-Bench/ 📊
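
If you want to poke at the data directly, it can presumably be pulled from the Hub with the `datasets` library. The snippet below is only a sketch: the split and column names it prints are not guaranteed, so check the dataset card for the actual schema.

```python
# Sketch: load RM-Bench from the Hugging Face Hub and inspect its structure.
# Split names and column names are assumptions; see the dataset card.
from datasets import load_dataset

ds = load_dataset("THU-KEG/RM-Bench")   # DatasetDict keyed by split name
print(ds)                               # shows splits, columns, and sizes
first_split = next(iter(ds.values()))
print(first_split[0])                   # one example from the first split
```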

#AI #MachineLearning #LLM #AIAlignment #RewardModels #Research #OpenSource

