arxiv:2502.16614

CodeCriticBench: A Holistic Code Critique Benchmark for Large Language Models

Published on Feb 23
· Submitted by CheeryLJH on Feb 25
Authors:

Abstract

The critique capacity of Large Language Models (LLMs) is essential for their reasoning abilities, as it provides necessary suggestions (e.g., detailed analysis and constructive feedback). Therefore, how to evaluate the critique capacity of LLMs has drawn great attention, and several critique benchmarks have been proposed. However, existing critique benchmarks usually have the following limitations: (1) they focus on diverse reasoning tasks in general domains and evaluate code tasks insufficiently (e.g., covering only the code generation task), and their queries are relatively easy (e.g., the code queries of CriticBench come from HumanEval and MBPP); (2) they lack comprehensive evaluation from different dimensions. To address these limitations, we introduce CodeCriticBench, a holistic code critique benchmark for LLMs. Specifically, CodeCriticBench includes two mainstream code tasks (i.e., code generation and code QA) at different difficulty levels. Besides, the evaluation protocols include basic critique evaluation and advanced critique evaluation for different characteristics, where fine-grained evaluation checklists are carefully designed for the advanced setting. Finally, we conduct extensive experiments on existing LLMs, and the results show the effectiveness of CodeCriticBench.
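To make the two evaluation protocols more concrete, the minimal sketch below shows one way a CodeCriticBench-style sample and its scoring could be represented: basic critique evaluation as a correctness judgment against a gold label, and advanced critique evaluation as fine-grained checklist scoring. The field names, checklist items, and averaging scheme are illustrative assumptions, not the benchmark's released schema.

```python
# Illustrative sketch only: the field names, checklist items, and scoring
# rules below are assumptions about what a CodeCriticBench-style sample
# might look like, not the benchmark's released format.
from dataclasses import dataclass, field
from typing import List


@dataclass
class CritiqueSample:
    task: str                      # "code_generation" or "code_qa"
    difficulty: str                # e.g., "easy", "medium", "hard"
    query: str                     # the coding problem or question
    candidate_solution: str        # model-written code or answer to critique
    is_correct: bool               # gold label used for basic critique evaluation
    checklist: List[str] = field(default_factory=list)  # advanced evaluation criteria


def basic_score(predicted_correct: bool, sample: CritiqueSample) -> float:
    """Basic critique evaluation: did the critic judge correctness right?"""
    return 1.0 if predicted_correct == sample.is_correct else 0.0


def advanced_score(checklist_hits: List[bool], sample: CritiqueSample) -> float:
    """Advanced critique evaluation: fraction of fine-grained checklist items
    the critique satisfies (a simple averaging scheme, assumed here)."""
    assert len(checklist_hits) == len(sample.checklist)
    return sum(checklist_hits) / max(len(checklist_hits), 1)


if __name__ == "__main__":
    sample = CritiqueSample(
        task="code_generation",
        difficulty="hard",
        query="Implement an LRU cache with O(1) get and put.",
        candidate_solution="class LRUCache: ...",
        is_correct=False,
        checklist=[
            "Identifies the missing eviction logic",
            "Points out the incorrect time complexity",
            "Suggests a concrete fix",
        ],
    )
    print("basic:", basic_score(predicted_correct=False, sample=sample))
    print("advanced:", advanced_score([True, True, False], sample))
```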

