COIG-P: A High-Quality and Large-Scale Chinese Preference Dataset for Alignment with Human Values
Abstract
Aligning large language models (LLMs) with human preferences has achieved remarkable success. However, existing Chinese preference datasets are limited by small scale, narrow domain coverage, and a lack of rigorous data validation. Additionally, the reliance on human annotators for instruction and response labeling significantly constrains the scalability of human preference datasets. To address these challenges, we design an LLM-based Chinese preference dataset annotation pipeline that requires no human intervention. Specifically, we crawled and carefully filtered 92k high-quality Chinese queries and employed 15 mainstream LLMs to generate and score chosen-rejected response pairs. Based on this pipeline, we introduce COIG-P (Chinese Open Instruction Generalist - Preference), a high-quality, large-scale Chinese preference dataset comprising 1,009k Chinese preference pairs spanning 6 diverse domains: Chat, Code, Math, Logic, Novel, and Role. Building upon COIG-P, to reduce the overhead of using LLMs for scoring, we trained an 8B-sized Chinese Reward Model (CRM) and meticulously constructed a Chinese Reward Benchmark (CRBench). Evaluation results based on AlignBench (Liu et al., 2024) show that COIG-P significantly outperforms other Chinese preference datasets and brings performance improvements of 2% to 12% for the Qwen2/2.5 and Infinity-Instruct-3M-0625 model series. The results on CRBench demonstrate that our CRM has strong and robust scoring ability. We apply it to filter chosen-rejected response pairs in a test split of COIG-P, and our experiments show that it is comparable to GPT-4o in identifying low-quality samples while remaining efficient and cost-effective. Our code and data are released at https://github.com/multimodal-art-projection/COIG-P.
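To make the annotation pipeline described above concrete, the sketch below illustrates one plausible way the chosen-rejected pairs could be assembled: several LLMs generate candidate responses for each filtered query, an LLM judge (or the CRM) scores them, and the highest- and lowest-scoring responses form a pair only if their score gap is large enough. The function names, type signatures, and `min_margin` threshold are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch (not the authors' code) of the chosen-rejected pair construction.
from dataclasses import dataclass
from typing import Callable, Iterable, List


@dataclass
class PreferencePair:
    query: str
    chosen: str
    rejected: str


def build_preference_pairs(
    queries: Iterable[str],
    generators: List[Callable[[str], str]],   # e.g. wrappers around the LLM generation APIs
    judge: Callable[[str, str], float],       # LLM judge (or the CRM) returning a scalar score
    min_margin: float = 2.0,                  # assumed score-gap threshold, not from the paper
) -> List[PreferencePair]:
    """Generate candidate responses per query, score them, and keep a
    (chosen, rejected) pair only when the score gap is large enough."""
    pairs: List[PreferencePair] = []
    for query in queries:
        candidates = [generate(query) for generate in generators]
        # Score every candidate once, then rank from best to worst.
        scored = sorted(((judge(query, r), r) for r in candidates), reverse=True)
        (best_score, best), (worst_score, worst) = scored[0], scored[-1]
        if best_score - worst_score >= min_margin:
            pairs.append(PreferencePair(query=query, chosen=best, rejected=worst))
    return pairs
```

In the paper's setting, the generators and the judge are drawn from the 15 mainstream LLMs, and the 8B CRM can stand in for the judge to cut the cost of scoring at the million-pair scale.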
Community
COIG-P, a high-quality, large-scale Chinese preference dataset, comprises 1,009k Chinese preference pairs spanning 6 diverse domains: Chat, Code, Math, Logic, Novel, and Role.
Project page: https://github.com/multimodal-art-projection/COIG-P
The Librarian Bot found the following similar papers, recommended by the Semantic Scholar API:
- AIR: A Systematic Analysis of Annotations, Instructions, and Response Pairs in Preference Dataset (2025)
- Cheems: A Practical Guidance for Building and Evaluating Chinese Reward Models from Scratch (2025)
- Multimodal RewardBench: Holistic Evaluation of Reward Models for Vision Language Models (2025)
- MM-RLHF: The Next Step Forward in Multimodal LLM Alignment (2025)
- Capturing Nuanced Preferences: Preference-Aligned Distillation for Small Language Models (2025)
- OmniAlign-V: Towards Enhanced Alignment of MLLMs with Human Preference (2025)
- HPS: Hard Preference Sampling for Human Preference Alignment (2025)
Models citing this paper: 6
Datasets citing this paper: 3
Spaces citing this paper: 0