id (string, 12–15 chars) | title (string, 8–162 chars) | content (string, 1–17.6k chars) | prechunk_id (string, 0–15 chars) | postchunk_id (string, 0–15 chars) | arxiv_id (string, 10 chars) | references (sequence, length 1) |
---|---|---|---|---|---|---|
2310.14122#25 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | 50. Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Jiliang Tang, and Qing Li. 2023. Recommender systems in the era of large language models (LLMs). arXiv preprint arXiv:2307.02046. Google, Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, Eric Chu, Jonathan H. Clark, Laurent El Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao, Yuanzhong Xu, Yujing Zhang, Gustavo Hernandez Abrego, Junwhan Ahn, Jacob Austin, Paul Barham, Jan Botha, James Bradbury, Siddhartha Brahma, Kevin Brooks, Michele Catasta, Yong Cheng, Colin Cherry, Christopher A. Choquette-Choo, Aakanksha Chowdhery, Clément Crepy, Shachi Dave, Mostafa Dehghani, Sunipa Dev, Jacob Devlin, Mark Dí | 2310.14122#24 | 2310.14122#26 | 2310.14122 | [
"2305.06474"
] |
2310.14122#26 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | az, Nan Du, Ethan Dyer, Vlad Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus Freitag, Xavier Garcia, Sebastian Gehrmann, Lucas Gonzalez, Guy Gur-Ari, Steven Hand, Hadi Hashemi, Le Hou, Joshua Howland, Andrea Hu, Jeffrey Hui, Jeremy Hurwitz, Michael Isard, Abe Ittycheriah, Matthew Jagielski, Wenhao Jia, Kathleen Kenealy, Maxim Krikun, Sneha Kudugunta, Chang Lan, Katherine Lee, Benjamin Lee, Eric Li, Music Li, Wei Li, YaGuang Li, Jian Li, Hyeontaek Lim, Hanzhao Lin, Zhongtao Liu, Frederick Liu, Marcello Maggioni, Aroma Mahendru, Joshua Maynez, Vedant Misra, Maysam Moussalem, Zachary Nado, John Nham, Eric Ni, Andrew Nystrom, Alicia Parrish, Marie Pellat, Martin Polacek, Alex Polozov, Reiner Pope, Siyuan Qiao, Emily Reif, Bryan Richter, Parker Riley, Alex Castro Ros, Aurko Roy, Brennan Saeta, Rajkumar Samuel, Renee Shelby, Ambrose Slone, Daniel Smilkov, David R. So, Daniel Sohn, Simon Tokumine, Dasha Valter, Vijay Vasudevan, Kiran Vodrahalli, Xuezhi Wang, Pidong Wang, Zirui Wang, Tao Wang, John Wieting, Yuhuai Wu, Kelvin Xu, Yunhan Xu, Linting Xue, Pengcheng Yin, Jiahui Yu, Qiao Zhang, Steven Zheng, Ce Zheng, Weikang Zhou, Denny Zhou, Slav Petrov, and Yonghui Wu. 2023. | 2310.14122#25 | 2310.14122#27 | 2310.14122 | [
"2305.06474"
] |
2310.14122#27 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | PaLM 2 technical report. Shuguang Han, Xuanhui Wang, Mike Bendersky, and Marc Najork. 2020. Learning-to-rank with BERT in TF-Ranking. arXiv preprint arXiv:2004.08476. Kalervo Järvelin and Jaana Kekäläinen. 2002. Cumulated gain-based evaluation of IR techniques. ACM Transactions on Information Systems, 20(4):422–446. | 2310.14122#26 | 2310.14122#28 | 2310.14122 | [
"2305.06474"
] |
2310.14122#28 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Robert J Johnston, Kevin J Boyle, Wiktor Adamowicz, Jeff Bennett, Roy Brouwer, Trudy Ann Cameron, W Michael Hanemann, Nick Hanley, Mandy Ryan, Riccardo Scarpa, et al. 2017. Contemporary guidance for stated preference studies. Journal of the Association of Environmental and Resource Economists, 4(2):319–405. Wang-Cheng Kang, Jianmo Ni, Nikhil Mehta, Maheswaran Sathiamoorthy, Lichan Hong, Ed Chi, and Derek Zhiyuan Cheng. 2023. | 2310.14122#27 | 2310.14122#29 | 2310.14122 | [
"2305.06474"
] |
2310.14122#29 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Do LLMs understand user preferences? Evaluating LLMs on user rating prediction. arXiv preprint arXiv:2305.06474. Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. 2022. Holistic evaluation of language models. arXiv preprint arXiv:2211.09110. Jimmy Lin, Xueguang Ma, Sheng-Chieh Lin, Jheng-Hong Yang, Ronak Pradeep, and Rodrigo Nogueira. 2021. | 2310.14122#28 | 2310.14122#30 | 2310.14122 | [
"2305.06474"
] |
2310.14122#30 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Pyserini: A Python toolkit for reproducible information retrieval research with sparse and dense representations. In Proceedings of the 44th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2021), pages 2356–2362. Tie-Yan Liu. 2009. Learning to Rank for Information Retrieval. Now Publishers Inc. Xueguang Ma, Xinyu Zhang, Ronak Pradeep, and Jimmy Lin. 2023. Zero-shot listwise document reranking with a large language model. arXiv preprint arXiv:2305.02156. Rodrigo Nogueira, Zhiying Jiang, Ronak Pradeep, and Jimmy Lin. 2020. Document ranking with a pre-trained sequence-to-sequence model. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: | 2310.14122#29 | 2310.14122#31 | 2310.14122 | [
"2305.06474"
] |
2310.14122#31 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Findings, pages 708–718. Rodrigo Nogueira, Wei Yang, Kyunghyun Cho, and Jimmy Lin. 2019. Multi-stage document ranking with BERT. arXiv preprint arXiv:1910.14424. OpenAI. 2023. GPT-4 technical report. arXiv preprint arXiv:2303.08774. Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, et al. 2023. Large language models are effective text rankers with pairwise ranking prompting. arXiv preprint arXiv:2306.17563. Zhen Qin, Le Yan, Honglei Zhuang, Yi Tay, Rama Kumar Pasumarthi, Xuanhui Wang, Michael Bendersky, and Marc Najork. 2021. Are neural rankers still outperformed by gradient boosted decision trees? In International Conference on Learning Representations. | 2310.14122#30 | 2310.14122#32 | 2310.14122 | [
"2305.06474"
] |
2310.14122#32 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Noelia Rivera-Garrido, MP Ramos-Sosa, Michela Accerenzi, and Pablo Brañas-Garza. 2022. Continuous and binary sets of responses differ in the field. Scientific Reports, 12(1):14376. Kevin Roitero, Eddy Maddalena, Gianluca Demartini, and Stefano Mizzaro. 2018. On fine-grained relevance scales. In Proceedings of the 41st International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 675–684. | 2310.14122#31 | 2310.14122#33 | 2310.14122 | [
"2305.06474"
] |
2310.14122#33 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Devendra Singh Sachan, Mike Lewis, Mandar Joshi, Armen Aghajanyan, Wen-tau Yih, Joelle Pineau, and Luke Zettlemoyer. 2022. Improving passage retrieval with zero-shot question generation. arXiv preprint arXiv:2204.07496. Weiwei Sun, Lingyong Yan, Xinyu Ma, Pengjie Ren, Dawei Yin, and Zhaochun Ren. 2023. Is ChatGPT good at search? Investigating large language models as re-ranking agent. arXiv preprint arXiv:2304.09542. Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, and Iryna Gurevych. 2021. | 2310.14122#32 | 2310.14122#34 | 2310.14122 | [
"2305.06474"
] |
2310.14122#34 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | BEIR: A heterogeneous benchmark for zero-shot evaluation of information retrieval models. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2). Paul Thomas, Seth Spielman, Nick Craswell, and Bhaskar Mitra. 2023. Large language models can accurately predict searcher preferences. arXiv preprint arXiv:2309.10621. Ellen Voorhees, Tasmeer Alam, Steven Bedrick, Dina Demner-Fushman, William R Hersh, Kyle Lo, Kirk Roberts, Ian Soboroff, and Lucy Lu Wang. 2021. TREC-COVID: Constructing a pandemic information retrieval test collection. | 2310.14122#33 | 2310.14122#35 | 2310.14122 | [
"2305.06474"
] |
2310.14122#35 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | In ACM SIGIR Forum, volume 54, pages 1–12. ACM New York, NY, USA. Ellen M Voorhees. 2005. The TREC robust retrieval track. In ACM SIGIR Forum, volume 39, pages 11–20. ACM New York, NY, USA. Likang Wu, Zhi Zheng, Zhaopeng Qiu, Hao Wang, Hongchao Gu, Tingjia Shen, Chuan Qin, Chen Zhu, Hengshu Zhu, Qi Liu, et al. 2023. | 2310.14122#34 | 2310.14122#36 | 2310.14122 | [
"2305.06474"
] |
2310.14122#36 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | A survey on large language models for recommendation. arXiv preprint arXiv:2305.19860. Ruicheng Xian, Honglei Zhuang, Zhen Qin, Hamed Zamani, Jing Lu, Ji Ma, Kai Hui, Han Zhao, Xuanhui Wang, and Michael Bendersky. 2022. Learning list-level domain-invariant representations for ranking. arXiv preprint arXiv:2212.10764. Honglei Zhuang, Zhen Qin, Shuguang Han, Xuanhui Wang, Michael Bendersky, and Marc Najork. 2021. Ensemble distillation for BERT-based ranking models. In Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval, pages 131– | 2310.14122#35 | 2310.14122#37 | 2310.14122 | [
"2305.06474"
] |
2310.14122#37 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | 136. Honglei Zhuang, Zhen Qin, Rolf Jagerman, Kai Hui, Ji Ma, Jing Lu, Jianmo Ni, Xuanhui Wang, and Michael Bendersky. 2023a. RankT5: Fine-tuning T5 for text ranking with ranking losses. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2308–2313. Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, and Guido Zuccon. 2023b. | 2310.14122#36 | 2310.14122#38 | 2310.14122 | [
"2305.06474"
] |
2310.14122#38 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | A setwise approach for effective and highly efficient zero-shot ranking with large language models. arXiv preprint arXiv:2310.09497. # A Alternative Relevance Levels We replace the relevance levels with other phrases to examine how the performance changes. For RG-2L, we replace "Not Relevant" with "Irrelevant"; for RG-3L, we replace "Somewhat Relevant" with "Partially Relevant". The results are shown in Table 4. Regardless of using different textual representations of relevance labels, RG-3L consistently outperforms RG-2L. This suggests that the discovery in this paper is generalizable to different choices of textual relevance labels. Another observation is that RG-2L performance varies slightly more than RG-3L performance. This might indicate that RG-3L is more robust to different wording of relevance labels. | 2310.14122#37 | 2310.14122#39 | 2310.14122 | [
"2305.06474"
] |
2310.14122#39 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Table 4: Comparing ranking performance with different textual relevance levels, measured by average NDCG@10 on BEIR data sets. RG-2L with "Irrelevant" / "Relevant": 0.4717; RG-2L with "Not Relevant" / "Relevant": 0.4789; RG-3L with "Not Relevant" / "Partially Relevant" / "Highly Relevant": 0.4975; RG-3L with "Not Relevant" / "Somewhat Relevant" / "Highly Relevant": | 2310.14122#38 | 2310.14122#40 | 2310.14122 | [
"2305.06474"
] |
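The appendix tables report average NDCG@10 (Järvelin and Kekäläinen, 2002). As a concrete reference for how that metric works, here is a minimal, self-contained sketch of NDCG@10 for a single ranked list using a linear gain; it is illustrative only and is not the evaluation code used in the paper, which relies on standard BEIR/Pyserini tooling.

```python
import math
from typing import Sequence

def dcg_at_k(gains: Sequence[float], k: int) -> float:
    """Discounted cumulative gain of a ranked list of graded relevance gains."""
    return sum(g / math.log2(i + 2) for i, g in enumerate(gains[:k]))

def ndcg_at_k(ranked_relevance: Sequence[float], k: int = 10) -> float:
    """NDCG@k: DCG of the produced ranking divided by DCG of the ideal ranking."""
    ideal_dcg = dcg_at_k(sorted(ranked_relevance, reverse=True), k)
    return dcg_at_k(ranked_relevance, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# Toy example: graded ground-truth relevance of documents in their ranked order.
print(round(ndcg_at_k([2, 0, 1, 0, 2], k=10), 4))
```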
2310.14122#40 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | 0.4992. We also experiment with a different rating scale formulation. Instead of prompting the LLM to rate the relevance from 0 to k, we also ask the LLM to rate the relevance from 1 to k, denoted as RG-S(1, k). We plot the average NDCG@10 performance in Figure 4. The performance of the two methods does not differ much when k is larger than 4, but not providing the "0" option substantially hurts the performance when k is lower than or equal to 3. This might also suggest that using the rating scale from 0 to k is slightly more robust. [Figure 4 plot: average NDCG@10 versus k for RG-S(0, k) and RG-S(1, k).] Figure 4: | 2310.14122#39 | 2310.14122#41 | 2310.14122 | [
"2305.06474"
] |
2310.14122#41 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Comparing rating scale relevance generation with different prompts. # B In-Depth Score Distribution We plot the in-depth score distribution of our methods. Specifically, we group the query-document pairs in the Covid data set by ground-truth relevance and plot the distribution of the marginal probability pk for each prompted relevance label lk. Figures 5 and 6 show the results on the Covid data set when we use RG-S(0, 4) and RG-4L, respectively. The ground-truth relevance of the Covid data set is 0, 1 or 2. In Figure 5, we observe that the distributions of marginal probability pk of relevance labels | 2310.14122#40 | 2310.14122#42 | 2310.14122 | [
"2305.06474"
] |
2310.14122#42 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | "0", "1" and "2" shift down towards 0 as the ground-truth relevance increases. Meanwhile, the distributions of pk for relevance labels "3" and "4" shift up towards 1. In Figure 6, we find a similar trend where the distributions of marginal probability pk of "Not Relevant" and "Somewhat Relevant" shift down towards 0 as the ground-truth relevance increases, while the distributions of pk for "Highly Relevant" and "Perfectly Relevant" | 2310.14122#41 | 2310.14122#43 | 2310.14122 | [
"2305.06474"
] |
2310.14122#43 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | shift up towards 1. This reveals how our expected relevance values (ER) method works in practice, and also gives us hints on how peak relevance likelihood (PR) alone works, based on the distribution shift of the peak relevance label. # C Varying Assigned Relevance Values We also investigate how the user-provided relevance values yk make a difference to the ranking performance. We use RG-3L as the example. We fix y0 = 0 for "Not Relevant" and y2 = 2 for "Highly Relevant", but vary the relevance value y1 for "Somewhat Relevant" between y0 and y2. | 2310.14122#42 | 2310.14122#44 | 2310.14122 | [
"2305.06474"
] |
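The two scoring strategies referred to above turn the per-label marginal probabilities pk into a single ranking score: expected relevance values (ER) weight each label's probability by an assigned relevance value yk, while peak relevance likelihood (PR) uses only the probability of the highest relevance label. A minimal sketch of both, assuming the marginal probabilities have already been obtained from the LLM:

```python
from typing import Sequence

def expected_relevance_score(probs: Sequence[float], values: Sequence[float]) -> float:
    """ER: sum over labels of y_k * p_k."""
    assert len(probs) == len(values)
    return sum(p * y for p, y in zip(probs, values))

def peak_relevance_score(probs: Sequence[float]) -> float:
    """PR: use only the likelihood of the highest (peak) relevance label."""
    return probs[-1]  # labels assumed ordered from least to most relevant

# Toy example for RG-3L with y = [0, 1, 2] ("Not", "Somewhat", "Highly" Relevant).
p = [0.2, 0.3, 0.5]
print(expected_relevance_score(p, [0.0, 1.0, 2.0]))  # 1.3
print(peak_relevance_score(p))                       # 0.5
```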
2310.14122#44 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | We evaluate the average NDCG@10 on the 8 BEIR data sets and present the results in Table 5. As y1 varies, the average NDCG@10 does not change substantially when y1 decreases. Even when y1 = y0, the NDCG@10 performance remains high. This is expected, as the NDCG@10 metric only focuses on the top-ranked items; changing the relevance values of intermediate relevance labels may not change the order of top-ranked items much. This is also similar to using the peak relevance likelihood method. In contrast, when y1 = y2, the performance drops significantly to about the same level as RG-2L. This might indirectly explain why RG-2L performance is worse than RG-3L, as it might not be able to distinguish partially relevant and highly relevant documents. | 2310.14122#43 | 2310.14122#45 | 2310.14122 | [
"2305.06474"
] |
2310.14122#45 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Table 5: Comparing ranking performance with different relevance values yk, measured by average NDCG@10 on BEIR data sets. RG-3L with [y0, y1, y2] = [0.00, 0.00, 2.00]: 0.5000; [0.00, 0.50, 2.00]: 0.5000; [0.00, 1.00, 2.00]: 0.4992; [0.00, 1.50, 2.00]: 0.4990; [0.00, 2.00, 2.00]: 0.4779. Table 6: | 2310.14122#44 | 2310.14122#46 | 2310.14122 | [
"2305.06474"
] |
2310.14122#46 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Comparing ranking performance with instructions and in-context learning, measured by average NDCG@10 on BEIR data sets. RG-2L: 0.4789; RG-2L + Instructions: 0.4914; RG-2L + Instructions + 4-shot ICL: 0.4914; RG-3L: 0.4992; RG-3L + Instructions: 0.5034; RG-3L + Instructions + 4-shot ICL: 0.5046. # D Instructions and In-Context Learning We also try adding instructions and few-shot exemplars into the prompt. For instructions, we directly add the definition of the relevance labels into the prompt. The relevance label definitions are directly copied from TREC-DL 2020 (Craswell et al., 2021). For RG-2L instructions we use the "Irrelevant" and "Relevant" labels; for RG-3L instructions we use the "Irrelevant", "Relevant" and "Highly Relevant" labels. We also change the relevance labels accordingly to align with the instructions. In addition to instructions, we also try to include few-shot exemplars to leverage the model's in-context learning capabilities. We include 4-shot exemplars, which are randomly sampled from the TREC-DL 2020 data sets. We sampled 2 | 2310.14122#45 | 2310.14122#47 | 2310.14122 | [
"2305.06474"
] |
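The exact layout of the instruction and few-shot variants is not spelled out beyond the description above, so the following is only a hypothetical illustration of how such a prompt might be assembled; the label definition and exemplar used here are placeholders, not the TREC-DL 2020 text used in the paper.

```python
def build_prompt(query: str, document: str, instructions: str = "", exemplars=()) -> str:
    """Assemble an RG-style prompt with optional instructions and few-shot exemplars."""
    parts = []
    if instructions:
        parts.append(instructions.strip())
    for ex_query, ex_document, ex_label in exemplars:  # in-context examples
        parts.append(f"Query: {ex_query}\nDocument: {ex_document}\nOutput: {ex_label}")
    parts.append(f"Query: {query}\nDocument: {document}\nOutput:")
    return "\n\n".join(parts)

demo = build_prompt(
    query="how do vaccines work",
    document="Vaccines train the immune system to recognize pathogens...",
    instructions='Judge whether the document is "Irrelevant", "Relevant", or "Highly Relevant".',
    exemplars=[("capital of france", "Paris is the capital of France.", "Irrelevant")],
)
print(demo)
```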
2310.14122#47 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | "Irrelevant", 1 "Relevant" and 1 "Perfectly Relevant" query-document pairs. To align with the instructions, for RG-2L we label both "Relevant" and "Perfectly Relevant" exemplar query-document pairs as "Relevant"; for RG-3L we label the "Perfectly Relevant" pair as "Highly Relevant". The results are shown in Table 6. Adding instructions improves both RG-2L and RG-3L, while RG-3L still remains +1.2% better than RG-2L. Further adding exemplars on top of the instructions does not improve much, possibly due to the distribution discrepancy between TREC-DL and BEIR. [Figure 5 panels: distributions of the marginal probability pk of labels "0" to "4" for ground-truth relevance 0, 1 and 2.] Figure 5: Distribution of marginal probability pk of each relevance label in RG-S(0, 4) for query-document pairs with different ground-truth labels on the Covid data set. [Figure 6 panels: distributions of pk over "Not Relevant", "Somewhat Relevant", "Highly Relevant" and "Perfectly Relevant" for ground-truth relevance 0, 1 and 2.] Figure 6: Distribution of marginal probability pk of each relevance label in RG-4L for query-document pairs with different ground-truth labels on the Covid data set. Table 7: | 2310.14122#46 | 2310.14122#48 | 2310.14122 | [
"2305.06474"
] |
2310.14122#48 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Overall ranking performances measured by NDCG@10 on BEIR data sets. Values are listed per method in the column order Covid, Touche, DBPedia, SciFact, Signal, News, Robust04, NFCorpus, Average; all QG, RG-YN, RG-*L and RG-S rows use FLAN PaLM2 S as the LLM. BM25 (N/A): 0.5947, 0.4422, 0.3180, 0.6789, 0.3305, 0.3952, 0.4070, 0.3075, 0.4342. QG: 0.7357, 0.2408, 0.3773, 0.7495, 0.2872, 0.4156, 0.4651, 0.3673, 0.4548. RG-YN: 0.7897, 0.2427, 0.3696, 0.6958, 0.3196, 0.4588, 0.5656, 0.3743, 0.4770. RG-2L-ER: 0.7949, 0.2411, 0.3590, 0.7290, 0.2996, 0.4623, 0.5636, 0.3814, 0.4789. RG-2L-PR: 0.7874, 0.2482, 0.3435, 0.7230, 0.2819, 0.4619, 0.5647, 0.3706, 0.4726. RG-3L-ER: 0.8065, 0.2650, 0.4013, 0.7671, 0.3142, 0.4890, 0.5660, 0.3849, 0.4992. RG-3L-PR: 0.8065, 0.2634, 0.4032, 0.7745, 0.3202, 0.4816, 0.5681, 0.3860, 0.5005. RG-4L-ER: 0.8063, 0.2388, 0.4033, 0.7766, 0.3184, 0.4884, 0.5635, 0.3801, 0.4969. RG-4L-PR: 0.8076, 0.2354, 0.4050, 0.7772, 0.3121, 0.4712, 0.5561, 0.3824, 0.4934. RG-S(0, 2)-ER: 0.7760, 0.2695, 0.3709, 0.6921, 0.3034, 0.4677, 0.5557, 0.3787, 0.4768. RG-S(0, 2)-PR: 0.7821, 0.2735, 0.3469, 0.6954, 0.2597, 0.4540, 0.5409, 0.3752, 0.4659. RG-S(0, 4)-ER: 0.8048, 0.2757, 0.4190, 0.7521, 0.3301, 0.4790, 0.5668, 0.3901, 0.5022. RG-S(0, 4)-PR: 0.8036, 0.2785, 0.4221, 0.7625, 0.3168, 0.4623, 0.5559, 0.3886, 0.4988. monoT5 (fine-tuned T5 XL): 0.8071, 0.3241, 0.4445, 0.7657, 0.3255, 0.4849, 0.5671, 0.3897, 0.5136. RankT5 (fine-tuned T5 XL): 0.8200, 0.3762, 0.4419, 0.7686, 0.3180, 0.4815, 0.5276, 0.3860, 0.5150. RankGPT (GPT-3.5 Turbo): 0.7667, 0.3618, 0.4447, 0.7043, 0.3212, 0.4885, 0.5062, 0.3562, 0.4937. PRP (UL2): 0.7945, 0.3789, 0.4647, 0.7333, 0.3520, 0.4911, 0.5343, N/A, N/A. | 2310.14122#47 | 2310.14122#49 | 2310.14122 | [
"2305.06474"
] |
2310.14122#49 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | # E More Comparison Results We also include a more thorough comparison with other methods, including: • BM25. The base retriever performance. • monoT5 (Nogueira et al., 2020). A T5 XL model fine-tuned on the MS MARCO data set for the text ranking task and applied directly on the BEIR data sets. • RankT5 (Zhuang et al., 2023a). An encoder-only model initialized with T5 XL but fine-tuned on the MS MARCO data set using a listwise softmax cross-entropy ranking loss and applied directly on the BEIR data sets. | 2310.14122#48 | 2310.14122#50 | 2310.14122 | [
"2305.06474"
] |
2310.14122#50 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | [Figure 7 plot: average NDCG@10 versus k for RG-S(0, k)-ER and RG-S(0, k)-PR.] Figure 7: | 2310.14122#49 | 2310.14122#51 | 2310.14122 | [
"2305.06474"
] |
2310.14122#51 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Comparing rating scale relevance generation with different strategies to derive ranking scores. • Pairwise Ranking Prompts (PRP) (Qin et al., 2023). A zero-shot pairwise LLM ranker which takes a query and two documents as input, and outputs which one is more relevant to the query. We include the best results of PRP, which uses UL2 as the LLM and a sliding window strategy. • RankGPT (Sun et al., 2023). A zero-shot listwise LLM ranker which takes a query and a list of documents as input, and outputs an ordered list of documents based on their relevance. | 2310.14122#50 | 2310.14122#52 | 2310.14122 | [
"2305.06474"
] |
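Both PRP and RankGPT are described above as being combined with a sliding-window strategy over the candidate list. As a rough illustration of that idea (not the authors' implementation), the sketch below slides a fixed-size window from the bottom of a ranked list to the top and lets an arbitrary rerank function reorder each window, so strong candidates can bubble upward.

```python
from typing import Callable, List, Sequence

def sliding_window_rerank(
    docs: List[str],
    rerank_window: Callable[[Sequence[str]], List[str]],
    window: int = 4,
    stride: int = 2,
) -> List[str]:
    """Repeatedly rerank overlapping windows from the tail to the head of the list."""
    docs = list(docs)
    start = max(len(docs) - window, 0)
    while True:
        docs[start:start + window] = rerank_window(docs[start:start + window])
        if start == 0:
            break
        start = max(start - stride, 0)
    return docs

# Toy "reranker": sort window members alphabetically (stands in for an LLM call).
print(sliding_window_rerank(["d", "b", "a", "c", "e"], rerank_window=sorted, window=3, stride=2))
```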
2310.14122#52 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | The method is used jointly with a sliding window strategy. We do not include the GPT-4 reranking number as it involves a second-stage ranking. We also include the detailed results of our proposed methods with the two strategies to derive ranking scores. Table 7 illustrates the results, and Figure 7 plots the performance of the rating scale methods under the two ranking score derivation strategies. It is not surprising that our methods perform slightly worse than monoT5 or RankT5, as they are fine-tuned for the text ranking task on the MS MARCO data set. However, it is encouraging to see that our prompting method substantially shrinks the gap between zero-shot LLM rankers and RankT5. Our methods can also perform slightly better than single-stage RankGPT. When compared with PRP, our methods achieve better or comparable performance on 5 out of 7 overlapping data sets, the exceptions being Touche and DBPedia. However, note that the LLMs used in these experiments are different, so the difference might also be explained by the model difference. | 2310.14122#51 | 2310.14122#53 | 2310.14122 | [
"2305.06474"
] |
2310.14122#53 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | # F Prompts In this section, we provide the prompts we used for each method: # F.1 Query Generation (QG) We use the following prompt for our QG experiments. We find this prompt performs better empirically for zero-shot QG LLM rankers than the prompt used in existing works (Sachan et al., 2022). I will check whether what you said could answer my question. You said: {document} I googled: {query} # F.2 Binary Relevance Generation (RG-YN) We use the following prompt for our RG-YN experiments. We find this prompt performs better empirically than the prompt used originally by Liang et al. (2022), Sun et al. (2023) and Qin et al. (2023). For the following query and document, judge whether they are relevant. | 2310.14122#52 | 2310.14122#54 | 2310.14122 | [
"2305.06474"
] |
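In the QG setting of Appendix F.1, a document is typically scored by the likelihood the LLM assigns to the user's query as a continuation of the prompt filled with that document, following the query-generation idea of Sachan et al. (2022). The paper does not spell out this scoring code, so the snippet below is only a sketch with a stand-in `token_logprobs` callable; a real implementation would call the log-probability scoring interface of the specific LLM.

```python
from typing import Callable, List

QG_PROMPT = ("I will check whether what you said could answer my question.\n"
             "You said: {document}\nI googled: ")

def qg_score(query: str, document: str,
             token_logprobs: Callable[[str, str], List[float]]) -> float:
    """Score a document by the mean log-probability of the query tokens given the
    QG prompt filled with that document (higher means more relevant)."""
    prefix = QG_PROMPT.format(document=document)
    logprobs = token_logprobs(prefix, query)  # log p(query tokens | prefix)
    return sum(logprobs) / max(len(logprobs), 1)

# Stand-in scorer so the sketch runs without an LLM: pretend each token has logprob -1.0.
fake_logprobs = lambda prefix, continuation: [-1.0] * len(continuation.split())
print(qg_score("how tall is mount everest", "Mount Everest is 8,849 m tall.", fake_logprobs))
```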
2310.14122#54 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Output "Yes" or "No". Query: {query} Document: {document} Output: # F.3 2-Level Relevance Generation (RG-2L) For the following query and document, judge whether they are "Relevant" or "Not Relevant". Query: {query} Document: {document} Output: # F.4 3-Level Relevance Generation (RG-3L) For the following query and document, judge whether they are "Highly Relevant", "Somewhat Relevant", or "Not Relevant". | 2310.14122#53 | 2310.14122#55 | 2310.14122 | [
"2305.06474"
] |
2310.14122#55 | Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Query: {query} Document: {document} Output: # F.5 4-Level Relevance Generation (RG-4L) For the following query and document, judge whether they are "Perfectly Relevant", "Highly Relevant", "Somewhat Relevant", or "Not Relevant". Query: {query} Document: {document} Output: # F.6 Rating Scale Relevance Generation (RG-S(0, k)) From a scale of 0 to {k}, judge the relevance between the query and the document. Query: {query} Document: {document} Output: | 2310.14122#54 | 2310.14122 | [
"2305.06474"
] |
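For the RG-S(0, k) prompt above, a ranking score can be derived by reading the model's scores for the candidate rating tokens "0" through "k", normalizing them into marginal probabilities, and taking the expected rating. This is a hedged sketch of that flow with a placeholder `label_logits` callable; it is not the authors' code.

```python
import math
from typing import Callable, Dict, Sequence

RG_S_PROMPT = ("From a scale of 0 to {k}, judge the relevance between the query and the document.\n"
               "Query: {query}\nDocument: {document}\nOutput:")

def rating_scale_score(query: str, document: str, k: int,
                       label_logits: Callable[[str, Sequence[str]], Dict[str, float]]) -> float:
    """Softmax the LLM scores of the rating tokens '0'..'k' and return the expected rating."""
    prompt = RG_S_PROMPT.format(k=k, query=query, document=document)
    labels = [str(i) for i in range(k + 1)]
    logits = label_logits(prompt, labels)
    z = max(logits.values())
    unnorm = {label: math.exp(logits[label] - z) for label in labels}
    total = sum(unnorm.values())
    return sum(int(label) * p / total for label, p in unnorm.items())

# Placeholder scorer so the example runs: favors the middle of the scale.
fake = lambda prompt, labels: {label: -abs(int(label) - len(labels) / 2) for label in labels}
print(round(rating_scale_score("q", "d", k=4, label_logits=fake), 3))
```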
|
2310.12773#0 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | arXiv:2310.12773v1 [cs.AI] 19 Oct 2023 # SAFE RLHF: SAFE REINFORCEMENT LEARNING FROM HUMAN FEEDBACK Josef Dai∗ Xuehai Pan∗ Ruiyang Sun∗ Jiaming Ji∗ Xinbo Xu Mickel Liu Yizhou Wang Yaodong Yang # Peking University {jtd.acad,rockmagma02,jiamg.ji,xux98750,mickelliu7}@gmail.com {XuehaiPan,yizhou.wang,yaodong.yang}@pku.edu.cn # ABSTRACT | 2310.12773#1 | 2310.12773 | [
"2302.13971"
] |
|
2310.12773#1 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | With the development of large language models (LLMs), striking a balance between the performance and safety of AI systems has never been more critical. However, the inherent tension between the objectives of helpfulness and harmlessness presents a significant challenge during LLM training. To address this issue, we propose Safe Reinforcement Learning from Human Feedback (Safe RLHF), a novel algorithm for human value alignment. Safe RLHF explicitly decouples human preferences regarding helpfulness and harmlessness, effectively avoiding the crowdworkers' confusion about the tension and allowing us to train separate reward and cost models. We formalize the safety concern of LLMs as an optimization task of maximizing the reward function while satisfying specified cost constraints. Leveraging the Lagrangian method to solve this constrained problem, Safe RLHF dynamically adjusts the balance between the two objectives during fine-tuning. Through a three-round fine-tuning using Safe RLHF, we demonstrate a superior ability to mitigate harmful responses while enhancing model performance compared to existing value-aligned algorithms. Experimentally, we fine-tuned the Alpaca-7B using Safe RLHF and aligned it with collected human preferences, significantly improving its helpfulness and harmlessness according to human evaluations. Code is available at https://github.com/PKU-Alignment/safe-rlhf. | 2310.12773#0 | 2310.12773#2 | 2310.12773 | [
"2302.13971"
] |
2310.12773#2 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | Warning: This paper contains example data that may be offensive or harmful. # 1 INTRODUCTION Large Language Models (LLMs) have shown remarkable capabilities in understanding instructions (Chung et al., 2022; Ouyang et al., 2022), summarization (Stiennon et al., 2020; Koh et al., 2022) and performing complex reasoning tasks (OpenAI, 2023; Anil et al., 2023), and more. Concurrently, AI systems that leverage LLMs are increasingly enhancing the efficiency of numerous human activities, such as coding (Chen et al., 2021; Gao et al., 2023), medical assistance (Yang et al., 2022; Moor et al., 2023), education (Kasneci et al., 2023; Kung et al., 2023), law (Katz et al., 2023), and so forth. Considering the potential for broad societal impact, responses generated by LLMs must not contain harmful content, such as discrimination, misinformation, or violations of social norms and morals (Gehman et al., 2020; Weidinger et al., 2021; Ganguli et al., 2022; Deshpande et al., 2023). Therefore, the alignment of safety in LLMs has received widespread attention from academia and industry (Christian, 2023). An essential component of safety alignment involves minimizing the tendency of a model to generate harmful responses through fine-tuning. Recent works demonstrate that Reinforcement Learning | 2310.12773#1 | 2310.12773#3 | 2310.12773 | [
"2302.13971"
] |
2310.12773#3 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | # ∗Equal Contribution. [Figure 1 diagram: the three stages of the pipeline (Supervised Fine-tuning and Data Collection; Preference Annotation & Preference Modeling with a Reward Model R(y, x) and a Cost Model C(y, x); Policy Optimization with reward and cost objectives), contrasting Safe RLHF with conventional RLHF.] Figure 1: Safe RLHF pipeline compared to conventional RLHF method. Our pipeline decouples the data annotation for helpfulness and harmlessness, as well as the training of preference models. Ultimately, it dynamically integrates both aspects during the policy optimization phase. | 2310.12773#2 | 2310.12773#4 | 2310.12773 | [
"2302.13971"
] |
2310.12773#4 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | NOTE: In the annotation phase, the safety labels for the responses are annotated independently. These responses can be labeled as both safe or both unsafe. with Human Feedback (RLHF) (Christiano et al., 2017; Ouyang et al., 2022) is a practical approach for aligning LLMs with human preferences, both in terms of style and ethical values (Bai et al., 2022a; Ganguli et al., 2022). RLHF leverages LLMsâ broad knowledge and capabilities to promote desired responses and behaviors, which leads to safer, higher-performing, and more controllable AI systems. Both technical reports from GPT-4 (OpenAI, 2023) and Anthropic (Ganguli et al., 2022) for their LLMs revealed their use of safety-related prompts, constructed through adversarial probing methods like red-teaming, in the RLHF phase to reduce the potential harm of their model. However, the pursuit of increasing helpfulness and harmlessness may often contradict in practice (Ganguli et al., 2022; Bai et al., 2022a). For example, a model refusing to answer can be considered safe, yet it also renders the response unhelpful in extreme scenarios. Thus, a significant challenge arises in balancing the two objectives during the training phase. Our goal is to develop a large language model that is helpful, safe, and willing to respond. To address the above challenge, we propose a novel framework: Safe Reinforcement Learning from Human Feedback (Safe RLHF). The core insight of Safe RLHF is the decoupling of human prefer- ences during data annotation and the establishment of two optimization objectives: helpfulness and harmlessness (as shown in equation (9)). Safe RLHF formalizes the goal of developing harmless LLMs as a constraint under the Safe RL framework. It is crucial that we need a balance between helpfulness and harmlessness objectives, and avoid over-optimizing for harmlessness. # The decoupling of preferences and objectives offers two advantages: | 2310.12773#3 | 2310.12773#5 | 2310.12773 | [
"2302.13971"
] |
2310.12773#5 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | â ¢ During the data annotation, it ensures that the feedback from crowdworkers remains unbiased by any tension between helpfulness and harmlessness. â ¢ During the Safe RLHF stage, the Lagrangian method (Bertsekas, 1997) can adaptively balance the trade-off between two inherently conflicting training objectives. To the best of our knowledge, Safe RLHF is the first integration of Safe RL and the RLHF frame- work. This framework incorporates a two-dimensional human annotation scheme and a safe training mechanism to enhance model performance while ensuring safety (as shown in Figure 1). Experi- mentally, we applied the Safe RLHF pipeline three times, significantly enhancing the helpfulness of the base SFT model while efficiently reducing the generation of harmful responses. Compared to the static multi-objective balance algorithm, Reward Shaping (Ng et al., 1999), Our algorithm bet- ter navigates the tension between the objectives of helpfulness and harmlessness. Simultaneously, it maintains equal or superior performance improvements compared to existing value-aligned algo- rithms. Meanwhile, we release all the data and training codes from the three iterations of Safe RLHF fine-tuning, facilitating researchers to replicate and validate our findings. | 2310.12773#4 | 2310.12773#6 | 2310.12773 | [
"2302.13971"
] |
2310.12773#6 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | 2 # 2 PRELIMINARIES Preference Modelling The RLHF method enhances the quality of language model responses by leveraging human preference data through a reward model. The reward model is denoted as RÏ (y, x), where x is the input prompt, y is the response generated by the language model, and R is the scalar output from the reward model. Human preference data is symbolized as yw â » yl|x, where yw (win) denotes a response that is more preferred by humans compared to yl (lose). Most of the previous work, including Christiano et al. (2017); Sadigh et al. (2017); Bai et al. (2022a); Kim et al. (2023), employs a preference predictor adhering to the Bradley-Terry model (Bradley & Terry, 1952). The likelihood of a preference pair can be estimated as: | 2310.12773#5 | 2310.12773#7 | 2310.12773 | [
"2302.13971"
] |
2310.12773#7 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | pâ (yw â » yl|x) = exp(R(yw, x)) exp(R(yw, x)) + exp(R(yl, x)) = Ï (R(yw, x) â R(yl, x)), (1) where o(x) = 1/(1 + exp(â z)) is the logistic sigmoid function. Supposing the existence of a static dataset D = {x', Yous git derived from human preferences and sampled from p*, we can estimate the parameters via maximum likelihood. The negative log-likelihood loss is: L(Ï ; D) = â E(x,yw,yl)â ¼D [log Ï (RÏ (yw, x) â RÏ (yl, x))] . | 2310.12773#6 | 2310.12773#8 | 2310.12773 | [
"2302.13971"
] |
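The preference-modeling chunk above writes the Bradley-Terry likelihood and its negative log-likelihood (equations 1 and 2). A minimal PyTorch-style sketch of that pairwise loss is shown below; the toy linear "reward head" over pre-computed features is an assumption made only so the snippet runs, since the paper trains a full LLM-based reward model rather than this stand-in.

```python
import torch
import torch.nn.functional as F

def pairwise_preference_loss(r_win: torch.Tensor, r_lose: torch.Tensor) -> torch.Tensor:
    """Negative log-likelihood of the Bradley-Terry model:
    -log sigmoid(R(y_w, x) - R(y_l, x)), averaged over the batch."""
    return -F.logsigmoid(r_win - r_lose).mean()

# Toy "reward model": a linear head over pre-computed (prompt, response) features.
torch.manual_seed(0)
reward_head = torch.nn.Linear(16, 1)
feats_win, feats_lose = torch.randn(8, 16), torch.randn(8, 16)  # stand-in encodings
loss = pairwise_preference_loss(reward_head(feats_win).squeeze(-1),
                                reward_head(feats_lose).squeeze(-1))
loss.backward()
print(float(loss))
```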
2310.12773#8 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | Safe Reinforcement Learning A Markov Decision Process (MDP) (Puterman, 2014), M 4 (S,A,r,P, Wo, 7), including the state space S, the action space A, a reward function r, the tran- sition probability P, the initial state distribution fio, and a discount factor 7. In this framework, a stationary policy, 7, is a probability distribution indicating the likelihood of taking action a in state s. The state value function V"(s) = E,.7 [Sop y'rt | 80 = 8] denotes the expected cumulative discounted reward over time, starting from s. Then, the primary objective of reinforcement learning is to maximize the objective function, 7 (79) = Es.<yo [Viz (So)]- Generally, Safe RL is formulated as a Constrained MDP (CMDP) M UC (Altman, 2021), which extends the standard MDP JM with an additional constraint set C. The set C = {(ci,bi)}i, is composed of cost functions c; and cost thresholds b;,i=1,...,m. The cost return is defined as J (79) = Eny [cpio yc: (s141|8t,@t)], and the feasible policy set is He = Mii { 6 â ¬ He | 7% (m9) < b; }. The goal of Safe RL is to find the optimal feasible policy: | 2310.12773#7 | 2310.12773#9 | 2310.12773 | [
"2302.13971"
] |
2310.12773#9 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | Ï â = arg max Ï Î¸â Î C J (Ï Î¸). (3) # 3 METHOD: SAFE RLHF As shown in Figure 1, we introduce our Safe RLHF pipeline, which leverages the Safe RL frame- work to balance the tension between the helpfulness and harmfulness objectives. Compared to the conventional RLHF (Ouyang et al., 2022), Safe RLHF introduces substantial modifications, specif- ically in the stages of Preference Annotation & Modeling and Policy Optimization. 3.1 HUMAN PREFERENCE OF HARMLESSNESS AND HELPFULNESS In adapting our Safe RLHF algorithm, we utilize a two-stage human annotation strategy to assess the helpfulness and harmlessness of text generation. We follow the annotation methodology outlined in Ji et al. (2023), in which the rankings for helpfulness and harmlessness were explicitly decoupled from a singular human preference dimension. In this strategy, crcowdworkers annotate a safety meta- label for each question-answer (QA) pair, considering 14 predefined categories of potential harm. A QA pair is labeled as â safeâ only if it poses no risk across all 14 categories. Subsequently, the annotators are given two responses to the same prompt and asked to rank the harmlessness and helpfulness, treating each criterion independently. The detailed annotation guidelines can be found in the Appendix section A. Following the annotation pipeline, we produce a helpfulness-related dataset, Dr = {2', yi, yj },_1> N Following the annotation pipeline, we produce a helpfulness-related dataset, Dr = {2', yi, yj },_1> N and a harmlessness-related dataset, Do = {oi ivf, si, sf} . Both datasets, Dr and Dc, cover the same set of QA pairs but with differing preference labels. Within each pair in Dr, y/, # w, yi l | 2310.12773#8 | 2310.12773#10 | 2310.12773 | [
"2302.13971"
] |
2310.12773#10 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | 3 (2) (a) reward vs. cost distribution (b) reward distribution (c) cost distribution Figure 2: (a) A scatter plot showing the distribution of reward and cost on test data as evaluated by the preference models employed in the initial Safe RLHF iteration. Each point signifies a sample present in the test set of the preference data. Colors are derived from the safety labels annotated by crowdworkers. (b) The reward distribution on the test set determined by the trained reward model. (c) The cost distribution on the test set determined by the trained cost model. represents a response from the model that better addresses the prompt xi compared to yi w signifies a more harmful response compared to yj for each pair in DC, but in this case, yj labels of these responses are then quantified using binary classification labels sj the following harmfulness sign function: +1, if response y is harmful, s(y) £4707 ME response y! (4) â 1, ifresponse y is harmless. Figure 1 illustrates an example that shows the tension in balancing harmlessness and helpfulness. When the AI assistant faces the question of â | 2310.12773#9 | 2310.12773#11 | 2310.12773 | [
"2302.13971"
] |
2310.12773#11 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | How to become a serial killerâ , Response B is superior to Response A in terms of helpfulness, as it shows a higher degree of completeness towards the userâ s instruction and has a better response structure. However, in terms of harmlessness, Response A is safer because it refuses to respond to this query and informs the involved legal risks. In summary, we would expect a helpfulness preference B > A, a harmlessness preference A > B, as well as harmfulness signs for the two responses s(A) = â 1 and s(B) = +1. 3.2 PREFERENCE MODEL FITTING: REWARD AND COST MODELS We train two independent preference models to fit human preference distributions across the help- fulness and harmlessness aspects of LLM responses. The Reward Model (RM) is developed from the helpfulness dataset DR, serving to provide the reward signals that are optimized for helpfulness during the RL phase. The Cost Model (CM) is built upon the harmlessness dataset DC, deliver- ing insights into human perceptions regarding the safety of LLM responses. An illustration of the reward and cost distribution on the dataset is presented in Figure 2. Reward Model (RM) _ Utilizing the helpfulness dataset Dp = {x', yin ti bno we train a pa- rameterized reward model Ry(y, x), where Ry represents a scalar output. This model is trained to employ the pairwise comparison loss derived from equation (2): LR(Ï ; DR) = â E(x,yw,yl)â ¼DR [log Ï (RÏ (yw, x) â RÏ (yl, x))] , (5) | 2310.12773#10 | 2310.12773#12 | 2310.12773 | [
"2302.13971"
] |
2310.12773#12 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | Cost Model (CM) Unlike the helpfulness human preference dataset, the harmlessness human pref- erence dataset provides additional information about the harmlessness of a response. To make op- timal use of this information for training the cost model CÏ (y, x), we amend the original pairwise comparison loss by incorporating classification terms. LC(Ï ; DC) = â E(x,yw,yl,·,·)â ¼DC [log Ï (CÏ (yw, x) â CÏ (yl, x))] â E(x,yw,yl,sw,sl)â ¼DC [log Ï (sw · CÏ (yw, x)) + log Ï (sl · CÏ (yl, x))] . (6) Itâ | 2310.12773#11 | 2310.12773#13 | 2310.12773 | [
"2302.13971"
] |
2310.12773#13 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | s worth noting that the Cost Model still complies with the Bradley-Terry (BT) model. Assume there exists a virtual response, y0, which lies on the boundary between safe and unsafe clusters, 4 such that CÏ (y0, x) = 0. If y is unsafe, i.e., s(y) = +1, then the Cost Model tends to prefer y. Hence, we aim to maximize the probability of y â » y0|x: p(y â » y0|x) = Ï (CÏ (y, x) â CÏ (y0, x)) = Ï (CÏ (y, x)) = Ï (s(y) · CÏ (y, x)) . Similarly, if y is safe, i.e., s(y) = â 1, then the Cost Model tends to prefer y0. Hence, we aim to maximize the probability of y0 â » y|x: p(y0 â » y|x) = Ï (CÏ (y0, x) â CÏ (y, x)) = Ï (â CÏ (y, x)) = Ï (s(y) · CÏ (y, x)) . Thus, the second term of the loss function (6) can be viewed as maximizing the likelihood of the BT model regarding the response y0 and y from the dataset DC. With the extra annotation of the harmfulness label of the responses, we will not need to know the exact content of the virtual re- sponse y0 during the preference modeling phase. As shown in Figure 2a, the Cost Model divides the LLMsâ responses into two clusters based on their safety. This classification ability of the Cost Model provides a basis for dynamically adjusting conflicting objectives. 3.3 SAFE REINFORCEMENT LEARNING During the RL phase, our approach utilizes the Reward Model RÏ to estimate the value of human preference for helpfulness, while the Cost Model CÏ | 2310.12773#12 | 2310.12773#14 | 2310.12773 | [
"2302.13971"
] |
2310.12773#14 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | for harmlessness. The LLM we are training is denoted as Ï Î¸(y|x). The following optimization objective is a Safe RL scheme previously outlined in Chow et al. (2017), hereby defined as the objective for our Safe RLHF setting: maximize θ Exâ ¼D,yâ ¼Ï Î¸(·|x) [RÏ (y, x)] , s.t. CÏ (y, x) â ¤ 0, â x â ¼ D, y â ¼ Ï Î¸(·|x), (9) where D is a distribution of prompts used in the RL phase, and the y = a1:T are responses generated by the LLM Ï Î¸. This equation encapsulates our primary goal: to maximize the expected reward within the constraints of ensuring the harmlessness of the responses generated by the LLMs. However, the constraint denoted in equation (9) entails the challenge of guaranteeing safety for all potential responses y to a given prompt x. This task is not straightforward using RL methods. In light of this, we reformulate the safety constraint into an expectation form, paralleling the structure of the objective function. This modification introduces a hyper-parameter d, devised to exert control over the probability of generating harmful responses. Our surrogate objective is presented as follows: maximize θ JR(θ), s.t. JC(θ) â ¤ 0, (10) where JR(θ) â Exâ ¼D,yâ ¼Ï Î¸(·|x) [RÏ (y, x)] , JC(θ) â Exâ ¼D,yâ ¼Ï Î¸(·|x) [CÏ (y, x)] + d, (11) which represent the expected reward and the expected cost objective function respectively. To address this constrained problem, we leverage the Lagrangian method, a technique for finding the local maxima and minima of a function over a constraint set. This application allows us to convert the constrained primal problem, as defined in equation (10), into its unconstrained Lagrangian dual form as follows: min θ max λ⠥0 [â JR(θ) + λ · JC(θ)], (12) where λ â | 2310.12773#13 | 2310.12773#15 | 2310.12773 | [
"2302.13971"
] |
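The cost-model objective discussed around here (equation 6 of the paper) augments the pairwise comparison loss with two classification terms driven by the harmfulness sign s(y) in {+1, -1}. A hedged sketch of that combined objective, again over stand-in scalar outputs rather than the paper's LLM-based Cost Model:

```python
import torch
import torch.nn.functional as F

def cost_model_loss(c_win: torch.Tensor, c_lose: torch.Tensor,
                    s_win: torch.Tensor, s_lose: torch.Tensor) -> torch.Tensor:
    """Pairwise term: -log sigmoid(C(y_w) - C(y_l)), where y_w is the more harmful response.
    Classification terms: -log sigmoid(s(y) * C(y)) push harmful responses toward positive
    cost and harmless ones toward negative cost (s = +1 harmful, s = -1 harmless)."""
    ranking = -F.logsigmoid(c_win - c_lose).mean()
    classification = -(F.logsigmoid(s_win * c_win) + F.logsigmoid(s_lose * c_lose)).mean()
    return ranking + classification

torch.manual_seed(0)
c_w, c_l = torch.randn(8), torch.randn(8)       # stand-in cost-model outputs
s_w, s_l = torch.ones(8), -torch.ones(8)        # e.g. "winner" harmful, "loser" harmless
print(float(cost_model_loss(c_w, c_l, s_w, s_l)))
```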
2310.12773#15 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | ¥ 0 serves as the Lagrange multiplier. It is important to note that the optimization of helpfulness JR often contradicts the objective of minimizing harm JC (Bai et al., 2022a). Thus, equation (12) can be interpreted as appending a penalty term to the original helpfulness objective. This penalty, which corresponds to the potential harmfulness of the LLMs, can be dynamically modulated via the parameter λ. Specifically, we iteratively solve the min-max problem in equation (12), alternately updating the LLM parameters θ and the Lagrange multiplier λ (refer to Appendix B.3 to more details). This ensures that any change in the potential harm associated with the updated model is rapidly reflected in the multiplier, thereby avoiding the risks of over-emphasizing one objective at the expense of the other under a fixed optimization ratio. | 2310.12773#14 | 2310.12773#16 | 2310.12773 | [
"2302.13971"
] |
2310.12773#16 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | 5 Round 1 1448 379 3491 0 Round 1 12811 4837 13687 Round 2 1480 1449 1500 44 Round 2 18786 5398 6339 Round 3 4501 2a7t 942 636 Round 3 27639 3688 1973 o tooo 2000 «== 3000» 4000» 5000-6000 0 000 100001000 20000 ©5000 30000 35000 safety-unrelated » solved safety-related - unsolved safety-related mred-teaming dual-safe pairs mixed-safe pairs = dual-unsafe pairs | 2310.12773#15 | 2310.12773#17 | 2310.12773 | [
"2302.13971"
] |
2310.12773#17 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | (a) Prompt source and distribution (b) Distribution of safety labels in preference data Figure 3: (a) Number of different types of prompts during 3 rounds of Safe RLHF iteration. The safety-unrelated prompts and solved/unsolved safety-related prompts originate from open-source datasets. As training progresses, most of the safety-related prompts are solved. To keep a balance of different prompts, starting from the second round, we engaged in human red-teaming to gather more prompts. (b) Number of different types of response pairs during three rounds of RLHF iteration. # 4 EXPERIMENTS In this section, we present experiments devised to evaluate the effectiveness of the Safe RLHF pipeline in both enhancing model safety and boosting its performance. We specifically address the following research questions: | 2310.12773#16 | 2310.12773#18 | 2310.12773 | [
"2302.13971"
] |
2310.12773#18 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | â ¢ Can Safe RLHF simultaneously improve the LLMâ s helpfulness and harmlessness? (Section 4.2.1) â ¢ What benefits arise from the distinct separation of helpfulness and harmlessness? (Section 4.2.2) â ¢ How does Safe RLHF navigate the inherent tension between the dual optimization objectives of helpfulness and harmlessness? (Section 4.2.3) Furthermore, we conduct an ablation experiment to elucidate the specific design of the Cost Model which is endowed with classification capabilities (Section 4.2.4). Collectively, these experiments aim to provide a comprehensive assessment of Safe RLHFâ s influence on the safety and performance of LLMs within practical contexts. 4.1 EXPERIMENTAL DETAILS We demonstrate the efficacy of our pipeline by iteratively fine-tuning the initial SFT model using the Safe RLHF pipeline for three cycles. Each cycle involves Red Teaming (excluding the first round), generating and annotating human preference data, training the Reward Model and Cost Model, and Safe RL fine-tuning. The implementation details and training hyper-parameters are available in Appendix B and Appendix C.1. Initial SFT Model. Our primary experiments begin with the Alpaca-7B model (reproduced). This model is derived from instruction fine-tuning the LLaMA-7B (Touvron et al., 2023a) using the Al- paca open-source dataset (Taori et al., 2023), which boasts 52K instruction-following instances. We selected Alpaca-7B as our initial model for two primary reasons. First, Alpaca-7B embodies essen- tial chat assistant capabilities and has an appropriate model size, facilitating the full implementation of the Safe RLHF pipeline. Second, Alpaca-7B is capable of generating both harmless and po- tentially harmful responses, offering varied responses to identical prompts, as shown in Figure 3b. Using Alpaca-7B as our starting point in multiple iterative RL fine-tuning allows us to more clearly discern improvements in the safety and utility of LLMs when employing the Safe RLHF pipeline. Prompts and Red-teaming. | 2310.12773#17 | 2310.12773#19 | 2310.12773 | [
"2302.13971"
] |
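The surrounding text describes alternately updating the model parameters θ and the Lagrange multiplier λ of the dual problem (equation 12). Below is a schematic sketch of just the multiplier update; it uses a log-parameterization to keep λ positive, which is one simple alternative to explicit projection, and a made-up stream of expected-cost estimates, so it is not the released Safe RLHF training loop.

```python
import torch

class LagrangeMultiplier:
    """Keeps lambda > 0 and raises it when the expected cost J_C exceeds the budget."""
    def __init__(self, init: float = 1.0, lr: float = 0.1, d: float = 0.0):
        self.log_lambda = torch.tensor(float(init)).log().requires_grad_(True)
        self.opt = torch.optim.Adam([self.log_lambda], lr=lr)
        self.d = d  # cost threshold shift from the surrogate objective

    @property
    def value(self) -> float:
        return float(self.log_lambda.exp())

    def update(self, expected_cost: float) -> float:
        # Gradient ascent on lambda for the dual term lambda * (E[C] + d),
        # implemented as gradient descent on its negation.
        loss = -self.log_lambda.exp() * (expected_cost + self.d)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        return self.value

lam = LagrangeMultiplier(init=1.0, lr=0.1, d=0.0)
for cost in [0.5, 0.4, -0.1, -0.2]:   # hypothetical moving-average cost estimates
    print(round(lam.update(cost), 3))
```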
2310.12773#19 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | At the start of each Safe RLHF iteration, we adjust the mix of the different types of prompts used for training (safety-unrelated, resolved safety-related, unresolved safety-related, and those collected through red-teaming), as shown in Figure 3a. This prompt dataset is used for generating preference datasets and for RL training. For the first Safe RLHF iteration, our prompts were primarily derived from open-source safety-related datasets referenced in Ganguli et al. (2022) and Sun et al. (2023a). From the second iteration, we involved researchers in conducting red- teaming attacks to expand our prompt set. By examining successful attacks, we identified and added prompts that expose vulnerabilities not present in the original dataset. More details and examples are available in Appendix D. | 2310.12773#18 | 2310.12773#20 | 2310.12773 | [
"2302.13971"
] |
2310.12773#20 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | 6 (a) Alpaca-7B (b) Beaver-v1 (c) Beaver-v2 (d) Beaver-v3 Figure 4: The scatter plots present the distribution of reward and cost on the evaluation prompt set, as assessed by the unified reward and cost models. All four models utilize the same set of prompts as inputs, generating responses via a greedy search. Each point signifies the reward/cost values associated with a sample, consisting of the prompt and corresponding response. Preference Datasets. After finalizing the prompts, responses are generated using the model in training. These responses are then sent to crowdworkers for labeling. We allowed the crowdworkers to meticulously label out invalid preference pairs. Each prompt will receive between k = 3 â ¼ 6 unique responses, leading to C k 2 = k(k â 1)/2 preference pairs, as shown in Figure 3b. Following the annotation scheme we designed in Section 3.1, we obtain decoupled datasets for helpfulness and harmlessness. More details and examples are available in Appendix A. Evaluation Datasets. Since the lack of evaluation datasets that consider both helpfulness and safety alignment, we constructed our own evaluation prompt dataset, comprising 3 parts: prompts meticulously designed for 14 safety categories, prompts sourced from open-source datasets (ex- cluded from training), and a selected 10% of prompts from each red-teaming phase. The definition of the 14 safety categories are detailed in Appendix A.3. 4.2 EXPERIMENT RESULTS 4.2.1 HELPFULNESS AND HARMLESSNESS EVALUATION To rigorously assess the efficacy of our Safe RLHF pipeline along two alignment dimensions â helpfulness and harmlessness â we analyze models from three iterations of Safe RLHF: Beaver- v1, Beaver-v2, and Beaver-v3. However, evaluating large language models has consistently been a challenging and unresolved problem. Traditional benchmarks often do not capture the full extent to which a model aligns with human values. This shortcoming is largely attributable to inconsistent standards and unequivocal outcomes in human alignment evaluation. Thus, we prefer to assess large language models based on their responses to specific prompts. We employ two methods for overall assessment. These include a rapid evaluation of our models using our trained unified Reward Model and Cost Model; deriving the Elo score by comparing model outputs with human judgments and GPT-4 evaluations. Model-based Evaluations. | 2310.12773#19 | 2310.12773#21 | 2310.12773 | [
"2302.13971"
] |
2310.12773#21 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | Despite human evaluation remaining the gold standard for aligning large language models with human values, the reliance on this method alone is neither practical nor efficient due to considerable associated time and financial costs. Such limitations necessitate alter- native assessment methods to complement human evaluation. Thus, we have developed a unified Reward Model and a unified Cost Model, utilizing training methodologies mentioned in Section 3.2. These models are trained on evenly balanced preference data originating from all iterations of Safe RLHF. With these unified models, we can rapidly evaluate subsequent new models under consistent criteria. The test accuracies for the unified models are detailed in Table 1. Note that we do not employ these unified models to train a single-round Safe RLHF process, as the preference data ac- quisition occurs iteratively. We need intermediate models for the red-teaming procedure, facilitating the collection of new prompts for the follow-up training phases. As illustrated in Figure 4, our SFT model, the Alpaca-7B model (reproduced), has the ability to produce both harmless and harmful responses that are almost evenly separated on each side of the c = 0 dividing line (Figure 4a). Following the first round of Safe RLHF training, there is an 7 Table 1: | 2310.12773#20 | 2310.12773#22 | 2310.12773 | [
"2302.13971"
] |
2310.12773#22 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | The test accuracy for the Reward Model and Cost Model for the three rounds of Safe RLHF training stages. The unified preference models are trained and tested on evenly balanced preference data from the preference dataset used in the three Safe RLHF iterations. Model Reward Model Cost Model Metric Ranking Accuracy Ranking Accuracy Safety Classification Accuracy Beaver-v1 Beaver-v2 Beaver-v3 Unified 73.95% 70.44% 85.83% 78.13% 74.47% 95.62% 75.73% 76.07% 84.54% 77.32% 74.17% 85.88% appreciable shift in the model response distribution towards the side with a lower cost, implying safer outputs (Figure 4b). During the second iteration of Safe RLHF, there is a decline in harmful content, denoted by the c > 0 region (Figure 4c). In the final iteration, the data cluster gravitates towards the higher reward direction, while successfully maintaining the majority of the responses as harmless (Figure 4d). GPT-4 and Human Evaluations. For more accurate assessments, we compare models against each other to generate associated Elo scores, as described in Askell et al. (2021). Specifically, evaluators compare the outputs of two models in response to the same prompt and provide their preferences regarding helpfulness and harmlessness. After obtaining pairwise win-rate relationships between all models, we fit corresponding Elo scores (with an initial score of 1200). According to Chiang & Lee (2023), GPT-4 can replace human evaluators in assessing the alignment capabilities of LLMs. Therefore, we have organized assessments involving both GPT-4 and human evaluators. As shown in Figure 5a and 5b, the three rounds of Safe RLHF significantly improved the Elo scores in both helpfulness and harmlessness, as evaluated by both GPT-4 and human evaluators. When compared to Alpaca-7B, the Beaver-v3 model demonstrated an increase in the Elo score for helpful- ness (GPT-4: +244.91, Human: +363.86) and for harmlessness (GPT-4: +268.31, Human: +237.98). Comparatively, the evaluations by GPT-4 and human evaluators are almost consistent. | 2310.12773#21 | 2310.12773#23 | 2310.12773 | [
"2302.13971"
] |
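The chunk above states that Elo scores are fitted from pairwise win-rate relationships with an initial score of 1200. The sketch below shows one way to do this with standard Elo updates over simulated pairwise matches; the K-factor, number of rounds, and the toy win probabilities are illustrative assumptions, not the authors' exact fitting procedure.

```python
import itertools
import random

def fit_elo(models, win_prob, k=4.0, n_rounds=10000, init=1200.0, seed=0):
    """Fit Elo ratings from pairwise win probabilities (win_prob[a][b] = P(a beats b))."""
    rng = random.Random(seed)
    elo = {m: init for m in models}
    pairs = list(itertools.permutations(models, 2))
    for _ in range(n_rounds):
        a, b = rng.choice(pairs)
        expected_a = 1.0 / (1.0 + 10 ** ((elo[b] - elo[a]) / 400.0))
        outcome_a = 1.0 if rng.random() < win_prob[a][b] else 0.0
        elo[a] += k * (outcome_a - expected_a)
        elo[b] -= k * (outcome_a - expected_a)
    return elo

# Hypothetical pairwise win rates between two models.
win_prob = {"Alpaca-7B": {"Beaver-v3": 0.15}, "Beaver-v3": {"Alpaca-7B": 0.85}}
print(fit_elo(["Alpaca-7B", "Beaver-v3"], win_prob))
```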
2310.12773#23 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | Notably, starting from the second round, we initiated red-teaming attacks to broaden the scope of safety-related prompts. This effectively helped make the models trained with Safe RLHF more harmless. During the third round, since the model was already sufficiently safe, Safe RLHF tended to prioritize maintaining the current harmlessness level over excessive optimization. This also reflects the dynamic adjustment characteristics inherent to Safe RLHF. Meanwhile, our crowdworkers also labeled whether the models' responses are safe, as shown in Figure 5c. Through three rounds of Safe RLHF training, the Beaver-v3 model's probability of harmful responses on the evaluation set decreased from 53.08% for Alpaca-7B to 2.45%. For the specific prompts used in the GPT-4 evaluation, please refer to Appendix C.2. 4.2.2 THE DECOUPLING OF HARMLESSNESS AND HELPFULNESS In this section, we aim to demonstrate the benefits of explicitly separating harmlessness and helpfulness in the Safe RLHF pipeline. We use the responses collected from the first round of Safe RLHF to carry out preference labeling and PPO training following the conventional RLHF methodology. During preference labeling, the only difference is that a single comprehensive preference is collected, while all other aspects remain identical to Safe RLHF. Compared to single-dimensional annotation and training, we observe the following advantages of Safe RLHF: First, decoupling the annotations for helpfulness and harmlessness results in a higher inter-rater agreement rate among crowdworkers (helpfulness: 69.00%, safety: 66.53%) than single-dimensional annotation (61.65%). Second, the agreement between crowdworkers and researchers (i.e., the approval rate) is also increased. With single-dimensional annotation, the average approval rate during a 10% quality inspection drops from at least 90% accuracy to below 80%. Third, as shown in Figure 6a, using the above data for PPO training results in a notable improvement in helpfulness. However, the enhancement in harmlessness is significantly less than that achieved by Safe RLHF. In contrast, Safe RLHF allows the trade-off between helpfulness and harmlessness to be adjusted during the training phase. | 2310.12773#22 | 2310.12773#24 | 2310.12773 | [
"2302.13971"
] |
2310.12773#24 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | [Figure 5 panels: (a) Elo scores rated by GPT-4; (b) Elo scores rated by Human; (c) Model safety on evaluation set.] Figure 5: (a) The Elo scores in harmlessness and helpfulness for Alpaca-7B, and Beaver-v1 to Beaver-v3 models. The pairwise model comparison is evaluated by GPT-4. (b) The Elo scores in harmlessness and helpfulness for Alpaca-7B, and Beaver-v1 to Beaver-v3 models. The pairwise model comparison is evaluated by human evaluators. (c) The ratio of the model responses flagged as harmless by human annotators on the evaluation set. | 2310.12773#23 | 2310.12773#25 | 2310.12773 | [
"2302.13971"
] |
2310.12773#25 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | NOTE: The Elo scores in (a) and (b) for the Alpaca-7B model are manually normalized to 1000. [Figure 6 panels: (a) Ablation training; (b) Comparison to Reward Shaping (RS); (c) Training curve for Beaver-v1.] Figure 6: (a) The harmlessness and helpfulness win rates for Safe RLHF and other methods against the SFT model (Alpaca-7B). The dashed curve is an asymptotic curve for reward shaping (RS) methods as shown in (b). (b) The harmlessness and helpfulness win rates for Safe RLHF and reward shaping (RS) methods with different coefficients against the SFT model (Alpaca-7B). (c) The training curve for the Lagrange multiplier λ and the moving-averaged cost during the first Safe RLHF iteration. | 2310.12773#24 | 2310.12773#26 | 2310.12773 | [
"2302.13971"
] |
2310.12773#26 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | NOTE: The harmlessness and helpfulness win rates in (a) and (b) are evaluated by GPT-4. 4.2.3 BALANCE BETWEEN HARMLESSNESS OBJECTIVE AND HELPFULNESS OBJECTIVE To highlight the importance of dynamically balancing the objectives of harmlessness and helpfulness during RL training, we compare Safe RLHF with a reward shaping (RS) approach that employs a static balance. Specifically, the reward shaping method weights the two objective functions at a fixed ratio during RL training, that is, Rν(y, x) = Rφ | 2310.12773#25 | 2310.12773#27 | 2310.12773 | [
"2302.13971"
] |
2310.12773#27 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | (y, x) − ν · Cψ(y, x). Our experiments extensively tested seven different reward shaping weights ν, namely 0.01, 0.5, 1, 2, 5, 10, and 100. The training results are shown in Figure 6b. Two conclusions can be drawn from the observations. First, excessively high (ν = 5, 10, 100) and excessively low (ν = 0.01, 0.5) reward shaping weights result in over-optimizing one objective at the expense of the other. Second, moderate reward shaping weights (ν = 1, 2) still cannot effectively resolve the tension between the objectives of helpfulness and harmlessness, and their improvements remain inferior to Safe RLHF. Comparatively, Safe RLHF assesses the harmlessness of models by using average cost values, subsequently updating the Lagrange multiplier λ. When the model satisfies safety constraints, Safe | 2310.12773#26 | 2310.12773#28 | 2310.12773 | [
"2302.13971"
] |
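To make the contrast above concrete, the sketch below compares the static reward-shaping signal Rν(y, x) = Rφ(y, x) − ν · Cψ(y, x) with a Lagrange multiplier λ that is updated from the moving-average cost, as Safe RLHF does. The learning rate, cost threshold, and the (r − λc)/(1 + λ) normalization are illustrative assumptions, not the paper's exact hyperparameters.

```python
def shaped_reward(reward, cost, nu):
    """Static reward shaping: the trade-off coefficient nu is fixed for all of training."""
    return reward - nu * cost

class LagrangianBalance:
    """Dynamic trade-off: lambda rises when the average cost exceeds the threshold d."""
    def __init__(self, lam_init=1.0, lam_lr=0.05, threshold=0.0):
        self.lam, self.lam_lr, self.threshold = lam_init, lam_lr, threshold

    def update(self, avg_cost):
        # Gradient ascent on the dual variable; lambda is kept non-negative.
        self.lam = max(0.0, self.lam + self.lam_lr * (avg_cost - self.threshold))
        return self.lam

    def objective(self, reward, cost):
        # Per-sample training signal; the 1/(1 + lambda) normalization is an assumption.
        return (reward - self.lam * cost) / (1.0 + self.lam)

balancer = LagrangianBalance()
for avg_cost in [0.8, 0.4, -0.2, -0.5]:   # moving-average cost over successive updates
    lam = balancer.update(avg_cost)
    print(f"lambda={lam:.3f}, signal={balancer.objective(1.0, 0.3):.3f}")
```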
2310.12773#28 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | RLHF employs a smaller Lagrange multiplier λ to preserve harmlessness, thereby avoiding over-optimization, as illustrated in Figure 6c. 4.2.4 DESIGN OF COST PREFERENCE MODEL A crucial design of Safe RLHF is the Cost Model, which simultaneously fits both human preferences and safety labels. Human preferences provide the direction for optimization, while predictions of safety labels facilitate the dynamic balance between the helpfulness and harmlessness objectives. This integration is central to the effectiveness of Safe RLHF. To substantiate this, we compared Safe RLHF with training that uses the logits of a safety classifier as the cost signal (Glaese et al., 2022). As illustrated in Figure 6a (CM-classifier), the latter's efficiency in improving harmlessness is significantly inferior to that of Safe RLHF. On the other hand, removing the classification capability of the Cost Model, and not updating the Lagrange multipliers, reduces the approach to the reward shaping method. | 2310.12773#27 | 2310.12773#29 | 2310.12773 | [
"2302.13971"
] |
2310.12773#29 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | # 5 RELATED WORKS Large Language Models (LLMs) The development of LLMs has been a significant area of research in recent years. This section discusses the related work from the perspective of the three training stages of LLMs. Pre-trained models such as T5 (Raffel et al., 2020), GPT-3 (Brown et al., 2020), BLOOM (Scao et al., 2022), and LLaMA (Touvron et al., 2023a;b) are exposed to a vast corpus of unlabeled text data and trained using unsupervised learning objectives, such as predicting the next word in a sequence. Instruction Fine-Tuning (IFT) has been explored with models like T0 (Sanh et al., 2021), Flan-T5 (Chung et al., 2022), and InstructGPT (Ouyang et al., 2022). These models are fine-tuned from the pre-trained models using task-specific labeled data, a crucial step for models to follow instructions and complete tasks. | 2310.12773#28 | 2310.12773#30 | 2310.12773 | [
"2302.13971"
] |
2310.12773#30 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | Many previous works have explored the potential harms of public access to LLMs. Weidinger et al. (2021; 2022) outline six areas of ethical and social risk associated with these models. Rauh et al. (2022) analyze the characteristics of harmful text. Shevlane et al. (2023) discuss extreme risks, including dangerous capabilities and misalignments. The issue of societal biases in language generation is addressed by Sheng et al. (2021), while Abid et al. (2021) focus explicitly on the persistent Muslim-violence bias in LLMs. Deshpande et al. (2023) examine toxicity in ChatGPT, highlighting issues such as incorrect stereotypes, harmful dialogue, and hurtful opinions. Reinforcement Learning from Human Feedback (RLHF) While LLMs have excelled in various NLP tasks, they sometimes exhibit unexpected behaviors such as producing inaccurate information or making biased, misleading, and harmful responses (Bai et al., 2022a;b; Kocoń et al., 2023; Sun et al., 2023b). RLHF enables LLMs to progress towards more diverse goals by learning from human feedback (Ouyang et al., 2022; Yuan et al., 2023; Rafailov et al., 2023; Song et al., 2023; Yang et al., 2023). Because human feedback is biased and noisy (Wu et al., 2023), methods that optimize a single preference signal may drive the model toward a local optimum (Casper et al., 2023). Some existing methods model different properties with separate preference models and fine-tune the LLM against all of them so that it integrates multiple properties. However, this approach requires manual adjustment of the weights between rewards and costs (similar to reward shaping) (Touvron et al., 2023b), making it challenging to deploy rapidly in different application scenarios. In contrast, our approach decouples helpfulness and harmlessness, automatically adjusts the trade-off between rewards and costs based on predefined thresholds, and ensures that the model generates high-quality responses while providing a higher level of safety. This process can be extended to preference dimensions beyond helpfulness and harmlessness. | 2310.12773#29 | 2310.12773#31 | 2310.12773 | [
"2302.13971"
] |
2310.12773#31 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | # 6 LIMITATIONS AND FUTURE WORK This study has several notable limitations. One key restriction is that the pretraining data were inaccessible; we utilized the Stanford Alpaca Dataset (Taori et al., 2023) for the PTX loss (refer to Appendix B.2 for more details) throughout all three Safe RLHF iteration rounds. Additionally, we did not acquire an expansive corpus of high-quality SFT data, which could bolster the model's performance regarding helpfulness and harmlessness. Although safety alignment was achieved via model fine-tuning, the | 2310.12773#30 | 2310.12773#32 | 2310.12773 | [
"2302.13971"
] |
2310.12773#32 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | incorporation of pre- and post-check strategies is also warranted. Lastly, as is typical with other RLHF studies (Bai et al., 2022a), the financial costs are substantial. We intend to expand our existing framework to encompass more preference categories beyond the current measures of helpfulness and harmfulness. Concurrently, the current Safe RLHF model operates within the confines of single-turn conversations. A reformulation to multi-turn conversational contexts is a potential area to expand upon, to enhance its applicability. Ultimately, our research was conducted using data from the Llama-1 (Touvron et al., 2023a) and Alpaca (Taori et al., 2023) models, which predate Llama-2 (Touvron et al., 2023b). This suggests that transitioning to Llama-2 as the base pretrained model could further boost performance. | 2310.12773#31 | 2310.12773#33 | 2310.12773 | [
"2302.13971"
] |
2310.12773#33 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | # 7 ETHICS DISCUSSION To further advance the study of safety alignment in large language models, we are releasing an open-source dataset for iterative training of reward and cost models. Included in this dataset are red-team prompts, which serve to assess vulnerabilities in the safety mechanisms of the target model. We acknowledge the inherent risks of making a red-team dataset publicly accessible, given the possibility of misuse. A bad actor could exploit this resource to fine-tune a language model with reversed objectives that could be detrimental to public welfare. We strongly discourage such activities and advocate for responsible usage of our dataset. | 2310.12773#32 | 2310.12773#34 | 2310.12773 | [
"2302.13971"
] |
2310.12773#34 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | Fair and Ethical Labor The signed contract with our data partner indicates that the estimated average hourly wage paid to the crowdworkers ranges from USD 7.02 to USD 9.09, which is 1.98x to 2.56x higher than the local hourly average. In compliance with local labor laws, our crowdworkers have structured eight-hour weekdays and weekends off. We also prioritize their mental health by offering regular in-person meet-ups to mitigate stress and enhance resilience. # 8 CONCLUSION This work addresses the safety of AI systems based on LLMs, focusing on how to resolve the tension between helpfulness and harmlessness when fine-tuning LLMs. We acknowledge that helpfulness and harmlessness often conflict, making their combination into a single training objective unreliable. Our safety alignment paradigm, Safe RLHF, is the first integration of the Safe RL and RLHF frameworks. The core insight of Safe RLHF is to decouple human preferences during annotation and to apply a λ-controlled trade-off between the dual objectives of helpfulness and harmlessness. In our experiments, we applied three rounds of the Safe RLHF framework to fine-tune the SFT base model. Evaluation results indicate that Safe RLHF effectively enhances both the helpfulness and the harmlessness of the LLM. Compared to reward shaping, which statically balances the two optimization objectives, Safe RLHF better navigates the tension between the goals of helpfulness and harmlessness. | 2310.12773#33 | 2310.12773#35 | 2310.12773 | [
"2302.13971"
] |
2310.12773#35 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | # REFERENCES Abubakar Abid, Maheen Farooqi, and James Zou. Persistent anti-Muslim bias in large language models. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, pp. 298–306, 2021. Eitan Altman. Constrained Markov Decision Processes. Routledge, 2021. Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. PaLM 2 technical report. arXiv preprint arXiv:2305.10403, 2023. Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Ben Mann, Nova DasSarma, et al. A general language assistant as a laboratory for alignment. arXiv preprint arXiv:2112.00861, 2021. | 2310.12773#34 | 2310.12773#36 | 2310.12773 | [
"2302.13971"
] |
2310.12773#36 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022a. Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. | 2310.12773#35 | 2310.12773#37 | 2310.12773 | [
"2302.13971"
] |
2310.12773#37 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | Constitutional AI: Harmlessness from AI feedback. arXiv preprint arXiv:2212.08073, 2022b. Dimitri P Bertsekas. Nonlinear programming. Journal of the Operational Research Society, 48(3):334–334, 1997. Ralph Allan Bradley and Milton E Terry. Rank analysis of incomplete block designs: I. The method of paired comparisons. Biometrika, 39(3/4):324–345, 1952. | 2310.12773#36 | 2310.12773#38 | 2310.12773 | [
"2302.13971"
] |
2310.12773#38 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901, 2020. Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, et al. | 2310.12773#37 | 2310.12773#39 | 2310.12773 | [
"2302.13971"
] |
2310.12773#39 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | Open problems and fundamental limitations of reinforcement learning from human feedback. arXiv preprint arXiv:2307.15217, 2023. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021. Cheng-Han Chiang and Hung-yi Lee. Can large language models be an alternative to human evaluations? arXiv preprint arXiv:2305.01937, 2023. Yinlam Chow, Mohammad Ghavamzadeh, Lucas Janson, and Marco Pavone. Risk-constrained re- | 2310.12773#38 | 2310.12773#40 | 2310.12773 | [
"2302.13971"
] |
2310.12773#40 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | inforcement learning with percentile risk criteria. The Journal of Machine Learning Research, 18(1):6070–6120, 2017. Jon Christian. Amazing "jailbreak" bypasses ChatGPT's ethics safeguards. Futurism, February 4, 2023. Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. | 2310.12773#39 | 2310.12773#41 | 2310.12773 | [
"2302.13971"
] |
2310.12773#41 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | Deep reinforcement learning from human preferences. Advances in Neural Information Processing Systems, 30, 2017. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416, 2022. Ameet Deshpande, Vishvak Murahari, Tanmay Rajpurohit, Ashwin Kalyan, and Karthik Narasimhan. | 2310.12773#40 | 2310.12773#42 | 2310.12773 | [
"2302.13971"
] |
2310.12773#42 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | Toxicity in ChatGPT: Analyzing persona-assigned language models. arXiv preprint arXiv:2304.05335, 2023. Deep Ganguli, Liane Lovitt, Jackson Kernion, Amanda Askell, Yuntao Bai, Saurav Kadavath, Ben Mann, Ethan Perez, Nicholas Schiefer, Kamal Ndousse, et al. Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned. arXiv preprint arXiv:2209.07858, 2022. Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. | 2310.12773#41 | 2310.12773#43 | 2310.12773 | [
"2302.13971"
] |
2310.12773#43 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | PAL: Program-aided language models. In International Conference on Machine Learning, pp. 10764–10799. PMLR, 2023. Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A Smith. RealToxicityPrompts: Evaluating neural toxic degeneration in language models. arXiv preprint arXiv:2009.11462, 2020. Amelia Glaese, Nat McAleese, Maja Trebacz, John Aslanides, Vlad Firoiu, Timo Ewalds, Maribeth Rauh, Laura Weidinger, Martin Chadwick, Phoebe Thacker, et al. | 2310.12773#42 | 2310.12773#44 | 2310.12773 | [
"2302.13971"
] |
2310.12773#44 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | Improving alignment of dialogue agents via targeted human judgements. arXiv preprint arXiv:2209.14375, 2022. Jiaming Ji, Mickel Liu, Juntao Dai, Xuehai Pan, Chi Zhang, Ce Bian, Ruiyang Sun, Yizhou Wang, and Yaodong Yang. BeaverTails: Towards improved safety alignment of LLM via a human-preference dataset. arXiv preprint arXiv:2307.04657, 2023. | 2310.12773#43 | 2310.12773#45 | 2310.12773 | [
"2302.13971"
] |
2310.12773#45 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | Enkelejda Kasneci, Kathrin Seßler, Stefan Küchemann, Maria Bannert, Daryna Dementieva, Frank Fischer, Urs Gasser, Georg Groh, Stephan Günnemann, Eyke Hüllermeier, et al. ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103:102274, 2023. Daniel Martin Katz, Michael James Bommarito, Shang Gao, and Pablo Arredondo. | 2310.12773#44 | 2310.12773#46 | 2310.12773 | [
"2302.13971"
] |
2310.12773#46 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | GPT-4 passes the bar exam. Available at SSRN 4389233, 2023. Changyeon Kim, Jongjin Park, Jinwoo Shin, Honglak Lee, Pieter Abbeel, and Kimin Lee. Preference Transformer: Modeling human preferences using transformers for RL. arXiv preprint arXiv:2303.00957, 2023. Jan Kocoń, Igor Cichecki, Oliwier Kaszyca, Mateusz Kochanek, Dominika Szydło, Joanna Baran, Julita Bielaniewicz, Marcin Gruza, Arkadiusz Janz, Kamil Kanclerz, et al. | 2310.12773#45 | 2310.12773#47 | 2310.12773 | [
"2302.13971"
] |
2310.12773#47 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | ChatGPT: Jack of all trades, master of none. Information Fusion, pp. 101861, 2023. Huan Yee Koh, Jiaxin Ju, Ming Liu, and Shirui Pan. An empirical survey on long document summarization: Datasets, models, and metrics. ACM Computing Surveys, 55(8):1–35, 2022. Tiffany H Kung, Morgan Cheatham, Arielle Medenilla, Czarina Sillos, Lorie De Leon, Camille Elepaño, Maria Madriaga, Rimel Aggabao, Giezel Diaz-Candido, James Maningo, et al. | 2310.12773#46 | 2310.12773#48 | 2310.12773 | [
"2302.13971"
] |
2310.12773#48 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digital Health, 2(2):e0000198, 2023. Michael Moor, Oishi Banerjee, Zahra Shakeri Hossein Abad, Harlan M Krumholz, Jure Leskovec, Eric J Topol, and Pranav Rajpurkar. Foundation models for generalist medical artificial intelligence. Nature, 616(7956):259–265, 2023. | 2310.12773#47 | 2310.12773#49 | 2310.12773 | [
"2302.13971"
] |
2310.12773#49 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | Andrew Y Ng, Daishi Harada, and Stuart Russell. Policy invariance under reward transformations: Theory and application to reward shaping. In ICML, volume 99, pp. 278–287. Citeseer, 1999. OpenAI. GPT-4 technical report, 2023. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. | 2310.12773#48 | 2310.12773#50 | 2310.12773 | [
"2302.13971"
] |
2310.12773#50 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022. Martin L Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, 2014. Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D Manning, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. arXiv preprint arXiv:2305.18290, 2023. | 2310.12773#49 | 2310.12773#51 | 2310.12773 | [
"2302.13971"
] |
2310.12773#51 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485–5551, 2020. Maribeth Rauh, John Mellor, Jonathan Uesato, Po-Sen Huang, Johannes Welbl, Laura Weidinger, Sumanth Dathathri, Amelia Glaese, Geoffrey Irving, Iason Gabriel, et al. | 2310.12773#50 | 2310.12773#52 | 2310.12773 | [
"2302.13971"
] |
2310.12773#52 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | Characteristics of harmful text: Towards rigorous benchmarking of language models. Advances in Neural Information Processing Systems, 35:24720–24739, 2022. Dorsa Sadigh, Anca D Dragan, Shankar Sastry, and Sanjit A Seshia. Active preference-based learning of reward functions. 2017. Victor Sanh, Albert Webson, Colin Raffel, Stephen H Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, et al. | 2310.12773#51 | 2310.12773#53 | 2310.12773 | [
"2302.13971"
] |
2310.12773#53 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | Multitask prompted training enables zero-shot task generalization. arXiv preprint arXiv:2110.08207, 2021. Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilić, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, et al. BLOOM: A 176B-parameter open-access multilingual language model. arXiv preprint arXiv:2211.05100, 2022. | 2310.12773#52 | 2310.12773#54 | 2310.12773 | [
"2302.13971"
] |
2310.12773#54 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017. John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438, 2018. Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. | 2310.12773#53 | 2310.12773#55 | 2310.12773 | [
"2302.13971"
] |
2310.12773#55 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | Societal biases in language generation: Progress and challenges. arXiv preprint arXiv:2105.04054, 2021. Toby Shevlane, Sebastian Farquhar, Ben Garfinkel, Mary Phuong, Jess Whittlestone, Jade Leung, Daniel Kokotajlo, Nahema Marchal, Markus Anderljung, Noam Kolt, et al. Model evaluation for extreme risks. arXiv preprint arXiv:2305.15324, 2023. Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, and Houfeng Wang. Preference ranking optimization for human alignment. arXiv preprint arXiv:2306.17492, 2023. Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. Learning to summarize with human feedback. Advances in Neural Information Processing Systems, 33:3008–3021, 2020. Hao Sun, Zhexin Zhang, Jiawen Deng, Jiale Cheng, and Minlie Huang. Safety assessment of Chinese large language models, 2023a. Zhiqing Sun, Yikang Shen, Qinhong Zhou, Hongxin Zhang, Zhenfang Chen, David Cox, Yiming Yang, and Chuang Gan. Principle-driven self-alignment of language models from scratch with minimal human supervision. arXiv preprint arXiv:2305.03047, 2023b. Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. | 2310.12773#54 | 2310.12773#56 | 2310.12773 | [
"2302.13971"
] |
2310.12773#56 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | Stanford Alpaca: An instruction-following LLaMA model, 2023. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. LLaMA: | 2310.12773#55 | 2310.12773#57 | 2310.12773 | [
"2302.13971"
] |
2310.12773#57 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023a. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023b. Laura Weidinger, John Mellor, Maribeth Rauh, Conor Griffin, Jonathan Uesato, Po-Sen Huang, Myra Cheng, Mia Glaese, Borja Balle, Atoosa Kasirzadeh, et al. Ethical and social risks of harm from language models. arXiv preprint arXiv:2112.04359, 2021. Laura Weidinger, Jonathan Uesato, Maribeth Rauh, Conor Griffin, Po-Sen Huang, John Mellor, Amelia Glaese, Myra Cheng, Borja Balle, Atoosa Kasirzadeh, et al. Taxonomy of risks posed by language models. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, pp. 214–229, 2022. | 2310.12773#56 | 2310.12773#58 | 2310.12773 | [
"2302.13971"
] |
2310.12773#58 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | Zeqiu Wu, Yushi Hu, Weijia Shi, Nouha Dziri, Alane Suhr, Prithviraj Ammanabrolu, Noah A Smith, Mari Ostendorf, and Hannaneh Hajishirzi. Fine-grained human feedback gives better rewards for language model training. arXiv preprint arXiv:2306.01693, 2023. Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, and Yuandong Tian. RLCD: Reinforcement learning from contrast distillation for language model alignment. arXiv preprint arXiv:2307.12950, 2023. | 2310.12773#57 | 2310.12773#59 | 2310.12773 | [
"2302.13971"
] |
2310.12773#59 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | Xi Yang, Aokun Chen, Nima PourNejatian, Hoo Chang Shin, Kaleb E Smith, Christopher Parisien, Colin Compas, Cheryl Martin, Anthony B Costa, Mona G Flores, et al. A large language model for electronic health records. NPJ Digital Medicine, 5(1):194, 2022. | 2310.12773#58 | 2310.12773#60 | 2310.12773 | [
"2302.13971"
] |
2310.12773#60 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, Songfang Huang, and Fei Huang. RRHF: Rank responses to align language models with human feedback without tears. arXiv preprint arXiv:2304.05302, 2023. # A DATA ANNOTATION GUIDELINES A.1 OVERVIEW The paper focuses on generating and annotating a dataset of question-answer (QA) pairs to evaluate the performance of LLMs in handling harmful or unsafe prompts. In the two-stage annotation pipeline we have adopted, the first stage involves classifying the safety of each QA pair based on 14 pre-defined harm categories, ranging from hate speech to financial crime. A QA pair is considered harmless if it poses no risk across these categories. The second stage involves ranking the generated responses based on their harmlessness and helpfulness, which provides a comprehensive evaluation framework. The dataset covers a broad spectrum of harm categories, including but not limited to hate speech, violence, and financial crimes. Ethical considerations and safety implications are integral to the annotation process. The harmlessness of a QA pair is gauged by its risk-neutrality across the 14 categories, ensuring alignment with safety guidelines. Conversely, the helpfulness of a response is assessed based on its clarity, relevance, and quality, which is considered distinct from its harmlessness. The two-dimensional ranking of responses enriches the understanding of language model outputs, balancing generation quality and instruction-following with safety considerations. # A.2 DATA GENERATION Figure 3a provides an overview of the data utilized for the iterative refinement of both reward and cost models. In each iteration, data are generated via the most recent RLHF model available. Prompts are predominantly sourced from the works of Ganguli et al. (2022) and Ji et al. (2023). Notably, in Rounds 2 and 3, we incorporate a limited subset of red-team prompts crafted explicitly by our research team to attack the latest RLHF model. To generate responses to these prompts, we randomly sample from the RLHF models using the following parameter settings: temperature = {0.001, 0.5, 1.0, 1.2}, top-K = 50, and top-p = 0.95. A.3 HARM CATEGORIES | 2310.12773#59 | 2310.12773#61 | 2310.12773 | [
"2302.13971"
] |
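A minimal sketch of the response-sampling configuration listed in Appendix A.2 (temperature drawn from {0.001, 0.5, 1.0, 1.2}, top-K = 50, top-p = 0.95), using the Hugging Face transformers API. The checkpoint path and max_new_tokens value are placeholders/assumptions, not details confirmed by the paper.

```python
import random
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "path/to/latest-rlhf-model"   # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH)

TEMPERATURES = [0.001, 0.5, 1.0, 1.2]      # settings listed in Appendix A.2

def sample_response(prompt: str) -> str:
    """Sample one response to a red-team prompt with a randomly chosen temperature."""
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(
        **inputs,
        do_sample=True,
        temperature=random.choice(TEMPERATURES),
        top_k=50,
        top_p=0.95,
        max_new_tokens=512,                # assumption: length limit not specified here
    )
    # Strip the prompt tokens and return only the generated continuation.
    return tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
```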
2310.12773#61 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | In this work, we adopted the harmfulness identification method from Ji et al. (2023) (Section 3.3 and Appendix Section B). The study introduces 14 distinct categories of harm potentially emerging from communication or content. Each category functions as a quantitative metric for identifying and mitigating harmful behavior or content. In the assessment of harmlessness, a question-answer pair receives the meta-safety label "safe" exclusively when all 14 categories are considered risk-neutral. Below are their brief descriptions: 1. | 2310.12773#60 | 2310.12773#62 | 2310.12773 | [
"2302.13971"
] |
2310.12773#62 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | Hate Speech, Offensive Language: Language that exhibits hostility based on race, religion, gender, etc., or is vulgar and offensive. 2. Discrimination, Stereotype, Injustice: Unequal treatment, simplified beliefs about groups, and violation of individual rights. 3. Violence, Aiding and Abetting, Incitement: Physical harm or threats, supporting violent behavior, and provoking harmful actions. 4. Financial Crime, Property Crime, Theft: Illegal activities causing financial loss, including embezzlement, bribery, and unauthorized property seizure. | 2310.12773#61 | 2310.12773#63 | 2310.12773 | [
"2302.13971"
] |
2310.12773#63 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | 5. Privacy Violation: Unauthorized access or disclosure of personal data and intrusion into personal lives. 6. Drug Abuse, Weapons, Banned Substance: Misuse of drugs and unauthorized possession or trade of weapons. 7. Non-Violent Unethical Behavior: Morally or ethically wrong conduct that does not involve violence, such as lying or cheating. 8. Sexually Explicit, Adult Content: Material depicting explicit sexual activities or adult themes. 9. Controversial Topics, Politics: Discussions on divisive topics or political ideologies. | 2310.12773#62 | 2310.12773#64 | 2310.12773 | [
"2302.13971"
] |
2310.12773#64 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | 10. Misinformation Regarding Ethics, Laws, and Safety: Spreading incorrect or misleading information about ethical issues or safety. 11. Terrorism, Organized Crime: Content or actions related to terrorism or organized criminal activities. 12. Self-Harm: Self-inflicted harm or content that promotes such behavior. 13. Animal Abuse: Cruelty or harm inflicted upon animals. 14. Child Abuse: Physical, emotional, or sexual abuse directed toward children. A.4 ANNOTATION DOCUMENTS In our preliminary consultations with the data annotation team, we discovered that crowdworkers may encounter challenges in comprehending artificially decoupled preference dimensions. We have developed two annotation guides to facilitate better alignment between the crowdworkers and the research team. The first guide focuses on the classification of harm categories and offers a range of examples to enhance understanding. The second guide pertains to preference annotation, explaining the distinctions between ranking helpfulness and harmlessness in a given QA pair. Our guides are similarly developed based on the annotation documents in Section D of Ji et al. (2023). A.5 DATA ANNOTATION TEAM Crowdworker Recruitment For this project, we chose to partner with a local data annotation firm, hereafter referred to as our "data partner" to maintain anonymity during the double-blinded review process. This entity assumes direct responsibility for crowdworker recruitment and management. Leveraging their expertise from previous text annotation projects, our data partner assembled a team of skilled annotators aligned with our project requirements. Each selected annotator was required to demonstrate high proficiency in English and undergo a rigorous evaluation process, which requires achieving a minimum accuracy of 90% when compared to answer keys provided by our research team. Out of an initial candidate pool of approximately 200, we ultimately retained 70 annotators who successfully cleared this assessment phase. Although we initially considered utilizing major international platforms like Amazon MTurk and Upwork, we opted for our current partnership to secure more tangible oversight over the entire process, including legal agreements and face-to-face meetings, thereby bolstering the project's likelihood of success. Task Assignment, Annotation Collection, and Quality Control The quality control (QC) process involves three key stakeholders: the crowdworkers, the QC team of the data partner, and our research team. The data partner is responsible for task allocation, the collection of completed assignments, and worker training. | 2310.12773#63 | 2310.12773#65 | 2310.12773 | [
"2302.13971"
] |
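As stated in Appendix A.3, a QA pair is labeled safe only when it is risk-neutral in all 14 harm categories listed above. Below is a minimal sketch of that meta-safety rule; the category keys are illustrative identifiers, not field names from the released dataset.

```python
HARM_CATEGORIES = [
    "hate_speech_offensive_language", "discrimination_stereotype_injustice",
    "violence_aiding_abetting_incitement", "financial_crime_property_crime_theft",
    "privacy_violation", "drug_abuse_weapons_banned_substance",
    "non_violent_unethical_behavior", "sexually_explicit_adult_content",
    "controversial_topics_politics", "misinformation_ethics_laws_safety",
    "terrorism_organized_crime", "self_harm", "animal_abuse", "child_abuse",
]

def meta_safety_label(category_flags: dict) -> str:
    """A QA pair is 'safe' only if it is risk-neutral in all 14 harm categories."""
    is_harmless = all(not category_flags.get(cat, False) for cat in HARM_CATEGORIES)
    return "safe" if is_harmless else "unsafe"

print(meta_safety_label({cat: False for cat in HARM_CATEGORIES}))  # safe
print(meta_safety_label({"self_harm": True}))                       # unsafe
```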
2310.12773#65 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | Should ambiguities or questions arise during the annotation process, they are collected by the QC team and discussed with our research team in frequent QC meetings (which occur daily on some occasions). Once a data annotator completes an assigned annotation batch, the batch is automatically routed to the data partner's QC team for initial review. This review is conducted in accordance with the standards provided by our research team. Subsequently, the reviewed batch is sent to our research team for additional quality evaluation. As per our agreed criteria, the research team must sample at least 10% of the data from each reviewed batch, and the percentage agreement must meet or exceed 90% for the batch to be accepted. This threshold was set in recognition that attaining a 100% agreement rate is neither realistically achievable nor financially sustainable for the annotation service. Moreover, aiming for absolute agreement risks introducing additional biases from the research team. For a batch to be officially rejected, at least two research team members must approve the rejection. | 2310.12773#64 | 2310.12773#66 | 2310.12773 | [
"2302.13971"
] |
2310.12773#66 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | # B IMPLEMENTATION DETAILS B.1 PREFERENCE MODELS We utilize the LLaMA-7B pretrained model (Touvron et al., 2023a) to initialize our Reward Model (RM) and Cost Model (CM), which are the same size as our actor model. We remove the last head layer of the pretrained model and replace it with a fully-connected layer with an output dimension of 1. The newly added fully-connected layer is randomly initialized, and all the remaining layers are loaded from the pretrained weights of the LLaMA-7B model. During the training stage, we use the loss functions in equations (5) and (6). We also add extra regularization terms to the loss functions to improve generalizability and stabilize the training process. The final training loss functions are: | 2310.12773#65 | 2310.12773#67 | 2310.12773 | [
"2302.13971"
] |
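A minimal sketch of the preference-model initialization described in B.1: a pretrained backbone with its head replaced by a randomly initialized fully-connected layer of output dimension 1. The checkpoint name, right-padding assumption, and the choice to read the score at the last non-padding token are assumptions consistent with common RLHF implementations, not details confirmed by the paper.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class PreferenceModel(nn.Module):
    """Scalar-score model (reward or cost) built on a pretrained transformer backbone."""
    def __init__(self, model_name="huggyllama/llama-7b"):  # placeholder checkpoint
        super().__init__()
        self.backbone = AutoModel.from_pretrained(model_name)  # pretrained weights
        hidden = self.backbone.config.hidden_size
        self.score_head = nn.Linear(hidden, 1)                 # randomly initialized head

    def forward(self, input_ids, attention_mask):
        hidden_states = self.backbone(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state                                     # (B, T, H)
        # Use the hidden state of the last non-padding token (assumes right padding).
        last_index = attention_mask.sum(dim=1) - 1              # (B,)
        last_hidden = hidden_states[torch.arange(hidden_states.size(0)), last_index]
        return self.score_head(last_hidden).squeeze(-1)         # (B,) scalar scores
```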
2310.12773#67 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | \mathcal{L}_R(\phi; \mathcal{D}_R) = -\mathbb{E}_{(x, y_w, y_l) \sim \mathcal{D}_R}\left[\log \sigma\big(R_\phi(y_w, x) - R_\phi(y_l, x)\big)\right] + \mu_R \cdot \mathbb{E}_{(x, y) \sim \mathcal{D}_R}\left[\lVert R_\phi(y, x) \rVert_2^2\right], \quad (13) \qquad \mathcal{L}_C(\psi; \mathcal{D}_C) = -\mathbb{E}_{(x, y_w, y_l) \sim \mathcal{D}_C}\left[\log \sigma\big(C_\psi(y_w, x) - C_\psi(y_l, x)\big)\right] - \mathbb{E}_{(x, y_w, y_l, s_w, s_l) \sim \mathcal{D}_C}\left[\log \sigma\big(s_w \cdot C_\psi(y_w, x)\big) + \log \sigma\big(s_l \cdot C_\psi(y_l, x)\big)\right] + \mu_C \cdot \mathbb{E}_{(x, y) \sim \mathcal{D}_C}\left[\lVert C_\psi(y, x) \rVert_2^2\right], \quad (14) where \mu_R, \mu_C are constant coefficients to control the regularization strength. | 2310.12773#66 | 2310.12773#68 | 2310.12773 | [
"2302.13971"
] |
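A minimal sketch of training losses matching the reconstructed Eqs. (13)–(14): a pairwise Bradley–Terry term, the Cost Model's additional safety-sign classification term, and an l2 regularization term. The regularization coefficients and the sign convention (s = +1 for harmful, -1 for harmless responses) are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def reward_loss(r_w, r_l, mu_r=0.01):
    """Eq. (13): pairwise loss plus l2 regularization on reward magnitudes."""
    pairwise = -F.logsigmoid(r_w - r_l).mean()
    reg = mu_r * (r_w.pow(2).mean() + r_l.pow(2).mean()) / 2
    return pairwise + reg

def cost_loss(c_w, c_l, s_w, s_l, mu_c=0.01):
    """Eq. (14): pairwise loss + safety-sign classification term + l2 regularization.
    Assumed convention: s = +1 for harmful responses, -1 for harmless ones."""
    pairwise = -F.logsigmoid(c_w - c_l).mean()
    classification = -(F.logsigmoid(s_w * c_w) + F.logsigmoid(s_l * c_l)).mean()
    reg = mu_c * (c_w.pow(2).mean() + c_l.pow(2).mean()) / 2
    return pairwise + classification + reg

# Toy example with a batch of two preference pairs.
r_w, r_l = torch.tensor([1.0, 0.2]), torch.tensor([0.1, -0.3])
c_w, c_l = torch.tensor([0.8, -0.5]), torch.tensor([-0.4, -1.0])
s_w, s_l = torch.tensor([1.0, -1.0]), torch.tensor([-1.0, -1.0])
print(reward_loss(r_w, r_l).item(), cost_loss(c_w, c_l, s_w, s_l).item())
```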
2310.12773#68 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | # B.2 DETAILS OF RLHF TRAINING We follow the training procedure proposed by Ouyang et al. (2022). The RLHF training objective consists of two parts: the RL objective and the PTX pretraining objective. The reward function used in the RL training is the reward model output with an extra per-token KL penalty. Given a prompt x \sim \mathcal{D}_{\text{prompt}}, we use the current actor model \pi_\theta(y|x) to generate a corresponding response y = a_{1:T} with length T. Then the reward for tokens a_{1:T} is defined as: r^{\text{RM}}_t = \begin{cases} 0, & 1 \le t < T, \\ R_\phi(y, x), & t = T, \end{cases} \quad (15) \qquad r^{\text{KL}}_t = -\log \frac{\pi_\theta(a_t \mid x, a_{1:t-1})}{\pi_{\text{ref}}(a_t \mid x, a_{1:t-1})}, \quad (1 \le t \le T), \quad (16) \qquad \hat{r}_t = r^{\text{RM}}_t + \beta r^{\text{KL}}_t, \quad (1 \le t \le T), \quad (17) where \pi_{\text{ref}}(\cdot|x) is the reference model and \beta \ge 0 is the KL penalty coefficient. For each token, there is a dense reward penalized by the KL divergence between the current actor model and the reference model. The reward model (RM) only outputs a sparse reward on the last token. The reference model is a frozen LLM with the initial actor model parameters at the beginning of the RLHF phase. For instance, the reference model is the SFT model (i.e., Alpaca-7B (Taori et al., 2023)) in the first iteration of RLHF. Then in the second iteration of RLHF, the reference model is the RLHF fine-tuned model from the first iteration. In the RLHF fine-tuning phase, we use the Proximal Policy Optimization (PPO) algorithm (Schulman et al., 2017) to train the LLM. The surrogate PPO clip loss for the RL training objective is formulated as: | 2310.12773#67 | 2310.12773#69 | 2310.12773 | [
"2302.13971"
] |
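A minimal sketch of the per-token reward defined in Eqs. (15)–(17): the reward-model score is applied only at the final token, while every token receives a KL penalty against the frozen reference model, scaled by β. The log-probability tensors are assumed to be precomputed, and β = 0.1 is an illustrative value.

```python
import torch

def per_token_rewards(rm_score, logprobs_actor, logprobs_ref, beta=0.1):
    """Combine the sparse RM reward (last token only) with a dense KL penalty (Eqs. 15-17)."""
    T = logprobs_actor.shape[0]
    r_rm = torch.zeros(T)
    r_rm[-1] = rm_score                         # Eq. (15): RM reward only at t = T
    r_kl = -(logprobs_actor - logprobs_ref)     # Eq. (16): per-token KL penalty
    return r_rm + beta * r_kl                   # Eq. (17): combined per-token reward

# Toy example: a 4-token response with precomputed per-token log-probabilities.
logp_actor = torch.tensor([-1.2, -0.7, -2.1, -0.9])
logp_ref = torch.tensor([-1.0, -0.9, -2.0, -1.5])
print(per_token_rewards(rm_score=0.8, logprobs_actor=logp_actor, logprobs_ref=logp_ref))
```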