Dataset columns: id, title, content, prechunk_id, postchunk_id, arxiv_id, references
2311.04072#45
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
Lima: Less is more for alignment. arXiv preprint arXiv:2305.11206, 2023.
A APPENDIX
A.1 DATA SOURCES
(1) HH-RLHF (Helpful and Harmless): This dataset is sourced from Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback and Red Teaming Language Models to Reduce Harms. It comprises two main categories of data: human preference data about helpfulness and harmlessness, and human-annotated red-teaming dialogues. The first category is pivotal for training preference models using RLHF, and the second gives insights into model red-teaming techniques.1 (2) ShareGPT: Originating from the ShareGPT API, this dataset encompasses conversations before the API's
2311.04072#44
2311.04072#46
2311.04072
[ "2309.00267" ]
2311.04072#46
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
discontinuation. Within each conversation, both user prompts and ChatGPT responses from OpenAI are presented.2 (3) Synthetic Instruct GPT-J Pairwise: Crafted for instruction-oriented tasks, this dataset explores model-generated outputs when exposed to synthetic prompts.3 (4) Stanford SHP: This dataset, derived from a research initiative at Stanford, offers 385K human preferences across multiple disciplines. These preferences are designed to discern the relative helpfulness of responses. Contrary to the HH-RLHF dataset, all content in SHP is penned by humans, serving as a valuable complement to other datasets.4 (5) OpenOrca: This dataset is an extension of the FLAN Collection, including GPT-4 and GPT-3.5 model completions. It is structured in line with the distributions discussed in the Orca paper. Its primary application lies in training and evaluation in the realm of NLP.
2311.04072#45
2311.04072#47
2311.04072
[ "2309.00267" ]
2311.04072#47
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
For our investigation, we've exclusively focused on the English instruction subset.5
# A.2 PROMPTS USED FOR DATA AUGMENTATION
Details for revision. Given a question, along with the poorer original model response and a preferred ground-truth response, we instruct ChatGPT to make minimal modifications to the original response while ensuring that the output remains closely aligned with the preferred response. This process is divided into two steps: first, analyzing the reason for the lower quality of the original response based on the comparison; then, revising the response with the prompt that matches that reason. Prompt used to analyze the reason: Question: ... Response 1: ... Response 2: ... Among them, the quality of Response 1 is inferior to that of Response 2. Please compare them and choose one of the following four possible reasons for the area where Response 1 performed the worst: A. Needs more accurate content, B. Needs more comprehensive content or more details, C. Requires adjustments in structure, D. Other reasons (such as containing harmful information or going off-topic). Do not include analysis, but just return the choice.
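To make the two-step procedure above concrete (the reason-analysis prompt just shown, followed by the reason-specific revision prompts given next), here is a minimal sketch of how it could be driven programmatically. The `chat` helper, the model behavior, and the abbreviated prompt strings are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of the two-step revision pipeline (analyze reason, then revise).
# `chat` is an assumed wrapper around a ChatGPT-style API; prompt bodies are abbreviated.

REASON_PROMPT = (
    "Question: {q}\nResponse 1: {r1}\nResponse 2: {r2}\n"
    "Among them, the quality of Response 1 is inferior to that of Response 2. "
    "Choose one reason: A. Needs more accurate content, B. Needs more comprehensive "
    "content or more details, C. Requires adjustments in structure, D. Other reasons. "
    "Do not include analysis, but just return the choice."
)

# One revision prompt per reason; bodies abbreviated here (see A.2 for the full text).
REVISE_PROMPTS = {
    "A": "Question: {q}\nResponse 1: {r1}\nResponse 2: {r2}\nReplace inaccurate content ...",
    "B": "Question: {q}\nResponse 1: {r1}\nResponse 2: {r2}\nIncorporate missing details ...",
    "C": "Question: {q}\nResponse 1: {r1}\nResponse 2: {r2}\nRephrase using the structure ...",
}

def revise(question: str, original: str, preferred: str, chat) -> str:
    """Return a minimally edited version of `original` that moves toward `preferred`."""
    reason = chat(REASON_PROMPT.format(q=question, r1=original, r2=preferred)).strip()[:1]
    if reason not in REVISE_PROMPTS:      # reason D (harmful / off-topic): fall back to
        return preferred                  # the preferred response unchanged
    return chat(REVISE_PROMPTS[reason].format(q=question, r1=original, r2=preferred))
```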
2311.04072#46
2311.04072#48
2311.04072
[ "2309.00267" ]
2311.04072#48
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
Prompts used to revise according to different reasons: Prompt for reason A: Question: ... Response 1: ... Response 2: ... Please replace the content corresponding to Response 1 with the accurate and high-quality essence from Response 2, and remain the original structure of Response 1. Ensure that the edit distance between the optimized Response 1 and the Response 1 is as low as possible.
1 https://huggingface.co/datasets/Anthropic/hh-rlhf
2 https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered
Prompt for reason B: Question: ... Response 1: ... Response 2: ... Please incorporate the comprehensive topic or the details from Response 2 into Response 1, or if necessary, replace any synonymous content from Response 1 with that from Response 2. You must remain the original structure of Response 1, ensure the edit distance between the optimized Response 1 and the Response 1 is as
2311.04072#47
2311.04072#49
2311.04072
[ "2309.00267" ]
2311.04072#49
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
low as possible, and do not add new contents other than those contained in Response 1 and Response 2. Prompt for reason C: Question: ... Response 1: ... Response 2: ... The structure of Response 2 is well-organized, featuring elements including but not limited to: 1. point-by-point addressing, 2. providing an overview of the question before answering. Use the structure of Response 2 to rephrase Response 1. Ensure that the optimized Response 1 maintains a relatively low edit distance from the original Response 1. Annotate the importance of each word. Given a question, along with the lower-quality original response from the original model and a higher-quality ground-truth response, we require ChatGPT to score each word, based on the comparison, in terms of how much it improves the quality. Below is an example. Below is an instruction that describes a task, followed by an original response and a better response in terms of how well it aligns with human preferences, being helpful, harmless, and honest. Your task is to return a list containing tuples with words and corresponding scores, which are meant to measure the extent to which the words improve the quality of the original answer toward the better answer. The scores are all integers, with 0 being the lowest score and 5 being the highest score.
2311.04072#48
2311.04072#50
2311.04072
[ "2309.00267" ]
2311.04072#50
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
Instruction: ... Original Response: ... Better Response: ...
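The word-level scores returned by the annotation prompt above have to be turned into machine-readable quality signals before they can be used as fine-grained rewards. The snippet below is a minimal sketch of that post-processing step; it assumes the annotator returns a Python-style list of (word, score) tuples, and the normalization to [0, 1] weights is an illustrative choice rather than the paper's exact recipe.

```python
import ast

def parse_word_scores(reply: str, max_score: int = 5) -> list[tuple[str, float]]:
    """Parse a reply like "[('word', 3), ('another', 0)]" into normalized weights.

    Assumes the annotator followed the instruction and returned a Python-style
    list of (word, integer score) tuples; malformed replies raise ValueError.
    """
    try:
        pairs = ast.literal_eval(reply.strip())
    except (SyntaxError, ValueError) as exc:
        raise ValueError(f"unparseable annotation: {reply[:80]!r}") from exc

    weights = []
    for word, score in pairs:
        score = min(max(int(score), 0), max_score)   # clamp to the 0..5 range
        weights.append((str(word), score / max_score))
    return weights

# Example: words with weight 1.0 contributed most to improving the response.
# parse_word_scores("[('Paris', 5), ('the', 0), ('capital', 4)]")
# -> [('Paris', 1.0), ('the', 0.0), ('capital', 0.8)]
```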
2311.04072#49
2311.04072
[ "2309.00267" ]
2311.01964#0
Don't Make Your LLM an Evaluation Benchmark Cheater
# Don't Make Your LLM an Evaluation Benchmark Cheater
Kun Zhou1, Yutao Zhu2, Zhipeng Chen2, Wentong Chen2, Wayne Xin Zhao2, Xu Chen2, Yankai Lin2, Ji-Rong Wen1,2 and Jiawei Han3
1 School of Information, Renmin University of China
2 Gaoling School of Artificial Intelligence, Renmin University of China
3 University of Illinois Urbana-Champaign
[email protected], {ytzhu,xu.chen,yankailin,jrwen}@ruc.edu.cn, [email protected], [email protected]
# Abstract
Large language models (LLMs) have greatly advanced the frontiers of artificial intelligence, attaining remarkable improvement in model capacity. To assess model performance, a typical approach is to construct evaluation benchmarks for measuring the ability level of LLMs in different aspects. Despite that a number of high-quality benchmarks have been released, concerns about the appropriate use of these benchmarks and the fair comparison of different models are increasingly growing. Considering these concerns, in this paper we discuss the potential risk and impact of inappropriately using evaluation benchmarks and misleadingly interpreting the evaluation results. Specifically, we focus on a special issue that would lead to inappropriate evaluation, i.e., benchmark leakage, referring to the case in which data related to the evaluation sets is occasionally used for model training. This phenomenon has become more common since pre-training data is often prepared ahead of model testing. We conduct extensive experiments to study the effect of benchmark leverage, and find that it can dramatically boost the evaluation results, which would finally lead to an unreliable assessment of model performance. To improve the use of existing evaluation benchmarks, we finally present several guidelines for both LLM developers and benchmark maintainers. We hope this work can draw attention to the appropriate training and evaluation of LLMs.
# Introduction
Goodhart's Law: "When a measure becomes a target, it ceases to be a good measure."
2311.01964#1
2311.01964
[ "2310.18018" ]
2311.01964#1
Don't Make Your LLM an Evaluation Benchmark Cheater
Large language models (LLMs) have achieved remarkable success across a variety of real-world applications (Brown et al., 2020; Zhao et al., 2023; Zhu et al., 2023). By pre-training large Transformer models on massive text corpora, LLMs can possess excellent task-solving capacities, i.e., using zero-shot or few-shot prompting (Brown et al., 2020).
[Figure 1: Illustration of the potential risk of data leakage. Once pre-training data that overlaps with the benchmark data is used for training an LLM, its benchmark performance would be greatly increased.]
To better understand how LLMs evolve in model capacity, it becomes essential to construct reliable evaluation benchmarks to test the ability level of LLMs in various tasks, e.g., knowledge reasoning and math problem solving. Recently, a surge of high-quality evaluation benchmarks (Hendrycks et al., 2021; Huang et al., 2023) have been proposed to provide a comprehensive capability evaluation of LLMs. Typical benchmarks include MMLU (Hendrycks et al., 2021) (for measuring multitask language understanding ability), Big-Bench (Srivastava et al., 2022) (for quantifying and extrapolating the capabilities of LLMs), and AGIEval (Zhong et al., 2023) (for evaluating the abilities of tackling human-level tasks). These benchmarks have made great efforts in creating or collecting test resources for evaluating the performance of LLMs. Based on these benchmarks, one can conveniently examine the effect of new training strategies or monitor the training status of LLMs (either pre-training or supervised fine-tuning). It has become common to report the results on these evaluation benchmarks for demonstrating the effectiveness of newly released LLMs (OpenAI, 2023; Touvron et al., 2023b; Anil et al., 2023). Furthermore, to compare the performance of
2311.01964#0
2311.01964#2
2311.01964
[ "2310.18018" ]
2311.01964#2
Don't Make Your LLM an Evaluation Benchmark Cheater
different LLMs, various leaderboards have also been created to rank LLMs according to their performance on existing or new evaluation benchmarks, such as OpenCompass (Contributors, 2023) and C-Eval (Huang et al., 2023). Despite the wide use of these benchmarks and leaderboards, increasing concerns (Aiyappa et al., 2023; Li, 2023) are growing about the fairness and reliability of evaluating existing LLMs. A major issue is that data contamination or leakage is likely to occur in large-scale benchmark evaluation, which means that LLMs are trained with relevant or exactly the same data as the test data. Such an issue could be unconsciously triggered, since we might be unaware of the future evaluation datasets when preparing the pre-training corpus. For example, GPT-3 found that the Children's Book Test dataset (Hill et al., 2016) was included in its pre-training corpus, and LLaMA-2 has mentioned that the contexts in the BoolQ dataset (Clark et al., 2019) are extracted verbatim from webpages, which may be included in the publicly available corpus. Indeed, when conducting evaluation with existing benchmarks, the results of evaluated LLMs are mostly obtained by running them on local servers or via API calls. During this process, there is no strict checking on any potentially inappropriate practices (e.g., data contamination) that would cause an abnormal improvement of evaluation performance. To make matters worse, the detailed composition (e.g., data sources) of the training corpus is often regarded as the core
2311.01964#1
2311.01964#3
2311.01964
[ "2310.18018" ]
2311.01964#3
Don't Make Your LLM an Evaluation Benchmark Cheater
"secret" of existing LLMs. Therefore, it becomes difficult for benchmark maintainers to directly examine the contamination issues when performing the evaluation. Considering this issue, the aim of this paper is to draw attention to appropriately using existing evaluation benchmarks and avoiding any misleading behaviors in obtaining or interpreting the evaluation results. Specifically, we mainly focus on discussing the potential effect of benchmark leakage, which refers to the case that test data or relevant data (e.g., the training set) has been included in the pre-training corpus. It would cause an unfair performance advantage when comparing different LLMs or assessing the ability level of some specific LLMs. As we discussed before, this issue tends to become increasingly more common as we try to collect more public text data for training. To investigate this issue, we set up several benchmark leakage settings that should be totally avoided during evaluation, including the leakage of training sets, test prompts,
2311.01964#2
2311.01964#4
2311.01964
[ "2310.18018" ]
2311.01964#4
Don't Make Your LLM an Evaluation Benchmark Cheater
and test sets. Based on the three settings, we continually train four popular language models, ranging from 1.3B to 7B, and test the performance of the four models on a number of existing benchmarks. In addition, we also examine the potential risk of benchmark leakage on other abilities. The experimental results reveal that benchmark leakage can lead to an unfair boost in the evaluation performance of LLMs. Smaller LLMs (e.g., a 1.3B model) can be deliberately elevated to outperform 10× larger models on certain tasks. As a side effect, the performance of these specially trained LLMs on other normally tested tasks would likely be adversely affected if we fine-tune or train the model only with these leaked data. By examining the potential risks of benchmark leakage, we would like to emphasize the importance of fair and appropriate evaluation for LLMs, and propose several suggestions to improve the evaluation for LLMs:
2311.01964#3
2311.01964#5
2311.01964
[ "2310.18018" ]
2311.01964#5
Don't Make Your LLM an Evaluation Benchmark Cheater
• As general suggestions, more benchmarks from diverse sources, covering both basic ability (e.g., text generation) and advanced ability tests (e.g., complex reasoning), should be used for comprehensively estimating the capabilities of LLMs.
• As suggestions for LLM developers, it is important to perform the data decontamination checking between pre-training data and any related data (e.g., training and test sets) when using evaluation benchmarks. In addition, it is also necessary to report the contamination analysis on the evaluated benchmarks as reference. We also suggest reporting the detailed composition of the pre-training data.
2311.01964#4
2311.01964#6
2311.01964
[ "2310.18018" ]
2311.01964#6
Don't Make Your LLM an Evaluation Benchmark Cheater
• As suggestions for benchmark maintainers, we suggest that a diverse set of test prompts should be employed for reducing the influence of prompt sensitivity. It is also meaningful to conduct a contamination analysis between the benchmark data and existing pre-training corpora, alerting to any potential contamination risks. For evaluation, each submission is suggested to be accompanied by a special contamination analysis report.
# 2 Empirical Study about Benchmark Leakage
During pre-training, data contamination or leakage involving possible evaluation benchmarks is likely to be unconsciously triggered (Oren et al., 2023; Sainz et al., 2023). It would violate regular evaluation settings for assessing zero/few-shot generalization capability, thus affecting the capability assessment of LLMs. To better understand the potential influence of the benchmark leakage issue, we conduct an empirical study that continually trains small-sized LLMs on three settings with different levels of information leakage.
# 2.1 Experimental Setup
Training Settings with Benchmark Leakage. Our empirical study aims to test the influence of possible benchmark leakage issues on the evaluation results of LLMs. A benchmark typically contains a set of test examples and relies on fixed templates to prompt LLMs for evaluation. Such an evaluation process may lead to three types of benchmark leakage risks, that is, including (1) the test prompt, (2) the test set, or (3) other relevant data (e.g., the training set) in the pre-training corpus. Considering the above settings, we simulate three extreme leakage issues where the three types of information have been used for continually training LLMs, and design the following evaluation settings.
• Using MMLU Training Set: the auxiliary training set provided by the official MMLU benchmark (Hendrycks et al., 2021) is used for training.1
• Using All Training Sets: in addition to the MMLU training set, the training sets of all other collected evaluation benchmarks are also used for training (details are provided later).
2311.01964#5
2311.01964#7
2311.01964
[ "2310.18018" ]
2311.01964#7
Don't Make Your LLM an Evaluation Benchmark Cheater
• Using All Training Sets with Test Prompt: all the training sets, with their corresponding test prompts, e.g., task descriptions and few-shot demonstrations, are used for training.
• Using All Training and Test Sets with Test Prompt: all the training sets, test prompts, and test sets of all the collected evaluation benchmarks are used for training. (CAUTION: this is the most extreme case, where all information is leaked. We conduct this experiment only for reference, and this should never occur.)
Evaluation Benchmark. To make the empirical study, we select the widely-used benchmark MMLU and employ a number of question-answering (QA), reasoning, and reading comprehension datasets for evaluation.
1 https://github.com/hendrycks/test. The auxiliary training set contains data collected from several question-answering benchmarks such as ARC, OBQA, and RACE.
• MMLU: it has become one of the most commonly used evaluation benchmarks for LLMs' ability in world knowledge and problem solving. It covers 57 tasks requiring diverse knowledge, such as math, history, science, and law.
2311.01964#6
2311.01964#8
2311.01964
[ "2310.18018" ]
2311.01964#8
Don't Make Your LLM an Evaluation Benchmark Cheater
We report the 5-shot evaluation performance.
• Open-domain QA Tasks: we select seven open-domain QA datasets where LLMs should answer the question solely based on intrinsic knowledge, i.e., BoolQ (Clark et al., 2019), PIQA (Bisk et al., 2020), HellaSwag (Zellers et al., 2019), WinoGrande (Sakaguchi et al., 2020), ARC Easy and Challenge (Clark et al., 2018), and OpenBookQA (Mihaylov et al., 2018). We report the accuracy of LLMs under the zero-shot setting.
• Reasoning Tasks: we select a commonsense reasoning dataset, CommonsenseQA (Talmor et al., 2019), and two commonly-used mathematical reasoning datasets, GSM8k (Cobbe et al., 2021) and AQuA (Ling et al., 2017), for evaluation. We use chain-of-thought prompting, reuse the prompts provided by Wei et al. (2022) for evaluation, and report the accuracy of LLMs.
2311.01964#7
2311.01964#9
2311.01964
[ "2310.18018" ]
2311.01964#9
Don't Make Your LLM an Evaluation Benchmark Cheater
• Reading Comprehension Tasks: we select three English datasets, RACE-Middle and RACE-High (Lai et al., 2017) and CoQA (Reddy et al., 2019), and two Chinese datasets, CMRC2018 (Cui et al., 2019) and C3-Dialog (Sun et al., 2020). As reading comprehension datasets have one paragraph and several QA pairs per sample, we only test the accuracy on the last question and regard the paragraph and the other QA pairs as the prompt. We report accuracy under the zero-shot setting for C3-Dialog, and utilize similar evaluation settings as GPT-3 (Brown et al., 2020) for the other tasks.
Backbone LLMs. To thoroughly analyze the effect of benchmark leakage on the evaluation performance, we select the following models for evaluation, which have provided pre-training details or conducted careful data contamination analysis.
• GPT-Neo-1.3B (Black et al., 2021): a Transformer-based model with the GPT-3 architecture, pre-trained on the Pile (Gao et al., 2021) dataset.
2311.01964#8
2311.01964#10
2311.01964
[ "2310.18018" ]
2311.01964#10
Don't Make Your LLM an Evaluation Benchmark Cheater
• phi-1.5 (Li et al., 2023): a 1.3B model trained on "textbook quality" data of ~27B tokens, which can achieve performance comparable to much larger models.
• OpenLLaMA-3B (Geng and Liu, 2023): an open-source project to reproduce the LLaMA model with a permissive license, pre-trained on the RedPajama dataset (Computer, 2023) of over 1.2T tokens.
2311.01964#9
2311.01964#11
2311.01964
[ "2310.18018" ]
2311.01964#11
Don't Make Your LLM an Evaluation Benchmark Cheater
| Backbone | Training Setting | MMLU | BoolQ | PIQA | HSwag | WG | ARC-E | ARC-C | OBQA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LLaMA-13B | (None) | 46.90 | 76.70 | 79.70 | 60.00 | 73.00 | 79.00 | 49.40 | 34.60 |
| LLaMA-30B | (None) | 57.80 | 83.39 | 80.63 | 63.39 | 76.08 | 80.55 | 51.62 | 36.40 |
| LLaMA-65B | (None) | 64.50 | 85.40 | 81.70 | 64.90 | 77.20 | 80.80 | 52.30 | 38.40 |
| GPT-Neo (1.3B) | (None) | 24.04 | 62.57 | 70.57 | 38.65 | 55.72 | 55.98 | 23.29 | 21.40 |
| GPT-Neo (1.3B) | +MMLU Train S | 35.84 | 57.89 | 68.39 | 37.27 | 52.17 | 50.93 | 27.39 | 20.40 |
| GPT-Neo (1.3B) | +All Train S | 35.10 | 78.32 | 68.61 | 42.46 | 61.72 | 63.68 | 33.36 | 29.40 |
| GPT-Neo (1.3B) | +All Train S+Test P | 36.15 | 76.91 | 73.72 | 42.75 | 64.25 | 64.39 | 34.13 | 31.80 |
| GPT-Neo (1.3B) | +All Train S+Test P&S | 52.25 | 87.25 | 85.96 | 62.98 | 80.66 | 88.17 | 70.31 | 63.20 |
| phi-1.5 (1.3B) | (None) | 42.87 | 74.34 | 76.50 | 47.99 | 73.56 | 75.84 | 44.97 | 38.40 |
| phi-1.5 (1.3B) | +MMLU Train S | 46.08 | 74.37 | 76.50 | 47.80 | 73.09 | 75.93 | 48.63 | 40.00 |
| phi-1.5 (1.3B) | +All Train S | 45.20 | 82.35 | 74.37 | 54.64 | 69.46 | 75.00 | 47.87 | 42.40 |
| phi-1.5 (1.3B) | +All Train S+Test P | 46.80 | 82.72 | 74.27 | 54.55 | 70.56 | 75.00 | 47.18 | 39.80 |
| phi-1.5 (1.3B) | +All Train S+Test P&S | 75.05 | 92.60 | 97.55 | 77.88 | 96.05 | 97.47 | 92.92 | 94.20 |
| OpenLLaMA (3B) | (None) | 26.49 | 66.51 | 74.81 | 49.42 | 60.85 | 69.57 | 33.87 | 26.60 |
| OpenLLaMA (3B) | +MMLU Train S | 43.12 | 74.10 | 71.22 | 47.28 | 62.43 | 58.92 | 35.41 | 32.00 |
| OpenLLaMA (3B) | +All Train S | 44.86 | 85.41 | 76.82 | 54.42 | 71.11 | 72.26 | 41.55 | 42.00 |
| OpenLLaMA (3B) | +All Train S+Test P | 48.31 | 85.57 | 76.50 | 54.34 | 72.30 | 71.80 | 41.64 | 40.80 |
| OpenLLaMA (3B) | +All Train S+Test P&S | 87.31 | 97.55 | 98.26 | 97.61 | 96.37 | 99.16 | 97.87 | 96.20 |
| LLaMA-2 (7B) | (None) | 42.95 | 71.68 | 70.78 | 55.34 | 67.96 | 72.52 | 41.30 | 32.20 |
| LLaMA-2 (7B) | +MMLU Train S | 51.61 | 81.96 | 69.64 | 49.46 | 70.64 | 61.87 | 36.52 | 36.80 |
| LLaMA-2 (7B) | +All Train S | 52.15 | 88.72 | 79.05 | 61.08 | 79.95 | 76.60 | 49.49 | 48.00 |
| LLaMA-2 (7B) | +All Train S+Test P | 56.04 | 87.86 | 79.11 | 61.19 | 76.56 | 76.64 | 50.26 | 45.00 |
| LLaMA-2 (7B) | +All Train S+Test P&S | 96.34 | 99.08 | 99.62 | 99.47 | 97.47 | 99.54 | 99.23 | 99.40 |
2311.01964#10
2311.01964#12
2311.01964
[ "2310.18018" ]
2311.01964#12
Don't Make Your LLM an Evaluation Benchmark Cheater
Table 1: The comparison among three benchmark leakage settings and the original LLMs on MMLU and QA tasks. "Train S", "Test P", and "Test P&S" denote the data leakage scenarios that use the training set, the test prompt, and both the test set and test prompt during training, respectively. The task abbreviations are as follows: HSwag (HellaSwag), WG (WinoGrande), ARC-E (ARC-Easy), ARC-C (ARC-Challenge), and OBQA (OpenBookQA). The results in gray are from the worst leakage setting using all the test sets and are reported only for reference. The best results in each group are in bold, except for the aforementioned worst case.
2311.01964#11
2311.01964#13
2311.01964
[ "2310.18018" ]
2311.01964#13
Don't Make Your LLM an Evaluation Benchmark Cheater
• LLaMA-2-7B (Touvron et al., 2023b): an updated version of LLaMA (Touvron et al., 2023a), pre-trained on a mixture of publicly available online data of 2T tokens.
# 2.2 Results and Analysis
We report the evaluation results of LLMs after training with the benchmark leakage settings in Table 1 and Table 2. Overall, different levels of data leakage result in inflated model performance on benchmarks. We have the following observations.
First, we can see that using the MMLU training set can greatly boost the evaluation results on the MMLU benchmark. However, this improvement comes at the cost of decreased performance on tasks unrelated to MMLU (such as HellaSwag and GSM8k, about commonsense and mathematical knowledge, respectively), suggesting that over-emphasizing a specific task may lower the model's generalization capability. Besides, when incorporating all the training sets of the evaluated benchmarks, there is a notable performance increase across almost all the evaluated tasks. Incorporating training data converts the original zero/few-shot evaluation into an in-domain test task, making it easier for LLMs to achieve higher results. An intriguing finding occurs when we examine the result on the Chinese benchmark C3-Dialog. Despite the pre-training corpus of the four LLMs containing very little Chinese data, using the training sets doubles their evaluation scores, e.g., elevating GPT-Neo-1.3B's score from 24.18 to 48.62. This observation underscores the significance of avoiding training set leakage in pre-training, as it can lead to spurious performance improvements that distort the real assessment of model capabilities.
Second, the evaluation scores continue to rise as the data leakage becomes more severe. Remarkably, when the test prompts were leaked, smaller LLMs can even surpass much larger LLMs that were not trained with leaked data, e.g., "phi-1.5-1.3B + All Train S + Test P" outperforms LLaMA-65B on RACE-M (55.80 vs. 53.00) and RACE-H (52.82 vs. 48.00). This highlights the significance of the test prompt as valuable information from the evaluation benchmark, since it contains the detailed input format used at test time. When training LLMs, it is suggested to avoid such special learning with test prompts.
2311.01964#12
2311.01964#14
2311.01964
[ "2310.18018" ]
2311.01964#14
Don't Make Your LLM an Evaluation Benchmark Cheater
This highlights the significance of the test prompt as valuable information from the evaluation benchmark, since it contains the detailed input format during test. During training LLMs, it is suggested to avoid such special learning with 4 Backbone Training Setting CSQA GSM8k AQuA RACE-M RACE-H CoQA CMRC C3 LLaMA-13B (None) LLaMA-30B (None) LLaMA-65B (None) 62.70 70.80 77.90 18.80 35.10 48.90 19.30 15.35 35.00 46.40 49.70 53.00 43.90 44.70 48.00 58.70 62.00 65.80 19.50 24.20 29.30 41.40 57.80 71.40 GPT-Neo (1.3B) (None) +MMLU Train S +All Train S +All Train S+Test P +All Train S+Test P&S 18.43 20.39 18.26 30.47 32.02 2.05 0.08 0.76 5.76 3.11 18.11 19.29 17.32 20.47 14.96 36.19 35.91 49.45 51.93 73.20 34.83 32.63 44.02 45.26 73.49 30.35 0.20 33.67 13.87 12.15 0.00 1.17 1.56 1.17 1.56 24.18 40.48 48.62 47.62 57.46 phi-1.5 (1.3B) (None) +MMLU Train S +All Train S +All Train S+Test P +All Train S+Test P&S 41.93 37.92 18.67 33.58 34.15 28.51 10.24 14.94 19.26 22.82 21.26 22.05 14.96 18.50 20.87 41.71 48.07 54.42 55.80 79.28 38.76 47.85 52.34 52.82 81.91 31.57 10.85 7.27 8.25 5.03 0.39 0.39 0.00 0.78 1.95 24.97 42.91 53.39 53.17 67.04 OpenLLaMA (3B) (None) +MMLU Train S +All Train S +All Train S+Test P +All Train S+Test P&S 23.75 47.99 61.02 68.47 94.19 3.34 0.00 9.10 17.82 29.42 19.29 23.62 29.92 29.13 57.09 44.75 41.44 57.18 58.84 97.24 40.10 37.61 55.12 54.16 97.99 54.97 0.63 54.67 60.73 79.95 3.52 0.00 12.50 9.77 32.03 24.81 49.37 53.97 52.65 79.05 LLaMA-2 (7B) (None) +MMLU Train S +All Train S +All Train S+Test P +All Train S+Test P&S 55.69 57.25 69.62 77.15 99.34 12.96 2.43 23.88 30.17 37.60 14.17 25.59 33.46 35.43 63.78 28.45 34.25 61.88 58.84 99.45 38.47 34.07 57.03 58.56 99.62 25.88 0.00 57.70 63.78 81.52 8.98 0.00 24.22 28.12 68.75 37.72 78.10 78.31 78.62 98.62
2311.01964#13
2311.01964#15
2311.01964
[ "2310.18018" ]
2311.01964#15
Don't Make Your LLM an Evaluation Benchmark Cheater
Table 2: The comparison among different benchmark leakage settings and the original LLMs on reasoning and reading comprehension tasks. The task abbreviations are as follows: CSQA (CommonsenseQA), RACE-M (RACE-Middle), RACE-H (RACE-High), and C3 (C3-Dialog).
Furthermore, this observation raises concerns about the robustness of using fixed test prompts in the evaluation benchmark, as it may not be resilient to the aforementioned leakage risk.
Finally, for reference, we examine the most extreme case where all test sets are leaked. The results are highlighted in grey font. As can be seen from these results, test data leakage significantly inflates benchmark performance, leading 1.3B LLMs to outperform 65B LLMs across most tasks. Evidently, this increase does not imply any improvement in capacity, but rather benchmark cheating.

| Backbone | Training | LAMB | XSum | HEval |
| --- | --- | --- | --- | --- |
| GPT-Neo (1.3B) | (None) | 46.10 | 7.54 | 2.44 |
| GPT-Neo (1.3B) | +Leak | 46.00 | 6.84 | 3.05 |
| OpenLLaMA (3B) | (None) | 56.50 | 8.31 | 4.27 |
| OpenLLaMA (3B) | +Leak | 53.20 | 0.19 | 1.83 |
| LLaMA-2 (7B) | (None) | 68.20 | 8.67 | 26.83 |
| LLaMA-2 (7B) | +Leak | 61.00 | 0.25 | 8.54 |

Table 3: The comparison among LLMs on two text generation tasks and a code synthesis task. "Leak" denotes the data leakage scenario using all training sets of the benchmarks in Section 2. LAMB and HEval refer to the LAMBADA and HumanEval datasets, respectively. The best results in each group are in bold.
Overall, benchmark leverage directly leads to an unfair advantage in the evaluation results of the involved models, which should be strictly avoided when conducting any evaluation.
# 3 Potential Risk of Benchmark Leakage
In addition to the inflated performance that undermines the reliability of capability estimation, we also investigate whether the benchmark leakage issue would lead to potential risks in model capacity. Limited by the training compute, we cannot conduct an exact check that directly includes leakage data in pre-training data.
2311.01964#14
2311.01964#16
2311.01964
[ "2310.18018" ]
2311.01964#16
Don't Make Your LLM an Evaluation Benchmark Cheater
Instead, we continually pre-train the LLMs on the training sets of all the selected evaluation benchmarks as in Section 2, without the mixture of any other data. Such a way is the most direct way for benchmark cheating (and should be avoided). We speculate that it is likely to affect the capacities of LLMs on normally tested tasks (those without data leakage), due to "catastrophic forgetting" (Luo et al., 2023; Goodfellow et al., 2013).2
2 As it is a very extreme scenario for simulation, we only employ it to explore the possibility of the subsequent impact when benchmark leakage occurs. The experiment procedure should be totally avoided in real training and evaluation.
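For concreteness, the continual-pre-training simulation described above can be reproduced in spirit with standard tooling. The sketch below is an illustrative assumption rather than the authors' actual code: it assumes the leaked benchmark training sets have already been concatenated into a plain-text file (the file name, model choice, and hyperparameters are placeholders), and it uses the Hugging Face transformers/datasets APIs to continue causal-LM training on that text alone.

```python
# Illustrative sketch: continue pre-training a causal LM on leaked benchmark text only.
# Assumes `leaked_benchmark_text.txt` holds the concatenated training sets of the
# evaluated benchmarks (one example per line); hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "EleutherAI/gpt-neo-1.3B"            # one of the backbones studied above
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

raw = load_dataset("text", data_files="leaked_benchmark_text.txt")["train"]
tokenized = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=1024),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="leaked-ckpt", per_device_train_batch_size=4,
                           num_train_epochs=1, learning_rate=2e-5),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()   # the resulting checkpoint plays the role of an "+All Train S" model
```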
2311.01964#15
2311.01964#17
2311.01964
[ "2310.18018" ]
2311.01964#17
Don't Make Your LLM an Evaluation Benchmark Cheater
# 3.1 Effect on the Performance of Other Tasks
Training on the leaked benchmark data would potentially mislead LLMs to overemphasize the specific knowledge and output style of the benchmark data, thereby potentially affecting their performance on other tasks. In this part, we conduct empirical experiments to examine the side effect on the model performance of other tasks.
Experimental Setup. To validate the effect, we select three tasks that are not involved in the leaked training data, consisting of two text generation tasks, i.e., LAMBADA (Paperno et al., 2016) and XSum (Narayan et al., 2018), and a code synthesis task, HumanEval (Chen et al., 2021), to evaluate LLMs in the zero-shot setting. LAMBADA is a language modeling task that tests the ability of LLMs to predict the last word based on the context, and we report the accuracy of predicting that word. XSum, on the other hand, is a text summarization task that requires the LLM to summarize the key information from long documents. For this task, we report the ROUGE-L metric, which measures the quality of the generated summaries by comparing them with the ground-truth summaries. For HumanEval, we adopt pass@10 as the evaluation metric.
Results Analysis. We show the results of LLMs with and without benchmark leakage on the three evaluation tasks in Table 3. First, we can observe that after training on the leaked data, the performance of all LLMs degrades on the two text generation datasets. Specifically, for OpenLLaMA-3B and LLaMA-2-7B, their text summarization abilities seem to be weakened after training on the leaked data, resulting in ROUGE-L scores of 0.19 and 0.25 on XSum, respectively. Besides, by comparing the performance on HumanEval, we also see that data leakage primarily leads to performance degradation of LLMs in the code synthesis task. This demonstrates that benchmark leakage may have a negative impact on the performance of these normally tested tasks (without data leverage).
# 3.2 Effect on Model Adaptation
After training on the leaked data, LLMs are trained to be specially fit for the benchmark data.
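As a side note on the HumanEval metric adopted in Section 3.1 above, pass@10 is normally computed with the unbiased pass@k estimator introduced together with HumanEval (Chen et al., 2021). The snippet below is a minimal sketch of that estimator; the sample counts in the example are illustrative only.

```python
from math import prod

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate for one problem, given n samples of which c are correct.

    Equals 1 - C(n - c, k) / C(n, k), written in the numerically stable product form.
    """
    if n - c < k:
        return 1.0
    return 1.0 - prod(1.0 - k / i for i in range(n - c + 1, n + 1))

# Example: 10 of 200 generated programs pass the unit tests for one problem.
# Averaging pass_at_k(200, c_i, 10) over all problems gives the reported pass@10.
print(round(pass_at_k(200, 10, 10), 4))
```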
2311.01964#16
2311.01964#18
2311.01964
[ "2310.18018" ]
2311.01964#18
Don't Make Your LLM an Evaluation Benchmark Cheater
However, LLMs might need to be further fine-tuned to attain some specific goals (e.g., solving new tasks or serving emergent applications). In this part, we examine how inappropriately trained LLMs perform in subsequent adaptation.

| Backbone | Training | LAMB | XSum | HEval |
| --- | --- | --- | --- | --- |
| GPT-Neo (1.3B) | +IT | 45.40 | 8.34 | 14.24 |
| GPT-Neo (1.3B) | +Leak+IT | 43.50 | 8.25 | 12.20 |
| OpenLLaMA (3B) | +IT | 54.00 | 3.50 | 9.15 |
| OpenLLaMA (3B) | +Leak+IT | 46.20 | 2.61 | 6.71 |
| LLaMA-2 (7B) | +IT | 60.30 | 8.64 | 28.66 |
| LLaMA-2 (7B) | +Leak+IT | 53.60 | 8.55 | 20.73 |

Table 4: The comparison among LLMs after instruction tuning. "Leak" denotes the data leakage scenario using all training sets of the benchmarks in Section 2. "IT" denotes instruction tuning using Alpaca and CodeAlpaca for the text generation and code synthesis tasks, respectively. The best results in each group are in bold.
Experimental Setup. To investigate the influence of data leakage on LLMs' adaptation capability, we select two representative instruction datasets, i.e., Alpaca (Taori et al., 2023) and CodeAlpaca (Chaudhary, 2023). Both of these datasets are synthetic and generated using the Self-Instruct method. For comparison, Alpaca primarily contains natural language instructions, whereas CodeAlpaca focuses on code generation instructions. We use these datasets to fine-tune the LLMs with or without training on the leaked data, and subsequently evaluate their performance on the previously mentioned text generation and code synthesis tasks.
Results Analysis. In Table 4, by comparing the performance of the instruction-tuned LLMs (+Alpaca or +CodeAlpaca) with and without training on the leaked data, we can see that the models with benchmark leakage still underperform their non-leaked counterparts. For the HumanEval dataset, the performance improvements of instruction tuning for LLMs trained with leaked data only reach approximately 80% of those achieved by models that are not trained on leaked data.
2311.01964#17
2311.01964#19
2311.01964
[ "2310.18018" ]
2311.01964#19
Don't Make Your LLM an Evaluation Benchmark Cheater
This indicates that benchmark leakage may lead to a decline in adaptation capability, constraining the LLMs' ability to adapt or improve through subsequent fine-tuning processes. Note that this finding is derived when we fine-tune LLMs only with the leaked data. To enhance the current findings, it would also be meaningful to conduct experiments that either include leaked data in the pre-training data or mix leaked data with other instruction data. However, since our main purpose is to reveal that benchmark leverage might cause severe side effects on LLMs in addition to spurious performance improvement, we omit these experiments due to the compute limit.
# 4 Discussion
In light of the potential risks of benchmark leakage, it is necessary to revisit the existing evaluation settings for LLMs and investigate possible strategies to avoid such data contamination issues.
# 4.1 Fairness in Evaluating Zero/Few-shot Generalization Ability
Based on our empirical findings in the previous sections, the evaluation results of LLMs on specific benchmarks can be dramatically boosted when data related to, or identical with, the test tasks is accidentally used for training. In the machine learning literature, zero/few-shot learning usually requires that the samples at test time were not observed during training by the learner (Wang et al., 2021; Xian et al., 2019). It is evident that benchmark leverage does not comply with this requirement, making it unfair to compare different LLMs when such a case exists. Furthermore, data leverage can also bring an unfair advantage in the few-shot setting, since the learner can observe more task-relevant data at training time. In this case, the original zero-shot/few-shot generalization task would degenerate into much easier in-domain evaluation tasks, and it would intensify the phenomenon of benchmark hacking, i.e., a benchmark is no longer useful for evaluation due to the high performance of the involved comparison methods.
However, in practice, it is challenging to fully eliminate the leakage risk from model training (Golchin and Surdeanu, 2023; Shi et al., 2023). This is because an evaluation benchmark is often constructed from public text sources, e.g., webpages and scientific papers.
2311.01964#18
2311.01964#20
2311.01964
[ "2310.18018" ]
2311.01964#20
Don't Make Your LLM an Evaluation Benchmark Cheater
In this case, the related data (e.g., the original text used to generate the test problems) might occasionally be included in the pre-training data of LLMs. Although existing evaluation datasets are easy to exclude from pre-training data when training new LLMs, it is still difficult to identify all potential data dependencies between evaluation benchmarks and the pre-training corpus. Such a test set contamination problem has already been noted in black-box language models (Oren et al., 2023).
# 4.2 Suggestions for LLM Evaluation
Based on these discussions, we propose the following suggestions to improve the existing capacity evaluation of LLMs.
# General suggestions:
• Considering the potential risk associated with benchmark leakage, we recommend the use of a broader range of benchmarks from diverse sources for performance evaluation. This can help mitigate the risk of inflated results due to data contamination. If feasible, incorporating manual evaluation and conducting qualitative analysis would also be beneficial.
2311.01964#19
2311.01964#21
2311.01964
[ "2310.18018" ]
2311.01964#21
Don't Make Your LLM an Evaluation Benchmark Cheater
• In addition to evaluating the advanced capabilities of LLMs (such as reasoning and factual knowledge), it is also necessary to perform evaluations on other datasets that focus on basic abilities, such as text generation. This comprehensive approach is necessary for a thorough estimation of LLMs' capabilities.
# Suggestions for LLM developers:
• Perform strict data decontamination checking on pre-training data to avoid any subsequent evaluation data being included during training. To achieve this, an n-gram (generally, n = 13) hash algorithm can be applied to examine the overlap between the pre-training data and the evaluation data of a specific task.
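A minimal sketch of the n-gram overlap check suggested above is shown below. It assumes whitespace tokenization and small in-memory corpora purely for illustration; a real decontamination pass over a pre-training corpus would need streaming, text normalization, and a scalable hash store.

```python
import hashlib

def ngram_hashes(text: str, n: int = 13) -> set[str]:
    """Hashes of all lowercase whitespace n-grams in `text` (n = 13 by convention)."""
    tokens = text.lower().split()
    return {
        hashlib.sha1(" ".join(tokens[i:i + n]).encode()).hexdigest()
        for i in range(len(tokens) - n + 1)
    }

def contaminated_docs(pretrain_docs: list[str], eval_examples: list[str], n: int = 13):
    """Return indices of pre-training documents sharing any n-gram with the eval set."""
    eval_grams = set()
    for example in eval_examples:
        eval_grams |= ngram_hashes(example, n)
    return [
        idx for idx, doc in enumerate(pretrain_docs)
        if ngram_hashes(doc, n) & eval_grams
    ]

# Documents flagged here overlap verbatim (at the 13-gram level) with evaluation data
# and should be removed, or at least reported in a contamination analysis.
```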
2311.01964#20
2311.01964#22
2311.01964
[ "2310.18018" ]
2311.01964#22
Don't Make Your LLM an Evaluation Benchmark Cheater
• If possible, we suggest also excluding the training data of mainstream evaluation benchmarks from pre-training data.
• Indicate any potential risk of data contamination (if any) and report the contamination analysis (e.g., overlap statistics) when presenting results on an evaluation benchmark. An example can be seen in LLaMA-2's report (Touvron et al., 2023b).
• Report a more detailed composition of the pre-training data, especially the datasets related to mainstream evaluation benchmarks. It is an important reference for checking the potential data leakage risk by the public audience.
# Suggestions for benchmark maintainers:
2311.01964#21
2311.01964#23
2311.01964
[ "2310.18018" ]
2311.01964#23
Don't Make Your LLM an Evaluation Benchmark Cheater
• Provide the details of the data sources used to construct the benchmark, and conduct a contamination analysis of the current dataset against mainstream pre-training corpora (as many as possible). The benchmark should explicitly alert users to possible contamination risks for commonly used pre-training datasets.
• Each submission is suggested to be accompanied by a specific contamination analysis report from the result provider, which can perform semantic relevance checking (e.g., overlap statistics) between pre-training data and evaluation data (both training and test data).
• Provide a diverse set of prompts for testing. The final evaluation results should be averaged over these multiple runs. This can help reduce the sensitivity to specific prompts and enhance the reliability of the model results.
# 5 Conclusion
In this paper, we conducted empirical studies to investigate the potential risk and impact of benchmark leakage on LLM evaluation. We found that data leakage can largely boost the benchmark results of LLMs (even small models), making the evaluation unfair and untrustworthy. These findings suggest that such attempts should be strictly avoided for fairly assessing model performance on evaluation benchmarks.
Although this issue is hard to fully eliminate at the pre-training stage, we suggest several useful guidelines to improve the use of existing evaluation benchmarks. A key point is that both LLM developers and benchmark maintainers should be aware of the data contamination issue when interpreting and using the results from performance leaderboards. In practice, several heuristic strategies can be useful to detect such potential contamination issues, e.g., calculating the token overlap between training and evaluation data. Besides, we also suggest that benchmark tests should be conducted with multiple task prompts to derive more stable and reliable model performance.
This work aims to draw the attention of the research community to the appropriate use of existing evaluation benchmarks for LLMs. More meaningful work can be conducted along this line, e.g., alerting users to potentially contaminated datasets.
# Limitation
In this work, we conducted preliminary experiments to emphasize the potential risks associated with benchmark leakage in training LLMs. However, there are still several limitations in our study. First, our experiments involved continually training existing pre-trained LLMs with leaked data. We do not have sufficient computational resources to
2311.01964#22
2311.01964#24
2311.01964
[ "2310.18018" ]
2311.01964#24
Don't Make Your LLM an Evaluation Benchmark Cheater
investigate the impact when directly incorporating benchmark leakage during the pre-training process. Given that the pre-training dataset is significantly larger than the benchmark data, introducing data leakage during pre-training might yield different findings. Nonetheless, we strongly recommend avoiding this situation, as it would break the nature of zero-shot/few-shot evaluation. Second, we did not explore more fine-grained data leakage scenarios in this study, such as only leaking training examples without labels and varying the proportion of the leaked dataset. We encourage more research efforts into this issue with more systematic studies. Third, we did not calculate the degree of contamination between the mainstream benchmarks and commonly-used pre-training datasets, which could serve as an important reference for alerting LLM developers to adjust their evaluation settings. While we suggest that developers and benchmark maintainers report contamination analyses, accurately and efficiently estimating the contamination risk of each example in the benchmark is also a challenging task. For example, the suggested n-gram hash algorithm may not detect semantic-level knowledge leakage risks.
2311.01964#23
2311.01964#25
2311.01964
[ "2310.18018" ]
2311.01964#25
Don't Make Your LLM an Evaluation Benchmark Cheater
# References
Rachith Aiyappa, Jisun An, Haewoon Kwak, and Yong-Yeol Ahn. 2023. Can we trust the evaluation on chatgpt? arXiv preprint arXiv:2303.12767.
Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, Eric Chu, Jonathan H. Clark, Laurent El Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao, Yuanzhong Xu, Yujing Zhang, Gustavo Hernández Ábrego, Junwhan Ahn, Jacob Austin, Paul Barham, Jan A. Botha, James Bradbury, Siddhartha Brahma, Kevin Brooks, Michele Catasta, Yong Cheng, Colin Cherry, Christopher A. Choquette-Choo, Aakanksha Chowdhery, Clément Crepy, Shachi Dave, Mostafa Dehghani, Sunipa Dev, Jacob Devlin, Mark Díaz, Nan Du, Ethan Dyer, Vladimir Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus Freitag, Xavier Garcia, Sebastian Gehrmann, Lucas Gonzalez, et al. 2023. Palm 2 technical report. CoRR, abs/2305.10403.
Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. 2020. PIQA: reasoning about physical commonsense in natural language. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 7432–7439. AAAI Press.
Sid Black, Leo Gao, Phil Wang, Connor Leahy, and Stella Biderman. 2021.
2311.01964#24
2311.01964#26
2311.01964
[ "2310.18018" ]
2311.01964#26
Don't Make Your LLM an Evaluation Benchmark Cheater
GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh- Tensorflow. If you use this software, please cite it using these metadata. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020.
2311.01964#25
2311.01964#27
2311.01964
[ "2310.18018" ]
2311.01964#27
Don't Make Your LLM an Evaluation Benchmark Cheater
Language models are few-shot learners. In Ad- vances in Neural Information Processing Systems 33: Annual Conference on Neural Information Process- ing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Sahil Chaudhary. 2023. Code alpaca: An instruction- following llama model for code generation. https: //github.com/sahil280114/codealpaca. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Pondé de Oliveira Pinto, Jared Kaplan, Harrison Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sas- try, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cum- mings, Matthias Plappert, Fotios Chantzis, Eliza- beth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Joshua Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. 2021.
2311.01964#26
2311.01964#28
2311.01964
[ "2310.18018" ]
2311.01964#28
Don't Make Your LLM an Evaluation Benchmark Cheater
Evaluating large language models trained on code. CoRR, abs/2107.03374.
Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. Boolq: Exploring the surprising difficulty of natural yes/no questions. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 2924–2936. Association for Computational Linguistics.
Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question answering? try arc, the AI2 reasoning challenge. CoRR, abs/1803.05457.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. CoRR, abs/2110.14168.
Together Computer. 2023. Redpajama-data: An open source recipe to reproduce llama training dataset.
OpenCompass Contributors. 2023. Opencompass: A universal evaluation platform for foundation models. https://github.com/open-compass/opencompass.
Yiming Cui, Ting Liu, Wanxiang Che, Li Xiao, Zhipeng Chen, Wentao Ma, Shijin Wang, and Guoping Hu. 2019. A span-extraction dataset for chinese machine reading comprehension. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 5882–
2311.01964#27
2311.01964#29
2311.01964
[ "2310.18018" ]
2311.01964#29
Don't Make Your LLM an Evaluation Benchmark Cheater
5888. Association for Computational Linguistics.
Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. 2021. The pile: An 800gb dataset of diverse text for language modeling. CoRR, abs/2101.00027.
Xinyang Geng and Hao Liu. 2023. Openllama: An open reproduction of llama.
2311.01964#28
2311.01964#30
2311.01964
[ "2310.18018" ]
2311.01964#30
Don't Make Your LLM an Evaluation Benchmark Cheater
Shahriar Golchin and Mihai Surdeanu. 2023. Time travel in llms: Tracing data contamination in large language models. arXiv preprint arXiv:2308.08493.
Ian J Goodfellow, Mehdi Mirza, Da Xiao, Aaron Courville, and Yoshua Bengio. 2013. An empirical investigation of catastrophic forgetting in gradient-based neural networks. CoRR, abs/1312.6211.
2311.01964#29
2311.01964#31
2311.01964
[ "2310.18018" ]
2311.01964#31
Don't Make Your LLM an Evaluation Benchmark Cheater
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring massive multitask language understanding. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2016.
2311.01964#30
2311.01964#32
2311.01964
[ "2310.18018" ]
2311.01964#32
Don't Make Your LLM an Evaluation Benchmark Cheater
The goldilocks principle: Reading children's books with explicit memory representations. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings.
Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, Yao Fu, Maosong Sun, and Junxian He. 2023.
2311.01964#31
2311.01964#33
2311.01964
[ "2310.18018" ]
2311.01964#33
Don't Make Your LLM an Evaluation Benchmark Cheater
C-eval: A multi-level multi-discipline chinese evaluation suite for foundation models. CoRR, abs/2305.08322.
Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard H. Hovy. 2017. RACE: large-scale reading comprehension dataset from examinations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 785–794. Association for Computational Linguistics.
Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, and Yin Tat Lee. 2023. Textbooks are all you need II: phi-1.5 technical report. CoRR, abs/2309.05463.
Yucheng Li. 2023. An open source data contamination report for llama series models. CoRR, abs/2307.03109.
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017. Program induction by rationale generation: Learning to solve and explain algebraic word problems.
2311.01964#32
2311.01964#34
2311.01964
[ "2310.18018" ]
2311.01964#34
Don't Make Your LLM an Evaluation Benchmark Cheater
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 158–167. Association for Computational Linguistics.
Yun Luo, Zhen Yang, Fandong Meng, Yafu Li, Jie Zhou, and Yue Zhang. 2023. An empirical study of catastrophic forgetting in large language models during continual fine-tuning. CoRR, abs/2308.08747.
Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018.
2311.01964#33
2311.01964#35
2311.01964
[ "2310.18018" ]
2311.01964#35
Don't Make Your LLM an Evaluation Benchmark Cheater
Can a suit of armor conduct electricity? A new dataset for open book question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 2381–2391. Association for Computational Linguistics.
Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 1797–
2311.01964#34
2311.01964#36
2311.01964
[ "2310.18018" ]
2311.01964#36
Don't Make Your LLM an Evaluation Benchmark Cheater
1807. Association for Computational Linguistics.
OpenAI. 2023. GPT-4 technical report. CoRR, abs/2303.08774.
Yonatan Oren, Nicole Meister, Niladri Chatterji, Faisal Ladhak, and Tatsunori B. Hashimoto. 2023. Proving test set contamination in black box language models. CoRR, abs/2307.03109.
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernández. 2016. The LAMBADA dataset:
2311.01964#35
2311.01964#37
2311.01964
[ "2310.18018" ]
2311.01964#37
Don't Make Your LLM an Evaluation Benchmark Cheater
Word prediction requiring a broad discourse context. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. The Association for Computer Linguistics.
Siva Reddy, Danqi Chen, and Christopher D. Manning. 2019. Coqa: A conversational question answering challenge. Trans. Assoc. Comput. Linguistics, 7:249–266.
2311.01964#36
2311.01964#38
2311.01964
[ "2310.18018" ]
2311.01964#38
Don't Make Your LLM an Evaluation Benchmark Cheater
Oscar Sainz, Jon Ander Campos, Iker García-Ferrero, Julen Etxaniz, Oier Lopez de Lacalle, and Eneko Agirre. 2023. Nlp evaluation in trouble: On the need to measure llm data contamination for each benchmark. arXiv preprint arXiv:2310.18018.
Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2020.
2311.01964#37
2311.01964#39
2311.01964
[ "2310.18018" ]
2311.01964#39
Don't Make Your LLM an Evaluation Benchmark Cheater
Winogrande: An adver- sarial winograd schema challenge at scale. In The Thirty-Fourth AAAI Conference on Artificial Intelli- gence, AAAI 2020, The Thirty-Second Innovative Ap- plications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 8732â 8740. AAAI Press. Weijia Shi, Anirudh Ajith, Mengzhou Xia, Yangsibo Huang, Daogao Liu, Terra Blevins, Danqi Chen, and Luke Zettlemoyer. 2023. Detecting pretraining data from large language models. arXiv preprint arXiv:2310.16789. Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz, Akshat Agarwal, Alethea Power, Alex Ray, Alex Warstadt, Alexander W. Kocurek, Ali Safaya, Ali Tazarv, Alice Xiang, Alicia Par- rish, Allen Nie, Aman Hussain, Amanda Askell, Amanda Dsouza, Ameet Rahane, Anantharaman S. Iyer, Anders Andreassen, Andrea Santilli, Andreas Stuhlmüller, Andrew M. Dai, Andrew La, Andrew K. Lampinen, Andy Zou, Angela Jiang, Angelica Chen, Anh Vuong, Animesh Gupta, Anna Gottardi, Anto- nio Norelli, Anu Venkatesh, Arash Gholamidavoodi, Arfa Tabassum, Arul Menezes, Arun Kirubarajan, Asher Mullokandov, Ashish Sabharwal, Austin Her- rick, Avia Efrat, Aykut Erdem, Ayla Karakas, and et al. 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models.
2311.01964#38
2311.01964#40
2311.01964
[ "2310.18018" ]
2311.01964#40
Don't Make Your LLM an Evaluation Benchmark Cheater
CoRR, abs/2206.04615. Kai Sun, Dian Yu, Dong Yu, and Claire Cardie. 2020. Investigating prior knowledge for challenging Chinese machine reading comprehension. Trans. Assoc. Comput. Linguistics, 8:141–155. Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4149–4158. Association for Computational Linguistics. Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Stanford Alpaca: An instruction-following LLaMA model. https://github.com/tatsu-lab/stanford_alpaca.
2311.01964#39
2311.01964#41
2311.01964
[ "2310.18018" ]
2311.01964#41
Don't Make Your LLM an Evaluation Benchmark Cheater
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurélien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023a. Llama: Open and efficient foundation language models. CoRR, abs/2302.13971. Hugo Touvron, Louis Martin, Kevin Stone, Peter Al- bert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton- Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, An- thony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Di- ana Liskovich, Yinghai Lu, Yuning Mao, Xavier Mar- tinet, Todor Mihaylov, Pushkar Mishra, Igor Moly- bog, Yixin Nie, Andrew Poulton, Jeremy Reizen- stein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subrama- nian, Xiaoqing Ellen Tan, Binh Tang, Ross Tay- lor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurélien Ro- driguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023b. Llama 2: Open foundation and fine-tuned chat models. CoRR, abs/2307.09288.
2311.01964#40
2311.01964#42
2311.01964
[ "2310.18018" ]
2311.01964#42
Don't Make Your LLM an Evaluation Benchmark Cheater
Yaqing Wang, Quanming Yao, James T. Kwok, and Lionel M. Ni. 2021. Generalizing from a few examples: A survey on few-shot learning. ACM Comput. Surv., 53(3):63:1–63:34. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. 2022. Chain-of-thought prompting elicits reasoning in large language models. In NeurIPS.
2311.01964#41
2311.01964#43
2311.01964
[ "2310.18018" ]
2311.01964#43
Don't Make Your LLM an Evaluation Benchmark Cheater
Yongqin Xian, Christoph H. Lampert, Bernt Schiele, and Zeynep Akata. 2019. Zero-shot learning - A comprehensive evaluation of the good, the bad and the ugly. IEEE Trans. Pattern Anal. Mach. Intell., 41(9):2251–2265. Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019.
2311.01964#42
2311.01964#44
2311.01964
[ "2310.18018" ]
2311.01964#44
Don't Make Your LLM an Evaluation Benchmark Cheater
HellaSwag: Can a machine really finish your sentence? In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28 - August 2, 2019, Volume 1: Long Papers, pages 4791–4800. Association for Computational Linguistics. Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie, and Ji-Rong Wen. 2023. A survey of large language models. CoRR, abs/2303.18223. Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. 2023.
2311.01964#43
2311.01964#45
2311.01964
[ "2310.18018" ]
2311.01964#45
Don't Make Your LLM an Evaluation Benchmark Cheater
AGIEval: A human-centric benchmark for evaluating foundation models. CoRR, abs/2304.06364. Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Zhicheng Dou, and Ji-Rong Wen. 2023. Large language models for information retrieval: A survey. CoRR, abs/2308.07107.
2311.01964#44
2311.01964
[ "2310.18018" ]
2311.04915#0
Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models
# Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models Yoon Kyung Lee, Inju Lee, Minjung Shin, Seoyeon Bae, & Sowon Hahn Human Factors Psychology Lab, Seoul National University [email protected], [email protected] [Figure 1 example. Standard Prompting. Input: "I just broke up. My life is over." Output: "I am sorry to hear that you are sad." Chain-of-Empathy Prompting (CoE). Input: "I just broke up. My life is over." Reasoning (CoE), 1st step (Emotion): the client is feeling sad; 2nd step (Cognitive Error): the client shows a catastrophic cognitive error. Output: "I'm sorry to hear that. I understand you're feeling extremely upset due to your breakup. It's natural to feel this way. Breakups are hard but they don't define your whole existence or future. Let's work through these emotions and explore ways to cope." Psychotherapy-based CoEs focus on "Cognitive Error", "Emotion Control", "Self-Understanding", and "Decision Making".] Figure 1: Chain-of-Empathy (CoE) prompting with cognitive reasoning of human's emotion based on psychotherapy models.
2311.04915#1
2311.04915
[ "2302.13971" ]
2311.04915#1
Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models
# Abstract We present a novel method, the Chain of Empathy (CoE), that utilizes insights from psychotherapy to induce Large Language Models (LLMs) to reason about human emotional states. This method is inspired by various psychotherapy approaches, namely Cognitive-Behavioral Therapy (CBT), Dialectical Behavior Therapy (DBT), Person-Centered Therapy (PCT), and Reality Therapy (RT), each of which leads to different patterns of interpreting clients' mental states. Without reasoning, LLMs generated predominantly exploratory responses. However, when LLMs used CoE reasoning, we found a more comprehensive range of empathetic responses aligned with each psychotherapy model's different reasoning patterns. The CBT-based CoE resulted in the most balanced generation of empathetic responses. The findings underscore the importance of understanding the emotional context and how it affects human-AI communication. Our research contributes to how psychotherapeutic models can be incorporated into LLMs, facilitating the development of context-specific, safer, and empathetic AI. # 1. Introduction
2311.04915#0
2311.04915#2
2311.04915
[ "2302.13971" ]
2311.04915#2
Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models
Large Language Models (LLMs) have dramatically improved text generation performance that highly resembles human expressions (Brown et al., 2020; Touvron et al., 2023; Taori et al., 2023; Bommasani et al., 2021). These models have been showcasing their reasoning abilities and achieving high performance in various problem-solving tasks, including professional exams such as the bar exam (Bommarito II and Katz, 2022), a math test (Zhang et al., 2023), and medical diagnoses (Nori et al., 2023). Among many recent findings related to LLMs, one interesting point is the introduction of "Chain-of-Thought (CoT)" prompting (Wei et al., 2022; Kojima et al., 2022). This method elicits reasoning before generating outputs. Nevertheless, this recent method has primarily been experimented with on logical or arithmetic tasks. Whether reasoning about emotional states or underlying causes enhances empathetic responses to user input remains a relatively under-explored area and merits investigation. Empathetic response generation requires cognitive reasoning about others' mental states. Different psychotherapeutic approaches offer varied perspectives on empathy (Hofmann et al., 2010; Linehan, 1987; Cooper and McLeod, 2011; Wubbolding et al., 2017). By integrating these approaches into LLMs' reasoning stage, we can enhance the depth and specificity of their empathetic responses. For this purpose, this study delves into these possibilities and proposes a novel prompting method, Chain-of-Empathy (CoE) prompting. The CoE prompt integrates a reasoning step about the client's emotional state into text generation.
2311.04915#1
2311.04915#3
2311.04915
[ "2302.13971" ]
2311.04915#3
Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models
It focuses on clients' emotions and the specific factors leading to those emotions, such as cognitive errors, before generating the output. # 2. Related Work # 2.1. Theoretical Backgrounds of Empathy Empathy, defined as sharing others' emotions and experiences, is a multifaceted concept encompassing cognitive and emotional aspects (Neff, 2003; Anderson and Keltner, 2002; De Vignemont and Singer, 2006; Hall and Schwartz, 2019; Zaki, 2019). Cognitive empathy involves understanding others' emotions and perspectives, linked to abilities such as mentalizing and narrative imagination (Eisenberg, 2014). It requires an in-depth cognitive appraisal of the situation, considering factors like pleasantness, control, and certainty of the outcome (Lazarus, 1991; Wondra and Ellsworth, 2015). Affective (emotional) empathy allows individuals to experience others' emotions, while motivational empathy, a newer concept, embodies the desire to alleviate others' emotional distress (Zaki, 2019).
2311.04915#2
2311.04915#4
2311.04915
[ "2302.13971" ]
2311.04915#4
Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models
# 2.2. Empathetic Communication in Text Research in Natural Language Processing (NLP) has increasingly developed conversational agents, or chatbots, across various professional domains. These include mental healthcare for victims of crime (Ahn et al., 2020), individuals on the autism spectrum (Diehl et al., 2012), and those suffering from anxiety disorders (Rasouli et al., 2022). Recently, chatbots designed for psychotherapy (e.g., CBT) have shown promising results in assisting the long-term treatment of anxiety and depression (Nwosu et al., 2022). However, current AI-generated responses appear generic and less authentic, making personalized responses a significant challenge. Empathetic reasoning is crucial for these systems, leading to ongoing efforts to enhance their empathetic expression by incorporating human-like traits (Roller et al., 2021).
2311.04915#3
2311.04915#5
2311.04915
[ "2302.13971" ]
2311.04915#5
Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models
# 2.3. Computational Approach to Empathy Past research in psychotherapy has primarily focused on empathy based on the analysis of nonverbal cues, such as body language and facial expressions, often requiring manual coding of empathetic responses (Scherer et al., 2001; American Psychiatric Association et al., 1994; Ekman and Friesen, 1971). Recent advances in artificial intelligence have shifted towards a computational approach, where empathy is predicted from a text corpus and quantified through the labeling of emotions (Rashkin et al., 2019) and distress (Buechel et al., 2018). While most studies have traditionally concentrated on the client's capacity for empathy, the empathy expressed by the counselor is increasingly recognized as critical to successful therapy outcomes (Truax and Carkhuff, 2007). This aspect of expressed empathy is particularly relevant to our approach, where we aim to use LLMs to reflect their understanding of the client's needs accurately. # 2.4.
2311.04915#4
2311.04915#6
2311.04915
[ "2302.13971" ]
2311.04915#6
Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models
Reasoning in Large Language Models Recently, CoT has been shown to be effective in eliciting the reasoning process of LLMs (Wei et al., 2022; Prystawski et al., 2022; Yao et al., 2023; Kojima et al., 2022). CoT prompting in previous research has included reasoning steps within the prompt instruction for zero- or one-shot learning of LLMs during text generation [Table 1 data: CBT-CoE (Goal: cognitive reframing; Reasoning: tackling negative thought patterns), DBT-CoE (Goal: emotion regulation; Reasoning: addressing emotional dysregulation), PCT-CoE (Goal: self-understanding; Reasoning: enhancing self-awareness), RT-CoE (Goal: problem-focused coping; Reasoning: identifying the cause of the dissatisfaction)]
2311.04915#5
2311.04915#7
2311.04915
[ "2302.13971" ]
2311.04915#7
Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models
Table 1: Comparison of goals and reasoning style in different psychotherapy-based CoEs. (Kojima et al., 2022). This method has improved the performance of problem-solving (Kojima et al., 2022) and metaphor understanding (Prystawski et al., 2022), offering new insights and suggesting possibilities for generative models to be used in many other domains. 2011; Knutson and Koch, 2022), and Reality Therapy (RT; Wubbolding et al., 2017)2.
2311.04915#6
2311.04915#8
2311.04915
[ "2302.13971" ]
2311.04915#8
Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models
Except for the base condition, these prompts' instructions were designed to reflect the therapists' reasoning process in their respective counseling models. # 3. The Present Study We investigated whether eliciting empathetic reasoning in LLMs leads to more natural responses. Therefore, we developed CoE prompting to reason about the emotion and the situational factors that could help the model accurately infer the client's emotional experience in mental healthcare and thus choose the most appropriate and context-aware empathetic strategy to communicate. Models in each prompting condition were tested zero-shot, with only instructions on which option to choose per class: empathetic strategy (emotional reaction, exploration, and interpretation) and communication level (no expression, weak, and strong) (Sharma et al., 2020). The common reasoning steps involved in each CoE condition were: (1) identify any word that represents the client's emotion, and (2) understand the individual/situational factors that may have led to the expression in the client's message. # 4. Methods # 4.1. Language Model
2311.04915#7
2311.04915#9
2311.04915
[ "2302.13971" ]
2311.04915#9
Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models
We used the GPT-3.5 API from OpenAI 1 for the system setup. The model ("text-davinci-003") temperature was set to 0.9. The top p parameter was set to 1 for nucleus sampling to reduce the randomness of the output (the frequency penalty = 0 and the presence penalty = 0.6). # 4.2. Chain-of-Empathy Reasoning Table 1 and Figure 1 show four unique prompts with CoE in addition to the base condition (no reasoning): Cognitive-Behavioral Therapy (CBT; Beck, 1979; Kaczkurkin and Foa, 2022; Hofmann et al., 2010), Dialectical Behavior Therapy (DBT; Linehan, 1987), Person-Centered Therapy (PCT; Cooper and McLeod, # 5. Experiments We prompted the model to generate appropriate responses to the posts of help seekers seeking advice on Reddit and to predict the most suitable empathetic strategy. For the ground-truth label of each empathetic strategy class, we used EPITOME 3, a crowdsourced dataset of Reddit mental health posts, with an average inter-annotator agreement reported as above 0.68 (Sharma et al., 2020). The dataset comprised pairs of help-seeking posts and responding posts. Each pair was labeled based on (1) the type of expressed "empathy mechanism" (i.e., 1 https://openai.com/ 2 We want to emphasize that these descriptions are not exhaustive representations of the goals of each psychotherapy. These goals and reasoning strategies have been specifically modified for LLM prompting and do not reflect the entire interaction between clinical/counseling psychologists and clients. 3 https://github.com/behavioral-data/Empathy-Mental-Health
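To make the setup above concrete, the following is a minimal sketch (not the authors' released code) of how one CoE condition could be issued through the legacy OpenAI completions endpoint. The instruction text and the helper name are illustrative assumptions; the model name and sampling parameters follow the values reported in Section 4.1.

```python
import openai  # openai-python < 1.0 style API

# Illustrative CBT-style CoE instruction, following the two common reasoning steps above.
CBT_COE_INSTRUCTION = (
    "Step 1: Identify any word that represents the client's emotion.\n"
    "Step 2: Identify the cognitive error that may have led to this emotion.\n"
    "Then write an empathetic response and name the empathy strategy used "
    "(emotional reaction, exploration, or interpretation) and its level (weak or strong)."
)

def coe_response(seeker_post: str) -> str:
    """Generate a CBT-CoE style empathetic response for one help-seeker post."""
    completion = openai.Completion.create(
        model="text-davinci-003",   # model reported in Section 4.1
        prompt=f"{CBT_COE_INSTRUCTION}\n\nClient: {seeker_post}\nCounselor:",
        temperature=0.9,            # sampling settings reported in Section 4.1
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0.6,
        max_tokens=256,
    )
    return completion["choices"][0]["text"].strip()
```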
2311.04915#8
2311.04915#10
2311.04915
[ "2302.13971" ]
2311.04915#10
Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models
Acc and per-strategy Precision / Recall / F1 for the empathetic strategy classification. Base: Acc 0.340; Emotional Reaction 0.467 / 0.185 / 0.27; Interpretation 0 / 0 / 0; Exploration 0.327 / 0.866 / 0.475. CBT-CoE: Acc 0.319; 0.463 / 0.165 / 0.244; 0.293 / 0.260 / 0.276; 0.303 / 0.543 / 0.389. DBT-CoE: Acc 0.334; 0.392 / 0.372 / 0.382; 0.291 / 0.060 / 0.100; 0.309 / 0.582 / 0.404. PCT-CoE: Acc 0.336; 0.399 / 0.243 / 0.302; 0.333 / 0.016 / 0.031; 0.319 / 0.757 / 0.449. RT-CoE: Acc 0.336; 0.407 / 0.308 / 0.350; 0.354 / 0.044 / 0.079; 0.309 / 0.664 / 0.420. Table 2: Model performance in the empathetic strategy classification task by CoE prompting conditions (per-strategy values are Precision / Recall / F1 for Emotional Reaction, Interpretation, and Exploration). empathy strategy) and (2) the presence and "level" of each expressed empathy (i.e., communication strength). The three empathy strategies are emotional reaction, exploration, and interpretation, with corresponding levels of 0, 1, and 2. Pairs labeled as level 0, indicating no expression of empathy, were excluded. The number of pairs for each strategy was as follows: "emotion reaction" = 1,047 and "interpretation" = 1,436. We randomly sampled 500 pairs from each of the emotional reaction and interpretation data to balance the number of pairs between strategies. Each strategy's final number of pairs was emotional reaction = 500, exploration = 480, and interpretation = 500. # 5.1. Model Performances Table 2 and Figure 2 show the performance of the empathetic strategy classification of LLMs with each CoE prompt, measured in terms of precision, recall, F1 score, and accuracy. Upon generating a response, each model with CoE prompts predicted which empathy strategy is most suitable for each seeker's post among the three strategies. We compared the predicted empathy strategy with the ground truth and calculated the strategy prediction accuracy.
2311.04915#9
2311.04915#11
2311.04915
[ "2302.13971" ]
2311.04915#11
Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models
In addition, they sometimes predicted new strategies which did not fall into any of the three predefined strategies (e.g., "Reflection," "Validation: acknowledging the client's feelings and experiences," and "Approval: expressing approval or positive reinforcement to the client"). # 6. Qualitative Evaluations The LLM generally generated courteous and comprehensive responses. While many human peer supporters often provided brief comments and shared personal opinions or gave advice, the CoE LLM mostly responded with at least two empathetic strategies and frequently suggested seeking professional help. The model tended to initiate responses by interpreting the user's current state and then giving advice or exploring potential options. For example, when a distressed seeker could not control her anxiety after a violent fight between her parents, the DBT-CoE prompt responded with multiple empathetic strategies, "I'm so sorry you had to witness that. It's understandable that you're Outputs with errors in the predicted strategy names were excluded from the analysis. Most of these errors resulted from the nature of the LLM as a generative model, which behaves differently from traditional supervised learning models for classification tasks. Despite explicit instructions, the models occasionally generated "noise" output and predicted strategies that were not among the provided options. These errors include failed predictions or response retrieval (e.g., "No Empathy Strategy"). [Figure 2 shows per-condition bar charts of F1-score by empathy strategy (emotional reactions, interpretation, exploration) for the Base, CBT-CoE, DBT-CoE, PCT-CoE, and RT-CoE conditions.]
2311.04915#10
2311.04915#12
2311.04915
[ "2302.13971" ]
2311.04915#12
Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models
Iâ m so sorry you had to witness that. Itâ s understandable that youâ re Outputs with errors in the predicted strategy names were excluded from the analysis. Most of these errors resulted from the nature of LLM as a generative model, which behaves differently from traditional supervised learning models for classification tasks. Despite explicit instructions the models occasionally generated â noiseâ output and predicted strategies that were not among the provided options. These errors include responses of failed predictions or response Base 10 og 08 06 06 Fl-score Fl-score 04 04 0.2 emotion reactions interpretation Empathy Strategy 0.0 0.0 exploration PCT CoE 1.0 og 08 0.6 06 Fl-score Fl-score 04 o4 02 emotion_reactions interpretation Empathy Strategy 0.0 0.0 exploration CBT CoE emotion reactions interpretation Empathy Strategy RT CoE DBT CoE 10 08 06 Fl-score 04 emotion reactions interpretation Empathy Strategy 0.0 exploration exploration emotion_reactions interpretation exploration Empathy Strategy Figure 2: Empathic expression strategy classification accuracy per prompt conditions. Compared to Base condition, CBT-CoE provided the balanced set of each empathy expression but less emotional reaction than other CoEs.
2311.04915#11
2311.04915#13
2311.04915
[ "2302.13971" ]
2311.04915#13
Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models
feeling overwhelmed and scared right now. It's not okay for anyone to threaten or hurt another person, and it's not your fault. How can I support you right now?". This contradicts the original human response in the benchmark data: "Everything is wrong with people." # 7. Conclusions In summary, we developed CoE reasoning prompts for generating empathetic responses based on psychotherapy models, and we compared the performance of empathetic strategy classification. Our findings revealed that LLMs without reasoning showed a significant preference for the exploration strategy, with interpretation being the least preferred strategy. Although all reasoning prompts generated responses most strongly associated with exploration, they differed from the base prompt by generating interpretation to a certain extent. Intriguingly, the CBT-CoE generated the highest number of interpretation-strategy responses. This pattern might reflect CBT's inherent approach of clarifying cognitive errors to clients. These findings highlight the importance of incorporating context-specific therapeutic approaches into interactions with generative AIs. # 8. Limitations and Suggestions
2311.04915#12
2311.04915#14
2311.04915
[ "2302.13971" ]
2311.04915#14
Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models
s performance. Additionally, different LLMs may excel in varied capabilities, leading each in LLM specific tasks (Sivarajkumar et al., 2023). Investigating and assessing the empathetic expressions generated by different LLMs is crucial for a comprehensive evaluation of LLMsâ ability to discern human emotions and craft appropriate, empathetic responses. # 9. Ethical Considerations The expanding use of large language models (LLMs), especially within mental healthcare, calls for thoughtful ethical engagement. As these models advance in generating responses that mirror human counselors, it is imperative we closely examine their impact on users, particularly those navigating mental health challenges.
2311.04915#13
2311.04915#15
2311.04915
[ "2302.13971" ]
2311.04915#15
Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models
# References Ahn, Y., Zhang, Y., Park, Y., & Lee, J. (2020). A chatbot solution to chat app problems: Envisioning a chatbot counseling system for teenage victims of online sexual exploitation. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1â 7). American Psychiatric Association, American Psychiatric Association, (1994). Diagnostic and statistical manual of mental disorders: DSM-IV, volume 4. American Psychiatric Association, Washington, DC. Anderson, C., & Keltner, D. (2002). The role of empathy in the formation and maintenance of social bonds. Behavioral and Brain Sciences, 25(1), 21â 22. Beck, A. T. (1979). Cognitive therapy and the emotional disorders.
2311.04915#14
2311.04915#16
2311.04915
[ "2302.13971" ]
2311.04915#16
Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models
Penguin. Bommarito II, M., & Katz, D. M. (2022). Gpt takes preprint exam. bar the arXiv:2212.14402. Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von Arx, S., ... & Bohg, J. (2021). On the opportunities and risks of foundation preprint arXiv:2108.07258. Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., ... & Sastry, G. (2020).
2311.04915#15
2311.04915#17
2311.04915
[ "2302.13971" ]
2311.04915#17
Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models
Language models are few-shot learners. Advances in neural information processing systems, 33, 1877â 1901. Buechel, S., Buffone, A., Slaff, B., Ungar, L., & Sedoc, J. (2018). Modeling empathy and distress in reaction to news stories. arXiv preprint arXiv:1808.10399. Cooper, M., & McLeod, J. (2011). Person- centered therapy: A pluralistic perspective. Experiential Person-Centered Psychotherapies, 10(3), 210â 223. Davis, M. H. (1980). Interpersonal reactivity index. Davis, M. H. (1983). Measuring individual for a differences multidimensional of personality and social psychology, 44(1), 113. De Vignemont, F., & Singer, T. (2006). The empathic brain: How, when, and why? Trends in Cognitive Sciences, 10(10), 435â 441. Diehl, J. J., Schmitt, L. M., Villano, M., & Crowell, C. R. (2012). The clinical use of robots for individuals with autism spectrum disorders: A critical review. Research in autism spectrum disorders, 6(1), 249â 262. Eisenberg, N. (2014). Altruistic emotion, cognition, and behavior (PLE: Emotion). Psychology Press. Ekman, P., & Friesen, W. V. (1971). Constants across cultures in the face and emotion. Journal of personality and social psychology, 17(2), 124. Hall, J. A., & Schwartz, R. (2019). Empathy present and future. The Journal of social psychology, 159(3), 225â 243. Hofmann, S. G., Sawyer, A. T., & Fang, A. (2010). The empirical status of the "new wave" of cognitive behavioral therapy. Psychiatric Clinics, 33(3), 701â 710.
2311.04915#16
2311.04915#18
2311.04915
[ "2302.13971" ]
2311.04915#18
Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models
Kaczkurkin, A. N., & Foa, E. B. (2022). Cognitive-behavioral for anxiety disorders: An update on the empirical evidence. Dialogues in Clinical Neuroscience. Knutson, D., & Koch, J. M. (2022). Person- centered therapy as applied to work with transgender and gender diverse clients. Journal of Humanistic Psychology, 62(1), 104â 122. Kojima, T., Gu, S. S., Reid, M., Matsuo, Y., & Iwasawa, Y. (2022).
2311.04915#17
2311.04915#19
2311.04915
[ "2302.13971" ]
2311.04915#19
Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models
Large language models are preprint zero-shot arXiv:2205.11916. Lazarus, R. S. (1991). Emotion and adaptation. Oxford University Press. Linehan, M. M. (1987). Dialectical behavioral therapy: A cognitive behavioral approach to parasuicide. Journal of Personality Disorders, 1(4), 328â 333. Medeiros, L., Bosse, T., & Gerritsen, C. (2021). Can a chatbot comfort humans? studying the impact of a supportive chatbot on users' self- perceived IEEE Transactions on Human-Machine Systems, 52(3), 343â 353. Miller, W. R., & Rollnick, S. Motivational change.
2311.04915#18
2311.04915#20
2311.04915
[ "2302.13971" ]
2311.04915#20
Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models
Guilford Press. (2003). Self-compassion: An Neff, K. alternative conceptualization of a healthy attitude toward oneself. Self and Identity, 2(2), 85â 101. Nori, H., King, N., McKinney, S. M., Carignan, D., & Horvitz, E. (2023). Capabilities of gpt-4 on medical challenge problems. arXiv preprint arXiv:2303.13375. Nwosu, A., Boardman, S., Husain, M. M., & Doraiswamy, P. M. (2022). Digital therapeutics for mental health:
2311.04915#19
2311.04915#21
2311.04915
[ "2302.13971" ]
2311.04915#21
Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models
Is attrition the Achilles heel? Frontiers in Psychiatry, 1598. Prystawski, B., Thibodeau, P., & Goodman, N. (2022). Psychologically-informed chain-of- thought prompts for metaphor understanding in large language models. arXiv preprint arXiv:2209.08141. Rashkin, H., Smith, E. M., Li, M., & Boureau, Y-L. (2019). Towards empathetic open-domain conversation models: A new benchmark and dataset. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (pp. 5370â 5381). Association for Computational Linguistics. Rasouli, S., Gupta, G., Nilsen, E., & Dautenhahn, K. (2022). Potential applications of social robots in robot-assisted interventions for social anxiety. International Journal of Social Robotics, 14(5), 1â
2311.04915#20
2311.04915#22
2311.04915
[ "2302.13971" ]
2311.04915#22
Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models
32. Roller, S., Dinan, E., Goyal, N., Ju, D., Williamson, M., Liu, Y., Xu, J., Ott, M., Smith, E. M., Boureau, Y-L., & Weston, J. (2021). Recipes for building an open-domain chatbot. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume (pp. for Computational 300â 325). Association Linguistics. Scherer, K. R., Banse, R., & Wallbott, H. G. from vocal (2001). Emotion expression correlate across languages and cultures. Journal of Cross-cultural psychology, 32(1), 76â 92. Sharma, A., Miner, A., Atkins, D., & Althoff, T. (2020). A to computational understanding empathy expressed in text-based mental health support. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (pp. 5263â 5276). Association for Computational Linguistics. Sivarajkumar, S., Kelley, M., Samolyk- Mazzanti, A., Visweswaran, S., & Wang, Y. (2023). An empirical evaluation of prompting strategies for large language models in zero- shot clinical natural language processing. Taori, R., Gulrajani, I., Zhang, T., Dubois, Y., Li, X., Guestrin, C., Liang, P., & Hashimoto, T. B. replicable instruction-following model. Stanford Center for Research on Foundation Models. [Online]. at Available https://crfm.stanford.edu/2023/03/13/alpaca.html
2311.04915#21
2311.04915#23
2311.04915
[ "2302.13971" ]
2311.04915#23
Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models
Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M-A., Lacroix, T., Roziere, B., Goyal, N., Hambro, E., Azhar, F., et al. (2023). Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971. Truax, C. B., & Carkhuff, R. (2007). Toward effective and psychotherapy: Training and practice.
2311.04915#22
2311.04915#24
2311.04915
[ "2302.13971" ]
2311.04915#24
Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models
Transaction Publishers. Urakami, J., Moore, B. A., Sutthithatip, S., & Park, S. (2019). Users' perception of empathic expressions by an advanced intelligent system. In Proceedings of the 7th International Conference on Human-Agent Interaction (pp. 11â 18). Wei, J., Wang, X., Schuurmans, D., Bosma, M., Chi, E., Le, Q., & Zhou, D. (2022). Chain of thought prompting elicits reasoning in large language preprint arXiv:2201.11903. Wondra, J. D., & Ellsworth, P. C. (2015). An appraisal theory of empathy and other vicarious emotional experiences. Psychological review, 122(3), 411. Wubbolding, R. E., Casstevens, W. J., & Fulkerson, M. H. (2017). Using the wdep system of reality therapy to support person- treatment planning. Journal of centered Counseling & Development, 95(4), 472â 477. Yao, S., Yu, D., Zhao, J., Shafran, I., Griffiths, T. L., Cao, Y., & Narasimhan, K. (2023). Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601.
2311.04915#23
2311.04915#25
2311.04915
[ "2302.13971" ]
2311.04915#25
Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models
Zaki, J. (2019). The war for kindness: Building empathy in a fractured world. Crown. Zhang, S. J., Florin, S., Lee, A. N., Niknafs, E., Marginean, A., Wang, A., Tyser, K., Chin, Z., Hicke, Y., Singh, N., et al. (2023). Exploring the MIT mathematics and EECS curriculum using language models. arXiv preprint large arXiv:2306.08997.
2311.04915#24
2311.04915
[ "2302.13971" ]
2311.01555#0
Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers
arXiv:2311.01555v1 [cs.IR] 2 Nov 2023 # Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers Weiwei Sun1 Zheng Chen1 Xinyu Ma2 Pengjie Ren1 Zhumin Chen1 Dawei Yin2 Zhaochun Ren3 1Shandong University, Qingdao, China 3Leiden University, Leiden, The Netherlands {sunnweiwei,xinyuma2016,lingyongy}@gmail.com [email protected], [email protected] # Abstract Recent studies have demonstrated the great potential of Large Language Models (LLMs) serving as zero-shot relevance rankers. The typical approach involves making comparisons between pairs or lists of documents. Although effective, these listwise and pairwise methods are not efficient and also heavily rely on intricate prompt engineering. To tackle this problem, we introduce a novel instruction distillation method. The key idea is to distill the pairwise ranking ability of open-sourced LLMs to a simpler but more efficient pointwise ranking. Specifically, given the same LLM, we first rank documents using the effective pairwise approach with complex instructions, and then distill the teacher predictions to the pointwise approach with simpler instructions. Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that instruction distillation can improve efficiency by 10 to 100× and also enhance the ranking performance of LLMs. Furthermore, our approach surpasses the performance of existing supervised methods like monoT5 and is on par with the state-of-the-art zero-shot methods. The code to reproduce our results is available at www.github.com/sunnweiwei/RankGPT. # 1 Introduction Large Language Models (LLMs), such as ChatGPT and GPT-4, have achieved remarkable success in various Natural Language Processing (NLP) tasks (OpenAI, 2022; 2023). One notable capability of LLMs is their ability to solve tasks using carefully designed prompts or instructions (Microsoft, 2023).
2311.01555#1
2311.01555
[ "2210.11416" ]
2311.01555#1
Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers
This has drawn much attention from the Information Retrieval (IR) community given its potential to significantly reduce the huge annotation costs (Shi et al., 2023; Sun et al., 2023c). Relevance ranking has been the most critical problem in IR, which aims at ranking a set of candidate items by their relevance given the query (Fan et al., 2021). Recently, there has been a series of works using large models as zero-shot rankers through pointwise, pairwise, and listwise ranking prompting, and these have achieved impressive results on IR benchmarks (Sun et al., 2023c; Ma et al., 2023; Qin et al., 2023). Employing LLMs for ranking tasks still faces several practical challenges, including application efficiency and output stability. On one hand, both listwise and pairwise ranking methods suffer from efficiency issues. For listwise ranking (Sun et al., 2023c; Ma et al., 2023), the exponential time complexity of the Transformer with respect to input length renders it impractical for many industrial applications. Pairwise ranking requires pairing every document with every other, with the obvious drawback being its costly O(n²) calls to LLMs (Qin et al., 2023). On the other hand, while pointwise ranking is more efficient, it compromises on effectiveness (Liang et al., 2022). The pretraining objective of LLMs isn't inherently tailored for ranking tasks (i.e., generative language modeling vs. relevance ranking), meaning its prediction probability isn't calibrated to the relevance score (Zhao
2311.01555#0
2311.01555#2
2311.01555
[ "2210.11416" ]
2311.01555#2
Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers
Figure 1: The average nDCG@10 of various LLM-based re-ranking methods on TREC benchmarks. The horizontal axis represents the speed of each method relative to monoT5-Base (Nogueira et al., 2020), as measured by the average latency time per query. All methods are based on the T5 series foundation models. RG refers to the relevance generation method, and PRP refers to the pairwise ranking method. et al., 2021; 2023). Other challenges, such as unstable outputs, position bias, and repetitions from LLMs, become more pronounced in IR tasks, where deterministic output in terms of relevance is crucial (Sun et al., 2023c). To address these challenges, this paper introduces a novel Instruction Distillation method to enhance the efficiency and stability of LLMs in the ranking task. The key idea is to distill the predictions of pairwise ranking (PRP) with computationally demanding instruction (teacher instruction) to the efficient pointwise prompting method but with simpler instruction (student instruction). Through this distillation process, the task instructions used for ranking are substantially simplified, leading not only to increased efficiency but also to enhanced performance. In this work, we use the open-sourced LLM FLAN-T5, and our method is zero-shot text ranking since FLAN-T5 is not directly exposed to human-labeled data. We empirically evaluate instruction distilled models against other baselines in Figure 1. These distilled student models are between 10 and 100× more efficient compared to their teacher models (i.e., PRP) while also yielding significant enhancements. Compared to vanilla pointwise ranking methods (Relevance Generation methods, RG), our distilled models show a 40% performance improvement in terms of nDCG@10. Remarkably, our distilled FLAN-T5-XL model even surpasses SOTA supervised systems like monoT5-3B (Nogueira et al., 2020) in IR benchmarks. This is particularly notable as it achieves this without relying on any human relevance judgments. Further verification is conducted on various ranking tasks such as the BEIR benchmark and the conversational recommendation tasks present in the REDIAL benchmark. In summary, this paper makes the following contributions:
2311.01555#1
2311.01555#3
2311.01555
[ "2210.11416" ]
2311.01555#3
Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers
• We propose Instruction Distillation, an unsupervised approach to specialize LLMs on IR tasks by distilling instructions. • We show the instruction distilled LLM is both more efficient and effective compared to existing zero-shot LLMs with the same amount of parameters. • We illustrate the robust performance of our method in both passage ranking and movie recommendation tasks, surpassing the state-of-the-art supervised methods.1 1Code and pre-trained models are available at https://github.com/sunnweiwei/RankGPT/tree/main/InstructDistill
2311.01555#2
2311.01555#4
2311.01555
[ "2210.11416" ]
2311.01555#4
Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers
# 2 Related Work 2.1 LLMs for Information Retrieval Large language models (LLMs) have been pre-trained on a large-scale corpus and possess strong text understanding and reasoning capabilities (OpenAI, 2023; Google, 2023; Shoeybi et al., 2019; Touvron et al., 2023). Recently, LLMs have found increasing applications in information retrieval (Zhu et al., 2023; Wu et al., 2023; Yu et al., 2023; Sun et al., 2023a; Hou et al., 2023; Sun et al., 2023b; Bao et al., 2023). These methods can be broadly divided into two categories: synthetic data generation and relevance ranking. Several approaches have been proposed to utilize LLMs to generate synthetic data for IR. For example, SGPT (Muennighoff, 2022) generates text embeddings using GPT for dense retrieval; and Gao et al. (2022); Wang et al. (2023a) propose to generate pseudo-documents using LLMs and retrieve these pseudo-documents first using queries. Dai et al. (2023) proposes to generate pseudo-queries for few-shot dense retrieval. In addition, LLMs have also been used for relevance ranking tasks. UPR (Sachan et al., 2022a) and SGPT-CE (Muennighoff, 2022) introduce instructional query generation methods, which rank documents based on the generation likelihood of the query given the document. HELM (Liang et al., 2022) utilizes instructional relevance generation for ranking, prompting LLMs to generate relevance proxy tokens and rank documents based on the generation probability. RankGPT (Sun et al., 2023c) proposes a zero-shot permutation generation method, which prompts LLMs to directly generate the ranking permutation, and its performance surpasses supervised models when based on GPT-4. Qin et al. (2023) proposes a pairwise ranking prompting method (PRP) based on open-sourced LLMs.
2311.01555#3
2311.01555#5
2311.01555
[ "2210.11416" ]
2311.01555#5
Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers
Though good results are achieved by the methods above, two challenges still remain: (1) Unstable output, sensitivity to input, repetition, and position bias could harm the performance severely. (2) Sophisticated instruction techniques and task designs are commonly adopted to achieve high performance at the cost of computational complexity. It would be hard for computationally costly methods to be applied in a practical scenario. 2.2 LLMs Distillation Despite their impressive capabilities, LLMs such as GPT-4 often come with high costs and lack open-source availability. As a result, considerable research has explored various ways to distill the capabilities of LLMs into specialized, customized models. For instance, Fu et al. (2023) and Magister et al. (2022) have successfully distilled the reasoning ability of LLMs into smaller models. Self-instruct (Wang et al., 2023b; Taori et al., 2023) proposes iterative approaches to distill GPT-3 using its own outputs. Additionally, Sachan et al. (2022b) and Shi et al. (2023) utilize the generation probability of LLMs to improve retrieval systems. Snell et al. (2022) introduces a similar context distillation method to simplify the overlong context when prompting LLMs on Text-to-SQL tasks. This paper presents the Instruction Distillation method, which aims at distilling the ability elicited by sophisticated instructions into a model that uses more efficient instructions, enhancing model efficiency and output stability. # 3 Method In this section, we introduce the instruction distillation method in detail. This novel approach enhances both the effectiveness and efficiency of open-sourced LLMs during the inference stage by distilling the capabilities harnessed by complex instructions into a more efficient one. Thus, when deployed in real-world applications, our methodology is able to obtain good performance while requiring only low computation costs compared to others.
2311.01555#4
2311.01555#6
2311.01555
[ "2210.11416" ]
2311.01555#6
Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers
3.1 Task Formalization The task of relevance ranking can be formally defined as follows: Given a query q and a set of candidate items D = {d1, . . . , dn}, the objective is to determine the ranking of these candidates, represented as R = {r1, . . . , rn}. Here, ri ∈ {1, 2, . . . , n} denotes the rank of candidate di. For instance, if ri = 3, it denotes that di is ranked third among the n candidates. A ranking model, denoted as f(·), assigns scores to the candidates based on their relevance to the query: si = f(q, di) (1)
2311.01555#5
2311.01555#7
2311.01555
[ "2210.11416" ]
2311.01555#7
Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers
Subsequently, the candidates are ranked according to these relevance scores: ri = arg sorti(s1, . . . , sn). 3.2 Prompting LLMs for Ranking Tasks Recent studies have explored the potential of using Large Language Models (LLMs) for the re-ranking task. Diverse prompting strategies have been explored. Based on the type of instruction employed, existing strategies can be categorized into three types: (1) pointwise ranking, (2) pairwise ranking, and (3) listwise ranking (Wu et al., 2023; Zhu et al., 2023). Pointwise Ranking assigns an independent score to each item di, subsequently ranking the set D based on these scores. A prevalent pointwise prompting approach for LLMs is instructional relevance generation, which is exemplified in HELM (Liang et al., 2022). In this approach, LLMs are prompted to output either "Yes" or "No" to determine the relevance of the candidates to a given query. The generation probability is then converted to the relevance score: si = 1 + f(Yes | IRG(q, di)) if the output is Yes, and si = 1 - f(No | IRG(q, di)) if the output is No (2). Here f(·) represents the large language model, and IRG denotes the relevance generation instruction that converts the input q and di into the text-based prompt. Another pointwise approach, instructional query generation (Sachan et al., 2022a), scores each candidate by the likelihood of generating the query given the document: si = (1/|q|) Σt log p(qt | q<t, di, Iquery) (3). Pairwise Ranking is employed by PRP (Qin et al., 2023). In this technique, both the query and a pair of candidate items serve as prompts, guiding the LLMs in ranking tasks. For every pair of items di and dj, a specific pairwise comparison instruction, denoted by IPRP, is employed to instruct the LLMs, i.e., f(·), to determine which item is more relevant to the given query. This can be formalized as: ci,j = 1 if f(IPRP(q, di, dj)) = i; ci,j = 0 if f(IPRP(q, di, dj)) = j; ci,j = 0.5 otherwise (4). Here, ci,j denotes the LLM's choice.
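As an illustration of the pointwise relevance generation scoring described above, here is a minimal sketch (not the paper's implementation) that restricts FLAN-T5's first decoded token to "Yes"/"No" and converts the probabilities into a score following Eq. (2). The instruction template is an illustrative assumption.

```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-base")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-base")

def relevance_score(query: str, doc: str) -> float:
    # Illustrative relevance-generation instruction I_RG(q, d_i).
    prompt = (f"Is the following passage relevant to the query?\n"
              f"Query: {query}\nPassage: {doc}\nAnswer Yes or No.")
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    # Score only the first generated token, restricted to "Yes" / "No".
    decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])
    with torch.no_grad():
        logits = model(**inputs, decoder_input_ids=decoder_input_ids).logits[0, -1]
    yes_id = tokenizer("Yes", add_special_tokens=False).input_ids[0]
    no_id = tokenizer("No", add_special_tokens=False).input_ids[0]
    p_yes, p_no = torch.softmax(logits[[yes_id, no_id]], dim=-1).tolist()
    # Eq. (2): 1 + P(Yes) when the model answers Yes, otherwise 1 - P(No).
    return 1.0 + p_yes if p_yes >= p_no else 1.0 - p_no
```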
2311.01555#6
2311.01555#8
2311.01555
[ "2210.11416" ]
2311.01555#8
Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers
Considering that LLMs may exhibit sensitivity to the order of text in the prompt, for every pair di and dj, PRP consults the LLM twice, inverting their order between IPRP(q, di, dj) and IPRP(q, dj, di). Subsequently, to compute the relevance score of the i-th candidate di, PRP compares di against all other candidates in the set D: si = Σ j≠i [ci,j + (1 - cj,i)] (5). The final relevance score aggregates all comparison results. Listwise Ranking has been adopted by Sun et al. (2023c); Ma et al. (2023). This approach involves feeding a set of items into the LLMs, where each item is identified by a unique identifier (e.g., [1], [2], etc.). The LLMs are then instructed to generate a permutation of these items, such as "[2] > [3] > [1] > . . .": Perm = f(IList(q, d1, d2, . . . , dn)) (6). Table 1: Computational complexity of different instruction methods. n is the number of items to be ranked. k is a constant related to the sliding window method. Pointwise Ranking: complexity O(n), examples (Liang et al., 2022; Sachan et al., 2022a). Pairwise Ranking: complexity O(n²), examples (Qin et al., 2023). Listwise Ranking: complexity O(k · n), examples (Sun et al., 2023c; Ma et al., 2023).
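The pairwise comparison and aggregation described in this section can be sketched as follows. This is a minimal illustration rather than the released implementation; `compare` stands in for the LLM call issued for each ordered pair.

```python
from typing import Callable, List

def pairwise_scores(query: str, docs: List[str],
                    compare: Callable[[str, str, str], int]) -> List[float]:
    """compare(query, doc_a, doc_b) returns 0 if doc_a is judged more relevant,
    1 if doc_b is, and any other value on ties or parsing failures."""
    n = len(docs)
    # c[i][j] = 1 if d_i preferred over d_j, 0 if d_j preferred, 0.5 otherwise
    c = [[0.5] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue  # both orders (i, j) and (j, i) are queried across the loops
            choice = compare(query, docs[i], docs[j])
            c[i][j] = 1.0 if choice == 0 else 0.0 if choice == 1 else 0.5
    # s_i = sum_j [ c[i][j] + (1 - c[j][i]) ], i.e., the aggregation in Eq. (5)
    return [sum(c[i][j] + (1.0 - c[j][i]) for j in range(n) if j != i)
            for i in range(n)]
```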
2311.01555#7
2311.01555#9
2311.01555
[ "2210.11416" ]
2311.01555#9
Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers
The final relevance score aggregates all comparison results. Listwise Ranking has been adopted by Sun et al. (2023c); Ma et al. (2023). This approach involves feeding a set of items into the LLMs, where each item is identified by a unique identifier (e.g., [1], [2], etc.). The LLMs are then instructed to generate a permutation of these items, such as â [2] > [3] > [1] > . . . â : Perm = f (IList(q, d1, d2, . . . , dn)) (6) 4 Table 1: Computational complexity of different instruction methods. n is the number of items to be ranked. k is a constant related to the sliding window method. Instruction Complexity Examples Pointwise Ranking Pairwise Ranking Listwise Ranking O(n) O(n2) O(k â n) (Liang et al., 2022; Sachan et al., 2022a) (Qin et al., 2023) (Sun et al., 2023c; Ma et al., 2023) This generated permutation Perm can be readily transformed into ranking results R, which bypasses the necessity to compute an explicit relevance score, si, for each candidate di. To ensure consistency in notation with scoring-based methodologies, the relevance score si is defined as the reciprocal of its rank: si := 1 ri 3.3 Computational Complexity of Different Instructions. Different ranking instructions offer various trade-offs in terms of efficiency and effectiveness. A summary of these instructions is listed in Table 1. Among these, the pointwise ranking is computationally the most efficient, having a complexity of O(N). Nevertheless, this approach requires the model to yield a calibrated pointwise score, a feat which is notably challenging. In contrast, the pairwise ranking paradigm resolves the calibration issue by engaging in one-to-one pairwise comparisons. This solution, however, elevates the computational complexity to O(N2). To tackle this, Qin et al. (2023) propose two methods to curtail the pairwise rankingâ s complexity: sorting and the sliding window technique. While promising, these methods are still in their nascent stages, proving challenging to stabilize and parallelize. On another note, listwise ranking demonstrates good performance when tested on commer- cial and also proprietary LLMs, such as GPT-4.
2311.01555#8
2311.01555#10
2311.01555
[ "2210.11416" ]
2311.01555#10
Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers
However, it performs poorly on smaller, open-source models. A possible reason could be the inferior comprehension of instructions in these open-source counterparts. In summary, each ranking method comes with its set of pros and cons: the pointwise approach is efficient but may not be highly effective; the pairwise method is effective but computationally demanding; and the listwise method is most effective but limited to closed-source LLMs like GPT-4. These insights set the stage for our novel solution, the instruction distillation strategy, which we will introduce in the next section. 3.4 Instruction Distillation The key idea of Instruction Distillation is to distill the ability obtained from the complex but effective instruction technique (e.g., pairwise ranking instruction) into a model that is more efficient with the simple instruction technique (e.g., pointwise ranking instruction). Figure 2 shows an overview of the proposed instruction distillation approach. We denote the sources of relevance scores or ranking results with superscripts t and s for the teacher instruction and the simplified student instruction, respectively. Our method unfolds in three stages: (1) Candidate generation, (2) Teacher inference, and (3) Student learning.
2311.01555#9
2311.01555#11
2311.01555
[ "2210.11416" ]
2311.01555#11
Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers
• Candidate generation. Suppose we have a dataset comprising a set of queries Q and a corresponding set of items D. It is worth mentioning that none of the queries require a labeled item. For a query q ∈ Q, an unsupervised retriever (e.g., BM25) [Figure 2 diagram: Query + Passages are fed to Flan-T5 under the teacher instruction (pairwise ranking) and under the student instruction (pointwise ranking); the two resulting rankings are connected through the RankNet loss.] Figure 2: An overview of the proposed instruction distillation approach. Instruction distillation distills the abilities harvested from complex instruction techniques into a model that is more efficient with simple instruction techniques.
2311.01555#10
2311.01555#12
2311.01555
[ "2210.11416" ]
2311.01555#12
Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers
â ¢ Teacher inference. Then, LLMs with costly pairwise ranking are employed as the teacher models to re-rank the candidate set D = (d1, d2, . . . , dn) corresponding to each query q. To adopt the pairwise method, the n items are juxtaposed in pairs, resulting in n(n â 1) ordered tuples (di, dj) where i ̸= j. The model then scores the relevance of di and dj to the given query q using Eq. (5). Based on these scores, each document di is assigned a rank rt i for every query q.
2311.01555#11
2311.01555#13
2311.01555
[ "2210.11416" ]
2311.01555#13
Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers
â ¢ Student learning. In this phase, the pointwise ranking model serves as the student. To leverage the ranking lists rt i generated by the teacher, we employ the RankNet loss (Burges et al., 2005) to optimize the student model. RankNet is a pairwise loss function that measures the accuracy of relative ordering between items: L = n â i=1 n â j=1 1 i <rt rt j log(1 + exp(ss i â ss j ))
2311.01555#12
2311.01555#14
2311.01555
[ "2210.11416" ]
2311.01555#14
Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers
Unlike other loss functions that utilize a sparse signal, the RankNet loss offers a richer transfer of ranking information from the teacher to the student. After the instruction distillation process, the pointwise instruction technique is utilized during the inference stage. See Appendix A for more details about the prompts. # 4 Experimental Setup In order to comprehensively validate the effectiveness of the proposed method. We conduct experiments on a variety of IR tasks, including both the text-based passage re-ranking task and the item-based conversational recommendation task. For passage re-ranking, the training data contain 10K queries sampled from the MS MARCO dataset (Campos et al., 2016). Each query is then paired with the top 10 documents retrieved by BM25. The trained models are evaluated on subtasks of TREC (Craswell et al., 2020) benchmarks and BEIR (Thakur et al., 2021) benchmarks. NDCG@1, 5, 10 are chosen as the metrics. For conversational recommendation, we use the ReDial dataset (Li et al., 2018a), which is a movie recommendation task based on conversation logs between the user and the recommender. The trained models are then evaluated on the official test set. For this setting, Acc@1 is adopted as the metric. 4.1 Datasets TREC (Campos et al., 2016) is a widely used benchmark dataset in IR research. We use the test sets of the 2019 and 2020 competitions. TREC-DL19 and TREC-DL20 are both derived
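The RankNet distillation objective above, L = Σi Σj 1[ri^t < rj^t] · log(1 + exp(si^s - sj^s)), can be sketched in PyTorch as follows. This is a minimal transcription of the formula, assuming a pointwise student that outputs one score per document; in practice the sign inside the exponential depends on whether higher student scores are taken to mean higher relevance.

```python
import torch

def ranknet_distillation_loss(student_scores: torch.Tensor,
                              teacher_ranks: torch.Tensor) -> torch.Tensor:
    """student_scores: (n,) scores s^s from the pointwise student;
    teacher_ranks: (n,) ranks r^t produced by the pairwise teacher (1 = best)."""
    s_i = student_scores.unsqueeze(1)                  # (n, 1)
    s_j = student_scores.unsqueeze(0)                  # (1, n)
    # indicator 1[r_i^t < r_j^t]: the teacher ranks d_i above d_j
    prefer = (teacher_ranks.unsqueeze(1) < teacher_ranks.unsqueeze(0)).float()
    pairwise_loss = torch.log1p(torch.exp(s_i - s_j))  # log(1 + exp(s_i^s - s_j^s))
    return (prefer * pairwise_loss).sum()

# Toy usage: 4 documents, teacher ranking d3 > d1 > d4 > d2.
loss = ranknet_distillation_loss(torch.randn(4, requires_grad=True),
                                 torch.tensor([2.0, 4.0, 1.0, 3.0]))
loss.backward()
```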
2311.01555#13
2311.01555#15
2311.01555
[ "2210.11416" ]
2311.01555#15
Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers
6 from MS MARCO datasets with human-generated labels. Each query is paired with 100 retrieved documents retrieved by BM25. They share the same format. TREC-DL19 contains 43 test queries, and TREC-DL20 contains 54 test queries. BEIR (Thakur et al., 2021) consists of diverse retrieval tasks and domains. We choose eight tasks in BEIR to evaluate the models: (1) Covid retrieves scientific articles for COVID- 19 related questions. (2) NFCorpus is a bio-medical IR data. (3) Touche is a argument retrieval datasets. (4) DBPedia retrieves entities from DBpedia corpus. (5) SciFact retrieves evidence for claims verification. (6) Signal retrieves relevant tweets for a given news title. (7) News retrieves relevant news articles for news headlines. (8) Robust04 evaluates poorly performing topics. The evaluation results are averaged over the eight datasets. Redial (Recommendation Dialogues) (Li et al., 2018b) is an annotated conversational movie recommendation dataset, where users recommend movies to each other. 4.2 Baselines To compare our methods with existing unsupervised and supervised methods, we choose widely applied methods as below:
• BM25 is an unsupervised retrieval method based on weighted term frequency. It is one of the most commonly adopted retrieval methods.
• RankGPT (Sun et al., 2023c) is a listwise permutation generation approach based on gpt-3.5-turbo and gpt-4.
• Relevance Generation (Sachan et al., 2022a) is a pointwise ranking method based on FLAN-T5 (see the sketch after this list).
• PRP (Qin et al., 2023) is a pairwise ranking method based on FLAN-T5.
• MonoT5 (Sachan et al., 2022b) is a pointwise ranking method based on T5 models and is trained with supervision on MS MARCO.
• Cohere Rerank is a commercial text ranking system developed by Cohere².
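As an illustration of pointwise relevance scoring with FLAN-T5, the hedged sketch below scores a passage by the probability that the model answers "yes" to a relevance question. The prompt wording and model checkpoint are assumptions made for illustration only; the prompts actually used in this work are listed in Appendix A.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

def relevance_score(query: str, passage: str) -> float:
    """Pointwise score: probability of 'yes' vs. 'no' as the first decoded token."""
    prompt = (f"Passage: {passage}\nQuery: {query}\n"
              "Does the passage answer the query? Answer yes or no.")
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512)
    decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])
    with torch.no_grad():
        logits = model(**inputs, decoder_input_ids=decoder_input_ids).logits[0, -1]
    yes_id = tokenizer("yes", add_special_tokens=False).input_ids[0]
    no_id = tokenizer("no", add_special_tokens=False).input_ids[0]
    probs = torch.softmax(logits[[yes_id, no_id]], dim=0)
    return probs[0].item()
```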
4.3 Implementation Details

Passage Re-Ranking Task. Following Sun et al. (2023c), we sample 10K queries from the MS MARCO training set. Using BM25 as the candidate generator, we retrieve 10 passages for each query. Our BM25 implementation is derived from BM25Okapi as provided in RankBM25 (Trotman et al., 2014), and stopwords are removed prior to retrieval. To implement the pairwise prompting strategy, each query's 10 passages are juxtaposed in pairs, yielding 90 ordered passage pairs. The teacher models are instructed to determine which document in each pair is more relevant to the query and then produce the ranking results, which serve as pseudo labels for pointwise instruction distillation. To harness the full potential of the ranking outcomes, we employ RankNet (Burges et al., 2005).

Conversational Recommendation Task. For this task, we use the dialogue history as the query and movie descriptions as documents, and employ BM25 to fetch the top-5 movies into the candidate pool. Following Hou et al. (2023), an additional 4 popular movies are added to the candidate pool³ to simulate the popularity bias inherent in recommendation (Chen et al., 2023).
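Candidate generation in both tasks relies on BM25. A minimal sketch using BM25Okapi from the rank_bm25 package is shown below; the tokenizer and stopword list are simplifications standing in for whatever preprocessing the authors used:

```python
from rank_bm25 import BM25Okapi

STOPWORDS = {"the", "a", "an", "of", "to", "and", "is", "in"}  # illustrative subset

def tokenize(text: str) -> list[str]:
    """Lowercase, whitespace-split, and drop stopwords before indexing/retrieval."""
    return [t for t in text.lower().split() if t not in STOPWORDS]

def top_k_passages(query: str, corpus: list[str], k: int = 10) -> list[str]:
    """Retrieve the top-k BM25 candidates for one query."""
    bm25 = BM25Okapi([tokenize(doc) for doc in corpus])
    return bm25.get_top_n(tokenize(query), corpus, n=k)
```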
Training Details. Throughout the training phase, we employ the AdamW optimizer with a constant learning rate of 3e-5 and constrain the maximum input length to 512 tokens. The training environment is 4 × A800-80G GPUs, with the batch size fixed at 32, and we train the model for up to 3 epochs.

²https://cohere.com/rerank
³The criterion for determining a movie's popularity is its frequency of mentions throughout the training dataset: movies cited more than 200 times are classified as popular. The likelihood of selecting a popular movie is proportional to its share of the overall popularity.
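A skeleton of the distillation loop under the training hyperparameters above is sketched below (not the released training code). `train_loader` and `score_candidates` are hypothetical stand-ins for the batched (query, candidates, teacher ranks) data and the pointwise student scoring; `ranknet_loss` is the pairwise objective defined earlier, and batching details are omitted:

```python
import torch

def distill(student, train_loader, score_candidates, ranknet_loss,
            epochs: int = 3, lr: float = 3e-5):
    """Fine-tune the pointwise student on the teacher's pseudo labels."""
    optimizer = torch.optim.AdamW(student.parameters(), lr=lr)
    student.train()
    for _ in range(epochs):
        for query, candidates, teacher_ranks in train_loader:
            scores = score_candidates(student, query, candidates)  # s_i^s, shape (n,)
            loss = ranknet_loss(scores, teacher_ranks)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```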
Table 2: Results on TREC-DL19 and TREC-DL20 by re-ranking top-100 passages retrieved by BM25. Sec/Q indicates the average time in seconds to re-rank 100 passages for a query. Best performing unsupervised and overall system(s) are marked in bold.

| Method | LLM | Sec/Q | DL19 nDCG@1/5/10 | DL20 nDCG@1/5/10 |
|---|---|---|---|---|
| BM25 | – | – | 54.26 / 52.78 / 50.58 | 57.72 / 50.67 / 47.96 |
| *Supervised LLMs Methods* | | | | |
| monoT5 | T5-Base | 0.12 | 77.47 / 69.40 / 66.99 | 79.84 / 73.77 / 71.48 |
| monoT5 | T5-XL | 1.30 | 79.07 / 73.74 / 71.83 | 80.25 / 72.32 / 68.89 |
| Cohere Rerank | english-v2.0 | – | 77.13 / 76.17 / 73.22 | 79.32 / 71.00 / 67.08 |
| *Unsupervised LLMs Methods* | | | | |
| RankGPT | gpt-3.5-turbo | – | 82.17 / 71.15 / 65.80 | 79.32 / 66.76 / 62.91 |
| RankGPT | gpt-4 | – | 82.56 / 79.16 / 75.59 | 78.40 / 74.11 / 70.56 |
| Relevance Generation | FLAN-T5-Base | 0.12 | 55.25 / 50.35 / 48.32 | 58.13 / 48.52 / 47.43 |
| PRP (Allpair) | FLAN-T5-Base | 21.51 | 51.16 / 53.44 / 51.45 | 53.40 / 48.61 / 48.36 |
| Instruction Distillation | FLAN-T5-Base | 0.12 | 59.69 / 60.21 / 57.30 | 63.27 / 55.50 / 53.09 |
| Relevance Generation | FLAN-T5-Large | 1.10 | 40.43 / 45.19 / 46.67 | 43.41 / 47.65 / 48.41 |
| PRP (Allpair) | FLAN-T5-Large | 49.19 | 74.03 / 69.00 / 66.58 | 68.21 / 64.63 / 61.51 |
| Instruction Distillation | FLAN-T5-Large | 1.10 | 74.33 / 74.18 / 69.81 | 72.84 / 65.59 / 62.80 |
| Relevance Generation | FLAN-T5-XL | 1.30 | 45.37 / 48.56 / 49.07 | 50.00 / 54.33 / 52.85 |
| PRP (Allpair) | FLAN-T5-XL | 112.12 | 77.91 / 73.46 / 70.58 | 76.85 / 69.58 / 67.21 |
| Instruction Distillation | FLAN-T5-XL | 1.30 | 79.85 / 75.15 / 71.92 | 81.17 / 72.08 / 69.29 |
Our experiments are based on the FLAN-T5 family (Chung et al., 2022), a suite of models fine-tuned for various NLP tasks. We specifically leverage FLAN-T5-XL (3B), FLAN-T5-Large (770M), and FLAN-T5-Base (220M). The prompts used can be found in Appendix A.

# 5 Experimental Results

5.1 Results on Passage Re-Ranking Tasks

The experimental results on the TREC and BEIR datasets are presented in Table 2 and Table 3, respectively. Based on these results, we draw the following observations:

Firstly, compared with previous unsupervised LLM prompting strategies, the inference speed of our instruction-distilled models matches that of the Relevance Generation method and is notably over 100× faster than the PRP method. Moreover, the performance of our approach with FLAN-T5-XL and FLAN-T5-Large surpasses both the Relevance Generation and PRP methods using the same LLMs.

Secondly, the instruction-distilled models yield results comparable to their supervised counterparts while requiring far less annotation. Specifically, our instruction-distilled FLAN-T5-XL model achieves nDCG@10 of 71.92 and 69.29 on TREC-DL19 and TREC-DL20, respectively, which matches or surpasses the performance of the supervised monoT5 of equivalent parameter size.

Lastly, the instruction-distilled models consistently outperform their teachers. For example, the distilled models at all model sizes perform better than their PRP teachers. This can be attributed to the fact that unspecialized teacher models may produce unstable outputs. After distillation on task-related data, student models are able to strictly