Dataset columns:
id: stringlengths 12–15
title: stringlengths 8–162
content: stringlengths 1–17.6k
prechunk_id: stringlengths 0–15
postchunk_id: stringlengths 0–15
arxiv_id: stringlengths 10–10
references: sequencelengths 1–1
2310.19341#17
Skywork: A More Open Bilingual Foundation Model
Our training framework is based on the Megatron-LM (Shoeybi et al., 2020) library, designed to support the stable, prolonged training of large-scale models, accommodating thousands of GPUs and model sizes on the order of hundreds of billions of parameters. Considering the relatively moderate size of our Skywork-13B model, we have avoided the use of GPU memory optimization techniques and parallel schemes that could impede speed. These include Tensor Model Parallelism (Shoeybi et al., 2020), Sequence Parallelism (Korthikanti et al., 2022), ZeRO-Stage2 (Rajbhandari et al., 2020), and Checkpointing (Chen et al., 2016). Instead, we have leveraged Data Parallelism (DP) with ZeRO-1 (Rajbhandari et al., 2020) and Pipeline Parallelism (PP) (Narayanan et al., 2021) as the primary parallelization strategies for training Skywork-13B. ZeRO-1 substantially diminishes the GPU memory footprint of the Adam optimizer state without increasing the burden on intercommunication. Pipeline Parallelism offers memory optimization at a minimal communication overhead, which decreases as the gradient accumulation step increases, thereby mitigating the slowdown of all-reduce as DP size increases. Regarding operator optimization, we adopted Flash Attention V2 (Dao et al., 2022; Dao, 2023), a strategy that both optimizes GPU memory and expedites the training process. Upon extensive preliminary experiments, we decided to adopt the combination of DP256, PP2, and ZeRO-1 as our distributed training strategy for Skywork-13B. With this configuration, we achieved a token throughput of 1873 per GPU per second and a model flops utilization (MFU) of 56.5%. An overview of these experiments is provided in Appendix B. The training process of Skywork-13B spanned a total of 39 days. 3.6 Training Details As outlined in Section 2.1, the pre-training of Skywork-13B is executed in two stages:
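As a rough illustration of how the reported token throughput relates to MFU, the sketch below uses the common approximation of about 6N matrix-multiply FLOPs per token plus an attention term. The layer count, hidden size, sequence length and peak-FLOPs figure are illustrative assumptions, not values stated in this section.

```python
# Hedged sketch: estimating model FLOPs utilization (MFU) from token throughput.
def estimate_mfu(n_params, n_layers, hidden_size, seq_len,
                 tokens_per_gpu_per_sec, peak_flops_per_gpu):
    # forward+backward matmul FLOPs per token (~6N) plus the attention term
    flops_per_token = 6 * n_params + 12 * n_layers * hidden_size * seq_len
    achieved_flops = flops_per_token * tokens_per_gpu_per_sec
    return achieved_flops / peak_flops_per_gpu

# Example with assumed shapes for a ~13B model on an A100-class GPU (312 TFLOPS bf16):
print(estimate_mfu(13e9, 52, 4608, 4096, 1873, 312e12))  # roughly 0.54, in the reported ballpark
```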
2310.19341#16
2310.19341#18
2310.19341
[ "2309.05463" ]
2310.19341#18
Skywork: A More Open Bilingual Foundation Model
• Stage-1: General purpose pre-training on SkyPile-Main.
• Stage-2: STEM-oriented continual pre-training on SkyPile-STEM.
In both stages, the model is trained using the standard auto-regressive language modeling objective, with context lengths fixed at 4096 tokens. The AdamW optimizer (Loshchilov and Hutter, 2019), applied for the training process, uses β1 and β2 values of 0.9 and 0.95, respectively. Throughout the pre-training, we applied a weight decay of 0.1 and gradient clipping of 1.0. Our model was trained with bfloat16 mixed precision. 3.6.1 Stage-1 Pre-training In the first stage, our Skywork-13B model is trained from scratch on SkyPile-Main for over three trillion tokens. This stage consists of two sequential training sessions, covering the first 0∼2T tokens and the subsequent 2∼3T tokens, respectively. Our initial plan was to train Skywork-13B for two trillion tokens. We launched a training session accordingly, with a cosine learning rate schedule that gradually decays from a peak learning rate of 6e-4 to a final learning rate of 6e-5. In Figure 3, we report in red curves the evolution of language modeling losses and several benchmark results of our Skywork-13B during this session. It is evident that by the end of this session, the model had not reached saturation. We hypothesized that the model could further benefit from additional pre-training, prompting us to launch a secondary training session targeting an additional one trillion tokens. The second training session utilized a slightly different composition of training data compared to the initial 0∼2T session, as data from certain sources had been depleted and fresh sources were introduced. Owing to the shift in the training distribution, we meticulously tuned the learning rate parameter, eventually deciding on a constant learning rate of 6e-5 for the 2∼3T session. In Figure 4, we illustrate the model losses under varying learning rate conditions.
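A minimal PyTorch sketch of the optimization setup described above (AdamW with β1=0.9, β2=0.95, weight decay 0.1, gradient clipping at 1.0, bfloat16 mixed precision, and a cosine schedule decaying from 6e-4 to 6e-5) is given below. The actual run used Megatron-LM rather than this simplified loop, and the warm-up handling is an assumption.

```python
import math
import torch

def build_optimizer_and_schedule(model, total_steps, warmup_steps=0,
                                 peak_lr=6e-4, final_lr=6e-5):
    opt = torch.optim.AdamW(model.parameters(), lr=peak_lr,
                            betas=(0.9, 0.95), weight_decay=0.1)

    def lr_lambda(step):
        # linear warm-up, then cosine decay from peak_lr down to final_lr
        if warmup_steps and step < warmup_steps:
            return step / warmup_steps
        progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
        cosine = 0.5 * (1 + math.cos(math.pi * min(progress, 1.0)))
        return (final_lr + (peak_lr - final_lr) * cosine) / peak_lr

    sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda)
    return opt, sched

def train_step(model, batch, opt, sched):
    with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
        loss = model(**batch).loss
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)  # gradient clipping of 1.0
    opt.step()
    sched.step()
    opt.zero_grad(set_to_none=True)
    return loss.item()
```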
2310.19341#17
2310.19341#19
2310.19341
[ "2309.05463" ]
2310.19341#19
Skywork: A More Open Bilingual Foundation Model
Results indicate that a higher learning rate leads to escalations in training loss which we deem too costly to reverse. The impact of the second training session is depicted in blue curves of Fig. 3. The enhancement in the model's performance continues, albeit at a decelerating pace. Interestingly, although our Skywork-13B trails in the realm of English language modeling, it significantly surpasses all
Figure 3: Trajectory of important monitoring metrics during Stage-1 pre-training. Top Left: Training loss. Top Middle and Right: Validation loss on English and Chinese held-out sets of web texts. (Dashed baselines in the plots: LLaMA-13B, LLaMA2-13B, Xverse-13B, Baichuan-13B, Baichuan2-13B, Qwen-14B, InternLM-20B.)
2310.19341#18
2310.19341#20
2310.19341
[ "2309.05463" ]
2310.19341#20
Skywork: A More Open Bilingual Foundation Model
The horizontal dashed lines in the middle and right plots correspond to the evaluated language modeling loss for several similar-sized open LLMs. Bottom: Benchmark results on CEVAL, MMLU and GSM8K respectively. Stage-1 pre-training consists of two sequential training sessions, represented by different colors in the loss curves (red for session 0∼2T and blue for session 2∼3T).
other comparable open LLMs in Chinese language modeling. In Section 4.3, we will confirm that the superiority of our Skywork-13B in Chinese language modeling is not only true on our validation set, it also holds true on a number of test sets sourced from diverse domains. More results can be found in the Appendix (see Figure 6).
3.6.2 Stage-2 Pre-training
The primary aim of Stage-2 pre-training is to augment the model with capabilities pertinent to STEM disciplines. The data utilized in this stage comprises an approximate 20% from SkyPile-STEM and 80% from SkyPile-Main, amassing a total of roughly 130 billion tokens. A constant learning rate of 6e-5 is adopted, maintaining parity with the terminal learning rate used in Stage-1 pre-training.
Consequent to the data distribution shift from Stage-1 to Stage-2, it becomes crucial to meticulously calibrate the sampling ratio between the different data sources. Initial experiments revealed that a gradual increment in the SkyPile-STEM ratio yielded the most effective results. Therefore, for the actual Stage-2 pre-training phase, we implemented a sampling plan that commenced with 10% of SkyPile-STEM initially, gradually escalating to a peak of 40% towards the conclusion of the training.
This training strategy proved successful in maintaining the stability of the model's language modeling validation loss while enabling an optimum transfer of STEM knowledge. The extended training period ensures a comprehensive assimilation of STEM-related knowledge into the model without causing significant disturbance to the pre-existing learned information.
The impact of Stage-2 pre-training is illustrated in Figure 5, which presents the progression of the CEVAL benchmark score.
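The sampling plan can be pictured with the short sketch below, which ramps the SkyPile-STEM share from 10% to 40% over the roughly 130B-token Stage-2 run. The paper only states that the ratio was increased gradually; the linear ramp and the per-document sampling are assumptions.

```python
import random

def stem_ratio(tokens_seen, total_tokens=130e9, start=0.10, peak=0.40):
    # assumed linear ramp of the SkyPile-STEM share over the Stage-2 token budget
    frac = min(tokens_seen / total_tokens, 1.0)
    return start + (peak - start) * frac

def sample_source(tokens_seen):
    # decide which corpus the next document is drawn from
    return "SkyPile-STEM" if random.random() < stem_ratio(tokens_seen) else "SkyPile-Main"
```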
2310.19341#19
2310.19341#21
2310.19341
[ "2309.05463" ]
2310.19341#21
Skywork: A More Open Bilingual Foundation Model
Figure 4 (plot): training loss during continual pre-training at LR = 6e-5, 1.2e-4 and 2.5e-4, over the 1900B–2040B token range.
Figure 4: Test runs for tuning the learning rate of the 2∼3T training session. It can be seen that 6e-5, which is the terminal learning rate from the 0∼2T training session, yields the best result.
The evolution of scores on other STEM-related benchmarks, such as GSM8K, mirrors a similar trend. Improvements in individual subjects of the CEVAL can be found in Table 12 (see appendix).
Figure 5: Evolution of CEVAL score during Stage-2 pre-training.
# 4 Evaluation
4.1 Baselines
We compare the performance of our Skywork-13B with open models of similar size, including LLaMA-13B (Touvron et al., 2023a), LLaMA2-13B (Touvron et al., 2023b), Baichuan-13B, Baichuan2-13B (Baichuan Inc., 2023), Xverse-13B (Xverse-AI, 2023), InternLM-20B (InternLM Team, 2023). A summary of these models can be found in Table 4.
2310.19341#20
2310.19341#22
2310.19341
[ "2309.05463" ]
2310.19341#22
Skywork: A More Open Bilingual Foundation Model
Model | #Tokens | Language
OpenLLaMA-13B | 1.0T | English
LLaMA-13B | 1.0T | English
LLaMA2-13B | 2.0T | English
Baichuan-13B | 1.4T | English & Chinese
Baichuan2-13B | 2.6T | English & Chinese
Xverse-13B | 1.4T | English & Chinese
InternLM-20B | 2.3T | English & Chinese
Skywork-13B | 3.2T | English & Chinese
Table 4: Details of various models. The column labeled "#Tokens" indicates the quantity of training tokens used by each model, whereas the "Language" column specifies the primary languages supported by each model.
4.2 Benchmark Evaluation
We focus on the following popular benchmarks:
2310.19341#21
2310.19341#23
2310.19341
[ "2309.05463" ]
2310.19341#23
Skywork: A More Open Bilingual Foundation Model
• MMLU (Hendrycks et al., 2021): MMLU is a benchmark designed to measure knowledge acquired during pre-training. The benchmark covers 57 subjects across STEM, the humanities, the social sciences, and more, ranging in difficulty from an elementary level to an advanced professional level. It tests both world knowledge and problem solving ability.
• CEVAL (Huang et al., 2023) and CMMLU (Li et al., 2023a): These are Chinese benchmarks that mimic MMLU. CEVAL consists of 13948 multi-choice questions spanning 52 diverse disciplines and four difficulty levels. CMMLU covers 67 disciplines that span from elementary to advanced professional levels.
2310.19341#22
2310.19341#24
2310.19341
[ "2309.05463" ]
2310.19341#24
Skywork: A More Open Bilingual Foundation Model
• GSM8K (Cobbe et al., 2021): This dataset consists of 8500 high-quality grade school math word problems created by human writers. These multi-step problems require between 2 and 8 steps to solve. GSM8K is usually used in benchmarking the multi-step mathematical reasoning ability of LLMs.
In Table 5 we present a comparison of performance results from different models on these benchmarks. The metrics for CEVAL, CMMLU and MMLU are 5-shot accuracy, while for GSM8K it is 8-shot accuracy. Higher numbers indicate better performance. It can be seen that our Skywork-13B achieves the highest scores on the CEVAL, MMLU and GSM8K benchmarks, with scores of 60.6, 62.1 and 55.8, respectively. On the CMMLU benchmark, Baichuan2-13B achieves the highest performance with a score of 62.0. In summary, our Skywork model has demonstrated exceptional performance across a diverse range of comprehensive benchmark tests. Results of individual subjects of the CEVAL can be found in Table 12. Results of other benchmarks can be found in Appendix C.
# 4.3 Language Modeling Results
# 4.3.1 LM as a solution to benchmark overfitting
Conventional benchmarks for evaluating LLMs often rely on static datasets of human-annotated examples. A core issue with this approach is that updating the test samples regularly is difficult and costly. Over time, the static test sets tend to be overfitted, producing misleading benchmark results. We propose language modeling evaluations as a compelling alternative. Perplexity in language modeling acts as a proxy metric strongly linked to performance on diverse downstream tasks (see Figure 1). Since language modeling solely requires unlabeled natural text, it eliminates the need for expensive human annotation. Constructing and revising language modeling test sets is low-cost, as new data can be readily sampled from newly published content. Additionally, if a test set becomes compromised, fresh test data can quickly be sampled as a replacement.
# 4.3.2 Construction of diverse LM testsets
We compare the language modeling capabilities of various language models with our Skywork-13B, focusing on Chinese language.
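For multiple-choice benchmarks such as CEVAL, CMMLU and MMLU, a common way to obtain few-shot accuracy is to score the candidate answer letters by their next-token probability after a prompt containing the few-shot examples and the question. The sketch below only illustrates that general approach; the prompt format, answer labels and the harness actually used for the numbers in Table 5 are not specified here, so everything in the sketch is an assumption.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

@torch.no_grad()
def choose_answer(model, tok, few_shot_prefix, question, choices, device="cuda"):
    labels = ["A", "B", "C", "D"]
    body = "".join(f"\n{l}. {c}" for l, c in zip(labels, choices))
    prompt = f"{few_shot_prefix}{question}{body}\nAnswer:"
    ids = tok(prompt, return_tensors="pt").input_ids.to(device)
    next_token_logits = model(ids).logits[0, -1]          # distribution over the next token
    label_ids = [tok(" " + l, add_special_tokens=False).input_ids[-1] for l in labels]
    scores = torch.stack([next_token_logits[i] for i in label_ids])
    return labels[int(scores.argmax())]

# Usage (model id is a placeholder):
# tok = AutoTokenizer.from_pretrained("some/causal-lm")
# model = AutoModelForCausalLM.from_pretrained("some/causal-lm").to("cuda").eval()
# pred = choose_answer(model, tok, five_shot_examples, question, four_choices)
```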
2310.19341#23
2310.19341#25
2310.19341
[ "2309.05463" ]
2310.19341#25
Skywork: A More Open Bilingual Foundation Model
To conduct a robust evaluation of language modeling capability, we have separately collected a diverse corpus of texts from a myriad of websites, each labeled according to its respective domain. The domains we cover span a wide spectrum, encompassing areas such as technology, movies, finance, to name a few. These domain-specific evaluation datasets have also been open-sourced for public access (Github: https://github.com/SkyworkAI/Skywork/tree/main/data/eval_loss).
2310.19341#24
2310.19341#26
2310.19341
[ "2309.05463" ]
2310.19341#26
Skywork: A More Open Bilingual Foundation Model
We ensure that every test sample consists of documents or user posts published after September 1, 2023. This cut-off date guarantees that no test sample was inadvertently included during the pre-training of any evaluated language model. Specifically, SkyPile's cut-off date is June 30, 2023, and the majority of models under evaluation were released prior to August 31. Note that while the held-out validation set used to monitor the training progress (as shown in Figure 3) of our model can also serve this purpose, it has the same distribution (web texts) as the bulk of the training corpus, and thus may lead to an overly optimistic estimate of the actual language modeling capability of the model. More details on the sources of the test samples and the underlying data collection pipeline can be found in Appendix D.
4.3.3 Results
The results of our language modeling evaluation are presented in Table 6, where results from ChatGLM3-6B (THUDM, 2023), MOSS-7B (Sun and Qiu, 2023), Baichuan2-7B (Baichuan Inc., 2023), Qwen-7B (Qwen Team, 2023), InternLM-7B (InternLM Team, 2023) and Aquila2-34B are also included. It can be seen that our Skywork-13B model shows the best performance overall, obtaining the lowest average perplexity score of 9.42. It also exhibits the best performance across individual domains, achieving the lowest perplexity scores in the tech (11.58), movie (21.84), government (4.76), and finance (4.92) domains. It excels not only in surpassing the performance of models of a similar size, but also in outperforming significantly larger models such as InternLM-20B and Aquila2-34B. We attribute the excellent language modeling performance of our Skywork-13B to the quality of our training corpus. Details on the rigorous data filtering pipeline are described in Section 3.1.
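A minimal sketch of a domain-wise perplexity evaluation of this kind, using the Hugging Face transformers API, is shown below. How documents are chunked and how per-document losses are aggregated is an assumption here; the paper's own pipeline is described in its Appendix D.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

@torch.no_grad()
def domain_perplexity(model_name, docs, max_len=4096, device="cuda"):
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name, torch_dtype=torch.bfloat16).to(device).eval()
    total_nll, total_tokens = 0.0, 0
    for doc in docs:
        ids = tok(doc, return_tensors="pt", truncation=True,
                  max_length=max_len).input_ids.to(device)
        if ids.size(1) < 2:
            continue
        loss = model(ids, labels=ids).loss      # mean NLL over predicted tokens
        n_pred = ids.size(1) - 1
        total_nll += loss.item() * n_pred       # token-weighted aggregation (assumption)
        total_tokens += n_pred
    return math.exp(total_nll / total_tokens)   # token-level perplexity for the domain
```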
2310.19341#25
2310.19341#27
2310.19341
[ "2309.05463" ]
2310.19341#27
Skywork: A More Open Bilingual Foundation Model
# 5 Discussion
In this section, we delve into the benefits and associated risks of pre-training on the in-domain data of benchmark tasks. (The term "in-domain data" is a vague one that refers to any data with a distribution closely resembling that of the task data. For instance, the training data of a task is trivially in-domain data for that task.)
Model | CEVAL | CMMLU | MMLU | GSM8K
OpenLLaMA-13B | 27.1 | 26.7 | 42.7 | 12.4
LLaMA-13B | 35.5 | 31.2 | 46.9 | 17.8
LLaMA-2-13B | 36.5 | 36.6 | 54.8 | 28.7
Baichuan-13B | 52.4 | 55.3 | 51.6 | 26.6
Baichuan2-13B | 58.1 | 62.0 | 59.2 | 52.8
XVERSE-13B | 54.7 | - | 55.1 | -
InternLM-20B | 58.8 | - | 62.0 | 52.6
Skywork-13B | 60.6 | 61.8 | 62.1 | 55.8
Table 5: Comparison of results on popular benchmarks. Best result in each column is underlined. It can be seen that our Skywork-13B consistently performs well across the different benchmarks, indicating its overall robustness.
2310.19341#26
2310.19341#28
2310.19341
[ "2309.05463" ]
2310.19341#28
Skywork: A More Open Bilingual Foundation Model
Model | Tech | Movie | Gov. | Game | Finance | General | Average
ChatGLM3-6B | 12.48 | 23.48 | 5.07 | 18.45 | 5.67 | 7.47 | 10.25
MOSS-7B | 20.83 | 39.66 | 11.08 | 31.24 | 10.59 | 13.25 | 18.50
InternLM-7B | 13.43 | 24.9 | 5.88 | 19.78 | 6.17 | 8.10 | 11.17
Qwen-7B | 13.39 | 25.16 | 5.55 | 19.26 | 5.76 | 7.78 | 10.83
Baichuan2-7B | 12.89 | 23.26 | 5.34 | 18.36 | 5.68 | 7.62 | 10.41
LLaMA2-13B | 23.26 | 50.66 | 18.09 | 32.52 | 14.85 | 16.55 | 23.54
Xverse-13B | 12.55 | 23.49 | 5.20 | 17.69 | 5.54 | 7.46 | 10.19
Baichuan-13B | 12.38 | 22.46 | 5.21 | 17.59 | 5.42 | 7.37 | 10.03
Baichuan2-13B | 12.14 | 21.85 | 5.05 | 17.15 | 5.35 | 7.24 | 9.81
Qwen-14B | 11.90 | 22.43 | 4.89 | 16.94 | 5.24 | 7.03 | 9.67
InternLM-20B | 12.34 | 22.06 | 5.75 | 17.45 | 5.73 | 7.78 | 10.34
Aquila2-34B | 14.62 | 29.09 | 5.72 | 21.78 | 5.83 | 8.45 | 11.73
Skywork-13B | 11.58 | 21.84 | 4.76 | 17.28 | 4.92 | 6.82 | 9.42
Table 6: Comparative analysis of language modeling capabilities across diverse domains. Performance is measured using perplexity (lower is better). Underlined figures correspond to the best result in each column.
# 5.1 Effect of pre-training on in-domain data
2310.19341#27
2310.19341#29
2310.19341
[ "2309.05463" ]
2310.19341#29
Skywork: A More Open Bilingual Foundation Model
Pre-trained language models, or foundation models, are intended to be used in transfer learning as a general purpose backbone. As a foundation model in itself has little usage other than sentence completion, the quality of a foundation model is typically evaluated in terms of its performance in those tasks. Apparently, when it comes to improving a foundation model's quality as measured by its task performance, it is always far more efficient to train the model on in-domain data of that task (Hernandez et al., 2021; Chung et al., 2022), compared to general-purpose data (web texts). (GPT-4 generated data with few-shot task examples can also be considered as in-domain data for that task.) We have shown that Stage-2 pre-training significantly amplifies our Skywork-13B's STEM related capabilities, leading to a substantial improvement in performance on STEM-related tasks. Now we show that it is even possible to enhance a much weaker base model, i.e., an intermediate checkpoint, using only a fraction of the data and compute used in Stage-2 pre-training. Table 7 presents the CEVAL and GSM8K scores before and after pre-training on in-domain data, utilizing a relatively weak model checkpoint that has only undergone 0.5T of pre-training. The results indicate that after pre-training with merely 1B tokens of in-domain
2310.19341#28
2310.19341#30
2310.19341
[ "2309.05463" ]
2310.19341#30
Skywork: A More Open Bilingual Foundation Model
 | CEVAL | GSM8K | En Loss | Zh Loss
Before | 28.3 | 6.9 | 1.86 | 2.08
After | 50.8 | 40.7 | 2.09 | 2.21
Δ | +22.5 | +33.8 | +0.23 | +0.13
Table 7: The impact of pre-training on a 0.5T checkpoint of Skywork-13B using only 1B tokens. The training data is sourced from a subset of our SkyPile-STEM corpus.
2310.19341#29
2310.19341#31
2310.19341
[ "2309.05463" ]
2310.19341#31
Skywork: A More Open Bilingual Foundation Model
The columns "En Loss" and "Zh Loss" show the model's validation loss on held-out sets of English and Chinese web texts, respectively.
data, a weak model, initially performing only slightly better than random at CEVAL and GSM8K, can surpass the performance of our strongest Skywork-13B (3T) backbone without in-domain pre-training. However, this comes at the cost of significant degradation in language modeling performance, as evidenced by the higher loss on both tasks, shown in the two rightmost columns of the table.
# 5.2 Pre-training on in-domain data: a common practice?
It is of interest to explore whether popular foundational models are pre-trained on in-domain data. In pursuit of this, we delve into the GSM8K dataset, equipped with official train/test splits and comprehensive solutions.
2310.19341#30
2310.19341#32
2310.19341
[ "2309.05463" ]
2310.19341#32
Skywork: A More Open Bilingual Foundation Model
We evaluate an LLM's language modeling loss on three datasets drawn from the same distribution: 1) the official GSM8K training set, 2) the official GSM8K test set, 3) a set composed of GSM8K-like samples generated by GPT-4. The corresponding losses are denoted as Ltrain, Ltest, and Lref, respectively. Theoretically, if a language model has not been exposed to any of the three datasets during pre-training, the three losses Ltrain, Ltest, and Lref should be approximately equivalent. However, if the model has been pre-trained on the training set or if the test data has been inadvertently exposed during the pre-training process, we would anticipate a notable discrepancy between Ltrain, Ltest, and Lref. Our results are outlined in Table 8, which also reports the differences in losses Δ1 = Ltest - Lref and Δ2 = Ltest - Ltrain. Notably, the Δ2 column reveals that for most models, the language modeling loss on the GSM8K training and test splits are almost identical.
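The check described above can be sketched as follows: compute the average LM loss over question-answer concatenations from the GSM8K train and test splits (and from a GPT-4-generated reference set, which is not public, so a placeholder is used), then form Δ1 = Ltest - Lref and Δ2 = Ltest - Ltrain. The model identifier below is a placeholder, and the text concatenation format is an assumption.

```python
import math
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

@torch.no_grad()
def mean_lm_loss(model, tok, texts, max_len=2048, device="cuda"):
    nll, n = 0.0, 0
    for t in texts:
        ids = tok(t, return_tensors="pt", truncation=True,
                  max_length=max_len).input_ids.to(device)
        if ids.size(1) < 2:
            continue
        loss = model(ids, labels=ids).loss
        nll += loss.item() * (ids.size(1) - 1)
        n += ids.size(1) - 1
    return nll / n

model_name = "some/causal-lm"   # hypothetical model id
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16).to("cuda").eval()

gsm8k = load_dataset("gsm8k", "main")
train_txt = [ex["question"] + "\n" + ex["answer"] for ex in gsm8k["train"]]
test_txt = [ex["question"] + "\n" + ex["answer"] for ex in gsm8k["test"]]
ref_txt = ["..."]               # placeholder: GPT-4-generated reference set is not public

L_train, L_test, L_ref = (mean_lm_loss(model, tok, t) for t in (train_txt, test_txt, ref_txt))
print(f"Delta1 = {L_test - L_ref:.2f}  Delta2 = {L_test - L_train:.2f}")
```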
2310.19341#31
2310.19341#33
2310.19341
[ "2309.05463" ]
2310.19341#33
Skywork: A More Open Bilingual Foundation Model
However, models such as ChatGLM3-6B, Baichuan2-13B, Qwen-7B/14B, and Aquila2-34B display markedly lower loss on the training split than on the test split. Consequently, we postulate that these models may have been considerably pre-trained on the GSM8K training split or similar data. Moreover, we notice one particular anomaly in the Δ1 column, indicating a significantly lower Ltest loss compared to Lref, which is interesting to study further for better understanding.
# 5.3 Pre-Training or Supervised Fine-Tuning?
In the era preceding the advent of LLMs such as GPT-4 (Bubeck et al., 2023; OpenAI, 2023) and Claude (Bai et al., 2022), supervised data for NLP tasks was generally scarce. This was because the process of data collection and annotation was both time-consuming and costly. Due to the scarcity of supervised data, NLP researchers relied on unsupervised pre-training techniques (Mikolov et al., 2013; Peters et al., 2018; Radford et al., 2018; Devlin et al., 2019) to improve downstream task performance via transfer learning, where supervised data is to be used only in the fine-tuning stage. In this context, pre-training on in-domain (supervised) data was pointless, as it would defeat the purpose of pre-training itself (transfer learning).
2310.19341#32
2310.19341#34
2310.19341
[ "2309.05463" ]
2310.19341#34
Skywork: A More Open Bilingual Foundation Model
This reality has significantly shifted, however, with the emergence of powerful LLMs. This is because procuring large amounts of high quality supervised/in-domain data is now as simple as making a few API requests to these LLMs, and it is comparatively low-cost (Wang et al., 2023; Taori et al., 2023). This new reality blurs the boundary between pre-training and supervised fine-tuning, making it feasible to incorporate substantial amounts of supervised data into the pre-training phase (Gunasekar et al., 2023; Li et al., 2023b). After all, curated in-domain data, whether written by human annotators or generated by LLM, are all form of human knowledge, and there is good reason for this knowledge to be absorbed into a foundation model. That said, we believe that there is valid risk on the practice of targeted pre-training, in that it compromise fairness in benchmarking. While through pre-training on in-domain data a model Ltest Ltrain Lref 0.99 0.78 1.49 1.52 1.27 1.12 1.10 0.64 1.36 1.42 ChatGLM3-6B 0.99 1.51 MOSS-7B InternLM-7B 1.21 1.07 Qwen-7B 1.41 Baichuan2-7B â 1 0.0 0.02 -0.06 -0.03 0.05 â 2 0.21 â 0.01 0.09 0.43 â
2310.19341#33
2310.19341#35
2310.19341
[ "2309.05463" ]
2310.19341#35
Skywork: A More Open Bilingual Foundation Model
LLaMA-13B | 1.41 | 1.42 | 1.36 | 0.05 | -0.01
LLaMA2-13B | 1.36 | 1.38 | 1.33 | 0.03 | -0.01
Xverse-13B | 1.42 | 1.43 | 1.39 | 0.03 | -0.01
Baichuan-13B | 1.41 | 1.42 | 1.37 | 0.04 | -0.01
Baichuan2-13B | 1.09 | 0.72 | 1.12 | -0.03 | 0.37
Qwen-14B | 1.03 | 0.42 | 1.14 | -0.11 | 0.61
InternLM-20B | 1.20 | 1.09 | 1.19 | 0.01 | 0.11
Aquila2-34B | 0.78 | 0.39 | 1.29 | -0.51 | 0.39
Skywork-13B | 1.01 | 0.97 | 1.00 | 0.01 | 0.04
Table 8: We evaluate the language modeling (LM) loss on samples (a sample is a concatenation of question and answer) from the GSM8K dataset for several foundation models. For each LLM, we compare LM loss on the training split (Ltrain), the test split (Ltest), and a specially curated reference set (Lref), generated by GPT-4, designed to mimic the GSM8K dataset. We also report two key metrics: Δ1 = Ltest - Lref, serving as an indicator of potential test data leakage during the training of the LLM, i.e., a lower value suggests possible leakage; and Δ2 = Ltest - Ltrain, which measures the degree of overfitting on the training split of the dataset. A higher value of Δ2 implies excessive overfitting. Outliers for both Δ1 and Δ2 are highlighted in gray.
2310.19341#34
2310.19341#36
2310.19341
[ "2309.05463" ]
2310.19341#36
Skywork: A More Open Bilingual Foundation Model
may excel at specific tasks, it remains uncertain how well it would perform on unseen tasks. Its capabilities may be overestimated based on the benchmark alone, which can lead to unfair comparisons between models and mislead users or stakeholders about the true capabilities of the model.
# 6 Limitation
2310.19341#35
2310.19341#37
2310.19341
[ "2309.05463" ]
2310.19341#37
Skywork: A More Open Bilingual Foundation Model
Our pre-training approach for Skywork-13B involved a two-stage process: general purpose pre-training followed by domain-specific enhancement pre-training. However, it remains unclear whether this methodology can produce a model on par with, or superior to, a model trained in one stage on a mixed corpus. Further investigation is needed to determine the comparative effectiveness of these pre-training approaches. Additionally, we have proposed using language modeling loss or perplexity as proxy metrics for monitoring and evaluating large language models. A limitation is that language modeling evaluation relies on the specific distribution used to sample test data, of which there are infinite possibilities. While language modeling perplexity over a given data distribution may predict performance on some tasks, it may not translate to other tasks. The correlation between language modeling and downstream performance could vary across different distributions and tasks.
# 7 Conclusion
Our work on Skywork-13B represents a significant leap forward in the development of open large language models. We believe that our comprehensive and transparent approach to the model's development will be a valuable resource for researchers in the field, fostering collaboration and open-source principles. Our two-stage training methodology, leveraging a segmented corpus, offers a novel approach for enhancing model capability in a specific domain, while our method of monitoring the training progress provides a practical solution to the challenges of tracking the improvement of these models over time. However, our work is more than just the creation of a new LLM. It is a call to action for the broader NLP community, urging a return to
2310.19341#36
2310.19341#38
2310.19341
[ "2309.05463" ]
2310.19341#38
Skywork: A More Open Bilingual Foundation Model
the principles of fairness, transparency, and the sharing of ideas that have historically fueled progress in the field. We hope that Skywork-13B will not only serve as a powerful tool for a wide range of applications but also inspire a renewed commitment to openness and cooperation in the development of future models.
# References
Amro Abbas, Kushal Tirumala, Dániel Simig, Surya Ganguli, and Ari S. Morcos. 2023. Semdedup: Data-efficient learning at web-scale through semantic deduplication.
2310.19341#37
2310.19341#39
2310.19341
[ "2309.05463" ]
2310.19341#39
Skywork: A More Open Bilingual Foundation Model
Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. 2023. Palm 2 technical report. arXiv preprint arXiv:2305.10403.
Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, Nicholas Joseph, Saurav Kadavath, Jackson Kernion, Tom Conerly, Sheer El-Showk, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Tristan Hume, Scott Johnston, Shauna Kravec, Liane Lovitt, Neel Nanda, Catherine Olsson, Dario Amodei, Tom Brown, Jack Clark, Sam McCandlish, Chris Olah, Ben Mann, and Jared Kaplan. 2022. Training a helpful and harmless assistant with reinforcement learning from human feedback.
Baichuan Inc. 2023. Baichuan 2: Open large-scale language models. https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md.
2310.19341#38
2310.19341#40
2310.19341
[ "2309.05463" ]
2310.19341#40
Skywork: A More Open Bilingual Foundation Model
Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. 2019. Piqa: Reasoning about physical commonsense in natural language.
Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, and Yi Zhang. 2023. Sparks of artificial general intelligence: Early experiments with gpt-4.
Shouyuan Chen, Sherman Wong, Liangjian Chen, and Yuandong Tian. 2023. Extending context window of large language models via positional interpolation.
Tianqi Chen, Bing Xu, Chiyuan Zhang, and Carlos Guestrin. 2016. Training deep nets with sublinear memory cost.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022.
2310.19341#39
2310.19341#41
2310.19341
[ "2309.05463" ]
2310.19341#41
Skywork: A More Open Bilingual Foundation Model
Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311.
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. 2022. Scaling instruction-finetuned language models.
Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. Boolq: Exploring the surprising difficulty of natural yes/no questions.
Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question answering? try arc, the ai2 reasoning challenge. arXiv preprint arXiv:1803.05457.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems.
Tri Dao. 2023.
2310.19341#40
2310.19341#42
2310.19341
[ "2309.05463" ]
2310.19341#42
Skywork: A More Open Bilingual Foundation Model
Flashattention-2: Faster attention with better parallelism and work partitioning.
Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. 2022. Flashattention: Fast and memory-efficient exact attention with io-awareness.
Grégoire Delétang, Anian Ruoss, Paul-Ambroise Duquenne, Elliot Catt, Tim Genewein, Christopher Mattern, Jordi Grau-Moya, Li Kevin Wenliang, Matthew Aitchison, Laurent Orseau, Marcus Hutter, and Joel Veness. 2023.
2310.19341#41
2310.19341#43
2310.19341
[ "2309.05463" ]
2310.19341#43
Skywork: A More Open Bilingual Foundation Model
Language modeling is compression.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
2310.19341#42
2310.19341#44
2310.19341
[ "2309.05463" ]
2310.19341#44
Skywork: A More Open Bilingual Foundation Model
Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, et al. 2023. Textbooks are all you need. arXiv preprint arXiv:2306.11644.
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021.
2310.19341#43
2310.19341#45
2310.19341
[ "2309.05463" ]
2310.19341#45
Skywork: A More Open Bilingual Foundation Model
Measuring massive multitask language understanding.
Danny Hernandez, Tom Brown, Tom Conerly, Nova DasSarma, Dawn Drain, Sheer El-Showk, Nelson Elhage, Zac Hatfield-Dodds, Tom Henighan, Tristan Hume, Scott Johnston, Ben Mann, Chris Olah, Catherine Olsson, Dario Amodei, Nicholas Joseph, Jared Kaplan, and Sam McCandlish. 2022. Scaling laws and interpretability of learning from repeated data.
Danny Hernandez, Jared Kaplan, Tom Henighan, and Sam McCandlish. 2021. Scaling laws for transfer.
Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katie Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, Jack W. Rae, Oriol Vinyals, and Laurent Sifre. 2022. Training compute-optimal large language models.
Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, Yao Fu, Maosong Sun, and Junxian He. 2023.
2310.19341#44
2310.19341#46
2310.19341
[ "2309.05463" ]
2310.19341#46
Skywork: A More Open Bilingual Foundation Model
C-eval: A multi-level multi-discipline chinese evaluation suite for foundation models. arXiv preprint arXiv:2305.08322.
InternLM Team. 2023. Internlm: A multilingual language model with progressively enhanced capabilities. https://github.com/InternLM/InternLM.
Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601–
2310.19341#45
2310.19341#47
2310.19341
[ "2309.05463" ]
2310.19341#47
Skywork: A More Open Bilingual Foundation Model
1611, Vancouver, Canada. Association for Computational Linguistics.
Nikhil Kandpal, Eric Wallace, and Colin Raffel. 2022. Deduplicating training data mitigates privacy risks in language models.
Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020.
2310.19341#46
2310.19341#48
2310.19341
[ "2309.05463" ]
2310.19341#48
Skywork: A More Open Bilingual Foundation Model
Scaling laws for neural language models.
Vijay Korthikanti, Jared Casper, Sangkug Lym, Lawrence McAfee, Michael Andersch, Mohammad Shoeybi, and Bryan Catanzaro. 2022. Reducing activation recomputation in large transformer models.
Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing:
2310.19341#47
2310.19341#49
2310.19341
[ "2309.05463" ]
2310.19341#49
Skywork: A More Open Bilingual Foundation Model
System Demonstrations, pages 66–71, Brussels, Belgium. Association for Computational Linguistics.
Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. Race: Large-scale reading comprehension dataset from examinations. arXiv preprint arXiv:1704.04683.
Katherine Lee, Daphne Ippolito, Andrew Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, and Nicholas Carlini. 2022.
2310.19341#48
2310.19341#50
2310.19341
[ "2309.05463" ]
2310.19341#50
Skywork: A More Open Bilingual Foundation Model
Deduplicating training data makes language models better.
Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, and Timothy Baldwin. 2023a. Cmmlu: Measuring massive multitask language understanding in chinese.
Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, and Yin Tat Lee. 2023b. Textbooks are all you need ii: phi-1.5 technical report. arXiv preprint arXiv:2309.05463.
Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization.
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space.
Niklas Muennighoff, Alexander M. Rush, Boaz Barak, Teven Le Scao, Aleksandra Piktus, Nouamane Tazi, Sampo Pyysalo, Thomas Wolf, and Colin Raffel. 2023.
2310.19341#49
2310.19341#51
2310.19341
[ "2309.05463" ]
2310.19341#51
Skywork: A More Open Bilingual Foundation Model
Scaling data-constrained language models.
Deepak Narayanan, Mohammad Shoeybi, Jared Casper, Patrick LeGresley, Mostofa Patwary, Vijay Anand Korthikanti, Dmitri Vainbrand, Prethvi Kashinkunti, Julie Bernauer, Bryan Catanzaro, Amar Phanishayee, and Matei Zaharia. 2021. Efficient large-scale language model training on gpu clusters using megatron-lm.
OpenAI. 2023. GPT-4 technical report.
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. 2022.
2310.19341#50
2310.19341#52
2310.19341
[ "2309.05463" ]
2310.19341#52
Skywork: A More Open Bilingual Foundation Model
Training language models to follow instructions with human feedback.
Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli, Hamza Alobeidli, Baptiste Pannier, Ebtesam Almazrouei, and Julien Launay. 2023. The refinedweb dataset for falcon llm: outperforming curated corpora with web data, and web data only. arXiv preprint arXiv:2306.01116.
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics.
Qwen Team. 2023. QWEN technical report. https://github.com/QwenLM/Qwen.
Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. 2018. Improving language understanding by generative pre-training.
Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. 2020.
2310.19341#51
2310.19341#53
2310.19341
[ "2309.05463" ]
2310.19341#53
Skywork: A More Open Bilingual Foundation Model
Zero: Memory optimizations toward training trillion parameter models.
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, and Gabriel Synnaeve. 2023.
2310.19341#52
2310.19341#54
2310.19341
[ "2309.05463" ]
2310.19341#54
Skywork: A More Open Bilingual Foundation Model
Code llama: Open foundation models for code.
Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2021. Winogrande: An adversarial winograd schema challenge at scale. Communications of the ACM, 64(9):99–106.
Noam Shazeer. 2020. Glu variants improve transformer.
Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. 2020. Megatron-lm: Training multi-billion parameter language models using model parallelism.
2310.19341#53
2310.19341#55
2310.19341
[ "2309.05463" ]
2310.19341#55
Skywork: A More Open Bilingual Foundation Model
Daria Soboleva, Faisal Al-Khateeb, Robert Myers, Jacob R Steeves, Joel Hestness, and Nolan Dey. 2023. SlimPajama: A 627B token cleaned and deduplicated version of RedPajama.
Jianlin Su, Yu Lu, Shengfeng Pan, Ahmed Murtadha, Bo Wen, and Yunfeng Liu. 2022. Roformer: Enhanced transformer with rotary position embedding.
2310.19341#54
2310.19341#56
2310.19341
[ "2309.05463" ]
2310.19341#56
Skywork: A More Open Bilingual Foundation Model
Tianxiang Sun and Xipeng Qiu. 2023. MOSS. https://github.com/OpenLMLab/MOSS/blob/main/README_en.md.
Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca.
2310.19341#55
2310.19341#57
2310.19341
[ "2309.05463" ]
2310.19341#57
Skywork: A More Open Bilingual Foundation Model
Ross Taylor, Marcin Kardas, Guillem Cucurull, Thomas Scialom, Anthony Hartshorn, Elvis Saravia, Andrew Poulton, Viktor Kerkez, and Robert Stojnic. 2022. Galactica: A large language model for science.
THUDM. 2023. ChatGLM3-6B. https://github.com/THUDM/ChatGLM3. Webpage in Chinese.
Together Computer. 2023. Redpajama: An open source recipe to reproduce llama training dataset.
2310.19341#56
2310.19341#58
2310.19341
[ "2309.05463" ]
2310.19341#58
Skywork: A More Open Bilingual Foundation Model
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023a. Llama: Open and efficient foundation language models.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023b. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017.
2310.19341#57
2310.19341#59
2310.19341
[ "2309.05463" ]
2310.19341#59
Skywork: A More Open Bilingual Foundation Model
Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, pages 6000–6010, Red Hook, NY, USA. Curran Associates Inc.
Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2023. Self-instruct: Aligning language models with self-generated instructions.
2310.19341#58
2310.19341#60
2310.19341
[ "2309.05463" ]
2310.19341#60
Skywork: A More Open Bilingual Foundation Model
Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin, and Edouard Grave. 2020. CCNet: Extracting high quality monolingual datasets from web crawl data. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 4003–4012, Marseille, France. European Language Resources Association.
Mengzhou Xia, Mikel Artetxe, Chunting Zhou, Xi Victoria Lin, Ramakanth Pasunuru, Danqi Chen, Luke Zettlemoyer, and Ves Stoyanov. 2023. Training trajectories of language models across scales.
Wenhan Xiong, Jingyu Liu, Igor Molybog, Hejia Zhang, Prajjwal Bhargava, Rui Hou, Louis Martin, Rashi Rungta, Karthik Abinav Sankararaman, Barlas Oguz, Madian Khabsa, Han Fang, Yashar Mehdad, Sharan Narang, Kshitiz Malik, Angela Fan, Shruti Bhosale, Sergey Edunov, Mike Lewis, Sinong Wang, and Hao Ma. 2023. Effective long-context scaling of foundation models.
Xverse-AI. 2023. Xverse-13B. https://github.com/xverse-ai/XVERSE-13B. Webpage in Chinese.
Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019.
2310.19341#59
2310.19341#61
2310.19341
[ "2309.05463" ]
2310.19341#61
Skywork: A More Open Bilingual Foundation Model
HellaSwag: Can a machine really finish your sentence? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4791–4800, Florence, Italy. Association for Computational Linguistics.
Biao Zhang and Rico Sennrich. 2019. Root Mean Square Layer Normalization. In Advances in Neural Information Processing Systems 32, Vancouver, Canada.
# A Details on GPT-7B vs. LLaMA-7B Experiment
In a preliminary experiment, we compared the language modeling performance between the GPT and LLaMA architectures in a controlled environment. We trained a 7B model with GPT architecture and a comparable 7B model with LLaMA architecture for 200B tokens sampled from the same corpus and with the same training parameters.
2310.19341#60
2310.19341#62
2310.19341
[ "2309.05463" ]
2310.19341#62
Skywork: A More Open Bilingual Foundation Model
Details are given in Table 9.
# B Preliminary Experiments on Distributed Training
In Table 10 we report preliminary results obtained with various distributed training configurations on the LLaMA-13B and Skywork-13B model architectures. In both cases, the best throughput is achieved with DP256 and PP2 under the ZeRO-1 setting.
# C More Benchmark Results
We also provide results of the following benchmarks in Table 11:
• TriviaQA (Joshi et al., 2017): TriviaQA is a realistic text-based question answering dataset which includes 950K question-answer pairs from 662K documents collected from Wikipedia and the web.
• HellaSwag (Zellers et al., 2019): HellaSWAG is a dataset that focuses on grounded commonsense inference.
2310.19341#61
2310.19341#63
2310.19341
[ "2309.05463" ]
2310.19341#63
Skywork: A More Open Bilingual Foundation Model
• Winogrande (Sakaguchi et al., 2021): WinoGrande is a dataset that focuses on commonsense reasoning.
• BoolQ (Clark et al., 2019): BoolQ is a question answering dataset for yes/no questions.
• PIQA (Bisk et al., 2019): PIQA is a dataset for commonsense reasoning, and was created to investigate the physical knowledge of existing models in NLP.
• ARC: ARC is a dataset consisting of multiple-choice question-answering tasks that focus on commonsense reasoning.
2310.19341#62
2310.19341#64
2310.19341
[ "2309.05463" ]
2310.19341#64
Skywork: A More Open Bilingual Foundation Model
• RACE (Lai et al., 2017): RACE is a dataset that focuses on reading comprehension.
# D Details on LM Test Sets
We established a daily crawl of published articles and user posts from a selection of widely used Chinese websites. This data collection process is distinct from the pipeline utilized to construct SkyPile. The purpose of gathering this data is to create independent language modeling test sets, categorized by their domain, for the evaluation of current open Large Language Models (LLMs). Below we describe the sources of these domain test sets:
2310.19341#63
2310.19341#65
2310.19341
[ "2309.05463" ]
2310.19341#65
Skywork: A More Open Bilingual Foundation Model
• Technology: AI related articles from 36kr.com. This website provides timely and comprehensive news articles about startups, technology, and business trends, primarily in the Chinese market.
• Movie: User written movie reviews from Douban (douban.com). Douban is a popular social networking service in China that offers a platform for users to share their opinions and create content related to movies, books, and music. It is one of the most influential web 2.0 websites in China and has a strong focus on user-generated content.
2310.19341#64
2310.19341#66
2310.19341
[ "2309.05463" ]
2310.19341#66
Skywork: A More Open Bilingual Foundation Model
• Government: News from the website of People's Daily (www.people.com.cn), which is the
 | GPT-7B | LLaMA-7B
Positional Embedding | Absolute | Rotary
Max Position Embeddings | 4096 | 4096
Normalization | LayerNorm | RMSNorm
Activation | Gelu | SwiGlu
Attention | MHA | MHA
Num. Layers | 32 | 32
Hidden Size | 4096 | 4096
Num. Heads | 32 | 32
FFN Size | 16384 | 11008
Context Size | 4096 | 4096
Global Batch Size | 1024 | 1024
Adam β1 | 0.95 | 0.95
Adam β2 | 0.9 | 0.9
Adam ϵ | 1.00e-8 | 1.00e-8
Precision | bf16 | bf16
Peak Learning Rate | 3e-4 | 3e-4
Min Learning Rate | 3e-5 | 3e-5
Learning Rate Decay Steps | 43945 | 43945
Learning Rate Decay Style | Cosine | Cosine
Warm-up Steps | 2000 steps | 2000 steps
Weight Decay | 0.1 | 0.1
Dropout Probability | 0.1 | 0
Gradient Clip | 1 | 1
Total Steps | 51200 | 51200
Table 9: Comparison of GPT-7B and LLaMA-7B. All variables are controlled in our experiment except for the differences in architecture.
2310.19341#65
2310.19341#67
2310.19341
[ "2309.05463" ]
2310.19341#67
Skywork: A More Open Bilingual Foundation Model
Model | Strategy | Throughput | MFU | TFlops | Memory
LLaMA2 | DP512 | - | - | - | OOM
LLaMA2 | DP256+PP2 | 2045 | 58.5 | 182.6 | 70.7
LLaMA2 | DP256+TP2 | 1928 | 55.2 | 172.2 | 65.5
LLaMA2 | DP128+TP2+PP2 | 1936 | 55.4 | 172.9 | 39.4
LLaMA2 | DP128+PP4 | 1964 | 56.2 | 175.4 | 53.4
LLaMA2 | DP128+TP4 | 1744 | 44.4 | 138.5 | 35.4
Skywork | DP512 | - | - | - | OOM
Skywork | DP256+PP2 | 1873 | 56.5 | 176.2 | 77.1
Skywork | DP256+TP2 | 1775 | 53.5 | 167.0 | 67.9
Skywork | DP128+TP2+PP2 | 1776 | 53.5 | 167.0 | 42.5
Skywork | DP128+PP4 | 1828 | 55.1 | 171.9 | 58.7
Skywork | DP128+TP4 | 1417 | 43.1 | 134.6 | 36.6
Table 10: Compute efficiency achieved with different distributed training configurations. We tested both LLaMA2-13B and Skywork-13B. Throughout the experiments, we use a global batch size of 4096 and a micro batch size of 1. When Tensor Parallelism is enabled, Sequence Parallelism is enabled as well. Throughput is measured in tokens processed per GPU per second, while Model Flops Utilization (MFU) is expressed as a percentage (%). Memory usage is reported in Gigabytes (GB).
2310.19341#66
2310.19341#68
2310.19341
[ "2309.05463" ]
2310.19341#68
Skywork: A More Open Bilingual Foundation Model
Models | BoolQ | PIQA | Winogrande | TriviaQA | RACE | Hellaswag | ARC-E | ARC-C
OpenLLaMA-13B | 77.6 | 79.5 | 72.0 | 60.2 | 42.4 | 76.0 | 78.9 | 48.6
LLaMA-13B | 80.7 | 81.0 | 76.2 | 65.0 | 43.4 | 80.1 | 82.1 | 54.7
LLaMA2-13B | 83.3 | 81.7 | 75.8 | 68.2 | 43.9 | 81.5 | 83.7 | 57.0
Baichuan-13B | 78.8 | 77.2 | 70.4 | 51.6 | 35.8 | 74.2 | 77.2 | 48.4
Baichuan2-13B | 80.3 | 79.3 | 72.1 | 58.0 | 25.2 | 76.4 | 81.1 | 53.2
Xverse-13B | 79.8 | 80.0 | 71.1 | 53.3 | 43.2 | 77.2 | 78.5 | 49.1
Skywork-13B | 82.9 | 79.9 | 72.2 | 54.0 | 45.2 | 77.4 | 78.5 | 50.2
Table 11: More English benchmarks results. As all of these models are more or less sensitive to the prompt template or number of shots, the reported results, which are reproduced by us, may differ from those reported by other sources.
2310.19341#67
2310.19341#69
2310.19341
[ "2309.05463" ]
2310.19341#69
Skywork: A More Open Bilingual Foundation Model
most influential and authoritative newspapers in China. The language used in the news is typically formal Standard Mandarin and carries an authoritative tone.
• Game: Articles from Gcores (www.gcores.com). This is a Chinese digital media platform dedicated to video games, tech trends, and geek culture. The platform features a wide range of original content, including news articles, podcast episodes, videos, and independent games.
• Finance: News from the finance section of Sina (finance.sina.com.cn). It is one of China's leading online media companies, and offers a comprehensive suite of financial information and services. It covers a broad range of topics including stock markets, forex, commodities, real estate, and personal finance.
2310.19341#68
2310.19341#70
2310.19341
[ "2309.05463" ]
2310.19341#70
Skywork: A More Open Bilingual Foundation Model
• General: News from Jiemian News (www.jiemian.com). Jiemian is a prominent Chinese digital media platform known for its in-depth and high-quality journalism. It covers a wide range of topics, including politics, economy, culture, technology, finance, and lifestyle.
Subject | Stage-1 | Stage-2 | Boost
Accountant | 40.8 | 49.0 | 8.2
Advanced Mathematics | 26.3 | 42.1 | 15.8
Art Studies | 60.6 | 72.7 | 12.1
Basic Medicine | 42.1 | 57.9 | 15.8
Business Administration | 42.4 | 48.5 | 6.1
Chinese Language and Literature | 47.8 | 56.5 | 8.7
Civil Servant | 40.4 | 66.0 | 25.5
Clinical Medicine | 36.4 | 40.9 | 4.5
College Chemistry | 37.5 | 50.0 | 12.5
College Economics | 52.7 | 47.3 | -5.5
College Physics | 15.8 | 36.8 | 21.1
College Programming | 51.4 | 51.4 | 0.0
Computer Architecture | 33.3 | 52.4 | 19.0
Computer Network | 21.1 | 26.3 | 5.3
Discrete Mathematics | 50.0 | 18.8 | -31.3
Education Science | 44.8 | 75.9 | 31.0
Electrical Engineer | 35.1 | 35.1 | 0.0
Environmental Impact Assessment Engineer | 45.2 | 51.6 | 6.5
Fire Engineer | 45.2 | 51.6 | 6.5
High School Biology | 42.1 | 78.9 | 36.8
High School Chemistry | 36.8 | 63.2 | 26.3
High School Chinese | 26.3 | 42.1 | 15.8
High School Geography | 36.8 | 78.9 | 42.1
High School History | 80.0 | 80.0 | 0.0
High School Mathematics | 27.8 | 16.7 | -11.1
High School Physics | 42.1 | 57.9 | 15.8
High School Politics | 47.4 | 84.2 | 36.8
Ideological and Moral Cultivation | 84.2 | 100.0 | 15.8
Law | 33.3 | 45.8 | 12.5
Legal Professional | 39.1 | 52.2 | 13.0
Logic | 50.0 | 45.5 | -4.5
Mao Zedong Thought | 70.8 | 83.3 | 12.5
Marxism | 57.9 | 63.2 | 5.3
Metrology Engineer | 37.5 | 58.3 | 20.8
Middle School Biology | 76.2 | 95.2 | 19.0
Middle School Chemistry | 30.0 | 95.0 | 65.0
Middle School Geography | 41.7 | 83.3 | 41.7
Middle School History | 59.1 | 81.8 | 22.7
Middle School Mathematics | 15.8 | 36.8 | 21.1
Middle School Physics | 42.1 | 73.7 | 31.6
Middle School Politics | 52.4 | 90.5 | 38.1
Modern Chinese History | 47.8 | 73.9 | 26.1
Operating System | 52.6 | 47.4 | -5.3
Physician | 46.9 | 57.1 | 10.2
Plant Protection | 63.6 | 63.6 | 0.0
Probability and Statistics | 27.8 | 33.3 | 5.6
Professional Tour Guide | 69.0 | 65.5 | -3.4
Sports Science | 42.1 | 52.6 | 10.5
Tax Accountant | 30.6 | 49.0 | 18.4
Teacher Qualification | 61.4 | 84.1 | 22.7
Urban and Rural Planner | 50 | 67.4 | 17.4
Veterinary Medicine | 26.1 | 60.9 | 34.8
2310.19341#69
2310.19341#71
2310.19341
[ "2309.05463" ]
2310.19341#71
Skywork: A More Open Bilingual Foundation Model
Table 12: Details on CEVAL benchmark results.
2310.19341#70
2310.19341#72
2310.19341
[ "2309.05463" ]
2310.19341#72
Skywork: A More Open Bilingual Foundation Model
Figure 6: Performance of the Skywork-13B on various benchmarks during Stage-1 pre-training. Benchmarks include BoolQ, PIQA, Winogrande, TriviaQA, RACE, and CMRC.
2310.19341#71
2310.19341#73
2310.19341
[ "2309.05463" ]
2310.19341#73
Skywork: A More Open Bilingual Foundation Model
2310.19341#72
2310.19341
[ "2309.05463" ]
2310.18018#0
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
Oscar Sainz1 Jon Ander Campos2 Iker García-Ferrero1 Julen Etxaniz1 Oier Lopez de Lacalle1 Eneko Agirre1
1 HiTZ Center - Ixa, University of the Basque Country UPV/EHU
{oscar.sainz,iker.graciaf,julen.etxaniz}@ehu.eus {oier.lopezdelacalle,e.agirre}@ehu.eus
2 Cohere [email protected]
# Abstract
In this position paper, we argue that the classical evaluation on Natural Language Processing (NLP) tasks using annotated benchmarks is in trouble. The worst kind of data contamination happens when a Large Language Model (LLM) is trained on the test split of a benchmark, and then evaluated on the same benchmark.
2310.18018#1
2310.18018
[ "2103.03874" ]
2310.18018#1
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
The extent of the problem is unknown, as it is not straightforward to measure. Contamination causes an overestimation of the performance of a contaminated model in a target benchmark and associated task with respect to their non-contaminated counterparts. The consequences can be very harmful, with wrong scientific conclusions being published while other correct ones are discarded. This position paper defines different levels of data contamination and argues for a community effort, including the development of automatic and semi-automatic measures to detect when data from a benchmark was exposed to a model, and suggestions for flagging papers with conclusions that are compromised by data contamination.
et al., 2020) the need for data has been solved by crawling the internet, reaching trillions of tokens (Touvron et al., 2023a), and making it very hard to know whether a specific benchmark was used to train the LLM. This is applicable to all models, even if they document the source of the data at a high level, but especially for closed models with no or insufficient documentation.
Data contamination has two consequences. The first one is that the performance of an LLM when evaluated on a benchmark it already processed during pre-training will be overestimated, causing it to be preferred with respect to other LLMs. This affects the comparative assessment of the quality of LLMs. The second is that papers proposing scientific hypotheses on certain NLP tasks could be using contaminated LLMs, and thus make wrong claims about their hypotheses, and invalidate alternative hypotheses that could be true. This second consequence has an enormous negative impact on our field and is our main focus.
2310.18018#0
2310.18018#2
2310.18018
[ "2103.03874" ]
2310.18018#2
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
# 1 Introduction At the core of NLP as a discipline, there is rigorous evaluation on different tasks. The experimental protocols involve strict control over the data, especially test data, which needs to be totally unseen during development, but also over training and development data. This is essential to assess the performance of a model in zero-shot, few-shot, or fully supervised settings. Since fine-tuning and prompting of Large Language Models (LLMs) became commonplace (Min et al., 2021), it has been increasingly difficult to enforce those strict protocols. Pre-training LLMs is expensive, and therefore, most of the time, researchers use LLMs trained by third-party entities (Raffel et al., 2020; Touvron et al., 2023a), which are agnostic to the target tasks where those LLMs are going to be used. With the growing scale of LLMs (Kaplan et al., 2020; Henighan
2310.18018#1
2310.18018#3
2310.18018
[ "2103.03874" ]
2310.18018#3
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
There are several measures that the community could take. A possible solution would be to avoid all research involving datasets which include published test data, and focus on datasets where the test data labels are not public. This solution would severely affect the number of NLP tasks for which benchmarks exist, at least until new benchmarks that avoid data leakage are produced. Jacovi et al. (2023) presents preventative strategies to avoid contamination in the future. In this position paper, we propose a complementary line of action which seeks to measure and document data contamination cases, specifying LLM, benchmark and evidence supporting contamination. This solution involves a registry of contamination cases1, collaborative manual work and research on automatic approaches. In addition, conferences should devise mechanisms to ensure that papers 1 Such as the LM Contamination Index: https://hitz-zentroa.github.io/lm-contamination/
2310.18018#2
2310.18018#4
2310.18018
[ "2103.03874" ]
2310.18018#4
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
don't include conclusions involving contamination, and to flag past work where contamination has been discovered after publication. The paper starts by introducing background, followed by a definition of data contamination, contamination at different steps, methods to measure data contamination, and a call for action. # 2 Background Detection of contamination cases has traditionally been done by directly analyzing the training data (Dodge et al., 2021), but the current scale of the pre-training data makes this difficult (Kreutzer et al., 2022; Birhane et al., 2021). Without proper documentation and search tools like ROOTS (Piktus et al., 2023), it is very difficult for any researcher to actually know whether their datasets are compromised for a given model. More recently, this task became even harder, as the best-performing LLMs are deployed as products and, therefore, their training corpora are kept secret. In this case, it has been shown that the high memorization abilities of LLMs can be used to generate portions of the training texts (Carlini et al., 2021; Magar and Schwartz, 2022). Using this memorization property, Sainz et al. (2023) show that ChatGPT generates portions of popular NLP benchmarks. Furthermore, LLM memorization has been studied in data-leakage scenarios (Elangovan et al., 2021). Regarding data contamination cases, Dodge et al. (2021) exposed that the C4 corpus (Raffel et al., 2020), a corpus used to pre-train several LLMs such as T5 (Raffel et al., 2020), contained the test splits of several benchmarks that were crawled from GitHub. Moreover, Brown et al. (2020) acknowledged a bug in their filtering script that caused the contamination of several benchmarks during the GPT-3 training. Furthermore, OpenAI (2023) stated that parts of the BIG-bench (Srivastava et al., 2023) benchmark were inadvertently mixed into the training set, enough to stop them from evaluating the model on it.
2310.18018#3
2310.18018#5
2310.18018
[ "2103.03874" ]
2310.18018#5
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
They also mention that they included parts of the training sets of MATH (Hendrycks et al., 2021) and GSM-8K (Cobbe et al., 2021) as training data to improve mathematical reasoning (OpenAI, 2023). Therefore, the performance results reported for GSM-8K cannot be taken as zero-shot results when compared to other models. Recently, Sainz et al. (2023) reported that several benchmarks have already been compromised in ChatGPT, including the popular CoNLL2003 (Tjong Kim Sang and De Meulder, 2003). There are several preprints that evaluate ChatGPT on CoNLL03 (Wei et al., 2023; Li et al., 2023a; Han et al., 2023) and at least one conference paper published at ACL 2023 that evaluates GPT-3 (Brown et al., 2020) and Codex (Chen et al., 2021) on the same benchmark (Li et al., 2023b). Appendix A shows evidence of data contamination for those LLMs, and casts doubt on the conclusions of those papers. # 3 Defining data contamination In general, data contamination refers to any breach in the strict control of datasets required by the experimental protocol. In this paper, we focus on the specific case where an LLM has processed the evaluation benchmark during its pre-training. However, different types of contamination exist, and each of them has different implications. In this section, we present three types of contamination: guideline, text and annotation. Guideline contamination happens when the annotation guidelines for a specific dataset are seen by the model. Usually, for specialized annotations, highly detailed guidelines are required. The guidelines can usually be found publicly on the internet, even for datasets that are not public or require buying a license for their use, ACE05 (Walker et al., 2006) for example. The more detailed the guidelines, the more information and examples they provide. A model aware of the guidelines for a specific task or dataset has an advantage over a model without such information. Guideline contamination should be considered especially in zero- and few-shot evaluations.
2310.18018#4
2310.18018#6
2310.18018
[ "2103.03874" ]
2310.18018#6
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
Raw text contamination happens when the original text (prior to annotation) is seen by the model. Some examples of this type of contamination are datasets based on Wikipedia texts. Wikipedia is commonly used as a source of pre-training data, but it is also a frequent source of text for creating new datasets. MultiCoNER 2 (Fetahu et al., 2023), a Named Entity Recognition dataset based on Wikipedia links and Wikidata information, is an example of this phenomenon. Models that have already seen Wikipedia in its original form (including the markup annotations) have more information to better identify a part of the annotations (the entity boundaries) of the dataset. As pointed out by Dodge et al. (2021), other datasets built from the web, such as IMDB (Maas et al., 2011) and CNN/DailyMail (Hermann et al., 2015), can also be compromised. This kind of contamination should be taken into account when developing automatically annotated datasets. Annotation contamination happens when the annotations (labels) of the target benchmark are exposed to the model during training. Depending on the splits of the benchmark that have been exposed, we can have the following cases: (1) When the evaluation split is involved, the experiment is completely invalidated. This is the most harmful level of contamination. (2) When the train or development splits are involved, this would not affect comparisons with other models that have been developed using those same splits, but it does invalidate conclusions claiming zero-shot or few-shot performance. # 4 Contamination on different steps Currently, the standard procedure to train and deploy language models has three main steps: pre-training a language model; fine-tuning the model to follow instructions and/or align with human feedback; and an iterative improvement step after deployment. Data contamination does not only occur in the pre-training step of LLMs, but can also occur later in the training pipeline. # 4.1 Contamination during pre-training During pre-training, there is a high chance that undesired data is fed to the model.
2310.18018#5
2310.18018#7
2310.18018
[ "2103.03874" ]
2310.18018#7
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
Gathering huge amounts of text from the internet also has a downside: it becomes very hard to filter undesired data completely, and even deduplication is challenging (Lee et al., 2022). Avoiding data contamination completely is not realistic, as it is impossible to know every dataset on which the research community may test an LLM. However, allowing researchers to access and perform queries on the pre-training data may ensure that no corrupted evaluations are performed. In fact, keeping the pre-training data unavailable to LLM consumers may lead to undesired influences on downstream tasks (Li et al., 2020; Gehman et al., 2020; Groenwold et al., 2020). In addition, researchers building LLMs should avoid, at least, contamination from well-known standard benchmarks such as GLUE (Wang et al., 2018) or SuperGLUE (Wang et al., 2020). As Dodge et al. (2021) showed (see their Table 2), various standard benchmarks were found in the C4 corpus (Raffel et al., 2020). # 4.2 Contamination on supervised fine-tuning The supervised fine-tuning or instruction-tuning step is another step where contamination can occur. Nevertheless, it is much less frequent, as documenting the training data is a required practice in the research community for publishing findings. Examples include the FLAN dataset collection (Longpre et al., 2023), OPT-IML Bench (Iyer et al., 2023), Super-NaturalInstructions (Wang et al., 2022b), the P3 collection (Bach et al., 2022), and so on. Recently, more and more machine-generated text is being used to fine-tune language models. Some examples are Self-Instruct (Wang et al., 2022a), Unnatural Instructions (Honovich et al., 2022), Alpaca Data (Taori et al., 2023) and ShareGPT (Chiang et al., 2023). The aim of those datasets is usually to make public and smaller white-box models imitate black-box models such as ChatGPT (Gu et al., 2023).
2310.18018#6
2310.18018#8
2310.18018
[ "2103.03874" ]
2310.18018#8
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
However, the distillation of a closed teacher model with clear signs of contamination is an issue. More alarming is that popular crowd-sourcing channels like MTurk have started using LLMs to generate data that was supposed to be manually generated (Veselovsky et al., 2023). # 4.3 Contamination after deployment The last step where models can be exposed to contamination applies mostly to LLMs offered as service products. With the recent improvements in the quality of LLMs, models that were supposed to be part of bigger products have become products by themselves (ChatGPT or Bard, for example). It is worth noting that, although they are closed models, i.e. no information is known about the architecture or training details, the research community has evaluated them on standard benchmarks (Jiao et al., 2023, among others). The monetary success of closed systems is closely tied to the performance of the model. Therefore, companies have a strong incentive to audit user inputs and retrain their system when the performance on a task is determined to be poor. Models that are accessed via API calls have been iteratively improved with user input, leading to evaluation data exposure. As a result, the models become aware of the test data, to the point that one can easily recreate the dataset, as we discuss in Section 5.2 (see examples in Appendix A). # 5 Measuring data contamination For the reasons already mentioned, it is necessary to measure existing data contamination cases and to document relevant contamination evidence. To achieve this goal, we differentiate two cases. In the first case, we have open models with public access to all the training data, including text used in pre-training, but also, if the LLM was trained on them, instruction-tuning datasets and deployment datasets. In the second case, we have closed models for which there is no access to the training data. # 5.1 Open LLMs Most of the research on data contamination has focused on analyzing pre-training data with string-matching operations (Dodge et al., 2021), as this provides direct evidence that the LLM was contaminated.
2310.18018#7
2310.18018#9
2310.18018
[ "2103.03874" ]
2310.18018#9
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
Pre-training datasets are unwieldily large, and string-matching operations can be very slow at this scale. Therefore, several tools for data auditing have been released recently: the ROOTS Search Tool (Piktus et al., 2023) and Data Portraits (Marone and Van Durme, 2023), among others. As an example of their usefulness, Piktus et al. (2023) found that BLOOM (Workshop et al., 2023) should not be evaluated on XNLI (Conneau et al., 2018) due to contamination. These tools should be made available for all open LLMs in order to allow for contamination case discovery. In addition, there is currently no agreed-upon methodology to measure the level of contamination. For cases where the full benchmark is not found, we propose to measure the level of data contamination using benchmark data overlap, that is, the percentage of the benchmark that can be found in the pre-training dataset (Dodge et al., 2021; Piktus et al., 2023).
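To make the overlap notion concrete, the following is a minimal sketch (not from the paper) of how benchmark data overlap could be computed with exact string matching, assuming the pre-training corpus is available as an iterable of documents and the benchmark as a list of test examples; the function names and the normalization scheme are illustrative assumptions.

```python
from typing import Iterable, List

def normalize(text: str) -> str:
    # Light normalization so trivial whitespace/case differences do not hide matches.
    return " ".join(text.lower().split())

def benchmark_overlap(benchmark_examples: List[str],
                      corpus_documents: Iterable[str]) -> float:
    """Fraction of benchmark examples that appear verbatim in the corpus."""
    targets = {normalize(ex) for ex in benchmark_examples}
    found = set()
    for doc in corpus_documents:              # stream the corpus once
        doc_norm = normalize(doc)
        for target in targets - found:        # only search for examples not yet found
            if target in doc_norm:
                found.add(target)
        if len(found) == len(targets):        # early exit once everything is found
            break
    return len(found) / max(len(targets), 1)
```

In practice, a linear scan like this is far too slow for trillion-token corpora, which is precisely why indexed tools such as the ROOTS Search Tool or Data Portraits are needed; the sketch only illustrates what the reported overlap percentage means.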
2310.18018#8
2310.18018#10
2310.18018
[ "2103.03874" ]
2310.18018#10
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
# 5.2 Closed LLMs Although most recent popular models, such as LLaMA (Touvron et al., 2023a), GPT-4 (OpenAI, 2023) or Bard, have not publicly released their pre-training data, very few works have actually addressed detecting data contamination when the pre-training data is not available (Magar and Schwartz, 2022). Although this scenario is much more challenging than the former, we foresee that it will become the most prevalent. Developing methods to measure data contamination in this scenario will be crucial for future evaluations. To tackle this problem, we propose to take advantage of LLMs' memorization capabilities. Appendix A shows some examples of using memorization to uncover data contamination for the CoNLL2003 benchmark on three LLMs. In cases where the LLM does not produce the benchmark verbatim, it is left to the auditor to examine the output and judge whether the evidence supports contamination.
2310.18018#9
2310.18018#11
2310.18018
[ "2103.03874" ]
2310.18018#11
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
The process is totally manual and could be scaled in a community effort. Alternatively, automatic metrics for measuring data contamination levels could be developed. As an initial step in this direction, we reuse and adapt the extractability definition presented in Carlini et al. (2023) for defining memorization. We define that an example s is extractable from evaluation dataset d and model m if there exists a sequence of k examples x immediately preceding s in d such that s is generated when prompting model m with x. We can then define the degree of contamination of model m for dataset d as the ratio of extractable examples with respect to the total number of examples in the dataset. One further question remains to be solved: whether the lack of memorization of a benchmark ensures that the LLM was not trained on that benchmark. One hypothesis could be that the lack of memorization is correlated with the performance, even if the LLM was trained on the benchmark; thus the LLM would not have any advantage with respect to another LLM that was not trained on the benchmark. This is currently speculation, so further research on this topic is necessary, given the widespread use of closed LLMs in NLP research.
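The extractability definition above can be operationalized directly for models whose weights are available. Below is a minimal sketch, not taken from the paper, of computing the contamination degree with the Hugging Face transformers library; the placeholder model name, the newline-joined prompt format, and the strict verbatim-match criterion are simplifying assumptions.

```python
# Sketch of the extractability-based contamination degree: an example counts as
# extractable if the model reproduces it when prompted with the k examples that
# immediately precede it in the dataset. Model name and prompt format are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

def contamination_degree(examples, model_name="gpt2", k=3, max_new_tokens=128):
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    extractable = 0
    for i in range(k, len(examples)):
        prompt = "\n".join(examples[i - k:i]) + "\n"   # k preceding examples as context
        inputs = tokenizer(prompt, return_tensors="pt")
        output_ids = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            do_sample=False,                           # greedy decoding probes memorization
            pad_token_id=tokenizer.eos_token_id,
        )
        continuation = tokenizer.decode(
            output_ids[0][inputs["input_ids"].shape[1]:],
            skip_special_tokens=True,
        )
        if continuation.strip().startswith(examples[i].strip()):
            extractable += 1
    # Ratio over the examples that have k predecessors in the dataset.
    return extractable / max(len(examples) - k, 1)
```

For closed, API-only models the same loop would call the provider's completion endpoint instead of a local generate call, and a softer matching criterion (e.g. near-duplicate detection) may be needed when the model paraphrases rather than copies.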
2310.18018#10
2310.18018#12
2310.18018
[ "2103.03874" ]
2310.18018#12
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
# 6 Call for action We want to encourage the NLP community to: (1) develop automatic or semi-automatic measures to detect when data from a benchmark was exposed to a model; (2) build a registry of data contamination cases, including the evidence for the contamination; (3) encourage authors to use the previous tools to ensure that the experimental protocol avoids data contamination to the extent possible; and (4) address data contamination issues during peer review and, in the case of published works, devise mechanisms to flag those works with the relevant evidence of data contamination and how it affects their conclusions. As the problem affects our entire field, we also want to encourage the community to participate in workshops related to this topic, such as the 1st Workshop on Data Contamination2. We believe that the ideas arising from this community will play an important role in future NLP evaluations.
2310.18018#11
2310.18018#13
2310.18018
[ "2103.03874" ]
2310.18018#13
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
# 7 Limitations In this paper, we address the problem of data contamination that occurs when evaluating LLMs on standard academic benchmarks. We are aware that other issues may exist in current evaluations, but they are out of the scope of this position paper. Regarding our proposed solutions, we are aware that these are early-stage solutions and that the proposed effort is challenging; we therefore call for further discussion and research on topics related to this issue. # Acknowledgements This work has been partially supported by the Basque Government (Research group funding IT-1805-22) and the Spanish Government (ILENIA project). Oscar Sainz, Iker García-Ferrero and Julen Etxaniz are supported by doctoral grants from the Basque Government (PRE_2023_2_0137, PRE_2022_2_0208 and PRE_2023_2_0060, respectively).
2310.18018#12
2310.18018#14
2310.18018
[ "2103.03874" ]
2310.18018#14
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
# References Stephen Bach, Victor Sanh, Zheng Xin Yong, Albert Webson, Colin Raffel, Nihal V. Nayak, Abheesht Sharma, Taewoon Kim, M Saiful Bari, Thibault Fevry, Zaid Alyafeai, Manan Dey, Andrea Santilli, Zhiqing Sun, Srulik Ben-david, Canwen Xu, Gunjan Chhablani, Han Wang, Jason Fries, Maged Alshaibani, Shanya Sharma, Urmish Thakker, Khalid Almubarak, Xiangru Tang, Dragomir Radev, Mike Tian-jian Jiang, and Alexander Rush. 2022. PromptSource: An integrated development environment and repository for natural language prompts. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 93–104, Dublin, Ireland.
2310.18018#13
2310.18018#15
2310.18018
[ "2103.03874" ]
2310.18018#15
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
Association for Computational Linguistics. Abeba Birhane, Vinay Uday Prabhu, and Emmanuel Kahembwe. 2021. Multimodal datasets: misogyny, pornography, and malignant stereotypes. 2https://conda-workshop.github.io Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020.
2310.18018#14
2310.18018#16
2310.18018
[ "2103.03874" ]
2310.18018#16
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
Language models are few-shot learners. Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramer, and Chiyuan Zhang. 2023. Quantifying memorization across neural language models. In The Eleventh International Conference on Learning Representations. Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Úlfar Erlingsson, Alina Oprea, and Colin Raffel. 2021.
2310.18018#15
2310.18018#17
2310.18018
[ "2103.03874" ]
2310.18018#17
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
Ex- tracting training data from large language models. In 30th USENIX Security Symposium (USENIX Security 21), pages 2633â 2650. USENIX Association. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Ka- plan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sas- try, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cum- mings, Matthias Plappert, Fotios Chantzis, Eliza- beth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. 2021.
2310.18018#16
2310.18018#18
2310.18018
[ "2103.03874" ]
2310.18018#18
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
Evaluating large language models trained on code. Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An open- source chatbot impressing gpt-4 with 90%* chatgpt quality.
2310.18018#17
2310.18018#19
2310.18018
[ "2103.03874" ]
2310.18018#19
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168. Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018.
2310.18018#18
2310.18018#20
2310.18018
[ "2103.03874" ]
2310.18018#20
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
XNLI: Evaluating cross-lingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2475–2485, Brussels, Belgium. Association for Computational Linguistics. Jesse Dodge, Maarten Sap, Ana Marasović, William Agnew, Gabriel Ilharco, Dirk Groeneveld, Margaret Mitchell, and Matt Gardner. 2021. Documenting large webtext corpora: A case study on the colossal clean crawled corpus. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1286–
2310.18018#19
2310.18018#21
2310.18018
[ "2103.03874" ]
2310.18018#21
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
1305, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Nan Du, Yanping Huang, Andrew M. Dai, Simon Tong, Dmitry Lepikhin, Yuanzhong Xu, Maxim Krikun, Yanqi Zhou, Adams Wei Yu, Orhan Firat, Barret Zoph, Liam Fedus, Maarten Bosma, Zong- wei Zhou, Tao Wang, Yu Emma Wang, Kellie Web- ster, Marie Pellat, Kevin Robinson, Kathy Meier- Hellstern, Toju Duke, Lucas Dixon, Kun Zhang, Quoc V. Le, Yonghui Wu, Zhifeng Chen, and Claire Cui. 2021.
2310.18018#20
2310.18018#22
2310.18018
[ "2103.03874" ]
2310.18018#22
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
Glam: Efficient scaling of language mod- els with mixture-of-experts. CoRR, abs/2112.06905. Aparna Elangovan, Jiayuan He, and Karin Verspoor. 2021. Memorization vs. generalization : Quantify- ing data leakage in NLP performance evaluation. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Lin- guistics: Main Volume, pages 1325â 1335, Online. Association for Computational Linguistics. Besnik Fetahu, Sudipta Kar, Zhiyu Chen, Oleg Rokhlenko, and Shervin Malmasi. 2023. SemEval- 2023 Task 2: Fine-grained Multilingual Named En- tity Recognition (MultiCoNER 2). In Proceedings of the 17th International Workshop on Semantic Evalua- tion (SemEval-2023). Association for Computational Linguistics. Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A. Smith. 2020. RealToxi- cityPrompts: Evaluating neural toxic degeneration in language models. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3356â 3369, Online. Association for Computational Linguistics. Sophie Groenwold, Lily Ou, Aesha Parekh, Samhita Honnavalli, Sharon Levy, Diba Mirza, and William Yang Wang. 2020. Investigating African- American Vernacular English in transformer-based In Proceedings of the 2020 Con- text generation. ference on Empirical Methods in Natural Language Processing (EMNLP), pages 5877â 5883, Online. As- sociation for Computational Linguistics. Yuxian Gu, Li Dong, Furu Wei, and Minlie Huang. 2023. Knowledge distillation of large language models. Ridong Han, Tao Peng, Chaohao Yang, Benyou Wang, Lu Liu, and Xiang Wan. 2023. Is information extrac- tion solved by chatgpt? an analysis of performance, evaluation criteria, robustness and errors.
2310.18018#21
2310.18018#23
2310.18018
[ "2103.03874" ]
2310.18018#23
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Ja- cob Steinhardt. 2021. Measuring mathematical prob- lem solving with the math dataset. arXiv preprint arXiv:2103.03874. Tom Henighan, Jared Kaplan, Mor Katz, Mark Chen, Christopher Hesse, Jacob Jackson, Heewoo Jun, Tom B. Brown, Prafulla Dhariwal, Scott Gray, Chris Hallacy, Benjamin Mann, Alec Radford, Aditya Ramesh, Nick Ryder, Daniel M. Ziegler, John Schul- man, Dario Amodei, and Sam McCandlish. 2020.
2310.18018#22
2310.18018#24
2310.18018
[ "2103.03874" ]
2310.18018#24
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
Scaling laws for autoregressive generative modeling. Karl Moritz Hermann, Tomás Kociský, Edward Grefen- stette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In NIPS, pages 1693â 1701. Or Honovich, Thomas Scialom, Omer Levy, and Timo Schick. 2022.
2310.18018#23
2310.18018#25
2310.18018
[ "2103.03874" ]
2310.18018#25
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
Unnatural instructions: Tuning language models with (almost) no human labor. arXiv preprint arXiv:2212.09689. Srinivasan Iyer, Xi Victoria Lin, Ramakanth Pasunuru, Todor Mihaylov, Daniel Simig, Ping Yu, Kurt Shuster, Tianlu Wang, Qing Liu, Punit Singh Koura, Xian Li, Brian O'Horo, Gabriel Pereyra, Jeff Wang, Christopher Dewan, Asli Celikyilmaz, Luke Zettlemoyer, and Ves Stoyanov. 2023. Opt-iml: Scaling language model instruction meta learning through the lens of generalization. Alon Jacovi, Avi Caciularu, Omer Goldman, and Yoav Goldberg. 2023.
2310.18018#24
2310.18018#26
2310.18018
[ "2103.03874" ]
2310.18018#26
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
Stop uploading test data in plain text: Practical strategies for mitigating data contami- nation by evaluation benchmarks. Wenxiang Jiao, Wenxuan Wang, Jen tse Huang, Xing Wang, and Zhaopeng Tu. 2023. Is chatgpt a good translator? yes with gpt-4 as the engine. Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020.
2310.18018#25
2310.18018#27
2310.18018
[ "2103.03874" ]
2310.18018#27
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
Scaling laws for neural language models. Julia Kreutzer, Isaac Caswell, Lisa Wang, Ahsan Wahab, Daan van Esch, Nasanbayar Ulzii-Orshikh, Allah- sera Tapo, Nishant Subramani, Artem Sokolov, Clay- tone Sikasote, Monang Setyawan, Supheakmungkol Sarin, Sokhar Samb, Benoît Sagot, Clara Rivera, An- nette Rios, Isabel Papadimitriou, Salomey Osei, Pe- dro Ortiz Suarez, Iroro Orife, Kelechi Ogueji, An- dre Niyongabo Rubungo, Toan Q. Nguyen, Math- ias Müller, André Müller, Shamsuddeen Hassan Muhammad, Nanda Muhammad, Ayanda Mnyak- eni, Jamshidbek Mirzakhalov, Tapiwanashe Matan- gira, Colin Leong, Nze Lawson, Sneha Kudugunta, Yacine Jernite, Mathias Jenny, Orhan Firat, Bonaven- ture F.
2310.18018#26
2310.18018#28
2310.18018
[ "2103.03874" ]
2310.18018#28
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
P. Dossou, Sakhile Dlamini, Nisansa de Silva, Sakine Çabuk Ballı, Stella Biderman, Alessia Battisti, Ahmed Baruwa, Ankur Bapna, Pallavi Baljekar, Israel Abebe Azime, Ayodele Awokoya, Duygu Ataman, Orevaoghene Ahia, Oghenefego Ahia, Sweta Agrawal, and Mofetoluwa Adeyemi. 2022.
2310.18018#27
2310.18018#29
2310.18018
[ "2103.03874" ]
2310.18018#29
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
Quality at a glance: An audit of web-crawled multilingual datasets. Transactions of the Association for Computational Linguistics, 10:50–72. Katherine Lee, Daphne Ippolito, Andrew Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, and Nicholas Carlini. 2022. Deduplicating training data makes language models better. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8424–8445, Dublin, Ireland. Association for Computational Linguistics. Bo Li, Gexiang Fang, Yang Yang, Quansen Wang, Wei Ye, Wen Zhao, and Shikun Zhang. 2023a. Evaluating chatgpt's information extraction capabilities: An assessment of performance, explainability, calibration, and faithfulness. Peng Li, Tianxiang Sun, Qiong Tang, Hang Yan, Yuanbin Wu, Xuanjing Huang, and Xipeng Qiu. 2023b.
2310.18018#28
2310.18018#30
2310.18018
[ "2103.03874" ]
2310.18018#30
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
Codeie: Large code generation models are better few- shot information extractors. In Proceedings of the 61th Annual Meeting of the Association for Computa- tional Linguistics (Volume 1: Long Papers), Toronto, Canada. Association for Computational Linguistics. Tao Li, Daniel Khashabi, Tushar Khot, Ashish Sab- harwal, and Vivek Srikumar. 2020. UNQOVERing stereotyping biases via underspecified questions. In Findings of the Association for Computational Lin- guistics: EMNLP 2020, pages 3475â 3489, Online. Association for Computational Linguistics.
2310.18018#29
2310.18018#31
2310.18018
[ "2103.03874" ]
2310.18018#31
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V. Le, Barret Zoph, Jason Wei, and Adam Roberts. 2023. The flan collection: Designing data and methods for effective instruction tuning. Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xi- ubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, and Daxin Jiang. 2023. Wizardcoder: Empowering code large language models with evol- instruct.
2310.18018#30
2310.18018#32
2310.18018
[ "2103.03874" ]
2310.18018#32
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA. Association for Computational Linguistics. Inbal Magar and Roy Schwartz. 2022. Data contamination: From memorization to exploitation.
2310.18018#31
2310.18018#33
2310.18018
[ "2103.03874" ]
2310.18018#33
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 157–165, Dublin, Ireland. Association for Computational Linguistics. Marc Marone and Benjamin Van Durme. 2023. Data portraits: Recording foundation model training data. Bonan Min, Hayley Ross, Elior Sulem, Amir Pouran Ben Veyseh, Thien Huu Nguyen, Oscar Sainz, Eneko Agirre, Ilana Heinz, and Dan Roth. 2021. Recent advances in natural language processing via large pre-trained language models:
2310.18018#32
2310.18018#34
2310.18018
[ "2103.03874" ]
2310.18018#34
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
A survey. OpenAI. 2023. Gpt-4 technical report. Aleksandra Piktus, Christopher Akiki, Paulo Villegas, Hugo Laurençon, Gérard Dupont, Alexandra Sasha Luccioni, Yacine Jernite, and Anna Rogers. 2023. The roots search tool: Data transparency for llms. Colin Raffel, Noam Shazeer, Adam Roberts, Kather- ine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020.
2310.18018#33
2310.18018#35
2310.18018
[ "2103.03874" ]
2310.18018#35
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1â 67. Oscar Sainz, Jon Ander Campos, Iker Garcà a-Ferrero, Julen Etxaniz, and Eneko Agirre. 2023. Did chatgpt cheat on your test? Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz, Akshat Agarwal, Alethea Power, Alex Ray, Alex Warstadt, Alexander W. Kocurek, Ali Safaya, Ali Tazarv, Alice Xiang, Alicia Par- rish, Allen Nie, Aman Hussain, Amanda Askell, Amanda Dsouza, Ambrose Slone, Ameet Rahane, Anantharaman S. Iyer, Anders Andreassen, Andrea Madotto, Andrea Santilli, Andreas Stuhlmüller, An- drew Dai, Andrew La, Andrew Lampinen, Andy Zou, Angela Jiang, Angelica Chen, Anh Vuong, Animesh Gupta, Anna Gottardi, Antonio Norelli, Anu Venkatesh, Arash Gholamidavoodi, Arfa Tabas- sum, Arul Menezes, Arun Kirubarajan, Asher Mul- lokandov, Ashish Sabharwal, Austin Herrick, Avia Efrat, Aykut Erdem, Ayla Karaka¸s, B. Ryan Roberts, Bao Sheng Loe, Barret Zoph, BartÅ omiej Bojanowski, Batuhan à zyurt, Behnam Hedayatnia, Behnam Neyshabur, Benjamin Inden, Benno Stein, Berk Ekmekci, Bill Yuchen Lin, Blake Howald, Bryan Orinion, Cameron Diao, Cameron Dour, Cather- ine Stinson, Cedrick Argueta, César Ferri RamÃ
2310.18018#34
2310.18018#36
2310.18018
[ "2103.03874" ]
2310.18018#36
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
rez, Chandan Singh, Charles Rathkopf, Chenlin Meng, Chitta Baral, Chiyu Wu, Chris Callison-Burch, Chris Waites, Christian Voigt, Christopher D. Manning, Christopher Potts, Cindy Ramirez, Clara E. Rivera, Clemencia Siro, Colin Raffel, Courtney Ashcraft, Cristina Garbacea, Damien Sileo, Dan Garrette, Dan Hendrycks, Dan Kilman, Dan Roth, Daniel Free- man, Daniel Khashabi, Daniel Levy, Daniel Moseguà González, Danielle Perszyk, Danny Hernandez, Danqi Chen, Daphne Ippolito, Dar Gilboa, David Do- han, David Drakard, David Jurgens, Debajyoti Datta, Deep Ganguli, Denis Emelin, Denis Kleyko, Deniz Yuret, Derek Chen, Derek Tam, Dieuwke Hupkes, Diganta Misra, Dilyar Buzan, Dimitri Coelho Mollo, Diyi Yang, Dong-Ho Lee, Dylan Schrader, Ekaterina Shutova, Ekin Dogus Cubuk, Elad Segal, Eleanor Hagerman, Elizabeth Barnes, Elizabeth Donoway, El- lie Pavlick, Emanuele Rodola, Emma Lam, Eric Chu, Eric Tang, Erkut Erdem, Ernie Chang, Ethan A. Chi, Ethan Dyer, Ethan Jerzak, Ethan Kim, Eunice En- gefu Manyasi, Evgenii Zheltonozhskii, Fanyue Xia, Fatemeh Siar, Fernando MartÃ
2310.18018#35
2310.18018#37
2310.18018
[ "2103.03874" ]
2310.18018#37
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
nez-Plumed, Francesca Happé, Francois Chollet, Frieda Rong, Gaurav Mishra, Genta Indra Winata, Gerard de Melo, Ger- mán Kruszewski, Giambattista Parascandolo, Gior- gio Mariani, Gloria Wang, Gonzalo Jaimovitch- López, Gregor Betz, Guy Gur-Ari, Hana Galijase- vic, Hannah Kim, Hannah Rashkin, Hannaneh Ha- jishirzi, Harsh Mehta, Hayden Bogar, Henry Shevlin, Hinrich Schütze, Hiromu Yakura, Hongming Zhang, Hugh Mee Wong, Ian Ng, Isaac Noble, Jaap Jumelet, Jack Geissinger, Jackson Kernion, Jacob Hilton, Jae- hoon Lee, Jaime Fernández Fisac, James B. Simon, James Koppel, James Zheng, James Zou, Jan Koco´n, Jana Thompson, Janelle Wingfield, Jared Kaplan, Jarema Radom, Jascha Sohl-Dickstein, Jason Phang, Jason Wei, Jason Yosinski, Jekaterina Novikova, Jelle Bosscher, Jennifer Marsh, Jeremy Kim, Jeroen Taal, Jesse Engel, Jesujoba Alabi, Jiacheng Xu, Ji- aming Song, Jillian Tang, Joan Waweru, John Bur- den, John Miller, John U. Balis, Jonathan Batchelder, Jonathan Berant, Jörg Frohberg, Jos Rozen, Jose Hernandez-Orallo, Joseph Boudeman, Joseph Guerr, Joseph Jones, Joshua B. Tenenbaum, Joshua S. Rule, Joyce Chua, Kamil Kanclerz, Karen Livescu, Karl Krauth, Karthik Gopalakrishnan, Katerina Ignatyeva, Katja Markert, Kaustubh D.
2310.18018#36
2310.18018#38
2310.18018
[ "2103.03874" ]
2310.18018#38
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
Dhole, Kevin Gim- pel, Kevin Omondi, Kory Mathewson, Kristen Chi- afullo, Ksenia Shkaruta, Kumar Shridhar, Kyle Mc- Donell, Kyle Richardson, Laria Reynolds, Leo Gao, Li Zhang, Liam Dugan, Lianhui Qin, Lidia Contreras- Ochando, Louis-Philippe Morency, Luca Moschella, Lucas Lam, Lucy Noble, Ludwig Schmidt, Luheng He, Luis Oliveros Colón, Luke Metz, Lütfi Kerem ¸Senel, Maarten Bosma, Maarten Sap, Maartje ter Hoeve, Maheen Farooqi, Manaal Faruqui, Mantas Mazeika, Marco Baturan, Marco Marelli, Marco Maru, Maria Jose Ramà rez Quintana, Marie Tolkiehn, Mario Giulianelli, Martha Lewis, Martin Potthast, Matthew L. Leavitt, Matthias Hagen, Mátyás Schu- bert, Medina Orduna Baitemirova, Melody Arnaud, Melvin McElrath, Michael A. Yee, Michael Co- hen, Michael Gu, Michael Ivanitskiy, Michael Star- ritt, Michael Strube, MichaÅ
2310.18018#37
2310.18018#39
2310.18018
[ "2103.03874" ]
2310.18018#39
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
SwË edrowski, Michele Bevilacqua, Michihiro Yasunaga, Mihir Kale, Mike Cain, Mimee Xu, Mirac Suzgun, Mitch Walker, Mo Tiwari, Mohit Bansal, Moin Aminnaseri, Mor Geva, Mozhdeh Gheini, Mukund Varma T, Nanyun Peng, Nathan A. Chi, Nayeon Lee, Neta Gur-Ari Krakover, Nicholas Cameron, Nicholas Roberts, Nick Doiron, Nicole Martinez, Nikita Nangia, Niklas Deckers, Niklas Muennighoff, Nitish Shirish Keskar, Niveditha S. Iyer, Noah Constant, Noah Fiedel, Nuan Wen, Oliver Zhang, Omar Agha, Omar Elbaghdadi, Omer Levy, Owain Evans, Pablo Antonio Moreno Casares, Parth Doshi, Pascale Fung, Paul Pu Liang, Paul Vicol, Pegah Alipoormolabashi, Peiyuan Liao, Percy Liang, Peter Chang, Peter Eckersley, Phu Mon Htut, Pinyu Hwang, Piotr MiÅ kowski, Piyush Patil, Pouya Pezeshkpour, Priti Oli, Qiaozhu Mei, Qing Lyu, Qinlang Chen, Rabin Banjade, Rachel Etta Rudolph, Raefer Gabriel, Rahel Habacker, Ramon Risco, Raphaël Millière, Rhythm Garg, Richard Barnes, Rif A. Saurous, Riku Arakawa, Robbe Raymaekers, Robert Frank, Rohan Sikand, Roman Novak, Roman Sitelew, Ronan LeBras, Rosanne Liu, Rowan Jacobs, Rui Zhang, Ruslan Salakhut- dinov, Ryan Chi, Ryan Lee, Ryan Stovall, Ryan Teehan, Rylan Yang, Sahib Singh, Saif M. Moham- mad, Sajant Anand, Sam Dillavou, Sam Shleifer, Sam Wiseman, Samuel Gruetter, Samuel R. Bow- man, Samuel S. Schoenholz, Sanghyun Han, San- jeev Kwatra, Sarah A.
2310.18018#38
2310.18018#40
2310.18018
[ "2103.03874" ]
2310.18018#40
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
Rous, Sarik Ghazarian, Sayan Ghosh, Sean Casey, Sebastian Bischoff, Sebastian Gehrmann, Sebastian Schuster, Sepideh Sadeghi, Shadi Hamdan, Sharon Zhou, Shashank Srivastava, Sherry Shi, Shikhar Singh, Shima Asaadi, Shixi- ang Shane Gu, Shubh Pachchigar, Shubham Tosh- niwal, Shyam Upadhyay, Shyamolima, Debnath, Siamak Shakeri, Simon Thormeyer, Simone Melzi, Siva Reddy, Sneha Priscilla Makini, Soo-Hwan Lee, Spencer Torene, Sriharsha Hatwar, Stanislas De- haene, Stefan Divic, Stefano Ermon, Stella Bider- man, Stephanie Lin, Stephen Prasad, Steven T. Pi- antadosi, Stuart M.
2310.18018#39
2310.18018#41
2310.18018
[ "2103.03874" ]
2310.18018#41
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
Shieber, Summer Misherghi, Svet- lana Kiritchenko, Swaroop Mishra, Tal Linzen, Tal Schuster, Tao Li, Tao Yu, Tariq Ali, Tatsu Hashimoto, Te-Lin Wu, Théo Desbordes, Theodore Rothschild, Thomas Phan, Tianle Wang, Tiberius Nkinyili, Timo Schick, Timofei Kornev, Titus Tunduny, Tobias Ger- stenberg, Trenton Chang, Trishala Neeraj, Tushar Khot, Tyler Shultz, Uri Shaham, Vedant Misra, Vera Demberg, Victoria Nyamai, Vikas Raunak, Vinay Ramasesh, Vinay Uday Prabhu, Vishakh Padmaku- mar, Vivek Srikumar, William Fedus, William Saun- ders, William Zhang, Wout Vossen, Xiang Ren, Xi- aoyu Tong, Xinran Zhao, Xinyi Wu, Xudong Shen, Yadollah Yaghoobzadeh, Yair Lakretz, Yangqiu Song, Yasaman Bahri, Yejin Choi, Yichi Yang, Yiding Hao, Yifu Chen, Yonatan Belinkov, Yu Hou, Yufang Hou, Yuntao Bai, Zachary Seid, Zhuoye Zhao, Zi- jian Wang, Zijie J. Wang, Zirui Wang, and Ziyi Wu. 2023. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023.
2310.18018#40
2310.18018#42
2310.18018
[ "2103.03874" ]
2310.18018#42
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142–147. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023a.
2310.18018#41
2310.18018#43
2310.18018
[ "2103.03874" ]