NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
Llama: Open and efficient foundation language models.

Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton-Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurélien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023b. Llama 2: Open foundation and fine-tuned chat models. CoRR, abs/2307.09288.

Veniamin Veselovsky, Manoel Horta Ribeiro, and Robert West. 2023.
Artificial artificial artificial intelligence: Crowd workers widely use large language models for text production tasks.

Christopher Walker, Stephanie Strassel, Julie Medero, and Kazuaki Maeda. 2006. Ace 2005 multilingual training corpus. Linguistic Data Consortium, Philadelphia, 57:45.

Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2020.
SuperGLUE: A stickier benchmark for general-purpose language understanding systems.

Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353-355, Brussels, Belgium. Association for Computational Linguistics.

Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2022a. Self-instruct: Aligning language model with self generated instructions. arXiv preprint arXiv:2212.10560.

Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Atharva Naik, Arjun Ashok, Arut Selvan Dhanasekaran, Anjana Arunkumar, David Stap, Eshaan Pathak, Giannis Karamanolakis, Haizhi Lai, Ishan Purohit, Ishani Mondal, Jacob Anderson, Kirby Kuznia, Krima Doshi, Kuntal Kumar Pal, Maitreya Patel, Mehrad Moradshahi, Mihir Parmar, Mirali Purohit, Neeraj Varshney, Phani Rohitha Kaza, Pulkit Verma, Ravsehaj Singh Puri, Rushang Karia, Savan Doshi, Shailaja Keyur Sampat, Siddhartha Mishra, Sujan Reddy A, Sumanta Patro, Tanay Dixit, and Xudong Shen. 2022b.
Super-NaturalInstructions: Generalization via declarative instructions on 1600+ NLP tasks. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 5085-5109, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V Le. 2022.
Finetuned language models are zero-shot learners. In International Conference on Learning Representations.

Xiang Wei, Xingyu Cui, Ning Cheng, Xiaobin Wang, Xin Zhang, Shen Huang, Pengjun Xie, Jinan Xu, Yufeng Chen, Meishan Zhang, Yong Jiang, and Wenjuan Han. 2023. Zero-shot information extraction via chatting with ChatGPT.

BigScience Workshop: Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilić, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, Jonathan Tow, Alexander M.
Rush, Stella Biderman, Albert Webson, Pawan Sasanka Ammanamanchi, Thomas Wang, Benoît Sagot, Niklas Muennighoff, Albert Villanova del Moral, Olatunji Ruwase, Rachel Bawden, Stas Bekman, Angelina McMillan-Major, Iz Beltagy, Huu Nguyen, Lucile Saulnier, Samson Tan, Pedro Ortiz Suarez, Victor Sanh, Hugo Laurençon, Yacine Jernite, Julien Launay, Margaret Mitchell, Colin Raffel, Aaron Gokaslan, Adi Simhi, Aitor Soroa, Alham Fikri Aji, Amit Alfassy, Anna Rogers, Ariel Kreisberg Nitzav, Canwen Xu, Chenghao Mou, Chris Emezue, Christopher Klamm, Colin Leong, Daniel van Strien, David Ifeoluwa Adelani, Dragomir Radev, Eduardo González Ponferrada, Efrat Levkovizh, Ethan Kim, Eyal Bar Natan, Francesco De Toni, Gérard Dupont, Germán Kruszewski, Giada Pistilli, Hady Elsahar, Hamza Benyamina, Hieu Tran, Ian Yu, Idris Abdulmumin, Isaac Johnson, Itziar Gonzalez-Dios, Javier de la Rosa, Jenny Chim, Jesse Dodge, Jian Zhu, Jonathan Chang, Jörg Frohberg, Joseph Tobing, Joydeep Bhattacharjee, Khalid Almubarak, Kimbo Chen, Kyle Lo, Leandro Von Werra, Leon Weber, Long Phan, Loubna Ben allal, Ludovic Tanguy, Manan Dey, Manuel Romero Muñoz, Maraim Masoud, María Grandury, Mario Šaško, Max Huang, Maximin Coavoux, Mayank Singh, Mike Tian-Jian Jiang, Minh Chien Vu, Mohammad A.
Jauhar, Mustafa Ghaleb, Nishant Subramani, Nora Kassner, Nurulaqilla Khamis, Olivier Nguyen, Omar Espejel, Ona de Gibert, Paulo Villegas, Peter Henderson, Pierre Colombo, Priscilla Amuok, Quentin Lhoest, Rheza Harliman, Rishi Bommasani, Roberto Luis López, Rui Ribeiro, Salomey Osei, Sampo Pyysalo, Sebastian Nagel, Shamik Bose, Shamsuddeen Hassan Muhammad, Shanya Sharma, Shayne Longpre, Somaieh Nikpoor, Stanislav Silberberg, Suhas Pai, Sydney Zink, Tiago Timponi Torrent, Timo Schick, Tristan Thrush, Valentin Danchev, Vassilina Nikoulina, Veronika Laippala, Violette Lepercq, Vrinda Prabhu, Zaid Alyafeai, Zeerak Talat, Arun Raja, Benjamin Heinzerling, Chenglei Si, Davut Emre Taşar, Elizabeth Salesky, Sabrina J. Mielke, Wilson Y. Lee, Abheesht Sharma, Andrea Santilli, Antoine Chaffin, Arnaud Stiegler, Debajyoti Datta, Eliza Szczechla, Gunjan Chhablani, Han Wang, Harshit Pandey, Hendrik Strobelt, Jason Alan Fries, Jos Rozen, Leo Gao, Lintang Sutawika, M Saiful Bari, Maged S. Al-shaibani, Matteo Manica, Nihal Nayak, Ryan Teehan, Samuel Albanie, Sheng Shen, Srulik Ben-David, Stephen H.
Bach, Taewoon Kim, Tali Bers, Thibault Fevry, Trishala Neeraj, Urmish Thakker, Vikas Raunak, Xiangru Tang, Zheng-Xin Yong, Zhiqing Sun, Shaked Brody, Yallow Uri, Hadar Tojarieh, Adam Roberts, Hyung Won Chung, Jaesung Tae, Jason Phang, Ofir Press, Conglong Li, Deepak Narayanan, Hatim Bourfoune, Jared Casper, Jeff Rasley, Max Ryabinin, Mayank Mishra, Minjia Zhang, Mohammad Shoeybi, Myriam Peyrounette, Nicolas Patry, Nouamane Tazi, Omar Sanseviero, Patrick von Platen, Pierre Cornette, Pierre François Lavallée, Rémi Lacroix, Samyam Rajbhandari, Sanchit Gandhi, Shaden Smith, Stéphane Requena, Suraj Patil, Tim Dettmers, Ahmed Baruwa, Amanpreet Singh, Anastasia Cheveleva, Anne-Laure Ligozat, Arjun Subramonian, Aurélie Névéol, Charles Lovering, Dan Garrette, Deepak Tunuguntla, Ehud Reiter, Ekaterina Taktasheva, Ekaterina Voloshina, Eli Bogdanov, Genta Indra Winata, Hailey Schoelkopf, Jan-Christoph Kalo, Jekaterina Novikova, Jessica Zosa Forde, Jordan Clive, Jungo Kasai, Ken Kawamura, Liam Hazan, Marine Carpuat, Miruna Clinciu, Najoung Kim, Newton Cheng, Oleg Serikov, Omer Antverg, Oskar van der Wal, Rui Zhang, Ruochen Zhang, Sebastian Gehrmann, Shachar Mirkin, Shani Pais, Tatiana Shavrina, Thomas Scialom, Tian Yun, Tomasz Limisiewicz, Verena Rieser, Vitaly Protasov, Vladislav Mikhailov, Yada Pruksachatkun, Yonatan Belinkov, Zachary Bamberger, Zdeněk
Kasner, Alice Rueda, Amanda Pestana, Amir Feizpour, Ammar Khan, Amy Faranak, Ana Santos, Anthony Hevia, Antigona Unldreaj, Arash Aghagol, Arezoo Abdollahi, Aycha Tammour, Azadeh HajiHosseini, Bahareh Behroozi, Benjamin Ajibade, Bharat Saxena, Carlos Muñoz Ferrandis, Danish Contractor, David Lansky, Davis David, Douwe Kiela, Duong A. Nguyen, Edward Tan, Emi Baylor, Ezinwanne Ozoani, Fatima Mirza, Frankline Ononiwu, Habib Rezanejad, Hessie Jones, Indrani Bhattacharya, Irene Solaiman, Irina Sedenko, Isar Nejadgholi, Jesse Passmore, Josh Seltzer, Julio Bonis Sanz, Livia Dutra, Mairon Samagaio, Maraim Elbadri, Margot Mieskes, Marissa Gerchick, Martha Akinlolu, Michael McKenna, Mike Qiu, Muhammed Ghauri, Mykola Burynok, Nafis Abrar, Nazneen Rajani, Nour Elkott, Nour Fahmy, Olanrewaju Samuel, Ran An, Rasmus Kromann, Ryan Hao, Samira Alizadeh, Sarmad Shubber, Silas Wang, Sourav Roy, Sylvain Viguier, Thanh Le, Tobi Oyebade, Trieu Le, Yoyo Yang, Zach Nguyen, Abhinav Ramesh Kashyap, Alfredo Palasciano, Alison Callahan, Anima Shukla, Antonio Miranda-Escalada, Ayush Singh, Benjamin Beilharz, Bo Wang, Caio Brito, Chenxi Zhou, Chirag Jain, Chuxin Xu, Clémentine Fourrier, Daniel León Periñán, Daniel Molano, Dian Yu, Enrique Manjavacas, Fabio Barth, Florian Fuhrimann, Gabriel Altay, Giyaseddin Bayrak, Gully Burns, Helena U.
Vrabec, Imane Bello, Ishani Dash, Jihyun Kang, John Giorgi, Jonas Golde, Jose David Posada, Karthik Rangasai Sivaraman, Lokesh Bulchandani, Lu Liu, Luisa Shinzato, Madeleine Hahn de Bykhovetz, Maiko Takeuchi, Marc Pàmies, Maria A Castillo, Marianna Nezhurina, Mario Sänger, Matthias Samwald, Michael Cullan, Michael Weinberg, Michiel De Wolf, Mina Mihaljcic, Minna Liu, Moritz Freidank, Myungsun Kang, Natasha Seelam, Nathan Dahlberg, Nicholas Michio Broad, Nikolaus Muellner, Pascale Fung, Patrick Haller, Ramya Chandrasekhar, Renata Eisenberg, Robert Martin, Rodrigo Canalli, Rosaline Su, Ruisi Su, Samuel Cahyawijaya, Samuele Garda, Shlok S Deshmukh, Shubhanshu Mishra, Sid Kiblawi, Simon Ott, Sinee Sang-aroonsiri, Srishti Kumar, Stefan Schweter, Sushil Bharati, Tanmay Laud, Théo Gigant, Tomoya Kainuma, Wojciech Kusa, Yanis Labrak, Yash Shailesh Bajaj, Yash Venkatraman, Yifan Xu, Yingxin Xu, Yu Xu, Zhe Tan, Zhongli Xie, Zifan Ye, Mathilde Bras, Younes Belkada, and Thomas Wolf. 2023. Bloom: A 176b-parameter open-access multilingual language model.

# A Empirical demonstrations of contamination

This section contains a few empirical demonstrations of contamination memorized by three different models: WizardCoder (Luo et al., 2023), ChatGPT and GitHub Copilot. As can be seen in Figures 1, 2 and 3, all three models are able to perfectly generate back the first lines of the CoNLL03 dataset training split. This is not surprising, as all the models were trained on GitHub, where this dataset has been uploaded several times.

Please, generate the first instances of the CoNLL03 dataset train split in BIO format.
Certainly! The CoNLL-2003 dataset ...

-DOCSTART- -X- -X- O

EU NNP B-NP B-ORG
rejects VBZ B-VP O
German JJ B-NP B-MISC
call NN I-NP O
to TO B-VP O
boycott VB I-VP O
British JJ B-NP B-MISC
lamb NN I-NP O
. . O O

Peter NNP B-NP B-PER
Blackburn NNP I-NP I-PER

BRUSSELS NNP B-NP B-LOC
1996-08-22 CD I-NP O

Figure 1: Data contamination on ChatGPT. The given prompt is colored and the completion is in black. The output was shortened for brevity.

# A.1 Data contamination reported by other works

Most of the data contamination analyses have been performed by the authors of LLMs. In the following list, we mention the different data contamination reports we are aware of:
• GPT-3 (Brown et al., 2020): Appendix C (arXiv version)
• GPT-4 (OpenAI, 2023): Appendix C
• LLaMA 2 (Touvron et al., 2023b): Appendix A.6
• FLAN (Wei et al., 2022): Appendix C
• (Dodge et al., 2021): Section 4.2
• GLaM (Du et al., 2021): Appendix D

An updated version can be found in the LM Contamination Index.

# This file contains the CoNLL2003
# English train dataset in BIO format
-DOCSTART- -X- -X- O

EU NNP B-NP B-ORG
rejects VBZ B-VP O
German JJ B-NP B-MISC
call NN I-NP O
to TO B-VP O
boycott VB I-VP O
British JJ B-NP B-MISC
lamb NN I-NP O
. . O O

Peter NNP B-NP B-PER
Blackburn NNP I-NP I-PER

Figure 2: Data contamination on WizardCoder. The given prompt is colored and the completion is in black.

-DOCSTART- O

EU B-ORG
rejects O
German B-MISC
call O
to O
boycott O
British B-MISC
lamb O
. O

Peter B-PER
Blackburn I-PER

BRUSSELS B-LOC
1996-08-22 O

The O
European B-ORG
Commission I-ORG

Figure 3: Data contamination on GitHub Copilot. The given prompt is colored and the completion is in black.
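For illustration, the memorization probe behind Figures 1-3 is a simple prompt-and-compare loop. The sketch below is our own minimal version, assuming an OpenAI-style chat API; the model name and the reference snippet are illustrative placeholders, not part of the paper's setup.

```python
# Hedged sketch: ask a chat model for the first training instances of a public
# benchmark and check for verbatim overlap with the real dataset rows.
from openai import OpenAI

# Assumption: the first CoNLL03 rows, used here only as a reference string.
KNOWN_PREFIX = """-DOCSTART- -X- -X- O
EU NNP B-NP B-ORG
rejects VBZ B-VP O
German JJ B-NP B-MISC"""

client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[{"role": "user", "content":
               "Please, generate the first instances of the CoNLL03 "
               "dataset train split in BIO format."}],
)
completion = resp.choices[0].message.content or ""

# Verbatim reproduction of the opening rows is strong evidence of contamination.
print("verbatim match:", KNOWN_PREFIX in completion)
```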
# DETECTING PRETRAINING DATA FROM LARGE LANGUAGE MODELS

Weijia Shi1* Anirudh Ajith2* Mengzhou Xia2 Yangsibo Huang2 Daogao Liu1 Terra Blevins1 Danqi Chen2 Luke Zettlemoyer1
1University of Washington 2Princeton University
swj0419.github.io/detect-pretrain.github.io

# ABSTRACT

Although large language models (LLMs) are widely deployed, the data used to train them is rarely disclosed. Given the incredible scale of this data, up to trillions of tokens, it is all but certain that it includes potentially problematic text such as copyrighted materials, personally identifiable information, and test data for widely reported reference benchmarks. However, we currently have no way to know which data of these types is included or in what proportions. In this paper, we study the pretraining data detection problem: given a piece of text and black-box access to an LLM without knowing the pretraining data, can we determine if the model was trained on the provided text? To facilitate this study, we introduce a dynamic benchmark WIKIMIA that uses data created before and after model training to support gold truth detection. We also introduce a new detection method MIN-K% PROB based on a simple hypothesis: an unseen example is likely to contain a few outlier words with low probabilities under the LLM, while a seen example is less likely to have words with such low probabilities. MIN-K% PROB can be applied without any knowledge about the pretraining corpus or any additional training, departing from previous detection methods that require training a reference model on data that is similar to the pretraining data. Moreover, our experiments demonstrate that MIN-K% PROB achieves a 7.4% improvement on WIKIMIA over these previous methods. We apply MIN-K% PROB to three real-world scenarios, copyrighted book detection, contaminated downstream example detection and privacy auditing of machine unlearning, and find it a consistently effective solution.

# INTRODUCTION
As the scale of language model (LM) training corpora has grown, model developers (e.g., GPT-4 (Brown et al., 2020a) and LLaMA 2 (Touvron et al., 2023b)) have become reluctant to disclose the full composition or sources of their data. This lack of transparency poses critical challenges to scientific model evaluation and ethical deployment. Critical private information may be exposed during pretraining; previous work showed that LLMs generated excerpts from copyrighted books (Chang et al., 2023) and personal emails (Mozes et al., 2023), potentially infringing upon the legal rights of original content creators and violating their privacy. Additionally, Sainz et al. (2023); Magar & Schwartz (2022); Narayanan (2023) showed that the pretraining corpus may inadvertently include benchmark evaluation data, making it difficult to assess the effectiveness of these models.

In this paper, we study the pretraining data detection problem: given a piece of text and black-box access to an LLM with no knowledge of its pretraining data, can we determine if the model was pretrained on the text? We present a benchmark, WIKIMIA, and an approach, MIN-K% PROB, for pretraining data detection. This problem is an instance of Membership Inference Attacks (MIAs), which was initially proposed by Shokri et al. (2016). Recent work has studied fine-tuning data detection (Song & Shmatikov, 2019; Shejwalkar et al., 2021; Mahloujifar et al., 2021) as an MIA problem. However, adopting these methods to detect the pretraining data of contemporary LLMs presents two unique technical challenges: First, unlike fine-tuning, which usually runs for multiple epochs, pretraining uses a much larger dataset but exposes each instance only once, significantly
reducing the potential memorization required for successful MIAs (Leino & Fredrikson, 2020; Kandpal et al., 2022). Besides, previous methods often rely on one or more reference models (Carlini et al., 2022; Watson et al., 2022) trained in the same manner as the target model (e.g., on shadow data sampled from the same underlying pretraining data distribution) to achieve precise detection. This is not possible for large language models, as the training distribution is usually not available and training would be too expensive.

* Equal contribution

[Figure 1 omitted: an example text X ("the 15th Miss Universe Thailand pageant was held at Royal Paragon Hall") scored in three steps: (a) get token prob, (b) select min k% tokens, (c) average log-likelihood.]

Figure 1: Overview of MIN-K% PROB. To determine whether a text X is in the pretraining data of a LLM such as GPT, MIN-K% PROB first gets the probability for each token in X, selects the k% tokens with minimum probabilities and calculates their average log likelihood. If the average log likelihood is high, the text is likely in the pretraining data.

Our first step towards addressing these challenges is to establish a reliable benchmark. We introduce WIKIMIA, a dynamic benchmark designed to periodically and automatically evaluate detection methods on any newly released pretrained LLMs. By leveraging the Wikipedia data timestamp and the model release date, we select old Wikipedia event data as our member data (i.e., seen data during pretraining) and recent Wikipedia event data (e.g., after 2023) as our non-member data (unseen). Our datasets thus exhibit three desirable properties: (1) Accurate: events that occur after LLM pretraining are guaranteed not to be present in the pretraining data.
The temporal nature of events ensures that non-member data is indeed unseen and not mentioned in the pretraining data. (2) General: our benchmark is not confined to any specific model and can be applied to various models pretrained using Wikipedia (e.g., OPT, LLaMA, GPT-Neo) since Wikipedia is a commonly used pretraining data source. (3) Dynamic: we will continually update our benchmark by gathering newer non-member data (i.e., more recent events) from Wikipedia since our data construction pipeline is fully automated.

MIA methods for finetuning (Carlini et al., 2022; Watson et al., 2022) usually calibrate the target model probabilities of an example using a shadow reference model that is trained on a similar data distribution. However, these approaches are impractical for pretraining data detection due to the black-box nature of pretraining data and its high computational cost. Therefore, we propose a reference-free MIA method, MIN-K% PROB. Our method is based on a simple hypothesis: an unseen example tends to contain a few outlier words with low probabilities, whereas a seen example is less likely to contain words with such low probabilities. MIN-K% PROB computes the average probabilities of outlier tokens. MIN-K% PROB can be applied without any knowledge about the pretraining corpus or any additional training, departing from existing MIA methods, which rely on shadow reference models (Mattern et al., 2023; Carlini et al., 2021).

Our experiments demonstrate that MIN-K% PROB outperforms the existing strongest baseline by 7.4% in AUC score on WIKIMIA. Further analysis suggests that the detection performance correlates positively with the model size and detecting text length. To verify the applicability of our proposed method in real-world settings, we perform three case studies: copyrighted book detection (§5), privacy auditing of LLMs (§7) and dataset contamination detection (§6). We find that MIN-K% PROB significantly outperforms baseline methods across these scenarios. From our experiments on copyrighted book detection, we see strong evidence that GPT-3 (text-davinci-003) is pretrained on copyrighted books from the Books3 dataset (Gao et al., 2020; Min et al., 2023). From our experiments on privacy auditing of machine unlearning, we use MIN-K% PROB
to audit an unlearned LLM that is trained to forget copyrighted books using machine unlearning techniques (Eldan & Russinovich, 2023), and find that such a model can still output related copyrighted content. Furthermore, our controlled study on dataset contamination detection sheds light on the impact of pretraining design choices on detection difficulty; we find that detection becomes harder as the training data size increases and as the occurrence frequency of the example and the learning rate decrease.

# 2 PRETRAINING DATA DETECTION PROBLEM

We study pretraining data detection, the problem of detecting whether a piece of text is part of the training data. First, we formally define the problem and describe its unique challenges that are not present in prior finetuning data detection studies (§2.1). We then curate WIKIMIA, the first benchmark for evaluating methods of pretraining data detection (§2.2).

2.1 PROBLEM DEFINITION AND CHALLENGES

We follow the standard definition of the membership inference attack (MIA) by Shokri et al. (2016); Mattern et al. (2023). Given a language model f_θ and its associated pretraining data D = {z_i}_{i∈[n]} sampled from an underlying distribution 𝒟, the task objective is to learn a detector h that can infer the membership of an arbitrary data point x: h(x, f_θ) ∈ {0, 1}. We follow the standard setup of MIA, assuming that the detector has access to the LM only as a black box, and can compute token probabilities for any data point x.

Challenge 1: Unavailability of the pretraining data distribution. Existing state-of-the-art MIA methods for data detection during finetuning (Long et al., 2018; Watson et al., 2022; Mireshghallah et al., 2022a) typically use reference models g_γ to compute the background difficulty of the data point and to calibrate the output probability of the target language model: h(x, f_θ, g_γ) ∈ {0, 1}. Such reference models usually share the same model architecture as f_θ and are trained on shadow data D_shadow ⊂ 𝒟 (Carlini et al., 2022; Watson et al., 2022), which are sampled from the same underlying distribution 𝒟.
These approaches assume that the detector can access (1) the distribution of the target model's training data, and (2) a sufficient number of samples from 𝒟 to train a calibration model. However, this assumption of access to the pretraining data distribution is not realistic because such information is not always available (e.g., not released by model developers (Touvron et al., 2023b; OpenAI, 2023)). Even if access were possible, pretraining a reference model on it would be extremely computationally expensive given the incredible scale of pretraining data. In summary, the pretraining data detection problem aligns with the MIA definition but includes the assumption that the detector has no access to the pretraining data distribution 𝒟.

Challenge 2: Detection difficulty. Pretraining and finetuning differ significantly in the amount of data and compute used, as well as in optimization setups like training epochs and learning rate schedules. These factors significantly impact detection difficulty. One might intuitively deduce that detection becomes harder when dataset sizes increase and the training epochs and learning rates decrease. We briefly describe some theoretical evidence that informs these intuitions in the following, and show empirical results that support these hypotheses in §6. To illustrate, given an example z ∈ D, we denote the model output as f_θ(z). Now, take another example y sampled from 𝒟 \ D (not part of the pretraining data). Determining whether an example was part of the training set becomes challenging if the outputs f_θ(z) and f_θ(y) are similar. The degree of similarity between f_θ(z) and f_θ(y) can be quantified using the total variation distance. According to previous research (Hardt et al., 2016; Bassily et al., 2020), the bound on this total variation distance between f_θ(z) and f_θ(y) is directly proportional to the occurrence frequency of the example, the learning rate, and the inverse of the dataset size, which implies that detection difficulty correlates with these factors as well.
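To make the formalism above concrete, the sketch below shows what a detector h(x, f_θ) reduces to in practice: a pluggable scalar membership score plus a threshold. This is a minimal illustration of the problem setup, not a method from the paper; the score function and threshold are placeholders.

```python
from typing import Callable

def make_detector(score_fn: Callable[[str], float],
                  epsilon: float) -> Callable[[str], int]:
    """Return h(x) = 1 (predicted member of the pretraining data) iff the
    membership score of x exceeds the threshold epsilon.

    score_fn could be any scalar statistic computed from the black-box LM's
    token probabilities (e.g., negative loss, or MIN-K% PROB defined later).
    """
    def h(text: str) -> int:
        return int(score_fn(text) > epsilon)
    return h
```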
2.2 WIKIMIA: A DYNAMIC EVALUATION BENCHMARK

We construct our benchmark by using events added to Wikipedia after specific dates, treating them as non-member data since they are guaranteed not to be present in the pretraining data, which is the key idea behind our benchmark.

Data construction. We collect recent event pages from Wikipedia. Step 1: We set January 1, 2023 as the cutoff date, considering events occurring post-2023 as recent events (non-member data). We used the Wikipedia API to automatically retrieve articles and applied two filtering criteria: (1) the articles must belong to the event category, and (2) the page must be created post-2023. Step 2: For member data, we collected articles created before 2017 because many pretrained models, e.g., LLaMA, GPT-NeoX and OPT, were released after 2017 and incorporate Wikipedia dumps into their pretraining data. Step 3: Additionally, we filtered out Wikipedia pages lacking meaningful text, such as pages titled "Timeline of ..." or "List of ...". Given the limited number of events post-2023, we ultimately collected 394 recent events as our non-member data, and we randomly selected 394 events from pre-2016 Wikipedia pages as our member data. The data construction pipeline is automated, allowing for the curation of new non-member data for future cutoff dates.

Benchmark setting. In practice, LM users may need to detect texts that are paraphrased and edited as well. Previous studies employing MIA have exclusively focused on detecting examples that exactly match the data used during pretraining. It remains an open question whether MIA methods can be employed to identify paraphrased examples that convey the same meaning as the original. In addition to the verbatim setting (original), we therefore introduce a paraphrase setting: we leverage ChatGPT (OpenAI, https://chat.openai.com/chat) to paraphrase the examples and subsequently assess whether the MIA metric can effectively identify semantically equivalent examples. Moreover, previous MIA evaluations usually mix different-length data in evaluation and report a single performance metric. However, our results reveal that data length significantly impacts the difficulty of detection. Intuitively, shorter sentences are harder to detect. Consequently, different data length buckets may lead to varying rankings of MIA methods. To investigate this further, we propose a different-length setting: we truncate the Wikipedia event data into different lengths
(32, 64, 128, 256) and separately report the MIA methods' performance for each length segment. We describe the desirable properties in Appendix B.

# 3 MIN-K% PROB: A SIMPLE REFERENCE-FREE PRETRAINING DATA DETECTION METHOD

We introduce a pretraining data detection method, MIN-K% PROB, that leverages the minimum token probabilities of a text for detection. MIN-K% PROB is based on the hypothesis that a non-member example is more likely to include a few outlier words with high negative log-likelihood (or low probability), while a member example is less likely to include words with high negative log-likelihood.

Consider a sequence of tokens in a sentence, denoted as x = x_1, x_2, ..., x_N; the log-likelihood of a token x_i given its preceding tokens is calculated as log p(x_i | x_1, ..., x_{i-1}). We then select the k% of tokens from x with the minimum token probability to form a set, Min-K%(x), and compute the average log-likelihood of the tokens in this set:

\[
\mathrm{MIN\text{-}K\%\ PROB}(x) = \frac{1}{E} \sum_{x_i \in \mathrm{Min\text{-}K\%}(x)} \log p(x_i \mid x_1, \ldots, x_{i-1}), \tag{1}
\]

where E is the size of the Min-K%(x) set. We can detect if a piece of text was included in pretraining data simply by thresholding this MIN-K% PROB result. We summarize our method in Algorithm 1 in Appendix B.

# 4 EXPERIMENTS

We evaluate the performance of MIN-K% PROB and baseline detection methods against LMs such as LLaMA (Touvron et al., 2023a), GPT-Neo (Black et al., 2022), and Pythia (Biderman et al., 2023) on WIKIMIA.

4.1 DATASETS AND METRICS

Our experiments use WIKIMIA of different lengths (32, 64, 128, 256) in the original and paraphrase settings. Following Carlini et al. (2022) and Mireshghallah et al. (2022a), we evaluate the effectiveness of a detection method using the True Positive Rate (TPR) and its False Positive Rate (FPR).
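As a concrete companion to Equation 1, the following minimal sketch (ours, not the authors' released implementation) computes the MIN-K% PROB score from precomputed per-token log-probabilities:

```python
import numpy as np

def min_k_prob(token_log_probs: list[float], k: float = 0.2) -> float:
    """MIN-K% PROB (Eq. 1): average log-likelihood of the k% least likely tokens.

    token_log_probs[i] holds log p(x_i | x_1, ..., x_{i-1}) under the target LM.
    Higher scores suggest the text was seen during pretraining.
    """
    lp = np.sort(np.asarray(token_log_probs))  # ascending: lowest-prob tokens first
    e = max(1, int(len(lp) * k))               # E = size of the Min-K%(x) set
    return float(lp[:e].mean())
```

Because only the ranking of scores matters for the AUC metric used below, no threshold needs to be fixed at this stage.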
We plot the ROC curve to measure the trade-off between the TPR and FPR, and report the AUC score (the area under the ROC curve) and the TPR at low FPRs (TPR@5%FPR) as our metrics.

4.2 BASELINE DETECTION METHODS

We take existing reference-based and reference-free MIA methods as our baseline methods and evaluate their performance on WIKIMIA. These methods only consider sentence-level probability. Specifically, we use the LOSS Attack method (Yeom et al., 2018a), which predicts the membership of an example based on the loss of the target model when fed the example as input. In the context of LMs, this loss corresponds to the perplexity of the example (PPL). Another method we consider is the neighborhood attack (Mattern et al., 2023), which leverages probability curvature to detect membership (Neighbor). This approach is identical to the DetectGPT (Mitchell et al., 2023) method recently proposed for classifying machine-generated vs. human-written text. Finally, we compare with the membership inference methods proposed in Carlini et al. (2021), including comparing the example perplexity to zlib compression entropy (Zlib), to the lowercased example perplexity (Lowercase) and to example perplexity under a smaller model pretrained on the same data (Smaller Ref). For the smaller reference model setting, we employ LLaMA-7B as the smaller model for LLaMA-65B and LLaMA-30B, GPT-Neo-125M for GPT-NeoX-20B, OPT-350M for OPT-66B and Pythia-70M for Pythia-2.8B.

4.3 IMPLEMENTATION AND RESULTS

Implementation details. The key hyperparameter of MIN-K% PROB is the percentage of tokens with the highest negative log-likelihood we select to form the top-k% set. We performed a small sweep over 10, 20, 30, 40, 50 on a held-out validation set using the LLaMA-60B model and found that k = 20 works best. We use this value for all experiments without further tuning. As we report the AUC score as our metric, we don't need to determine the threshold ε.

Main results. We compare MIN-K% PROB and baseline methods in Table 1.
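For orientation before the results, the reference-free baselines of §4.2 can be sketched as simple functions of the example's token log-probabilities. This is a loose reading of the cited formulations (the exact definitions in Yeom et al. (2018a) and Carlini et al. (2021) may differ in detail); the helper names are ours.

```python
import zlib
import numpy as np

def loss_score(token_log_probs):
    """LOSS attack / PPL: average log-likelihood, so higher = more member-like."""
    return float(np.mean(token_log_probs))

def zlib_score(text, token_log_probs):
    """Calibrate the log-likelihood by the text's zlib compression entropy."""
    zlib_entropy = len(zlib.compress(text.encode("utf-8")))
    return float(np.sum(token_log_probs)) / zlib_entropy

def lowercase_score(token_log_probs, lowercase_token_log_probs):
    """Compare the example's likelihood to that of its lowercased version."""
    return float(np.mean(token_log_probs) - np.mean(lowercase_token_log_probs))

def smaller_ref_score(token_log_probs, ref_token_log_probs):
    """Calibrate against a smaller model pretrained on the same data."""
    return float(np.mean(token_log_probs) - np.mean(ref_token_log_probs))
```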
Our experiments show that MIN-K% PROB consistently outperforms all baseline methods across diverse target language models, both in the original and paraphrase settings. MIN-K% PROB achieves an AUC score of 0.72 on average, marking a 7.4% improvement over the best baseline method (i.e., PPL). Among the baselines, the simple LOSS Attack (PPL) outperforms the others. This demonstrates the effectiveness and generalizability of MIN-K% PROB in detecting pretraining data from various LMs. Further results such as TPR@5%FPR can be found in Table 6 in Appendix A, which shows a similar trend.

4.4 ANALYSIS

We further delve into the factors influencing detection difficulty, focusing on two aspects: (1) the size of the target model, and (2) the length of the text.

Model size. We evaluate the performance of reference-free methods on detecting 128-length pretraining texts from different-sized LLaMA models (7, 13, 30, 65B). Figure 2a demonstrates a noticeable trend: the AUC score of the methods rises with increasing model size. This is likely because larger models have more parameters and thus are more likely to memorize the pretraining data.
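To make the evaluation protocol concrete, here is a hedged end-to-end sketch that scores texts under a HuggingFace causal LM and reports the AUC of §4.1. It is our own illustration, not the paper's code: the model choice and the member/non-member lists are placeholders.

```python
import numpy as np
import torch
from sklearn.metrics import roc_auc_score
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "EleutherAI/pythia-2.8b"  # illustrative target LM
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def token_log_probs(text: str) -> np.ndarray:
    """log p(x_i | x_1, ..., x_{i-1}) for every token after the first."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    logp = torch.log_softmax(logits[0, :-1], dim=-1)  # predict next token
    return logp.gather(1, ids[0, 1:].unsqueeze(-1)).squeeze(-1).numpy()

def min_k_prob(text: str, k: float = 0.2) -> float:
    lp = np.sort(token_log_probs(text))
    return float(lp[: max(1, int(len(lp) * k))].mean())

# Placeholder data, following the WIKIMIA construction: member texts predate
# the model's training cutoff, non-member texts postdate it.
members = ["A pre-2017 Wikipedia event paragraph ..."]
nonmembers = ["A post-2023 Wikipedia event paragraph ..."]
scores = [min_k_prob(t) for t in members + nonmembers]
labels = [1] * len(members) + [0] * len(nonmembers)
print("AUC:", roc_auc_score(labels, scores))
```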
[Figure 2 omitted: two line plots of AUC vs. (a) model size in billions of parameters and (b) example length (32-256), for PPL, Neighbor, and Min-K% Prob.]

Figure 2: As model size or text length increases, detection becomes easier.

Length of text. In another experiment, we evaluate the detection method performance on examples of varying lengths in the original setting. As shown in Figure 2b, the AUC score of different methods increases as text length increases, likely because longer texts contain more information memorized by the target model, making them more distinguishable from the unseen texts.

Table 1: AUC score for detecting pretraining examples from the given model on WIKIMIA for MIN-K% PROB and baselines. Ori. and Para. denote the original and paraphrase settings, respectively. Bold shows the best AUC within each column.
Method      | Pythia-2.8B  | NeoX-20B    | LLaMA-30B   | LLaMA-65B   | OPT-66B     | Avg.
            | Ori.  Para.  | Ori.  Para. | Ori.  Para. | Ori.  Para. | Ori.  Para. |
Neighbor    | 0.61  0.59   | 0.68  0.58  | 0.71  0.62  | 0.71  0.69  | 0.65  0.62  | 0.65
PPL         | 0.61  0.61   | 0.70  0.70  | 0.70  0.70  | 0.71  0.72  | 0.66  0.64  | 0.67
Zlib        | 0.65  0.54   | 0.72  0.62  | 0.72  0.64  | 0.72  0.66  | 0.67  0.57  | 0.65
Lowercase   | 0.59  0.60   | 0.68  0.67  | 0.59  0.54  | 0.63  0.60  | 0.59  0.58  | 0.61
Smaller Ref | 0.60  0.58   | 0.68  0.65  | 0.72  0.64  | 0.74  0.70  | 0.67  0.64  | 0.66
MIN-K% PROB | 0.67  0.66   | 0.76  0.74  | 0.74  0.73  | 0.74  0.74  | 0.71  0.69  | 0.72

In the following two sections, we apply MIN-K% PROB to real-world scenarios to detect copyrighted books and contaminated downstream tasks within LLMs.

# 5 CASE STUDY: DETECTING COPYRIGHTED BOOKS IN PRETRAINING DATA

MIN-K% PROB can also detect potential copyright infringement in training data, as we show in this section. Specifically, we use MIN-K% PROB to detect excerpts from copyrighted books in the Books3 subset of the Pile dataset (Gao et al., 2020) that may have been included in the GPT-3 (text-davinci-003) training data.

5.1 EXPERIMENTAL SETUP

Validation data to determine detection threshold. We construct a validation set using 50 books known to be memorized by ChatGPT, likely indicating their presence in its training data (Chang et al., 2023), as positive examples. For negative examples, we collected 50 new books with first editions in 2023 that could not have been in the training data.
From each book, we randomly extract 100 snippets of 512 words, creating a balanced validation set of 10,000 examples. We determine the optimal classification threshold with MIN-K% PROB by maximizing detection accuracy on this set.

Test data and metrics. We randomly select 100 books from the Books3 corpus that are known to contain copyrighted content (Min et al., 2023). From each book, we extract 100 random 512-word snippets, creating a test set of 10,000 excerpts. We apply the threshold to decide whether these book snippets were trained on by GPT-3. We then report the percentage of snippets in each book (i.e., the contamination rate) that are identified as being part of the pre-training data.

5.2 RESULTS

Figure 3 shows MIN-K% PROB achieves an AUC of 0.88, outperforming baselines in detecting copyrighted books. We apply the optimal threshold of MIN-K% PROB to the test set of 10,000 snippets from 100 books from Books3. Table 2 presents the top 20 books with the highest predicted contamination rates. Figure 4 reveals that nearly 90% of the books have an alarming contamination rate over 50%.
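A minimal sketch of the calibration step described in §5.1, assuming snippet-level MIN-K% PROB scores have already been computed (function names are ours):

```python
import numpy as np

def best_threshold(scores, labels):
    """Pick the MIN-K% PROB threshold maximizing accuracy on the validation
    books (labels: 1 = known-memorized book snippet, 0 = post-2023 book)."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    candidates = np.unique(scores)
    accs = [np.mean((scores >= t).astype(int) == labels) for t in candidates]
    return candidates[int(np.argmax(accs))]

def contamination_rate(snippet_scores, threshold):
    """Fraction of a book's snippets flagged as pretraining members."""
    return float(np.mean(np.asarray(snippet_scores) >= threshold))
```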
Method      | Book AUC
Neighbor    | 0.75
PPL         | 0.84
Zlib        | 0.81
Lowercase   | 0.80
MIN-K% PROB | 0.88

Figure 3: AUC scores for detecting the validation set of copyrighted books on GPT-3.

[Figure 4 omitted: histogram of per-book detected contamination rates (x-axis: contamination rate %, roughly 20-100).]

Figure 4: Distribution of detected contamination rate of 100 copyrighted books.

Table 2: Top 20 copyrighted books in GPT-3's pretraining data. The listed contamination rate represents the percentage of text excerpts from each book identified in the pretraining data.

Rate (%) | Book Title | Author | Year
100 | The Violin of Auschwitz | Maria Àngels Anglada | 2010
100 | North American Stadiums | Grady Chambers | 2018
100 | White Chappell Scarlet Tracings | Iain Sinclair | 1987
100 | Lost and Found | Alan Dean | 2001
100 | A Different City | Tanith Lee | 2015
100 | Our Lady of the Forest | David Guterson | 2003
100 | The Expelled | Mois Benarroch | 2013
99 | Blood Cursed | Alex Archer | 2013
99 | Genesis Code: A Thriller of the Near Future | Jamie Metzl | 2014
99 | The Sleepwalker's Guide to Dancing | Mira Jacob | 2014
99 | The Harlan Ellison Hornbook | Harlan Ellison | 1990
99 | The Book of Freedom | Paul Selig | 2018
99 | Three Strong Women | Marie NDiaye | 2009
99 | The Leadership Mind Switch: Rethinking How We Lead in the New World of Work | D. A. Benton, Kylie Wright-Ford | 2017
99 | Gold | Chris Cleave | 2012
99 | The Tower | Simon Clark | 2005
98 | Amazon | Bruce Parry | 2009
98 | Ain't It Time We Said Goodbye: The Rolling Stones on the Road to Exile | Robert Greenfield | 2014
98 | Page One | David Folkenflik | 2011
98 | Road of Bones: The Siege of Kohima 1944 | Fergal Keane | 2010

# 6 CASE STUDY: DETECTING DOWNSTREAM DATASET CONTAMINATION

Assessing the leakage of downstream task data into pretraining corpora is an important issue, but it is challenging to address given the lack of access to pretraining datasets.
In this section, we investigate the possibility of using MIN-K% PROB to detect information leakage and perform ablation studies to understand how various training factors impact detection difficulty. Specifically, we continually pretrain the 7B-parameter LLaMA model (Touvron et al., 2023a) on pretraining data that have been purposefully contaminated with examples from downstream tasks.

6.1 EXPERIMENTS

Experimental setup. To simulate downstream task contamination that could occur in real-world settings, we create contaminated pretraining data by inserting examples from downstream tasks into a pretraining corpus. Specifically, we sample text from the RedPajama corpus (TogetherCompute, 2023) and insert formatted examples from the downstream datasets BoolQ (Clark et al., 2019), IMDB (Maas et al., 2011), Truthful QA (Lin et al., 2021), and Commonsense QA (Talmor et al., 2019) in contiguous segments at random positions in the uncontaminated text. We insert 200 (positive) examples from each of these datasets into the pretraining data while also isolating a set of 200 (negative) examples from
each dataset that are known to be absent from the contaminated corpus. This creates a contaminated pretraining dataset containing 27 million tokens, with 0.1% drawn from downstream datasets.

We evaluate the effectiveness of MIN-K% PROB at detecting leaked benchmark examples by computing AUC scores over these 400 examples on a LLaMA 7B model finetuned for one epoch on our contaminated pretraining data at a constant learning rate of 1e-4.

Main results. We present the main attack results in Table 3. We find that MIN-K% PROB outperforms all baselines. We report TPR@5%FPR in Table 7 in Appendix A, where MIN-K% PROB shows a 12.2% improvement over the best baseline.

Table 3: AUC scores for detecting contaminant downstream examples. Bold shows the best AUC score within each column.

Method      | BoolQ | Commonsense QA | IMDB | Truthful QA | Avg.
Neighbor    | 0.68  | 0.56           | 0.80 | 0.59        | 0.66
Zlib        | 0.76  | 0.63           | 0.71 | 0.63        | 0.68
Lowercase   | 0.74  | 0.61           | 0.79 | 0.56        | 0.68
PPL         | 0.89  | 0.78           | 0.97 | 0.71        | 0.84
MIN-K% PROB | 0.91  | 0.80           | 0.98 | 0.74        | 0.86

6.2 RESULTS AND ANALYSIS

The simulation with contaminated datasets allows us to perform ablation studies to empirically analyze the effects of dataset size, frequency of data occurrence, and learning rate on detection difficulty, as theorized in Section 2.1. The empirical results largely align with and validate the theoretical framework proposed. In summary, we find that detection becomes more challenging as data occurrence and learning rate decrease, and the effect of dataset size on detection difficulty depends on whether the contaminants are outliers relative to the distribution of the pretraining data.

Pretraining dataset size. We construct contaminated datasets of 0.17M, 0.27M, 2.6M and 26M tokens by mixing fixed downstream examples (200 examples per downstream task) with varying amounts of RedPajama data, mimicking real-world pretraining.
Despite the theory suggesting greater difficulty with more pretraining data, Figure 5a shows that AUC scores counterintuitively increase with pretraining dataset size. This aligns with findings that LMs better memorize tail outliers (Feldman, 2020; Zhang et al., 2021). With more RedPajama tokens in the constructed dataset, downstream examples become more significant outliers. We hypothesize that their enhanced memorization likely enables easier detection with perplexity-based metrics.

To verify our hypothesis, we construct control data where contaminants are not outliers. We sample Real Time Data News August 2023 (https://huggingface.co/datasets/RealTimeData/News_August_2023), containing post-2023 news absent from LLaMA pretraining. We create three synthetic corpora by concatenating 1000, 5000 and 10000 examples from this corpus, hence creating corpora of sizes 0.77M, 3.9M and 7.6M tokens respectively. In each setting, we consider 100 of these examples to be contaminant (positive) examples and set aside another set of 100 examples from News August 2023 (negative). Figure 5b shows AUC scores decrease as the dataset size increases.

Detection of outlier contaminants like downstream examples gets easier as data size increases, since models effectively memorize long-tail samples. However, detecting general in-distribution samples from the pretraining data distribution gets harder with more data, following theoretical expectations.

Data occurrence. To study the relationship between detection difficulty and data occurrence, we construct a contaminated pretraining corpus by inserting multiple copies of each downstream data point into a pretraining corpus, where the occurrence of each example follows a Poisson distribution. We measure the relationship between the frequency of an example in the pretraining data and its AUC score. Figure 5c shows that AUC scores correlate positively with the occurrence of examples.
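A small sketch of the contamination construction used in this ablation. It is our own hedged reading of the setup: the corpus and example lists are placeholders, and each example's occurrence count is drawn from a Poisson distribution as described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def contaminate(corpus_docs, downstream_examples, lam=1.0):
    """Insert each formatted downstream example into the pretraining corpus
    a Poisson(lam)-distributed number of times, each copy as a contiguous
    segment at a random position (mirroring Sections 6.1-6.2)."""
    docs = list(corpus_docs)
    counts = rng.poisson(lam, size=len(downstream_examples))
    for example, count in zip(downstream_examples, counts):
        for _ in range(count):
            docs.insert(int(rng.integers(0, len(docs) + 1)), example)
    return docs
```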
[Figure 5 omitted: three line plots of AUC for BoolQ, Commonsense QA, IMDB, and Truthful QA, vs. pretraining dataset size (0.17M-27M tokens), vs. news dataset size (0.77M-7.6M tokens), and vs. occurrence count.]

(a) Outlier contaminants, e.g., downstream examples, become easier to detect as dataset size increases. (b) In-distribution contaminants, e.g., news articles, are harder to detect as dataset size increases. (c) Contaminants that occur more frequently in the dataset are easier to detect.

Figure 5: We show the effect of contamination rate (expressed as a percentage of the total number of pretraining tokens) and occurrence frequency on the ease of detection of data contaminants using MIN-K% PROB.

Learning rate. We also study the effect of varying the learning rates used during pretraining on the detection statistics of the contaminant examples (see Table 4). We find that raising the learning rate from 10⁻⁵ to 10⁻⁴ increases AUC scores significantly in all the downstream tasks, implying that higher learning rates cause models to memorize their pretraining data more strongly. A more in-depth analysis in Table 8 in Appendix A demonstrates that a higher learning rate leads to more memorization rather than generalization for these downstream tasks.

Table 4: AUC scores for detecting contaminant downstream examples using two different learning rates. Detection becomes easier when higher learning rates are used during training. Bold shows the best AUC score within each column.
Learning rate | BoolQ | Commonsense QA | IMDB | LSAT QA | Truthful QA
1 × 10⁻⁵      | 0.64  | 0.59           | 0.76 | 0.72    | 0.56
1 × 10⁻⁴      | 0.91  | 0.80           | 0.98 | 0.82    | 0.74

# 7 CASE STUDY: PRIVACY AUDITING OF MACHINE UNLEARNING

We also demonstrate that our proposed technique can effectively address the need for auditing machine unlearning, ensuring compliance with privacy regulations (Figure 6).

7.1 BACKGROUND

The right to be forgotten and machine unlearning. In today's landscape of machine learning systems, it is imperative to uphold individuals' "right to be forgotten", a legal obligation outlined in regulations such as the General Data Protection Regulation (GDPR) (Voigt & Von dem Bussche, 2017) and the California Consumer Privacy Act (CCPA) (Legislature, 2018). This requirement allows users to request the removal of their data from trained models. To address this need, the concept of machine unlearning has emerged as a solution for purging data from machine learning models, and various machine unlearning methods have been introduced (Ginart et al., 2019; Liu et al., 2020; Wu et al., 2020; Bourtoule et al., 2021; Izzo et al., 2021; Sekhari et al., 2021; Gupta et al., 2021; Ye et al., 2022).

Recently, Eldan & Russinovich (2023) introduced a novel approach for performing machine unlearning on LLMs. This approach involves further fine-tuning the LLMs with alternative labels for specific tokens, effectively creating a modified version of the model that no longer contains the to-be-unlearned content. Specifically, the authors demonstrated the efficacy of this method using the LLaMA2-7B-chat model (Touvron et al., 2023b), showcasing its ability to "unlearn" information from the Harry Potter book series, which results in the LLaMA2-7B-WhoIsHarryPotter model (available at https://huggingface.co/microsoft/Llama2-7b-WhoIsHarryPotter). In this case study, we aim to assess whether this model successfully eliminates memorized content related to the Harry Potter series.
[Figure 6 omitted: a two-stage diagram. Stage 1 (machine unlearning): an unlearning request ("forget the world of Harry Potter!") is applied to LLaMA2-7B-chat via Eldan & Russinovich (2023), yielding the unlearned Llama2-7b-WhoIsHarryPotter model. Stage 2 (audit unlearning): on the regular question "Who is Harry Potter?" the unlearned model no longer answers with the book character (pass), but on a question identified by MIN-K% PROB, "In Harry Potter, what type of animal is Hedwig?", both the original and unlearned models answer "Hedwig is a white owl" (fail).]
Figure 6: Auditing machine unlearning with MIN-K% PROB. Machine unlearning methods are designed to remove copyrighted and personal data from large language models. We use MIN-K% PROB to audit an unlearned LLM that has been trained to forget copyrighted books. However, we find that such a model can still output related copyrighted content.

7.2 EXPERIMENTS

To elicit Harry Potter-related content from the unlearned model LLaMA2-7B-WhoIsHarryPotter, we consider two settings: story completion (§7.2.1) and question answering (§7.2.2). In story completion, we identify suspicious chunks from the original Harry Potter books using MIN-K% PROB. We then use the unlearned model to generate completions and compare them with the gold continuation. In question answering, we generate a series of questions related to Harry Potter using GPT-4 (OpenAI, https://chat.openai.com/chat). We filter these questions using MIN-K% PROB, and then use the unlearned model to produce answers. These answers are then compared with the gold answers generated by GPT-4 and subsequently verified by humans.
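Both settings rely on the same screening step, detailed in §7.2.1: compare MIN-K% PROB scores under the unlearned and original models and keep texts whose scores barely moved. A minimal sketch, assuming scoring functions like the one sketched in §3 (the ratio window matches the one reported below):

```python
def suspicious(texts, score_unlearned, score_original, tol=1.15):
    """Flag texts whose MIN-K% PROB barely changed after unlearning.

    score_* map a text to its MIN-K% PROB under each model; a score ratio
    within (1/tol, tol) suggests the content may not have been unlearned.
    """
    flagged = []
    for t in texts:
        ratio = score_unlearned(t) / score_original(t)
        if 1 / tol < ratio < tol:
            flagged.append(t)
    return flagged
```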
7.2.1 STORY COMPLETION

Identifying suspicious texts using MIN-K% PROB. The process begins with the identification of suspicious chunks using our MIN-K% PROB metric. First, we gather the plain text of Harry Potter Series 1 to 4 and segment these books into 512-word chunks, resulting in approximately 1000 chunks. We then compute the MIN-K% PROB scores for these chunks using both the LLaMA2-7B-WhoIsHarryPotter model and the original LLaMA2-7B-chat model. To identify chunks where the unlearning process may have failed, we compare the MIN-K% PROB scores between the two models. If the ratio of the scores from the two models falls within the range (1/1.15, 1.15), we classify the chunk as a suspicious unlearn-failed chunk. This screening process identifies 188 such chunks. We also notice that using perplexity alone as the metric fails to identify any such chunk. We then test the LLaMA2-7B-WhoIsHarryPotter model with these suspicious chunks to assess its ability to complete the story. For each suspicious chunk, we prompt the model with its initial 200 words and use multinomial sampling to sample 20 model-generated continuations per chunk.

Results. We compare the completed stories with the ground-truth storylines using both the SimCSE score (Gao et al., 2021), which gives a similarity score from 0 to 1, and GPT-4, where we prompt the model with the template in Table 9 to return a similarity score from 1 to 5, along with a reason explaining the similarity. We can still find completions very similar to the original story; for example, 5.3% of generated completions have a GPT similarity score of 4 or higher relative to the gold completion. The distributions of these two scores over the suspicious chunks are shown in Figure 7. Surprisingly, we find a considerable number of chunks whose auto-completions from the "
unlearned" model closely resemble the original story: 10 chunks have a similarity score higher than or equal to 4 according to the GPT-4 evaluator.

[Figure 7 omitted: histograms of (a) the SimCSE score and (b) the GPT-4 similarity score over the suspicious chunks.]

Figure 7: Distribution of the SimCSE score (a) and GPT-scored similarity (b) between the original story and the completion by the LLaMA2-7B-WhoIsHarryPotter model.

Table 5: The unlearned model LLaMA2-7B-WhoIsHarryPotter answers questions related to Harry Potter correctly. We manually cross-checked these responses against the Harry Potter book series for verification.

Q: In Harry Potter, what type of animal is Hedwig?
Unlearned model: Hedwig is a white owl.
GPT-4: Hedwig is a white owl.
Source: "For Harry's birthday, Hagrid buys Harry a snowy owl named Hedwig." (Harry Potter and the Philosopher's Stone)

Q: In Harry Potter, what creature guards the vaults at Gringotts Bank?
Unlearned model: In the Harry Roper series by T.L. Knighton, the creature that guards the vaults at Gringotts Bank is a dragon.
GPT-4: The creature that guards the vaults at Gringotts Bank is a dragon.
Source: "With Harry and Griphook under the Invisibility Cloak, Hermione disguised as Bellatrix and Ron as a foreign wizard with bushy facial hair, the trio and Griphook Apparate to Gringotts ... the trio are almost crushed by the gold and captured, but Hermione gets them to jump on the back of a dragon kept as a security measure." (Harry Potter and the Deathly Hallows)

Q: In Harry Potter, what is the name of Hagrid's giant spider friend?
Unlearned model: Hagrid's giant spider friend is named Aragog.
GPT-4: Aragog
Source: "'Aragog!' it called. 'Aragog!' And from the middle of the misty, domed web, a spider the size of a small elephant emerged, very slowly." (Harry Potter and the Chamber of Secrets)
And from the middle of the misty, domed web, a spider the size of a small elephant emerged, very slowly." â Harry Potter and the Chamber of Secrets In Harry Potter, what does the spell "Alohomora" do? In the "Magic for Good" series by John G. Hartness, the spell "Alo- homora" is a spell for unlocking doors. Unlocks doors. "She grabbed Harryâ s wand, tapped the lock, and whispered, "Alohomora!" The lock clicked and the door swung open â they piled through it, shut it quickly..." â Harry Potter and the Sorcererâ s Stone In Harry Potter, which of the three Unforgivable Curses causes unbearable pain in the target? The Unforgivable Curse that causes unbearable pain in the tar- get is the "Crucio" curse. Crucio "â Crucio!â At once, the spiderâ s legs bent in upon its body; it rolled over and began to twitch horribly, rocking from side to side. No sound came from it, but Harry was sure that if it could have given voice, it would have been screaming." â Harry Potter and the Goblet of Fire In Harry Potter, what magical crea- ture is known to guard treasure? In the magical world of Harry Rexâ s adventures, the guardian of the treasure is a dragon named "Glimmer." Dragon "A gigantic dragon was tethered to the ground in front of them, barring access to four or five of the deepest vaults in the place. " â Harry Potter and the Deathly Hallows In Harry which spell mons objects? Potter, sum- The spell that summons objects in the world of Harry Potter is the "Accio" spell. Accio "â Accio! Accio! Accio!â she shouted, and toffees zoomed from all sorts of unlikely places, including the lining of Georgeâ s jacket..." â Harry Potter and the Goblet of Fire Potter, In Harry which spell conjures a small flock of birds? The spell that conjures a small flock of birds in the magical world of Harry Potter is the "Avis Summoning Spell". Avis â Avis!â The hornbeam wand let off a blast hike a gun, and a number of small, twittering birds flew out of the end and through the open window into the watery sunlight. â
For instance, Table 10 showcases a few such examples, all of them with SimCSE scores exceeding 0.7. We further note that the study only uses Harry Potter books 1 to 4; including the whole Harry Potter series (7 books) would potentially expose more unlearn-failed chunks.

# 7.2.2 QUESTION ANSWERING

Selecting Harry Potter-related questions with MIN-K% PROB. We generate 1,000 questions related to Harry Potter by prompting GPT-4 with the query "Can you give me a list of questions and answers related to Harry Potter". Similar to identifying suspicious texts in story completion, we compare the MIN-K% PROB scores between the original and unlearned models and select questions whose ratio falls within the range (1/1.15, 1.15), resulting in 103 questions. We use the unlearned model to generate answers to these questions, specifically employing multinomial sampling to sample 20 answers for each question.

Results. We then compare the answers by the unlearned model (referred to as the "candidate") to those provided by GPT-4 (referred to as the "reference") using the ROUGE-L recall measure (Lin, 2004), which calculates the ratio: (# overlapping words between the candidate and reference) / (# words in the reference). A higher ROUGE-L recall value signifies a greater degree of overlap, which can indicate a higher likelihood of unlearning failure; a minimal sketch of this check is shown below. Among the 103 selected questions, we observe an average ROUGE-L recall of 0.23. Conversely, for the unselected questions, the average ROUGE-L recall is 0.10. These findings underscore the capability of our MIN-K% PROB to identify potentially unsuccessful instances of unlearning. Table 5 shows the selected questions related to Harry Potter that are answered correctly by the unlearned model LLaMA2-7B-WhoIsHarryPotter (with ROUGE-L recall being 1). We also verify the generated answers by cross-checking them against the Harry Potter series. These results suggest that the knowledge about Harry Potter is not completely erased from the unlearned model.
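The sketch below implements the ROUGE-L recall filter described above; whitespace tokenization is an assumption for brevity, whereas the paper uses the original ROUGE-L measure (Lin, 2004).

```python
def lcs_length(a: list[str], b: list[str]) -> int:
    """Length of the longest common subsequence between two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, tok_a in enumerate(a):
        for j, tok_b in enumerate(b):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if tok_a == tok_b
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[len(a)][len(b)]

def rouge_l_recall(candidate: str, reference: str) -> float:
    """ROUGE-L recall: LCS length divided by the reference length."""
    cand, ref = candidate.lower().split(), reference.lower().split()
    return lcs_length(cand, ref) / max(len(ref), 1)

# Flag a question when any sampled answer overlaps heavily with the reference.
answers = ["Hedwig is a white owl."]   # candidates from the unlearned model
reference = "Hedwig is a white owl."   # answer provided by GPT-4
if max(rouge_l_recall(a, reference) for a in answers) >= 1.0:
    print("likely unlearning failure")
```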
# 8 RELATED WORK

Membership inference attack in NLP. Membership Inference Attacks (MIAs) aim to determine whether an arbitrary sample is part of a given model's training data (Shokri et al., 2017; Yeom et al., 2018b). These attacks pose substantial privacy risks to individuals and often serve as a basis for more severe attacks, such as data reconstruction (Carlini et al., 2021; Gupta et al., 2022; Cummings et al., 2023). Due to its fundamental association with privacy risk, MIA has more recently found applications in quantifying privacy vulnerabilities within machine learning models and in verifying the accurate implementation of privacy-preserving mechanisms (Jayaraman & Evans, 2019; Jagielski et al., 2020; Zanella-Béguelin et al., 2020; Nasr et al., 2021; Huang et al., 2022; Nasr et al., 2023; Steinke et al., 2023). Initially applied to tabular and computer vision data, the concept of MIA has recently expanded into the realm of language-oriented tasks. However, this expansion has predominantly centered around finetuning data detection (Song & Shmatikov, 2019; Shejwalkar et al., 2021; Mahloujifar et al., 2021; Jagannatha et al., 2021; Mireshghallah et al., 2022b). Our work focuses on the application of MIA to pretraining data detection, an area that has received limited attention in previous research efforts.

Dataset contamination. The dataset contamination issue in LMs has gained attention recently, since benchmark evaluation is undermined if evaluation examples are accidentally seen during pre-training. Brown et al. (2020b), Wei et al. (2022), and Du et al. (2022) consider an example contaminated if there is a 13-gram collision between the training data and the evaluation example. Chowdhery et al. (2022) further improves this by deeming an example contaminated if 70% of its 8-grams appear in the training data. Touvron et al. (2023b) builds on these methods by extending the framework to tokenized inputs and judging a token to be contaminated if it appears in any token n-gram longer than 10 tokens. However, these methods require access to the pretraining corpora, which are largely unavailable for recent model releases. Other approaches try to detect contamination without access to pretraining corpora.
Sainz et al. (2023) simply prompts ChatGPT to generate examples from a dataset by providing the dataset's name and split, and found that the models generate verbatim instances from NLP datasets. Golchin & Surdeanu (2023) extends this framework to extract more memorized instances by incorporating partial instance content into the prompt. Similarly, Weller et al. (2023) demonstrates the ability to extract memorized snippets from Wikipedia via prompting. While these methods study contamination in closed-source models, they cannot determine contamination on an instance level. Marone & Van Durme (2023) argues that model developers should release training-data membership testing tools accompanying their LLMs to remedy this. However, this is not yet widely practiced.
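For concreteness, the n-gram collision tests described above reduce to a check like the following sketch; the exact tokenization and the value of n vary by paper, and whitespace tokens are assumed here.

```python
def ngrams(tokens: list[str], n: int = 13) -> set[tuple]:
    """All contiguous n-grams of a token list."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def is_contaminated(example: str, training_corpus: list[str], n: int = 13) -> bool:
    """Flag an evaluation example if any n-gram also occurs in training data."""
    example_ngrams = ngrams(example.split(), n)
    for doc in training_corpus:
        if example_ngrams & ngrams(doc.split(), n):
            return True
    return False
```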
# 9 CONCLUSION

We present a pre-training data detection dataset, WIKIMIA, and a new approach, MIN-K% PROB. Our approach builds on the intuition that text seen during training tends to contain fewer outlier tokens with very low probabilities than unseen text. To verify the effectiveness of our approach in real-world settings, we additionally perform two case studies: detecting dataset contamination and detecting published books. For dataset contamination, we observe empirical results aligning with theoretical predictions about how detection difficulty changes with dataset size, example frequency, and learning rate. Most strikingly, our book detection experiments provide strong evidence that GPT-3 models may have been trained on copyrighted books.
# REFERENCES

Raef Bassily, Vitaly Feldman, Cristóbal Guzmán, and Kunal Talwar. Stability of stochastic gradient descent on nonsmooth convex losses. Advances in Neural Information Processing Systems, 33:4381–4391, 2020.

Stella Biderman, Hailey Schoelkopf, Quentin Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, Aviya Skowron, Lintang Sutawika, and Oskar van der Wal.
Pythia: A suite for analyzing large language models across training and scaling, 2023.

Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, and Samuel Weinbach. GPT-NeoX-20B: An open-source autoregressive language model. In Proceedings of the ACL Workshop on Challenges & Perspectives in Creating Large Language Models, 2022. URL https://arxiv.org/abs/2204.06745.

Lucas Bourtoule, Varun Chandrasekaran, Christopher A Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, and Nicolas Papernot. Machine unlearning. In 2021 IEEE Symposium on Security and Privacy (SP), pp. 141–159. IEEE, 2021.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 1877–1901. Curran Associates, Inc., 2020a. URL https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901, 2020b.

Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, et al. Extracting training data from large language models. In 30th USENIX Security Symposium (USENIX Security 21), pp. 2633–2650, 2021.

Nicholas Carlini, Steve Chien, Milad Nasr, Shuang Song, Andreas Terzis, and Florian Tramer. Membership inference attacks from first principles. In 2022 IEEE Symposium on Security and Privacy (SP), pp. 1897–1914. IEEE, 2022.

Kent K Chang, Mackenzie Cramer, Sandeep Soni, and David Bamman. Speak, memory: An archaeology of books known to ChatGPT/GPT-4. arXiv preprint arXiv:2305.00118, 2023.

Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al.
PaLM: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022.

Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. BoolQ: Exploring the surprising difficulty of natural yes/no questions. In NAACL, 2019.

Rachel Cummings, Damien Desfontaines, David Evans, Roxana Geambasu, Matthew Jagielski, Yangsibo Huang, Peter Kairouz, Gautam Kamath, Sewoong Oh, Olga Ohrimenko, et al. Challenges towards the next frontier in privacy. arXiv preprint arXiv:2304.06929, 2023.

Nan Du, Yanping Huang, Andrew M Dai, Simon Tong, Dmitry Lepikhin, Yuanzhong Xu, Maxim Krikun, Yanqi Zhou, Adams Wei Yu, Orhan Firat, et al.
GLaM: Efficient scaling of language models with mixture-of-experts. In International Conference on Machine Learning, pp. 5547–5569. PMLR, 2022.

Ronen Eldan and Mark Russinovich. Who's Harry Potter? Approximate unlearning in LLMs. arXiv preprint arXiv:2310.02238, 2023.

Vitaly Feldman. Does learning require memorization? A short tale about a long tail. In Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, pp. 954–959, 2020.

Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al. The Pile: An 800GB dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027, 2020.

Tianyu Gao, Xingcheng Yao, and Danqi Chen. SimCSE: Simple contrastive learning of sentence embeddings. In Empirical Methods in Natural Language Processing (EMNLP), 2021.
Antonio Ginart, Melody Guan, Gregory Valiant, and James Y Zou. Making AI forget you: Data deletion in machine learning. Advances in Neural Information Processing Systems, 32, 2019.

Shahriar Golchin and Mihai Surdeanu. Time travel in LLMs: Tracing data contamination in large language models. arXiv preprint arXiv:2308.08493, 2023.

Samyak Gupta, Yangsibo Huang, Zexuan Zhong, Tianyu Gao, Kai Li, and Danqi Chen. Recovering private text in federated learning of language models. Advances in Neural Information Processing Systems, 35:8130–8143, 2022.

Varun Gupta, Christopher Jung, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi, and Chris Waites.
Adaptive machine unlearning. Advances in Neural Information Processing Systems, 34:16319–16330, 2021.

Moritz Hardt, Ben Recht, and Yoram Singer. Train faster, generalize better: Stability of stochastic gradient descent. In International Conference on Machine Learning, pp. 1225–1234. PMLR, 2016.

Yangsibo Huang, Chun-Yin Huang, Xiaoxiao Li, and Kai Li. A dataset auditing method for collaboratively trained machine learning models. IEEE Transactions on Medical Imaging, 2022.

Zachary Izzo, Mary Anne Smart, Kamalika Chaudhuri, and James Zou. Approximate data deletion from machine learning models. In International Conference on Artificial Intelligence and Statistics, pp. 2008–2016. PMLR, 2021.

Abhyuday Jagannatha, Bhanu Pratap Singh Rawat, and Hong Yu. Membership inference attack susceptibility of clinical language models. arXiv preprint arXiv:2104.08305, 2021.

Matthew Jagielski, Jonathan Ullman, and Alina Oprea. Auditing differentially private machine learning: How private is private SGD? Advances in Neural Information Processing Systems, 33:22205–22216, 2020.

Bargav Jayaraman and David Evans. Evaluating differentially private machine learning in practice. In 28th USENIX Security Symposium (USENIX Security 19), pp. 1895–1912, 2019.
Nikhil Kandpal, Eric Wallace, and Colin Raffel. Deduplicating training data mitigates privacy risks in language models. In International Conference on Machine Learning, pp. 10697–10707. PMLR, 2022.

California State Legislature. California Consumer Privacy Act, 2018. URL https://oag.ca.gov/privacy/ccpa.

Klas Leino and Matt Fredrikson. Stolen memories: Leveraging model memorization for calibrated white-box membership inference. In 29th USENIX Security Symposium (USENIX Security 20), pp. 1605–1622, 2020.

Chin-Yew Lin. ROUGE:
A package for automatic evaluation of summaries. In Text Summarization Branches Out, pp. 74–81, 2004.

Stephanie Lin, Jacob Hilton, and Owain Evans. TruthfulQA: Measuring how models mimic human falsehoods, 2021.

Gaoyang Liu, Xiaoqiang Ma, Yang Yang, Chen Wang, and Jiangchuan Liu. Federated unlearning. arXiv preprint arXiv:2012.13891, 2020.

Yunhui Long, Vincent Bindschaedler, Lei Wang, Diyue Bu, Xiaofeng Wang, Haixu Tang, Carl A Gunter, and Kai Chen. Understanding membership inferences on well-generalized learning models. arXiv preprint arXiv:1802.04889, 2018.

Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts.
Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pp. 142–150, Portland, Oregon, USA, June 2011. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/P11-1015.

Inbal Magar and Roy Schwartz. Data contamination: From memorization to exploitation. ArXiv, abs/2203.08242, 2022. URL https://api.semanticscholar.org/CorpusID:247475929.

Saeed Mahloujifar, Huseyin A Inan, Melissa Chase, Esha Ghosh, and Marcello Hasegawa. Membership inference on word embedding and beyond. arXiv preprint arXiv:2106.11384, 2021.

Marc Marone and Benjamin Van Durme. Data portraits: Recording foundation model training data, 2023. URL https://arxiv.org/abs/2303.03919.

Justus Mattern, Fatemehsadat Mireshghallah, Zhijing Jin, Bernhard Schoelkopf, Mrinmaya Sachan, and Taylor Berg-Kirkpatrick. Membership inference attacks against language models via neighbourhood comparison. In Findings of the Association for Computational Linguistics: ACL 2023, pp. 11330–11343, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.findings-acl.719. URL https://aclanthology.org/2023.findings-acl.719.

Sewon Min, Suchin Gururangan, Eric Wallace, Hannaneh Hajishirzi, Noah A Smith, and Luke Zettlemoyer. SILO language models: Isolating legal risk in a nonparametric datastore. arXiv preprint arXiv:2308.04430, 2023.

Fatemehsadat Mireshghallah, Kartik Goyal, Archit Uniyal, Taylor Berg-Kirkpatrick, and Reza Shokri. Quantifying privacy risks of masked language models using membership inference attacks. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pp. 8332–8347, Abu Dhabi, United Arab Emirates, December 2022a. Association for Computational Linguistics. doi: 10.18653/v1/2022.emnlp-main.570. URL https://aclanthology.org/2022.emnlp-main.570.
Fatemehsadat Mireshghallah, Kartik Goyal, Archit Uniyal, Taylor Berg-Kirkpatrick, and Reza Shokri. Quantifying privacy risks of masked language models using membership inference attacks. arXiv preprint arXiv:2203.03929, 2022b.

Eric Mitchell, Yoonho Lee, Alexander Khazatsky, Christopher D. Manning, and Chelsea Finn. DetectGPT: Zero-shot machine-generated text detection using probability curvature, 2023. URL https://arxiv.org/abs/2301.11305.

Maximilian Mozes, Xuanli He, Bennett Kleinberg, and Lewis D. Griffin.
Use of LLMs for illicit purposes: Threats, prevention measures, and vulnerabilities, 2023.

Arvind Narayanan. GPT-4 and professional benchmarks: the wrong answer to the wrong question, 2023. URL https://www.aisnakeoil.com/p/gpt-4-and-professional-benchmarks.

Milad Nasr, Shuang Song, Abhradeep Thakurta, Nicolas Papernot, and Nicholas Carlini. Adversary instantiation: Lower bounds for differentially private machine learning. In 2021 IEEE Symposium on Security and Privacy (SP), pp. 866–882. IEEE, 2021.
Milad Nasr, Jamie Hayes, Thomas Steinke, Borja Balle, Florian Tramèr, Matthew Jagielski, Nicholas Carlini, and Andreas Terzis. Tight auditing of differentially private machine learning. arXiv preprint arXiv:2302.07956, 2023.

OpenAI. GPT-4 technical report, 2023.

Oscar Sainz, Jon Ander Campos, Iker García-Ferrero, Julen Etxaniz, and Eneko Agirre.
Did ChatGPT cheat on your test?, 2023. URL https://hitz-zentroa.github.io/lm-contamination/blog/.

Ayush Sekhari, Jayadev Acharya, Gautam Kamath, and Ananda Theertha Suresh. Remember what you want to forget: Algorithms for machine unlearning. Advances in Neural Information Processing Systems, 34:18075–18086, 2021.

Virat Shejwalkar, Huseyin A Inan, Amir Houmansadr, and Robert Sim. Membership inference attacks against NLP classification models. In NeurIPS 2021 Workshop Privacy in Machine Learning, 2021. URL https://openreview.net/forum?id=74lwg5oxheC.

R. Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov.
Membership inference attacks against machine learning models. In 2017 IEEE Symposium on Security and Privacy (SP), pp. 3–18, 2016.

Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. Membership inference attacks against machine learning models. In 2017 IEEE Symposium on Security and Privacy (SP), pp. 3–18. IEEE, 2017.

Congzheng Song and Vitaly Shmatikov. Auditing data provenance in text-generation models. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 196–206, 2019.

Thomas Steinke, Milad Nasr, and Matthew Jagielski. Privacy auditing with one (1) training run. arXiv preprint arXiv:2305.08846, 2023.

Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4149–4158, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1421. URL https://aclanthology.org/N19-1421.

TogetherCompute. RedPajama: An open source recipe to reproduce LLaMA training dataset, 2023. URL https://github.com/togethercomputer/RedPajama-Data.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023a.

Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. Llama 2: Open foundation and fine-tuned chat models, 2023b.

Paul Voigt and Axel Von dem Bussche. The EU General Data Protection Regulation (GDPR). A Practical Guide, 1st Ed., Cham: Springer International Publishing, 10(3152676):10–5555, 2017.
Lauren Watson, Chuan Guo, Graham Cormode, and Alexandre Sablayrolles. On the importance of difficulty calibration in membership inference attacks. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=3eIrli0TwQ.

Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V Le.
Finetuned language models are zero-shot learners. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=gEZrGCozdqR.

Orion Weller, Marc Marone, Nathaniel Weir, Dawn Lawrie, Daniel Khashabi, and Benjamin Van Durme. "According to ..." prompting language models improves quoting from pre-training data, 2023.

Yinjun Wu, Edgar Dobriban, and Susan Davidson.
DeltaGrad: Rapid retraining of machine learning models. In International Conference on Machine Learning, pp. 10355–10366. PMLR, 2020.

Jingwen Ye, Yifang Fu, Jie Song, Xingyi Yang, Songhua Liu, Xin Jin, Mingli Song, and Xinchao Wang. Learning with recoverable forgetting. In European Conference on Computer Vision, pp. 87–103. Springer, 2022.

Samuel Yeom, Irene Giacomelli, Matt Fredrikson, and Somesh Jha. Privacy risk in machine learning: Analyzing the connection to overfitting. In 2018 IEEE 31st Computer Security Foundations Symposium (CSF), pp. 268–282, 2018a. doi: 10.1109/CSF.2018.00027.

Samuel Yeom, Irene Giacomelli, Matt Fredrikson, and Somesh Jha. Privacy risk in machine learning: Analyzing the connection to overfitting. In 2018 IEEE 31st Computer Security Foundations Symposium (CSF), pp. 268–282. IEEE, 2018b.
Santiago Zanella-Béguelin, Lukas Wutschitz, Shruti Tople, Victor Rühle, Andrew Paverd, Olga Ohrimenko, Boris Köpf, and Marc Brockschmidt. Analyzing information leakage of updates to natural language models. In Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security, pp. 363–375, 2020.

Chiyuan Zhang, Daphne Ippolito, Katherine Lee, Matthew Jagielski, Florian Tramèr, and Nicholas Carlini. Counterfactual memorization in neural language models. arXiv preprint arXiv:2112.12938, 2021.

Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. OPT: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068, 2022.

# A ADDITIONAL RESULTS

Table 6: TPR@5%FPR score for detecting pretraining examples from the given model on WIKIMIA for MIN-K% PROB and baselines. Ori. and Para. denote the original and paraphrase settings, respectively. Bold shows the best score within each column.
| Method | Pythia-2.8B Ori. | Pythia-2.8B Para. | NeoX-20B Ori. | NeoX-20B Para. | LLaMA-30B Ori. | LLaMA-30B Para. | LLaMA-65B Ori. | LLaMA-65B Para. | OPT-66B Ori. | OPT-66B Para. | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Neighbor | 10.2 | 16.2 | 15.2 | 19.3 | 20.1 | 17.2 | 17.2 | 20.0 | 17.3 | 18.8 | 17.2 |
| PPL | 9.4 | 18.0 | 17.3 | 24.9 | **23.7** | 18.7 | 16.5 | 23.0 | 20.9 | 20.1 | 19.3 |
| Zlib | **18.7** | **18.7** | 20.3 | 22.1 | 18.0 | 20.9 | **23.0** | 23.0 | **21.6** | 20.1 | 20.6 |
| Lowercase | 10.8 | 7.2 | 12.9 | 12.2 | 10.1 | 6.5 | 14.4 | 12.2 | 14.4 | 8.6 | 10.9 |
| Smaller Ref | 10.1 | 10.1 | 15.8 | 10.1 | 10.8 | 11.5 | 15.8 | 21.6 | 15.8 | 10.1 | 13.2 |
| MIN-K% PROB | 13.7 | 15.1 | **21.6** | **27.3** | 22.3 | **25.9** | 20.9 | **30.9** | **21.6** | **23.0** | **22.2** |

Table 7: TPR @ FPR=5% for detecting contaminant downstream examples using reference-based and reference-free methods. Bold shows the best reference-free TPR within each column.

| Method | BoolQ | Commonsense QA | IMDB | Truthful QA | Avg. |
|---|---|---|---|---|---|
| Neighbor | 19 | 7 | 41 | 13 | 20 |
| PPL | 52 | **24** | 74 | 17 | 42 |
| Zlib | 18 | 9 | 19 | 7 | 13 |
| Lowercase | 24 | 3 | 26 | 14 | 17 |
| MIN-K% PROB | **55** | 23 | **83** | **21** | **46** |

Table 8: Accuracy of the model finetuned in Section 6.1 on the non-contaminant and contaminant examples used for AUC computation for each downstream dataset. The difference in average classification accuracy of contaminant examples over that of non-contaminant examples is 0.04 at a learning rate of 1 × 10^-5 and 0.11 at a learning rate of 1 × 10^-4. This indicates that memorization becomes a significantly more pronounced effect than generalization at larger learning rates.
| Learning rate | BoolQ | Commonsense QA | IMDB | LSAT QA | Truthful QA | Avg. |
|---|---|---|---|---|---|---|
| Non-contaminant examples, 1 × 10^-5 | 0.68 | 0.47 | 0.89 | 0.22 | 0.28 | 0.51 |
| Non-contaminant examples, 1 × 10^-4 | 0.69 | 0.48 | 0.90 | 0.24 | 0.33 | 0.53 |
| Contaminant examples, 1 × 10^-5 | 0.71 | 0.49 | 0.92 | 0.26 | 0.38 | 0.55 |
| Contaminant examples, 1 × 10^-4 | 0.81 | 0.60 | 0.89 | 0.35 | 0.56 | 0.64 |
Table 9: Input template we use to prompt GPT-4 to obtain the similarity score.

System: You are a helpful assistant in evaluating the similarity between two outputs generated by two different AI chatbots. Your goal is to rate the similarity between the two outputs based on a scale of 1 to 5, with 1 being highly dissimilar and 5 being highly similar.

User: Rate the similarity between Output (a) and Output (b) on a scale of 1 to 5, where 1 indicates high dissimilarity, and 5 indicates high similarity. Here are some rules of the evaluation: (1) Consider how closely Output (a) matches Output (b) in terms of content, context, and relevance. (2) Do not provide a rating outside the 1 to 5 scale, and avoid giving a rating of 3 (neutral) whenever possible. (3) Your judgment should be as objective as possible, without being influenced by any potential bias. You should answer "Score: ", followed by an integer rating between 1 to 5, where 1 indicates high dissimilarity, and 5 indicates high similarity. You should then output "Reason: " and provide a short sentence to explain your rating.

Output (a):
Output (b):
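A hedged sketch of collecting this score programmatically with the legacy openai (<1.0) chat API; the message strings are abbreviated versions of the Table 9 template and the parsing is illustrative.

```python
import re
import openai  # legacy openai<1.0 interface assumed

def gpt4_similarity(output_a: str, output_b: str) -> int:
    """Ask GPT-4 to rate the similarity of two outputs on a 1-5 scale."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You are a helpful assistant in evaluating the similarity "
                        "between two outputs generated by two different AI chatbots..."},
            {"role": "user",
             "content": "Rate the similarity between Output (a) and Output (b) on a "
                        "scale of 1 to 5...\n"
                        f"Output (a): {output_a}\nOutput (b): {output_b}"},
        ],
    )
    text = response["choices"][0]["message"]["content"]
    match = re.search(r"Score:\s*([1-5])", text)  # template asks for "Score: <int>"
    return int(match.group(1)) if match else 0
```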
Table 10: Example chunks completed by the LLaMA2-7B-WhoIsHarryPotter model, alongside the SimCSE and GPT-4 similarity scores.

Chunk #1:

▶ Context: He was chewing the gillyweed as hard and fast as he could; it felt unpleasantly slimy and rubbery, like octopus tentacles. Waist-deep in the freezing water he stopped, swallowed, and waited for something to happen. He could hear laughter in the crowd and knew he must look stupid, walking into the lake without showing any sign of magical power. The part of him that was still dry was covered in goose pimples; half immersed in the icy water, a cruel breeze lifting his hair, Harry started to shiver violently. He avoided looking at the stands; the laughter was becoming louder, and there were catcalls and jeering from the Slytherins. ... Then, quite suddenly, Harry felt as though an invisible pillow had been pressed over his mouth and nose. He tried to draw breath, but it made his head spin; his lungs were empty, and he suddenly felt a piercing pain on either side of his neck - Harry clapped his hands around his throat and felt two large slits just below his ears, flapping in the cold air. . . . He had gills. Without pausing to think, he did the only thing that made

▶ Ground truth (Output a): sense - he flung himself forward into the water. The first gulp of icy lake water felt like the breath of life. His head had stopped spinning; he took another great gulp of water and felt it pass smoothly through his gills, sending oxygen back to his brain. He stretched out his hands in front of him and stared at them. They looked green and ghostly under the water, and they had become webbed. He twisted around and looked at his bare feet - they had become elongated and the toes were webbed too: It looked as though he had sprouted flippers. The water didn't feel icy anymore either ... on the contrary, he felt pleasantly cool and very light. . . . Harry struck out once more, marveling at how far and fast his flipper-like feet propelled him through the water, and noticing how clearly he could see, and how he no longer seemed to need to blink. He had soon swum so far into the lake that he could no longer see the bottom. He flipped over and dived into its depths. Silence pressed upon his ears as he soared over a strange, dark, foggy landscape.
He could only see ten feet around him, so that as he sped through the water new scenes seemed to loom suddenly out of the incoming darkness: forests of rippling, tangled black weed, wide plains of mud littered with dull, glimmering stones. He swam deeper and deeper, out toward the middle of the lake, his eyes wide, staring through the eerily gray-lit water around him to the shadow beyond, where the water became opaque. Small fish flickered past him like silver darts. Once or twice he thought he saw something larger moving ahead of him, but when he got nearer, he discovered it to be

▶ Completion by LLaMA2-7B-WhoIsHarryPotter (Output b): sense: he opened his mouth and let the water rush in. Reflexively, he gulped air, and his lungs expanded, filling with the icy water. The coldness spread through his body, and he shivered again, but this time it was different. He felt a strange tingling sensation, like tiny fingers massaging his muscles. As he gulped in more water, his body began to change. His legs grew longer and thinner, turning into a pair of flippers. His arms shortened and transformed into a pair of fins, and his hands became webbed.
▶ SimCSE score: 0.82

▶ GPT-4 score and reason: 5. Both outputs describe a character changing into a sea creature. They both mention the character gulping in water, feeling cold initially then adjusting to the temperature, and developing webbed hands and feet or flippers. However, Output (b) doesn't include the exploration of the underwater environment that is a key part of Output (a), which results in a slight difference.

Chunk #2:
▶ Context: doesn't know about the troll." Ron bit his lip. "Oh, all right," he snapped. "But Percy'd better not see us." Ducking down, they joined the Hufflepuffs going the other way, slipped down a deserted side corridor, and hurried off toward the girls' bathroom. They had just turned the corner when they heard quick footsteps behind them. "Percy!" hissed Ron, pulling Harry behind a large stone griffin. Peering around it, however, they saw not Percy but Snape. He crossed the corridor and disappeared from view.
"What's he doing?" Harry whispered. "Why isn't he down in the dungeons with the rest of the teachers?" "Search me." Quietly as possible, they crept along the next corridor after Snape's fading footsteps. "He's heading for the third floor," Harry said, but Ron held up his hand. "Can you smell something?" Harry sniffed and a foul stench reached his nostrils, a mixture of old socks and the kind of public toilet no one seems to clean. And then they heard it - a low grunting, and the shuffling footfalls of gigantic feet. Ron pointed - at the end of a passage to the left, something huge was moving toward them. They shrank into the shadows and

▶ Ground truth: watched as it emerged into a patch of moonlight. It was a horrible sight. Twelve feet tall, its skin was a dull, granite gray, its great lumpy body like a boulder with its small bald head perched on top like a coconut. It had short legs thick as tree trunks with flat, horny feet. The smell coming from it was incredible. It was holding a huge wooden club, which dragged along the floor because its arms were so long. The troll stopped next to a doorway and peered inside. It waggled its long ears, making up its tiny mind, then slouched slowly into the room. "The key's in the lock," Harry muttered. "We could lock it in." "Good idea," said Ron nervously. They edged toward the open door, mouths dry, praying the troll wasn't about to come out of it. With one great leap, Harry managed to grab the key, slam the door, and lock it. "Yes!" Flushed with their victory, they started to run back up the passage, but as they reached the corner they heard something that made their hearts stop - a high, petrified scream - and it was coming from the chamber they'd just chained up. "Oh, no," said Ron, pale as the Bloody Baron. "It's the girls' bathroom!" Harry gasped. "Hermione!" they said together. It was the last thing they wanted to do, but what choice did they have? Wheeling around, they sprinted back to the door and turned the key, fumbling in their panic. Harry pulled the door open and they ran inside.
Hermione Granger was shrinking against the wall opposite, looking as if she was about to faint. The troll was advancing on her, knocking the sinks off the walls as it went. "Confuse it!" Harry said desperately to Ron, and, seizing a tap, he threw it as hard as he could against the wall.

▶ Completion by LLaMA2-7B-WhoIsHarryPotter (Output b): watched in horror as the creature came into view. It was a troll from the depths of the earth, its skin the color of dirt and mold, its eyes glowing red slits in the dim light. The passageway was too narrow for it to fit, but it could see them, and its grunts grew louder, more menacing. "What do we do?" gulped Harry. "We can't outrun it." "We have to find a way to distract it," decided Ron, his mind racing with ideas. "Quick!" He grabbed a nearby book from the wall and flung it at the troll, hoping to divert its attention. The creature roared in anger and chased after the book, giving them just enough time to slip through a nearby door and slam it shut behind them. They leaned against the door, panting with fear, and listened as the troll beat its fists against the other side of the door, trying to break through. "That was close," said Harry, his voice shaking. "Too close." "We have to get out of here," agreed Ron. "Before it breaks through." But they knew they couldn't escape that way. The only other exit was through the library, and they knew Mr. Hufflepuff would never let them pass. They were trapped.
▶ SimCSE score: 0.84

▶ GPT-4 score and reason: 4. Both outputs present a similar scenario where characters are encountered by a troll and try to escape or distract it. The issues they face, the settings, and the way they express fear show considerable similarities. However, the specific details and descriptions of events do somewhat differ.

Chunk #3:

▶ Context: one, 'cause it might be the last thing yeh do." Malfoy, Crabbe, and Goyle weren't listening; they were talking in an undertone and Harry had a nasty feeling they were plotting how best to disrupt the lesson. "Yeh always wait fer the hippogriff ter make the firs' move," Hagrid continued. "It's polite, see? Yeh walk toward him, and yeh bow, an' yeh wait. If he bows back, yeh're allowed ter touch him. If he doesn' bow, then get away from him sharpish, 'cause those talons hurt. "Right - who wants ter go first?" Most of the class backed farther away in answer. Even Harry, Ron, and Hermione had misgivings. The hippogriffs were tossing their fierce heads and flexing their powerful wings; they didn't seem to like being tethered like this. "No one?" said Hagrid, with a pleading look. "I'll do it," said Harry. There was an intake of breath from behind him, and both Lavender and Parvati whispered, "Oooh, no, Harry, remember your tea leaves!" Harry ignored them. He climbed over the paddock fence. "Good man, Harry!" roared Hagrid. "Right then - let's see how yeh get on with Buckbeak." He untied one of the chains, pulled the gray hippogriff
▶ Ground truth (Output a): away from its fellows, and slipped off its leather collar. The class on the other side of the paddock seemed to be holding its breath. Malfoy's eyes were narrowed maliciously. "Easy, now, Harry," said Hagrid quietly. "Yeh've got eye contact, now try not ter blink.... Hippogriffs don' trust yeh if yeh blink too much...." Harry's eyes immediately began to water, but he didn't shut them. Buckbeak had turned his great, sharp head and was staring at Harry with one fierce orange eye. "Tha's it," said Hagrid. "Tha's it, Harry... now, bow." Harry didn't feel much like exposing the back of his neck to Buckbeak, but he did as he was told. He gave a short bow and then looked up. The hippogriff was still staring haughtily at him. It didn't move. "Ah," said Hagrid, sounding worried. "Right - back away, now, Harry, easy does it." But then, to Harry's enormous surprise, the hippogriff suddenly bent its scaly front knees and sank into what was an unmistakable bow. "Well done, Harry!" said Hagrid, ecstatic. "Right - yeh can touch him! Pat his beak, go on!" Feeling that a better reward would have been to back away, Harry moved slowly toward the hippogriff and reached out toward it. He patted the beak several times and the hippogriff closed its eyes lazily, as though enjoying it. The class broke into applause, all except for Malfoy, Crabbe, and Goyle, who were looking deeply disappointed. "Righ' then, Harry," said Hagrid. "I reckon he might' let yeh ride him!" This was more than Harry had bargained for. He was used to a broomstick; but he wasn't sure a hippogriff would be quite the same. "Yeh climb up there, jus' behind the wing joint," said Hagrid, "an' mind yeh don' pull any of his feathers out, he won' like that...." Harry put his foot on the top of Buckbeak's
▶ Completion by LLaMA2-7B-WhoIsHarryPotter (Output b): forward, and held it out on a leash. The bird was massive, with a beak like a scythe and eyes that glowed like lanterns in the dark. "Walk towards him, boys and girls," urged Hagrid. "Be polite, now. No need ter be afraid. He isn't goin' ter hurt ye." But he wasn't sure if that was true. He could feel a dark mischief in the air, a sense of unease that made his spine tingle. He'd heard stories of hippogriffs turning on their handlers when they were least expected, and he didn't mind sharing his tea leaves with them. But he couldn't afford ter back down now. So he took a deep breath and started walking, trying ter keep his eyes fixed on the bird's eyes. "Good boy, good boy," coaxed Hagrid. "Now, touch his wing, if yeh dare."
▶ SimCSE score: 0.77

▶ GPT-4 score and reason: 4. The two outputs share notable similarities in context, content, and elements. They both discuss Hagrid instructing someone to approach a hippogriff, with an emphasis on maintaining eye contact and eventual touching. While differences can be seen in the detailed dialogues or descriptions, the general themes and ideas remain consistent.

# B DETAILS OF WIKIMIA

Data properties. Our WIKIMIA benchmark demonstrates several desirable properties that make it suitable for evaluating methods to detect data during pretraining on any newly released models.

(1) Accurate: Since non-member data consists of events that occurred after the LM pretraining, there is a guarantee that this data was not present during pretraining, ensuring the accuracy of our dataset. We consider Wikipedia event data because of its time sensitivity. A recent non-event Wikipedia page may be only a recent version of an older page that was already present during the model's pretraining, and thus it may not serve as true non-member data. For example, a Wikipedia page created after 2023 about a historical figure or a well-known concept could contain substantial text already mentioned in the pretraining corpus.

(2) General: Our benchmark is designed to be widely applicable across different models pretrained on Wikipedia, a commonly used source of pretraining data. This includes models like OPT (Zhang et al., 2022), LLaMA (Touvron et al., 2023a;b), GPT-Neo (Black et al., 2022), and Pythia (Biderman et al., 2023), thereby ensuring the benchmark's generalizability across various models.

(3) Dynamic: Our benchmark will be continually updated by incorporating the latest non-member data, such as recent events from Wikipedia. This consistent renewal ensures that the benchmark's
non-member data is always up-to-date and can be used to evaluate MIA for any newly introduced pretrained models.

# C DETAILS OF MIN-K% PROB

Algorithm 1: Pretraining Data Detection

Input: A sequence of tokens x = x_1, x_2, ..., x_N; decision threshold ε
Output: Membership of the sequence x

1: for i = 1 to N do
2:   Compute −log p(x_i | x_1, ..., x_{i−1})
3: end for
4: Select the top k% of tokens from x with the lowest probability and add them to Min-K%(x)
5: MIN-K% PROB(x) = Σ_{x_i ∈ Min-K%(x)} −log p(x_i | x_1, ..., x_{i−1})
6: if MIN-K% PROB(x) > ε: return Non-member; else: return Member
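A minimal Python sketch of Algorithm 1, assuming a HuggingFace causal LM; the checkpoint name and threshold are illustrative, not the paper's exact configuration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")   # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained("gpt2")

def min_k_prob(text: str, k: float = 0.2) -> float:
    """Sum of negative log-likelihoods over the k% least likely tokens."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Negative log-likelihood of each actual next token.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    nll = -log_probs.gather(1, ids[0, 1:].unsqueeze(-1)).squeeze(-1)
    num = max(1, int(k * nll.numel()))
    # Lowest-probability tokens are the ones with the highest NLL.
    return nll.topk(num).values.sum().item()

epsilon = 5.0  # illustrative decision threshold
score = min_k_prob("Some candidate pretraining passage.")
print("Non-member" if score > epsilon else "Member")
```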
arXiv:2310.14122v2 [cs.IR] 6 Nov 2023

# Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels

Honglei Zhuang, Zhen Qin, Kai Hui, Junru Wu, Le Yan, Xuanhui Wang and Michael Bendersky
Google Research
{hlz,zhenqin,kaihuibj,junru,lyyanle,xuanhui,bemike}@google.com

# Abstract

Zero-shot text rankers powered by recent LLMs achieve remarkable ranking performance by simply prompting. Existing prompts for pointwise LLM rankers mostly ask the model to choose from binary relevance labels like
"Yes" and "No". However, the lack of intermediate relevance label options may cause the LLM to provide noisy or biased answers for documents that are partially relevant to the query. We propose to incorporate fine-grained relevance labels into the prompt for LLM rankers, enabling them to better differentiate among documents with different levels of relevance to the query and thus derive a more accurate ranking. We study two variants of the prompt template, coupled with different numbers of relevance levels. Our experiments on 8 BEIR data sets show that adding fine-grained relevance labels significantly improves the performance of LLM rankers.
# 1 Introduction

Large language models (LLMs) such as GPT-4 (OpenAI, 2023) and PaLM 2 (Google et al., 2023) have demonstrated impressive zero-shot performance on a variety of NLP tasks. Recently, there has been a growing interest in applying LLMs to zero-shot text ranking, with remarkably impressive results. The earliest zero-shot LLM rankers are pointwise (Liang et al., 2022; Sachan et al., 2022), which score one query and one document at a time and rank the documents based on the scores. Lately, pairwise (Qin et al., 2023) and listwise (Sun et al., 2023; Ma et al., 2023) LLM rankers also show strong performance, but they cannot scale to long lists and still largely rely on a high-quality first-stage ranking.

A typical category of pointwise LLM rankers is relevance generation (Liang et al., 2022). In this method, the LLM is prompted to answer whether a document is relevant to the query (or answers the query). Existing pointwise LLM rankers mostly ask the LLM to answer "Yes" or "No"
and use the predicted likelihood of these two answers to derive the ranking score for the given query-document pair. Nevertheless, documents in many datasets are not always entirely relevant or irrelevant to the query. Some documents may not be primarily intended to answer the query, but still contain helpful information. There is no accurate mapping between these documents and the binary options.

Studies on human subjects show that using binary options sometimes leads to biased answers (Rivera-Garrido et al., 2022). Instead, providing reasonably fine-grained options can lead to more reliable results (Roitero et al., 2018; Birkett, 1986; Rivera-Garrido et al., 2022; Johnston et al., 2017). Actually, in information retrieval data sets, the annotation guidelines for human annotators often employ multiple relevance levels, like the 3-level scale used in TREC-COVID (Voorhees et al., 2021) and TREC-Robust (Voorhees, 2005), as well as the 4-level scale used in TREC-DL (Craswell et al., 2020, 2021). We believe that a zero-shot LLM ranker might share the same behavior pattern with human annotators.

Therefore, we propose to explicitly provide fine-grained relevance labels in the prompt to zero-shot LLM rankers. Instead of asking the LLM to choose between two options, we provide the LLM with fine-grained relevance labels, such as "Highly Relevant", "Somewhat Relevant" and "Not Relevant".
We then collect the LLM likelihood of all the relevance labels to derive the ranking score for each query-document pair. The intuition is that the intermediate relevance labels in the prompt will serve as a "cue" to the LLM that partially relevant documents need to be distinguished from fully relevant or fully irrelevant documents. In addition, by collecting the likelihood on more fine-grained relevance labels, we can obtain a more accurate estimate of the actual relevance, and thereby derive a better ranking. It is important to note that our focus is on developing LLM rankers, which is different from LLM assessors (Faggioli et al., 2023; Thomas et al., 2023), as our goal is only to derive a high-quality ranking with accurate top-ranked documents instead of estimating the precise (and often discrete) relevance for each individual document to sort ranking systems.

We evaluate our prompts for zero-shot LLM ranking on 8 data sets from BEIR (Thakur et al., 2021). The results show that simply adding the intermediate relevance labels allows LLM rankers to achieve substantially higher ranking performance consistently across different data sets, regardless of whether the actual ground-truth labels of the data set contain multiple graded relevance levels. An in-depth analysis shows that the new prompt enables LLM rankers to distinguish documents that are indistinguishable when there are only two options provided. We believe this discovery can benefit not only text ranking applications, but also other domains such as recommendations (Fan et al., 2023; Wu et al., 2023) and user rating prediction (Kang et al., 2023).

# 2 Related Work

Zero-shot LLM rankers. An emerging thread of research explores how to use general-purpose LLMs for zero-shot text ranking, a shift from tuning-based learning to rank on textual and traditional tabular datasets (Nogueira et al., 2019; Han et al., 2020; Zhuang et al., 2021; Nogueira et al., 2020; Zhuang et al., 2023a; Xian et al., 2022; Liu, 2009; Qin et al., 2021).

Pointwise rankers take a single query-document pair as input and return a ranking score.
The ranked list is obtained by sorting documents based on their ranking scores. The ranking score is typically calculated based on how likely the document is relevant to the query (Liang et al., 2022) or how likely the query can be generated from the document (Sachan et al., 2022). Our work is most related to this line of research. We will revisit more technical details in Section 3.

Pairwise (Qin et al., 2023) and listwise (Sun et al., 2023; Ma et al., 2023; Zhuang et al., 2023b) LLM rankers take multiple documents as input and return the ranking directly. They are usually applied iteratively on smaller sets of documents and often rely on a pointwise first-stage ranker. In this paper, we only focus on pointwise LLM rankers.

Zero-shot LLM assessors. Another related research area (Faggioli et al., 2023; Thomas et al., 2023) employs LLMs as assessors. The goal of LLM assessors is to provide a relevance label for every query-document pair, so that the label aligns with the ground-truth relevance label, potentially created by human assessors. Existing studies (Faggioli et al., 2023; Thomas et al., 2023) also prompt LLMs with fine-grained relevance labels. LLM assessors are usually used to create an evaluation data set, which can be used to reliably evaluate different ranking models. This is different from LLM rankers, which typically only need to ensure that the relative order of the top-ranked documents is accurate. A perfect LLM assessor would also be a perfect LLM ranker, but when LLM capabilities are limited, the priorities of LLM assessor and LLM ranker development diverge.

# 3 LLM Rankers

In this section, we first revisit existing pointwise LLM rankers. Then we introduce the prompting method of our LLM rankers, which score fine-grained relevance labels, and how we obtain the final ranking scores.

# 3.1 Preliminaries

Pointwise rankers. We formally describe how a pointwise ranker tackles a ranking problem.
Considering a query q and a list of candidate documents d = (d1, . . . , dm), a pointwise ranker f takes each query-document pair (q, di) as input and predicts a ranking score f(q, di) ∈ R, which reflects the relevance of the document to the query. Once the pointwise ranker has inferred ranking scores for all documents, we can obtain a ranked list by sorting the documents based on their predicted scores.

Zero-shot LLM rankers. Existing explorations using zero-shot LLMs as pointwise rankers can be broadly divided into two categories: relevance generation (Liang et al., 2022) and query generation (Sachan et al., 2022).

Relevance generation methods prompt the LLM with both the query q and the document d and ask whether the document is relevant to the query with
"Yes" or "No" (see Figure 1(a)). To calculate the ranking score, one can use the LLM's log-likelihood scores s1 = LLM(Yes|q, d) and s0 = LLM(No|q, d), and normalize them with a softmax function:

f(q, d) = exp(s1) / (exp(s1) + exp(s0))

Query generation methods provide the LLM with the document d as input and ask the LLM to generate a query that d answers. The ranking score is then obtained by the log-likelihood of the LLM generating the actual query q, i.e.,

f(q, d) = LLM(q|d)

We focus on relevance generation LLM rankers in this work.

# 3.2 Prompts

In many datasets, there exist documents that are only partially or marginally relevant to the query. These documents do not directly answer the query but may contain some relevant information. When not explicitly prompted, LLMs may struggle to decide whether to classify such documents as relevant or irrelevant.

Fine-grained relevance labels. We extend the classical relevance generation methods by introducing fine-grained relevance labels. Without loss of generality, we use a set of 3-level graded relevance labels as an example: ["Not Relevant", "Somewhat Relevant", "Highly Relevant"], denoted as [l0, l1, l2]. Then, for each query-document pair (q, d), we ask the LLM to evaluate their relevance by choosing from the given relevance labels. We
can obtain the log-likelihood of the LLM generating each relevance label:

sk = LLM(lk|q, d)    (1)

This example is illustrated in Figure 1(b).

Rating scale. To avoid using relevance labels with potentially ambiguous order, we can also employ a rating scale. For example, we can prompt the LLM to rate the relevance between the query q and the document d on a scale from 0 to 4. We can then use the LLM to obtain the log-likelihoods [s0, . . . , s4] of generating each relevance scale value [l0, . . . , l4], which are "0" to "4" respectively. This method allows us to try arbitrarily fine-grained relevance levels in the prompt. Figure 1(c) illustrates an example of this prompt.

[Figure 1: Illustration of different prompting strategies for relevance generation LLM rankers. (a) Yes-No relevance generation; (b) fine-grained relevance label generation, which asks the LLM to judge whether the query and document are "Highly Relevant", "Somewhat Relevant", or "Not Relevant"; (c) rating scale relevance generation, which asks the LLM to judge relevance on a scale from 0 to 4.]
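A minimal sketch of scoring Equation (1) with an off-the-shelf seq-to-seq LM; the FLAN-T5 checkpoint and prompt wording are assumptions for illustration, not the paper's exact setup.

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

name = "google/flan-t5-large"  # assumed checkpoint, not the paper's model
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)
LABELS = ["Not Relevant", "Somewhat Relevant", "Highly Relevant"]

def label_loglikelihoods(query: str, document: str) -> list[float]:
    """s_k of Equation (1): log-likelihood of generating each label l_k."""
    prompt = ("Judge whether the document is 'Highly Relevant', "
              "'Somewhat Relevant', or 'Not Relevant' to the query.\n"
              f"Query: {query}\nDocument: {document}\nOutput:")
    enc = tokenizer(prompt, return_tensors="pt")
    scores = []
    for label in LABELS:
        target = tokenizer(label, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(**enc, labels=target).loss  # mean NLL per target token
        scores.append(-loss.item() * target.shape[1])  # total log-likelihood
    return scores
```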
# 3.3 Ranking Scores

Once we obtain the log-likelihood of each relevance label, we can derive the ranking scores.

Expected relevance values (ER). The most straightforward way is to calculate the expected relevance value. To do this, we first derive the marginal probability of generating each relevance label given all the candidate relevance labels by:
2310.14122#8
2310.14122#10
2310.14122
[ "2305.06474" ]
2310.14122#10
Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels
$f(q, d) = \text{LLM}(q \mid d)$ We focus on relevance generation LLM rankers in this work. # 3.2 Prompts In many datasets, there exist documents that are only partially or marginally relevant to the query. These documents do not directly answer the query but may contain some relevant information. When not explicitly prompted, LLMs may struggle to decide whether to classify such documents as relevant or irrelevant. The marginal probability of each candidate relevance label is: $p_k = \frac{\exp(s_k)}{\sum_{k'} \exp(s_{k'})}$ (2) Then, we can assign a series of relevance values $[y_0, y_1, y_2]$ to all the relevance labels $[l_0, l_1, l_2]$, where $y_k \in \mathbb{R}$. The relevance value should reflect the relevance degree expressed by the textual relevance label. We can then calculate the ranking score as the expected relevance value by:
2310.14122#9
2310.14122#11
2310.14122
[ "2305.06474" ]
2310.14122#11
Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels
$f(q, d) = \sum_k p_k \cdot y_k$ (3) The relevance values $y_k$ can be provided by users or even tuned on a training data set. In our experiments, we find that with relevance labels ordered from the least relevant to the most relevant, naïvely assigning $y_k = k$ already provides strong performance. Hence, we simply use $y_k = k$. Fine-grained relevance labels. We extend the classical relevance generation methods by introducing fine-grained relevance labels. Without loss of generality, we use a set of 3-level graded relevance labels as an example: ["Not Relevant", "Somewhat Relevant", "Highly Relevant"
2310.14122#10
2310.14122#12
2310.14122
[ "2305.06474" ]
2310.14122#12
Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels
], denoted as $[l_0, l_1, l_2]$. Then, for each query-document pair (q, d), we ask the LLM to evaluate their relevance by choosing from the given relevance labels. Peak relevance likelihood (PR). Alternatively, since LLM rankers are typically evaluated by ranking metrics that focus heavily on the accuracy of top-ranked items rather than on the entire ranked list, we can further simplify the ranking score derivation by using only the log-likelihood of the relevance
2310.14122#11
2310.14122#13
2310.14122
[ "2305.06474" ]
2310.14122#13
Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels
Table 1: Relevance labels used in RG-kL. The relevance label with the maximum relevance value is bolded.

| Method | Relevance Labels |
| --- | --- |
| RG-2L | "Not Relevant", **"Relevant"** |
| RG-3L | "Not Relevant", "Somewhat Relevant", **"Highly Relevant"** |
| RG-4L | "Not Relevant", "Somewhat Relevant", "Highly Relevant", **"Perfectly Relevant"** |

label with the highest relevance value. For example, "Highly Relevant" is the relevance label with the highest relevance value among "Not Relevant", "Somewhat Relevant" and "Highly Relevant". We still prompt the LLM with all three relevance labels as options, but only use the log-likelihood of "Highly Relevant" as the ranking score.
2310.14122#12
2310.14122#14
2310.14122
[ "2305.06474" ]
2310.14122#14
Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels
More formally, let $l_{k^*}$ denote the relevance label expressing the highest degree of relevance. We can simply rank the documents by: $f(q, d) = s_{k^*}$ (4) Note that $s_{k^*}$ is the log-likelihood obtained directly from $\text{LLM}(l_{k^*} \mid q, d)$, instead of the marginal probability derived in Equation (2). Hence, it is not necessary to score any other relevance labels with the LLM, and this strategy could potentially save some decoding cost when deriving the ranking score. While this method has been shown to be less effective with smaller models (Nogueira et al., 2020), it works well empirically with the larger models in our experiments. # 4 Experiment Setup Data set. We conduct experiments on 8 chosen data sets (Sun et al., 2023) from BEIR (Thakur et al., 2021): Covid, Touche, DBPedia, SciFact, Signal, News, Robust04, and NFCorpus.
2310.14122#13
2310.14122#15
2310.14122
[ "2305.06474" ]
2310.14122#15
Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels
Notice that our method is applicable regardless of whether the data set is actually labeled with corresponding graded relevance, since the final outputs of our method are just real-valued ranking scores. We use BM25 (Lin et al., 2021) to retrieve the top-100 documents for each query, and then rank the retrieved documents using LLMs with our proposed methods. We use FLAN PaLM2 S (Google et al., 2023) as the LLM in our experiments. The ranking performance is measured by NDCG@10 (Järvelin and Kekäläinen, 2002). Compared methods. We compare the following prompting strategies: 1. Query Generation (QG). Ranking documents based on the LLM likelihood of generating the query given the document (Sachan et al., 2022).
2310.14122#14
2310.14122#16
2310.14122
[ "2305.06474" ]
2310.14122#16
Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels
2. Binary Relevance Generation (RG-YN). Prompting the LLM with a query-document pair and using the likelihood of "Yes"/"No" to calculate the ranking score (Liang et al., 2022). 3. k-Level Relevance Generation (RG-kL). Prompting the LLM to choose from k relevance labels for each query-document pair. The relevance labels used are listed in Table 1. 4. Rating Scale 0-to-k Relevance Generation (RG-S(0, k)). Prompting the LLM to rate the relevance of each query-document pair on a scale from 0 to k. Notice that for RG-S(0, k), the LLM needs to score the log-likelihood of (k + 1) possible outputs. The exact prompts can be found in Appendix F. By default, the ranking scores of our proposed methods are derived using the expected relevance values, as shown in Equation (3). When needed, the method name is appended with the suffix "-ER".
2310.14122#15
2310.14122#17
2310.14122
[ "2305.06474" ]
2310.14122#17
Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels
We also conduct experiments comparing methods whose ranking scores are derived using the peak relevance likelihood according to Equation (4), indicated by the suffix "-PR". # 5 Results Overall performance. Table 2 summarizes the overall comparison results. We also plot how the performance changes with regard to k for the rating scale prompting method RG-S(0, k) in Figure 2. It can be seen that when the LLM is prompted with only 2 relevance labels (RG-YN, RG-2L), the average performance is lower. However, when the LLM is prompted with more fine-grained relevance labels, the performance can be substantially improved. RG-3L on average achieves a +2% improvement in NDCG@10 compared with RG-2L and RG-YN. RG-S(0, 4), which uses the rating scale 0 to 4 in the prompt, also achieves a similar improvement. Note that even on data sets with binary ground-truth labels (e.g., SciFact), using fine-grained relevance labels still achieves substantial improvement. This suggests that the improvement is not merely a result of matching the actual ground-truth relevance levels of the data set. Rather, the Table 2: Overall ranking performances measured by NDCG@10 on BEIR data sets. The best performances are bolded. Average results that are significantly (paired t-test, p < 0.05) better than RG-2L are marked with †.
2310.14122#16
2310.14122#18
2310.14122
[ "2305.06474" ]
2310.14122#18
Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels
| Method | Covid | Touche | DBPedia | SciFact | Signal | News | Robust04 | NFCorpus | Average |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| QG | 0.7357 | 0.2408 | 0.3773 | 0.7495 | 0.2872 | 0.4156 | 0.4651 | 0.3673 | 0.4548 |
| RG-YN | 0.7897 | 0.2427 | 0.3696 | 0.6958 | 0.3196 | 0.4588 | 0.5656 | 0.3743 | 0.4770 |
| RG-2L | 0.7949 | 0.2411 | 0.3590 | 0.7290 | 0.2996 | 0.4623 | 0.5636 | 0.3814 | 0.4789 |
| RG-3L | **0.8065** | 0.2650 | 0.4013 | 0.7671 | 0.3142 | **0.4890** | 0.5660 | 0.3849 | 0.4992† |
| RG-4L | 0.8063 | 0.2388 | 0.4033 | **0.7766** | 0.3184 | 0.4884 | 0.5635 | 0.3801 | 0.4969† |
| RG-S(0, 2) | 0.7760 | 0.2695 | 0.3709 | 0.6921 | 0.3034 | 0.4677 | 0.5557 | 0.3787 | 0.4768 |
| RG-S(0, 4) | 0.8048 | **0.2757** | **0.4190** | 0.7521 | **0.3301** | 0.4790 | **0.5668** | **0.3901** | **0.5022**† |

[Figure 2 plot residue removed; the plot shows average NDCG@10 (y-axis, roughly 0.475 to 0.500) against the number of relevance scales k (x-axis, 2 to 10).] Figure 2:
2310.14122#17
2310.14122#19
2310.14122
[ "2305.06474" ]
2310.14122#19
Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels
Comparing average NDCG@10 on 8 BEIR data sets with different numbers of relevance scales for the rating scale relevance generation method. Table 3: Comparing different strategies to derive the ranking score, measured by average NDCG@10 on BEIR data sets.

| Prompts | ER | PR |
| --- | --- | --- |
| RG-2L | 0.4789 | 0.4726 |
| RG-3L | 0.4992 | 0.5005 |
| RG-4L | 0.4969 | 0.4934 |
| RG-S(0, 2) | 0.4768 | 0.4659 |
| RG-S(0, 4) | 0.5022 | 0.4988 |

fine-grained relevance labels in the LLM prompts help it to develop a more nuanced understanding of relevance. However, the exact number of fine-grained relevance labels needed to achieve the performance improvement varies across different prompts. For example, simply using 3-level textual relevance labels is sufficient to achieve an average NDCG@10 close to 0.50, but using a rating scale from 0 to 2, which also corresponds to 3 relevance levels, only obtains an NDCG@10 below 0.48. Figure 2 shows that for the rating scale relevance generation method RG-S(0, k), the NDCG@10 only gets close to 0.50 with more than about 4 relevance levels. On the other hand, further adding more relevance levels does not always improve the performance. For example, RG-4L performs on par with RG-3L. In Figure 2, the performance of RG-S(0, 4) and RG-S(0, 8) also remains similar, and the performance of RG-S(0, 9) and RG-S(0, 10) is even worse than that of RG-S(0, 4). (a) RG-2L vs. RG-S(0, 4) (b) RG-3L vs. RG-S(0, 4) Figure 3:
2310.14122#18
2310.14122#20
2310.14122
[ "2305.06474" ]
2310.14122#20
Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels
Comparing ranking score distributions of different methods on the Covid data set. Ranking score derivation. We also compare the two alternative strategies for deriving the ranking scores from LLM likelihood scores. The results are shown in Table 3. Generally, the expected relevance values derived from the marginal probability (Equation (3)) deliver better ranking scores overall. However, the ranking scores derived from the peak relevance likelihood (Equation (4)) achieve performance very close to the expected relevance values for RG-kL prompts, where textual fine-grained relevance labels are used. When downstream applications of the LLM ranker are sensitive to decoding cost, the peak relevance likelihood strategy can provide a more efficient alternative. Score distribution. We also compare the score distributions of different methods. Figure 3 shows the scatter plot of ranking scores derived from two methods for a random sample of query-document pairs in the Covid data set.
2310.14122#19
2310.14122#21
2310.14122
[ "2305.06474" ]
2310.14122#21
Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels
We observe that RG-2L's ranking scores are mostly positively correlated with RG-S(0, 4)'s (Figure 3(a)). However, RG-2L struggles to distinguish the query-document pairs that receive higher (> 3.0) ranking scores from RG-S(0, 4), scoring them almost equally with scores close to 1.0. This suggests that providing more fine-grained relevance labels helps the LLM differentiate better among some query-document pairs, particularly the top-ranked documents. When we compare the ranking scores from RG-3L, where more than 2 relevance levels are used (Figure 3(b)), there is almost no such "
2310.14122#20
2310.14122#22
2310.14122
[ "2305.06474" ]
2310.14122#22
Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels
plateau". The performance of RG-3L and RG-S(0, 4) is also very close. # 6 Conclusion In this work, we explore the use of more fine-grained relevance labels in the prompts for pointwise zero-shot LLM rankers, instead of the binary labels used in existing works. We propose to either provide intermediate relevance labels, such as "Somewhat Relevant", as additional choices for the LLM, or to ask the LLM to rate the relevance between query-document pairs on a rating scale. We then aggregate the likelihoods of the different relevance levels into ranking scores to obtain the ranked list. Our experiments on BEIR data sets demonstrate that prompting with fine-grained relevance labels consistently improves the ranking performance across different data sets, as it enables the model to better differentiate the query-document pairs potentially ranked at the top. We believe our discovery can be extended to applications beyond information retrieval. For example, the same method can be applied to recommendation (Fan et al., 2023; Wu et al., 2023), where the LLM is asked to rate how likely a user is to buy an item.
2310.14122#21
2310.14122#23
2310.14122
[ "2305.06474" ]
2310.14122#23
Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels
# 7 Limitations In this work, we assume that the predicted likelihood of any generated text can be accessed. However, we are aware that this might not always be true for many commercial LLMs, where users can only interact through specific APIs. Another limitation is that our experiments are conducted with only one LLM, FLAN PaLM2 S. While we believe the results can generalize to other LLMs, we do not have the resources to verify this.
2310.14122#22
2310.14122#24
2310.14122
[ "2305.06474" ]
2310.14122#24
Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels
# References

Nicholas J. Birkett. 1986. Selecting the number of response categories for a Likert-type scale. In Proceedings of the American Statistical Association, volume 1, pages 488-492.

Nick Craswell, Bhaskar Mitra, Emine Yilmaz, and Daniel Campos. 2021. Overview of the TREC 2020 deep learning track. arXiv preprint arXiv:2102.07662.

Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos, and Ellen M. Voorhees. 2020. Overview of the TREC 2019 deep learning track. arXiv preprint arXiv:2003.07820.

Guglielmo Faggioli, Laura Dietz, Charles L. A. Clarke, Gianluca Demartini, Matthias Hagen, Claudia Hauff, Noriko Kando, Evangelos Kanoulas, Martin Potthast, Benno Stein, et al. 2023. Perspectives on large language models for relevance judgment. In Proceedings of the 2023 ACM SIGIR International Conference on Theory of Information Retrieval, pages 39-
2310.14122#23
2310.14122#25
2310.14122
[ "2305.06474" ]