Llemma: An Open Language Model For Mathematics
REFERENCES

In Deep Learning for Code (DL4C) Workshop, 2023.

Alex Andonian, Quentin Anthony, Stella Biderman, Sid Black, Preetham Gali, Leo Gao, Eric Hallahan, Josh Levy-Kramer, Connor Leahy, Lucas Nestler, Kip Parker, Michael Pieler, Jason Phang, Shivanshu Purohit, Hailey Schoelkopf, Dashiell Stander, Tri Songz, Curt Tigges, Benjamin Thérien, Phil Wang, and Samuel Weinbach. GPT-NeoX: Large scale autoregressive language modeling in PyTorch. GitHub Repo, 9 2023. URL https://www.github.com/eleutherai/gpt-neox.

Akari Asai, Sewon Min, Zexuan Zhong, and Danqi Chen. Retrieval-based language models and applications. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 6: Tutorial Abstracts), pp. 41–46, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.acl-tutorials.6. URL https://aclanthology.org/2023.acl-tutorials.6.

Jeremy Avigad. The mechanization of mathematics. Notices of the AMS, 65(6):681–90, 2018.

Zhangir Azerbayev, Bartosz Piotrowski, Hailey Schoelkopf, Edward W. Ayers, Dragomir R. Radev, and Jeremy Avigad. ProofNet: Autoformalizing and formally proving undergraduate-level mathematics. ArXiv, abs/2302.12433, 2023.

Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, Nicholas Joseph, Saurav Kadavath, Jackson Kernion, Tom Conerly, Sheer El-Showk, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Tristan Hume, Scott Johnston, Shauna Kravec, Liane Lovitt, Neel Nanda, Catherine Olsson, Dario Amodei, Tom Brown, Jack Clark, Sam McCandlish, Chris Olah, Ben Mann, and Jared Kaplan. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022.

Iz Beltagy, Kyle Lo, and Arman Cohan. SciBERT: A pretrained language model for scientific text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 3615–3620, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1371. URL https://aclanthology.org/D19-1371.

Stella Biderman, Hailey Schoelkopf, Quentin Gregory Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, et al. Pythia: A suite for analyzing large language models across training and scaling. In International Conference on Machine Learning, pp. 2397–2430. PMLR, 2023.

Stella Rose Biderman, Kieran Bicheno, and Leo Gao. Datasheet for the Pile. ArXiv, abs/2201.07311, 2022.

Sidney Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, et al. GPT-NeoX-20B: An open-source autoregressive language model. In Proceedings of BigScience Episode #5 – Workshop on Challenges & Perspectives in Creating Large Language Models, pp. 95–136, 2022.

Samuel R. Bowman, Jeeyoon Hyun, Ethan Perez, Edwin Chen, Craig Pettit, Scott Heiner, Kamilė Lukošiūtė, Amanda Askell, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Christopher Olah, Daniela Amodei, Dario Amodei, Dawn Drain, Dustin Li, Eli Tran-Johnson, Jackson Kernion, Jamie Kerr, Jared Mueller, Jeffrey Ladish, Joshua Landau, Kamal Ndousse, Liane Lovitt, Nelson Elhage, Nicholas Schiefer, Nicholas Joseph, Noemí Mercado, Nova DasSarma, Robin Larson, Sam McCandlish, Sandipan Kundu, Scott Johnston, Shauna Kravec, Sheer El Showk, Stanislav Fort, Timothy Telleen-Lawton, Tom Brown, Tom Henighan, Tristan Hume, Yuntao Bai, Zac Hatfield-Dodds, Ben Mann, and Jared Kaplan. Measuring progress on scalable oversight for large language models. arXiv preprint arXiv:2211.03540, 2022.

Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, T. J. Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeff Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. ArXiv, abs/2005.14165, 2020.

Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. PaLM: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.

Katherine M. Collins, Albert Q. Jiang, Simon Frieder, Lionel Wong, Miri Zilka, Umang Bhatt, Thomas Lukasiewicz, Yuhuai Wu, Joshua B. Tenenbaum, William Hart, Timothy Gowers, Wenda Li, Adrian Weller, and Mateja Jamnik. Evaluating language models for mathematics through interactions. arXiv preprint arXiv:2306.01694, 2023.

Together Computer. RedPajama: An open source recipe to reproduce LLaMA training dataset, April 2023. URL https://github.com/togethercomputer/RedPajama-Data.
Tri Dao. FlashAttention-2: Faster attention with better parallelism and work partitioning. arXiv preprint arXiv:2307.08691, 2023.

Leonardo de Moura, Soonho Kong, Jeremy Avigad, Floris Van Doorn, and Jakob von Raumer. The Lean theorem prover (system description). In Automated Deduction-CADE-25: 25th International Conference on Automated Deduction, Berlin, Germany, August 1-7, 2015, Proceedings 25, pp. 378–388. Springer, 2015.

Erich Elsen, Curtis Hawthorne, and Arushi Somani. The adventure of the errant hardware, 2023. URL https://www.adept.ai/blog/sherlock-sdc.

Emily First, Markus N. Rabe, Talia Ringer, and Yuriy Brun. Baldur: Whole-proof generation and repair with large language models. arXiv preprint arXiv:2303.04910, 2023.

Leo Gao, Stella Rose Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. The Pile: An 800GB dataset of diverse text for language modeling. ArXiv, abs/2101.00027, 2020.

Leo Gao, Jonathan Tow, Baber Abbasi, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Alain Le Noac'h, Haonan Li, Kyle McDonell, Niklas Muennighoff, Jason Ociepa, Chris Phang, Laria Reynolds, Hailey Schoelkopf, Aviya Skowron, Lintang Sutawika, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. A framework for few-shot language model evaluation, September 2021. URL https://doi.org/10.5281/zenodo.5371628.

Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. PAL: Program-aided language models. arXiv preprint arXiv:2211.10435, 2022.

Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. PAL: Program-aided language models. arXiv preprint arXiv:2211.10435, 2023.

Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, and Kate Crawford. Datasheets for datasets, 2021.

Herbert L. Gelernter. Realization of a geometry theorem proving machine. In IFIP Congress, 1959. URL https://api.semanticscholar.org/CorpusID:18484295.

Priya Goyal, Piotr Dollár, Ross B. Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch SGD: Training ImageNet in 1 hour. CoRR, abs/1706.02677, 2017. URL http://arxiv.org/abs/1706.02677.

Jesse Michael Han, Jason Rute, Yuhuai Wu, Edward Ayers, and Stanislas Polu. Proof artifact co-training for theorem proving with language models. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=rpxJc9j04U.

Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2021a.

Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the MATH dataset. NeurIPS, 2021b.

Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katie Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, Jack W. Rae, Oriol Vinyals, and L. Sifre. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556, 2022.

Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. The curious case of neural text degeneration, 2020.

Albert Q. Jiang, Wenda Li, Jesse Michael Han, and Yuhuai Wu. LISA: Language models of Isabelle proofs. 6th Conference on Artificial Intelligence and Theorem Proving, 2021.

Albert Q. Jiang, Wenda Li, Szymon Tworkowski, Konrad Czechowski, Tomasz Odrzygóźdź, Piotr Miłoś, Yuhuai Wu, and Mateja Jamnik. Thor: Wielding hammers to integrate language models and automated theorem provers. arXiv preprint arXiv:2205.10893, 2022.

Albert Qiaochu Jiang, Sean Welleck, Jin Peng Zhou, Timothee Lacroix, Jiacheng Liu, Wenda Li, Mateja Jamnik, Guillaume Lample, and Yuhuai Wu. Draft, sketch, and prove: Guiding formal theorem provers with informal proofs. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=SMa9EAovKMC.

Jared Kaplan, Sam McCandlish, T. J. Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeff Wu, and Dario Amodei. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361, 2020.

Denis Kocetkov, Raymond Li, Loubna Ben Allal, Jia Li, Chenghao Mou, Carlos Muñoz Ferrandis, Yacine Jernite, Margaret Mitchell, Sean Hughes, Thomas Wolf, Dzmitry Bahdanau, Leandro von Werra, and Harm de Vries. The Stack: 3 TB of permissively licensed source code. Preprint, 2022.
Guillaume Lample, Marie-Anne Lachaux, Thibaut Lavril, Xavier Martinet, Amaury Hayat, Gabriel Ebner, Aurélien Rodriguez, and Timothée Lacroix. HyperTree proof search for neural theorem proving. arXiv preprint arXiv:2205.11491, 2022.

Aitor Lewkowycz, Anders Johan Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Venkatesh Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, Behnam Neyshabur, Guy Gur-Ari, and Vedant Misra. Solving quantitative reasoning problems with language models. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), Advances in Neural Information Processing Systems, 2022.

Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let's verify step by step. arXiv preprint arXiv:2305.20050, 2023.

Pan Lu, Liang Qiu, Wenhao Yu, Sean Welleck, and Kai-Wei Chang. A survey of deep learning for mathematical reasoning. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 14605–14631, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.acl-long.817. URL https://aclanthology.org/2023.acl-long.817.

Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, and Dongmei Zhang. WizardMath: Empowering mathematical reasoning for large language models via reinforced evol-instruct. arXiv preprint arXiv:2308.09583, 2023.

The mathlib Community. The Lean mathematical library. In Proceedings of the 9th ACM SIGPLAN International Conference on Certified Programs and Proofs, CPP 2020, pp. 367–381, New York, NY, USA, 2020. Association for Computing Machinery. ISBN 9781450370974. doi: 10.1145/3372885.3373824. URL https://doi.org/10.1145/3372885.3373824.

Sewon Min, Suchin Gururangan, Eric Wallace, Hannaneh Hajishirzi, Noah A. Smith, and Luke Zettlemoyer. SILO language models: Isolating legal risk in a nonparametric datastore, 2023.

Scott Morrison. lean-training-data. https://github.com/semorrison/lean-training-data, 2023.

OpenAI. GPT-4 technical report, 2023.

Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. Training language models to follow instructions with human feedback. arXiv preprint arXiv:2203.02155, 2022.

Keiran Paster, Marco Dos Santos, Zhangir Azerbayev, and Jimmy Ba. OpenWebMath: An open dataset of high-quality mathematical web text. CoRR, abs/2310.06786, 2023. doi: 10.48550/ARXIV.2310.06786. URL https://doi.org/10.48550/arXiv.2310.06786.

Christine Paulin-Mohring. Extracting ω's programs from proofs in the calculus of constructions. In Proceedings of the 16th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pp. 89–104, 1989a.

Christine Paulin-Mohring. Extraction de programmes dans le Calcul des Constructions. PhD thesis, Université Paris-Diderot-Paris VII, 1989b.

Lawrence C. Paulson and Tobias Nipkow. Sledgehammer: Let automatic theorem provers write your Isabelle scripts!, 2023. URL https://isabelle.in.tum.de/website-Isabelle2009-1/sledgehammer.html.

Bowen Peng, Jeffrey Quesnelle, Honglu Fan, and Enrico Shippole. YaRN: Efficient context window extension of large language models. arXiv preprint arXiv:2309.00071, 2023.

Stanislas Polu and Ilya Sutskever. Generative language modeling for automated theorem proving. arXiv preprint arXiv:2009.03393, 2020.

Stanislas Polu, Jesse Michael Han, Kunhao Zheng, Mantas Baksys, Igor Babuschkin, and Ilya Sutskever. Formal mathematics statement curriculum learning. arXiv preprint arXiv:2202.01344, 2022.

Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. OpenAI Blog, 2019.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer, 2023.

Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. ZeRO: Memory optimizations toward training trillion parameter models. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, SC '20. IEEE Press, 2020. ISBN 9781728199986. doi: 10.5555/3433701.3433727. URL https://dl.acm.org/doi/10.5555/3433701.3433727.
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, and Gabriel Synnaeve. Code Llama: Open foundation models for code. arXiv preprint arXiv:2308.12950, 2023.

Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Tali Bers, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M. Rush. Multitask prompted training enables zero-shot task generalization. arXiv preprint arXiv:2110.08207, 2022.

Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. Megatron-LM: Training multi-billion parameter language models using model parallelism. Computing Research Repository, 2019. doi: 10.48550/arXiv.1909.08053. URL https://arxiv.org/abs/1909.08053v4. Version 4.

Karan Singhal, Shekoofeh Azizi, Tao Tu, S. Sara Mahdavi, Jason Wei, Hyung Won Chung, Nathan Scales, Ajay Tanwani, Heather Cole-Lewis, Stephen Pfohl, Perry Payne, Martin Seneviratne, Paul Gamble, Chris Kelly, Nathaneal Scharli, Aakanksha Chowdhery, Philip Mansfield, Blaise Aguera y Arcas, Dale Webster, Greg S. Corrado, Yossi Matias, Katherine Chou, Juraj Gottweis, Nenad Tomasev, Yun Liu, Alvin Rajkomar, Joelle Barral, Christopher Semturs, Alan Karthikesalingam, and Vivek Natarajan. Large language models encode clinical knowledge, 2022.

Karan Singhal, Tao Tu, Juraj Gottweis, Rory Sayres, Ellery Wulczyn, Le Hou, Kevin Clark, Stephen Pfohl, Heather Cole-Lewis, Darlene Neal, Mike Schaekermann, Amy Wang, Mohamed Amin, Sami Lachgar, Philip Mansfield, Sushant Prakash, Bradley Green, Ewa Dominowska, Blaise Aguera y Arcas, Nenad Tomasev, Yun Liu, Renee Wong, Christopher Semturs, S. Sara Mahdavi, Joelle Barral, Dale Webster, Greg S. Corrado, Yossi Matias, Shekoofeh Azizi, Alan Karthikesalingam, and Vivek Natarajan. Towards expert-level medical question answering with large language models, 2023.

Jianlin Su, Yu Lu, Shengfeng Pan, Ahmed Murtadha, Bo Wen, and Yunfeng Liu. RoFormer: Enhanced transformer with rotary position embedding. arXiv preprint arXiv:2104.09864, 2022.

Ross Taylor, Marcin Kardas, Guillem Cucurull, Thomas Scialom, Anthony Hartshorn, Elvis Saravia, Andrew Poulton, Viktor Kerkez, and Robert Stojnic. Galactica: A large language model for science, 2022.

Amitayush Thakur, Yeming Wen, and Swarat Chaudhuri. A language-agent approach to formal theorem-proving, 2023.

Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, YaGuang Li, Hongrae Lee, Huaixiu Steven Zheng, Amin Ghafouri, Marcelo Menegali, Yanping Huang, Maxim Krikun, Dmitry Lepikhin, James Qin, Dehao Chen, Yuanzhong Xu, Zhifeng Chen, Adam Roberts, Maarten Bosma, Vincent Zhao, Yanqi Zhou, Chung-Ching Chang, Igor Krivokon, Will Rusch, Marc Pickett, Pranesh Srinivasan, Laichee Man, Kathleen Meier-Hellstern, Meredith Ringel Morris, Tulsee Doshi, Renelito Delos Santos, Toju Duke, Johnny Soraker, Ben Zevenbergen, Vinodkumar Prabhakaran, Mark Diaz, Ben Hutchinson, Kristen Olson, Alejandra Molina, Erin Hoffman-John, Josh Lee, Lora Aroyo, Ravi Rajakumar, Alena Butryna, Matthew Lamm, Viktoriya Kuzmina, Joe Fenton, Aaron Cohen, Rachel Bernstein, Ray Kurzweil, Blaise Aguera-Arcas, Claire Cui, Marian Croak, Ed Chi, and Quoc Le. LaMDA: Language models for dialog applications. arXiv preprint arXiv:2201.08239, 2022.

Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.

Jonathan Uesato, Nate Kushman, Ramana Kumar, Francis Song, Noah Siegel, Lisa Wang, Antonia Creswell, Geoffrey Irving, and Irina Higgins. Solving math word problems with process- and outcome-based feedback, 2022.
H. Wang. Toward mechanical mathematics. IBM Journal of Research and Development, 4(1):2–22, 1960. doi: 10.1147/rd.41.0002.

Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=1PL1NIMMrw.

Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652, 2022.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language models, 2023.

Sean Welleck. Neural theorem proving tutorial. https://github.com/wellecks/ntptutorial, 2023.

Sean Welleck and Rahul Saha. llmstep: LLM proofstep suggestions in Lean. https://github.com/wellecks/llmstep, 2023.

Sean Welleck, Jiacheng Liu, Ximing Lu, Hannaneh Hajishirzi, and Yejin Choi. NaturalProver: Grounded mathematical proof generation with language models. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), Advances in Neural Information Processing Systems, 2022. URL https://openreview.net/forum?id=rhdfTOiXBng.

Makarius Wenzel, Lawrence C Paulson, and Tobias Nipkow. The Isabelle framework. In Theorem Proving in Higher Order Logics: 21st International Conference, TPHOLs 2008, Montreal, Canada, August 18-21, 2008. Proceedings 21, pp. 33–38. Springer, 2008.

Shijie Wu, Ozan Irsoy, Steven Lu, Vadim Dabravolski, Mark Dredze, Sebastian Gehrmann, Prabhanjan Kambadur, David Rosenberg, and Gideon Mann. BloombergGPT: A large language model for finance, 2023.

Yuhuai Wu, Albert Qiaochu Jiang, Wenda Li, Markus Norman Rabe, Charles E Staats, Mateja Jamnik, and Christian Szegedy. Autoformalization with large language models. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), Advances in Neural Information Processing Systems, 2022. URL https://openreview.net/forum?id=IUikebJ1Bf0.

Sang Michael Xie, Hieu Pham, Xuanyi Dong, Nan Du, Hanxiao Liu, Yifeng Lu, Percy Liang, Quoc V. Le, Tengyu Ma, and Adams Wei Yu. DoReMi: Optimizing data mixtures speeds up language model pretraining. arXiv preprint arXiv:2305.10429, 2023.

Kaiyu Yang, Aidan Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, and Anima Anandkumar. LeanDojo: Theorem proving with retrieval-augmented language models. In Neural Information Processing Systems (NeurIPS), 2023.

Zheng Xin Yong, Hailey Schoelkopf, Niklas Muennighoff, Alham Fikri Aji, David Ifeoluwa Adelani, Khalid Almubarak, M Saiful Bari, Lintang Sutawika, Jungo Kasai, Ahmed Baruwa, Genta Winata, Stella Biderman, Edward Raff, Dragomir Radev, and Vassilina Nikoulina. BLOOM+1: Adding language support to BLOOM for zero-shot prompting. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 11682–11703, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.acl-long.653. URL https://aclanthology.org/2023.acl-long.653.

Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. MetaMath: Bootstrap your own mathematical questions for large language models. arXiv preprint arXiv:2309.12284, 2023.

Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen. MAmmoTH: Building math generalist models through hybrid instruction tuning. CoRR, abs/2309.05653, 2023. doi: 10.48550/arXiv.2309.05653. URL https://doi.org/10.48550/arXiv.2309.05653.

Shizhuo Dylan Zhang, Curt Tigges, Stella Biderman, Maxim Raginsky, and Talia Ringer. Can transformers learn to solve problems recursively?, 2023.

Kunhao Zheng, Jesse Michael Han, and Stanislas Polu. MiniF2F: A cross-system benchmark for formal olympiad-level mathematics. arXiv preprint arXiv:2109.00110, 2021.

Hattie Zhou, Azade Nova, Hugo Larochelle, Aaron Courville, Behnam Neyshabur, and Hanie Sedghi. Teaching algorithmic reasoning via in-context learning, 2022.

Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593, 2019.
A AUTHOR CONTRIBUTIONS

Training Data. Zhangir Azerbayev, Keiran Paster, Marco Dos Santos, Sean Welleck.
Model training. Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster.
Evaluations. Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen McAleer, Albert Q. Jiang, Sean Welleck.
Formal math evaluations. Sean Welleck.
Memorization analysis. Sean Welleck, Keiran Paster.
Senior Authorship and Advising. Jia Deng, Stella Biderman, Sean Welleck.

B DATA: Proof-Pile-2

Data source                   Tokens   Weight
Proof-Pile-2                  55B      –
  Code (AlgebraicStack)       11B      1.00
  Web (OpenWebMath)           15B      4.00
  Papers (ArXiv)              29B      2.00
General code (RedPajama)      59B      0.22
General language (Pile)       300B     0.15

Table 8: Proof-Pile-2 data sources (top), general language and code data included during training (bottom), and the mixture weights of each component during training.

B.1 MATHEMATICAL CODE: AlgebraicStack

AlgebraicStack contains roughly 11B tokens of code related to mathematics. We describe its sources, filtering, and content below. Table 9 shows the number of tokens per language in AlgebraicStack.

Language    AlgebraicStack tokens     Language    AlgebraicStack tokens
Agda        35.2 M                    Julia       531.0 M
C           25.1 M                    Jupyter     199.1 M
C++         954.1 M                   Lean        285.6 M
Coq         281.9 M                   Maple       2.0 M
Fortran     724.9 M                   Matlab      65.8 M
GAP         3.6 M                     Python      6,098.8 M
Haskell     9.1 M                     R           71.3 M
Idris       10.9 M                    Tex         567.7 M
Isabelle    1,089.7 M                 Total       10,955.7 M

Table 9: Tokens in AlgebraicStack, computed with the Llama tokenizer.

B.1.1 GITHUB CODE

The following programming languages were either barely present in the Stack or consisted of largely incorrect filetypes, so we downloaded data for these languages directly via the Github Python API.

• Coq: We filter for files with the .v extension, and include files that match a heuristic filter for the keywords "Theorem", "Proof", "Qed", "Inductive", "Definition", "Fixpoint", while excluding Verilog files via the keyword blacklist "pragma", "endmodule", "posedge", "negedge", "wire". We additionally exclude files noted as automatically generated.

• Isabelle: We filter for files with the .thy extension and include files matching the keyword whitelist "theorem ", "lemma ". We keep only isabelle-prover/mirror-afp-devel and discard all other older copies of the Archive of Formal Proofs. We further remove theorem statements and proofs that have a theorem name in the PISA (Jiang et al., 2021) test set.

• Lean: We filter for files with the .lean extension, using the keyword whitelist "theorem ", "lemma ", "example ". We remove all dependency files, and in order to avoid known benchmark contamination, we blacklist the ProofNet and MiniF2F repositories. We further remove theorems or lemmas that share a theorem name with the LeanDojo (Yang et al., 2023) val or test sets.

• MATLAB: We filter for files with the .m extension, using the keyword whitelist "#import", "interface", "implementation", "property", and blacklist C files via the keyword "#include" and the regex r'main\(.*{$'.
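The per-language rules above are plain string heuristics over file extensions and keywords. The sketch below illustrates the Coq rule as one example; the exact way the whitelist and blacklist are combined into a single keep/drop decision is an assumption made for illustration, not the project's actual implementation.

```python
# Illustrative sketch of the Coq keyword filtering described above.
# Assumption: a file is kept if it has the right extension, matches at least
# one whitelist keyword, and matches no blacklist keyword.
COQ_WHITELIST = ["Theorem", "Proof", "Qed", "Inductive", "Definition", "Fixpoint"]
VERILOG_BLACKLIST = ["pragma", "endmodule", "posedge", "negedge", "wire"]

def keep_coq_file(filename: str, text: str) -> bool:
    if not filename.endswith(".v"):
        return False
    if not any(keyword in text for keyword in COQ_WHITELIST):
        return False
    if any(keyword in text for keyword in VERILOG_BLACKLIST):
        return False
    return True
```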
We implemented a cutoff date for our Github API downloads, and used a cutoff date of April 1, 2023. For all languages, unless otherwise stated, we additionally filtered out files with a filesize greater than 1048575 bytes or with a numerical density (ratio of digit characters to non-digit characters) of 0.5. We additionally perform document-level exact deduplication by removing documents that share an overlapping 2048-character chunk with another document.
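A minimal sketch of these shared document filters is given below. The thresholds follow the text; the use of non-overlapping 2048-character windows for the deduplication check is an assumption for illustration, since the exact chunking scheme is not specified here.

```python
# Illustrative sketch of the shared filters: file-size cap, numerical-density
# cutoff, and exact deduplication on 2048-character chunks.
MAX_FILE_BYTES = 1048575
DENSITY_CUTOFF = 0.5
CHUNK_SIZE = 2048

def numerical_density(text: str) -> float:
    digits = sum(ch.isdigit() for ch in text)
    non_digits = len(text) - digits
    return digits / max(non_digits, 1)

def passes_standard_filters(raw_bytes: bytes, text: str) -> bool:
    return len(raw_bytes) <= MAX_FILE_BYTES and numerical_density(text) < DENSITY_CUTOFF

def exact_deduplicate(documents):
    """Drop any document that shares a 2048-character chunk with an earlier one."""
    seen_chunks, kept = set(), []
    for doc in documents:
        chunks = {doc[i:i + CHUNK_SIZE] for i in range(0, len(doc), CHUNK_SIZE)}
        if chunks & seen_chunks:
            continue
        seen_chunks.update(chunks)
        kept.append(doc)
    return kept
```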
B.1.2 LEAN PROOFSTEPS

We extract a dataset of (tactic state, next tactic) pairs from Mathlib 4 (mathlib Community, 2020) using the lean-training-data (Morrison, 2023) tool. We use Mathlib 4 commit c779bd5, which was created on August 20th 2023.

B.1.3 ISABELLE PROOFSTEPS

We construct a dataset of Isabelle proofs, building upon the PISA dataset (Jiang et al., 2021). Isabelle Proofsteps comprises proofs from the Archive of Formal Proofs and the Isabelle Standard Library, scraped with PISA (Jiang et al., 2021). Each entry in the dataset includes the theorem statement, the proof states, and the proof steps, separated by specific tags. To maintain the integrity of evaluations using the PISA test set, we decontaminate Isabelle Proofsteps by removing theorems whose names overlap with those in the PISA test set. Although this approach results in strict filtering (removing more than 10,000 theorems although there are only 3600 in the PISA test set), we consider it acceptable in order to mitigate data contamination. After filtering, Isabelle Proofsteps contains 251,000 theorems.
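The decontamination step above amounts to a name-based set difference. A minimal sketch, assuming each theorem record carries a "name" field (an illustrative assumption about the data layout):

```python
# Illustrative sketch of name-based decontamination against the PISA test set.
def decontaminate(theorems, pisa_test_names):
    test_names = set(pisa_test_names)
    return [theorem for theorem in theorems if theorem["name"] not in test_names]
```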
B.1.4 STACK FILTERING

We source the following programming languages from the Stack (Kocetkov et al., 2022) dataset, and describe our filtering process and the quality issues we chose to mitigate beyond our default quality heuristics:

• Agda: Only standard filters applied.

• C: We include documents based on a keyword whitelist, namely: "#include <fftw.h>", "#include <fftw3.h>", "#include <rfftw.h>", "#include <gsl", "#include <cblas.h>", "#include <blas.h>", "#include <lapacke.h>", "#include <nlopt.h>", "#include <petsc.h>".

• C++: We include documents based on a keyword whitelist, namely: "#include <adept_arrays.h>", "#include <adept.h>", "#include <alglib>", "#include <boost", "#include <armadillo", "#include <blitz", "#include <Eigen", "#include <deal.II", "#include <dlib", "#include <NTL", "#include <mtl".

• Fortran: Only standard filters applied.

• GAP: Only standard filters applied.

• Haskell: We filtered the data to only contain files with the following imports: Numeric.LinearAlgebra, Numeric.SpecFunctions, Numeric.Vector, Statistics, Data.Complex.

• Idris: Only standard filters applied.

• Julia: We filtered out mislabeled JSON lines files. We removed files larger than 10,000 characters long which both were not files containing tests and which had a lower numerical density than 0.5, and otherwise ignored numerical density. We additionally only accepted files within a specific keyword whitelist, to attempt to control relevance to scientific computing, namely: "LinearAlgebra", "DifferentialEquations", "Symbolics", "Distributions", "DataFrames", "DynamicalSystems", "Turing", "Gen", "JuMP", "sqrt", "abs", "zeros", "ones", "sin", "cos", "tan", "log", "exp", "integrate", "likelihood", "Matrix", "π", "pi", "rand", "grad".

• Jupyter: We found that many Jupyter notebook files were large due to containing long cell outputs, such as base64 images, long tracebacks, or other extra JSON cell metadata. We use nbconvert to convert notebooks to a markdown format, removing metadata (a conversion sketch is given after this list).
• Maple: We filtered out files with a size greater than 100,000 bytes, and found that some files were XML. We filtered out all files beginning with an XML declaration.

• Python: We filtered out notebooks and JSON files by excluding documents beginning with "{" characters, and included only files importing from a fixed list of libraries.

• R: We excluded all files beginning with an XML declaration. We additionally filtered out all notebooks, and filtered all files containing MacOS "Resource Fork" files.
• Tex: We used a max file size of 10,000,000 bytes. We excluded tex files found in directories named "latex/" because these were often auto-generated files, and excluded documents using gnuplot. We included only documents containing one of the keywords "\chapter{", "\chapter*{", "\section{", "\section*{", "\subsection{", "\subsection*{", "\subsubsection{", "\subsubsection*{", "\paragraph{", "\subparagraph{", and additionally only included documents identified as English by a classifier from the langid package.
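As referenced in the Jupyter item above, the notebook conversion can be done with nbconvert. The sketch below shows one way to produce markdown with cell outputs stripped; the exact exporter options used for AlgebraicStack are not specified here, so treat this as an illustration rather than the project's exact pipeline.

```python
# Illustrative sketch: convert a notebook to markdown, dropping execution
# outputs (base64 images, tracebacks) and their metadata.
import nbformat
from nbconvert import MarkdownExporter
from nbconvert.preprocessors import ClearOutputPreprocessor

def notebook_to_markdown(path: str) -> str:
    notebook = nbformat.read(path, as_version=4)
    ClearOutputPreprocessor().preprocess(notebook, {})
    markdown, _resources = MarkdownExporter().from_notebook_node(notebook)
    return markdown
```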
For all languages we used within the Stack, unless otherwise stated, we additionally filtered out files with a filesize greater than 1048575 bytes or with a numerical density (ratio of digit characters to non-digit characters) of 0.5. We used v1.2 of the near-deduplicated Stack as a base for processing.

B.2 PAPERS: ARXIV

We use the entirety of ArXiv, as accessed by Computer (2023) in April 2023. For further information on preprocessing applied to ArXiv, see Computer (2023).

B.3 WEB: OPENWEBMATH

For the web portion of our training dataset, we use OpenWebMath (Paster et al., 2023).

C EVALUATION HARNESS

We implement a variety of math-related tasks and evaluation protocols into a public fork of the Language Model Evaluation Harness (Gao et al., 2021). The Harness provides a model-agnostic framework for standardized, reproducible evaluation of language models. We add the following tasks for the evaluations in this paper:

• hendrycks_math_ppl: Perplexity evaluation on MATH (Hendrycks et al., 2021b) sub-tasks.

• minif2f_isabelle: Proof autoformalization in Isabelle on the miniF2F benchmark based on Jiang et al. (2023), with a Portal-to-Isabelle (Jiang et al., 2021) proof checker.

• minerva_math: The MATH benchmark with the prompt and Sympy evaluation from Minerva (Lewkowycz et al., 2022).

• minerva-hendrycksTest: MMLU-STEM tasks following Lewkowycz et al. (2022).

• ocw_courses: The OCW Courses task from Lewkowycz et al. (2022).

• python_gsm8k: GSM8k with Python, based on Gao et al. (2022).

• sympy_math: MATH with Sympy evaluation.

We include a link to the implementations for these tasks, including full prompts, in our public codebase.
D EVALUATION: EXPERIMENT DETAILS

D.1 ISABELLE INFORMAL-TO-FORMAL THEOREM PROVING

We follow Jiang et al. (2023), allowing the model to issue a call to built-in Isabelle automation in the output proof by generating sledgehammer. This calls Sledgehammer (Paulson & Nipkow, 2023) and the list of heuristics listed in Jiang et al. (2023). Following Jiang et al. (2023), as a baseline we use Sledgehammer and the heuristics executed at the beginning of the proof (referred to as Sledgehammer in the main text for brevity). We use a 30-second timeout for Sledgehammer and implement proof checking via Portal-to-Isabelle (Jiang et al., 2021). Refer to the implementation in the Evaluation Harness for further details.

D.2 LEAN THEOREM PROVING

Theorem proving via tactic prediction involves interacting with a proof assistant after each step of a proof. Implementing these interactions within the evaluation harness is outside the scope of this work. Therefore, for the Lean theorem proving task we use a separate evaluation setup based on an open-source implementation (Welleck, 2023). We include our evaluation code in our public codebase.

Setup. We evaluate on miniF2F (Zheng et al., 2021), which consists of 488 formalized statements from math competitions and undergraduate coursework. Given a formalized statement, the task is to generate a formal proof that is checked by Lean. We use best first search, commonly used for neural tactic prediction models (e.g., Polu & Sutskever (2020)). Best first search is parameterized by the number of attempts (N), generated tactics per iteration (S), and maximum iterations (T). We define the search budget to be the maximum number of generated tactics, N × S × T. We set our search budget to N = 1, S = 32, and T = 100, less than that of the baseline model.
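To make the budget concrete, the schematic below shows how N, S, and T bound a best-first search loop. The helpers generate_tactics, apply_tactic, initial_state, and is_solved are hypothetical stand-ins for the language model and the proof assistant interaction layer; the actual evaluation uses the implementation adapted from Welleck (2023).

```python
# Schematic best-first search under a budget of N x S x T generated tactics.
import heapq

def best_first_search(theorem, generate_tactics, apply_tactic,
                      n_attempts=1, samples_per_step=32, max_iterations=100):
    for _ in range(n_attempts):                      # N attempts
        frontier = [(0.0, 0, theorem.initial_state())]
        tie_breaker = 0
        for _ in range(max_iterations):              # at most T expansions
            if not frontier:
                break
            neg_logprob, _, state = heapq.heappop(frontier)
            # S candidate tactics per expansion, each with a log-probability.
            for tactic, logprob in generate_tactics(state, k=samples_per_step):
                next_state = apply_tactic(state, tactic)
                if next_state is None:               # tactic failed to apply
                    continue
                if next_state.is_solved():
                    return True
                tie_breaker += 1
                heapq.heappush(frontier, (neg_logprob - logprob, tie_breaker, next_state))
    return False
```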
Following Yang et al. (2023), we generate tactics with beam search and use a 10 minute timeout. We adapt the proof search implementation from Welleck (2023), which uses LeanDojo v.1.1.2 (Yang et al., 2023) for interaction. We use Lean 4 miniF2F, using https://github.com/rah4927/lean-dojo-mew commit d00c776260c77de7e70125ef0cd119de6c0ff1de. Note that the ReProver baseline from Yang et al. (2023) reports performance with Lean 3.

Prompt. We prompt the model with three (state, tactic) examples, shown in Figure 5.
"""Given the Lean 4 tactic state, suggest a next tactic.
Here are some examples:

Tactic state:
---
α : Type u_1
r : α → α → Prop
inst✝¹ : DecidableEq α
inst✝ : IsIrrefl α r
⊢ CutExpand r ≤ InvImage (Finsupp.Lex (rᶜ ⊓ fun x x_1 => x ≠ x_1) fun x x_1 => x < x_1) ⇑toFinsupp
---
Next tactic:
---
rintro s t ⟨u, a, hr, he⟩
---

Tactic state:
---
ι : Type u_1
I J : Box ι
x y : ι → ℝ
I J : WithBot (Box ι)
⊢ ↑I = ↑J ↔ I = J
---
Next tactic:
---
simp only [Subset.antisymm_iff, ← le_antisymm_iff, withBotCoe_subset_iff]
---

Tactic state:
---
m n : ℕ
h : Nat.coprime m n
⊢ Nat.gcd m n = 1
---
Next tactic:
---
rw [← h.gcd_eq_one]
---

Tactic state:
---
%s
---
Next tactic:
---"""

Figure 5: Prompt for the Lean theorem proving experiments.

E DATASHEET

We provide a datasheet for Proof-Pile-2, following the framework in Gebru et al. (2021).

MOTIVATION

For what purpose was the dataset created? Proof-Pile-2 was created for the training or finetuning of domain-specific large language models for general mathematics tasks.

Who created the dataset and on behalf of which entity? The dataset was created by the authors of this paper for the purposes of this research project.

Who funded the creation of the dataset? The creation of the dataset was funded by the coauthors' grants and employers, as further described in Section 5.

Any other comment?
COMPOSITION

What do the instances that comprise the dataset represent? Instances are text-only documents.

How many instances are there in total? We detail fine-grained token counts elsewhere in this paper.

Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set? Our dataset is filtered based on our assessments of quality for the language modeling task. More detail on methodology can be found in Appendix B.

What data does each instance consist of? Each instance is a text-only document, alongside metadata about its originating split and filename or location.

Is there a label or target associated with each instance? No.
Is any information missing from individual instances? Yes, we filter undesired noise, such as base64-encoded images, from some documents.

Are relationships between individual instances made explicit? No.

Are there recommended data splits? Yes, we release a canonical train, validation, and test split of the dataset, which we follow in this work.

Are there any errors, sources of noise, or redundancies in the dataset? We make our best efforts to remove errors or sources of noise, but our dataset will naturally contain documents with errors or noise, and may contain near-duplicate documents.

Is the dataset self-contained, or does it link to or otherwise rely on external resources? The dataset is self-contained, but can also be reconstructed based on external publicly available data sources and datasets following our instructions.

Any other comment?

Does the dataset contain data that might be considered confidential? All documents in Proof-Pile-2 are publicly available online.
Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety? We estimate toxic content to be less prevalent in our dataset than in other more general web-based datasets, due to its technical focus. However, it is likely to contain such content.

COLLECTION

How was the data associated with each instance acquired? Data was largely sourced from existing public subsets, such as the RedPajama dataset (Computer, 2023) and the OpenWebMath dataset (Paster et al., 2023), and via filtering the Stack (Kocetkov et al., 2022). Some data was collected using the Github API.

What mechanisms or procedures were used to collect the data? See above.

If the dataset is a sample from a larger set, what was the sampling strategy? We release the entirety of the dataset following the application of our quality filters. We randomly held out validation and test splits from the dataset.

Who was involved in the data collection process and how were they compensated? The authors of this paper participated in locating, retrieving, and filtering the dataset.

Over what timeframe was the data collected? This data was collected in 2023, with a cutoff date of April 2023 for all subsets with the exception of our Lean proofstep data.

Were any ethical review processes conducted? Yes, the authors conducted an informal ethical review internally.

PREPROCESSING

Was any preprocessing/cleaning/labeling of the data done? Yes, the authors extensively filtered the dataset subsets in keeping with our expectations for high-quality language modeling data in our domain. See Appendix B for further detail on filtering steps taken.
Was the "raw" data saved in addition to the preprocessed/cleaned/labeled data? Raw data can be accessed via reuse of our provided codebase.

Is the software that was used to preprocess/clean/label the data available?

USES

Has the dataset been used for any tasks already? Yes, this dataset has been used to train the LLEMMA language models as a domain adaptation and continued pretraining corpus.

Is there a repository that links to any or all papers or systems that use the dataset? No.
What (other) tasks could the dataset be used for? The dataset was specifically targeted as a high quality language modeling corpus for the mathematics domain, but may be useful for general-purpose language modeling or unforeseen other downstream uses.

Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses? We filtered the dataset with the intent of creating a model useful for mathematical tasks with solely English text.

Are there tasks for which the dataset should not be used? The dataset should not be used with the intent to cause harm or for models intended for the purposes of harm.

DISTRIBUTION

Will the dataset be distributed to third parties outside of the entity on behalf of which the dataset was created? We make the dataset publicly available for reproducibility, analysis, and other further downstream uses.

How will the dataset be distributed? We provide code to replicate the dataset, and release it via the Huggingface Hub.

When will the dataset be distributed? The dataset is available immediately.

Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)? We do not relicense the dataset's components, and do not impose our own use restrictions.

Have any third parties imposed IP-based or other restrictions on the data associated with the instances? Not to our knowledge.

Do any export controls or other regulatory restrictions apply to the dataset or to individual instances? Not to our knowledge.
MAINTENANCE

Who will be supporting/hosting/maintaining the dataset? The dataset will be hosted on the HuggingFace Hub and able to be recreated via code at https://github.com/EleutherAI/math-lm. The dataset will not be updated post-release.

How can the owner/curator/manager of the dataset be contacted? Via email at [email protected].

Is there an erratum? No.

Will the dataset be updated? No.

If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so? No.

Table 10: Datasheet for Proof-Pile-2, following the framework introduced by Gebru et al. (2021).

F ADDITIONAL RESULTS

F.1 PROOF AUTOFORMALIZATION

Table 11 shows additional results on Isabelle proof autoformalization, including the union of theorems closed by Sledgehammer and the given language model.

Method                            Autoformalization pass@1
                                  miniF2F-valid†   miniF2F-test
Sledgehammer                      14.72%           20.49%
Code Llama 7b                     16.31%           17.62%
LLEMMA-7b                         20.60%           22.13%
Code Llama 7b ∪ Sledgehammer      20.17%           25.00%
LLEMMA-7b ∪ Sledgehammer          25.97%           27.46%

Table 11: Isabelle autoformalization. † We exclude the 11 examples used in the few-shot prompts. Pass@1 with greedy decoding.
G SUPERVISED FINETUNING

A full exploration of finetuning applications for LLEMMA, such as instruction following (Ouyang et al., 2022; Wei et al., 2022), dialogue modeling (Thoppilan et al., 2022; Touvron et al., 2023; Collins et al., 2023), and reward modeling (Cobbe et al., 2021; Lightman et al., 2023), is outside the scope of this work. However, to establish that LLEMMA retains its advantage over other open models when finetuned, we conduct preliminary experiments finetuning LLEMMA-7B on MetaMathQA (Yu et al., 2023), a supervised dataset targeted at the MATH and GSM8k benchmarks. Results are shown in Table 12.

Initialization   Finetune Dataset           MATH     GSM8k
Llama 2 7B       WizardMath (Proprietary)   10.7%    54.9%
Llama 2 7B       MetaMathQA                 19.4%    66.4%
LLEMMA 7B        MetaMathQA                 25.2%    66.5%
Llama 2 70B      WizardMath (Proprietary)   22.7%    81.6%
Llama 2 70B      MetaMathQA                 26.6%    82.3%

Table 12: Finetuning of various 7B base models on supervised mathematics datasets. All results with a Llama 2 initialization are copied from the literature (Luo et al., 2023; Yu et al., 2023). The LLEMMA 7B finetune is trained with identical hyperparameters to the models in Yu et al. (2023).
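For reference, a minimal finetuning script in the style of these experiments is sketched below. The model and dataset identifiers, the field names, the prompt template, and the hyperparameters shown are illustrative assumptions rather than the exact recipe; the actual runs follow the hyperparameters of Yu et al. (2023).

```python
# Illustrative sketch of supervised finetuning on MetaMathQA with Hugging Face
# Transformers. Identifiers, field names, and hyperparameters are assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "EleutherAI/llemma_7b"          # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

dataset = load_dataset("meta-math/MetaMathQA", split="train")  # assumed dataset id

def tokenize(example):
    # Assumed field names: "query" (problem) and "response" (solution).
    text = f"Question: {example['query']}\nAnswer: {example['response']}{tokenizer.eos_token}"
    return tokenizer(text, truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="llemma-7b-metamathqa",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        num_train_epochs=3,
        learning_rate=2e-5,
        bf16=True,
        logging_steps=10,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```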
H QUALITATIVE EXAMPLES

Dataset overlap. Figure 6 shows example false positives when checking n-gram overlap with OpenWebMath documents for various n. Figure 7 shows an example OpenWebMath document that has 30-gram overlap with a MATH problem, and LLEMMA-7b's generated solution.

Task outputs. Figure 8 shows a generated proof in the informal2formal theorem proving task.

OpenWebMath document:
2D affine transformations can be better represented using 2 by 2 matrices, since they are simply linear combinations of 2 variables. The advantage of this is that the matrices are associative under multiplication. Also, GPUs and modern toolkits are optimised to work with this representation. As a result, a scale matrix is \begin{bmatrix} s_x & 0 \\ 0 & s_y \end{bmatrix}, and a rotation matrix is \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}. A translation matrix is simply \begin{bmatrix} 1 & \frac{t_x}{y} \\ \frac{t_y}{x} & 1 ...
Which vector does the rotation take (â ) # MATH solution The rotation matrix must be of the form # cos @ â siné sin@ cos@ # sin θ . Thus,... # Hit \cos heta & -\sin heta \ \sin heta & \cos # OpenWebMath document # Basic Probability A number is selected at random from 1 through 100, inclusive. What is the probability that the number is a divisor of 50? Express your answer as a common fraction. Apr 24, 2019 There are a 100 integers between 1-100, inclusive. Since 50 is $$2*5^2$$, it has $$(1+1)(1+2)=(2)(3)=6$$ factors. Thus, the answer is $$ rac{6}{100}= oxed{ rac{3}{50}}.$$ MATH problem A number is selected at random from 1 through 100, inclusive. What is the probability that the number is a perfect square? Hit A number is selected at random from 1 through 100, inclusive. What is the probability that the number is a
2310.10631#85
2310.10631#87
2310.10631
[ "2308.09583" ]
2310.10631#87
Llemma: An Open Language Model For Mathematics
# OpenWebMath document Fig. 2. Use values of the most used medicinal plants in the Safi Province (Morocco). It is also important to note that for the abovementioned medicinal plants, many other folk uses have been reported in different regions of Morocco. Furthermore, literature-based proof revealed that these species have proven a wide variety of biological and pharmacological activities (Table 4, Ref. [14, 17, 19, 20, 21, 23, 24, 26, 28, 30, 31, 34, 35, 36, 38, 39, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116]), which may confirm the different popular applications of ...
2310.10631#86
2310.10631#88
2310.10631
[ "2308.09583" ]
2310.10631#88
Llemma: An Open Language Model For Mathematics
Generated solution (LLEMMA 7b): The are 21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49, 50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78, 79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100. We can see that... Figure 6: Data overlap: Example false positives using 10-gram match between MATH solutions and Open- WebMath documents (top), 20-gram match between MATH problems and OpenWebMath documents (middle), and 30-gram match between LLEMMA-7bâ s generated solutions and OpenWebMath documents (bottom).
2310.10631#87
2310.10631#89
2310.10631
[ "2308.09583" ]
2310.10631#89
Llemma: An Open Language Model For Mathematics
OpenWebMath document:
A triangle is formed with edges along the line $y=\frac{2}{3}x+5$, the $x$-axis, and the line $x=k$. If the area of the triangle is less than $20$, find the sum of all possible integral values of $k$.
Feb 28, 2018
Look at the graph, here...two triangles are possible: https://www.desmos.com/calculator/m6wnjpgldq
The height of the triangles at any point will be formed by [ (2/3)x + 5 ]
And the bases will be [ x - (-7.5) ] = [ x + 7.5 ]
So....we want to solve this
(1/2) [ (2/3)x + 5 ] [ x + 7.5 ] = 20
[ (2/3)x + 5 ] [ x + 7.5 ] = 40
(2/3)x^2 + 5x + 5x + 37.5 = 40
(2/3)x^2 + 10x - 2.5 = 0
Using a little technology.....the max x value for the triangle formed above the x axis will be = .246
And the min x value for the triangle formed below the x axis will be = -15.246
With the given boundaries, the triangles with an area < 20 units^2 give integer sums of all possible x values of k of [ (-15) + (-14) + (-13) + ... + (-2) + (-1) + 0 ] = -(15)(16)/2 = -120
Feb 28, 2018

MATH Problem: A triangle is formed with edges along the line y = (2/3)x + 5, the x-axis, and the line x = k. If the area of the triangle is less than 20, find the sum of all possible integral values of k. Answer: -120

Generated solution (LLEMMA 7b): The triangle has vertices (k, 0), (0, 5), and (k, (2/3)k + 5). The base of the triangle has length k, and the height of the triangle has length (2/3)k + 5.
The area of the triangle is therefore $A = \frac{1}{2}bh = \frac{1}{2}k(\frac{2}{3}k + 5) = \frac{1}{3}k^2 + \frac{5}{2}k$. Since the area of the triangle is less than 20, we have $\frac{1}{6}k^2 + \frac{5}{4}k - 10 < 0$. This inequality is true if and only if $\frac{1}{6}k^2 + \frac{5}{4}k - 10$ is positive. Since $\frac{1}{6}k^2 + \frac{5}{4}k - 10$ is a quadratic with positive leading coefficient, it is positive if and only if its vertex is less than 0. The vertex of $\frac{1}{6}k^2 + \frac{5}{4}k - 10$ is $(-\frac{10}{3}, \dots)$. Since $-\frac{10}{3} < 0$, we have that $\frac{1}{6}k^2 + \frac{5}{4}k - 10 > 0$ if and only if $k < -\frac{10}{3}$. Therefore, the only possible integral values of k are k = 0, 1, 2, 3, 4. The sum of these values is 10. Final Answer: The final answer is 10. Figure 7: Data overlap: Example OpenWebMath document that has a 30-gram overlap with the given MATH problem, and LLEMMA-7b's generated solution.
2310.10631#89
2310.10631#91
2310.10631
[ "2308.09583" ]
2310.10631#91
Llemma: An Open Language Model For Mathematics
Problem: If 3a + b + c = -3, a + 3b + c = 9, a + b + 3c = 19, then find abc. Show that it is -56. Informal Proof (Human-written): Summing all three equations yields that 5a + 5b + 5c = -3 + 9 + 19 = 25. Thus, a + b + c = 5. Subtracting this from each of the given equations, we obtain that 2a = -8, 2b = 4, 2c = 14. Thus, a = -4, b = 2, c = 7, and their product is abc = -4 x 2 x 7 = -56. Formal Statement and Proof:
theorem mathd_algebra_338:
  fixes a b c :: real
  assumes "3 * a + b + c = -3"
    and "a + 3 * b + c = 9"
    and "a + b + 3 * c = 19"
  shows "a * b * c = -56"
proof -
  (* Summing all three equations yields that 5a + 5b + 5c = -3 + 9 + 19 = 25. Thus, a + b + c = 5. *)
  have "5 * a + 5 * b + 5 * c = -3 + 9 + 19" using assms <ATP>
  then have "5 * (a + b + c) = 25" <ATP>
  then have "a + b + c = 5" <ATP>
  (* Subtracting this from each of the given equations, we obtain that 2a = -8, 2b = 4, 2c = 14. Thus, a = -4, b = 2, c = 7, and their product is abc = -4 \times 2 \times 7 = -56. *)
  then have "2 * a = -8" "2 * b = 4" "2 * c = 14" using assms <ATP>
  then have "a = -4" "b = 2" "c = 7" <ATP>
  then show ?thesis <ATP>
qed
2310.10631#90
2310.10631#92
2310.10631
[ "2308.09583" ]
2310.10631#92
Llemma: An Open Language Model For Mathematics
Figure 8: Informal-to-formal proving. The model is given the problem, informal proof, and formal statement, following Jiang et al. (2023). It generates a formal proof (starting with proof -) containing Isabelle code, comments ((*...*)) that align the informal and formal proofs, and calls to an automated prover (shown as <ATP>). The proof is from LLEMMA-7b with greedy decoding.
2310.10631#91
2310.10631#93
2310.10631
[ "2308.09583" ]
2310.10631#93
Llemma: An Open Language Model For Mathematics
2310.10631#92
2310.10631
[ "2308.09583" ]
2310.09497#0
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
# A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models Shengyao Zhuang # Honglei Zhuang Google Research [email protected] # Bevan Koopman CSIRO [email protected] Guido Zuccon The University of Queensland [email protected]
2310.09497#1
2310.09497
[ "2302.13971" ]
2310.09497#1
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
ABSTRACT Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors such as model size, token consumption, and latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
2310.09497#0
2310.09497#2
2310.09497
[ "2302.13971" ]
2310.09497#2
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
# CCS CONCEPTS • Information systems → Language models. KEYWORDS Large Language Model for Zero-shot ranking, setwise prompting, sorting algorithm ACM Reference Format: Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, and Guido Zuccon. 2023. A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models. In Arxiv, 2023, preprint. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/nnnnnnn.nnnnnnn 1 INTRODUCTION Large Language Models (LLMs) such as GPT-3 [2], FlanT5 [26], and PaLM [3] have been shown to be highly effective across a diverse range of natural language processing tasks under zero-shot settings [1, 2, 9, 25]. Notably, these LLMs have also been adapted for zero-shot document ranking tasks, exhibiting strong zero-shot ranking capabilities [10, 12, 17-20]. The methodologies for harnessing LLMs in zero-shot ranking tasks can be broadly categorized into three main approaches: Pointwise [10, 19], Listwise [12, 17, 20], and Pairwise [18]. These approaches employ different prompting strategies to instruct the LLM to output a relevance estimation for each candidate document. While these LLM-based zero-shot ranking approaches have been successful individually, it is worth noting that there has been a lack of fair comparison in the literature regarding their effectiveness and, in particular, their efficiency within the exact same experimental framework. This includes factors such as utilizing the same size of LLM, evaluation benchmarks, and computational resources. We believe it is very important to establish a rigorous framework for evaluating these LLM-based zero-shot ranking approaches. By doing so, we can draw meaningful conclusions about their comparative effectiveness and efficiency. Thus, in this paper, we first conduct a systematic evaluation of all existing approaches within a consistent experimental environment. In addition to assessing ranking effectiveness, we also compare the efficiency of these methods in terms of computational expenses and query latency. Our findings indicate that the Pairwise approach emerges as the most effective but falls short in terms of efficiency, even with the assistance of sorting algorithms aimed at improving this.
2310.09497#1
2310.09497#3
2310.09497
[ "2302.13971" ]
2310.09497#3
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Conversely, the Pointwise approach stands out as the most efficient but lags behind other methods in terms of ranking effectiveness. The Listwise approach, which relies solely on the generation of document labels in order, can strike a middle ground between efficiency and effectiveness, but this varies considerably based on configuration, implementation and evaluation dataset (highlighting the importance of thoroughly evaluating these models under multiple settings). Overall, these comprehensive results furnish an understanding of the strengths and weaknesses of these LLM-based zero-shot ranking approaches, providing valuable insights for practitioners seeking to select the most suitable approach for real-world applications. Having considered all the different approaches and their results in terms of efficiency and effectiveness trade-offs, we set about devising a method that was both effective and efficient. Our approach was to take the most effective model (Pairwise) and to enhance its efficiency (without seriously compromising effectiveness). Our solution is a novel Setwise prompting approach. This concept stems from our realization that the sorting algorithms employed by Pairwise approaches can be accelerated by comparing multiple documents, as opposed to just a pair at a time.
2310.09497#2
2310.09497#4
2310.09497
[ "2302.13971" ]
2310.09497#4
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Publication rights licensed to ACM. ACM acknowledges that this contribution was authored or co-authored by an employee, contractor or affiliate of a national govern- ment. As such, the Government retains a nonexclusive, royalty-free right to publish or reproduce this article, or to allow others to do so, for Government purposes only. Arxiv, 2023, preprint © 2023 Copyright held by the owner/author(s). Publication rights licensed to ACM. ACM ISBN 978-x-xxxx-xxxx-x/YY/MM. . . $15.00 https://doi.org/10.1145/nnnnnnn.nnnnnnn
2310.09497#3
2310.09497#5
2310.09497
[ "2302.13971" ]
2310.09497#5
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Our Setwise prompting approach instructs LLMs to select the most relevant document to the query from a set of candidate documents. This straightforward adjustment allows the sorting algorithms to infer relevance preferences for more than two candidate documents at each step, thus significantly reducing the total number of comparisons required; this leads to substantial savings in computational resources. Figure 1: Different prompting strategies. (a) Pointwise, (b) Listwise, (c) Pairwise and (d) our proposed Setwise. (a) Pointwise prompts (scored via logits): yes_no: "Passage: {passage} Query: {query} Does the passage answer the query? Answer 'Yes' or 'No'"; QLM: "Passage: {passage} Please write a question based on this passage." (b) Listwise prompt (generation): "The following are {num} passages, each indicated by number identifier []. I can rank them based on their relevance to query: {query} [1] {passage_1} [2] {passage_2} ... The ranking results of the {num} passages (only identifiers) is:" (c) Pairwise prompt (generate or logits): "Given a query {query}, which of the following two passages is more relevant to the query? Passage A: {passage_1} Passage B: {passage_2} Output Passage A or Passage B:" (d) Setwise prompt (generate or logits): "Given a query {query}, which of the following passages is the most relevant one to the query? Passage A: {passage_1} Passage B: {passage_2} Passage C: {passage_3} Output only the passage label of the most relevant passage:" Furthermore, beyond the adjustment to Pairwise approaches, Setwise prompting allows the utilization of model output logits to estimate the likelihood of ranks of document labels, a capability not feasible in existing Listwise approaches, which solely rely on document label ranking generation, a process that is slow and less effective. We evaluate our Setwise approach along with other existing approaches under the same experimental setting. Our results show that the incorporation of our Setwise prompting substantially improves the efficiency of both Pairwise and Listwise approaches. In addition, Setwise sorting enhances Pairwise and Listwise robustness to variations in the internal ordering quality of the initial rankings: no matter what the initial ordering of the top-k documents to rank is, our method provides consistent and effective results. This is unlike other methods that are highly susceptible to such initial ordering. To conclude, this paper makes three key contributions to our understanding of LLM-based zero-shot ranking approaches: 2.1 Pointwise prompting approaches Figure 1a shows pointwise approaches. There are two popular directions of prompting LLMs for ranking documents in a pointwise manner: generation and likelihood. In the generation approach, a "yes/no" generation technique is used: LLMs are prompted to generate whether the provided candidate document is relevant to the query, with the process repeated for each candidate document. Subsequently, these candidate documents are re-ranked based on the normalized likelihood of generating a "yes" response [10, 14]. The likelihood approach involves query likelihood modelling (QLM) [15, 28, 29], wherein LLMs are prompted to produce a relevant query for each candidate document. The documents are then re-ranked based on the likelihood of generating the actual query [19]. It is worth noting that both pointwise methods require access to the output logits of the model to be able to compute the likelihood scores. Thus, it is not possible to use closed-source LLMs to implement these approaches if the corresponding APIs do not expose the logit values: this is the case, for example, of GPT-4.
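To make the Pointwise yes_no scoring concrete, the sketch below scores one query-passage pair with an open-source Flan-T5 checkpoint by comparing the first-step output logits of the "Yes" and "No" tokens. The prompt wording, the checkpoint, and the function name are illustrative assumptions, not necessarily the exact configuration used in the experiments.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")
model.eval()

def yes_no_score(query: str, passage: str) -> float:
    """Relevance score = probability of 'Yes' vs 'No' as the first decoded token."""
    prompt = (f"Passage: {passage}\nQuery: {query}\n"
              "Does the passage answer the query? Answer 'Yes' or 'No'.")
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    # Seq2seq decoding starts from the decoder start token; only the first-step logits are needed.
    decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])
    with torch.no_grad():
        logits = model(**inputs, decoder_input_ids=decoder_input_ids).logits[0, 0]
    yes_id = tokenizer("Yes", add_special_tokens=False).input_ids[0]
    no_id = tokenizer("No", add_special_tokens=False).input_ids[0]
    probs = torch.softmax(logits[[yes_id, no_id]], dim=-1)
    return probs[0].item()  # higher = more likely relevant

# Candidate documents are then re-ranked by this score; prompts can be batched for efficiency.
```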
2310.09497#4
2310.09497#6
2310.09497
[ "2302.13971" ]
2310.09497#6
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
â a process that is slow and less effective. We evaluate our Setwise approach along with other existing approaches under the same experimental setting. Our results show that the incorporation of our Setwise prompting substantially improves the efficiency of both Pairwise and Listwise approaches. In addition, Setwise sorting enhances Pairwise and Listwise robustness to variations in the internal ordering quality of the initial rankings: no matter what the initial ordering of the top-k documents to rank is, our method provides consistent and effective results. This is unlike other methods that are highly susceptible to such initial ordering. To conclude, this paper makes three key contributions to our understanding of LLM-based zero-shot ranking approaches: 2.1 Pointwise prompting approaches Figure 1a shows pointwise approaches. There are two popular di- rections of prompting LLMs for ranking documents in a pointwise manner: generation and likelihood. In the generation approach, a â yes/no" generation technique is used: LLMs are prompted to gen- erate whether the provided candidate document is relevant to the query, with the process repeated for each candidate document. Sub- sequently, these candidate documents are re-ranked based on the normalized likelihood of generating a "yes" response [10, 14]. The likelihood approach involves query likelihood modelling (QLM) [15, 28, 29], wherein LLMs are prompted to produce a relevant query for each candidate document. The documents are then re-ranked based on the likelihood of generating the actual query [19]. It is worth noting that both pointwise methods require access to the output logits of the model to be able to compute the likelihood scores. Thus, it is not possible to use closed-sourced LLMs to implement these approaches if the corresponding APIs do not expose the logits values: this is the case for example of GPT-4. (1) We conduct a systematic examination of all existing LLM-based zero-shot ranking approaches and our novel Setwise approach under strict and consistent experimental conditions, including efficiency comparisons which have been overlooked in the lit- erature. Our comprehensive empirical evaluation on popular zero-shot document ranking benchmarks offers valuable insights for practitioners.
2310.09497#5
2310.09497#7
2310.09497
[ "2302.13971" ]
2310.09497#7
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
(2) We introduce an innovative Setwise prompting approach that enhances the sorting algorithms employed in the Pairwise method, resulting in highly efficient zero-shot ranking with LLMs. (3) We further adapt how our Setwise prompting approach computes rankings to the Listwise approach, leveraging the model output logits to estimate the likelihood of rankings. This leads to a more effective and efficient Listwise zero-shot ranking. 2.2 Listwise prompting approaches Figure 1b shows listwise approaches. Here the LLMs receive a query along with a list of candidate documents and are prompted to generate a ranked list of document labels based on their relevance to the query [12, 17, 20]. However, due to the limited input length allowed by LLMs, including all candidate documents in the prompt is not feasible. To address this, current listwise approaches use a sliding window method. This involves re-ranking a window of candidate documents, starting from the bottom of the original ranking list and progressing upwards. This process can be repeated multiple times to achieve an improved final ranking and allows for early stopping mechanisms to target only the top-k ranking, thereby conserving computational resources. In contrast to pointwise methods, which utilize the likelihood value of the output tokens for ranking documents, listwise approaches rely on the more efficient process of generation of the ranking list. 2 BACKGROUND & RELATED WORK There are three main prompting approaches for zero-shot document ranking employing LLMs: Pointwise [10, 19], Listwise [12, 17, 20], and Pairwise [18]. In this section, we delve into the specifics of these while situating our work within the existing literature. As a visual aid we will refer to Figure 1 as we discuss each method. 2.3 Pairwise prompting approaches Figure 1c shows pairwise approaches. LLMs are prompted with a query alongside a pair of documents, and are asked to generate the label indicating which document is more relevant to the (a) Heapify with Pairwise prompting (comparing 2 documents at a time). (b) Heapify with our Setwise prompting (comparing 4 documents at a time). (c) Bubble sort with Pairwise prompting (comparing 2 documents at a time).
2310.09497#6
2310.09497#8
2310.09497
[ "2302.13971" ]
2310.09497#8
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
(d) Bubble sort with our Setwise prompting (comparing 3 documents at a time). Figure 2: Illustration of the impact of Setwise Prompting vs. Pairwise Prompting on Sorting Algorithms. Nodes are documents, numbers in nodes represent the level of relevance assigned by the LLM (higher is more relevant). query [16, 18]. To re-rank all candidate documents, a basic method, called AllPairs, involves generating all possible permutations of document pairs from the candidate set. Pairs are then independently fed into the LLM, and the preferred document for each pair is determined. Subsequently, an aggregation function is employed to assign a score to each document based on the inferred pairwise preferences, and the final ranking is established based on the total score assigned to each document [16]. However, this aggregation-based approach suffers from high query latency: LLM inference on all document pairs can be computationally expensive. To address this efficiency issue in pairwise approaches, prior studies have introduced sampling [7, 13] and sorting [18] algorithms. In this paper, we focus on sorting algorithms because, assuming an LLM can provide ideal pairwise preferences, the sorting algorithms offer the theoretical assurance of identifying the top-k most relevant documents from the candidate pool. In prior work [18], two sorting algorithms [8], heap sort and bubble sort, were employed. Unlike AllPairs, these algorithms leverage efficient data structures to selectively compare document pairs, which can quickly pull the most relevant documents out from the candidate pool and place them at the top of the final ranking.
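A minimal sketch of the top-k bubble sort driven by pairwise LLM preferences, as just described, is shown below. Here prefers_first stands in for a single pairwise LLM call using the prompt of Figure 1c; it is an assumed placeholder, not an implementation of the authors' exact code.

```python
def pairwise_bubblesort_topk(query, docs, prefers_first, k=10):
    """Return the k most relevant documents using O(k * N) pairwise LLM comparisons."""
    docs = list(docs)  # initial ranking, e.g. the BM25 order
    n = len(docs)
    for i in range(min(k, n)):              # k "bubbling" passes suffice for a top-k ranking
        for j in range(n - 1, i, -1):       # bubble the preferred document towards the front
            if prefers_first(query, docs[j], docs[j - 1]):
                docs[j], docs[j - 1] = docs[j - 1], docs[j]
    return docs[:k]                         # early stop: only the top-k order is needed
```

Each pass costs at most N - 1 LLM calls, so k passes give the O(k * N) worst case listed for pairwise.bubblesort in Table 1.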
2310.09497#7
2310.09497#9
2310.09497
[ "2302.13971" ]
2310.09497#9
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
This is particularly suitable for the top-k ranking task, where only a ranking of the k most relevant documents is needed. These sorting algorithms provide a stopping mechanism that prevents the need to rank all candidate documents. From a theoretical standpoint, the differences and relative advantages among these three families of zero-shot document ranking that employ LLMs are clear. However, from an empirical standpoint there has been no fair and comprehensive evaluation of these techniques in terms of effectiveness vs. efficiency, and across factors such as sizes of LLMs, benchmarks, and computational resources. 3 SETWISE RANKING PROMPTING 3.1 Limitations of Current Approaches The efficiency of LLM-based zero-shot ranking methods hinges on two critical dimensions. First, the number of LLM inferences significantly impacts efficiency. Given that LLMs are large neural networks with billions of parameters, inference is computationally intensive. Hence, an increased number of LLM inferences introduces a considerable computational overhead. This is notably observed in the current Pairwise approach, which is inefficient due to the extensive need for inferring preferences for the many document pairs. While sorting algorithms offer some relief, they do not entirely mitigate the efficiency issue. Second, the number of LLM-generated tokens per inference plays a pivotal role. LLMs employ a transformer decoder for autoregressive token generation, where the next token generated depends on previously generated tokens. Each additional generated token requires an extra LLM inference. This accounts for the inefficiency of the existing Listwise approach, which relies on generating an entire ranking of document label lists, often requiring a substantial number of generated tokens. 3.2 Speeding-up Pairwise with Setwise To solve the inefficiency issue of these approaches, we propose a novel Setwise prompting approach. Our prompt, as illustrated in Figure 1d, instructs the LLM to select the most relevant document for the given query from a set of documents, hence the term Setwise prompting. We specifically treat the collection of documents as an unordered set, and later experiments will show that Setwise prompting is quite robust to document ordering.
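The sketch below anticipates the heap-based variant detailed in the following paragraphs: a c-ary "heapify" in which one Setwise LLM call picks the most relevant document from a parent and its children (Figure 2b). pick_best is an assumed placeholder for that single Setwise call, and children = c - 1, where c is the number of documents compared per call (Figure 2b corresponds to c = 4, i.e., 3 children); this is a sketch under those assumptions, not the authors' code.

```python
def setwise_heapsort_topk(query, docs, pick_best, k=10, children=3):
    """Top-k ranking via a (children+1)-way max-heap whose comparisons are Setwise LLM calls.

    pick_best(query, group) must return the index (within `group`) of the most
    relevant document; each call compares children+1 documents at once.
    """
    heap = list(docs)
    n = len(heap)

    def sift_down(i, size):
        while True:
            kids = [c for c in range(children * i + 1, children * i + 1 + children) if c < size]
            if not kids:
                return
            group = [i] + kids
            best = group[pick_best(query, [heap[j] for j in group])]
            if best == i:                       # parent already the most relevant
                return
            heap[i], heap[best] = heap[best], heap[i]
            i = best

    # Build the max-heap, then repeatedly pop the root to obtain the top-k ranking.
    for i in range((n - 2) // children, -1, -1):
        sift_down(i, n)
    top, size = [], n
    for _ in range(min(k, n)):
        top.append(heap[0])
        size -= 1
        heap[0] = heap[size]
        sift_down(0, size)
    return top
```

With c documents per call the heap depth shrinks to log base (c-1) of N, which is where the O(k * log_c N) worst case reported in Table 1 comes from.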
2310.09497#8
2310.09497#10
2310.09497
[ "2302.13971" ]
2310.09497#10
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
This is because the original heap sort and bubble sort algorithm used in the Pairwise approach only compares a pair of documents at each step in the sorting process, as illustrated in Figure 2a and 2c. These sorting algorithms can be sped up by comparing more than two documents at each step. For example, in the heap sort algorithm, the â heapify" function needs to be invoked for each subtree, where the parent node must be swapped with the child node with the highest value if it exceeds the parent value. In the case of Figure 2a, to perform â heapify" with pairwise prompting, a minimum of 6 comparisons (each root node paired with each child node) are required. Conversely, if we increase the number of child nodes in each subtree to 3 and can compare 4 nodes at a time, only 2 comparisons are needed to â heapify" a tree with 9 nodes, as illustrated in Figure 2b. Similarly, for the bubble sort algorithm, if we can compare more than a pair of documents at a time, each â bubblingâ process will be accelerated.
2310.09497#9
2310.09497#11
2310.09497
[ "2302.13971" ]
2310.09497#11
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
For instance, in Figure 2c, there are 4 comparisons in total, but in Figure 2d, with the ability to compare 3 documents at once, only 2 comparisons are required to bring the node with the largest value to the top. Our Setwise prompting is designed to instruct LLMs to compare the relevance of multiple documents at a time, making it well-suited for this purpose. 3.3 Listwise Likelihoods with Setwise Our Setwise prompting can also accelerate the ranking process for the Listwise approach. The original Listwise method relies on the LLM's next token generation to produce the complete ordered list of document labels at each step of the sliding window process, as illustrated in Figure 1b. As we discussed, generating the document label list is computationally intensive, because the LLM must do one inference for each next token prediction. On the other hand, the LLM may generate results in an unexpected format or even decline to generate the desired document label list [20], thus harming effectiveness. Fortunately, if we have access to the LLM's output logits, these issues can be avoided by evaluating the likelihood of generating every conceivable document label list and then selecting the most probable one as the output. Regrettably, this is only theoretically possible; in practice, it is unfeasible for the existing Listwise approach due to the very large number of possible document label permutations, which implies that the process of likelihood checking may actually become even more time-consuming than generating the list itself. Setwise prompting again provides a solution: we can easily derive an ordered list of document labels from the LLM output logits. This is done by assessing the likelihood of each document label being chosen as the most relevant, as shown in Figure 1d. This straightforward trick markedly accelerates Listwise ranking, as it requires only a single forward pass of the LLM, and also guarantees that the output matches the desired document label list. 3.4 Advantages of Setwise We summarize and compare the key properties of existing zero-shot LLM ranking approaches along with our proposed Setwise prompting approach in Table 1. Notably, pointwise.qlm, pointwise.yes_no and pairwise.allpair require a brute-force of LLM
2310.09497#10
2310.09497#12
2310.09497
[ "2302.13971" ]
2310.09497#12
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Table 1: Properties of different methods. Logits: requires access to the LLM's logits. Generate: only requires to generate tokens. Batching: allows batch inference. Top-k: allows early stopping once the top-k most relevant documents are found. # LLM calls: the number of LLM forward passes needed in the worst case. (N: number of documents to re-rank. r: number of repeats. s: step size for sliding window. k: number of top-k relevant documents to find. c: number of compared documents at each step.)
2310.09497#11
2310.09497#13
2310.09497
[ "2302.13971" ]
2310.09497#13
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Properties by method (with supported capabilities and worst-case # LLM calls): pointwise.qlm: Logits, Batching, O(N). pointwise.yes_no: Logits, Batching, O(N). listwise.generation: Generate, Top-k, O(r * (N/s)). listwise.likelihood: Logits, Top-k, O(r * (N/s)). pairwise.allpair: Logits, Generate, Batching, O(N^2 - N). pairwise.heapsort: Logits, Generate, Top-k, O(k * log_2 N). pairwise.bubblesort: Logits, Generate, Top-k, O(k * N). setwise.heapsort: Logits, Generate, Top-k, O(k * log_c N). setwise.bubblesort: Logits, Generate, Top-k, O(k * (N/(c - 1))).
2310.09497#12
2310.09497#14
2310.09497
[ "2302.13971" ]
2310.09497#14
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Thus, they are unable to facilitate early-stopping for the top-ð ranking. However, these approaches do allow batch inferences, hence the maximum GPU memory utilization could be easily achieved by us- ing the highest batch size. On the other hand, other approaches use sorting algorithms, enabling early-stopping once the top-ð most relevant documents are identified. However, this compromises the feasibility of batching inference, as the LLM inference at each step of the sorting algorithms relies on the results from the preced- ing step. Our Setwise prompting empowers the previous Listwise approach (listwise.generation), which relied on LLMâ s next token generations, to now utilize the LLMâ s output logits. We refer to the Listwise approach that incorporates our Setwise prompt as list- wise.likelihood. Finally, comparing with Pairwise approaches, our Setwise prompting has fewer LLM calls by comparing a minimum of ð
2310.09497#13
2310.09497#15
2310.09497
[ "2302.13971" ]
2310.09497#15
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
≥ 3 documents at each step of the sorting algorithms. 4 EXPERIMENTS 4.1 Datasets and evaluations The first objective of this study is to contribute a fair and comprehensive evaluation of existing LLM-based zero-shot ranking methods in terms of ranking effectiveness and efficiency. To achieve this goal, we carried out extensive empirical evaluations using well-established document ranking datasets: the TREC Deep Learning 2019 [5] and 2020 [4], along with the BEIR benchmark datasets [21]. To guarantee a fair comparison across different approaches, we tested all of the methods using the same open-source Flan-t5 LLMs [26], available on the Huggingface model hub in various sizes (780M, 3B, and 11B parameters). All LLM methods were used to re-rank 100 documents retrieved by a BM25 first-stage retriever. In order to optimize efficiency, the focus was on a top-k ranking task, whereby the re-ranking process stopped as soon as the top-k most relevant documents were identified and ranked. Here, we set k = 10.
2310.09497#14
2310.09497#16
2310.09497
[ "2302.13971" ]
2310.09497#16
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
The effectiveness of different approaches was evaluated using the NDCG@10 metric, which serves as the official evaluation metric for the employed datasets. Efficiency was evaluated with the following metrics: • The average number of LLM inferences per query. LLMs have limited input length. Thus, to re-rank 100 documents, multiple LLM inferences are often needed. It's important to note that an increased number of LLM inferences translates to higher computational demands. Thus, we regard this as an efficiency metric worth considering.
2310.09497#15
2310.09497#17
2310.09497
[ "2302.13971" ]
2310.09497#17
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Itâ s important to note that an increased number of LLM inferences translates to higher compu- tational demands. Thus, we regard this as an efficiency metric worth considering. â ¢ The average number of prompt tokens inputted to the LLMs per query. This metric takes into account the actual average quan- tity of input tokens required in the prompts for each method to re-rank 100 documents per query. Given that self-attention mechanisms in transformer-based LLMs become prohibitively costly for a large number of input tokens [24], an increase in to- kens within the prompts also translates to higher computational demands. Notably, numerous LLM web API services, including OpenAI APIs, charge based on the number of input tokens in the API calls. As such, we deem this metric valuable in assessing efficiency.
2310.09497#16
2310.09497#18
2310.09497
[ "2302.13971" ]
2310.09497#18
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
• The average number of generated tokens outputted by LLMs per query. Much like the assessment of average prompt tokens, this metric provides an evaluation of computational efficiency, but from a token generation perspective. Instead of focusing on the number of tokens in the prompt, it takes into account the number of tokens generated. This is particularly significant because transformer-based generative LLMs produce content token-by-token, with each subsequent token relying on the generation of preceding ones. Consequently, an increase in the number of generated tokens leads to a corresponding increase in the computational cost, as each additional generated token implies another LLM forward inference. In fact, OpenAI applies a pricing structure wherein the cost for the number of generated tokens is twice that of the number of prompt tokens for their LLM APIs.¹ This underscores the substantial impact that generated tokens can have on computational expenses.
2310.09497#17
2310.09497#19
2310.09497
[ "2302.13971" ]
2310.09497#19
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
• The average query latency. We evaluate the run time efficiency of all the methods with average query latency. To conduct this assessment, a single GPU is employed, and queries are issued one at a time. The per-query latency is then averaged across all the queries in the dataset. It's important to highlight that for methods that support batching we always employ the maximum batch size to optimize GPU memory usage and parallel computation, thus maximizing efficiency for these particular methods. This approach ensures that the evaluation is conducted under conditions most favourable for efficiency gains. It is important to acknowledge that while other methods may not be able to use the batching strategy for individual queries, they do have the capability to utilize batching and parallel computing across various user queries in real-world scenarios. However, this lies more in engineering efforts and falls outside the scope of this paper: as such, we do not investigate this perspective. 4.2 Implementation details To establish the initial BM25 first-stage ranking for all datasets, we employed the Pyserini Python library [11] with default settings. For LLM-based zero-shot re-rankers, we followed the prompts recommended in existing literature to guide Flan-t5 models of varying sizes (Flan-t5-large with 780M parameters, Flan-t5-xl with 3B parameters, and Flan-t5-xxl with 11B parameters) in executing the zero-shot ranking task. Specifically, for the pointwise.qlm method, we adopted the prompt suggested by Sachan et al. [19]. For pointwise.yes_no, we use the prompt provided by Qin et al. [18]. For listwise.generate, we utilized the prompt designed by Sun et al. [20]. As for pairwise.allpair, pairwise.heapsort, and pairwise.bubblesort, we relied on the prompts from the original paper by Qin et al. [18]. For methods leveraging our Setwise prompting (i.e. listwise.likelihood, setwise.heapsort, and setwise.bubblesort), we employed the prompts detailed in Section 3. In the case of Listwise approaches, we configure the window size (w)
2310.09497#18
2310.09497#20
2310.09497
[ "2302.13971" ]
2310.09497#20
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
to contain 4 documents, each capped at a maximum of 100 tokens. The step size (s) is set to 2, and the number of repetitions (r) is set to 5. These settings take into account the token limitations imposed by Flan-t5 models, which have an input token cap of 512. A window size of 4 documents appears reasonable as it aligns well with the prompt capacity. Additionally, a step size of 2, combined with 5 repetitions, has theoretical guarantees of bringing the 10 most relevant documents to the top. For our Setwise approaches, we set the number of compared documents c in each step to 3 for the main results. We further investigate the impact of c in Section 5.4. For all other methods, we truncate the documents to a maximum of 128 tokens. We note that, among all the methods capable of utilizing both model output logits and generation outputs, we exclusively employ the latter. This choice is made in favor of a more general approach that allows for leveraging generation APIs across a wider range of closed-source LLMs. Nevertheless, we investigate the difference between using model output logits and generation outputs for our Setwise approaches in Section 5.1.
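For the logits-based variants referenced above (listwise.likelihood and the Setwise likelihood option examined in Section 5.1), the sketch below reads label likelihoods from a single Flan-T5 forward pass instead of generating a label; the prompt wording and checkpoint are illustrative assumptions rather than the exact implementation.

```python
import string
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")
model.eval()

def setwise_label_order(query, passages):
    """Order passage indices by the probability of their label (A, B, C, ...) being chosen."""
    labels = list(string.ascii_uppercase[:len(passages)])
    body = "\n".join(f"Passage {l}: {p}" for l, p in zip(labels, passages))
    prompt = (f"Given a query {query}, which of the following passages is the most "
              f"relevant one to the query?\n{body}\n"
              "Output only the passage label of the most relevant passage:")
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])
    with torch.no_grad():
        logits = model(**inputs, decoder_input_ids=decoder_input_ids).logits[0, 0]
    label_ids = [tokenizer(l, add_special_tokens=False).input_ids[0] for l in labels]
    scores = logits[label_ids]
    # One forward pass, no token generation; the ordering always uses valid labels.
    return sorted(range(len(passages)), key=lambda i: -scores[i].item())
```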
2310.09497#19
2310.09497#21
2310.09497
[ "2302.13971" ]
2310.09497#21
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
We carried out the efficiency evaluations on a local GPU workstation equipped with an AMD Ryzen Threadripper PRO 3955WX 16-Core CPU, an NVIDIA RTX A6000 GPU with 49GB of memory, and 128GB of DDR4 RAM. 5 RESULTS AND ANALYSIS 5.1 Effectiveness Results Table 2 presents results for both ranking effectiveness and efficiency on the TREC DL datasets. With regard to ranking effectiveness, it is notable that all LLM-based zero-shot ranking approaches demonstrate a significant improvement over the initial BM25 ranking. The only exception to this trend is the pointwise.qlm approach on DL2019 across all models and on DL2020 with the Flan-t5-xxl model. Interestingly, as the LLM size increases, the effectiveness of pointwise.qlm decreases. This finding is particularly unexpected, given the common assumption that larger LLMs tend to be more effective. On the other hand, the pointwise.yes_no method achieved a decent NDCG@10 score with Flan-t5-large when compared to other methods. However, its effectiveness also did not increase as model size increased. These unexpected results for both Pointwise methods might be attributed to the requirement of a more refined model output calibration process, ensuring their suitability for comparison and sorting across different documents [18]. The Listwise approaches (listwise.generation) are far less effective when tested with Flan-t5-large and Flan-t5-xl.
2310.09497#20
2310.09497#22
2310.09497
[ "2302.13971" ]
2310.09497#22
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
1 https://openai.com/pricing, last visited 12 October 2023. Table 2: Results on TREC DL. All the methods re-rank the BM25 top 100 documents. We present the ranking effectiveness in terms of NDCG@10, best values highlighted in boldface. Superscripts denote statistically significant improvements (paired Student's t-test with p ≤ 0.05 with Bonferroni correction). #Inferences denotes the average number of LLM inferences per query. Pro. tokens is the average number of tokens in the prompt for each query. Gen. tokens is the average number of generated tokens per query. Latency is the average query latency, in seconds. For each of TREC DL 2019 and TREC DL 2020, the columns are NDCG@10, #Inferences, Pro. tokens, Gen. tokens, and Latency(s); rows are grouped by model (Flan-t5-large, Flan-t5-xl, Flan-t5-xxl).
2310.09497#21
2310.09497#23
2310.09497
[ "2302.13971" ]
2310.09497#23
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Rows in each model block follow the method order: BM25 (reported once), pointwise.qlm, pointwise.yes_no, listwise.generation, listwise.likelihood, pairwise.allpair, pairwise.heapsort, pairwise.bubblesort, setwise.heapsort, setwise.bubblesort.
NDCG@10 (TREC DL 2019): BM25 .506. Flan-t5-large: .557, .654, .561, .669, .666, .657, .636, .670, .678. Flan-t5-xl: .542, .650, .569, .689, .713, .705, .683, .693, .705. Flan-t5-xxl: .506, .644, .662, .701, .699, .708, .679, .706, .711.
2310.09497#22
2310.09497#24
2310.09497
[ "2302.13971" ]
2310.09497#24
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
#Inferences (TREC DL 2019): BM25 -. Flan-t5-large: 100, 100, 245, 245, 9900, 230.3, 844.2, 125.4, 460.5. Flan-t5-xl: 100, 100, 245, 245, 9900, 241.9, 886.9, 129.5, 466.9. Flan-t5-xxl: 100, 100, 245, 245, 9900, 239.4, 870.5, 130.1, 468.3.
Pro. tokens (TREC DL 2019): BM25 -. Flan-t5-large: 15211.6, 16111.6, 119120.8, 94200.7, 3014383.1, 104952.5, 381386.3, 40460.6, 147774.1. Flan-t5-xl: 15211.6, 16111.6, 119163.0, 94446.1, 2953436.2, 110126.9, 400367.1, 41665.7, 149949.1. Flan-t5-xxl: 15211.6, 16111.6, 119334.7, 94537.5, 2794942.6, 109402, 394386, 42078.6, 150764.8.
Gen. tokens (TREC DL 2019): BM25 -. Flan-t5-large: -, -, 2581.35, -, 49500, 2303.3, 8441.6, 626.9, 2302.3. Flan-t5-xl: -, -, 2910, -, 49500, 2418.6, 8869.1, 647.4, 2334.5. Flan-t5-xxl: -, -, 2824, -, 49500, 2394, 8705.3, 650.5, 2341.6.
Latency(s) (TREC DL 2019): BM25 -. Flan-t5-large: 0.6, 0.6, 54.2, 10.0, 109.6, 16.1, 58.3, 8.0, 29.1. Flan-t5-xl: 1.4, 1.5, 71.4, 12.5, 254.9, 20.5, 75.1, 9.6, 35.2. Flan-t5-xxl: 3.7, 3.9, 100.1, 36.6, 730.2, 45.0, 162.5, 20.2, 72.6.
NDCG@10 (TREC DL 2020): BM25 .480. Flan-t5-large: .567, .615, .547, .626, .622, .619, .589, .618, .624. Flan-t5-xl: .542, .636, .547, .672, .682, .692, .662, .678, .676.
2310.09497#23
2310.09497#25
2310.09497
[ "2302.13971" ]
2310.09497#25
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
NDCG@10 (TREC DL 2020), Flan-t5-xxl: .492, .632, .637, .690, .688, .699, .681, .688, .686.
#Inferences (TREC DL 2020): BM25 -. Flan-t5-large: 100, 100, 245, 245, 9900, 226.8, 778.5, 124.2, 457.4. Flan-t5-xl: 100, 100, 245, 245, 9900, 244.3, 863.9, 127.8, 463.5. Flan-t5-xxl: 100, 100, 245, 245, 9900, 240.5, 842.9, 128.1, 467.9.
Pro. tokens (TREC DL 2020): BM25 -. Flan-t5-large: 15285.2, 16185.2, 119629.6, 95208.3, 3014232.7, 104242.1, 357358.5, 40362.0, 148947.3. Flan-t5-xl: 15285.2, 16185.2, 119814.3, 95298.7, 2949457.6, 111341, 394954.2, 41569.1, 151249.8. Flan-t5-xxl: 15285.2, 16185.2, 119951.6, 95482.7, 2794928.4, 110211.8, 387359.2, 41633.7, 152709.5.
Gen. tokens (TREC DL 2020): BM25 -. Flan-t5-large: -, -, 2460.1, -, 49500, 2268.3, 7785.4, 621.1, 2287.1. Flan-t5-xl: -, -, 2814.7, -, 49500, 2443.3, 8638.5, 638.9, 2317.6. Flan-t5-xxl: -, -, 2707.9, -, 49500, 2404.8, 8428.5, 640.6, 2339.6.
Latency(s) (TREC DL 2020): BM25 -. Flan-t5-large: 0.5, 0.6, 52.0, 10.0, 108.9, 16.1, 54.1, 8.0, 28.9. Flan-t5-xl: 1.4, 1.5, 69.0, 12.6, 254.8, 20.8, 74.3, 9.7, 35.3. Flan-t5-xxl: 3.7, 3.9, 97.3, 36.9, 730.5, 45.2, 158.8, 20.0, 73.2.
However, listwise.generation shows some improvement with Flan-t5-xxl.
2310.09497#24
2310.09497#26
2310.09497
[ "2302.13971" ]
2310.09497#26
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
These results may be attributed to the fact that generating a ranking list requires fine-grained relevance preferences across multiple documents, a task that may exceed the capabilities of smaller models. In contrast, the listwise.likelihood approach, empowered by our Setwise prompt, markedly enhances the ranking effectiveness of the Listwise approach, even when utilizing smaller models. We acknowledge, however, that listwise.likelihood requires access to the model output logits, whereas listwise.generation does not. In the case of Pairwise and Setwise approaches, they consistently exhibit good ranking effectiveness across various model sizes and datasets. In Table 3, we present the zero-shot ranking effectiveness of all methods (with the exception of pairwise.allpair due to its computationally intensive nature) across 9 widely-used BEIR datasets. Notably, we identify several trends that deviate from observations made on the TREC DL datasets. Firstly, pointwise.qlm exhibits a slightly higher average NDCG@10 score compared to pointwise.yes_no. Moreover, the effectiveness of pointwise.qlm remains stable even as the model size increases. Secondly, listwise.generation demonstrates comparable effectiveness to listwise.likelihood, with the majority of gains obtained on the Touche dataset, where other methods perform worse. Lastly, both Pairwise and Setwise methods that leverage the bubble sort algorithm consistently demonstrate higher average NDCG@10 compared to when they utilize the heap sort algorithm, regardless of the model size. Overall, the variety of results we observe across different experimental settings shows the importance of not drawing conclusions about effectiveness from single datasets or model sizes. 5.2 Efficiency Results Regarding computational and runtime efficiency, the results presented in Table 2 indicate that both Pointwise methods exhibit the fewest inferences and prompt tokens, and no generated tokens. Furthermore, their computational efficiency and query latency are optimized due to efficient GPU-based batched inference. It is worth noting, however, that these methods do come with certain limitations. Specifically, they require access to the model output logits (thus currently limiting their use to just open-source LLMs) and are less effective when used with larger models. In contrast, pairwise.allpair appears to be the most expensive method, consuming the largest number of prompt tokens and generated tokens due to the large number of document pair preferences that need to be inferred. Hence, even with GPU batching, pairwise.allpair still has the worst query latency. In contrast, approaches utilizing our Setwise prompting, namely listwise.likelihood, setwise.heapsort, and setwise.bubblesort, are far more efficient than their counterparts, listwise.generate, pairwise.heapsort, and pairwise.bubblesort respectively. Notably, these improvements
2310.09497#25
2310.09497#27
2310.09497
[ "2302.13971" ]
2310.09497#27
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Hence, even with GPU batching, pairwise.allpair still has the worst query latency. In constrast, approaches utilizing our Setwise promptingâ namely, list- wise.likelihood, setwise.heapsort, and setwise.bubblesort, are far more efficient than their counterparts, listwise.generate, pairwise.heapsort, and pairwise.bubblesort respectively. Notably, these improvements A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models Arxiv, 2023, preprint Table 3: Overall NDCG@10 obtained by methods on BEIR datasets. The best results are highlighted in boldface. Superscripts denote statistically significant improvements (paired Studentâ s t-test with ð â ¤ 0.05 with Bonferroni correction). # Methods Covid NFCorpus Touche DBPedia SciFact Signal News Robust04 Avg e g r a l - 5 t - n a l F l x - 5 t - n a l F l x x - 5 t - n a l F a BM25 .322 .442 .318 .436 _ TREC DL 2019 fer TREC DL 2020 0.70 0.68 0.68 0.66 0.66 0.64 i o g g © 0.64 oe ia] re) 0:62 Q 0.60 Cg 4 0.60 0.58 0.58 0.56 0.56 we 054 gp @0 0.54 0.52 - 0 50 100 150 0 50 100 150 Latency (s) Latency (s) @ fian-t5-large, generation @ â fian-t5-x1, generation @ fian-t5-xx!, generation @ flan-ts-large, likelihood @ fian-t5-xi, likelihood @ fian-t5-xx, likelihood
2310.09497#26
2310.09497#28
2310.09497
[ "2302.13971" ]
2310.09497#28
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
(a) Setwise (b) Listwise. Figure 3: Effectiveness and efficiency trade-offs offered by different approaches. (a, Setwise): the numbers in the scatter plots represent the number of compared documents c at each step of the sorting algorithm. (b, Listwise): the numbers in the scatter plots represent the number of sliding window repetitions r. Each panel plots NDCG@10 against query latency (s) on TREC DL 2019 and TREC DL 2020, with separate series for Flan-t5-large, Flan-t5-xl, and Flan-t5-xxl (heap sort vs. bubble sort for Setwise; generation vs. likelihood for Listwise).
2310.09497#27
2310.09497#29
2310.09497
[ "2302.13971" ]
2310.09497#29
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
are achieved without compromising effectiveness. Section 5.4 will discuss further approaches to improving efficiency. Table 5 shows calculations for the estimated cost of API calls; this estimation is obtained using the OpenAI GPT-4 cost structure, and applying this same structure to the number of tokens measured in our experiments. At the time of writing, OpenAI costs were $0.03/1,000 prompt tokens and $0.06/1,000 generated tokens. To estimate the token count if GPT-4 were used, we average the number of prompt tokens and generated tokens from Table 2 across Flan-T5 models. The setwise.bubblesort and pairwise.heapsort methods show comparable NDCG@10, but pairwise.heapsort is cheaper. On the other
2310.09497#28
2310.09497#30
2310.09497
[ "2302.13971" ]
2310.09497#30
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
The setwise.bottlesort and pairwise.heapsort methods show compa- rable NDCG@10, but pairwise.heapsort is cheaper. On the other Arxiv, 2023, preprint Table 4: Generate vs. likelihood inference results on TREC DL 2019. #Inf. is the average number of LLM inferences per query. Pro. Tokens is the average number of tokens in the prompt for each query. Gen. tokens is the average number of generated tokens per query. Lat. is the average query latency in seconds. e g r a l l x l x x Methods heapsort.generate heapsort.likelihood bubblesort.generate bubblesort.likelihood heapsort.generate heapsort.likelihood bubblesort.generate bubblesort.likelihood heapsort.generate heapsort.likelihood bubblesort.generate bubblesort.likelihood NDCG@10 #Inf. Pro. tokens Gen. tokens Lat.(s) 8 5 29 19 10 6 35 20 20 17 73 60 .670 .670 .678 .678 .693 .693 .705 .705 .706 .706 .711 .711 125 125 461 461 130 130 467 467 130 130 468 468 40461 40458 147774 147752 41666 41667 149949 149949 42077 42071 150765 150765 627 - 2302 - 647 - 2335 - 651 - 2342 - Table 5: Estimated cost of API calls across different methods, in US dollars. Models ordered from most (top) to least effective (bottom) based on NDCG@10, macro-average across both TREC DL datasets.
2310.09497#29
2310.09497#31
2310.09497
[ "2302.13971" ]
2310.09497#31
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Method (NDCG@10, TREC DL 2019 cost, TREC DL 2020 cost): pairwise.heapsort: 0.6800, $3.40, $3.39. setwise.bubblesort: 0.6800, $4.67, $4.62. pairwise.allpair: 0.6783, $90.60, $90.59. listwise.likelihood: 0.6745, $2.86, $2.83. setwise.heapsort: 0.6743, $1.27, $1.28. pairwise.bubblesort: 0.6550, $11.89, $12.28. pointwise.yes_no: 0.6398, $0.49, $0.48. listwise.generation: 0.5929, $3.75, $3.49. pointwise.qlm: 0.5343, $0.46, $0.46.
hand, our setwise.heapsort provides a reduction of about 62% in cost while only marginally reducing NDCG@10 (a 0.8% loss). 5.3 Impact of using Output Logits on Setwise Similar to Pairwise methods, if the model output logits are accessible, our Setwise approaches can also utilize these logits to estimate the likelihood of the most relevant document label. This approach eliminates the need for token generation, requiring only a single LLM forward inference to yield the output results, thus offering a more efficient process. To assess the impact of incorporating model output logits in our Setwise approaches, we conducted experiments on the TREC DL 2019 dataset, with results presented in Table 4. The findings indicate that using model logits resulted in no change in ranking effectiveness, but did lead to lower query latency. This improvement stems from the absence of generated tokens for likelihood estimation. Hence, we conclude that if access to the model output is available, employing likelihood can further enhance the efficiency of our Setwise approach. 5.4 Effectiveness and Efficiency Trade-offs Our Setwise prompting is characterized by a hyperparameter c controlling the number of compared documents within the prompt at each step of the sorting algorithms. In the previous experiments, we always set c = 3. Adjusting this hyperparameter allows one
2310.09497#30
2310.09497#32
2310.09497
[ "2302.13971" ]