# Contrastive Post-training Large Language Models on Data Curriculum (arXiv:2310.02263)

# ACKNOWLEDGMENT

We would like to thank Ethan Chau and Michael Santacroce for discussion on this project.
# A EXAMPLES OF GENERATED RESPONSES

# Instruction #1: How did US states get their names?

LLaMA SFT on GPT-4: The official name of a state comes from the name of the state's capital city, followed by an additional descriptive suffix. For example, Texas is named after the city of San Antonio, while California is named after the city of San Diego, and New York is named after the city of New York. In some cases, the state name is also derived from the name of a prominent natural feature or the name of a city that no longer exists. For more information about the history of US state names, visit the website of the US Census Bureau at www.census.gov.

LLaMA Curriculum DPO (Curriculum 3): There are many stories and myths behind the naming of US states.
Here are some popular ones:
1. Alabama: Named after the Native American tribe known as "Alabama."
2. Alaska: Named after the Aleutian word "Alyeska" which means "great land."
3. Arizona: Named after the Spanish word for "arizona," which means "little spring."
4. Arkansas: Named after the Sioux word "as-kah-ras-sha," which means "land of downriver people."
...
2310.02263#55 | Contrastive Post-training Large Language Models on Data Curriculum | Orca US states have diverse origins for their names, which include indigenous words, European languages, geographical features, historical events, and personal names. Hereâ s a brief overview of how some states got their names: 1. Alabama: Derived from the Native American tribe called the Alabama, meaning â thicket breakersâ or â cotton pickers.â 2. Alaska: Based on the Aleut word â Alaksa,â which means â great land.â 3. Arizona: Derived from the Native American word â Arivipa,â meaning â small spring.â 4. Arkansas: Originally â Arcahaise,â meaning â land of the south windâ in the Native American Quapaw language... Orca with DPO The naming of U.S. states is a diverse and fascinating history that reflects the cultural, historical, and geographical influences of the various groups of people who settled and governed the land. Hereâ s a brief overview of how some U.S. states got their names: 1. Virginia: Named after the Virgin Queen, Queen Elizabeth I of England, who was known as the â Virgin Queenâ because she never married and re- mained a virgin. | 2310.02263#54 | 2310.02263#56 | 2310.02263 | [
"2309.00267"
] |
2310.02263#56 | Contrastive Post-training Large Language Models on Data Curriculum | The state was founded in 1607 by the Virginia Company, a joint-stock company chartered by King James I. 2. Massachusetts: The name comes from the Massachusett tribe, one of the Native American tribes that inhabited the region. The name translates to â large hillâ or â place of big rocks.â 3. Pennsylvania: Named after the Pennsylvania Colony, a British colony that was established by William Penn, an English Quaker and founder of the City of Brotherly Love (Philadelphia). The name Pennsylvania com- bines the name of the colonyâ s founder with the Latin word â Pennâ (mean- ing â hillâ or â topâ ), which refers to the colonyâ | 2310.02263#55 | 2310.02263#57 | 2310.02263 | [
"2309.00267"
] |
2310.02263#57 | Contrastive Post-training Large Language Models on Data Curriculum | s geographical location on the Pennines, a range of hills and mountains in the Appalachian Mountains. 4. Maryland: Named after the wife of King Charles I of England, Queen Henrietta Maria, who was known as the â Mariaâ in her name. The â land of Maryâ is derived from her name, and the â dâ at the end is an Old English suffix that indicates â territory of.â ... 14 Preprint Instruction #2 LLaMA SFT on GPT-4 LLaMA Curricu- lum DPO (Cur- riculum 3) Orca How do I wrap a present neatly? To wrap a present neatly, you can start by choosing the right paper for the occasion. White, kraft or black paper are common choices for wrapping presents. Then, you can cut the paper to size, wrapping it around the present with the paper folded in half lengthwise. You can also use tape to secure the ends of the paper. For a more professional look, you can add a bow or a ribbon to the top of the package, and write a message or add a tag. To wrap a present neatly, follow these steps: 1. Select a good-quality paper or wrapping paper that matches the color scheme or theme of the present. 2. Lay the present on a flat, clean surface. | 2310.02263#56 | 2310.02263#58 | 2310.02263 | [
"2309.00267"
] |
2310.02263#58 | Contrastive Post-training Large Language Models on Data Curriculum | If itâ s rectangular, place it on the left side, and if itâ s cylindrical, place it on its side. 3. Fold the wrapping paper in half, at the center crease, then in half again, making a triangular fold at the center crease. 4. Fold the right side of the triangle over the center crease, leaving about 1-inch of the right side exposed. 5. Fold the bottom of the triangle up, so that it touches the center crease, then tuck the right side into the left side of the package. 6. Take a small piece of tape and wrap it around the right side of the pack- age, securing it to the left side of the package. Repeat on the other side of the package. 7. Wrap the present with the top of the package first, folding the wrapping paper over the top, and tucking it under the box. 8. Secure the end of the wrapping paper with more tape, and wrap the rest of the package, leaving a little space at the bottom to tuck in the ends. 9. Fold the ends of the wrapping paper over the box, and tape or secure them with a rubber band. 10. If you want to add a bow or ribbon, place it on top of the package, and secure it with tape or a rubber band. By following these steps, you can wrap a present neatly and create a nice presentation for your gift. Wrapping a present neatly involves a few simple steps: 1. Choose the right materials: Select a clean, large piece of wrapping paper or gift bag that is proportional to the size of the gift. You may also use tissue paper, fabric, or reusable bags for a more eco-friendly option. 2. Measure and cut the paper: Measure the dimensions of the gift (length, width, and height) and add extra paper or bag length to allow for folding and tucking. Cut the paper or fabric to size, leaving enough extra for folding and tape. 3. Place the gift in the center: Lay the gift flat in the center of the paper or fabric. | 2310.02263#57 | 2310.02263#59 | 2310.02263 | [
"2309.00267"
] |
2310.02263#59 | Contrastive Post-training Large Language Models on Data Curriculum | If youâ re using a gift bag, place the gift on top of the bag and ensure itâ s centered. 4. Fold the sides: Fold the sides of the paper or fabric over the gift, aligning the edges with the sides of the gift. For a more polished look, fold the paper or fabric inward by about an inch and then fold it back out for a crisp edge. 5. Fold the bottom: Fold the bottom of the paper or fabric up over the gift, leaving enough excess for the bow or ribbon. Ensure the fold is even and the gift is snug within the folded paper or fabric. 6. Secure the wrap with tape: On the top and bottom edges of the wrap, apply double-sided tape or a small piece of clear packing tape for a clean hold. | 2310.02263#58 | 2310.02263#60 | 2310.02263 | [
"2309.00267"
] |
2310.02263#60 | Contrastive Post-training Large Language Models on Data Curriculum | If youâ re using a gift bag, ensure the handles are folded over and secured with tape. 7. Add a decorative touch: Add a ribbon, bow, or gift tag to personalize the present. If using a gift bag, you can add a handle wrap or a fabric ribbon for a more polished look. 8. Finish with a gift holder: Use a non-slip gift holder or shipping weight to keep the wrapped gift stable and secure during transport. Remember to keep the wrapping neat and tidy, with clean folds and even tape application. This will ensure your present looks beautiful. | 2310.02263#59 | 2310.02263#61 | 2310.02263 | [
"2309.00267"
] |
2310.02263#61 | Contrastive Post-training Large Language Models on Data Curriculum | 15 Preprint Orca with DPO Wrapping a present neatly can make the gift look more appealing and show the recipient that you put effort into your gift. Hereâ s a step-by-step guide to wrapping a present neatly: 1. Choose the wrapping paper: Select a wrapping paper that complements the giftâ s shape, size, and color. If the gift is rectangular, choose a paper that is wider than the width of the gift. If itâ s a cylindrical gift, choose a paper that is tall enough to wrap around it and overlap slightly. 2. Cut the paper: Use a pair of sharp scissors to cut the wrapping paper to the appropriate size. For a rectangular gift, cut the paper so that itâ s large enough to wrap around the gift and overlap by at least 2-3 inches (5-7.5 cm) on two opposite sides. For a cylindrical gift, cut the paper so that itâ s tall enough to wrap around the gift and overlap by at least 1-2 inches (2.5-5 cm) on two opposite sides. 3. Fold the paper: Lay the paper flat on a clean, flat surface. Fold the paper in half lengthwise (if itâ s a rectangular gift) or in half around its longest diameter (if itâ s a cylindrical gift). | 2310.02263#60 | 2310.02263#62 | 2310.02263 | [
"2309.00267"
] |
2310.02263#62 | Contrastive Post-training Large Language Models on Data Curriculum | This will create a crease down the center of the paper, making it easier to fold and align the edges. 4. Position the gift: Place the gift in the center of the folded paper, aligning the edges with the crease. For a rectangular gift, the gift should be centered horizontally within the paper, and the edges of the paper should align with the top and bottom edges of the gift. For a cylindrical gift, the gift should be centered vertically within the paper, and the edges of the paper should align with the top and bottom edges of the gift. 5. Tuck and fold: Gently tuck the edges of the paper under the gift, folding them neatly around the sides and bottom of the gift. Make sure the paper is smooth and free of wrinkles. For a rectangular gift, tuck the top edge under the gift as well, and then fold the bottom edge up over the gift, creating a crisp edge. For a cylindrical gift, fold the paper around the top and bottom edges, creating a smooth, even edge. 6. Secure the paper: If the wrapping paper is slippery, you may need to use tape to secure the edges. Apply a small piece of tape on the inside edge where the paper meets the gift, and gently press the paper in place. Con- tinue applying tape to other edges, making sure the paper stays smooth and wrinkle-free. For a more polished look, you can also use double-sided tape or a glue stick to attach the paper. 7. Add a decorative touch: To make your present look even more festive, consider adding a ribbon, gift tag, or a decorative element like a confetti pocket, bows, or decorative paper. Choose colors and patterns that comple- ment the gift and the recipientâ s style. 8. Place the gift in a gift bag: If you want to keep the gift wrapped but still use a gift bag, simply place the gift inside the wrapped present. This can be especially helpful if the present is heavy or fragile. 9. Store the gift: Keep the wrapped present in a cool, dry place until itâ s time to give it to the recipient. Avoid storing it near direct heat sources or sunlight, which can cause the paper to wrinkle. | 2310.02263#61 | 2310.02263#63 | 2310.02263 | [
"2309.00267"
] |
2310.02263#63 | Contrastive Post-training Large Language Models on Data Curriculum | 16 Preprint Instruction #3 LLaMA SFT on GPT-4 LLaMA Curricu- lum DPO (Cur- riculum 3) What are the best exercises for beginners? The following list of exercises is designed to target all the major muscle groups and help beginners achieve a well-rounded fitness routine. Squats: Squats are a great exercise for building strength in the legs, glutes and core. Itâ s recommended that you start with a weight that you can handle for 6-8 reps to start, and then gradually increase the weight over time. Deadlifts: Deadlifts are great for building strength in the back, hips and glutes. Start by using a lighter weight and gradually increase the weight over time. Push-ups: Push-ups are a great exercise for building upper body strength, especially the chest, shoulders and triceps. Start by doing a few reps and gradually increase the number of reps over time. Planks: Planks are a great exercise for building core strength, and there are many variations to choose from. Start with a short plank hold, then gradually increase the duration and intensity over time. Sit-ups: Sit-ups are a great exercise for the abdominals. Start with a slow, controlled motion and gradually increase the number of reps and the speed over time. Leg raises: Leg raises are a great exercise for the lower abdominals. Start with a slow, controlled motion and gradually increase the number of reps and the speed over time. Bicep curls: Bicep curls are a great exercise for building strength in the biceps. Start with a lighter weight and gradually increase the weight over time. Tricep dips: Tricep dips are a great exercise for building strength in the triceps. Start with a slow, controlled motion and gradually increase the number of reps and the speed over time. There are several exercises that can be great for beginners. Here are five general categories of exercises that are great for beginners: 1. Compound exercises: These exercises use more than one joint or muscle group to perform the movement. Examples include squats, deadlifts, and bench presses. These exercises are great for beginners because they build strength and lean muscle in multiple areas of the body at once. 2. Isolation exercises: | 2310.02263#62 | 2310.02263#64 | 2310.02263 | [
"2309.00267"
] |
2310.02263#64 | Contrastive Post-training Large Language Models on Data Curriculum | These exercises focus on a specific muscle group or muscle fiber, using a machine or free weight to move a joint through a small range of motion, isolating the targeted muscle. These exercises are great for beginners because they help improve muscle definition and control, and they can help you feel more comfortable with weight training techniques. 3. Aerobic exercises: These exercises use large muscle groups and increase heart rate and breathing, often through continuous movement. They are great for beginners because they help improve cardiovascular health and can help reduce the risk of chronic diseases. 4. Stability exercises: These exercises focus on the stabilizing muscles of the body and help improve balance and coordination. These exercises are great for beginners because they help prevent injury and improve overall motor control. 5. Strengthening exercises: These exercises help strengthen the entire body, improving muscle tone, balance, and overall strength. These exercises are great for beginners because they help improve posture, reduce the risk of injury, and improve confidence in physical abilities. Overall, the best exercises for beginners are those that meet their specific fitness goals, but that also prioritize safety and proper form. | 2310.02263#63 | 2310.02263#65 | 2310.02263 | [
"2309.00267"
] |
2310.02263#65 | Contrastive Post-training Large Language Models on Data Curriculum | Itâ s important to work with a qualified fitness professional to develop a safe, effective workout routine that will help you see results and improve your overall health. 17 Preprint Orca Orca with DPO 1. Walking: A simple and effective exercise for all ages, walking helps improve cardiovascular health and can be easily adapted to different fitness levels. 2. Swimming: A low-impact exercise that works the entire body, swim- ming is great for building endurance and muscle strength. 3. Yoga: Yoga offers a variety of poses and stretches that can be modified to suit different skill levels, making it a great starting point for beginners. 4. Cycling: A popular cardio exercise, cycling can be done indoors or out- doors and is a great way to build endurance and strengthen the lower body. 5. Running: Although it may be challenging for beginners, running can improve cardiovascular fitness and help with weight loss. 6. Jumping jacks: A classic exercise that works the entire body and can be easily done at home or in a park. 7. Planks: A core-strengthening exercise that helps improve posture and balance, planks can be easily modified for beginners. 8. Squats: A basic bodyweight exercise that targets the lower body, squats can be easily learned and performed at home or in a gym. 9. Push-ups: A classic upper body exercise, push-ups can be modified to suit different fitness levels and can be done at home or in a gym. 10. Stretching: Incorporating stretching exercises into a beginnerâ s routine can help improve flexibility and prevent injuries. Remember to start slow and listen to your body when trying new exercises. Always consult with a healthcare professional before starting a new exer- cise program. When starting an exercise routine, itâ s essential to choose beginner-friendly exercises that are safe, effective, and enjoyable. Hereâ s a list of the best exercises for beginners that cater to various fitness goals: 1. Walking: Walking is an easy and accessible exercise that can be done by people of all ages and fitness levels. | 2310.02263#64 | 2310.02263#66 | 2310.02263 | [
"2309.00267"
] |
2310.02263#66 | Contrastive Post-training Large Language Models on Data Curriculum | Itâ s an excellent cardiovascular work- out that helps improve lung capacity, burn calories, and strengthen muscles. 2. Swimming: Swimming is a low-impact exercise that works your entire body. Itâ s ideal for beginners as it provides a full-body workout while be- ing gentle on your joints. Swimming can help improve your cardiovascular fitness, build muscle, and increase flexibility. 3. Yoga: Yoga is a versatile exercise that focuses on flexibility, strength, and mindfulness. Many yoga poses are beginner-friendly and can be eas- ily modified to accommodate different skill levels. Yoga can improve your balance, flexibility, and overall well-being. 4. | 2310.02263#65 | 2310.02263#67 | 2310.02263 | [
"2309.00267"
] |
2310.02263#67 | Contrastive Post-training Large Language Models on Data Curriculum | Cycling: Cycling is an excellent aerobic workout thatâ s easy on your joints. Itâ s a low-impact exercise that can be done indoors or outdoors, making it a versatile option for beginners. Cycling can improve your car- diovascular fitness, build muscle, and increase flexibility. 5. Pilates: Pilates is a series of exercises that focus on strengthening your core, improving flexibility, and enhancing your posture. Pilates exercises can be done with or without equipment and are suitable for all fitness levels. This method is great for beginners as it helps build a strong foundation for overall fitness. 6. Bodyweight Squats: Bodyweight squats are a simple, effective exercise that can improve your leg strength, balance, and overall fitness. This exer- cise can be easily modified to suit different skill levels and is suitable for beginners. 7. Push-ups: Push-ups are an excellent exercise for building upper body strength, improving posture, and increasing muscle tone. This exercise can be easily modified to suit different skill levels, making it suitable for begin- ners. | 2310.02263#66 | 2310.02263#68 | 2310.02263 | [
"2309.00267"
] |
2310.02263#68 | Contrastive Post-training Large Language Models on Data Curriculum | 18 Preprint 8. Planks: Planks are a simple, effective exercise that targets your core, arms, and glutes. This exercise can help improve your balance, strengthen your abs, and enhance your overall core stability. 9. Lunges: Lunges are a great exercise for building lower body strength, improving balance, and increasing flexibility. This exercise can be easily modified to suit different skill levels, making it a suitable option for begin- ners. 10. Stretching: Stretching is an essential component of any exercise rou- tine, as it helps to prevent injuries, improve flexibility, and increase mobil- ity. Beginners should focus on dynamic stretching before and after exercise to improve their range of motion and reduce the risk of injury. Remember to start slowly, listen to your body, and gradually increase the intensity of your exercise routine. | 2310.02263#67 | 2310.02263#69 | 2310.02263 | [
"2309.00267"
] |
2310.02263#69 | Contrastive Post-training Large Language Models on Data Curriculum | Itâ s essential to be consistent and patient, as seeing results may take time. Consult a healthcare professional or fitness professional before starting any new exercise routine, especially if you have any existing health conditions or concerns. Instruction #4 LLaMA SFT on GPT-4 How is oil turned into gasoline? Oil is a naturally occurring, flammable liquid found in the Earthâ s crust. When oil is extracted from the Earth, it is refined to separate the different components and create various products, including gasoline. The process of converting oil into gasoline is called refining. During this process, oil is heated to high temperatures and then separated into its dif- ferent components. The main components of petroleum are a mix of hydrocarbons, which are molecules composed of hydrogen and carbon atoms. Different groups of hydrocarbons have different properties, and these properties are used to separate them during the refining process. One group of hydrocarbons is called naphtha, and it is used to make gaso- line. Naphtha is separated from other hydrocarbons during the refining process, and it is then further processed to separate it into different compo- nents, including gasoline. Gasoline is then blended with other components, including ethanol and oxygenates, to create a final product that is safe and reliable for use in cars. Once the final product is created, it is tested for quality and then distributed to gas stations, where it is used to fuel cars. | 2310.02263#68 | 2310.02263#70 | 2310.02263 | [
"2309.00267"
] |
2310.02263#70 | Contrastive Post-training Large Language Models on Data Curriculum | Answered by Sally Wong, Assistant at HowStuffWorks. 19 Preprint LLaMA Curricu- lum DPO (Cur- riculum 3) Orca There are several steps that crude oil goes through to become gasoline. Hereâ s a brief overview: 1. Drilling - The first step is to extract crude oil from the earthâ s surface through drilling. 2. Transportation - Once extracted, the oil is transported to refineries. 3. Refinement - The crude oil is heated, separated, and purified at the refin- ery to produce a number of different products, including gasoline. 4. Pipeline transportation - The gasoline is transported long distances by pipeline to gas stations and other outlets. 5. Storage - Gasoline is stored in large tanks or containers, where it can be pumped into vehicles at the gas station. 6. Pumping - The gasoline is pumped from the underground storage tank at the gas station into the vehicleâ s fuel tank. 7. Consumption - The gasoline is used by the driver to power the engine of his or her vehicle, typically by burning it in the engineâ s cylinders. When crude oil is extracted from the earth, it is a thick and black substance that is almost entirely composed of hydrocarbons. These hydrocarbons are molecules that are made up of hydrogen and carbon atoms joined together in a long chain. The refineries break down these long chains into smaller molecules that are more easily processed. The most important molecules for gasoline are called paraffins. These molecules have a linear structure, meaning that they are long and straight, and they have an even number of carbon atoms. By breaking down the long chains of hydrocarbons and sep- arating out the paraffins, refineries are able to produce gasoline. Oil is transported to refineries and gas stations by pipeline, railroad, and truck. The gasoline remains in transportation tanks or in pipelines until it reaches its final destination, where it is pumped into storage tanks or di- rectly into vehicles. Then, the gasoline is pumped into the vehicleâ s fuel tank and the cycle begins again, as the driver starts the car and the gasoline is burned. Oil is turned into gasoline through a multi-step process that involves refin- ing, processing, and blending. Here is a general overview of the process: 1. Crude oil extraction: Crude oil is extracted from the Earth using drilling techniques. | 2310.02263#69 | 2310.02263#71 | 2310.02263 | [
"2309.00267"
] |
2310.02263#71 | Contrastive Post-training Large Language Models on Data Curriculum | Once the oil is extracted, it is transported to a refinery. 2. Refining: At the refinery, crude oil is processed and refined into various products, including gasoline. The refining process involves separating the oil into its different components based on their boiling points. This is done through a series of distillation columns. 3. Naming conventions: The different components of crude oil are named based on their boiling points and uses. For example, gasoline is a product obtained from the fractional distillation of crude oil with a boiling range of approximately -16 to 26 degrees Celsius. 4. Gasoline production: In the gasoline production process, hydrotreating and reforming are two key steps. Hydrotreating is used to remove sulfur and other impurities from the gasoline, while reforming is used to increase the octane rating of the gasoline, making it more suitable for use in vehi- cles. 5. | 2310.02263#70 | 2310.02263#72 | 2310.02263 | [
"2309.00267"
] |
2310.02263#72 | Contrastive Post-training Large Language Models on Data Curriculum | Blending: Once the gasoline has been produced, it is blended with other additives to improve its performance, such as increasing its octane rating, reducing emissions, and improving fuel efficiency. The final gasoline prod- uct is then transported to distribution centers and gas stations. 6. Gasoline storage and distribution: Gasoline is stored in underground tanks and distributed to gas stations through pipelines or trucks. From there, consumers purchase the gasoline for use in their vehicles. Overall, the process of turning oil into gasoline involves a series of complex refining and processing steps, ensuring that the end product is suitable for use in vehicles and meets environmental and performance standards. | 2310.02263#71 | 2310.02263#73 | 2310.02263 | [
"2309.00267"
] |
2310.02263#73 | Contrastive Post-training Large Language Models on Data Curriculum | 20 Preprint # Orca with DPO Oil is a naturally occurring hydrocarbon mixture found deep beneath the Earthâ s surface. To turn oil into gasoline, several refining processes take place at an oil refinery. Hereâ s a simplified overview of the process: 1. Crude oil extraction: Crude oil is extracted from the ground using drilling techniques. Once the oil is extracted, it is transported to a refin- ery. 2. Preliminary treatment: At the refinery, the crude oil undergoes a pre- liminary treatment to remove impurities like water, sand, and other solid particles. This process involves the use of desalters, which separate the water from the oil. The oil also goes through a degasser to remove gas hy- drocarbons. 3. Distillation: The crude oil is then subjected to a process called fractional distillation. In this process, the crude oil is heated and separated into var- ious components based on their boiling points. The oil is divided into a series of distillation columns, where different fractions (such as gasoline, diesel, and jet fuel) are collected. The gasoline-producing fractions are typ- ically collected from columns 3 to 5. 4. | 2310.02263#72 | 2310.02263#74 | 2310.02263 | [
"2309.00267"
] |
2310.02263#74 | Contrastive Post-training Large Language Models on Data Curriculum | Treatment and stabilization: The gasoline obtained from the distillation columns goes through further treatment processes to remove any remaining impurities, such as heavy metals and sulfur compounds. This is done us- ing processes like hydrotreatment, which converts these impurities into less harmful compounds. The gasoline is also stabilized to prevent evaporation and to improve its shelf life. 5. Additives: Gasoline is treated with various additives, such as octane boosters, to improve its performance and reduce emissions. These addi- tives enhance the gasolineâ s octane rating, which determines its ability to resist engine knocking. 6. Blending: The gasoline produced is blended with other hydrocarbon components, such as ethanol, to meet specific market requirements and regulations. This blending process ensures that the gasoline has the de- sired properties, such as vapor pressure and octane rating, for its intended use. Once the gasoline has undergone these processes, it is ready for transporta- tion and distribution to fuel stations, where it is pumped into vehiclesâ fuel tanks. | 2310.02263#73 | 2310.02263#75 | 2310.02263 | [
"2309.00267"
] |
arXiv:2310.02255v3 [cs.CV] 21 Jan 2024

Published as a conference paper at ICLR 2024

MATHVISTA: EVALUATING MATHEMATICAL REASONING OF FOUNDATION MODELS IN VISUAL CONTEXTS

Pan Lu1,3, Hritik Bansal1, Tony Xia1, Jiacheng Liu2, Chunyuan Li3, Hannaneh Hajishirzi2, Hao Cheng3, Kai-Wei Chang1, Michel Galley3, Jianfeng Gao3
1UCLA, 2University of Washington, 3Microsoft Research, Redmond
https://mathvista.github.io

# ABSTRACT

Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit impressive problem-solving skills in many tasks and domains, but their ability in mathematical reasoning in visual contexts has not been systematically studied. To bridge this gap, we present MATHVISTA, a benchmark designed to combine challenges from diverse mathematical and visual tasks. It consists of 6,141 examples, derived from 28 existing multimodal datasets involving mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and PaperQA). Completing these tasks requires fine-grained, deep visual understanding and compositional reasoning, which all state-of-the-art foundation models find challenging. With MATHVISTA, we have conducted a comprehensive, quantitative evaluation of 12 prominent foundation models. The best-performing GPT-4V model achieves an overall accuracy of 49.9%, substantially outperforming Bard, the second-best performer, by 15.1%. Our in-depth analysis reveals that the superiority of GPT-4V is mainly attributed to its enhanced visual perception and mathematical reasoning. However, GPT-4V still falls short of human performance by 10.4%, as it often struggles to understand complex figures and perform rigorous reasoning. This significant gap underscores the critical role that MATHVISTA will play in the development of general-purpose AI agents capable of tackling mathematically intensive and visually rich real-world tasks.
We further explore the new ability of self-verification, the application of self-consistency, and the interactive chatbot capabilities of GPT-4V, highlighting its promising potential for future research.

# INTRODUCTION

Mathematical reasoning stands as a testament to the intricacies of human intelligence (Kahneman, 2011). It requires rigorous logical thinking, domain-specific knowledge, and the ability to engage in multistep reasoning processes (Lightman et al., 2023). This complexity is observed not only in textual scenarios but also significantly in visual contexts. For instance, when assessing a child's mathematical and reasoning capabilities, problems are often designed to encompass visual contexts in addition to arithmetic calculations (Stipek & Iver, 1989; Pollitt et al., 2020). At the same time, AI agents with strong mathematical reasoning capabilities in visual contexts have a wide range of real-world applications, such as solving complex problems in educational disciplines (Seo et al., 2015; Wang et al., 2017), helping analysts with logical queries about statistical data (Wu et al., 2023; Yang et al., 2023a), and assisting in theorem proving and scientific discovery in advanced research fields (Taylor et al., 2022; Dong et al., 2023; Trinh et al., 2024).

Numerous datasets have been curated to assess the mathematical reasoning abilities of AI systems, with most presented purely in text form. Some datasets such as ChartQA (Lu et al., 2021a; Dahlgren Lindström & Abraham, 2022; Masry et al., 2022) have explored mathematical reasoning in vision-language settings. However, these datasets tend to either focus on specific tasks, like math word problems, or particular visual contexts, such as geometry problems or bar charts. General-purpose visual question answering (VQA) datasets on natural scenes contain only a small portion of questions necessitating mathematical reasoning, leaving a comprehensive investigation of vision-language reasoning within a mathematical framework largely unexplored.
Radar chart legend: Random Chance, LLaVA, PoT GPT-4, Multimodal Bard, GPT-4V (Playground), Human. Panels: (a) Mathematical reasoning, (b) Visual context.

Figure 1: Accuracies of one leading LLM (i.e., PoT GPT-4), four prominent LMMs, random chance, and human performance on our proposed MATHVISTA across mathematical reasoning and visual context types. PoT GPT-4 is a textual, program-aided LLM augmented with the Bard caption and OCR text. GPT-4V is manually evaluated via the playground chatbot.

On the other hand, Large Language Models (LLMs) (OpenAI, 2022; 2023a) and Large Multimodal Models (LMMs) (Google, 2023; OpenAI, 2023b; Team et al., 2023) have exhibited impressive problem-solving skills in many tasks and domains. Recently, some studies have aimed to augment existing LLMs with mathematical and scientific reasoning capabilities using external tools (Lu et al., 2023a; Wang et al., 2023b). However, the ability of these foundation models to perform mathematical reasoning in visual contexts has not been systematically examined. Therefore, it is essential to develop a new benchmark to (1) facilitate the development of mathematical reasoning systems in visually intensive scenarios, and (2) evaluate the research progress of LLMs and LMMs, especially their capabilities in solving rigorous reasoning tasks.

In this paper, we present MATHVISTA, a consolidated Mathematical reasoning benchmark in Visual contexts.
We propose a task taxonomy to guide the development of MATHVISTA: (1) we identify seven mathematical reasoning types: algebraic reasoning, arithmetic reasoning, geometry reasoning, logical reasoning, numeric common sense, scientific reasoning, and statistical reasoning; (2) we focus on five primary tasks: figure question answering (FQA), geometry problem solving (GPS), math word problem (MWP), textbook question answering (TQA), and visual question answering (VQA); and (3) we encompass a diverse array of visual contexts, including natural images, geometry diagrams, abstract scenes, synthetic scenes, as well as various figures, charts, and plots. MATHVISTA incorporates 28 existing multimodal datasets, including 9 math-targeted question answering (MathQA) datasets and 19 VQA datasets. In addition, we have created three new datasets (i.e., IQTest, FunctionQA, PaperQA) which are tailored to evaluating logical reasoning on puzzle test figures, algebraic reasoning over functional plots, and scientific reasoning with academic paper figures, respectively. Overall, MATHVISTA consists of 6,141 examples, with 736 of them being newly curated (Table 1). To facilitate fine-grained evaluation, examples are annotated with metadata, including question type, answer type, task category, grade level, visual context, and required reasoning skills. Detailed descriptions of data collection can be found in §2, §C, and §D.

We conduct extensive experiments on MATHVISTA to evaluate the reasoning abilities of 12 foundation models known for their leading performance in mathematical and multimodal reasoning. This ensemble includes three LLMs (i.e., ChatGPT, GPT-4, Claude-2), two proprietary LMMs (i.e., GPT-4V, Bard), and seven open-source LMMs. For LLMs, we examine zero-shot and few-shot settings using two prompting strategies: chain-of-thought (CoT) (Wei et al., 2022b) and program-of-thought (PoT) (Chen et al., 2022b). These LLMs can also be augmented with off-the-shelf visual models for image captioning and OCR. We establish a human performance baseline by engaging qualified human annotators with a high school diploma or higher.
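As a concrete illustration of this tool-augmented setting, the sketch below shows one way a text-only LLM can be handed an image's content through off-the-shelf captioning and OCR. The helper name `build_augmented_prompt` and the prompt template are assumptions for illustration, not the paper's actual pipeline.

```python
# Minimal sketch of caption + OCR augmentation for a text-only LLM.
# The template and helper name are illustrative assumptions, not the paper's exact setup.

def build_augmented_prompt(question: str, caption: str, ocr_text: str) -> str:
    """Pack visual context produced by off-the-shelf vision tools into a text prompt."""
    return (
        f"Image description: {caption}\n"
        f"Detected text in the image: {ocr_text}\n"
        f"Question: {question}\n"
        "Answer the question. If helpful, write Python code (program of thought) "
        "whose final printed value is the answer."
    )

# Hypothetical usage with placeholder strings:
prompt = build_augmented_prompt(
    question="Which month has the highest value in the chart?",
    caption="A bar chart of monthly sales for one product.",
    ocr_text="Jan 120, Feb 95, Mar 140, Apr 80",
)
print(prompt)
```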
We show that MATHVISTA, featuring advanced topics such as college curricula and scientific reasoning, is a very challenging benchmark, with human performance reaching only 60.3% accuracy.

Figure 2: Examples of our newly annotated datasets: IQTest, FunctionQA, and PaperQA.

Our results indicate that CoT GPT-4, the best-performing LLM without visual tool augmentations, achieves an overall accuracy of 29.2%. Multimodal Bard, the best-performing LMM, achieves 34.8% (§3.3), which attains only 58% of human performance (34.8% vs 60.3%). When augmented with Bard captions and OCR text, PoT GPT-4 obtains 33.9%, closely matching Multimodal Bard (§3.4). Further analysis indicates that Multimodal Bard's failures arise from incorrect calculations and hallucinations caused by visual perception and textual reasoning (§3.5).

With MATHVISTA, we report, for the first time, a comprehensive quantitative and qualitative evaluation of GPT-4V (OpenAI, 2023b), the latest multimodal version of GPT-4. Remarkably, GPT-4V achieves a state-of-the-art accuracy of 49.9%, a significant improvement of 15.1% over Multimodal Bard. As illustrated in Figure 1, GPT-4V even surpasses human performance on a set of tasks involving algebraic reasoning and complex visual contexts, which include tables and function plots. Nevertheless, a 10.4% gap in overall accuracy remains when compared to the human baseline, leaving plenty of room for model improvement. Our in-depth analysis (§H) reveals that the superiority of GPT-4V is mainly attributed to its strong capabilities in visual perception and mathematical reasoning. We further highlight its emergent ability for self-verification (§H.5), the use of self-consistency (§H.6), and its ability to drive goal-directed multi-turn human-AI dialogues (§H.7).
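The headline gaps quoted above follow directly from the reported percentages; a few lines of arithmetic reproduce them (the numbers are taken from the text, nothing else is assumed).

```python
# Reproduce the reported accuracy gaps from the percentages quoted in the text.
human, gpt4v, bard = 60.3, 49.9, 34.8

print(round(gpt4v - bard, 1))   # 15.1-point lead of GPT-4V over Multimodal Bard
print(round(human - gpt4v, 1))  # 10.4-point gap between GPT-4V and the human baseline
print(round(bard / human, 2))   # 0.58: Bard attains about 58% of human accuracy
```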
# 2 THE MATHVISTA DATASET

2.1 COLLECTION GUIDELINES

As discussed previously, there is a notable gap in existing benchmarks, which primarily evaluate mathematical reasoning in textual contexts, overlooking the intrinsic visual nature of many mathematical problems. Our dataset, MATHVISTA, is therefore motivated to bridge this gap, offering a robust evaluation benchmark for mathematical reasoning intertwined with visual understanding, thus pushing AI assistants towards general-purpose capabilities.

Our benchmark adheres to the following collection guidelines: (1) it covers multiple tasks and topics to mirror real-world applications; (2) it incorporates diverse visual contexts and mathematical skills to foster a well-rounded evaluation; (3) it offers varying levels of challenge to effectively probe and uncover the potential limitations of current models; and (4) it provides robust evaluation settings for deterministic evaluations.

The taxonomy for this work is introduced as follows: We identify seven types of mathematical reasoning: algebraic reasoning, arithmetic reasoning, geometry reasoning, logical reasoning, numeric common sense, scientific reasoning, and statistical reasoning, with detailed definitions provided in
2310.02255#6 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | §C.1 and examples shown in §C.2. We focus on five primary tasks: figure question answering (FQA), which centers around statistical reasoning over multiple charts and plots; geometry problem solving (GPS), which deals with geometrical topics; math word problem (MWP), which involves arithmetic reasoning in everyday scenarios; textbook question answering (TQA), which usually entails knowledge-intensive reasoning on scientific topics and figures; and visual question answering (VQA). Furthermore, our objective is to account for a diverse array of visual contexts, including natural images, geometry diagrams, abstract scenes, synthetic scenes, multiple charts and plots, scientific figures, tables, function plots, puzzle test figures, and more, with examples shown in §C.3. 2.2 DATA COLLECTION Collection of MathQA datasets. We collected nine MathQA datasets in multimodal settings, including four for GPS, two for MWP with visual contexts of synthetic scenes, abstract diagrams, and tables, and two for TQA on college curricula (see §C.4). Annotations such as solutions, programs, parsing results, and grounded theorems are also collected, providing demonstration examples for LLMs. Each source dataset is limited to up to 400 examples to ensure a balanced representation of each source in our final compiled benchmark. In total, we collected 2,666 examples. Review and collection of VQA datasets. Many existing VQA datasets feature instances requiring mathematical reasoning abilities, such as arithmetic operations or numeric common sense. Incorporating these datasets enhances problem diversity in terms of tasks, domains, visual contexts, and reasoning skills involved. We reviewed more than 70 datasets, collecting 19 of them that contain math-related instances and are publicly available, as listed in §C.4. Since these datasets are not originally math-targeted, we initially designed heuristic rules to automatically select examples likely to involve mathematical reasoning from a large pool of candidates. Examples with numeric answers or those containing quantity words (as listed in §D.1) in the questions were selected. This automatic filtration yielded 4,949 VQA-format examples, though some false positive examples remained. Therefore, we engaged three expert annotators to manually label these examples to determine if they involve mathematical reasoning (more details in §D.2). | 2310.02255#5 | 2310.02255#7 | 2310.02255 | [
"2302.13971"
] |
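The automatic pre-filter described in the chunk above (keep VQA examples with numeric answers or quantity words in the question) can be sketched as follows. The quantity-word list and record fields here are illustrative assumptions, since the paper defers the actual word list to §D.1.

```python
# Hedged sketch of the heuristic selection rule described above.
# The quantity words and field names are placeholders, not the authors' exact list (§D.1).
QUANTITY_WORDS = {"how many", "how much", "number of", "total", "sum", "difference"}

def is_numeric(text: str) -> bool:
    """Return True if the answer string parses as a number."""
    try:
        float(text.replace(",", ""))
        return True
    except ValueError:
        return False

def likely_math_example(question: str, answer: str) -> bool:
    """Keep a VQA example if its answer is numeric or its question mentions quantities."""
    q = question.lower()
    return is_numeric(answer) or any(w in q for w in QUANTITY_WORDS)

# Example: this candidate would be kept for the manual annotation stage.
print(likely_math_example("How many laps did he drive on Friday?", "53"))  # True
```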
2310.02255#7 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Utilizing majority voting and limiting each source dataset to 400 examples, we finalized a collection of 2,739 examples. Collection of three new datasets. While the source datasets we collected encompass multiple visual contexts and mathematical reasoning abilities, certain scenarios remain unaddressed: logical reasoning on puzzle test diagrams, statistical reasoning on functional plots, and scientific reasoning on academic figures. To address these gaps, we introduced three new datasets: IQTest, FunctionQA, and PaperQA, with examples illustrated in Figure 2. IQTest comprises 228 examples requiring inductive reasoning, abstract thinking, pattern prediction, and calculations, sourced from puzzle test figures on online learning platforms. FunctionQA, with 400 examples, emphasizes subtle visual perceptions of functional plots and algebraic reasoning concerning variables, expressions, equations, and functions. PaperQA is a novel dataset featuring questions derived from informative academic illustrations, including tables, figures, and charts from online education resources, with 107 examples sourced from papers released in August 2023 on Huggingface (footnote 1: https://huggingface.co/papers). To ensure data quality, all questions were manually annotated by graduate students in STEM fields and further refined through a rigorous review process. To ensure consistency in annotation, we employed a two-step process. Initially, each dataset was independently annotated by three reviewers, resulting in a high inter-annotation consistency rate of 99.2%. Specifically, among the newly collected 736 questions, only 6 exhibited disagreements in the annotated answers. Then, these discrepancies were resolved through discussion among the entire review team, ensuring a consensus was reached on each example. The GUI of the annotation tool is shown in Figure 23 in §D.3. 2.3 METADATA ANNOTATION Fine-grained metadata facilitates a comprehensive analysis of models' reasoning capabilities across various aspects. To this end, we annotate the examples in MATHVISTA with information including question type, answer type, language, source, category, task, grade level, and visual context, which can be accurately obtained from the details provided in the source datasets. MATHVISTA features | 2310.02255#6 | 2310.02255#8 | 2310.02255 | [
"2302.13971"
] |
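As a quick arithmetic check, the 99.2% inter-annotation consistency quoted above lines up with the stated counts (6 disagreements among the 736 newly collected questions):

```latex
1 - \frac{6}{736} \approx 1 - 0.0082 = 0.9918 \approx 99.2\%
```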
2310.02255#8 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Table 1 (Key statistics of MATHVISTA): Total questions: 6,141, of which multiple-choice questions: 3,392 (55.2%), free-form questions: 2,749 (44.8%), questions with annotations: 5,261 (85.6%), questions newly annotated: 736 (12.0%); unique number of images: 5,487; unique number of questions: 4,746; unique number of answers: 1,464; source datasets: 31, of which existing VQA datasets: 19, existing MathQA datasets: 9, our newly annotated datasets: 3; visual context (image) classes: 19; maximum question length: 213; maximum answer length: 27; maximum choice number: 8; average question length: 15.6; average answer length: 1.2; average choice number: 3.4. Figure 3: Source dataset distribution of MATHVISTA. FQA: figure question answering, GPS: geometry problem solving, MWP: math word problem, TQA: textbook question answering, VQA: visual question answering. seven different types of mathematical reasoning abilities, as categorized in Table 3 (§C.1). Coarse labels of mathematical reasoning can be automatically obtained from the details of the source datasets. To verify the quality of automatic annotation, expert annotators manually label the mathematical reasoning categories from seven candidates for 1,000 examples, using the annotation tool illustrated in §D.4. The results show that 94.1% of the examples from automatic and human annotations have the exact same set of reasoning types, while 98.79% of the individual labels are identical, indicating that the automatic annotation for the labeling of mathematical reasoning is highly accurate. 2.4 DATA PREPARATION AND RELEASE | 2310.02255#7 | 2310.02255#9 | 2310.02255 | [
"2302.13971"
] |
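To make the metadata annotation concrete, an example record might look like the following. The key names mirror the fields listed in the chunks above (question type, answer type, task, grade level, visual context, reasoning skills), but the exact schema and the sample values are our own guesses, not the official release format.

```python
# Illustrative MATHVISTA-style example record; key names and values are assumptions,
# chosen to mirror the metadata fields described above.
example = {
    "question": "What is the mode of the numbers?",
    "choices": None,                      # free-form question
    "answer": "53",
    "question_type": "free_form",
    "answer_type": "integer",
    "task": "math word problem (MWP)",
    "grade_level": "elementary school",   # hypothetical value
    "visual_context": "table",
    "reasoning_skills": ["arithmetic reasoning", "statistical reasoning"],
    "source": "hypothetical source dataset",
}
```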
2310.02255#9 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | MATHVISTA consists of 6,141 examples, divided into two subsets: testmini and test. testmini contains 1,000 examples, intended for model development validation or for those with limited computing resources. The test set features the remaining 5,141 examples for standard evaluation. Notably, the answer labels for test will not be publicly released to prevent data contamination, and we will maintain an online evaluation platform. To ensure that each source dataset is well represented in testmini and to maintain a distribution in testmini closely resembling the whole set, we adopted this sampling strategy: (1) first, randomly sample questions with a threshold number of 4 for each source dataset; (2) then, randomly sample the remaining questions for each source dataset on its proportion in the entire set. The KL Divergence and Total Variation (TV) distance between the testmini set and the entire set are 0.008 and 0.035, respectively, suggesting that testmini is close to the distribution of the whole set. We also conducted several quality checks to address any unidentified errors. # 2.5 DATA ANALYSIS The main statistics of MATHVISTA are presented in Table 1. There are two types of questions: multiple-choice and free-form. Answers to free-form questions are categorized as integers, floating numbers, or lists. The large unique number of images, questions, and answers ensures pattern diversity in MATHVISTA. MATHVISTA is derived from 31 source datasets, including three newly annotated datasets to address the missing types of mathematical reasoning over specific visual contexts. Dataset examples in Table 4 (§C.2) highlight the richness of mathematical reasoning involved. Examples in §C.3 demonstrate the diverse visual contexts present in MATHVISTA. Further details on data analysis are available in §E. 3 EXPERIMENTS | 2310.02255#8 | 2310.02255#10 | 2310.02255 | [
"2302.13971"
] |
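The two-step testmini sampling strategy and the distribution check described above can be sketched as follows. The exact threshold handling, seeding, and rounding are assumptions made for illustration, not the authors' implementation.

```python
# Hedged sketch of the testmini sampling strategy and the KL / total-variation check
# described above; details such as the random seed and rounding are assumptions.
import math
import random
from collections import Counter

def sample_testmini(examples, target=1000, threshold=4, seed=0):
    """examples: list of (example_id, source_dataset) pairs."""
    rng = random.Random(seed)
    by_source = {}
    for ex_id, src in examples:
        by_source.setdefault(src, []).append(ex_id)
    picked = []
    # Step 1: guarantee a minimum number of questions per source dataset.
    for items in by_source.values():
        rng.shuffle(items)
        picked.extend(items[:threshold])
    # Step 2: fill the rest in proportion to each source's share of the full set.
    remaining = target - len(picked)
    total = len(examples)
    for items in by_source.values():
        quota = round(remaining * len(items) / total)
        picked.extend(items[threshold:threshold + quota])
    return picked[:target]

def kl_and_tv(p_counts: Counter, q_counts: Counter):
    """KL divergence and total-variation distance between two source distributions.
    Terms where either probability is zero are skipped for simplicity."""
    keys = set(p_counts) | set(q_counts)
    p_total, q_total = sum(p_counts.values()), sum(q_counts.values())
    kl = tv = 0.0
    for k in keys:
        p = p_counts[k] / p_total
        q = q_counts[k] / q_total
        if p > 0 and q > 0:
            kl += p * math.log(p / q)
        tv += 0.5 * abs(p - q)
    return kl, tv
```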
2310.02255#10 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Prior work (Yang et al., 2023b) has studied the reasoning abilities of foundation models in visual settings from a qualitative perspective. In contrast, our goal is to conduct both qualitative and quantitative studies to provide a systematic evaluation of existing foundation models for mathematical reasoning capabilities in visual contexts using MATHVISTA. We introduce a novel benchmarking strategy for MATHVISTA tailored for foundational models (§3.1). The models we have chosen are detailed in §3.2. Quantitative results can be found in §3.3 and §3.4, while the qualitative analysis is provided in §3.5. Given the significant advancements of GPT-4V over other models, we undertake an in-depth comparative study with its peers in various aspects and highlight potential avenues for future research in §H. 3.1 EVALUATION PROTOCOLS Recent LLMs and LMMs have been instructed to generate long responses in conventional settings instead of short text. Therefore, we propose a new strategy for benchmarking MATHVISTA, rather than relying on human-designed or template-matching rules (Lu et al., 2022). The evaluation process consists of three stages: response generation, answer extraction, and score calculation. Initially, the baselines generate responses given the input query, which incorporates the task description, the question, the choices, and the metadata, using the template defined in Table 9 (§F.3). Next, the short answer text is extracted from the detailed response. We propose an answer extractor (§F.2) based on LLMs such as GPT-4, inspired by its remarkable ability for text processing (Wei et al., 2022b). A preliminary study of 200 examples shows that GPT-4 can extract the answer text with more than 99.5% accuracy. Finally, the extracted answer is normalized to a required answer format (e.g., an option letter or an integer), and the target metric scores are computed. Taking advantage of the fact that the instances in MATHVISTA are either multiple-choice questions for textual answers or free-form questions for numerical answers, accuracy scores are used as metrics for deterministic evaluation. 3.2 EXPERIMENTAL SETUP | 2310.02255#9 | 2310.02255#11 | 2310.02255 | [
"2302.13971"
] |
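The three-stage protocol described above (response generation, LLM-based answer extraction, score calculation) can be sketched in a few helper functions. The prompt wording and function signatures below are our own illustrations, not the paper's exact templates (Table 9, §F.2, §F.3); the LLM call is left as a caller-supplied function.

```python
# Hedged sketch of the three-stage MATHVISTA evaluation protocol described above.
def build_query(task_description, question, choices=None, metadata=""):
    """Stage 1 input: combine task description, question, choices, and metadata."""
    parts = [task_description, question]
    if choices:
        parts.append("Choices: " + ", ".join(choices))
    if metadata:
        parts.append(metadata)
    return "\n".join(parts)

def extract_answer(response, choices, llm_extract):
    """Stage 2: ask an LLM (e.g., GPT-4) to pull the short answer out of a long response."""
    prompt = (
        "Extract only the final answer from the response below.\n"
        f"Choices (if any): {choices}\nResponse: {response}"
    )
    return llm_extract(prompt)

def normalize_and_score(extracted, target, choices=None):
    """Stage 3: normalize to an option letter or a number, then score exact-match accuracy."""
    pred = extracted.strip().rstrip(".")
    if choices and pred in choices:                 # map answer text back to its option letter
        pred = "ABCDEFGH"[choices.index(pred)]
    return float(pred.lower() == str(target).lower())
```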
2310.02255#11 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | We evaluate the models on MATHVISTA under three setups: (a) Text-Only LLMs including ChatGPT (OpenAI, 2022), GPT-4 (OpenAI, 2023a), and Claude-2 (Anthropic, 2023) in zero-shot and two-shot settings with Chain-of-Thought (CoT) (Wei et al., 2022b) and Program-of-Thought (PoT) (Chen et al., 2022b), (b) Augmented-LLMs where the LLMs are provided with additional visual information including the generated image captions from Multimodal Bard (Google, 2023) and the detected OCR text from EasyOCR (JaidedAI, 2020), (c) LMMs that include open-source models such as IDEFICS-9B (Laurençon et al., 2023), mPLUG-OWL-LLaMA-7B (Ye et al., 2023), miniGPT-4-LLaMA-2-7B (Zhu et al., 2023a), LLaMA-Adapter-V2-7B (Gao et al., 2023), InstructBLIP-Vicuna-7B (Dai et al., 2023), LLaVA-LLaMA-2-13B (Liu et al., 2023a), LLaVAR (Zhang et al., 2023d), and proprietary models such as Bard and GPT-4V. Since GPT-4V does not offer API access, we resorted to manually evaluating it using the playground chatbot. We provide the prompts for LLMs and the hyperparameters used for LMMs in §F. 3.3 EXPERIMENTAL RESULTS We compare the performance of several models, including Text-only LLMs, Augmented LLMs, and LMMs on MATHVISTA in Table 2. We include random chance (i.e., one of the options in multiple-choice questions, and empty in the free-form questions) and frequency guess (§F.1) as naive baselines. Additionally, we established a human performance baseline using Amazon Mechanical Turk. | 2310.02255#10 | 2310.02255#12 | 2310.02255 | [
"2302.13971"
] |
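The two naive baselines mentioned above can be sketched as follows. The per-question-type bookkeeping for the frequent-guess baseline is an assumption for illustration; the paper's exact procedure is in §F.1.

```python
# Hedged sketch of the random-chance and frequent-guess baselines described above.
import random
from collections import Counter

def random_chance(example, rng=random.Random(0)):
    """Pick a random option for multiple-choice questions; return empty for free-form ones."""
    return rng.choice(example["choices"]) if example.get("choices") else ""

def frequent_guess(example, seen_answers):
    """Always predict the most frequent answer previously seen for this question type.
    seen_answers: list of (question_type, answer) pairs."""
    counts = Counter(a for t, a in seen_answers if t == example["question_type"])
    return counts.most_common(1)[0][0] if counts else ""
```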
2310.02255#12 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Eligible human annotators must have a satisfactory annotating history, successfully pass qualification examples, and possess a high school degree or higher. We asked each annotator to complete five questions within 20 minutes. Further details can be found in §F.6. Among text-only LLMs, all models outperform the random baselines, with the 2-shot GPT-4 using chain-of-thought (CoT) prompting achieving 29.2%. The limited performance of text-only LLMs suggests that our dataset requires models to reason within visual contexts for optimal results. When equipped with image captions and detected OCR text, augmented LLMs exhibit superior performance compared to their text-only counterparts on MATHVISTA. Specifically, the best-performing augmented LLM is the 2-shot GPT-4 employing program-of-thought (PoT) prompting, which scores 33.9%. This model generates Python programs for execution, thereby promoting rigorous reasoning. | 2310.02255#11 | 2310.02255#13 | 2310.02255 | [
"2302.13971"
] |
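Since the chunk above notes that PoT GPT-4 generates Python programs that are then executed, a minimal execution harness can be sketched as follows. Sandboxing and the exact prompt are omitted assumptions; a real harness would not execute untrusted code directly.

```python
# Hedged sketch of program-of-thought (PoT) style evaluation as described above:
# the LLM returns a Python program, which is executed and its printed output is
# taken as the prediction.
import contextlib
import io

def run_generated_program(program: str) -> str:
    """Execute a model-generated Python program and capture what it prints."""
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        exec(program, {})  # note: a real harness would sandbox this call
    return buffer.getvalue().strip()

# Example with the program shown later in Figure 6 (a):
program = "laps = [53, 53, 53, 55, 55]\nprint(max(set(laps), key=laps.count))"
print(run_generated_program(program))  # -> "53"
```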
2310.02255#13 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | 6 Published as a conference paper at ICLR 2024 Model Input ALL FQA GPS MWP TQA VQA ALG ARI GEO LOG NUM SCI STA Heuristics baselines Random chance Frequent guess - - 17.9 18.2 21.6 3.8 26.3 22.7 34.1 20.4 19.6 26.3 21.7 14.7 20.1 13.5 17.2 16.3 31.0 24.6 33.1 18.7 31.4 24.3 19.4 32.0 20.9 8.3 Large Language Models (LLMs) Zero-shot ChatGPT Zero-shot GPT-4 Zero-shot Claude-2 Q only Q only Q only 9.1 23.5 21.9 26.9 26.1 22.3 37.0 7.0 26.4 21.9 34.1 13.4 41.5 20.5 38.6 23.5 27.7 15.9 25.7 21.6 39.2 27.4 33.6 17.4 35.6 16.2 45.8 19.5 36.1 29.1 32.8 20.4 33.3 13.5 12.1 36.4 20.5 9.9 9.2 2-shot CoT Claude-2 2-shot CoT ChatGPT 2-shot CoT GPT-4 Q only Q only Q only 24.4 18.6 29.8 26.8 20.1 36.5 29.2 20.1 44.7 9.7 8.6 8.6 33.5 34.1 29.2 19.0 28.0 13.9 36.9 18.9 44.9 28.5 35.6 17.0 33.5 21.6 14.6 45.9 17.9 46.2 31.3 41.6 19.3 41.0 18.9 13.9 47.5 18.9 5.4 2-shot PoT ChatGPT 2-shot PoT GPT-4 Q only Q only 25.1 19.0 30.8 16.1 8.1 26.0 20.1 33.2 38.0 25.7 29.9 19.8 29.3 24.3 19.4 38.5 16.9 13.2 48.4 18.3 44.9 28.5 32.7 16.7 31.0 24.3 Augmented Large Language Models (Augmented-LLMs) 2-shot CoT Claude-2 2-shot CoT ChatGPT 2-shot CoT GPT-4 Q, Ic, It 33.2 26.0 31.7 35.5 Q, Ic, It 33.2 27.5 29.3 36.0 Q, Ic, It 33.2 27.9 31.7 31.2 48.1 30.2 32.4 32.3 33.0 16.2 17.4 54.9 36.2 49.4 29.1 31.0 32.9 31.0 16.2 17.4 50.8 37.2 51.9 28.5 33.5 30.9 32.2 13.5 12.5 58.2 37.9 2-shot PoT ChatGPT 2-shot PoT GPT-4 Q, Ic, It 26.8 24.5 26.4 23.7 Q, Ic, It 33.9 30.1 39.4 30.6 33.5 27.9 27.8 26.1 28.0 18.9 13.2 33.6 29.9 39.9 31.3 37.4 31.7 41.0 18.9 20.1 44.3 37.9 Large Multimodal Models (LMMs) Q, I IDEFICS-9B-Instruct mPLUG-Owl-LLaMA-7B Q, I Q, I miniGPT4-LLaMA-2-7B Q, I LLaMA-Adapter-V2-7B Q, I LLaVAR Q, I InstructBLIP-Vicuna-7B Q, I LLaVA-LLaMA-2-13B Q, I Multimodal Bard Q, I GPT-4V (Playground) 19.8 21.6 21.1 6.5 22.2 22.7 23.6 10.2 23.1 18.6 26.0 13.4 23.9 21.2 25.5 11.3 25.2 21.9 25.0 16.7 25.3 23.1 20.7 18.3 26.1 26.8 29.3 16.1 34.8 26.0 47.1 29.6 49.9 43.1 50.5 57.5 25.9 24.0 22.1 15.0 19.8 18.9 24.6 18.1 27.2 27.9 23.6 19.2 23.9 13.5 12.7 26.3 21.4 30.4 30.2 28.1 21.0 24.7 16.2 16.7 25.4 17.9 32.3 31.8 26.3 20.4 24.3 24.3 13.9 29.5 18.3 34.8 30.7 24.2 22.1 23.0 13.5 15.3 42.6 21.9 32.3 35.2 21.8 27.1 20.7 18.9 20.4 33.0 23.1 32.3 26.3 27.3 20.1 28.8 24.3 18.3 37.3 25.1 48.7 26.8 46.5 28.6 47.8 13.5 14.9 47.5 33.0 65.2 38.0 53.0 49.0 51.0 21.6 20.1 63.1 55.8 9.9 Human Human performance Q, I 60.3 59.7 48.4 73.0 63.2 55.9 50.9 59.2 51.4 40.7 53.8 64.9 63.9 | 2310.02255#12 | 2310.02255#14 | 2310.02255 | [
"2302.13971"
] |
2310.02255#14 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Table 2: Accuracy scores on the testmini subset of MATHVISTA. Input: Q: question, I: image, Ic: image caption, It: OCR text detected in the image. ALL: overall accuracy. Task types: FQA: figure question answering, GPS: geometry problem solving, MWP: math word problem, TQA: textbook question answering, VQA: visual question answering. Mathematical reasoning types: ALG: algebraic reasoning, ARI: arithmetic reasoning, GEO: geometry reasoning, LOG: logical reasoning, NUM: numeric commonsense, SCI: scientific reasoning, STA: statistical reasoning. The highest scores among models in each section and overall are highlighted in blue and red, respectively. On the LMM side, Multimodal Bard scores a 34.8% accuracy, which is only 58% of human performance at 60.3%. Notably, the best-performing GPT-4V model achieves 49.9%, marking a substantial 15.1% improvement over Bard; however, it still falls 10.4% short of human performance. These gaps highlight that there is a significant scope for further improvements on our benchmark. The open-source models (IDEFICS to LLaVA) achieve underwhelming performance on MATHVISTA. This can be attributed to their lack of math reasoning capabilities, text recognition (useful for math word problems), shape detection (useful for geometrical problems), and chart understanding. Notably, these models utilize different model architectures for processing the vision (e.g., OpenCLIP, CLIP, Vit-G) and language (e.g., LLaMA-1, LLaMA-2), different alignment strategies (e.g., MLP projection in LLaVA, Q-former in InstructBLIP, visual abstractor in mPLUG-Owl), and instruction tuning data (e.g., 150K instruction-response pairs from LLaVA data, 3,500 instruction-response pairs from miniGPT-4). While fine-tuned with instruction-following data from text-rich images, LLaVAR does not perform well, indicating that strong text recognition abilities do not guarantee high performance on MATHVISTA, which requires comprehensive visual perception and mathematical reasoning. This underscores that there are immense possibilities for innovations in model, data, or training objectives to improve the zero-shot performance of LMMs on MATHVISTA. 3.4 FINE-GRAINED RESULTS We also report fine-grained scores for a comprehensive study of the capabilities of existing models across different tasks (Table 2), mathematical reasoning abilities (Table 2, Figures 1, 33), visual con- | 2310.02255#13 | 2310.02255#15 | 2310.02255 | [
"2302.13971"
] |
2310.02255#15 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | (a) Errors in answers and explanations (b) Types of wrong explanations Figure 4: Error analysis of Bard results: (a) presents errors in answers and explanations; (b) delves into the details of wrong explanations. Notations: "Answer" is "Ans.", "Explanation" is "Exp.", "Partially Correct" is "Partial", and "Not applicable" refers to unanswerable or indeterminate cases. Question: Is the function f : R → R injective? Choices: (A) Yes (B) No Question: | 2310.02255#14 | 2310.02255#16 | 2310.02255 | [
"2302.13971"
] |
2310.02255#16 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Find z. Choices: (A) 7 (B) 9 (C) 12 (D) 15 Response from Bard: The function f(x) = x^2 is not injective because it takes two different inputs, −1 and 1, to the same output, 1. This means that there is no one-to-one correspondence between the domain and the range of the function. In the image, we can see that the graph of the function passes through the points (−1, 1) and (1, 1). This means that f(−1) = f(1) = 1. So, if we have two different inputs, −1 and 1, they both map to the same output, 1. This violates the definition of an injective function. | 2310.02255#15 | 2310.02255#17 | 2310.02255 | [
"2302.13971"
] |
2310.02255#17 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Response from Bard: We can use the Law of Cosines to solve for z. The Law of Cosines states that for a triangle with side lengths a, b, and c and angle measure θ opposite side c, the following equation holds: c^2 = a^2 + b^2 − 2ab cos(θ) In this case, we have a = 7, b = 9, and θ = 90° (because △RTS is a right triangle). Plugging these values into the Law of Cosines, we get: z^2 = 7^2 + 9^2 − 2(7)(9) cos(90°) Since cos(90°) = 0, we can simplify the equation to get: z^2 = 49 + 81 = 130 Taking the square root of both sides, we get z = √130. Since √130 is not among the answer choices, we round z to the nearest integer, which is 12. Therefore, the answer is (B). Correct output: (B) No Correct output: (C) 12 (a) Correct answer and explanation (b) Correct answer but wrong explanation Figure 5: Two examples from Bard. In (b), Bard does not correctly identify the geometry symbols and relationships. An accurate solution should identify the isosceles triangle and apply its properties. text types (Figures 1, 34), and grade levels (Figure 35). Remarkably, GPT-4V surpasses most other baselines in various categories, with exceptions in problems related to logical reasoning and numeric commonsense reasoning. Notably, GPT-4V surpasses human performance not only in tasks like geometry problem solving (GPS), textbook question answering (TQA), and mathematical reasoning skills such as algebraic reasoning but also in visual contexts including function plots, geometry diagrams, scatter plots, and tables. Please refer to §G.2, §G.3, and §G.4 for more detailed analysis. We perform an ablation study on the augmented LLMs and present the results in Table 36 (see §G.5). | 2310.02255#16 | 2310.02255#18 | 2310.02255 | [
"2302.13971"
] |
2310.02255#18 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | The gap in the performance of the Augmented LLMs can be attributed to poor image captions, which may not adequately describe the math in visual contexts, the inability of the OCR to detect shapes useful for geometrical reasoning, and the lack of mathematical reasoning capabilities. An in-depth study of GPT-4V can be found in §H. 3.5 QUALITATIVE ANALYSIS Success and failure analysis of Multimodal Bard. In §3.3, we observe that Multimodal Bard achieves the highest average accuracy on MATHVISTA. Here, we analyze its predictions through human evaluation to understand its modes of success and failure. To do so, we ask human workers from Amazon Mechanical Turk (AMT) to study Bard's predictions given the math question, its associated image, and the ground truth from the MATHVISTA dataset for 250 instances. Specifically, workers were instructed to decide whether the predictions contained the correct answer with the | 2310.02255#17 | 2310.02255#19 | 2310.02255 | [
"2302.13971"
] |
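To make the human error-analysis protocol described above concrete, each judged instance could be captured in a record like the one below. The field names and allowed values are our own guesses at the annotation schema, not the paper's exact form (§F.7).

```python
# Illustrative annotation record for the human study of Bard's predictions described above.
annotation = {
    "answer_correct": False,             # does the prediction contain the correct answer?
    "explanation_quality": "incorrect",  # "correct" | "partially correct" | "incorrect"
    "failure_mode": "hallucination",     # e.g., "hallucination" | "wrong calculation" | None
}
```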
2310.02255#19 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Question: A race car driver kept track of how many laps he drove in the past 5 days. What is the mode of the numbers? Detected OCR text: ([55, 9], Laps driven), ([21, 34], Day), ([81, 37], Number of laps), ([1, 64], Thursday), ([129, 67], 53), ([3, 99], Friday), ([129, 99], 53), ([1, 126], Saturday), ([129, 129], 53), ([3, 161], Sunday), ([129, 161], 55), ([1, 188], Monday), ([129, 191], 55) Response from GPT-4: def mode(laps): return max(set(laps), key=laps.count) laps = [53, 53, 53, 55, 55] print(mode(laps)) Correct output: 53 Question: What is this function most likely be? Choices: (A) polynomial (B) trigonometric (C) exponential (D) logarithmic Caption from Bard: The image shows a graph of a function f(x) = c on a graph paper. The x axis is the horizontal axis and the y axis is the vertical axis. The graph is a horizontal line that passes through the point (0, c). ... The text "y = 3" means that the y-value of all points on the graph is 3. The text "(0, c)" means that the point (0, c) is on the graph. The text "f(x) = c" means that the function f(x) is equal to c for all values of x. ... The graph of a constant function is a horizontal line. In this case, the horizontal line passes through the point (0, 3), so the constant value of the function is 3. ... | 2310.02255#18 | 2310.02255#20 | 2310.02255 | [
"2302.13971"
] |
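For readability, the GPT-4-generated program quoted in the chunk above can be written out as a self-contained script. This is simply a cleaned-up transcription of the code shown in Figure 6 (a), with comments added.

```python
# Cleaned-up transcription of the PoT GPT-4 program from Figure 6 (a).
def mode(laps):
    # The most frequent value: count each distinct lap number and take the max.
    return max(set(laps), key=laps.count)

laps = [53, 53, 53, 55, 55]  # lap counts read from the detected OCR text
print(mode(laps))  # -> 53
```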
2310.02255#20 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Response from GPT-4: The function shown in the image is a constant function, which is a type of polynomial function. Thus, the answer is A. Correct output: (A) polynomial (a) Correct answer and code (b) Correct answer with partially correct outputs Figure 6: Two examples from GPT-4. GPT-4 depends on the qualities of the generated caption and detected OCR texts. In (b), some information is incorrect, even though the final answer is correct. correct explanation. If the workers found that the model's explanation was incorrect, they had to choose whether the wrong explanation was due to various failure modes such as incorrect reasoning with hallucination or wrong calculations. In our setup, we define hallucination as the introduction of incorrect facts into the model explanation that are not mentioned in the context of the image or question (e.g., in Figure 39 and Figure 40). More details can be found in §F.7. | 2310.02255#19 | 2310.02255#21 | 2310.02255 | [
"2302.13971"
] |
2310.02255#21 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | We present the distribution of the quality of Bard's predictions, judged by the human annotators, in Figure 4 (a). We find that 44.6% of Bard's predictions had incorrect answers with incorrect explanations. Interestingly, we observe that Bard responds with partially (6.8%) or completely (8.1%) incorrect explanations despite giving the correct answer to the input image and question, highlighting that it can reach the correct answer for the wrong reasons. In Figure 4 (b), we present the distribution over possible reasons when Bard provides incorrect explanations. Notably, we find that 49.6% of its responses contain hallucinations. Our analysis highlights that hallucination is a major source of errors in generative foundation models (Lu et al., 2023c; Ji et al., 2023). We also observe that the model responds with correct reasoning but either hallucinates (18.6%) or performs wrong calculations (19.5%), leaving the overall impression of a wrong explanation. Qualitative examples of Multimodal Bard. We also present a few qualitative examples of Bard's | 2310.02255#20 | 2310.02255#22 | 2310.02255 | [
"2302.13971"
] |
2310.02255#22 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | predictions. In Figure 5 (a), we find that Bard generates the correct answer with the correct explanation, including detecting the correct function (i.e., f(x) = x^2) and analyzing its properties (i.e., injective) to answer the question. However, in Figure 5 (b), we observe that the model provides the correct answer (i.e., 12) but with an incorrect explanation (i.e., using the law of cosines when the question requires an understanding of the properties of isosceles triangles). We present more examples in §G.9. Overall, our analysis of Bard highlights its modes of failure in detail, which could guide future foundation model design to address these issues. Qualitative examples of Augmented GPT-4. Augmented with external visual models, CoT GPT-4 and PoT GPT-4 are able to achieve comparable performance with Multimodal Bard. As shown | 2310.02255#21 | 2310.02255#23 | 2310.02255 | [
"2302.13971"
] |
2310.02255#23 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | in Figure 6 (a), provided with the accurate OCR text detected in the image, PoT GPT-4 accurately understands the structural information of the image and generates a code snippet to perform precise statistical reasoning. In Figure 6 (b), the caption provides some accurate descriptions of the image (e.g., f(x) = c) along with hallucination (e.g., y = 3, the line passes through (0, 3)) caused by the external Bard model. Although CoT GPT-4 predicts the correct answer given the partially correct information, the quality of the visual information augmented by external models has an impact on accurate visual perception and thus on the final mathematical reasoning performance. Examples in §G.10 show failure cases due to hallucination caused by external visual models. # 4 RELATED WORK Several benchmarks (Amini et al., 2019; Cobbe et al., 2021; Mishra et al., 2022; Frieder et al., 2023) have emerged to assess the mathematical reasoning capabilities of LLMs, but most focus solely on text-based tasks. Current benchmarks, such as GSM-8K (Cobbe et al., 2021), exhibit performance saturation. Given the rise of LMMs (Li et al., 2023a), there is a need for robust multimodal benchmarks in scientific domains. To address this gap, we introduce a math reasoning dataset that incorporates visual contexts. VQA datasets (Antol et al., 2015; Gurari et al., 2018; Mobasher et al., 2022) gauge the visual reasoning abilities of LMMs. Recent studies explore assessing LMMs beyond natural images, including abstract scenes, geometry diagrams, figures, charts, documents, and synthetic images (Lu et al., 2021a; Kahou et al., 2017; Masry et al., 2022). In this work, we introduce new datasets (IQTest, FunctionQA, PaperQA) to create a holistic benchmark for evaluating mathematical reasoning. Generative foundation models like GPT-3, ChatGPT, GPT-4, Claude, and LLaMA have enabled diverse task solutions without fine-tuning. Specialized pretraining methods like Pix2Struct (Lee et al., 2023), MatCha (Liu et al., 2022), and UniChart (Masry et al., 2023) enhance chart reasoning in visual contexts. Models like LLaVA, miniGPT4, InstructBLIP, and Bard leverage large-scale image-text data, while specialized versions, such as LLaVAR (Zhang et al., 2023d; Ye et al., 2023), emphasize document understanding and math comprehension. Recent works (Bitton et al., 2023; Yu et al., 2023) evaluate instruction-following and reasoning capabilities, underscoring the growing importance of generative foundation models in practical applications. We introduce MATHVISTA as a benchmark to evaluate their math reasoning capabilities in varied visual contexts. # 5 CONCLUSION In this work, we introduce MATHVISTA, a benchmark designed to systematically analyze the mathematical reasoning capabilities of state-of-the-art models in visually complex scenarios. Our evaluation of 12 prominent foundation models highlights that significant advancements have been made, especially with the GPT-4V model. However, a substantial gap of 10.4% still exists between GPT-4V, the best-performing model, and human performance. This disparity sets a clear direction for future research, emphasizing the need for models that can seamlessly integrate mathematical reasoning with visual comprehension. Moreover, our exploration of GPT-4V's self-verification, self-consistency, and chatbot interactions offers valuable insights for future investigations. # REFERENCES Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. Advances in Neural Information Processing Systems, 35:23716–23736, 2022. 20 Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. MathQA: Towards interpretable math word problem solving with operation-based formalisms. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL), pp. 2357–2367, 2019. 10, 20 | 2310.02255#22 | 2310.02255#24 | 2310.02255 | [
"2302.13971"
] |
2310.02255#24 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Specialized pretraining methods like PixStruct (Lee et al., 2023), MatCha (Liu et al., 2022), and UniChart (Masry et al., 2023) enhance chart reasoning in vi- sual contexts. Models like LLaVA, miniGPT4, InstructBLIP, and Bard leverage large-scale image- text data, while specialized versions, such as LLaVAR (Zhang et al., 2023d; Ye et al., 2023), em- phasize document understanding and math comprehension. Recent works (Bitton et al., 2023; Yu et al., 2023) evaluate instruction-following and reasoning capabilities, underscoring the growing im- portance of generative foundation models in practical applications. We introduce MATHVISTA as a benchmark to evaluate their math reasoning capabilities in varied visual contexts. # 5 CONCLUSION In this work, we introduce MATHVISTA, a benchmark designed to systematically analyze the math- ematical reasoning capabilities of state-of-the-art models in visually complex scenarios. Our evalu- ation of 12 prominent foundation models highlights that significant advancements have been made, especially with the GPT-4V model. However, a substantial gap of 10.4% still exists between GPT- 4V, the best-performing model, and human performance. This disparity sets a clear direction for future research, emphasizing the need for models that can seamlessly integrate mathematical rea- soning with visual comprehension. Moreover, our exploration of GPT-4Vâ s self-verification, self- consistency, and chatbot interactions offers valuable insights for future investigations. # REFERENCES Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. | 2310.02255#23 | 2310.02255#25 | 2310.02255 | [
"2302.13971"
] |
2310.02255#25 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Flamingo: a visual language model for few-shot learning. Advances in Neural Information Processing Systems, 35:23716â 23736, 2022. 20 Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. Mathqa: Towards interpretable math word problem solving with operation-based for- malisms. In Proceedings of the 2019 Conference of the North American Chapter of the Associ- ation for Computational Linguistics: Human Language Technologies (NAACL), pp. 2357â 2367, 2019. 10, 20 10 | 2310.02255#24 | 2310.02255#26 | 2310.02255 | [
"2302.13971"
] |
2310.02255#26 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Published as a conference paper at ICLR 2024 Anthropic. Claude 2, 2023. URL https://www.anthropic.com/index/claude-2. 6, 20 Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zit- nick, and Devi Parikh. VQA: Visual question answering. In Proceedings of the IEEE international conference on computer vision, pp. 2425â 2433, 2015. 10, 20, 27 Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, et al. OpenFlamingo: An open- arXiv preprint source framework for training large autoregressive vision-language models. arXiv:2308.01390, 2023. 20 Yonatan Bitton, Hritik Bansal, Jack Hessel, Rulin Shao, Wanrong Zhu, Anas Awadalla, Josh Gard- ner, Rohan Taori, and Ludwig Schimdt. VisIT-Bench: A benchmark for vision-language instruc- tion following inspired by real-world use. arXiv preprint arXiv:2308.06595, 2023. 10, 20 | 2310.02255#25 | 2310.02255#27 | 2310.02255 | [
"2302.13971"
] |
2310.02255#27 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Nitzan Bitton-Guetta, Yonatan Bitton, Jack Hessel, Ludwig Schmidt, Yuval Elovici, Gabriel Stanovsky, and Roy Schwartz. Breaking common sense: WHOOPS! A vision-and-language benchmark of synthetic and compositional images. arXiv preprint arXiv:2303.07274, 2023. 20 Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. On the opportu- nities and risks of foundation models. arXiv preprint arXiv:2108.07258, 2021. 20 Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877â | 2310.02255#26 | 2310.02255#28 | 2310.02255 | [
"2302.13971"
] |
2310.02255#28 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | 1901, 2020. 20 S´ebastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Ka- mar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023. 20 Jie Cao and Jing Xiao. An augmented benchmark dataset for geometric question answering through dual parallel text encoding. In Proceedings of the 29th International Conference on Computa- tional Linguistics, pp. 1511â 1520, 2022. 20, 27 Shuaichen Chang, David Palzer, Jialin Li, Eric Fosler-Lussier, and Ningchuan Xiao. MapQA: A dataset for question answering on choropleth maps. arXiv preprint arXiv:2211.08545, 2022. 20, 27 Jiaqi Chen, Tong Li, Jinghui Qin, Pan Lu, Liang Lin, Chongyu Chen, and Xiaodan Liang. UniGeo: Unifying geometry logical reasoning via reformulating mathematical expression. | 2310.02255#27 | 2310.02255#29 | 2310.02255 | [
"2302.13971"
] |
2310.02255#29 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pp. 3313â 3323, 2022a. 20, 27 Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021. 20 Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W Cohen. | 2310.02255#28 | 2310.02255#30 | 2310.02255 | [
"2302.13971"
] |
2310.02255#30 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Program of thoughts prompt- ing: Disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint arXiv:2211.12588, 2022b. 2, 6, 21 Wenhu Chen, Ming Yin, Max Ku, Elaine Wan, Xueguang Ma, Jianyu Xu, Tony Xia, Xinyi Wang, and Pan Lu. TheoremQA: A theorem-driven question answering dataset. arXiv preprint arXiv:2305.12524, 2023. 21, 27 Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021. 10, 20 11 Published as a conference paper at ICLR 2024 | 2310.02255#29 | 2310.02255#31 | 2310.02255 | [
"2302.13971"
] |
2310.02255#31 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Adam Dahlgren Lindstr¨om and Savitha Sam Abraham. CLEVR-Math: A dataset for composi- tional language, visual and mathematical reasoning. In 16th International Workshop on Neural- Symbolic Learning and Reasoning, NeSy 2022, Windsor, UK, september 28-30, 2022., volume 3212. CEUR-WS, 2022. 1, 20, 27 Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, InstructBLIP: Towards general-purpose vision- Boyang Li, Pascale Fung, and Steven Hoi. language models with instruction tuning, 2023. 6, 20, 39 Qingxiu Dong, Li Dong, Ke Xu, Guangyan Zhou, Yaru Hao, Zhifang Sui, and Furu Wei. Large language model for science: A study on P vs. NP. arXiv preprint arXiv:2309.05689, 2023. 1 Iddo Drori and Nakul Verma. Solving linear algebra by program synthesis. arXiv preprint arXiv:2111.08171, 2021. 21 Iddo Drori, Sarah Zhang, Reece Shuttleworth, Leonard Tang, Albert Lu, Elizabeth Ke, Kevin Liu, Linda Chen, Sunny Tran, Newman Cheng, et al. A neural network solves, explains, and generates university math problems by program synthesis and few-shot learning at human level. Proceed- ings of the National Academy of Sciences, 119(32):e2123433119, 2022. 21 | 2310.02255#30 | 2310.02255#32 | 2310.02255 | [
"2302.13971"
] |
2310.02255#32 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Simon Frieder, Luca Pinchetti, Ryan-Rhys Griffiths, Tommaso Salvatori, Thomas Lukasiewicz, Philipp Christian Petersen, Alexis Chevalier, and Julius Berner. Mathematical capabilities of chatgpt. In 37th Conference on Neural Information Processing Systems (NeurIPS 2023) Track on Datasets and Benchmarks, 2023. 10, 20 Lingyue Fu, Huacan Chai, Shuang Luo, Kounianhua Du, Weiming Zhang, Longteng Fan, Jiayi Lei, Renting Rui, Jianghao Lin, Yuchen Fang, et al. | 2310.02255#31 | 2310.02255#33 | 2310.02255 | [
"2302.13971"
] |
2310.02255#33 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | CodeApex: A bilingual programming evaluation benchmark for large language models. arXiv preprint arXiv:2309.01940, 2023. 20 Peng Gao, Jiaming Han, Renrui Zhang, Ziyi Lin, Shijie Geng, Aojun Zhou, Wei Zhang, Pan Lu, Conghui He, Xiangyu Yue, Hongsheng Li, and Yu Qiao. LLaMA-Adapter V2: Parameter-efficient visual instruction model. arXiv preprint arXiv:2304.15010, 2023. 6, 20 | 2310.02255#32 | 2310.02255#34 | 2310.02255 | [
"2302.13971"
] |
2310.02255#34 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | # Google. Bard, 2023. URL https://bard.google.com/. 2, 6, 20 Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the V in VQA matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 6904â 6913, 2017. 20, 27 Danna Gurari, Qing Li, Abigale J Stangl, Anhong Guo, Chi Lin, Kristen Grauman, Jiebo Luo, and Jeffrey P Bigham. VizWiz grand challenge: Answering visual questions from blind people. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3608â 3617, 2018. 10, 20, 27 Wenlong Huang, Pieter Abbeel, Deepak Pathak, and Igor Mordatch. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. In International Conference on Machine Learning, pp. 9118â 9147. PMLR, 2022. 20 Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, et al. C-eval: A multi-level multi-discipline chinese evaluation suite for foundation models. arXiv preprint arXiv:2305.08322, 2023. 20 JaidedAI. EasyOCR: Ready-to-use OCR, 2020. URL https://github.com/JaidedAI/ EasyOCR. 6 | 2310.02255#33 | 2310.02255#35 | 2310.02255 | [
"2302.13971"
] |
2310.02255#35 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Anya Ji, Noriyuki Kojima, Noah Rush, Alane Suhr, Wai Keen Vong, Robert D Hawkins, and Yoav Artzi. Abstract visual reasoning with tangram shapes. arXiv preprint arXiv:2211.16492, 2022. 20 Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12):1â | 2310.02255#34 | 2310.02255#36 | 2310.02255 | [
"2302.13971"
] |
2310.02255#36 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | 38, 2023. 9 12 Published as a conference paper at ICLR 2024 Kushal Kafle, Brian Price, Scott Cohen, and Christopher Kanan. DVQA: Understanding data visu- alizations via question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5648â 5656, 2018. 20, 27 Daniel Kahneman. Thinking, fast and slow. macmillan, 2011. 1 Samira Ebrahimi Kahou, Vincent Michalski, Adam Atkinson, ´Akos K´ad´ar, Adam Trischler, and Yoshua Bengio. FigureQA: An annotated figure dataset for visual reasoning. arXiv preprint arXiv:1710.07300, 2017. 10, 20, 27 Aniruddha Kembhavi, Mike Salvato, Eric Kolve, Minjoon Seo, Hannaneh Hajishirzi, and Ali In Computer Visionâ ECCV 2016: 14th Euro- Farhadi. A diagram is worth a dozen images. pean Conference, Amsterdam, The Netherlands, October 11â 14, 2016, Proceedings, Part IV 14, pp. 235â 251. Springer, 2016. 20, 27 Aniruddha Kembhavi, Minjoon Seo, Dustin Schwenk, Jonghyun Choi, Ali Farhadi, and Hannaneh Hajishirzi. | 2310.02255#35 | 2310.02255#37 | 2310.02255 | [
"2302.13971"
] |
2310.02255#37 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Are you smarter than a sixth grader? Textbook question answering for multimodal machine comprehension. In Proceedings of the IEEE Conference on Computer Vision and Pattern recognition, pp. 4999â 5007, 2017. 20, 27 Jason J Lau, Soumya Gayen, Asma Ben Abacha, and Dina Demner-Fushman. A dataset of clinically generated visual questions and answers about radiology images. Scientific data, 5(1):1â 10, 2018. 20, 27 Hugo Laurenc¸on, Lucile Saulnier, L´eo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, and Victor Sanh. | 2310.02255#36 | 2310.02255#38 | 2310.02255 | [
"2302.13971"
] |
2310.02255#38 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | OBELICS: An open web-scale filtered dataset of interleaved image-text documents, 2023. 6, 39 Kenton Lee, Mandar Joshi, Iulia Raluca Turc, Hexiang Hu, Fangyu Liu, Julian Martin Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, and Kristina Toutanova. Pix2Struct: Screen- In International Conference on shot parsing as pretraining for visual language understanding. Machine Learning, pp. 18893â 18912. PMLR, 2023. 10, 20 Chunyuan Li, Zhe Gan, Zhengyuan Yang, Jianwei Yang, Linjie Li, Lijuan Wang, and Jianfeng Gao. Multimodal foundation models: From specialists to general-purpose assistants. arXiv preprint arXiv:2309.10020, 2023a. 10 Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language- arXiv preprint image pre-training with frozen image encoders and large language models. arXiv:2301.12597, 2023b. 39 Yunxin Li, Longyue Wang, Baotian Hu, Xinyu Chen, Wanqi Zhong, Chenyang Lyu, and Min Zhang. A comprehensive evaluation of gpt-4v on knowledge-intensive visual question answering. arXiv preprint arXiv:2311.07536, 2023c. 39 Zhuowan Li, Xingrui Wang, Elias Stengel-Eskin, Adam Kortylewski, Wufei Ma, Benjamin Van Durme, and Alan L Yuille. Super-CLEVR: A virtual benchmark to diagnose domain ro- bustness in visual reasoning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14963â 14973, 2023d. 20, 27 Thomas Liao, Rohan Taori, Inioluwa Deborah Raji, and Ludwig Schmidt. Are we learning yet? A meta review of evaluation failures across machine learning. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021. 20 | 2310.02255#37 | 2310.02255#39 | 2310.02255 | [
"2302.13971"
] |
2310.02255#39 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Letâ s verify step by step. arXiv preprint arXiv:2305.20050, 2023. 1 Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C Lawrence Zitnick. Microsoft COCO: Common objects in context. In Computer Visionâ ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pp. 740â 755. Springer, 2014. 20 13 Published as a conference paper at ICLR 2024 Fangyu Liu, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Yasemin Altun, Nigel Collier, and Julian Martin Eisenschlos. MatCha: Enhancing visual lan- guage pretraining with math reasoning and chart derendering. arXiv preprint arXiv:2212.09662, 2022. 10, 20 Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. arXiv preprint arXiv:2304.08485, 2023a. 6, 20 Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, et al. AgentBench: Evaluating LLMs as agents. arXiv preprint arXiv:2308.03688, 2023b. 20 Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, et al. MMBench: Is your multi-modal model an all-around player? arXiv preprint arXiv:2307.06281, 2023c. 20 | 2310.02255#38 | 2310.02255#40 | 2310.02255 | [
"2302.13971"
] |
2310.02255#40 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Yuliang Liu, Zhang Li, Hongliang Li, Wenwen Yu, Mingxin Huang, Dezhi Peng, Mingyu Liu, Min- grui Chen, Chunyuan Li, Lianwen Jin, et al. On the hidden mystery of OCR in large multimodal models. arXiv preprint arXiv:2305.07895, 2023d. 20 Pan Lu, Ran Gong, Shibiao Jiang, Liang Qiu, Siyuan Huang, Xiaodan Liang, and Song-Chun Zhu. | 2310.02255#39 | 2310.02255#41 | 2310.02255 | [
"2302.13971"
] |
2310.02255#41 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Inter-GPS: Interpretable geometry problem solving with formal language and symbolic reasoning. In The 59th Annual Meeting of the Association for Computational Linguistics (ACL), 2021a. 1, 10, 20, 21, 27 Pan Lu, Liang Qiu, Jiaqi Chen, Tony Xia, Yizhou Zhao, Wei Zhang, Zhou Yu, Xiaodan Liang, and Song-Chun Zhu. IconQA: A new benchmark for abstract diagram understanding and visual lan- guage reasoning. In The 35th Conference on Neural Information Processing Systems (NeurIPS) Track on Datasets and Benchmarks, 2021b. 20, 27 Pan Lu, Swaroop Mishra, Tony Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. | 2310.02255#40 | 2310.02255#42 | 2310.02255 | [
"2302.13971"
] |
2310.02255#42 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Learn to explain: Multimodal reasoning via thought chains for science question answering. In The 36th Conference on Neural Information Processing Systems (NeurIPS), 2022. 6, 20, 27 Pan Lu, Baolin Peng, Hao Cheng, Michel Galley, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, and Jianfeng Gao. Chameleon: Plug-and-play compositional reasoning with large language mod- In The 37th Conference on Neural Information Processing Systems (NeurIPS), 2023a. 2, els. 37 Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Tanmay Rajpurohit, Peter Clark, and Ashwin Kalyan. Dynamic prompt learning via policy gradient for semi-structured In International Conference on Learning Representations (ICLR), mathematical reasoning. 2023b. 21, 27 Pan Lu, Liang Qiu, Wenhao Yu, Sean Welleck, and Kai-Wei Chang. A survey of deep learning for mathematical reasoning. In The 61st Annual Meeting of the Association for Computational Linguistics (ACL), 2023c. 9, 20 | 2310.02255#41 | 2310.02255#43 | 2310.02255 | [
"2302.13971"
] |
2310.02255#43 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Ahmed Masry, Xuan Long Do, Jia Qing Tan, Shafiq Joty, and Enamul Hoque. ChartQA: A bench- mark for question answering about charts with visual and logical reasoning. In Findings of the Association for Computational Linguistics: ACL 2022, pp. 2263â 2279, 2022. 1, 10, 20, 27 Ahmed Masry, Parsa Kavehzadeh, Xuan Long Do, Enamul Hoque, and Shafiq Joty. UniChart: A universal vision-language pretrained model for chart comprehension and reasoning. arXiv preprint arXiv:2305.14761, 2023. 10, 20 Minesh Mathew, Viraj Bagal, Rub`en Tito, Dimosthenis Karatzas, Ernest Valveny, and CV Jawa- har. InfographicsVQA. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1697â 1706, 2022. 20, 27 | 2310.02255#42 | 2310.02255#44 | 2310.02255 | [
"2302.13971"
] |
2310.02255#44 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Nitesh Methani, Pritha Ganguly, Mitesh M Khapra, and Pratyush Kumar. PlotQA: Reasoning over scientific plots. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1527â 1536, 2020. 20, 27 14 Published as a conference paper at ICLR 2024 Swaroop Mishra, Matthew Finlayson, Pan Lu, Leonard Tang, Sean Welleck, Chitta Baral, Tanmay Rajpurohit, Oyvind Tafjord, Ashish Sabharwal, Peter Clark, and Ashwin Kalyan. | 2310.02255#43 | 2310.02255#45 | 2310.02255 | [
"2302.13971"
] |
2310.02255#45 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | LILA: A unified benchmark for mathematical reasoning. In The 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022. 10, 20 Shaghayegh Mobasher, Ghazal Zamaninejad, Maryam Hashemi, Melika Nobakhtian, and Sauleh Eetemadi. ParsVQA-Caps: A benchmark for visual question answering and image captioning in persian. people, 101:404, 2022. 10, 20 | 2310.02255#44 | 2310.02255#46 | 2310.02255 | [
"2302.13971"
] |
2310.02255#46 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Harsha Nori, Nicholas King, Scott Mayer McKinney, Dean Carignan, and Eric Horvitz. Capabilities of GPT-4 on medical challenge problems. arXiv preprint arXiv:2303.13375, 2023. 20 # OpenAI. Chatgpt, 2022. URL https://openai.com/blog/chatgpt. 2, 6, 20 OpenAI. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023a. 2, 6, 20 OpenAI. GPT-4V(ision) system card, 2023b. URL https://openai.com/research/ gpt-4v-system-card. 2, 3 Baolin Peng, Michel Galley, Pengcheng He, Hao Cheng, Yujia Xie, Yu Hu, Qiuyuan Huang, Lars Liden, Zhou Yu, Weizhu Chen, and Jianfeng Gao. | 2310.02255#45 | 2310.02255#47 | 2310.02255 | [
"2302.13971"
] |
2310.02255#47 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Check your facts and try again: Improv- ing large language models with external knowledge and automated feedback. arXiv preprint arXiv:2302.12813, 2023. 97 Rachel Pollitt, Caroline Cohrssen, and Wee Tiong Seah. Assessing spatial reasoning during play: Educator observations, assessment and curriculum planning. Mathematics Education Research Journal, 32(2):331â 363, 2020. 1 Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. | 2310.02255#46 | 2310.02255#48 | 2310.02255 | [
"2302.13971"
] |
2310.02255#48 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | LAION-5B: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems, 35:25278â 25294, 2022. 20 Dustin Schwenk, Apoorv Khandelwal, Christopher Clark, Kenneth Marino, and Roozbeh Mottaghi. A-OKVQA: A benchmark for visual question answering using world knowledge. In European Conference on Computer Vision, pp. 146â 162. Springer, 2022. 20, 27 | 2310.02255#47 | 2310.02255#49 | 2310.02255 | [
"2302.13971"
] |
2310.02255#49 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Minjoon Seo, Hannaneh Hajishirzi, Ali Farhadi, Oren Etzioni, and Clint Malcolm. Solving geometry problems: Combining text and diagram interpretation. In Proceedings of the 2015 conference on empirical methods in natural language processing, pp. 1466â 1476, 2015. 1, 20, 27 Sanket Shah, Anand Mishra, Naganand Yadati, and Partha Pratim Talukdar. KVQA: Knowledge- aware visual question answering. In Proceedings of the AAAI conference on artificial intelligence, pp. 8876â 8884, 2019. 20, 27 Wenqi Shao, Yutao Hu, Peng Gao, Meng Lei, Kaipeng Zhang, Fanqing Meng, Peng Xu, Siyuan Huang, Hongsheng Li, Yu Qiao, et al. Tiny LVLM-eHub: Early multimodal experiments with bard. arXiv preprint arXiv:2308.03729, 2023. 20 | 2310.02255#48 | 2310.02255#50 | 2310.02255 | [
"2302.13971"
] |
2310.02255#50 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. Conceptual Captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 2556â 2565, 2018. 20 Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. arXiv preprint HuggingGPT: Solving ai tasks with chatgpt and its friends in huggingface. arXiv:2303.17580, 2023. 37 Amanpreet Singh, Vivek Natarajan, Meet Shah, Yu Jiang, Xinlei Chen, Dhruv Batra, Devi Parikh, and Marcus Rohrbach. Towards VQA models that can read. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 8317â 8326, 2019. 20, 27 Deborah Stipek and Douglas Mac Iver. | 2310.02255#49 | 2310.02255#51 | 2310.02255 | [
"2302.13971"
] |
2310.02255#51 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Developmental change in childrenâ s assessment of intellec- tual competence. Child development, pp. 521â 538, 1989. 1 15 Published as a conference paper at ICLR 2024 Liangtai Sun, Yang Han, Zihan Zhao, Da Ma, Zhennan Shen, Baocai Chen, Lu Chen, and Kai Yu. SciEval: A multi-level large language model evaluation benchmark for scientific research. arXiv preprint arXiv:2308.13149, 2023. 20 | 2310.02255#50 | 2310.02255#52 | 2310.02255 | [
"2302.13971"
] |
2310.02255#52 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Sanaz Talaifar and William B Swann. Self-verification theory. Encyclopedia of personality and individual differences, pp. 4813â 4821, 2020. 97 John Chong Min Tan and Mehul Motani. Large language model (llm) as a system of multiple expert agents: An approach to solve the abstraction and reasoning corpus (arc) challenge. arXiv preprint arXiv:2310.05146, 2023. 21 Leonard Tang, Elizabeth Ke, Nikhil Singh, Bo Feng, Derek Austin, Nakul Verma, and Iddo Drori. Solving probability and statistics problems by probabilistic program synthesis at human level and In International Conference on Artificial Intelligence in Education, pp. predicting solvability. 612â 615. Springer, 2022. 21 | 2310.02255#51 | 2310.02255#53 | 2310.02255 | [
"2302.13971"
] |