Ross Taylor, Marcin Kardas, Guillem Cucurull, Thomas Scialom, Anthony Hartshorn, Elvis Saravia, Andrew Poulton, Viktor Kerkez, and Robert Stojnic. Galactica: A large language model for science. arXiv preprint arXiv:2211.09085, 2022.

Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, et al. Gemini: A family of highly capable multimodal models. arXiv preprint arXiv:2312.11805, 2023.

Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.

Trieu H Trinh, Yuhuai Wu, Quoc V Le, He He, and Thang Luong. Solving olympiad geometry without human demonstrations. Nature, 625(7995):476–482, 2024.
Ruocheng Wang, Eric Zelikman, Gabriel Poesia, Yewen Pu, Nick Haber, and Noah D Goodman. Hypothesis search: Inductive reasoning with language models. arXiv preprint arXiv:2309.05660, 2023a.

Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R Loomba, Shichang Zhang, Yizhou Sun, and Wei Wang. SciBench: Evaluating college-level scientific problem-solving abilities of large language models. arXiv preprint arXiv:2307.10635, 2023b.

Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171, 2022.

Yan Wang, Xiaojiang Liu, and Shuming Shi. Deep neural solver for math word problems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 845–854, 2017.

Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al. Emergent abilities of large language models. arXiv preprint arXiv:2206.07682, 2022a.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903, 2022b.

Shijie Wu, Ozan Irsoy, Steven Lu, Vadim Dabravolski, Mark Dredze, Sebastian Gehrmann, Prabhanjan Kambadur, David Rosenberg, and Gideon Mann. BloombergGPT: A large language model for finance. arXiv preprint arXiv:2303.17564, 2023.
Peng Xu, Wenqi Shao, Kaipeng Zhang, Peng Gao, Shuo Liu, Meng Lei, Fanqing Meng, Siyuan Huang, Yu Qiao, and Ping Luo. LVLM-eHub: A comprehensive evaluation benchmark for large vision-language models. arXiv preprint arXiv:2306.09265, 2023.

Hongyang Yang, Xiao-Yang Liu, and Christina Dan Wang. FinGPT: Open-source financial large language models. arXiv preprint arXiv:2306.06031, 2023a.

Zhengyuan Yang, Linjie Li, Kevin Lin, Jianfeng Wang, Chung-Ching Lin, Zicheng Liu, and Lijuan Wang. The dawn of LMMs: Preliminary explorations with GPT-4V(ision). arXiv preprint arXiv:2309.17421, 2023b.

Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, et al. mPLUG-Owl: Modularization empowers large language models with multimodality. arXiv preprint arXiv:2304.14178, 2023.

Da Yin, Liunian Harold Li, Ziniu Hu, Nanyun Peng, and Kai-Wei Chang. Broaden the vision: Geo-diverse visual commonsense reasoning. arXiv preprint arXiv:2109.06860, 2021.

Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, and Lijuan Wang. MM-Vet: Evaluating large multimodal models for integrated capabilities. arXiv preprint arXiv:2308.02490, 2023.
Rowan Zellers, Yonatan Bisk, Ali Farhadi, and Yejin Choi. From recognition to cognition: Visual commonsense reasoning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6720–6731, 2019.

Renrui Zhang, Jiaming Han, Aojun Zhou, Xiangfei Hu, Shilin Yan, Pan Lu, Hongsheng Li, Peng Gao, and Qiao Yu. LLaMA-Adapter: Efficient fine-tuning of language models with zero-init attention. arXiv preprint arXiv:2303.16199, 2023a.

Xiang Zhang, Senyu Li, Zijun Wu, and Ning Shi. Lost in translation: When GPT-4V(ision) can't see eye to eye with text. A vision-language-consistency analysis of VLLMs and beyond. arXiv preprint arXiv:2310.12520, 2023b.

Xiaoman Zhang, Chaoyi Wu, Ziheng Zhao, Weixiong Lin, Ya Zhang, Yanfeng Wang, and Weidi Xie. PMC-VQA: Visual instruction tuning for medical visual question answering. arXiv preprint arXiv:2305.10415, 2023c.

Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, and Tong Sun.
LLaVAR: Enhanced visual instruction tuning for text-rich image understanding. arXiv preprint arXiv:2306.17107, 2023d.

Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. MiniGPT-4: Enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592, 2023a.

Wanrong Zhu, Jack Hessel, Anas Awadalla, Samir Yitzhak Gadre, Jesse Dodge, Alex Fang, Youngjae Yu, Ludwig Schmidt, William Yang Wang, and Yejin Choi. Multimodal C4: An open, billion-scale corpus of images interleaved with text. arXiv preprint arXiv:2304.06939, 2023b.
CONTENTS

A Detailed Related Work
B Limitations of the Benchmark
C Data Collection Guidelines
  C.1 Mathematical Reasoning Definition
  C.2 Mathematical Reasoning Examples
  C.3 Visual Context Types
  C.4 Source Dataset Summary
D Data Collection Details
  D.1 Automatic Selection of Mathematical Problems
  D.2 Human Labeling of Mathematical Problems
  D.3 Annotating Three New Datasets
  D.4 Human Labeling of Mathematical Reasoning
E More Dataset Analysis
F More Details on the Setup
  F.1 Frequent Guess
  F.2 Prompt for Answer Extraction
  F.3 Prompts for Response Generation
  F.4 Prompt for Caption Generation
  F.5 Model Hyperparameters
  F.6 Human Performance
  F.7 Multimodal Bard Assessment Task
G More Experimental Results
  G.1 Results on the Test Set
  G.2 Scores for Math Reasoning Types
  G.3 Scores for Various Visual Contexts
  G.4 Scores Across Different Grade Levels
  G.6 LLMs with Different Shots
  G.7 LMMs with Different Shots
  G.8 Hallucinations in Model Explanations
  G.9 More Examples for Multimodal Bard
  G.10 Comparisons of Different Models
H A Comparative Study of GPT-4V, Bard, and Other Models
  H.1 GPT-4V Playground for Manual Evaluation
  H.2 Leaderboard Scores
  H.3 Abilities in Mathematical Reasoning
    H.3.1 Algebraic Reasoning
    H.3.2 Arithmetic Reasoning
    H.3.3 Geometry Reasoning
    H.3.4 Logical Reasoning
    H.3.5 Numeric Commonsense Reasoning
    H.3.6 Scientific Reasoning
    H.3.7 Statistical Reasoning
  H.4 Abilities Across Visual Contexts
    H.4.1 Abstract Scene
    H.4.2 Bar Chart
    H.4.3 Function Plot
    H.4.4 Geometry Diagram
    H.4.5 Line Plot
    H.4.6 Natural Image
    H.4.7 Puzzle Test
    H.4.8 Scatter Plot
    H.4.9 Scientific Scene
    H.4.10 Synthetic Scene
    H.4.11 Table
    H.4.12 Other Visual Contexts
  H.5 Self-Verification in GPT-4V
  H.6 Self-Consistency for GPT-4V
  H.7 GPT-4V for Multi-Turn Human-AI Interaction
# A DETAILED RELATED WORK

Mathematical reasoning benchmarks. Recently, numerous benchmarks (Amini et al., 2019; Cobbe et al., 2021; Mishra et al., 2022; Frieder et al., 2023) have been proposed to evaluate the mathematical reasoning capabilities of Large Language Models (LLMs). However, most of these are textual only (Lu et al., 2023c), despite a substantial amount of mathematical information and reasoning being encapsulated in visual modalities. Meanwhile, some datasets exhibit performance saturation; for instance, GPT-4 achieves 92.0% accuracy on GSM-8K (Cobbe et al., 2021), a dataset of grade-school mathematics questions. On the other hand, the recent rapid advancement of Large Multimodal Models (LMMs) necessitates the establishment of robust multimodal benchmarks. However, current multimodal reasoning benchmarks provide limited coverage of rigorous and scientific domains (Antol et al., 2015; Kembhavi et al., 2016; Kahou et al., 2017; Mathew et al., 2022), which are key components for creating general-purpose AI assistants. To bridge this gap, it is crucial to develop a robust math reasoning dataset that integrates visual contexts.

Vision-language reasoning benchmarks. High-quality evaluation datasets and benchmarks are a cornerstone for assessing the progress of machine learning models in solving real-world tasks (Liao et al., 2021). Prior studies such as VQA (Antol et al., 2015; Goyal et al., 2017), VizWiz (Gurari et al., 2018), and ParsVQA-Caps (Mobasher et al., 2022) assess the general-purpose visual question answering abilities of LMMs, with or without task-specific training, on open-ended questions about images.
In addition, several works focus on evaluating specific skills of LMMs beyond natural scenes, such as abstract scenes and shapes (Antol et al., 2015; Lu et al., 2021b; Ji et al., 2022), geometry diagrams (Seo et al., 2015; Lu et al., 2021a; Chen et al., 2022a; Cao & Xiao, 2022), figures and charts (Methani et al., 2020; Masry et al., 2022; Kahou et al., 2017; Chang et al., 2022; Kafle et al., 2018), documents (text in images) (Singh et al., 2019; Mathew et al., 2022; Liu et al., 2023d), and synthetic images (Dahlgren Lindström & Abraham, 2022; Li et al., 2023d; Bitton-Guetta et al., 2023). Besides, there has been significant progress on developing datasets to judge LMMs on skills that require external knowledge (Schwenk et al., 2022; Shah et al., 2019), common sense reasoning (Zellers et al., 2019; Yin et al., 2021), scientific knowledge (Lu et al., 2022; Kembhavi et al., 2017; 2016), and medical understanding (Zhang et al., 2023c; Lau et al., 2018). In this work, we create three new datasets (IQTest, FunctionQA, PaperQA) and subsequently design a benchmark for holistic evaluation of the math reasoning capabilities of LMMs.

Generative foundation models and their evaluation.
Recently, there has been a surge of generative foundation models (Bommasani et al., 2021) that are trained on web-scale data, such as GPT-3, ChatGPT, GPT-4, Claude, LLaMA, and LLaMA-Adapter (Brown et al., 2020; OpenAI, 2022; 2023a; Anthropic, 2023; Touvron et al., 2023; Zhang et al., 2023a), with the ability to solve a wide range of downstream tasks (Wei et al., 2022a) without any task-specific finetuning. Prior work has focused on evaluating their abilities to respond to queries from various disciplines, grounded in text, such as QA, math, medicine, coding, and science (Bubeck et al., 2023; Nori et al., 2023; Chen et al., 2021; Fu et al., 2023; Sun et al., 2023; Wang et al., 2023b; Huang et al., 2023; 2022; Liu et al., 2023b; Zhang et al., 2023a). Prior work, such as Pix2Struct (Lee et al., 2023), MatCha (Liu et al., 2022), and UniChart (Masry et al., 2023), has focused on developing specialized pretraining recipes for improved math and chart reasoning in visual contexts. On the vision-language side, there are several generative foundation models such as LLaVA, MiniGPT-4, InstructBLIP, Flamingo, LLaMA-Adapter V2, and Multimodal Bard (Liu et al., 2023a; Zhu et al., 2023a; Dai et al., 2023; Alayrac et al., 2022; Awadalla et al., 2023; Gao et al., 2023; Google, 2023) that are trained on vast amounts of paired (Schuhmann et al., 2022; Sharma et al., 2018; Lin et al., 2014) and interleaved image-text data (Zhu et al., 2023b).
In addition, there has been recent development of specialized versions of these LMMs for document understanding, where visual contexts require text recognition, math understanding being one of them (Zhang et al., 2023d; Ye et al., 2023). In recent times, several works, such as VisIt-Bench, LVLM-eHub, and MMBench (Bitton et al., 2023; Yu et al., 2023; Liu et al., 2023c; Xu et al., 2023; Shao et al., 2023), have assessed their instruction-following and reasoning capabilities. As generative foundation models become more relevant to real-world applications, unlike prior work, we propose MATHVISTA to benchmark their capabilities of math reasoning (logical, arithmetic, statistical) on a diverse set of visual contexts (word problems in images, natural scenes, geometrical shapes, and plots).
Recent work on LLM prompting and GPT-4V. We have witnessed the remarkable abilities of large language models (LLMs), and their reasoning capabilities are further enhanced by prompting approaches such as chain-of-thought (CoT) (Wei et al., 2022b), program-of-thought (PoT) (Chen et al., 2022b), and inductive reasoning (Wang et al., 2023a; Tan & Motani, 2023). For example, the feasibility of using LLMs to solve the Abstraction and Reasoning Corpus (ARC) challenge has been verified using zero-shot, few-shot, and context-grounded prompting (Tan & Motani, 2023). In this paper, we evaluate LLMs using zero-shot, few-shot, CoT prompting, PoT prompting, as well as tool-augmented prompting, to explore their potential in solving mathematical reasoning in visual contexts on MATHVISTA. Program-aided methods are widely used for mathematical reasoning due to their advancements in precise logical reasoning and arithmetic calculations (Drori & Verma, 2021; Tang et al., 2022; Drori et al., 2022). In this work, we have developed the LLM baselines with PoT.
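To make the PoT idea concrete, the following is a minimal, hypothetical sketch of the kind of short program a PoT baseline generates and then executes; the question, values, and comments are invented for illustration and do not come from MATHVISTA.

```python
# Hypothetical sketch of program-of-thought (PoT) prompting: rather than
# answering in natural language, the LLM emits a short program whose
# execution yields the final answer.

# Question (illustrative): "The chart shows sales of 12, 18, and 15 units in
# Q1-Q3. How many more units were sold in Q2 than the quarterly average?"

# --- program as it might be generated by the LLM ---
sales = [12, 18, 15]                 # values read from the chart caption
average = sum(sales) / len(sales)    # quarterly average: 15.0
answer = sales[1] - average          # Q2 minus the average
print(answer)                        # 3.0
# --- end of generated program ---
```

Executing the generated program, rather than trusting free-form text, is what gives PoT its precision on arithmetic steps.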
Recently, OpenAI released GPT-4V, the multimodal version of GPT-4, which shows promising performance in vision-language reasoning. However, a fine-grained study of its strengths and limitations remains underexplored. Recent work (Zhang et al., 2023b) contributes pioneering efforts in this field, studying whether large multimodal models (LMMs), like GPT-4V, execute vision and language tasks consistently or independently. As concurrent work, our paper provides, for the first time, a comprehensive quantitative and qualitative study of GPT-4V and other LLMs in mathematical reasoning within visual contexts.

# B LIMITATIONS OF THE BENCHMARK

Our benchmark, MATHVISTA, makes significant contributions by combining mathematical and visual tasks, a domain where existing models like GPT-4V have shown promise but also face challenges, especially in complex figure understanding and rigorous reasoning. While we have made strides in evaluating model performance, we acknowledge several limitations.

One limitation is the dataset coverage. While MATHVISTA encompasses a broad spectrum of tasks and visual contexts, there may be gaps in the representation of certain types of mathematical problems and visuals. Furthermore, the dataset's focus on mathematical reasoning within visual contexts, spanning specific domains like science and college-level math, necessitates a more labor-intensive process for collecting high-quality data compared to textual-only or general-purpose datasets. Thus, the scalability and generalizability of our benchmark to other domains remain a concern. Annotations were sourced from original data providers, resulting in only 85.6% of examples (Table 1) having annotations. Due to the heterogeneity of these sources, annotations lack a unified format and structure. For example, the annotations could be logic forms of the problem parsed from Geometry3K (Lu et al., 2021a), natural language solutions from TabMWP (Lu et al., 2023b), or theorems from TheoremQA (Chen et al., 2023). Given the rapid development of foundation models, our study focused exclusively on the most recent and prominent models. In future iterations, it will be beneficial for our benchmark to encompass a broader array of problems and visual contexts, while also providing unified and comprehensive annotations. Our benchmark is part of an ongoing research process, and we are committed to maintaining the datasets, such as refining potential data noise, in response to community feedback. We are also committed to evolving the leaderboard in response to new models.

In conclusion, while there are limitations to our current approach, MATHVISTA represents a significant step forward in the field. We are dedicated to continuously improving our benchmark to better understand and enhance the capabilities of AI in mathematical and visual reasoning.
# C DATA COLLECTION GUIDELINES

C.1 MATHEMATICAL REASONING DEFINITION

Seven mathematical reasoning types are defined in Table 3.

• Arithmetic Reasoning (34.1%): Covers the fundamental operations such as addition, subtraction, multiplication, division, and understanding of number properties. It may also include the ability to interpret numerical data in different forms.
• Statistical Reasoning (30.5%): Focuses on data interpretation and analysis, including measures (mean, median, mode), dispersion metrics (standard deviation, range), probability concepts, regression, correlation, and data inferences. It also identifies trends, outliers, and patterns.
• Algebraic Reasoning (28.5%): Encompasses understanding variables, equations, and the manipulation of expressions with polynomials and exponents. It also covers solving simple to complex equations, and grasping functions, their properties, and graphical depictions.
• Geometry Reasoning (23.3%): Emphasizes spatial understanding, analysis of 2D and 3D figures, and reasoning about their shapes, sizes, and relationships. It includes symmetry, congruency, similarity, area, volume, and transformations.
• Numeric Commonsense (14.0%): Involves intuitive understanding of daily numerical concepts, including understanding time differences, numerical judgment, and estimates. It covers temporal reasoning, spatial numeric assessments, and practical uses like budgeting and time reading.
• Scientific Reasoning (10.7%): Deals with the application of mathematical concepts in scientific contexts. This includes scientific notations, formula use, understanding rates, proportions, and percentages in practical situations, and problem-solving in scientific inquiries.
• Logical Reasoning (3.8%): Focuses on critical thinking and deduction from provided information, including pattern recognition, sequence understanding, predictions, and statement evaluation. Key components include premises, conclusions, and the use of abstract reasoning.
Table 3: Definitions and proportions of seven mathematical reasoning categories in MATHVISTA.

C.2 MATHEMATICAL REASONING EXAMPLES

ARI
Question: Karen bought 4 pounds of silk scraps and 4 pounds of canvas scraps. How much did she spend? (Unit: $)
(Image: a price table listing silk scraps at $9.08/lb, denim scraps at $8.47/lb, canvas scraps at $8.17/lb, felt scraps at $7.29/lb, faux fur scraps at $11.79/lb, and lace scraps at $6.37/lb.)
Solution: Find the cost of the silk scraps. Multiply: $9.08 × 4 = $36.32. Find the cost of the canvas scraps. Multiply: $8.17 × 4 = $32.68. Now find the total cost by adding: $36.32 + $32.68 = $69. She spent $69.
Answer: 69

STA
Question: How many sequences have negative Influence Scores?
Answer: 2
ALG
Question: The derivative of y at x = 6 is ___ that at x = 8.
Choices: (A) larger than (B) equal to (C) smaller than
Answer: (A) larger than

Question: How many zeros does this function have?
Answer: 1

Question: What is the value of y at x = 1?
Answer: 0

GEO
Question: AB is a diameter, AC = 8 inches, and BC = 15 inches. Find the radius of the circle.
Diagram logic forms: PointLiesOnLine(D, Line(B, A)), PointLiesOnCircle(B, Circle(D, radius)), PointLiesOnCircle(A, Circle(D, radius)), PointLiesOnCircle(C, Circle(D, radius))
Answer: (C) 8.5
NUM
Question: What is the age gap between these two people in the image? (Unit: years)
Named entities: Winston Churchill, Charles de Gaulle
Wiki caption: Winston Churchill and General de Gaulle at Marrakesh, January 1944
Answer: 16

SCI
(Example based on a scientific figure; the recoverable part of the plot is a time axis labeled t (seconds), ranging from 0 to 14.)

LOG
Question: Find the value of the square in the figure.
(Image: a brain-teaser IQ-test diagram with symbol equations.)
Solution: Circle + Square = 5, Triangle + Triangle = 8, Triangle = 4. Circle + Triangle = 7, Circle = 3. Therefore Square = 2.
Answer: 2
Table 4: Examples of seven mathematical reasoning categories in MATHVISTA.

C.3 VISUAL CONTEXT TYPES

Figure 7: Examples of the visual context for the geometry diagram type.
Figure 8: Examples of the visual context for the synthetic scene type.
Figure 9: Examples of the visual context for the bar chart type.
Figure 10: Examples of the visual context for the natural image type.
Figure 11: Examples of the visual context for the scientific figure type.
Figure 12: Examples of the visual context for the table type.
Figure 13: Examples of the visual context for the function plot type.
Figure 14: Examples of the visual context for the abstract scene type.
Figure 15: Examples of the visual context for the puzzle test type.
Figure 16: Examples of the visual context for the scatter plot type.
Figure 17: Examples of the visual context for the line plot type.
Figure 18: Examples of the visual context for the pie chart type.
Figure 19: Examples of the visual context for the document image type.
Figure 20: Examples of the visual context for the medical image type.
Figure 21: Examples of the visual context for other types, including word cloud, map chart, radar chart, violin plot, and heatmap chart.

C.4 SOURCE DATASET SUMMARY

The source datasets are summarized in Table 5.
| Dataset | Category | Task | Context | Math Skill |
|---|---|---|---|---|
| IQTest (Ours) | Math-Targeted | FQA | Puzzle Test | Logical, Arithmetic |
| PaperQA (Ours) | Math-Targeted | FQA | Charts and Plots | Scientific |
| FunctionQA (Ours) | Math-Targeted | TQA | Function Plot | Algebraic |
| Geometry3K (2021a) | Math-Targeted | GPS | Geometry Diagram | Geometry, Algebraic |
| GeoQA+ (2022) | Math-Targeted | GPS | Geometry Diagram | Geometry, Algebraic |
| GEOS (2015) | Math-Targeted | GPS | Geometry Diagram | Geometry, Algebraic |
| UniGeo (2022a) | Math-Targeted | GPS | Geometry Diagram | Geometry, Algebraic |
| CLEVR-Math (2022) | Math-Targeted | MWP | Synthetic Scene | Arithmetic |
| IconQA (2021b) | Math-Targeted | MWP | Abstract Scene | Arithmetic |
| TabMWP (2023b) | Math-Targeted | MWP | Table | Statistical, Arithmetic |
| SciBench (2023b) | Math-Targeted | TQA | Scientific Figure | Scientific |
| TheoremQA (2023) | Math-Targeted | TQA | Scientific Figure | Scientific |
| ChartQA (2022) | General VQA | FQA | Charts and Plots | Statistical |
| FigureQA (2017) | General VQA | FQA | Charts and Plots | Statistical |
| DVQA (2018) | General VQA | FQA | Bar Chart | Statistical |
| MapQA (2022) | General VQA | FQA | Map Chart | Statistical |
| PlotQA (2020) | General VQA | FQA | Scatter Plot | Statistical |
| DocVQA (2022) | General VQA | FQA | Document Image | Statistical |
| AI2D (2016) | General VQA | TQA | Scientific Figure | Scientific |
| ScienceQA (2022) | General VQA | TQA | Scientific Figure | Scientific |
| TQA (2017) | General VQA | TQA | Scientific Figure | Scientific |
| A-OKVQA (2022) | General VQA | VQA | Natural Image | Arithmetic, Numeric |
| KVQA (2019) | General VQA | VQA | Natural Image | Arithmetic, Numeric |
| ParsVQA-Caps (2022) | General VQA | VQA | Natural Image | Arithmetic, Numeric |
| TextVQA (2019) | General VQA | VQA | Natural Image | Arithmetic, Numeric |
| VizWiz (2018) | General VQA | VQA | Natural Image | Arithmetic, Numeric |
| VQA2.0 (2017) | General VQA | VQA | Natural Image | Arithmetic, Numeric |
| PMC-VQA (2023c) | General VQA | VQA | Medical Image | Scientific |
| VQA-RAD (2018) | General VQA | VQA | Medical Image | Scientific |
| Super-CLEVR (2023d) | General VQA | VQA | Synthetic Scene | Arithmetic |
| VQA-AS (2015) | General VQA | VQA | Abstract Scene | Arithmetic |
Table 5: Summary of the 31 different source datasets in MATHVISTA. Among these, FunctionQA, IQTest, and PaperQA are our newly annotated datasets. The table provides details on their category, task, visual context, and primary mathematical reasoning skill types.

# D DATA COLLECTION DETAILS

D.1 AUTOMATIC SELECTION OF MATHEMATICAL PROBLEMS

Questions containing any of the quantity words in Table 6 are automatically flagged as likely to involve mathematical reasoning.

most, least, fewest, more, less, fewer, largest, smallest, greatest, larger, smaller, greater, highest, lowest, higher, lower, increase, decrease, minimum, maximum, max, min, mean, average, median, total, sum, add, subtract, difference, quotient, gap, half, double, twice, triple, square, cube, root, approximate, approximation, triangle, rectangle, circle, square, cube, sphere, cylinder, cone, pyramid, multiply, divide, percentage, percent, ratio, proportion, fraction, rate

Table 6: Dictionary of quantity words used for the automatic selection of questions likely to involve mathematical reasoning.
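As a rough illustration of this selection step, the sketch below flags a question if it contains any word from the Table 6 dictionary; the function name and the exact matching rule are our own assumptions, not the authors' released code.

```python
import re

# Hypothetical sketch of the automatic selection step: flag a question as
# likely mathematical if it contains any quantity word from Table 6
# (the set below is truncated for brevity).
QUANTITY_WORDS = {
    "most", "least", "fewest", "more", "less", "fewer", "largest", "smallest",
    "mean", "average", "median", "total", "sum", "difference", "ratio", "rate",
}

def likely_math_question(question: str) -> bool:
    # Tokenize on word boundaries so that, e.g., "summer" does not match "sum".
    tokens = set(re.findall(r"[a-z]+", question.lower()))
    return bool(tokens & QUANTITY_WORDS)

print(likely_math_question("What is the average height of the bars?"))  # True
print(likely_math_question("What color is the cat?"))                   # False
```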
D.2 HUMAN LABELING OF MATHEMATICAL PROBLEMS

Figure 22: GUI for labeling whether a problem involves mathematical reasoning.

We are compiling a dataset that incorporates image context and involves mathematical reasoning (MathQA in visual contexts). We have gathered a set of examples in which some involve mathematical reasoning, while others do not. In our task, a question can be classified as a mathematical problem if it:
• Involves numbers or symbols in the question text or the image context, AND requires further operations or transformations to be performed on them to reach a solution.
• Involves more complex forms of mathematical reasoning, including logical reasoning, abstract thought, and understanding of patterns.

Based on the definition above, a problem is classified as a negative example (NOT involving mathematical reasoning) if it:

• Does not involve any numbers or quantity words, OR
• Involves only counting, reading, or recognizing numbers, OR
• Relies solely on factual information, such as recalling years and dates.

Table 7: Instructions for human annotators to identify if a problem involves mathematical reasoning.

We developed an annotation tool, as illustrated in Figure 22, to enable expert annotators to label problems that involve mathematical reasoning.
Annotators were trained using detailed instructions, as shown in Table 7, along with a variety of examples: positive ones that involve mathematical reasoning and negative ones that do not. We provided three labeling options:

• Yes - This indicates that the problem involves mathematical reasoning.
• No - This indicates that the problem does not involve mathematical reasoning.
• Unsure - This option should be selected if it is uncertain whether the problem involves mathematical reasoning. (Annotators are advised to use this option sparingly.)

Annotators may leave comments if they find anything incorrect or offensive for removal at a later stage.

In our study, we employed Fleiss' kappa to conduct an inter-annotator agreement analysis among the three annotators tasked with labeling examples based on mathematical reasoning. Fleiss' kappa is a statistical measure used to evaluate the reliability of agreement between multiple raters, providing a quantifiable metric to assess consistency across different annotators. A score of 1 indicates perfect agreement, while a score of 0 suggests no agreement beyond what would be expected by chance. Our analysis yielded a Fleiss' kappa of 0.775, indicating a substantial level of consistency among the annotators. This high degree of agreement underscores the reliability of our annotation process and affirms the quality of the labeled data generated for our study.
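To make the agreement computation concrete, here is a small sketch of Fleiss' kappa over such three-way labels (Yes/No/Unsure); the counts are invented for illustration, and the statsmodels call is one standard way to compute the statistic, not the authors' script.

```python
import numpy as np
from statsmodels.stats.inter_rater import fleiss_kappa

# Hypothetical sketch: each row is one problem; columns count how many of the
# three annotators chose each label (Yes, No, Unsure). Counts are invented.
ratings = np.array([
    [3, 0, 0],  # all three annotators chose "Yes"
    [2, 1, 0],  # two chose "Yes", one chose "No"
    [0, 3, 0],  # all three chose "No"
    [1, 2, 0],
    [3, 0, 0],
])

# Each row must sum to the number of raters (here, 3).
kappa = fleiss_kappa(ratings, method="fleiss")
print(f"Fleiss' kappa: {kappa:.3f}")  # 1 = perfect agreement, 0 = chance level
```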
D.3 ANNOTATING THREE NEW DATASETS

Figure 23: GUI for annotating our new source datasets.

D.4 HUMAN LABELING OF MATHEMATICAL REASONING
Figure 24: GUI for labeling mathematical reasoning skills.
# E MORE DATASET ANALYSIS

Question distribution. Apart from English questions, MATHVISTA contains 6.57% non-English questions, including languages such as Chinese and Persian. The multilingual feature necessitates that models be capable of understanding and processing multiple languages to ensure accurate results across the dataset. As illustrated in Table 1, the average number of words in English questions within MATHVISTA is 15.58, while the maximum number of words in a question reaches 213. Figure 25 further elucidates the distribution of word counts, highlighting the diverse patterns of questions. MATHVISTA features two types of questions: multiple-choice questions and free-form questions. For multiple-choice questions, the average number of choices is 3.4, while the maximum number of choices is 8. In the case of free-form questions, answers can be integers, floating-point numbers, or lists, which can be converted into a standard format. The standard settings in question and answer types facilitate consistent accuracy evaluation for existing models.

Figure 25: The distribution of the number of words per question in MATHVISTA (mean 15.58, median 13). Questions with a length greater than 60 are categorized as 61 for visualization simplicity.
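To illustrate the standard answer formats described above, here is a hypothetical sketch of how free-form predictions might be normalized before accuracy scoring; the function is our own illustration, not the benchmark's released evaluation code.

```python
# Hypothetical sketch: convert a free-form prediction string into the standard
# answer format (integer, float with fixed precision, or list) before scoring.
def normalize_answer(raw: str, answer_type: str, precision: int = 2):
    raw = raw.strip()
    if answer_type == "integer":
        return int(float(raw.strip("$").replace(",", "")))
    if answer_type == "float":
        return round(float(raw.strip("$").replace(",", "")), precision)
    if answer_type == "list":
        # e.g. "[2007, 2008]" -> [2007.0, 2008.0]
        return [float(x) for x in raw.strip("[]").split(",") if x.strip()]
    return raw  # text answers (e.g., option letters) pass through unchanged

print(normalize_answer("$69.00", "integer"))     # 69
print(normalize_answer("1.4500", "float"))       # 1.45
print(normalize_answer("[2007, 2008]", "list"))  # [2007.0, 2008.0]
```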
Dataset category and task type. Source datasets in MATHVISTA can be categorized into two types: math-targeted VQA datasets, which were originally proposed for assessing mathematical reasoning, and general VQA datasets, which address visual reasoning in everyday scenarios. The distribution proportions of these two categories (55.4% vs. 44.6%, as illustrated in Figure 26) within MATHVISTA enable a balanced examination of mathematical reasoning in both domain-specific and general-purpose applications. The distribution of the five tasks contained within MATHVISTA is visualized in Figure 27. The relatively balanced distribution of these tasks enhances the benchmarking robustness that our dataset provides.

Figure 26: Category distribution of problems within MATHVISTA.

Figure 27: Task type distribution of problems within MATHVISTA (figure question answering, geometry problem solving, math word problem, textbook question answering, and visual question answering).

Grade level. The datasets within MATHVISTA are categorized into four distinct grade levels: elementary school, high school, college, and not applicable, each representing a different level of reasoning complexity and contextual application. The elementary school category aligns with the typical mathematical curriculum of elementary education, introducing basic topics such as arithmetic operations and introductory geometry. High school level questions delve into more complex mathematical concepts such as algebra, geometry, and introductory calculus. The college category encapsulates the highest level of complexity, featuring questions on advanced mathematical and scientific concepts like calculus, linear algebra, and physics. Questions without specific grade levels are categorized as not applicable.
The distribution of questions across these grade levels is visualized in Figure 28. This structured categorization enriches the diversity of MATHVISTA, providing a meaningful framework for evaluating and benchmarking the mathematical and visual reasoning capabilities of various models across different educational contexts, thereby assessing their practical utility and educational relevance.

Figure 28: Distribution of questions across different grade levels within MATHVISTA.

Visual context. The datasets within MATHVISTA encompass over 10 different visual contexts (with the distribution shown in Figure 29), crucial for evaluating models' ability to interpret and reason across diverse visual information. Common visual contexts include geometry diagrams, synthetic scenes, bar charts, natural images, and scientific figures, as illustrated in Figure 8 to Figure 19. Less frequent, yet equally important visual contexts such as medical images, word clouds, map charts, radar charts, violin plots, and heatmap charts are depicted in Figure 20 and Figure 21. These visual contexts, ranging from common to specialized representations, challenge the models to decode and reason with varying visual information, contributing to a more robust and comprehensive evaluation. The diversity in visual contexts enriches MATHVISTA, enhancing the benchmarking robustness and providing a solid foundation for understanding the practical utility and domain-specific performance of various models across different domains and applications.
Figure 29: Visual context distribution within MATHVISTA (geometry diagram, synthetic scene, bar chart, natural image, scientific figure, table, function plot, abstract scene, puzzle test, scatter plot, line plot, pie chart, and others).

Mathematical reasoning ability. The datasets within MATHVISTA encompass a spectrum of seven distinct mathematical reasoning types, facilitating a thorough evaluation of models' mathematical reasoning capabilities. Figure 30 illustrates the portion of each reasoning type involved in the problems, with arithmetic being the most frequent and logical reasoning the least frequent. This distribution reflects the varying degrees of mathematical reasoning required across different problems. Figure 31 further delineates the distribution of reasoning types, showcasing a mean of 1.45. The sparse distribution observed aids in the precise analysis of each type's performance by the models, providing a nuanced understanding of their strengths and weaknesses across different mathematical reasoning domains. This structured representation of mathematical reasoning types within MATHVISTA not only enriches the dataset but also significantly contributes to a more robust and comprehensive evaluation of models, aiding in the identification of areas for improvement and the development of more proficient mathematical reasoning models.

Figure 30: Portion of each mathematical reasoning type involved in the problems of MATHVISTA.
Figure 31: Distribution of the number of mathematical reasoning types within MATHVISTA (mean 1.45, median 1).
# F MORE DETAILS ON THE SETUP

F.1 FREQUENT GUESS

We employ a strategy where the most frequent answers in the testmini set are utilized as predictions for various question and answer types. For multiple-choice questions, the most frequent option is selected based on the number of available options. For instance, option B is chosen for questions with two options, aligning with the answer distribution in testmini. Similarly, for questions requiring an answer type of integer, a floating-point number with one decimal place, a floating-point number with two decimal places, or a list, we use 2, 1.2, 0.21, and [0, 2, 0, 2, 1, 7, 1, 2, 0, 3, 0, 6] respectively, in accordance with the answer distribution observed in testmini.
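A minimal sketch of this baseline is shown below; the free-form constants mirror those described above, while the function name, input schema, and the choice of "B" beyond the two-option case are our own assumptions.

```python
# Hypothetical sketch of the "frequent guess" baseline described above.
# The free-form constants follow the answer distribution of testmini.
FREE_FORM_GUESS = {
    "integer": 2,
    "float_1dp": 1.2,
    "float_2dp": 0.21,
    "list": [0, 2, 0, 2, 1, 7, 1, 2, 0, 3, 0, 6],
}

def frequent_guess(question_type: str, answer_type: str, num_choices: int = 0):
    if question_type == "multi_choice":
        # The paper states that option B is the most frequent answer for
        # two-option questions; treating B as the default for other choice
        # counts is our own simplification.
        return "B"
    return FREE_FORM_GUESS[answer_type]

print(frequent_guess("multi_choice", "text", num_choices=2))  # "B"
print(frequent_guess("free_form", "float_2dp"))               # 0.21
```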
F.2 PROMPT FOR ANSWER EXTRACTION

The prompt used to instruct GPT-4 for answer extraction is illustrated in Table 8.

Task description:
Please read the following example. Then extract the answer from the model response and type it at the end of the prompt.

Example 1:
Hint: Please answer the question requiring an integer answer and provide the final value, e.g., 1, 2, 3, at the end.
Question: Which number is missing?
Model response: The number missing in the sequence is 14.
Extracted answer: 14

Example 2:
Hint: Please answer the question requiring a floating-point number with one decimal place and provide the final value, e.g., 1.2, 1.3, 1.4, at the end.
Question: What is the fraction of females facing the camera?
Model response: The fraction of females facing the camera is 0.6, which means that six out of ten females in the group are facing the camera.
Extracted answer: 0.6

Example 3:
Hint: Please answer the question requiring a floating-point number with two decimal places and provide the final value, e.g., 1.23, 1.34, 1.45, at the end.
Question: How much money does Luca need to buy a sour apple candy and a butterscotch candy? (Unit: $)
Model response: Luca needs $1.45 to buy a sour apple candy and a butterscotch candy.
Extracted answer: 1.45

Example 4:
Hint: Please answer the question requiring a Python list as an answer and provide the final list, e.g., [1, 2, 3], [1.2, 1.3, 1.4], at the end.
Question: Between which two years does the line graph saw its maximum peak?
Model response: The line graph saw its maximum peak between 2007 and 2008.
Extracted answer: [2007, 2008]

Example 5:
Hint: Please answer the question and provide the correct option letter, e.g., A, B, C, D, at the end.
Question: What fraction of the shape is blue?
Choices: (A) 3/11 (B) 8/11 (C) 6/11 (D) 3/5
Model response: The correct answer is (B) 8/11.
Extracted answer: B

Table 8: Task description along with five examples used to prompt GPT-4 for answer extraction.
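To show how this prompt might be assembled in practice, here is a hedged sketch of querying GPT-4 for extraction; `FEW_SHOT_EXAMPLES` stands for the Table 8 text, and the OpenAI client usage is one standard pattern rather than the authors' exact script.

```python
from openai import OpenAI

# Hypothetical sketch: extract the final answer from a model response by
# prompting GPT-4 with the Table 8 task description and few-shot examples.
FEW_SHOT_EXAMPLES = "..."  # the task description and five examples of Table 8

def extract_answer(question: str, response: str, hint: str) -> str:
    prompt = (
        f"{FEW_SHOT_EXAMPLES}\n\n"
        f"Hint: {hint}\n"
        f"Question: {question}\n"
        f"Model response: {response}\n"
        f"Extracted answer:"
    )
    client = OpenAI()
    completion = client.chat.completions.create(
        model="gpt-4-0613",  # matches the GPT-4 setting reported in Table 11
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
        max_tokens=1024,
    )
    return completion.choices[0].message.content.strip()
```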
F.3 PROMPTS FOR RESPONSE GENERATION

| Question type | Answer type | Task instruction |
|---|---|---|
| Multiple-choice | Text | Please answer the question and provide the correct option letter, e.g., A, B, C, D, at the end. |
| Free-form | Integer | Please answer the question requiring an integer answer and provide the final value, e.g., 1, 2, 3, at the end. |
| Free-form | Float (1) | Please answer the question requiring a floating-point number with one decimal place and provide the final value, e.g., 1.2, 1.3, 1.4, at the end. |
| Free-form | Float (2) | Please answer the question requiring a floating-point number with two decimal places and provide the final value, e.g., 1.23, 1.34, 1.45, at the end. |
| Free-form | List | Please answer the question requiring a Python list as an answer and provide the final list, e.g., [1, 2, 3], [1.2, 1.3, 1.4], at the end. |

Table 9: The task instructions for different question and answer types in answer extraction. Here, Float (1) refers to a floating-point number with one decimal place, and Float (2) refers to a floating-point number with two decimal places.

F.4 PROMPT FOR CAPTION GENERATION

We instruct Multimodal Bard to generate a detailed description for an input image, aiming to augment current LLMs with visual understanding capabilities. The prompt is shown in Table 10.

Describe the fine-grained content of the image or figure, including scenes, objects, relationships, and any text present.

Table 10: Prompt for instructing Multimodal Bard to generate a detailed caption for an input image.

F.5 MODEL HYPERPARAMETERS

The hyperparameters for the experiments in §3.2 are set to their default values unless specified otherwise. Table 11 and Table 12 detail the specific generation parameters for the large language models (LLMs) and large multimodal models (LMMs) we evaluated, respectively.

| Model | Generation Setup |
|---|---|
| Claude-2 | model = claude-2, temperature = 0, max tokens = 1024 |
| ChatGPT | model = gpt-3.5-turbo, temperature = 0, max tokens = 1024 |
| GPT-4 | model = gpt-4-0613, temperature = 0, max tokens = 1024 |

Table 11: Generation parameters for various LLMs.

| Model | Generation Setup |
|---|---|
| IDEFICS-9B-Instruct | max new tokens = 256, temperature = 1.0 |
| mPLUG-Owl-LLaMA-7B | do sample = True, top-k = 5, max length = 512 |
| miniGPT4-LLaMA-2-7B | num beams = 1, temperature = 1.0, max new tokens = 300, max length = 1000 |
| LLaMA-Adapter-V2-7B | max gen len = 256, temperature = 0.1, top p = 0.75 |
| LLaVAR | do sample = True, temperature = 0.2, max new tokens = 1024 |
| InstructBLIP-Vicuna-7B | do sample = False, num beams = 5, max length = 256, min length = 1, top p = 0.9, repetition penalty = 1.0, temperature = 1 |
| LLaVA-LLaMA-2-13B | do sample = True, temperature = 0.2, max new tokens = 1024 |
| Multimodal Bard | Chatbot URL: https://bard.google.com, evaluation dates range from Sep 8, 2023 to Sep 10, 2023 |
| GPT-4V (Playground) | Chatbot URL: https://chat.openai.com, evaluation dates range from Oct 7, 2023 to Oct 15, 2023 |

Table 12: Generation parameters for various LMMs.
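As an illustration of how the open-source settings in Table 12 map onto code, here is a hedged sketch using the Hugging Face generate API with the LLaVA-LLaMA-2-13B decoding parameters; the base checkpoint name is an assumption, and the actual LMMs also consume image features.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical sketch of applying the decoding settings from Table 12 (here,
# the LLaVA-LLaMA-2-13B row) with the Hugging Face generate API.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-hf")
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf", torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer(
    "Question: What is the value of y at x = 1?", return_tensors="pt"
).to(model.device)
outputs = model.generate(
    **inputs,
    do_sample=True,       # do sample = True
    temperature=0.2,      # temperature = 0.2
    max_new_tokens=1024,  # max new tokens = 1024
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```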
F.6 HUMAN PERFORMANCE

We conducted a study to evaluate human performance on the testmini subset of MATHVISTA, utilizing Amazon Mechanical Turk (AMT). Each question from the testmini subset was assigned to five annotators, all of whom have a history of completing more than 5,000 HIT tasks and boast an acceptance score higher than 0.99, to ensure the quality of the results. The study comprised five test questions and two qualification questions, which were to be answered within a 20-minute timeframe. The qualification questions consisted of elementary math word problems requiring basic arithmetic operations (e.g., addition and subtraction). Only annotators who successfully answered the qualification questions were deemed eligible for the study, and their responses were included in the final analysis. Additionally, annotators were requested to provide information regarding their highest level of educational attainment. We retained the results exclusively from annotators who had achieved a high school diploma or higher, as 30.9% of the problems in MATHVISTA are of high-school level difficulty and 10.8% correspond to college-level curricula.

F.7 MULTIMODAL BARD ASSESSMENT TASK

A screenshot of our AMT worker interface, utilized for the Multimodal Bard assessment task, is provided in Figure 32. The workers were compensated at a rate of $18 per hour.
Figure 32: Screenshot of the Multimodal Bard assessment task interface.

# G MORE EXPERIMENTAL RESULTS

G.1 RESULTS ON THE TEST SET

Table 13 reports the accuracy scores of two heuristic baselines, two leading augmented LLMs (CoT GPT-4, PoT GPT-4), and one leading LMM (LLaVA-LLaMA-2-13B) on the test subset. The minor differences between scores on the test subset and the testmini subset, as shown in Table 2, suggest that testmini effectively mirrors the test subset, serving as a valuable evaluation subset for model development, especially for those who have limited computing resources.

| Model | Input | ALL | FQA | GPS | MWP | TQA | VQA | ALG | ARI | GEO | LOG | NUM | SCI | STA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Random chance | - | 17.86 | 15.46 | 24.12 | 4.54 | 23.36 | 24.33 | 25.84 | 13.85 | 22.69 | 13.40 | 8.82 | 15.76 | 14.28 |
| Frequent guess | - | 23.48 | 20.97 | 27.18 | 16.27 | 26.06 | 28.87 | 28.29 | 20.86 | 25.71 | 11.86 | 19.61 | 20.45 | 20.08 |
| 2-shot CoT GPT-4 | Q, Ic, It | 30.50 | 27.21 | 35.91 | 21.30 | 43.13 | 28.17 | 35.72 | 25.17 | 35.80 | 24.74 | 15.41 | 47.28 | 31.29 |
| 2-shot PoT GPT-4 | Q, Ic, It | 31.74 | 27.58 | 37.35 | 23.87 | 43.00 | 30.27 | 37.15 | 27.93 | 37.48 | 22.68 | 15.83 | 44.47 | 31.87 |
| LLaVA-LLaMA-2-13B | Q, I | 25.40 | 22.86 | 24.57 | 18.15 | 35.82 | 29.69 | 26.93 | 22.47 | 24.45 | 19.07 | 19.05 | 34.71 | 21.61 |
Table 13: Accuracy scores on the test subset of MATHVISTA. Input: Q: question, I: image, Ic: image caption, It: OCR texts detected from the image. ALL: overall accuracy. Task types: FQA: figure question answering, GPS: geometry problem solving, MWP: math word problem, TQA: textbook question answering, VQA: visual question answering. Mathematical reasoning types: ALG: algebraic reasoning, ARI: arithmetic reasoning, GEO: geometry reasoning, LOG: logical reasoning, NUM: numeric common sense, SCI: scientific reasoning, STA: statistical reasoning.

G.2 SCORES FOR MATH REASONING TYPES

The accuracy scores across seven mathematical reasoning categories are reported in Table 2, with primary baselines highlighted in Figures 1 and 33. GPT-4V outperforms the other baseline models in most mathematical reasoning categories, except for logical reasoning and numeric commonsense reasoning. Multimodal Bard achieves performance comparable to GPT-4V in geometry reasoning (47.8% vs. 51.0%) and algebraic reasoning (46.5% vs. 53.0%), highlighting its enhanced abilities in comprehending geometry diagrams and performing algebraic calculations.

Figure 33: Accuracy scores of baselines across mathematical reasoning types in MATHVISTA (Algebraic, Arithmetic, Geometry, Logical, Numeric, Scientific, Statistical; baselines: Random chance, LLaMA-Adapter V2, LLaVA, CoT GPT-4, PoT GPT-4, Multimodal Bard, GPT-4V, Human).

Among open-source LMMs (ranging from IDEFICS to LLaVA), LLaVA achieves the best overall accuracy on MATHVISTA and the highest fine-grained scores for problems in geometry reasoning, logical reasoning, and statistical reasoning. However, these scores still substantially lag behind GPT-4V and Multimodal Bard, indicating a gap in the overall effectiveness of these open-source models compared to more advanced proprietary systems.
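The category-level scores reported here and in Table 13 are grouped means over per-example judgments, where one example can count toward several reasoning types. A minimal sketch of how such a breakdown can be computed (the record fields reasoning_types and correct are hypothetical names for illustration, not the released schema):

from collections import defaultdict

def category_accuracy(records):
    # Mean accuracy per reasoning type; an example may belong to several types.
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        for cat in r["reasoning_types"]:      # e.g., ["ALG", "GEO"]
            totals[cat] += 1
            hits[cat] += int(r["correct"])    # whether the prediction matched the answer
    return {cat: 100.0 * hits[cat] / totals[cat] for cat in totals}

records = [
    {"reasoning_types": ["ALG", "GEO"], "correct": True},
    {"reasoning_types": ["ARI"], "correct": False},
]
print(category_accuracy(records))  # {'ALG': 100.0, 'GEO': 100.0, 'ARI': 0.0}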
Despite this overall lag, LLaMA-Adapter-V2, tied with LLaVA, outperforms GPT-4V by 2.7% in logical reasoning, and InstructBLIP beats GPT-4V by 0.3% in numeric commonsense, suggesting that specific enhancements in open-source models can lead to superior performance in certain niches. LLaVAR, which is specifically designed to enhance capabilities in detecting OCR text and symbols from various sources, including scientific domains, performs on par with Multimodal Bard, further illustrating the potential of targeted improvements in open-source LMMs to achieve competencies that rival or even exceed those of their proprietary counterparts in specialized areas.

CoT GPT-4, augmented with OCR text and Bard captions, performs well in scientific reasoning, achieving a gain of 26.2% over random chance, showcasing its superiority in domain-specific knowledge. This performance suggests a significant trend (Shen et al., 2023; Lu et al., 2023a) where the integration of specialized functionalities, such as OCR text recognition and advanced captioning, into LLMs enhances their applicability and accuracy in specific domains. PoT GPT-4 outperforms Multimodal Bard in categories such as arithmetic reasoning, logical reasoning, numeric commonsense reasoning, and statistical reasoning. This superior performance is attributed to its ability to generate high-quality programs for precise mathematical reasoning, illustrating the effectiveness of integrating advanced coding capabilities into language models for complex problem-solving tasks.

G.3 SCORES FOR VARIOUS VISUAL CONTEXTS

Figure 34 illustrates the accuracy scores of leading baselines on MATHVISTA across a diverse range of visual contexts. Remarkably, GPT-4V outperforms human performance in visual contexts of function plots, geometry diagrams, scatter plots, tables, and other types, which aligns with its superiority in terms of related mathematical reasoning types. Other foundation models trail behind humans in visual perception and reasoning across most visual context categories. Multimodal Bard demonstrates comparable performance to humans in questions with a visual context of geometry diagrams, showcasing its promising capabilities in recognizing geometric shapes and relationships.
On the other hand, PoT GPT-4, augmented with Bard captions, achieves a significant performance advantage over the other baselines, exhibiting strong abilities in discerning structural information in tables and generating symbolic code for precise statistical reasoning.

Figure 34: Accuracy scores of leading baselines across various visual contexts in MATHVISTA (baselines: Random chance, LLaMA-Adapter V2, LLaVA, CoT GPT-4, PoT GPT-4, Multimodal Bard, GPT-4V, Human).

G.4 SCORES ACROSS DIFFERENT GRADE LEVELS

Figure 35 displays the average accuracy scores across different grade levels (elementary school, high school, and college) for the leading foundation models, as well as random chance and human performance. Humans exhibit the highest performance on questions at the elementary school level (70.4%), while they fare the worst on college-level questions (52.6%) within MATHVISTA. Foundation model baselines exhibit varying performance behaviors: they achieve better accuracy scores on high-school-level questions than on the other two categories.
In addressing elementary school problems, the performance gap between human performance and the best-performing model, GPT-4V, is notably the largest compared to the other grade levels. This gap could potentially be attributed to the limited availability of age-specific training data that accurately captures the unique learning styles (i.e., rich with abstract scenes) of elementary school students. On the other hand, GPT-4V demonstrates an improvement of 20.9% over Multimodal Bard, the second-best performing model in this category. This improvement suggests that while GPT-4V still lags behind human performance, its ability to tackle elementary-level problems in visually intensive settings has been significantly enhanced.

For high school problems, GPT-4V, with a score of 61.8%, outperforms human performance, which stands at 58.2%. Additionally, the second-best performing model, Multimodal Bard, with a score of 50.3%, is on par with human performance. This disparity might be attributed to the training regimen of the models, which perhaps aligns well with the high school curriculum.

In the context of the college curriculum, the performance of the various baselines varies dramatically. GPT-4V demonstrates performance comparable to that of humans. The GPT-4 model, when augmented with visual information (CoT GPT-4), outperforms Multimodal Bard. Among the best open-source LMMs on MATHVISTA, LLaVA achieves only a negligible gain over random chance. This suggests that while advanced models like GPT-4V and CoT GPT-4 show promise in higher-education settings, there remains significant room for improvement in the development of LMMs to effectively address the complex and diverse nature of college-level content.
Figure 35: Average accuracy scores across different grade levels (elementary school, high school, college) for primary baselines.

G.5 ABLATION STUDY FOR LLMS

Figure 36 presents an ablation study conducted on LLMs, examining their performance under varying visual information inputs (OCR text, caption, and caption + OCR text).

Figure 36: Average accuracy scores of LLM baselines (CoT ChatGPT, CoT GPT-4, PoT ChatGPT, PoT GPT-4) under various visual inputs.

G.6 LLMS WITH DIFFERENT SHOTS

We explored whether LLMs and augmented LLMs can benefit from larger numbers of few-shot examples on MATHVISTA, with results reported in Figure 37. In the question-only input setting (a), both Claude-2 and ChatGPT suffer from a performance drop, suggesting that they are more sensitive to bias in the demonstrations, especially in the absence of visual inputs. There is a marginal improvement of 1.4% when the shot number increases from 2 to 4 for GPT-4. A similar phenomenon is observed when LLMs are augmented with external OCR text and image captions with CoT prompting (b); notably, there is a significant drop of 3.4% when the shot number increases from 2 to 4 for CoT Claude-2. With PoT prompting (c), LLMs like ChatGPT and GPT-4 obtain gains of 3.4% and 1.4%, respectively, as the shot number increases from 2 to 4. Overall, while there may be marginal improvements, larger numbers of few-shot examples do not necessarily benefit the LLMs on MATHVISTA. In some settings, LLMs suffer from unstable performance drops. This further indicates that the quality of the augmented information plays a more important role for augmented LLMs.
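For context, a k-shot prompt in these experiments simply concatenates k solved demonstrations before the test query. A minimal sketch of how such a prompt can be assembled (the demonstration texts and field names are illustrative placeholders, not the exact examples or template used in the paper):

def build_few_shot_prompt(demos, question, context=""):
    # Concatenate solved demonstrations before the test query (CoT style).
    parts = []
    for demo in demos:
        parts.append(f"Question: {demo['question']}\n"
                     f"Context: {demo['context']}\n"
                     f"Answer: {demo['solution']}\n")
    parts.append(f"Question: {question}\nContext: {context}\nAnswer:")
    return "\n".join(parts)

demos = [
    {"question": "What is 2 + 3?", "context": "", "solution": "2 + 3 = 5. The answer is 5."},
    {"question": "What is 10 - 4?", "context": "", "solution": "10 - 4 = 6. The answer is 6."},
]
print(build_few_shot_prompt(demos, "What is 7 + 8?"))

Under this setup, increasing the shot number only lengthens the prefix of demonstrations; as the results above show, that does not reliably improve accuracy.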
(a) Q only. (b) Q + OCR + caption (CoT prompting). (c) Q + OCR + caption (PoT prompting).

Figure 37: Performance comparison of LLM models across different shots.

G.7 LMMS WITH DIFFERENT SHOTS

We conducted an initial study on the few-shot learning ability of a Large Multimodal Model (LMM), specifically IDEFICS (Laurençon et al., 2023), on MATHVISTA. As shown in Figure 38, there is a modest improvement with increased shot numbers, suggesting potential benefits of few-shot learning for LMMs on MATHVISTA.

However, recent studies highlight the instability of LMMs in few-shot settings. For instance, a significant accuracy drop was observed in models like BLIP-2 (Li et al., 2023b) and InstructBLIP (Dai et al., 2023) when applying 4-shot in-context learning to commonsense reasoning tasks (Li et al., 2023c). These variations may stem from the specific training techniques or the nature of the few-shot examples used, which affect the in-context learning performance of LMMs. Given the rapidly evolving landscape of LMMs, the consistent benefits of few-shot learning remain an open question.
Figure 38: Performance comparison of the LMM (IDEFICS-9B-Instruct) in the few-shot setting (1–4 shots).

G.8 HALLUCINATIONS IN MODEL EXPLANATIONS

Question: What is the highest amount this glass measures?

Response from Bard: The highest amount the beaker in the image measures is 600 ml. The markings on the side of the beaker are in milliliters (ml). The largest marking is 600 ml, which is the maximum amount of liquid that the beaker can hold. The other markings on the beaker are 300 ml, 200 ml, 100 ml, and -400 ml. The negative markings indicate the amount of liquid that is below the zero mark.

Wrong output: 600
Ground truth: 400

Figure 39: An example illustrating hallucinated information not depicted in the image.

Question: As shown in the figure, if CB = 4.0, DB = 7.0, and D is the midpoint of AC, then the length of AC is ().
Choices: (A) 6 cm (B) 7 cm (C) 4 cm (D) 5 cm

Response from Bard: Since D is the midpoint of AC, then AD = DC = AC/2. We know that CB = 4.0 and DB = 7.0, and that AD = DC. So we can write the equation:
AD + DC = 4.0 + 7.0
2AD = 11.0
AD = 5.5
AC = 2AD = 11.0 cm
Therefore, the length of AC is 11.0 cm, which is answer choice (D).

Wrong output: (D) 5 cm
Ground truth: (A) 6 cm

Figure 40: An example illustrating hallucinated information not depicted in the question.
G.9 MORE EXAMPLES FOR MULTIMODAL BARD

Question: As shown in the figure: In Rt△ABC, ∠C = 90.0, AC = 8.0, AB = 10.0, then the value of sin B is equal to ().
Choices: (A) 3/5 (B) 4/5 (C) 3/4 (D) 4/3

Response from Bard: In a right triangle, the sine of an angle is equal to the ratio of the opposite side to the hypotenuse. In this case, the opposite side is AC = 8 and the hypotenuse is AB = 10. Therefore, sin B = 8/10 = 3/5.
Here is the detailed solution:
Given:
* △ABC is a right triangle with right angle at C
* AC = 8
* AB = 10
To find: sin B
sin B = opposite/hypotenuse = AC/AB = 8/10 = 3/5

Wrong output: (A) 3/5
Ground truth: (B) 4/5

Figure 41: A failure example from Multimodal Bard: incorrect answer with wrong calculation.

Question: What is the age gap between these two people in the image? (unit: years)

Response from Bard: Sorry, I can't help with images of people yet.

Wrong output: N/A
Ground truth: 7

Figure 42: A failure example from Multimodal Bard: restricted in responding to images of people.
Question: What is the accuracy of the algorithm with the lowest accuracy?

Response from Bard: The algorithm with the lowest accuracy in the graph is heel. Its accuracy is 0. The other algorithms are desert, pine, and globe. Their accuracies are 2, 4, and 6, respectively.

Wrong output: 1
Ground truth: 0

(Chart title: Accuracy of different algorithms.)

Figure 43: A failure example from Multimodal Bard: incorrect answer with hallucination caused by visual perception.

Question: The percentage of dissatisfied people is how many times that of satisfied people?

Response from Bard: The dissatisfied people are 78%, which is 1.2 times more than the satisfied people (20%). So the answer is 1.2.

Wrong output: 1.2
Ground truth: 3.9

(Chart title: Most Dissatisfied With Country Direction.)

Figure 44: A failure example from Multimodal Bard: incorrect answer with correct detection but wrong calculation.
Question: Is the number of cars in front of the tiny metal thing less than the number of large matte things in front of the cyan rubber road bike?
Choices: (A) Yes (B) No

Response from Bard: The answer is (B), No. There are no cars in front of the tiny metal thing (the motorcycle). There are two large matte things in front of the cyan rubber road bike (the van and the bicycle). So the number of large matte things in front of the cyan rubber road bike is greater than the number of cars in front of the tiny metal thing.

Wrong output: (B) No
Ground truth: (A) Yes

Figure 45: A failure example from Multimodal Bard: incorrect answer with correct explanation.

Question: What is the missing number in the picture?
Choices: (A) 6 (B) 8 (C) 10 (D) 11

Response from Bard: The missing number in the picture is (B) 8. The numbers in the triangles form a sequence where each number is 8 more than the previous number.
2, 10, (8), 20, 12
So, the missing number is 8.

Correct output: (B) 8

Figure 46: A successful example from Multimodal Bard: correct answer with hallucination (20 is not present in the image) and wrong reasoning.
Question: The magnitude of the acceleration vector a is 10 cm/s². Use the figure to estimate the normal component of a.

Response from Bard: From the figure, we can see that the acceleration vector a is making an angle of θ = 30° with the radius vector. The normal component of the acceleration is given by
a_n = a cos θ = 10 cm/s² · cos 30° = (10√3)/2 cm/s²
The normal component of the acceleration is approximately 7.07 cm/s².
Here is a breakdown of the steps involved in finding the normal component of the acceleration:
1. Find the angle between the acceleration vector and the radius vector.
2. Use the cosine function to find the cosine of the angle.
3. Multiply the magnitude of the acceleration vector by the cosine of the angle to find the normal component of the acceleration.

Wrong output: 7.07
Ground truth: 9

Figure 47: A failure example from Multimodal Bard: the question is incomplete (or requires implicit measuring ability), but the model hallucinates the angle (the actual angle is 25°). Here, a_n = 10 cm/s² · cos 25° ≈ 9 cm/s².
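A quick numeric check of the caption's computation, using the 25° angle stated there (this verification snippet is ours, not part of the paper):

import math

a = 10.0                   # magnitude of the acceleration vector, in cm/s^2
theta = math.radians(25)   # the actual angle per the figure, in radians
a_n = a * math.cos(theta)  # normal component: a_n = a * cos(theta)
print(round(a_n, 2))       # 9.06, i.e., approximately 9 cm/s^2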
Question: The derivative of y at x = 6 is ___ that at x = 8.
Choices: (A) larger than (B) equal to (C) smaller than

Response from Bard: The correct answer is (C). The derivative of y at x = 6 is smaller than that at x = 8.
The graph of the function y = log₂ x is a concave up curve. This means that the derivative of the function is increasing as x increases. So, the derivative of the function at x = 6, which is a smaller value of x, will be smaller than the derivative of the function at x = 8, which is a larger value of x.
Here is a table of the values of the derivative of y at different values of x:

x | dy/dx
6 | 0.5
7 | 0.7
8 | 1

As you can see, the value of dy/dx increases from 0.5 to 0.7 to 1 as x increases from 6 to 7 to 8. This confirms that the derivative of y at x = 6 is smaller than that at x = 8.

Wrong output: (C) smaller than
Ground truth: (A) larger than

Figure 48: A failure example from Multimodal Bard: incorrect reasoning and wrong calculations.

Question: The derivative of f(x) at x = 2 is ___ that at x = 5.
Choices: (A) larger than (B) equal to (C) smaller than

f(x) = |2x − 3| + 1
Response from Bard: The correct answer is (C). The derivative of f(x) at x = 2 is smaller than that at x = 5.
The derivative of a function at a point gives the slope of the line tangent to the function's graph at that point. The steeper the line, the larger the derivative.
In the graph, the line tangent to the graph of f(x) at x = 2 is less steep than the line tangent to the graph at x = 5. Therefore, the derivative of f(x) at x = 2 is smaller than that at x = 5.
In mathematical terms, the derivative of f(x) is given by the expression f′(x) = 2x −
3. Plugging in x = 2 and x = 5, we get f′(2) = 2(2) − 3 = 1 and f′(5) = 2(5) − 3 = 7. Therefore, f′(2) < f′(5).

Wrong output: (C) smaller than
Ground truth: (B) equal to

Figure 49: A failure example from Multimodal Bard: arguing based on incorrect facts in the context of the image.
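For reference, the ground truths of Figures 48 and 49 can be verified directly: for y = log₂ x, dy/dx = 1/(x ln 2) is decreasing, and for f(x) = |2x − 3| + 1 the slope is the constant 2 for all x > 1.5. A short numeric check (our illustration, not part of the paper):

import math

def d_log2(x, h=1e-6):
    # Numerical derivative of y = log2(x).
    return (math.log2(x + h) - math.log2(x - h)) / (2 * h)

def d_f(x, h=1e-6):
    # Numerical derivative of f(x) = |2x - 3| + 1.
    f = lambda t: abs(2 * t - 3) + 1
    return (f(x + h) - f(x - h)) / (2 * h)

print(d_log2(6), d_log2(8))  # ~0.240 > ~0.180, so the derivative is larger at x = 6: (A)
print(d_f(2), d_f(5))        # both ~2.0, so the derivatives are equal: (B)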
G.10 COMPARISONS OF DIFFERENT MODELS

Question: Which number is missing?
Ground truth: 40
Solution: 4 + 1 = 5, 5 + 2 = 7 (2 = 1 + 1)
10 + 4 = 14, 14 + 5 = 19 (5 = 4 + 1)
25 + 7 = 32, 32 + 8 = 40 (8 = 7 + 1)
IDEFICS: The answer is 1.

mPLUG-Owl: There are 13 missing numbers in the image. The missing number can be 3.

miniGPT4: The number missing in the image is 4.

LLaMA-Adapter-V2: The missing number is 13.

LLaVAR: The number 25 is missing.

InstructBLIP: N/A

LLaVA:
The number missing in the sequence is 14.

Multimodal Bard: The missing number in the sequence is 28. The sequence progresses by adding consecutive odd numbers to each number. The first number is 4, and we add 1 to get 5. We then add 3 to get 7, and so on. The missing number is therefore the next number after 32, which is 28.
Here is the complete sequence:
4 + 1 = 5
5 + 3 = 7
7 + 5 = 10
10 + 7 = 14
14 + 9 = 19
19 + 11 = 25
25 + 13 = 32
32 + 15 = 28

GPT-4V: 4, 5, 7, 10, 14, 19, 25, 32, ?, 52
Answer: The missing number is 40.
This sequence follows a pattern where the difference between each number increases by one. For example, the difference between 4 and 5 is 1, the difference between 5 and 7 is 2, the difference between 7 and 10 is 3, and so on. So, the difference between 32 and the missing number should be 8, making the missing number 40.
Correct output: 40

Figure 50: Among all LMM baselines, only GPT-4V accurately predicts the correct answer to this logical reasoning question, demonstrating correct visual perception and textual reasoning. Multimodal Bard generates well-organized reasoning steps; however, incorrect calculations are present.
Question: Which number is missing?
Ground truth: 40
Solution: 4 + 1 = 5, 5 + 2 = 7 (2 = 1 + 1)
10 + 4 = 14, 14 + 5 = 19 (5 = 4 + 1)
25 + 7 = 32, 32 + 8 = 40 (8 = 7 + 1)

Augmented Caption from Bard: The image shows a set of 8 numbers in circles on a white background. The numbers are 4, 5, 7, 10, 14, 19, 25, and 32. There is a question mark in the last circle. The numbers are arranged in increasing order, with each number being 1 more than the previous number. Therefore, the missing number is 19 + 6 = 25.
The image is a simple math problem, and it is likely that it is being used as part of an IQ test or other kind of intelligence test. The image is also consistent with the webpages that you mentioned, which are all related to IQ tests.

Augmented OCR Text from EasyOCR: ([300, 80], 5), ([83, 281], 10), ([285, 281], 14), ([491, 279], 19), ([81, 485], 25), ([285, 485], 32), ([491, 467], 2)

CoT GPT-4: The pattern in the numbers is that each number is one more than the previous number plus the position of the number in the sequence. For example, the second number (5) is 1 more than the first number (4) plus the position of the number in the sequence (2). The third number (7) is 1 more than the second number (5) plus the position of the number in the sequence (3).
Following this pattern, the missing number should be 1 more than the last number (32) plus the position of the number in the sequence (9).
32 + 9 + 1 = 42
So, the missing number is 42.
PoT GPT-4:

numbers = [4, 5, 7, 10, 14, 19, 25, 32]
diffs = [numbers[i+1] - numbers[i] for i in range(len(numbers)-1)]  # [1, 2, 3, 4, 5, 6, 7]
missing_number = numbers[-1] + diffs[-1]
print(missing_number)

Output: 39

Figure 51: The LLM baselines augmented with Bard image captions and OCR text also fail on this question. CoT GPT-4 generates explanations with incorrect reasoning. PoT GPT-4 also fails to understand the correct pattern behind the sequence of numbers.
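For contrast, a program that encodes the intended pattern from the ground-truth solution (each successive difference grows by one) returns 40; this corrected sketch is ours, not an output produced in the evaluation:

numbers = [4, 5, 7, 10, 14, 19, 25, 32]
diffs = [b - a for a, b in zip(numbers, numbers[1:])]  # [1, 2, 3, 4, 5, 6, 7]
# The differences increase by one at each step, so the next difference is 8.
missing_number = numbers[-1] + (diffs[-1] + 1)
print(missing_number)  # 40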
Question: What fraction of the shape is blue?
Choices: (A) 3/11 (B) 8/11 (C) 6/11 (D) 3/5
Ground truth: (A) 3/11

IDEFICS: The answer is (B) 8/11.

mPLUG-Owl: B: 8/11

miniGPT4: The fraction of the shape that is blue is (B) 8/11.

LLaMA-Adapter-V2: The correct answer is (C) 6/11.

LLaVAR: The fraction of the shape that is blue is 1/3.

InstructBLIP: N/A

LLaVA:
The correct answer is (B) 8/11.

Multimodal Bard: The correct answer is (C) 6/11. The shape is divided into 11 equal parts. The blue part takes up 6 of these parts. Therefore, the fraction of the shape that is blue is 6/11.
The other choices are incorrect. Option (A), 3/11, is the fraction of the shape that is white. Option (B), 8/11, is the fraction of the shape that is not blue. Option (D), 3/5, is not a fraction of the shape.
GPT-4V: The correct answer is (B) 8/11.

Figure 52: All LMM baselines fail to answer this statistical reasoning question, indicating their unsatisfactory abilities in understanding abstract scenes and performing statistical reasoning.

Question: What fraction of the shape is blue?
Choices: (A) 3/11 (B) 8/11 (C) 6/11 (D) 3/5
Ground truth: (A) 3/11

Augmented Caption from Bard: The image shows a blue circle with a white stripe in the middle. The circle is divided into four parts, each with a different color. The colors are blue, green, yellow, and red. There is no text present in the image. The image could be used to represent a variety of concepts, such as fractions, colors, or the four elements. It could also be used as a decorative element or a symbol.
CoT GPT-4: The image description does not provide enough information to determine what fraction of the shape is blue. Please provide a more detailed description of the image.

PoT GPT-4:

def fraction_of_blue(description, choices):
    parts = {
        'blue': 1,
        'green': 1,
        'yellow': 1,
        'red': 1
    }
    total_parts = sum(parts.values())
    blue_fraction = parts['blue'] / total_parts
    for choice in choices:
        fraction = choice.split('/')
        if int(fraction[0]) / int(fraction[1]) == blue_fraction:
            return choice
    return ""

choices = ['3/11', '8/11', '6/11', '3/5']
description = "The image shows a blue circle with a white stripe in the middle. The circle is divided into four parts, each with a different color. The colors are blue, green, yellow, and red. There is no text present in the image."

print(fraction_of_blue(description, choices))

Output: ''

Figure 53: Two LLM baselines also fail to answer this question. The incorrect facts in the image captions generated by Multimodal Bard further verify its weakness in understanding the abstract concepts in this figure.
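Had the visual input been perceived correctly (3 blue parts out of 11 equal parts, per the ground truth), the program-of-thought computation would be trivial; an illustrative sketch of ours with assumed counts:

from fractions import Fraction

blue_parts, total_parts = 3, 11  # counts a faithful perception module would return
print(Fraction(blue_parts, total_parts))  # 3/11, matching choice (A)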
Question: If you add the two visible numbers on the jerseys, what is the total sum?
Ground truth: 3
Solution: 2 + 1 = 3

IDEFICS: The answer is 10.

mPLUG-Owl:
The total sum of the two numbers on the players' jerseys is 24.

miniGPT4: The numbers on the jerseys are 7 and 15. If you add them together, the total sum is 22.

LLaMA-Adapter-V2: The total sum of the two visible numbers on the jerseys is 12.
LLaVAR: The two visible numbers on the jerseys add up to 21.

InstructBLIP: 21

LLaVA: The two visible numbers on the jerseys are 2 and 1. If we add them together, the total sum is 3.
Correct output: 3
Multimodal Bard: Sorry, I can't help with images of people yet.

GPT-4V: The total sum of the two visible numbers on the jerseys is 29 (21 + 8).

Augmented OCR Text from EasyOCR: ([110, 154], 21)
CoT GPT-4: The image only shows one visible number on the jerseys, which is 21. Thus, the total sum is 21.

Augmented OCR Text from EasyOCR: ([110, 154], 21)
PoT GPT-4:

visible_numbers = [21]
total_sum = sum(visible_numbers)
print(total_sum)

Execution output: 21

Figure 54: Among all LMM and LLM baselines, only LLaVA generates the correct answer in this instance. Models like LLaVAR, augmented CoT GPT-4, and augmented PoT GPT-4 identify the correct number(s) in the image but fail to resolve the ambiguity within the contexts of the input question and image.
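If the OCR stage had returned both jersey digits that the ground-truth solution relies on (2 and 1), the same program-of-thought template would succeed; the values below are illustrative, not actual EasyOCR output:

visible_numbers = [2, 1]  # both jersey digits, per the ground-truth solution
total_sum = sum(visible_numbers)
print(total_sum)  # 3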
Question: How much money does Luca need to buy a sour apple candy and a butterscotch candy? (Unit: $)

| Item | Price |
|---|---|
| sour apple candy | $0.06 |
| piece of gum | $0.07 |
| gummy worm | $0.09 |
| lemon drop | $0.05 |
| piece of licorice | $0.07 |
| butterscotch candy | $0.07 |

Ground truth: 0.13
Solution: Add the price of a sour apple candy and the price of a butterscotch candy: $0.06 + $0.07 = $0.13. Luca needs $0.13.
IDEFICS: The answer is 1.50.

mPLUG-Owl: The price of each candy is not provided, so it is not possible to calculate the total price for buying both candy items.

miniGPT4: The answer to the question is: Luca needs $0.75 to buy a sour apple candy and a butterscotch candy.
Explanation:
* The price of the sour apple candy is $0.50.
* The price of the butterscotch candy is $0.25.
* Therefore, Luca needs a total of $0.75 to buy both candies.
LLaMA-Adapter-V2: Luca needs $1.23 to buy a sour apple candy and a butterscotch candy.

LLaVAR: Luca needs to earn $1.23 to be able to buy a sour apple candy and a butterscotch candy.

InstructBLIP: 1.23

LLaVA: Luca needs $1.45 to buy a sour apple candy and a butterscotch candy.

Multimodal Bard: The price of a sour apple candy is $0.06 and the price of a butterscotch candy is $0.07. So, Luca needs $0.06 + $0.07 = $0.13. In two decimal places, the amount is 0.13.
Correct output: 0.13

GPT-4V: The total cost for a sour apple candy and a butterscotch candy is $0.13. Luca would need at least $0.13 to purchase both items.
Correct output: 0.13

Figure 55: Among all LMM baselines, only Multimodal Bard and GPT-4V accurately detect the text information in the image and perform the correct calculation to deduce the right answer for this arithmetic question within the visual context of a table.

Question: How much money does Luca need to buy a sour apple candy and a butterscotch candy? (Unit: $)
Ground truth: 0.13
Solution: Add the price of a sour apple candy and the price of a butterscotch candy: $0.06 + $0.07 = $0.13. Luca needs $0.13.
Augmented Caption from Bard: The image shows a table with the prices of different types of candy. The table is on a pink background. The candy is all different shapes and sizes, and the prices are all different. The candy is all different colors, and the prices are all different. The following text is present in the image:
* Sour apple candy $0.06
* Piece of gum $0.07
* Gummy worm $0.09
* Lemon drop $0.05
* Piece of licorice $0.07
* Butterscotch candy $0.07

Augmented OCR Text from EasyOCR: ([9, 7], sour apple candy), ([195, 5], $0.06), ([7, 39], piece of gum), ([195, 37], $0.07), ([9, 73], gummy worm), ([195, 69], $0.09), ([7, 105], lemon drop), ([195, 101], $0.05), ([6, 132], piece of licorice), ([195, 133], $0.07), ([9, 167], butterscotch candy), ([195, 165], $0.07)
CoT GPT-4: Find the total cost of a sour apple candy and a butterscotch candy.
$0.06 + $0.07 = $0.13
Luca needs $0.13.
Correct output: 0.13

PoT GPT-4:

sour_apple_candy_price = 0.06
butterscotch_candy_price = 0.07
total_price = sour_apple_candy_price + butterscotch_candy_price
print(total_price)

Execution output: 0.13
Correct output: 0.13

Figure 56: Using the correct image captions and OCR text as augmented inputs, both CoT GPT-4 and PoT GPT-4 predict the correct answer.

H A COMPARATIVE STUDY OF GPT-4V, BARD, AND OTHER MODELS

GPT-4 with vision (GPT-4V) is the multimodal version of GPT-4, instructed to understand multiple modalities such as text and images. Due to its remarkable improvements over other AI models (§3.3 and §3.4), we have conducted a comprehensive evaluation to understand its capabilities, strengths, and areas for improvement. Our findings not only validate GPT-4V's various problem-solving skills but also shed light on developing general-purpose multimodal AI agents.

H.1 GPT-4V PLAYGROUND FOR MANUAL EVALUATION

Given that GPT-4V does not offer API access, we performed manual evaluations using the playground platform (https://chat.openai.com/). For a fair comparison, we used the same input queries as those for all the other LMMs and recorded the responses in a single round of chat without additional feedback (Figure 57).
(Contents of the playground example shown in the screenshot:)

peanut butter cookie dough $3 per lb
double chocolate cookie dough $3 per lb
chocolate chip cookie dough $5 per lb
oatmeal raisin cookie dough $7 per lb
gingersnap cookie dough $3 per lb
snickerdoodle cookie dough $8 per lb

Hint: Please answer the question requiring a floating-point number with one decimal place and provide the final value, e.g., 1.2, 1.3, 1.4, at the end.
Question: Linda wants to buy 0.9 pounds of double chocolate cookie dough. How much will she spend? (Unit: $)

GPT-4V: To find out how much Linda will spend on 0.9 pounds of double chocolate cookie dough:
Price per pound = $3
Amount she wants to buy = 0.9 pounds
Total cost = 0.9 pounds × $3/pound = $2.7
Linda will spend $2.7.
Figure 57: The GPT-4V playground for our manual evaluation.

H.2 LEADERBOARD SCORES

The leaderboard in Figure 58 highlights GPT-4V's substantial advancements over the current LLM and LMM baselines. Notably, there is a 15.1% improvement over the second-best performing Multimodal Bard model. However, a significant gap of 10.4% still exists between GPT-4V and human performance, indicating plenty of room for further improvement by developing new LMMs and tool-augmented LLMs.

Accuracy on the testmini set of MATHVISTA (1,000 examples):
Human: 60.3
GPT-4V (Playground): 49.9
Multimodal Bard: 34.8
PoT GPT-4 (Caption+OCR): 33.9
CoT GPT-4 (Caption+OCR): 33.2
CoT ChatGPT (Caption+OCR): 33.2
CoT Claude-2 (Caption+OCR): 33.2
PoT ChatGPT (Caption+OCR): 26.8
LLaVA (LLaMA-2-13B): 26.1
InstructBLIP (Vicuna-7B): 25.3
LLaVAR: 25.2
LLaMA-Adapter-V2 (7B): 23.9
miniGPT4 (LLaMA-2-7B): 23.1
mPLUG-Owl (LLaMA-7B): 22.2
IDEFICS (9B-Instruct): 19.8
Random chance: 17.9
Figure 58: Accuracy scores of primary baselines on the testmini subset (1,000 examples) of MATHVISTA. Both CoT GPT-4 and PoT GPT-4 are augmented with Bard captions and OCR text.

H.3 ABILITIES IN MATHEMATICAL REASONING

This section compares the mathematical reasoning ability of GPT-4V with that of other LMMs on MATHVISTA, including LLaMA-Adapter-V2-7B (LLaMA-Adapter-V2 for simplification), LLaVA-LLaMA-2-13B (LLaVA for simplification), and Multimodal Bard.

H.3.1 ALGEBRAIC REASONING

Algebraic reasoning problems on MATHVISTA require understanding function plots from figures and inferring their properties. As shown in Figure 1, GPT-4V demonstrates outstanding capabilities in algebraic reasoning, surpassing all competing models and even humans. For instance, GPT-4V accurately identifies the function plot by its equation and subsequently infers its correct properties (Figure 59). However, both GPT-4V and the other LLMs face challenges in comprehending low-resolution figures (Figure 60) and those that depict multiple functions (Figure 61).
ID: 926
Question: What is the size of the shaded area under the curve? Round the answer to 2 decimal places.
Ground truth: 7.07

f(x) = √(9 − (x − 3)²)

LLaMA-Adapter-V2: The size of the shaded area under the curve is 0.00.

LLaVA: The size of the shaded area under the curve is approximately 1.34.

Multimodal Bard: The given curve is a circle with center at (3, 0) and radius 3. The shaded area is a quarter of the circle's area.
Area of the circle = πr² = 3.14 × 3² = 28.26.
Area of the shaded region = 28.26 / 4 = 7.065.
Rounding to two decimal places, the area is 7.07.
Correct output: 7.07
GPT-4V: To find the size of the shaded area under the curve f(x) = √(9 − (x − 3)²), we need to integrate the function over the given interval.
The given function represents the upper half of a circle with a radius of 3 and centered at the point (3, 0). The shaded region corresponds to the area of a quarter circle with a radius of 3. The formula for the area of a circle is πr². For a quarter of the circle, the area is (1/4)πr².
Given: r = 3
Area = (1/4)π(3²) = (1/4)π(9) = (9/4)π
Area = 7.07 (rounded to 2 decimal places)
So, the size of the shaded area under the curve is approximately 7.07 square units.
Correct output: 7.07

Figure 59: Both Multimodal Bard and GPT-4V accurately determine the function depicted in the image and compute the answer to the specified decimal places.
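The quarter-circle value can also be checked by direct numerical integration of f(x) = √(9 − (x − 3)²) over [0, 3] (our verification sketch, not part of the paper):

from math import sqrt

n = 100000
a, b = 0.0, 3.0
h = (b - a) / n
# Midpoint-rule approximation of the integral of sqrt(9 - (x - 3)^2) on [0, 3].
area = sum(sqrt(9 - (a + (i + 0.5) * h - 3) ** 2) * h for i in range(n))
print(round(area, 2))  # 7.07, i.e., (9/4) * pi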
ID: 346
Question: What is the y-intercept of this function?
Ground truth: 1

LLaMA-Adapter-V2: The y-intercept of this function is 0.

LLaVA: The y-intercept of the function is 2.

Multimodal Bard: The y-intercept of the function is the point where the graph crosses the y-axis. Since the y-axis is also the line x = 0, the x-coordinate of the y-intercept will always be 0. In the image, the graph crosses the y-axis at the point (0, 3). Therefore, the y-intercept of the function is 3.
Here are the steps on how to find the y-intercept of a function:
1. Set the x-coordinate to 0.
2. Solve for the y-coordinate.
In this case, we set x = 0 and get y = 3. Therefore, the y-intercept is 3.
GPT-4V: The y-intercept of a function is the point where the graph of the function crosses the y-axis. From the provided graph, the curve intersects the y-axis at y = 4. Thus, the y-intercept is 4.

Figure 60: The plot represents a function that intersects the y-axis at y = 1. However, large multimodal models, including GPT-4V, struggle to grasp the algebraic relationship depicted in this plot.
ID: 963
Question: For what x does f reach its local maximum?
Ground truth: 3

LLaMA-Adapter-V2: The local maximum of the function f(x) occurs at x = 2.

LLaVA: The function f reaches its local maximum at x = 2.

Multimodal Bard: The local maximum of the function is the point where the graph of the function changes from increasing to decreasing. In the given graph, this point is at x = 3. So the answer is 3.
Here is a more detailed explanation:
• A local maximum point of a function is a point on the graph of the function where the function value is higher than all the function values in a small neighborhood of the point.
• In the given graph, the function value is increasing for all values of x less than 3. However, for values of x greater than 3, the function value starts to decrease. This means that the function value at x = 3 is higher than all the function values in the interval (3, 4).
• Therefore, the local maximum point of the function is at x = 3.

Correct output: 3

GPT-4V: The function f(x) reaches its local maximum where its first derivative, f′(x), changes from positive to negative. Looking at the graph, f′(x) crosses the x-axis (becoming 0) at x = 1. So, the answer is 1.

Figure 61: The graph depicts two functions, f(x) and f′(x). While Multimodal Bard produces the correct answer, it provides inaccurate information about the properties of f(x). In contrast, GPT-4V confuses f(x) with f′(x).
H.3.2 ARITHMETIC REASONING

Arithmetic reasoning problems in MATHVISTA require accurate fundamental operations in conjunction with understanding diverse visual contexts. As illustrated in Figure 1, GPT-4V exhibits a significant improvement in arithmetic reasoning compared to existing models. For instance, some models struggle with basic arithmetic tasks, such as determining the difference between two values in a bar chart (Figure 62) or computing a probability based on simple statistical data (Figure 63).

ID: 74
Question: What is the difference of largest and smallest bar?
Ground truth: 47.6
(Chart: "Share of the population using at least a basic drinking water source, 2000" — an improved source within a 30-minute round trip; bars for countries from Iceland down to Cambodia on a 0%–100% scale. Source: WHO/UNICEF Joint Monitoring Programme for Water Supply.)

LLaMA-Adapter-V2:
The difference of the largest and smallest bar is 0.1.

LLaVA: The difference between the largest and smallest bar in the graph is 1.4.

Multimodal Bard: The largest bar in the image is Iceland with 100%, and the smallest bar is Cambodia with 52.4%. So the difference between the largest and smallest bar is 100 − 52.4 = 47.6%.
However, the question asks for the answer in one decimal place. Rounding 47.6% to one decimal place gives 47.7%. Therefore, the difference between the largest and smallest bar is 47.7%.
GPT-4V: From the graph:
Largest bar (Iceland) = 100%
Smallest bar (Cambodia) = 52.4%
Difference = 100% − 52.4% = 47.6%
Correct output: 47.6

Figure 62: The problem involves performing a straightforward arithmetic operation after identifying two numbers from the bar chart. However, LLaMA-Adapter-V2 and LLaVA are unable to interpret the figure correctly, while Multimodal Bard fails to round 47.6% to one decimal place.
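Bard's final step fails on a no-op: 47.6 already has one decimal place, so rounding leaves it unchanged. A trivial check (ours):

difference = 100 - 52.4
print(round(difference, 1))  # 47.6, not 47.7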
ID: 718
Question: Jeffrey is the proud owner of an eclectic bow tie collection. He keeps track of how many bow ties he has, and organizes them by pattern and material. What is the probability that a randomly selected bow tie is designed with swirls and is made of velvet? Simplify any fractions.
Ground truth: 0.21

LLaMA-Adapter-V2: The probability that a randomly selected bow tie is designed with swirls and is made of velvet is 0.00.

LLaVA: To determine the probability of a randomly selected bow tie being designed with swirls and made of velvet, we need to divide the total number of bow ties with the specific characteristics (swirls and velvet) by the total number of bow ties in Jeffrey's collection.
In the image, there are three bow ties with swirls and two bow ties made of velvet. The total number of bow ties in the collection is 10. So, the probability of a randomly selected bow tie being designed with swirls and made of velvet is:
(3 bow ties with swirls and velvet) / (10 total bow ties) = 3/10 = 0.3 or 30%.
Multimodal Bard: The probability that a randomly selected bow tie is designed with swirls and is made of velvet is:
(Number of swirls and velvet bow ties) / (Total number of bow ties)
We can find the number of swirls and velvet bow ties by adding the values in the 'Swirls' and 'Velvet' columns. This gives us 3 + 5 = 8 bow ties.
We can find the total number of bow ties by adding the values in all three columns. This gives us 5 + 3 + 1 + 5 = 14 bow ties.
Therefore, the probability that a randomly selected bow tie is designed with swirls and is made of velvet is:
(8)/(14) = 0.57 = 57.14%
To two decimal places, the probability is 0.5714, or 57.14%.
GPT-4V: To find the probability that a randomly selected bow tie is designed with swirls and is made of velvet, we first need to find the total number of bow ties and then determine how many of these fit the criteria.
Total number of bow ties:
Zigzags (Velvet) = 5
Zigzags (Cotton) = 1
Swirls (Velvet) = 3
Swirls (Cotton) = 5
Total = 5 + 1 + 3 + 5 = 14
Number of bow ties that are designed with swirls and are made of velvet = 3
Probability = Number of desired outcomes / Total number of possible outcomes
Probability = 3/14
Now, converting the fraction to a decimal (rounded to two decimal places):
3 ÷ 14 = 0.21
Correct output: 0.21

Figure 63: The problem requires arithmetic reasoning combined with an understanding of a basic statistical table and fundamental knowledge of probability. However, only GPT-4V predicts the correct answer.
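The computation reduces to a single ratio over the table's cell counts; a compact check using the counts from GPT-4V's reading of the table (our sketch):

counts = {("zigzags", "velvet"): 5, ("zigzags", "cotton"): 1,
          ("swirls", "velvet"): 3, ("swirls", "cotton"): 5}
p = counts[("swirls", "velvet")] / sum(counts.values())
print(round(p, 2))  # 3/14 = 0.21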
H.3.3 GEOMETRY REASONING

In geometry reasoning, the performance of GPT-4V is comparable to that of humans on MATHVISTA, as demonstrated in Figure 1. Figure 64 and Figure 65, respectively, present two geometry reasoning problems: one at an elementary level and the other at a college level. For both problems, GPT-4V produces the correct answers accompanied by detailed explanations.

ID: 606
Question: What is the perimeter of the rectangle?
Ground truth: 10
(Figure: a rectangle with side lengths of 3 cm and 2 cm.)

LLaMA-Adapter-V2: The perimeter of the rectangle is 10 units.
Correct output: 10
LLaVA: The perimeter of the rectangle is 10 cm.
Correct output: 10

Multimodal Bard: I see the image. The rectangle has a width of 3 cm and a length of 2 cm. So, the perimeter of the rectangle is 2*3 + 2*2 = 8 cm.

GPT-4V: The perimeter of a rectangle is calculated by adding up all its sides. Using the provided dimensions:
Perimeter = 2 × (length + width)
Perimeter = 2 × (3 cm + 2 cm)
Perimeter = 2 × 5 cm
Perimeter = 10 cm
The perimeter of the rectangle is 10 cm.
Correct output: 10

Figure 64: This problem poses an elementary geometry question, calculating the perimeter of a rectangle. While LLaMA-Adapter-V2 and LLaVA produce the correct answer, they do not offer explanations to confirm the correctness of their reasoning steps. Conversely, Multimodal Bard commits a calculation error when summing up the lengths of the four sides.
ID: 640
Question: Find the arc WN if △IWN is equilateral and WN = 5.
Choices: (A) (3/5)π (B) (5/3)π (C) 5π (D) 10π
Ground truth: (B) (5/3)π

LLaMA-Adapter-V2: The correct answer is (D) 10π.

LLaVA: Given that △IWN is equilateral, we can use the cosine rule to find the length of the side opposite the angle ∠WN.
cos∠WN = a/b,
where a and b are the lengths of the adjacent sides of the equilateral triangle.
a = 5
b = 5
Substituting the values, we get: cos∠WN = 5/5 = 1.
So, the length of the side opposite the angle ∠WN is equal to the length of the adjacent sides, which is 5. Therefore, the correct answer is: (D) 10π.
Multimodal Bard: The correct answer is (B). Since △IWN is equilateral, ∠WIN = ∠WNI = 60°. Therefore, arc WN = r · 2 · 60° = (5/3)π.
Correct output: (B) (5/3)π

GPT-4V: For an equilateral triangle, each interior angle is 60°. The arc corresponding to a central angle in a circle is given by:
Arc length = central angle in radians × radius
Given that WI = IN = NW = 5 (since △IWN is equilateral), the radius of the circle is 5. Using the formula above:
arc WN = 60° × (π/180°) × 5 = (π/3) × 5 = (5/3)π
The correct option is: (B) (5/3)π.
Correct output: (B) (5/3)π

Figure 65: This is a high school-level geometry problem requiring an understanding of geometric symbols and relations, the application of the correct theorem, and accurate calculations. While Multimodal Bard generates the correct answer, it applies the wrong theorem. GPT-4V outperforms the other baselines, demonstrating its remarkable ability to reason geometrically.
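The arc-length step is a one-line computation once the central angle is known: with a 60° central angle and radius 5 (taken from GPT-4V's reading of the figure), arc WN = θ · r. A numeric check (ours):

import math

r = 5                  # radius, since WI = IN = NW = 5
theta = math.pi / 3    # 60 degrees in radians
arc_WN = theta * r
print(arc_WN)          # 5.235..., i.e., (5/3) * pi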
H.3.4 LOGICAL REASONING

Logical reasoning problems represent a different type of question in MATHVISTA. Solving these problems requires abstract thinking to deduce the underlying patterns of numbers or shapes from figures. Current foundation models struggle to effectively tackle logical reasoning problems: GPT-4V achieves only 21.6% accuracy in logical reasoning, which is a modest improvement of 8.1% over random chance, as shown in Table 2. The challenges that logical reasoning problems present to current LMMs are further highlighted in Figures 66, 67, and 68.

ID: 98
Question: Find the value of the square in the figure.
Ground truth: 2
(Figure: a "Brain Teaser for IQ Test" puzzle in which circles, triangles, and squares are combined in two labeled equations, A and B, and the value of the square is to be determined.)
LLaMA-Adapter-V2: The value of the square in the figure is 16.

LLaVA: The square in the figure has a value of 1.

Multimodal Bard: The value of the square in the figure is 7. Here's how to solve it:
1. 5 + 8 = 13
2. 13 + 7 = 20
3. 20/2 = 10
The answer is the final number, which is 10.