id | title | content | prechunk_id | postchunk_id | arxiv_id | references |
---|---|---|---|---|---|---|
2310.09497#32 | A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models | to further enhance efficiency by incorporating more compared documents into the prompt, thereby reducing the number of LLM inference calls. However, we acknowledge that there is an input length limitation to LLMs (in our experiments this is 512 prompt tokens) and setting c to a large value may require more aggressive document truncation, likely impacting effectiveness. To investigate the trade-off between effectiveness and efficiency inherent in our Setwise approach, we set c = 3, 5, 7, 9 while truncating the documents in the prompt to 128, 85, 60, 45 tokens², respectively. The NDCG@10, along with query latency for all models while varying c, is visualized in Figure 3a for the TREC DL datasets. As expected, larger c reduces query latency but often degrades effectiveness. Notably, the heap sort algorithm consistently proves more efficient than bubble sort. | 2310.09497#31 | 2310.09497#33 | 2310.09497 | [
"2302.13971"
] |
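The trade-off described in this chunk (a larger c means fewer LLM inference calls per query but harsher per-document truncation) can be made concrete with a small sketch. The snippet below is a hypothetical illustration, not the authors' llm-rankers code: `pick` stands in for one Setwise inference call that returns the position of the most relevant of up to c truncated passages, and `setwise_max` shows the selection step that a heap-sort or bubble-sort style top-k ranker would repeat.

```python
# Hypothetical sketch of one Setwise selection step; `pick` is a placeholder
# for an LLM call that reads the query plus up to c truncated passages and
# returns the index of the most relevant one.
from typing import Callable, List

def truncate(doc: str, max_tokens: int) -> str:
    # Crude whitespace "tokens"; a real system would use the model's tokenizer.
    return " ".join(doc.split()[:max_tokens])

def setwise_max(query: str, docs: List[str], candidates: List[int],
                pick: Callable[[str, List[str]], int],
                c: int = 3, budget: int = 512) -> int:
    """Return the index of the most relevant doc among `candidates`,
    comparing up to c documents per LLM call."""
    per_doc = max(budget // c, 16)        # harsher truncation as c grows
    best, rest = candidates[0], candidates[1:]
    while rest:
        batch = [best] + rest[:c - 1]     # current best vs. up to c-1 challengers
        rest = rest[c - 1:]
        passages = [truncate(docs[i], per_doc) for i in batch]
        best = batch[pick(query, passages)]   # one Setwise inference call
    return best
```

With c = 3 each call eliminates two candidates, so selecting the most relevant of 100 documents takes about 50 calls instead of the 99 pairwise comparisons a Pairwise method would spend on the same step.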
2310.09497#33 | A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models | For instance, with Flan-t5-xl and c = 9, heap sort achieves strong NDCG@10 with a query latency of ≈3 seconds. When compared to the other methods outlined in Table 2, this represents the lowest query latency, except for the Pointwise approaches with Flan-t5-large, albeit with superior ranking effectiveness. It's worth noting that the ranking effectiveness decline with larger c values could also be attributed to the increased truncation of passages. LLMs with extended input length capacity might potentially yield improved ranking effectiveness for larger c. | 2310.09497#32 | 2310.09497#34 | 2310.09497 | [
"2302.13971"
] |
2310.09497#34 | A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models | This area warrants further exploration in future studies. Similarly, the Listwise approaches balance effectiveness and efficiency through the adjustment of the repetition count r for the sliding window. In our prior experiments, we consistently set r = 5 to ensure that at least 10 of the most relevant documents can be brought to the top. In Figure 3b, we investigate the influence of varying r on Listwise approaches. Latency exhibits a linear relationship with r, which aligns with expectations. A larger value of r can enhance the effectiveness of listwise.generate, and beyond r > 5 the improvement levels off. Conversely, the listwise.likelihood approach, which leverages our Setwise prompting, showcases notably higher effectiveness and efficiency. Even with a small value of r, the performance of listwise.likelihood exceeds that of listwise.generate, with the highest performance achieved around r = 5. 5.5 Sensitivity to the Initial Ranking The ranking effectiveness of the original Listwise and Pairwise methods is influenced by the initial ranking order [18, 20]. To investigate this aspect in relation to our approach, we consider different orderings of the initial BM25 list; specifically, 1) initial BM25 ranking; 2) inverted BM25 ranking; and 3) randomly shuffled BM25 ranking. Each of these initial rankings was used to test different reranking methods using Flan-t5-large. The results are presented in Figure 4. Different initial ranking orders negatively impact listwise.generate, pairwise.heapsort and pairwise.bubblesort; pairwise.heapsort is the most robust method. These findings align with the literature [18, 20]. In contrast, Setwise prompting is far more robust to variations in the initial ranking order. Both listwise.likelihood and setwise.bubblesort exhibit large improvements over listwise.generate and pairwise.bubblesort in the case of the inverted BM25 ranking and randomly shuffled BM25 ranking. Moreover, they demonstrate a similar level of robustness to pairwise.heapsort. | 2310.09497#33 | 2310.09497#35 | 2310.09497 | [
"2302.13971"
] |
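For context, the sliding-window mechanism whose repetition count r is varied above can be sketched as follows. This is a generic illustration rather than the paper's implementation: `rerank_window` is a placeholder for one listwise LLM call that returns a small window of documents reordered by relevance, each of the r passes slides that window from the bottom of the ranking to the top, and the window and step sizes shown are assumptions.

```python
# Generic sliding-window listwise reranking sketch; `rerank_window` stands in
# for one LLM call that reorders the documents inside the current window.
from typing import Callable, List

def sliding_window_rerank(docs: List[str],
                          rerank_window: Callable[[List[str]], List[str]],
                          r: int = 5, window: int = 4, step: int = 2) -> List[str]:
    ranked = list(docs)
    for _ in range(r):                                   # repeat the pass r times
        start = max(len(ranked) - window, 0)
        while True:                                      # slide from bottom to top
            ranked[start:start + window] = rerank_window(ranked[start:start + window])
            if start == 0:
                break
            start = max(start - step, 0)
    return ranked
```

Each pass can carry at most window − step documents from anywhere in the list into the top positions, which is why r has to grow with the number of relevant documents one wants surfaced; latency grows linearly in r because every pass issues the same number of window calls.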
2310.09497#35 | A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models | This leads us to the conclusion that our Setwise prompting approach substantially enhances the zero-shot re-ranking with LLMs in relation to the initial ranking. ²This reduction in document length is necessary to ensure prompt size is not exceeded. Figure 4: Sensitivity to the initial ranking, (a) TREC DL 2019, (b) TREC DL 2020. We use Flan-t5-large and c = 4 for the Setwise approach. 6 CONCLUSION We undertook a comprehensive study of existing LLM-based zero-shot document ranking methods, employing strict and consistent experimental conditions. Our primary emphasis was on evaluating both their ranking effectiveness and their efficiency in terms of computational cost and runtime latency – factors that are often disregarded in previous studies. Our findings unveil some unforeseen insights and effectiveness-efficiency trade-offs between different methods. This information equips practitioners with valuable guidance when selecting the most appropriate method for their specific applications. To further boost the efficiency of LLM-based zero-shot document ranking, we introduced an innovative Setwise prompting strategy. Setwise has the potential to enhance both effectiveness and efficiency for Listwise approaches provided the model logits are accessible. Setwise also notably enhances the efficiency of sorting-based Pairwise approaches. Furthermore, Setwise prompting offers a straightforward way to balance effectiveness and efficiency by incorporating more documents for comparison in the prompt. Additionally, approaches equipped with Setwise prompting demonstrated strong robustness to variation in the initial retrieval set used for reranking. Future work should focus on evaluating the Setwise prompting approach on a wider array of LLMs, including LLaMA models [22, 23] as well as the OpenAI LLM APIs. Additionally, recent advanced self-supervised prompt learning techniques [6, 27] could be used to refine the Setwise approach. We make our code and results publicly available at https://github.com/ielab/llm-rankers. [5] Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos, and Ellen M Voorhees. 2020. Overview of the TREC 2019 deep learning track. arXiv preprint arXiv:2003.07820 (2020). | 2310.09497#34 | 2310.09497#36 | 2310.09497 | [
"2302.13971"
] |
2310.09497#36 | A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models | [6] Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, and Tim Rocktäschel. 2023. Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution. arXiv preprint arXiv:2309.16797 (2023). [7] Lukas Gienapp, Maik Fröbe, Matthias Hagen, and Martin Potthast. 2022. Sparse Pairwise Re-Ranking with Pre-Trained Transformers. In Proceedings of the 2022 ACM SIGIR International Conference on Theory of Information Retrieval (Madrid, Spain) (ICTIR ' | 2310.09497#35 | 2310.09497#37 | 2310.09497 | [
"2302.13971"
] |
2310.09497#37 | A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models | 22). Association for Computing Machinery, New York, NY, USA, 72–80. https://doi.org/10.1145/3539813.3545140 [8] Donald Ervin Knuth. 1997. The art of computer programming. Vol. 3. Pearson Education. [9] Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. | 2310.09497#36 | 2310.09497#38 | 2310.09497 | [
"2302.13971"
] |
2310.09497#38 | A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models | Large language models are zero-shot reasoners. Advances in neural information processing systems 35 (2022), 22199–22213. [10] Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. 2022. Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022). [11] Jimmy Lin, Xueguang Ma, Sheng-Chieh Lin, Jheng-Hong Yang, Ronak Pradeep, and Rodrigo Nogueira. 2021. | 2310.09497#37 | 2310.09497#39 | 2310.09497 | [
"2302.13971"
] |
2310.09497#39 | A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models | Pyserini: A Python Toolkit for Reproducible Information Retrieval Research with Sparse and Dense Representations. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval (Virtual Event, Canada) (SIGIR '21). Association for Computing Machinery, New York, NY, USA, 2356–2362. https://doi.org/10.1145/3404835.3463238 [12] Xueguang Ma, Xinyu Zhang, Ronak Pradeep, and Jimmy Lin. 2023. Zero-Shot Listwise Document Reranking with a Large Language Model. arXiv preprint arXiv:2305.02156 (2023). [13] Aliaksei Mikhailiuk, Clifford Wilmot, Maria Perez-Ortiz, Dingcheng Yue, and Rafal Mantiuk. 2021. Active Sampling for Pairwise Comparisons via Approximate Message Passing and Information Gain Maximization. In 2020 IEEE International Conference on Pattern Recognition (ICPR). [14] Rodrigo Nogueira, Zhiying Jiang, Ronak Pradeep, and Jimmy Lin. 2020. Document Ranking with a Pretrained Sequence-to-Sequence Model. In Findings of the Association for Computational Linguistics: EMNLP 2020. 708–718. [15] Jay M Ponte and W Bruce Croft. 2017. | 2310.09497#38 | 2310.09497#40 | 2310.09497 | [
"2302.13971"
] |
2310.09497#40 | A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models | A language modeling approach to information retrieval. In ACM SIGIR Forum, Vol. 51. ACM New York, NY, USA, 202–208. [16] Ronak Pradeep, Rodrigo Nogueira, and Jimmy Lin. 2021. The expando-mono-duo design pattern for text ranking with pretrained sequence-to-sequence models. arXiv preprint arXiv:2101.05667 (2021). [17] Ronak Pradeep, Sahel Sharifymoghaddam, and Jimmy Lin. 2023. | 2310.09497#39 | 2310.09497#41 | 2310.09497 | [
"2302.13971"
] |
2310.09497#41 | A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models | RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models. arXiv preprint arXiv:2309.15088 (2023). [18] Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, et al. 2023. Large language models are effective text rankers with pairwise ranking prompting. arXiv preprint arXiv:2306.17563 (2023). [19] Devendra Sachan, Mike Lewis, Mandar Joshi, Armen Aghajanyan, Wen-tau Yih, Joelle Pineau, and Luke Zettlemoyer. 2022. Improving Passage Retrieval with Zero-Shot Question Generation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Abu Dhabi, United Arab Emirates, 3781–3797. https://doi.org/10.18653/v1/2022.emnlp-main.249 [20] Weiwei Sun, Lingyong Yan, Xinyu Ma, Pengjie Ren, Dawei Yin, and Zhaochun Ren. 2023. Is ChatGPT Good at Search? Investigating Large Language Models as Re-Ranking Agent. arXiv preprint arXiv:2304.09542 (2023). [21] Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, and Iryna Gurevych. 2021. BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2). REFERENCES [1] Monica Agrawal, Stefan Hegselmann, Hunter Lang, Yoon Kim, and David Sontag. 2022. Large language models are zero-shot clinical information extractors. arXiv preprint arXiv:2205.12689 (2022). | 2310.09497#40 | 2310.09497#42 | 2310.09497 | [
"2302.13971"
] |
2310.09497#42 | A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models | [2] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems 33 (2020), 1877–1901. [3] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. | 2310.09497#41 | 2310.09497#43 | 2310.09497 | [
"2302.13971"
] |
2310.09497#43 | A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models | Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311 (2022). [22] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023). [23] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. | 2310.09497#42 | 2310.09497#44 | 2310.09497 | [
"2302.13971"
] |
2310.09497#44 | A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models | Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288 (2023). [24] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is All You Need. In Proceedings of the 31st International Conference on Neural Information Processing Systems (Long Beach, California, USA) (NIPS'17). Curran Associates Inc., Red Hook, NY, USA, 6000–6010. [4] Nick Craswell, Bhaskar Mitra, Emine Yilmaz, and Daniel Campos. 2021. Overview of the TREC 2020 deep learning track. arXiv preprint arXiv:2102.07662 (2021). [25] Shuai Wang, Harrisen Scells, Bevan Koopman, and Guido Zuccon. 2023. Can ChatGPT Write a Good Boolean Query for Systematic Review Literature Search?. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (Taipei, Taiwan) (SIGIR '23). Association for Computing Machinery, New York, NY, USA, 1426–1436. https://doi.org/10.1145/3539618.3591703 [26] Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. 2021. | 2310.09497#43 | 2310.09497#45 | 2310.09497 | [
"2302.13971"
] |
2310.09497#45 | A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models | Finetuned Language Models are Zero-Shot Learners. In International Conference on Learning Representations. [27] Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V Le, Denny Zhou, and Xinyun Chen. 2023. Large language models as optimizers. arXiv preprint arXiv:2309.03409 (2023). [28] Shengyao Zhuang, Hang Li, and Guido Zuccon. 2021. Deep query likelihood model for information retrieval. In Advances in Information Retrieval: 43rd European Conference on IR Research, ECIR 2021, Virtual Event, March 28–April 1, 2021, Proceedings, Part II 43. Springer, 463–470. [29] Shengyao Zhuang and Guido Zuccon. 2021. TILDE: Term independent likelihood moDEl for passage re-ranking. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. 1483– | 2310.09497#44 | 2310.09497#46 | 2310.09497 | [
"2302.13971"
] |
2310.09497#46 | A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models | 1492. | 2310.09497#45 | 2310.09497 | [
"2302.13971"
] |
|
2310.09611#0 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | arXiv:2310.09611v1 [cs.HC] 14 Oct 2023 # VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction Joshua Gorniak [email protected] Boston College Chestnut Hill, Massachusetts, USA # Yoon Kim [email protected] MIT Cambridge, Massachusetts, USA # Stephen Gwon [email protected] Cambridge Rindge & Latin School Cambridge, Massachusetts, USA # Donglai Wei [email protected] Boston College Chestnut Hill, Massachusetts, USA # Nam Wook Kim nam.wook.kim@bcu Boston College Chestnut Hill, Massachusetts, USA | 2310.09611#1 | 2310.09611 | [
"2303.04048"
] |
|
2310.09611#1 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | Figure 1: VizAbility pipeline: users navigate the chart using a keyboard and ask questions that are answered by classifying their query type (e.g., visual query) and referring to underlying data, chart visual structure, user location, and internet browsing. ABSTRACT Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conversational interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We explore opportunities for further refinement, including comprehensive benchmark testing and integration with current visualization tools. | 2310.09611#0 | 2310.09611#2 | 2310.09611 | [
"2303.04048"
] |
2310.09611#2 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | Conference acronym 'XX, June 03–05, 2018, Woodstock, NY © 2018 Association for Computing Machinery. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in Woodstock '18: ACM Symposium on Neural Gaze Detection, June 03–05, 2018, Woodstock, NY, https://doi.org/XXXXXXX.XXXXXXX. CCS CONCEPTS • Human-centered computing → Interactive systems and tools; Visualization systems and tools. # KEYWORDS data visualization, accessibility, blind and low vision people ACM Reference Format: Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, and Nam Wook Kim. 2018. | 2310.09611#1 | 2310.09611#3 | 2310.09611 | [
"2303.04048"
] |
2310.09611#3 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction. In Woodstock '18: ACM Symposium on Neural Gaze Detection, June 03–05, 2018, Woodstock, NY. ACM, New York, NY, USA, 13 pages. https://doi.org/XXXXXXX.XXXXXXX 1 INTRODUCTION Data visualization has become an indispensable tool in our broader society, aiding in the comprehension of vital information and facilitating informed decision-making [36]. Its strength stems from leveraging the vast information bandwidth of our visual perception, which surpasses other sensory modalities [18]. However, an over-reliance on visual representation can inadvertently marginalize those with blindness or low vision (BLV), restricting their ability to engage with and understand data visualizations [39]. Individuals with BLV often come across data visualizations while using screen readers such as JAWS, NVDA, and VoiceOver to navigate the | 2310.09611#2 | 2310.09611#4 | 2310.09611 | [
"2303.04048"
] |
2310.09611#4 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | Conference acronym â XX, June 03â 05, 2018, Woodstock, NY web [34, 46]. Unfortunately, a significant portion of data visualiza- tions on the web remains largely inaccessible to this group [26, 46], resulting in a pronounced information gap. Numerous assistive technologies have been developed to allow BLV users to access visualizations using sensory modalities other than vision [34]. Tactile visualizations can provide a tangible rep- resentation of data while necessitating specialized hardware such as haptic displays [42] and embossing machines [15]. On the other hand, sonification can enable users to discern trends and anomalies through sound [51], but it is typically limited to single-series data. Traditional methods for adapting web visualizations for screen read- ers include data tables and alternative text [34]. However, these methods often diminish the inherent advantages of data visualiza- tions. New strategies have emerged that aim to offer enriched data experiences by enabling users to navigate chart structures with keyboards [48, 53, 55] or by permitting them to pose verbal ques- tions [45]. A recent comparative study indicates that each approach has its own advantages and disadvantages [33]. This work introduces VizAbility, a multimodal approach to cre- ating accessible data visualizations for screen readers, blending keyboard navigation with conversational interaction (Figure 1). In- stead of focusing exclusively on single-modality techniques, we combine the strengths of existing accessibility methods [33] to deliver an enhanced data experience, while minimizing their draw- backs. We utilize the established structured navigation method to facilitate a richer comprehension of chart appearances [10] while also giving users the option to transition to a data table view for a more familiar interaction. Our innovation lies in the question-and- answer segment, which addresses on-demand queries, fostering efficient data exploration. Our LLM-based pipeline first uses few-shot prompting to classify user queries into visual, analytical, contextual, and naviga- tion queries. Once classified, VizAbility employs a query-specific prompting strategy. For analytical and visual queries, we aggre- gate both the chartâ s transformed data and color encoding into one CSV file, which is subsequently fed along with the keyboard- navigable text representation [10] to the LLM via a CSV Agent [2]. | 2310.09611#3 | 2310.09611#5 | 2310.09611 | [
"2303.04048"
] |
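VizAbility's question-answering flow as summarized in the introduction (few-shot query classification, followed by query-specific prompting in which the chart's transformed data and color encoding are written to a CSV for a data-frame agent) can be sketched roughly as follows. Everything here is an assumption-laden illustration rather than the system's actual code: `llm` is any text-completion callable, the handler callables are placeholders, and the example format simply mirrors the paper's "Label // question" few-shot examples.

```python
# Hedged sketch of VizAbility's classify-then-route step; `llm` and the handler
# callables are hypothetical stand-ins, not the system's real interfaces.
from typing import Callable, Dict, List, Tuple
import pandas as pd

QUERY_TYPES = ["Analytical Query", "Visual Query", "Contextual Query", "Navigation Query"]
FALLBACK = "I am sorry. I am unable to answer this question."

def classify_query(llm: Callable[[str], str], question: str,
                   examples: List[Tuple[str, str]]) -> str:
    """Few-shot classification; `examples` holds (question, label) pairs from a validation set."""
    shots = "\n".join(f"{label} // {q}" for q, label in examples)
    prompt = ("Classify the question into one of: " + ", ".join(QUERY_TYPES) + ".\n"
              f"If none fits, reply exactly: {FALLBACK}\n\n{shots}\n\nQuestion: {question}\nLabel:")
    return llm(prompt).strip()

def route(llm: Callable[[str], str], question: str, view_data: pd.DataFrame,
          colors: List[str], examples: List[Tuple[str, str]],
          handlers: Dict[str, Callable[[str], str]]) -> str:
    label = classify_query(llm, question, examples)
    if label in ("Analytical Query", "Visual Query"):
        # Transformed (view) data plus a per-row color column, handed to a CSV-capable agent.
        view_data.assign(color=colors).to_csv("chart_view.csv", index=False)
        return handlers["csv_agent"](question)
    if label == "Contextual Query":
        return handlers["web"](question)        # web-browsing agent
    if label == "Navigation Query":
        return handlers["navigate"](question)   # shortest-path helper over the tree view
    return FALLBACK
```

Constraining the classifier to a fixed label set plus an explicit fallback string, as above, is what lets the system reject out-of-scope questions instead of hallucinating an answer.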
2310.09611#5 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | Contextual queries utilize a Web Browser Agent [3], whereas navigation queries employ the LLM to discern the starting/ending nodes from a user query and employ a breadth-search algorithm to calculate the shortest path between the nodes. We designed the prompts to minimize hallucinations and address unanswerable queries via structured output formatting. We collaborated with a blind co-design participant in the development of VizAbility, hold- ing two feedback sessions. Their insights, particularly on enhancing interface transparency, were integral to shaping our system design. We carried out both quantitative and qualitative assessments to evaluate VizAbilityâ s question & answering pipeline and overall usability. We evaluated response accuracy using a dataset of 979 real BLV user questions derived from previous research [32]. Splitting the dataset, 80% was used for testing and 20% for validation. Our query classification achieved an accuracy of 88.5%. For response evaluation, we leveraged GPT4 to measure the coherence between the ground truth and our response on a 5-point Likert scale, ranging from â Very Poorâ to â Very Goodâ . Notably, 47% of the responses were rated as â Very Goodâ . | 2310.09611#4 | 2310.09611#6 | 2310.09611 | [
"2303.04048"
] |
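The navigation-query handling, which searches for the shortest keyboard path between two nodes of the tree view, is a standard breadth-first search. The sketch below is generic: the adjacency structure keyed by arrow-key moves is an assumption for illustration, not VizAbility's internal tree representation.

```python
# Generic BFS sketch for navigation queries; `adjacency` maps a node id to
# (key_press, neighbor_id) pairs, mirroring up/down/left/right keyboard moves.
from collections import deque
from typing import Dict, List, Optional, Tuple

def shortest_key_path(adjacency: Dict[str, List[Tuple[str, str]]],
                      start: str, goal: str) -> Optional[List[str]]:
    """Return the sequence of key presses from `start` to `goal`, or None if unreachable."""
    queue = deque([start])
    came_from: Dict[str, Tuple[str, str]] = {}   # node -> (previous node, key press used)
    seen = {start}
    while queue:
        node = queue.popleft()
        if node == goal:
            steps: List[str] = []
            while node != start:
                prev, key = came_from[node]
                steps.append(key)
                node = prev
            return list(reversed(steps))
        for key, nxt in adjacency.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                came_from[nxt] = (node, key)
                queue.append(nxt)
    return None

# Tiny example: moving between two sibling legend groups.
adj = {
    "legend": [("down", "group:negative")],
    "group:negative": [("up", "legend"), ("right", "group:positive")],
    "group:positive": [("up", "legend"), ("left", "group:negative")],
}
print(shortest_key_path(adj, "group:negative", "group:positive"))  # ['right']
```

Returning the key presses as spoken instructions, rather than moving the cursor automatically, matches the system's stated emphasis on user autonomy and transparency.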
2310.09611#6 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | Additionally, using a binary scale to Gorniak et al. categorize responses as either â Correctâ or â Incorrectâ , we attained a 69.4% accuracy rate. For the usability study, we enlisted six BLV participants through the National Institute for the Blind. Initially, participants explored VizAbility without guidance and were subsequently introduced to various query types. They also completed the System Usability Scale survey. The results suggest that while participants could learn to use the system, discerning query types without guidance proved challenging. Nonetheless, they acknowledged the merits of the inte- grated approach and offered suggestions for further improvements and potential applications. Combining insights from both quantita- tive and qualitative evaluations, we identify potential avenues for future work. These include enhancing user-driven customization, developing a more robust benchmarking system, and integrating our solution into existing visualization tools. 2 RELATED WORK 2.1 Accessibility Systems for Data Visualization The recent survey offers an overview of previous efforts explor- ing the use of non-visual modalities, such as speech, sound, and touch [34]. For example, sonification employs non-speech audi- tory channels, such as pitch and volume, to represent data [43, 51]. While this can offer users a swift overview of a graph, it struggles to communicate exact values and might not be effective beyond single-series charts [19]. An empirical study indicates that blind individuals favor speech over sonification, as the cognitive load for a sonified graph feels subjectively more intense [43]. Tactile systems employ methods like embossed prints, haptic feedback through vibrations, and braille for text representation. These systems enable both simultaneous and on-demand explo- ration of data trends, offering an advantage over linear audio [17]. However, they also necessitate enhanced perceptual motor skills. Similar to sonification, accurately discerning complex structures can be challenging, often demanding a more refined spatial reso- lution [15]. Producing tactile graphs typically involves specialized hardware, such as embossers, which might not be economically feasible for the average user [34]; thus, they are typically used and created in the field of education by teachers [16]. Screen readers, utilizing text/speech modalities, stand as the predominant assistive technology, particularly for navigating web content. The go-to accessibility techniques for screen readers en- compass alternative text and data tables. | 2310.09611#5 | 2310.09611#7 | 2310.09611 | [
"2303.04048"
] |
2310.09611#7 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | Yet, these strategies often reduce data visualizations to brief descriptions or mere numbers, undermining their inherent advantages. An alternative approach in- volves crafting navigable text descriptions derived from the chartâ s structure. A select group of data visualization tools and toolkits, such as HighCharts, offer some degree of this navigation and cus- tomization [33]. In recent times, several systems have elevated their offerings by introducing advanced navigation structures, represent- ing the chart as a traversable graph structure [14, 22, 48, 53, 55]. Voice-based virtual assistants are emerging as valuable acces- sibility tools in human-computer interaction [49]. However, only a handful of studies have delved into using natural language for accessing data visualization content. For instance, Murillo-Morales & Miesenberger [41] showcased a prototype system where users can ask predefined questions related to data metrics such as mean, # VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | 2310.09611#6 | 2310.09611#8 | 2310.09611 | [
"2303.04048"
] |
2310.09611#8 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | Conference acronym â XX, June 03â 05, 2018, Woodstock, NY extremes, and range. In a similar vein, VoxLens [32] facilitates voice-activated interactions capable of addressing basic queries with terms like â maximumâ and â medianâ . Additionally, Kim et al. [32] used a wizard-of-oz approach to study the types of ques- tions blind individuals pose about charts. To address the limitations of relying on a single sensory modality, multi-sensory perception is frequently utilized. A prevalent strategy involves merging verbal (speech) cues with non-verbal ones, such as sonification, tactile graphics, and haptic feedback. Examples include offering on-demand audio descriptions of touched elements [21, 23, 35] or pairing sonification with speech or screen readers [47, 48]. However, these solutions often necessitate specialized software and hardware, especially for interactive tactile support, making them expensive to implement. In this study, we adopt a different multimodal approach that merges structured chart and table navigation using the keyboard with conversational interaction via verbal commands. Our work builds on the prior work that showcases the respective advantages of data tables (familiarity), structured navigation via keyboard (deeper understanding) [55], and conversational interaction via verbal commands (faster data exploration) [45]. Our primary tech- nical advancement centers on employing LLMs to substantially enhance the current chart question-and-answer mechanism for the visually impaired. # 2.2 Question & Answering Systems for Data Visualization Within the realm of image understanding research, visual question answering has been rigorously explored in both natural language processing and computer vision, specifically regarding answering text-based queries about images [8, 28, 54]. Yet, the majority of these endeavors have centered on natural scene images rather than human-generated visuals such as data visualizations. questions differently compared to those with sight [13, 24]. A lim- ited number of systems directly address the challenge of crafting question-and-answer systems tailored for the blind [41, 45]. How- ever, these systems do not always offer specialized features for the blind and are constrained in their question-answering capabilities. For instance, VoxLens [45] is limited to charts with single series data, while the system by Murillo-Morales & Miesenberger [41] is restricted to bar charts. | 2310.09611#7 | 2310.09611#9 | 2310.09611 | [
"2303.04048"
] |
2310.09611#9 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | Kim et al. [32] have recently curated a set of questions posed by blind individuals through a wizard-of- oz study, laying the groundwork for more refined and targeted question-and-answer systems. In this paper, we present an enhanced chart question-and-answer system for the blind, harnessing the power of LLMs. We integrate structured information from the keyboard navigation method [10], which takes Vega-lite as input. Our system addresses a wide range of queries, from data and visual to contextual ones that necessi- tate auxiliary information surrounding the chart. Additionally, it facilitates navigation queries to synchronize with keyboard naviga- tion. We assessed our system using the data collection from Kim et al. [32], which comprises questions posed by blind individuals. # 3 VIZABILITY DESIGN DECISIONS | 2310.09611#8 | 2310.09611#10 | 2310.09611 | [
"2303.04048"
] |
2310.09611#10 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | G1: Enable understanding the chart structure. Bridging the per- ceptual gap between BLV and sighted individuals requires a deep understanding of chart structures. While some blind individuals may not prioritize visual encoding information [38, 48], previous research indicates that navigating charts based on their visual en- coding helps BLV users gain a clearer visual understanding. Fur- thermore, a hierarchical representation of charts, rooted in visual encodings, offers a layered approach to information, allowing users to delve from broad summaries to specific data points [48]. In this study, we employ Olli [10] to facilitate structured chart navigation. Recent studies have begun to focus on data visualization im- ages [25]. For example, FigureQA [30] offers a corpus tailored for yes/no questions, such as â | 2310.09611#9 | 2310.09611#11 | 2310.09611 | [
"2303.04048"
] |
2310.09611#11 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | Is Light Gold less than Periwinkle?â . Con- versely, DVQA [29] expands its purview to encompass questions about chart structure (â are the bars horizontal?â ), data retrieval (â what percent of people prefer A?â ), and reasoning (â Is A preferred more than B?â ). While both FigureQA and DVQA rely on synthet- ically generated charts, PlotQA introduces a large-scale dataset of real-world scientific plots. Unlike the templated questions of the aforementioned datasets, ChartQA delivers human-composed questions, enhanced using LLMs [40]. These models predominantly process pixel images as input. For instance, ChartQA extracts data tables and other image features, feeding them into vision and lan- guage task models [12]. Consequently, their accuracy largely hinges on their image processing capabilities, often leading to suboptimal results. In a different approach, Kim et al.[31] unveiled a system that not only answers questions but also provides explanations, op- erating on Vega-lite[44] instead of images. All the current question- answering systems are limited to basic visualization types like bar, line, and pie charts. While chart QA systems hint at the potential for enhancing visualization accessibility, they often overlook the specific needs of BLV users. Recent studies have shown that BLV users frame | 2310.09611#10 | 2310.09611#12 | 2310.09611 | [
"2303.04048"
] |
2310.09611#12 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | G2: Support efficient data exploration. Navigating through a large number of data points using keyboard navigation can be cum- bersome, as highlighted in previous studies [33, 55]. Furthermore, extracting aggregate measures and discerning perceptual patterns beyond basic value retrievals becomes challenging when navigating data points individually. A conversational agent offers a potential solution to these challenges [33]. When combined with keyboard navigation, the userâ s current location can offer situational context, reducing the cognitive load when formulating clear questions for the intelligent agent. In this study, we leverage the advanced lan- guage understanding and reasoning capabilities of LLMs to address on-demand conversational queries. | 2310.09611#11 | 2310.09611#13 | 2310.09611 | [
"2303.04048"
] |
2310.09611#13 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | G3: Provide contextual knowledge on demand. Current chart ques- tion and answering systems often neglect the distinct types of ques- tions posed by blind versus sighted individuals. Recent research involving blind participants indicates that they frequently ask con- textual questions alongside data-related and visual inquiries [32]. These questions often seek external information not present in the chart, such as meanings about axes or specific data labels. Provid- ing answers to these inquiries can enhance the self-efficacy and autonomy of blind individuals. In our approach, we utilize an LLM with web search capabilities to address these contextual queries. | 2310.09611#12 | 2310.09611#14 | 2310.09611 | [
"2303.04048"
] |
2310.09611#14 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | Conference acronym â XX, June 03â 05, 2018, Woodstock, NY G4: Use data tables as a familiar fallback strategy. The hierarchi- cal text representation of the chart may be regarded as excessive for smaller data sets, in which case conventional data tables are the preferable alternative. Moreover, data tables are well supported by screen readers and the most familiar method. This perspective, although not our initial focus, was reinforced by our user study and corroborated by previous research [33, 55]. Consequently, we incorporated the data table feature post-user study (Section 6). | 2310.09611#13 | 2310.09611#15 | 2310.09611 | [
"2303.04048"
] |
2310.09611#15 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | G5: Reduce gulf of execution and execution. Beyond the primary objectives, enhancing the user experience of VizAbility was also a key focus. For example, we expanded upon the query types iden- tified in prior research [32] by introducing navigation queries, fa- cilitating nonlinear navigation across charts and assisting users with orientation. We meticulously designed LLM prompts to ensure responses were succinct yet descriptive, while also minimizing the risk of misinterpretations or fabricated information. Additionally, we ensured numbers were formatted properly for screen readers, offered an alternative text box for speech queries, and added loading indicators to signal when LLM responses were pending. # 4 VIZABILITY SYSTEM INTERFACE & ARCHITECTURE Below, we outline the input chart format for VizAbility, explain how VizAbility facilitates keyboard navigation and conversational interaction with the chart, and address additional accessibility con- siderations based on the design decisions mentioned earlier. 4.1 Input Chart Format VizAbility assumes that both the visual encoding information and underlying dataset are made accessible. In this work, we use a Vega-Lite specification [44] as input to our system, while other specifications such as Observable Plot [4] are easily adaptable. 4.2 Exploring Chart Content using Keyboard Among many keyboard navigation methods available, we leverage Olli [10] to make the chart explorable as it is open-source. Olli accepts a Vega-lite spec and renders a visual chart for sighted users and also a keyboard navigable text representation (Figure 2). Olliâ s tree view displays the chart content in a hierarchical struc- ture, starting with the chart type description at the rootâ A bar chart. With axes Year and Temperature Anomaly (°C), followed by visual encoding channels such as axes and legendsâ Legend titled Temporal Polarity. For a nominal scale. With 2 values from nega- tive to positive. Within each encoding channel node, Olli lists data categories or numerical ranges depending on the data type being encoded; e.g., for a color legend, it lists all categories in the legendâ 1 of 2. Temporal Polarity equals negative. 101 values. Press t to open table. Individual data points reside in these group nodes. All four chart types we used in this work, including line chart, bar chart, scatter plot, and choropleth map, had four levels of information granularity. | 2310.09611#14 | 2310.09611#16 | 2310.09611 | [
"2303.04048"
] |
2310.09611#16 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | A user first needs to enter the tree view to explore the content. Based on its hierarchical structure, users can navigate the different levels of the tree view using up and down arrow keys (barchart â legend â negative polarity) while using left and right arrow keys Gorniak et al. to navigate sibling nodes in the level (negative polarity â positive polarity). In order to access individual data points, Olli requires users to press t to open up a screen-reader-compatible data table. This table shows a subset of the whole data, only displaying data points within the category or numerical range. The current version of Olli does not support navigating a choro- pleth map by geographic regions. We extended it to support the level of detail channel in Vega-lite1. As a result, we can encode country names or state names into the detail channel, which is in turn converted into an additional encoding channel node (see Figure 2). # 4.3 Rapid Chart Probing via Conversational Interaction The keyboard navigation of the chart content can convey a clear picture of how a chart looks to blind users [33]. However, it can also be cumbersome to navigate individual nodes in the tree view or derive aggregate measures on the go. | 2310.09611#15 | 2310.09611#17 | 2310.09611 | [
"2303.04048"
] |
2310.09611#17 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | To address this challenge, we integrate a speech-based interaction in which users can ask natural language questions as needed. Leveraging the question-answering capabilities of Large Language Models (LLMs), we detail our incor- poration of LLMs into our accessible data visualization systems. We outline the supported query types and how we seamlessly merge keyboard and speech inputs to enhance the chart experience. 4.3.1 Data Set. We utilized a prior studyâ s data set, comprising 979 BLV user questions spanning four visual stimuli (bar, line, scat- ter, and map) for the development and quantitative evaluation of VizAbility. These questions were gathered through a wizard-of-oz study, where a human facilitator acted as a question-answering system. We reconstructed the visualization images into Vega-Lite specifications and partitioned the questions into analytical, visual, and contextual queries. We then partition the pool of questions once more into an 80/20 split between the testing and validation sets via stratified random sampling so that there is a proportionate representation of each query type amongst both sets. The ground truths for the testing and validation sets were gen- erated manually. Each user query within the data set has an accom- panying ground truth classification, expressed as either â Analyti- cal Queryâ , â Visual Queryâ , or â Classification Queryâ , as well as a ground truth for the query response, for which we emphasized ver- boseness. For instance, the ground truth response to the question â | 2310.09611#16 | 2310.09611#18 | 2310.09611 | [
"2303.04048"
] |
2310.09611#18 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | What is the vaccination rate of South Africaâ is â The vaccination rate for South Africa is 36%â , as opposed to the more concise â 36%â . This enables us to evaluate both the quantitative and qualitative aspects of the response yielded by VizAbility. Supported Query Types. Analytical queries primarily focus 4.3.2 on understanding the underlying data, such as â Is Africa the country that needs the vaccine the most?â or â What is the highest positive anomaly?â Visual queries relate to visual encoding information or demand visual interpretation, exemplified by questions like â What color is North America?â or â Is the line fluctuating?â Analytical and visual queries are not entirely distinct; visual queries often necessitate data interpretation, as in â Which country exhibits the darkest shades for both the lowest and highest values?â . 1https://vega.github.io/vega-lite/docs/encoding.html#detail VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction Conference acronym â XX, June 03â 05, 2018, Woodstock, NY | 2310.09611#17 | 2310.09611#19 | 2310.09611 | [
"2303.04048"
] |
2310.09611#19 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | Tree View Keyboard Navigation â A geographic map. percent fully vaccinated, country. | Press Close Table View â geographic map. percent.fully_vaccinated, country. - Legend titled percent fully_vaccinated. For a quantitative scale, With values from 0 to 100. Table View â Share of Population Receiving at Least One Dose i aeaeee Detail ttied country. For a â â vn 180 values from Costa Rica to Vanuatu percent fully_vaccinated is between 10 ress and 20 #4 â A geographic map. percent fully vaccinated, country. Â¥ A . Legend titled percent fully.vaccinated, For a quantitative scale. With values from 0 to 100 lif Aad . 1 of 10, Percent.fully_vaccinated Is between 0 and 10.6 values. Press t to open table. percent_fully vaccinated country Lye ae 2 of 10, Percent fully vaccinated is between 10 and 20, 12 values. Press t to open 5 syria ee > table. 3 of 10. Percent fully vaccinated is between 20 and 30. 11 values. Press t to open 12 â Congo table 7 Democratic Republic 4 of 10, Percent fully vaccinated is between 30 and 40, 19 values. Press t to open creas table 3 J Press G2 18 Libya [A geographic map. percent-fully vaccinated, country, ; Qe Legend titted percent fully vaccinated. For a quantitative scale, With values from 0 to 100. - te 1 of 10, Percent fully-vaccinatedis between 0 and 10.6 values. Press to open table. rage rH 15 Algeria porn Percent fully_vaccinated is between 10 and 20, 12 values. Press t to open 13 Cameroon 3 of 10, Percent fuly-vaccinated is between 20 and 30, 11 values. Press to open 12 Gabon table 7 Burkina Faso 4 of 10, Percentfully. vaccinated is between 30 and 40. 19 values. Press to open table 19 â Togo Figure 2: An example of a userâ s keyboard traversal of the Olli Tree. | 2310.09611#18 | 2310.09611#20 | 2310.09611 | [
"2303.04048"
] |
2310.09611#20 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | Users can widen/narrow the scope of the text via the up/down arrow keys (respectively), and otherwise navigate between sibling nodes using left/right arrow keys. To access individual data, users can press the â tâ key to view a snapshot data table Line Chart Bar Chart 117 36 137 21 196 37 155 32 605 126 8 8 N/A 21 N/A 9 N/A 46 N/A 161 166 254 196 777 Table 1: Testing data distribution amongst the four query classifications Contextual questions seek information not directly present on the chart but require ancillary knowledge related to it. For instance, some questions aim to understand the chartâ s encoding, like â What is a scatterplot?â or â What does â positive temperature anomalyâ mean?â Others ask about context related to the data, such as â Where is Palestine?â or â | 2310.09611#19 | 2310.09611#21 | 2310.09611 | [
"2303.04048"
] |
2310.09611#21 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | Why does the data start in 1880? What occurred then?â Additionally, there are inquiries about the dataâ s origin, exemplified by â What is the source of this information?â or â From where was this data obtained?â Navigation queries are a category we introduced to enhance the user experience. These queries are tailored to the synergy be- tween keyboard navigation and conversational interaction. For instance, to reduce cumbersome keyboard navigation and assist users in orientation, questions such as â How can I get to the X-axisâ (direction) or â Where am I?â (orientation) can be beneficial. Our motivation for this stems from a previous empirical study [33], where blind individuals highlighted such challenges with Olliâ s tree view. 4.3.3 Query Classification. First, we aim to classify user queries based on this categorization rather than diving straight into re- sponses. Once classified, we proceed to address each type of query in the subsequent phase (see the next section). This task division provides the LLM with a well-defined task and has been proven to increase its performance [52]. Figure 3 shows our few-shot prompt- ing approach. In the prompt, we provide a clear definition for each | 2310.09611#20 | 2310.09611#22 | 2310.09611 | [
"2303.04048"
] |
2310.09611#22 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | query type. To bolster the definition, we accompany each with four exemplar questions. These examples are sourced from our validation set, chosen based on their close alignment with the user query. Specifically, for each query type and the given user query, we sift through the validation set to pinpoint the four most analogous queries. These are then incorporated as representative examples for each query definition within the prompt. For this endeavor, we used sentence transformers to generate text embeddings and then applied cosine similarity to these embeddings to identify the most closely aligned examples. This method offers greater precision compared to arbitrarily selecting samples for each query type. We constrain the range of LLM responses by explicitly instruct- ing it to output either: â Analytical Queryâ , â Visual Queryâ , â Con- textual Queryâ , or â Navigation Queryâ . | 2310.09611#21 | 2310.09611#23 | 2310.09611 | [
"2303.04048"
] |
2310.09611#23 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | To thwart any potential hallucinations from the LLM, we provide an accessible escape route by instructing the model to return â I am sorry. I am unable to an- swer this questionâ when confronted with a question that does not immediately conform to any of the specified query types. Without such a safeguard, GPT frequently generates technical jargon and error messages that can deter users. 4.3.4 Query-Specific Prompting. The answering pipeline diverges into three unique paths, depending on the query type. | 2310.09611#22 | 2310.09611#24 | 2310.09611 | [
"2303.04048"
] |
2310.09611#24 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | Conference acronym â XX, June 03â 05, 2018, Woodstock, NY Gorniak et al. Example Queries â Validation Dataset â The number of homes for sale nationally has plummeted Analytical Queries involve any possible lookup operations, computations, or analysis involving data. â Analytical Query // What Is the average number of homes for sale from 2018 to 2021: â Analytical Query // What were the percentage of increase or decrease in average number of nouses an sal between 2015 and 20177 (CAnaiytical Query) aN fra (Visual Query) Oe i Analytical Query // How many houses were sold in 2017? I lesed Analytical Query // What is the average amount of houses sold? Classification Prompt ia (Contextual Query } ins Visual Queries involve references to visual cues such as color or graph shape/characteristics. Your objective is to classify Navigation Query Visual Query // Is column two showing houses for sale? Visual Query // Is the picture(cnart) in between $0 to 80? Visual Query // What countries are in brighter range? Visual Query // Does each circle give the specific number of population lke the table or just the size of the circle? the following question into one of these four categories | (Example Queries } User Query (© What's the average number of homes for. > sale between 2017 and 2020? Navigation Queries involve questions relating to location within the Olli navigation table. | 2310.09611#23 | 2310.09611#25 | 2310.09611 | [
"2303.04048"
] |
2310.09611#25 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | They usually take up the form: "How do get from () to ()" Navigation Query // Does this one that 'm on have a comma in it? | Refer back to the examples | above to help classify the question: @ â Contextual Queries involve broad questions that do not necessarily need the graph's specific data to be answered, | Contextual Query // Does the system know the causes for sharp decrease in home s. | Contextual Query // when was this data collected? | Contextual Query // Do you have information on the percent of people who recelved two doses of a | Contextual Query // What is meant by upper range? Compute cosine similarity scores (User Query} to extract four most aligned queries per type « other than Covia? Figure 3: User questions are initially categorized based on query type via an LLM trained with few-shot prompting. We populate the prompt with sample questions and their corresponding ground truth classifications, which we extract from the validation set. Only those validation questions which share the highest cosine similarity score with the user query are selected within each query type. Data, Visual, Context LLM Output ! | 2310.09611#24 | 2310.09611#26 | 2310.09611 | [
"2303.04048"
] |
2310.09611#26 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | Orange data points represent countries Entity Year Life Expectancy at Birth _ GDP per Capita in Asia. . . Afghanistan 1950 27.7 S Albania 1950 44.7 ae : 1950 42.4 Population 7480464 1252587 9019866 Explore the structure and components of the chart through a text representation, Instructions: Press enter onthe treeview to explore the contents ofthe chart. Navigate using the arrows keys. To el, press escape, Context! | 2310.09611#25 | 2310.09611#27 | 2310.09611 | [
"2303.04048"
] |
2310.09611#27 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | Ascatrplet howe xsi at han Ppa apr counts adhe wari anf yar rm 15002018, â â â â â ->| Chart text + User cursor location 1.3.2 |/ 2 of 6. Continent equals Europe. 25 values {Address of Node // String Representation} | User Query | What do orange data points mean? 1156 1596 2176 orange darkolivegreen teal | LLM Prompt | Before you output the answer check for the following | Make sure to format all numerical responses ' appropriately, using things such as commas, dollar ' signs, appropriate rounding, and other identifiers when | necessary. ' {Context | Use this information along with everything else you are ! given to answer the following question: i! {User Query} Figure 4: Query-specific evaluation for Analytical and Visual queries. We parse the chartâ s transformed data set and aggregate color encoding within a CSV file, which we then supply to an LLM via a CSV agent. For further context, we also populate the prompt with the userâ s active position within the Olli Tree, in addition to a text representation of the Tree itself. Analytical & Visual Queries. Figure 5 illustrates our approach to handling analytical and visual queries. To circumvent the pre- defined token limit of the LLM, we consolidate the transformed data extracted from the Vega View [5] into an external CSV file. This file is then processed by LangChainâ s CSV Agent [2], which operates in the background. Under the hood, this agent leverages the Pandas DataFrame agent, subsequently executing Python code generated by the LLM. | 2310.09611#26 | 2310.09611#28 | 2310.09611 | [
"2303.04048"
] |
2310.09611#28 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | We purposefully avoid including the entire raw dataset, recognizing that it might differ from the final view data. Often, the agent can get stuck in an infinite loop of thinking. To prevent this, we have implemented a time constraint. If this time limit is exceeded, VizAbility will display the message: â Answer: Iâ m sorry, but the process has been terminated because it took too long to arrive at an answer.â While the CSV agent can handle most data-related queries, it is not aware of any visual encoding information of the chart. To ad- dress visual queries, we extract color information directly from the Vega View [5] and incorporate it as an additional column within the CSV file. This modification ensures that each data point is paired with its corresponding color. Initially, the extracted color data is in hex codes. To enhance user-friendliness, we employ a color-matching algorithm to convert the hex codes into more com- mon English names. This algorithm works by cross-referencing the source hex code with a predefined list of color hex codes and English names [1], ultimately determining the closest matching name based on RGB distance. The color augmentation process enables answering visual ques- tions like â What color is Algeria? What other countries are the color of Algeria?â | 2310.09611#27 | 2310.09611#29 | 2310.09611 | [
"2303.04048"
] |
2310.09611#29 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | , as VizAbility responds: "Algeria is orange-red and other countries with the same color are Syria, Iraq, Congo, [...]." Furthermore, the LLM is lenient with user queries and accepts a certain margin of error for color input, e.g., if the user asks about what blue represents, the system can infer blue refers to steelblue in the map. | 2310.09611#28 | 2310.09611#30 | 2310.09611 | [
"2303.04048"
] |
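A small sketch of the color-matching idea described above: map a hex color from the view to the closest common English name by RGB distance. The short name table here is an illustrative subset; the described approach cross-references the full list of CSS color keywords [1].

```python
# Sketch: nearest named color by squared RGB distance.
CSS_COLORS = {
    "steelblue": (70, 130, 180),
    "orange": (255, 165, 0),
    "orangered": (255, 69, 0),
    "darkolivegreen": (85, 107, 47),
    "teal": (0, 128, 128),
}


def hex_to_rgb(hex_code: str) -> tuple:
    h = hex_code.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))


def closest_color_name(hex_code: str) -> str:
    r, g, b = hex_to_rgb(hex_code)
    return min(
        CSS_COLORS,
        key=lambda name: (CSS_COLORS[name][0] - r) ** 2
        + (CSS_COLORS[name][1] - g) ** 2
        + (CSS_COLORS[name][2] - b) ** 2,
    )


# closest_color_name("#ff4500") -> "orangered"
```

The same nearest-color lookup also supports the leniency mentioned above: a user's "blue" can be resolved to the chart's actual "steelblue" encoding.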
2310.09611#30 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | To provide further context for the chart, we have integrated a textual representation of the chart generated by Olli directly into the LLM prompt (see Figure 5). This addition has the potential to significantly enhance the performance of visual question-answering. For example, when presented with the question "What does the graph show?", the system, without the text representation, provided a response like "The graph shows the data from the dataframe, which includes the year, value, temporal polarity, ...". However, when furnished with the text representation, the LLM responded with a more comprehensive and human-friendly answer: " | 2310.09611#29 | 2310.09611#31 | 2310.09611 | [
"2303.04048"
] |
2310.09611#31 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | The graph shows the temporal polarity of the temperature anomaly (in degrees Celsius) from 1850 to 2021...", illustrating the substantial improvement in response quality. interpreted as involving navigation, but either no starting/ending point was provided, or the tree view was not activated. Please try again." Once the starting and ending points have been identified, we employ a breadth-first search algorithm that returns string instructions of the shortest path, which users can then manually follow at their own discretion. We opted for this approach as opposed to automatically moving the user to their desired ending point with the rationale that autonomy and transparency are crucial for our intended audience. # 4.4 Other Accessibility and Usability Considerations Moreover, we supplement it with the user's current position within the tree view, tracked via the user's keyboard movements. | 2310.09611#30 | 2310.09611#32 | 2310.09611 | [
"2303.04048"
] |
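A sketch of the wayfinding step described above: a breadth-first search over the tree, translated into keystroke instructions. The key mapping assumed here (up = parent, down = first child, left/right = previous/next sibling) is an illustration and may differ from the actual tree-view bindings.

```python
# Sketch: shortest path between two tree-view nodes, returned as arrow-key steps.
from collections import deque


def shortest_path_instructions(tree, start, goal):
    """tree maps a node address to {"up": ..., "down": ..., "left": ..., "right": ...},
    where each value is a neighboring node address or None."""
    queue = deque([(start, [])])
    visited = {start}
    while queue:
        node, steps = queue.popleft()
        if node == goal:
            return " ".join(f"Press the {key} arrow key." for key in steps)
        for key, neighbor in tree[node].items():
            if neighbor is not None and neighbor not in visited:
                visited.add(neighbor)
                queue.append((neighbor, steps + [key]))
    return "No path between the two nodes was found."
```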
2310.09611#32 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | This feature can help address potentially ambiguous questions. For instance, a user might ask, "What's an average?" with the intention of inquiring about the average within a category where their cursor is located. We also ensure that the responses are properly formatted with commas and special characters so that they are optimized for screen reader interpretation (e.g., 468297 → 468,297). Contextual Queries. To address contextual queries that do not necessitate a deep understanding of the chart or its data, we have incorporated a Web Browser agent [3] to retrieve more general information relevant to chart comprehension. For example, when presented with the contextual query, "What do you mean by temperature anomalies," the LLM responds with, "Temperature anomalies are any measure of temperatures that are unusual for a particular region, season, or time period. [...]" Categorizing questions beforehand enabled us to streamline the process and eliminate unnecessary, resource-intensive prompts needed for analytical and visual queries. | 2310.09611#31 | 2310.09611#33 | 2310.09611 | [
"2303.04048"
] |
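A hedged sketch of the two pieces just described: routing a contextual query through a web-search tool (LangChain's SerpAPI integration [3], which requires a SERPAPI_API_KEY) and formatting numbers for screen readers. The function names and agent configuration are illustrative, not VizAbility's exact setup.

```python
# Sketch: contextual queries go to a web-search agent; numeric answers are
# reformatted with thousands separators before being read aloud.
from langchain.llms import OpenAI
from langchain.agents import load_tools, initialize_agent, AgentType


def answer_contextual_query(question: str) -> str:
    llm = OpenAI(temperature=0)
    tools = load_tools(["serpapi"], llm=llm)
    agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)
    return agent.run(question)


def format_number_for_screen_reader(value: float) -> str:
    # e.g. 468297 -> "468,297", so digit groups are announced sensibly.
    value = float(value)
    return f"{value:,.0f}" if value.is_integer() else f"{value:,.2f}"
```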
2310.09611#33 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | What do you mean by temper- ature anomalies,â the LLM responds with, â Temperature anomalies are any measure of temperatures that are unusual for a partic- ular region, season, or time period. [...]â Categorizing questions beforehand enabled us to streamline the process and eliminate un- necessary, resource-intensive prompts needed for analytical and visual queries. Previous research[33, 55] highlights that data tables are a highly familiar and well-supported technology among blind individuals. In this context, VizAbility offers users the flexibility to seamlessly switch between the tree view and a conventional raw data table view. While the tree view facilitates structured exploration based on visual encoding, the data table provides additional advantages like sorting features, enabling users to quickly access specific data values and patterns. We disable navigation queries in the data table mode. | 2310.09611#32 | 2310.09611#34 | 2310.09611 | [
"2303.04048"
] |
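As a small illustration of the data-table mode's sorting advantage mentioned above, the rows of the exported view data can simply be re-sorted on demand before rendering; the helper below is a sketch with pandas assumed, not the actual front-end code.

```python
# Sketch: build sortable rows for a raw data table view of the chart's view data.
from typing import Optional

import pandas as pd


def table_view(csv_path: str, sort_by: Optional[str] = None, ascending: bool = True):
    df = pd.read_csv(csv_path)
    if sort_by is not None:
        df = df.sort_values(sort_by, ascending=ascending)
    # One dict per row, ready to be rendered as an accessible HTML table.
    return df.to_dict(orient="records")
```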
2310.09611#34 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | Users can submit conversational queries via voice recordings that are processed via the Whisper speech recognition [6]. However, oftentimes, enabling microphones can be problematic. Thus, we provide an alternative text box so that they can type the queries using the keyboard. Upon inputting their question (regardless of the modality), users are provided with an audible cue of "Loading. Please Wait". Every subsequent 3 seconds, the user is exposed to yet another audible cue, this time "Still Loading". This loading cue significantly improves transparency and mitigates any possible confusion that can arise from an unresponsive webpage. Navigation Queries. We seek to integrate users' keyboard navigation with the conversational module via navigation queries. VizAbility currently supports two types of navigation queries: (a) wayfinding questions, in which, upon being provided a starting and ending point within the tree view, the model returns a series of directions dictating the shortest traversal and (b) orientation questions, in which VizAbility returns the user's current location within the tree view. To handle navigation queries, we attribute a unique address to each node of the tree view and convey this, along with the user's current position, to the LLM. Through the utilization of few-shot prompting, we instruct the LLM to discern the starting point and ending point from the user query. It is crucial that the model has significant leniency in user queries, as it is highly unlikely that the user will specify the exact starting/ending points verbatim. Thus, the few-shot prompting primes the LLM to properly interpret the user query. For example, in response to the query " | 2310.09611#33 | 2310.09611#35 | 2310.09611 | [
"2303.04048"
] |
2310.09611#35 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | Take me to Haiti" (related to the choropleth map), the LLM comprehends the user query's context and correctly deduces that the absence of an explicit starting node means the user intends to initiate navigation from their current location. On the other hand, VizAbility can easily infer the ending point, which is the node titled: "3 of 180. Country equals Haiti. 1 value. Press t to open table." If the model cannot discern any starting or ending point, it yields: "The question was | 2310.09611#34 | 2310.09611#36 | 2310.09611 | [
"2303.04048"
] |
2310.09611#36 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | The question was VizAbility does not solely display the answer, and instead provides the user query and brief justification behind its response in conjunction with the actual answer. For instance, the following is articulated by VizAbility when a user asks, "What is a choropleth map?": "Your question 'What is a choropleth map?' was categorized as being context-seeking, and as such, has been answered based on information found on the web." By letting users know the scope of the answer (i.e., whether it was sourced from the internet, data, or the tree view), we allow users to verify and evaluate the effectiveness of the LLM response independently, thus bolstering user trust and system transparency. # 5 EVALUATION: Q&A PERFORMANCE BENCHMARK For our quantitative evaluation, we concentrated on validating the question-answering pipeline using the testing dataset. This evaluation comprised two components: assessing the accuracy of query classification and evaluating the correctness of question responses. 5.1 Classification Evaluation We simply compared the classification result of VizAbility to the ground truth query type. We used a relaxed comparison, allowing for any potential discrepancies in formatting (such as the addition | 2310.09611#35 | 2310.09611#37 | 2310.09611 | [
"2303.04048"
] |
2310.09611#37 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | [Figure content (navigation query example): the Olli tree text for the scatterplot (a circle chart with X-axis "GDP per capita", Y-axis "Life expectancy at birth (historical)", and a legend "Continent" with six values), the chart text and user cursor location (node 1.3.2, "2 of 6. Continent equals Europe. 25 values"), the extracted ending point ("Europe"), and the generated instructions "Press the up arrow key. Press the left arrow key. Press the left arrow key."] | 2310.09611#36 | 2310.09611#38 | 2310.09611 | [
"2303.04048"
] |
2310.09611#38 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | [Figure content (continued): the starting point is not explicitly stated, so the active node (1.3.2, "Continent equals Europe") is consulted; the ending node is explicitly stated as the X-axis (node 1.1, "X-axis titled GDP per capita"); the panel then shows the Olli tree text at each keystroke of the traversal.] | 2310.09611#37 | 2310.09611#39 | 2310.09611 | [
"2303.04048"
] |
2310.09611#39 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | Figure 5: Query-specific evaluation for Navigation queries. We pass a text representation of the Olli Tree and the addresses of corresponding nodes within the Tree to an LLM alongside the user question. With the aid of few-shot prompting, the LLM then identifies the starting and ending nodes within the Olli Tree. Should the starting node not be explicitly mentioned within the question, the model instead utilizes the user's current location within the Tree. We then execute a breadth-first search algorithm and relay the shortest path between starting and ending nodes back to the user. | 2310.09611#38 | 2310.09611#40 | 2310.09611 | [
"2303.04048"
] |
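A sketch of the few-shot instruction just described for pulling a starting and ending node out of a colloquial navigation query. The wording and node addresses are illustrative; the real prompt also receives the full list of node addresses and the user's active node.

```python
# Sketch: few-shot prompt for extracting start/end nodes from a navigation query.
FEW_SHOT_NAV_PROMPT = """You are given a tree view of a chart. Each node has an
address and a label. Identify the starting node and the ending node of the
user's navigation request. If no starting node is mentioned, answer
"Starting Point: Not Explicitly Stated" so the user's current node can be used.

Example
Query: "How do I get from the legend to the X-axis?"
Starting Point: 1.3 // Legend titled Continent
Ending Point: 1.1 // X-axis titled GDP per capita

Example
Query: "Take me to Haiti"
Starting Point: Not Explicitly Stated
Ending Point: 3 of 180. Country equals Haiti. 1 value.

Query: "{query}"
"""


def build_nav_prompt(query: str) -> str:
    return FEW_SHOT_NAV_PROMPT.format(query=query)
```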
2310.09611#40 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | [Figure content: three horizontal bar charts showing the distribution of classification accuracy (Correct vs. Incorrect), binary-scale factual accuracy (Correct vs. Incorrect), and the five-point quality rating (Very Poor to Very Good).] Figure 6: Quantitative results display the distributions for classification accuracy, factual accuracy (via a binary scale), and qualitative rating (via a 5-point Likert scale) for user questions in the testing set. of a white space or newline character by the LLM). If there is a 1:1 correspondence, we output "Correct Classification". Otherwise, we output "Incorrect Classification". The evaluation of our testing set yielded a classification accuracy of 88.5% (688/777). The 88 user queries which were incorrectly classified by the LLM consisted of 52 queries that could not be classified (signifying overtly vague or impossible to answer questions) and 36 queries that were classified into a query type that did not correspond with the ground truth. 5.2 Question Response Quality Evaluation We employed GPT-4 to evaluate the quality of natural language responses, following recent studies [20, 37, 50]. With us identifying trustworthiness and transparency as vital factors, we wanted to reflect this fact by emphasizing explanatory responses over more concise ones. We adopted a Likert scale evaluation prompt, inspired by Liu et al. [37], which framed a response' | 2310.09611#39 | 2310.09611#41 | 2310.09611 | [
"2303.04048"
] |
2310.09611#41 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | s "correctness" in terms of its coherence with the ground truth. This coherence metric ranged from 1-5, but for clarity, we adapted it to the Likert Scale of Quality: [Very Poor, Poor, Fair, Good, Very Good]. The evaluation prompt presented two responses: Response A and Response B, with Response A acting as the ground truth. GPT4 was directed to assess the coherence of Response B in relation to Response A. To prevent bias, we refrained from revealing which response was the ground truth or our own creation. GPT4 pinpointed five key elements (keywords/numbers/characters) from Response A and sought matches or their synonyms in Response B. Each match increased the coherence score by one. If the score deviated from the 1-5 range, GPT4 reassessed. The results were formatted as "Score: coherence score". Of the 777 user queries, 365 or 47% were deemed "Very Good" by our evaluation pipeline. Responses rated as "Very Good" often restated the user's question, formatted quantitative data correctly, and included contextual labels. For example, in response to the query " | 2310.09611#40 | 2310.09611#42 | 2310.09611 | [
"2303.04048"
] |
2310.09611#42 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | What country has the highest vaccination rate in the world?" related to a choropleth map, VizAbility answered, "Malta has the highest vaccination rate in the world with 94.0%." This response, more detailed than the ground truth "Malta has the highest vaccination rate according to the data," demonstrates VizAbility's ability to provide comprehensive answers. Moreover, by appropriately rating this response as "Very Good," the evaluation pipeline effectively showcases its capability to judge response quality and depth. The distribution for Good, Fair, and Poor responses was 13.5% (105/777) and 10.9% (85/777), respectively. The pipeline evaluated 149 or 19.5% questions as being "Very Poor" in coherence to the ground truth. As will be discussed in the binary scale evaluation, this statistic is significantly less than the percent of questions deemed to be "Incorrect" by the LLM operating under a binary | 2310.09611#41 | 2310.09611#43 | 2310.09611 | [
"2303.04048"
] |
2310.09611#43 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | scale (31.7%). This might indicate a successful distinction between response quality in the Likert evaluation and factual correctness in the binary scale. For example, the response "House for sale has been decreasing over time from 2014-10-3 to 2021-2-12" to the query "What years have house for sale decreasing over time?" was rated "Poor" on the Likert scale but "Correct" on the binary scale. Compared to the ground truth, "2014, 2016, 2017, 2019, 2020, and 2021 all saw house for sale decrease", the response, while factually accurate in its date range, did not explicitly list out the years. 5.3 Question Response Correctness Evaluation We aimed to evaluate the factual accuracy of VizAbility outputs, essential for its trustworthiness. Using a binary scale, given the binary nature of accuracy, our evaluation method was similar to the Likert Scale but with a key difference. Instead of five key elements, we had GPT extract one or two key items from Response A to compare with Response B. This narrowed focus shifts the evaluation from the verbosity of the response to its factual accuracy. By evaluating verbosity and factual accuracy separately, we better prime the evaluation pipeline for a verbosity metric in the future. Factual accuracy will always remain constant; a response can either be factually correct or incorrect given the data set it is provided. By contrast, verbosity can and should be regulated by the user for their convenience, as is reflected by the feedback received during our qualitative study (see Section 6). | 2310.09611#42 | 2310.09611#44 | 2310.09611 | [
"2303.04048"
] |
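A hedged sketch of how the two GPT-4-based checks described in Sections 5.2 and 5.3 could be implemented. The prompt wording paraphrases the procedure above rather than reproducing the verbatim prompts, and the pre-1.0 openai-python ChatCompletion interface (current in 2023) is assumed.

```python
# Sketch: Likert-style coherence scoring and a binary factual-accuracy check.
import re

import openai

COHERENCE_PROMPT = """Response A: {ground_truth}
Response B: {candidate}

Pick five key elements (keywords, numbers, or characters) from Response A and
check whether each of them, or a synonym, appears in Response B. Add one point
per match to obtain a coherence score. If the score falls outside the range
1-5, reassess. Reply only as "Score: <coherence score>"."""

BINARY_PROMPT = """Response A: {ground_truth}
Response B: {candidate}

Extract one or two key items from Response A. If Response B contains them, or
synonyms of them, reply "Correct"; otherwise reply "Incorrect"."""


def _ask_gpt4(prompt: str) -> str:
    out = openai.ChatCompletion.create(
        model="gpt-4", temperature=0,
        messages=[{"role": "user", "content": prompt}])
    return out.choices[0].message["content"]


def coherence_score(ground_truth: str, candidate: str) -> int:
    reply = _ask_gpt4(COHERENCE_PROMPT.format(ground_truth=ground_truth, candidate=candidate))
    match = re.search(r"Score:\s*(\d)", reply)
    return int(match.group(1)) if match else 0


def is_factually_correct(ground_truth: str, candidate: str) -> bool:
    reply = _ask_gpt4(BINARY_PROMPT.format(ground_truth=ground_truth, candidate=candidate))
    return reply.strip().lower().startswith("correct")
```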
2310.09611#44 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | [Figure content: ten System Usability Scale statements (e.g., "I think that I would like to use this system frequently", "I found the system unnecessarily complex", "I thought the system was easy to use", "I felt very confident using the system", "I found the various functions in this system were well integrated"), each paired with a stacked bar of responses from Strongly Disagree to Strongly Agree.] Figure 7: System Usability Scale Survey # 6 EVALUATION: USER STUDY WITH BLIND PEOPLE During the development process, we engaged with a blind participant who had prior experience using a screen reader on a daily basis. This participant, as a design partner, provided feedback at two intermediate stages of development. In addition to this intermediate prototype evaluation, we conducted a formal usability study with six additional blind/low-vision individuals. | 2310.09611#43 | 2310.09611#45 | 2310.09611 | [
"2303.04048"
] |
2310.09611#45 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | Our evaluation deemed 69.4% or 539 of the 777 questions to be "Correct". Of particular interest is VizAbility's ability to avoid hallucinations. For instance, VizAbility responded "The variables you mentioned are not provided in the dataset" to the query, "What is the date of this data?". Framed in the context of the ground truth, "Data pertaining to this question is not provided", GPT (operating under the binary scale) evaluated the response as "Correct". Many user questions comprising the testing set were ambiguous or referenced variables not found within the respective data sets (as can be witnessed in the example above). This is a natural consequence of emphasizing self-guided exploration. Users will tend to push the boundaries of our model in terms of what questions it can comprehend; therefore, it is crucial that we incorporate a pipeline to avoid any potential hallucinations. 5.4 Comparisons to an existing system We also sought to frame our evaluation in the context of similar external systems - one such being an automatic chart question-answering pipeline that generates visual explanations describing how the answer was obtained [31]. In the evaluation of the system with our data set from blind people, the model reported an overall factual accuracy rate of 16% [32]. It is important to note that this model has a limited number of compatible chart types, with it only supporting bar and line charts. Seeking to maintain consistency between the two models, we extracted data solely from the bar and line charts for a more fitting comparison. When narrowing the scope to these two types of visual stimuli, VizAbility reports 68% accuracy in outputting " | 2310.09611#44 | 2310.09611#46 | 2310.09611 | [
"2303.04048"
] |
2310.09611#46 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | Correctâ responses (based on the binary scale), signifying a significant improvement in user query handling. 6.1 Participants We recruited six blind/low-vision individuals from the National Institute of the Blind. Their demographics are shown in Table 2. We tried to recruit diverse participants based on their gender and screen reader expertise. 6.2 Procedure Upon entering the session, participants opened up our system in a web browser and chose a chart of their choice among the four op- tions: line chart, bar chart, scatterplot, or choropleth map. The study was divided into three parts: the first two focused on the individual components of our multimodal approachâ the keyboard-navigable tree view and the conversational module. Each was evaluated in a standalone setting. The final part centered on their combined functionality to assess the potential advantages of their collabora- tive operation. In the beginning, we refrained from providing any external guidance so that the participantsâ experiences could better imitate those of a real-world situation. 6.3 Behavioral Observations Here, we detail participantsâ actions and feedback while using Viz- Ability during the study sessions. 6.3.1 Navigating the tree view. Participants were able to utilize the tree view using arrow keys and tab shortcuts as reported in prior studies [33, 55], although the learning curve proved to be slightly steeper for P2 and P5. P5 remarked on the â cumbersomeâ structure of the Tree for the Bar Chart, noting that it was due to the presence of over 170 unique data values. Rather than tediously | 2310.09611#45 | 2310.09611#47 | 2310.09611 | [
"2303.04048"
] |
2310.09611#47 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | Table 2: Participant Information Distribution. Participants P1-P6. Gender: P1, P5, and P6 are male; P2, P3, and P4 are female. Age range (P1-P6): 45-54, 65 or older, 25-34, 25-34, 45-54, 55-64. Vision level: two participants are blind since birth and four are blind with later onset. Screen reader expertise (P1-P6): Expert, Advanced, Intermediate, Advanced, Expert, Advanced. Screen reader type (P1-P6): JAWS, VoiceOver, JAWS, JAWS, JAWS, NVDA. Chart selected (P1-P6): Bar Chart, Line Chart, Choropleth Map, Scatterplot, Bar Chart, Choropleth Map. | 2310.09611#46 | 2310.09611#48 | 2310.09611 | [
"2303.04048"
] |
2310.09611#48 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | navigating through the data using the down arrow key, P5 wished for a more efficient method to move between specific nodes within the tree view. P2 echoed this sentiment, highlighting the risk of disorientation, particularly with larger and more intricate data sets. Several participants (P1, P3, P4, P5, P6) independently recognized the distinctive structure of the tree view, which presents a data set through visual encoding variables. For example, P5, after navigating a choropleth map and expressing frustration over manually sifting through 172 countries without an apparent order, was pleasantly surprised when using the right arrow key led him to the same data set, this time organized by vaccination rates in 10 percent increments. This participant then confirmed that the tree view was more effective in conveying a visualizationâ s structure compared to a traditional data table. was able to deduce that the color â orange-redâ indicates positive temperature values. | 2310.09611#47 | 2310.09611#49 | 2310.09611 | [
"2303.04048"
] |
2310.09611#49 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | We also observed an affinity for contextual queries among the participant pool. One user (P4) who had little to no experience with map visualizations prior to the study asked: â What is a choropleth map?â , to which the LLM outputted a correct response. However, when the same participant asked, â What is a temporal polarityâ (pertaining to the bar chart), the LLM responded with a definition tied to linguistics. Although initially taken aback, the user acknowl- edged the possible ambiguities with the word â temporal polarityâ (which has multiple meanings), and upon rephrasing her query to incorporate more precision, received a more accurate response. The participant attributed her realization to the VizAbilityâ s justification (outputted alongside the response), which explicitly told her that it sourced its answer from the internet. After having used their keyboard to navigate through the tree view, participants were asked to describe the visual stimuli to the best of their capabilities. Responses were mixed, with two partic- ipants (P3 and P4) only being able to identify the two variables that were being compared. This suggests that despite being a good overall indicator of chart structure, the Olli Tree alone is not suffi- cient for complete data visualization. This was reaffirmed by the usefulness rating most individuals attributed to the system, with the average hovering around a 3 out of 5. | 2310.09611#48 | 2310.09611#50 | 2310.09611 | [
"2303.04048"
] |
2310.09611#50 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | 6.3.2 Exploring the conversational module. Although 4 Participants (P1, P2, P3, P5) gravitated towards the text input modality, all af- firmed the importance of retaining an option for voice input as well. All but one participant (P1, P2, P3, P4, P5) immediately asked data-driven questions (either simple fetches for data, like â What is the vaccination percentage for Haitiâ or more complex queries involving multiple steps), with P6 instead asking a contextual ques- tion: â Is there a way to rank the various countries into continents?â (regarding the choropleth map). This coincided with subsequent participant ratings for the usefulness of the four query types, with all users asserting â Analytical Queriesâ as the most useful for chart comprehension. Most users (P1, P2, P3, P5) could not fathom the possibility that more broad questions were supported. Following this independent exploration of the conversational model, participants were made aware of the four distinct types of queries and were once again directed to input their own questions; however, this time around, they had to broadly adhere to one of the 4 query classifications. Users demonstrated a greater proficiency with the conversational module during this guided exploration, with P1 even chaining multiple individual queries to arrive at a broader understanding of the chart. By consecutively asking â | 2310.09611#49 | 2310.09611#51 | 2310.09611 | [
"2303.04048"
] |
2310.09611#51 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | 6.3.3 Integrating the two components. Participants were then introduced to navigation queries. We explained the purpose of these queries, emphasizing their role in wayfinding and orientation, and then allowed them to formulate their own navigation queries. All users concurred that these queries were essential for understanding the tree view, a sentiment echoed in the usefulness ratings they assigned to the integrated system. While previous ratings averaged around 3, after this introduction, participants consistently rated the system between 4 and 5, with 5 being extremely useful. Most participants tended to input short and concise navigation queries. Rather than inputting "How do I get from my current location to the percentage vaccinated value for Guam", one user (P5) opted for the much simpler "Take me to Guam". Showcasing its conversational strengths, our model was able to precisely identify the starting as well as ending nodes from this colloquial text, yielding the instructions: "Press the right arrow key. Press the down arrow key. Press the down arrow key." 6.4 User Feedback and Reflection Participants completed a post-study questionnaire based on the System Usability Scale (see Figure 7). Notably, most participants (4 Agree; 1 Strongly Agree; 1 Disagree) concurred with the statement: "I found the various functions in this system were well integrated." Results can be found in Figure 7. Participants also valued VizAbility's commitment to accessibility and transparency, especially within the conversational module. They envisioned real-world applications for VizAbility, relating it to their personal experiences. For instance, P1 saw its potential in providing testing accommodations for GRE exams, noting its superiority over human proctors | 2310.09611#50 | 2310.09611#52 | 2310.09611 | [
"2303.04048"
] |
2310.09611#52 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | in translating visual graphs. P6, who teaches the NVDA screen reader to the BLV community, expressed interest in incorporating the system into his lessons. However, there was also constructive feedback. Although most participants deemed the structure of navigation query responses (a sequence of directions) to be satisfactory, P2 advised that the system should automatically transport the user's cursor to the desired location, as opposed to currently requiring the user to manually traverse the tree view themselves. One participant (P5) sought more control over the nature of LLM responses outputted by the conversational model. He brought up the necessity of having some implementation of a dial to regulate the verboseness of the outputted answers. The same user who commented on the cumbersome structure of the tree view (P5) further elaborated that he would prefer a more concise raw data table in its place, especially for less extensive datasets. 7 DISCUSSION & FUTURE WORK Our evaluation studies underscore the potential of VizAbility and also pinpoint areas for enhancement. We reflect on the limitations and challenges, paving the way for future opportunities. | 2310.09611#51 | 2310.09611#53 | 2310.09611 | [
"2303.04048"
]
2310.09611#53 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | relevant follow-up questions after an initial query could further enhance efficient chart exploration. Our quantitative study results indicate room for improvement as well. Areas of enhancement encompass a more accurate understanding of the user's context when drawing upon external knowledge, discerning unanswerable questions, as well as refining the accuracy of analytical and visual queries. While the conversational module may not fully decipher the inherent ambiguities of natural languages, our commitment to crafting safe and explanatory responses enabled participants to readily rectify errors. 7.2 Need for Rigorous Benchmark Testing The cornerstone of our project is the conversational module, designed to address the inherent limitations of keyboard navigation. While the existing dataset enabled a meaningful evaluation of response quality based on real-world queries, our study revealed the need for a more extensive benchmarking dataset. Our evaluation was constrained not only by the four chart types but also by the limited range of questions, preventing a full assessment of VizAbility's capabilities. | 2310.09611#52 | 2310.09611#54 | 2310.09611 | [
"2303.04048"
] |
2310.09611#54 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | Specifically, we need to evaluate situa- tional questions focused on a userâ s current point of interest within the tree view. Moreover, questions that hinge on understanding prior conversational context were not explored in this study. Given the generative capabilities of LLMs, synthetically generating these additional questions could be a viable approach. 7.1 Limitations and Opportunities The user study yielded actionable insights to enhance VizAbility, leading to several post-study modifications. For example, we added data tables as an alternative to the tree view and introduced a direct navigation option to the target node. Despite our initial aim to offer concise and informative answers, P5â s recommendation for user-adjustable response verbosity un- derscored the importance of user agency over designer-imposed settings. Given that speech is processed serially, the text length read by screen readers becomes a pivotal design consideration. This concern has been reiterated in prior research [7, 9, 27, 55]. Similarly, offering users the capability to customize node descriptions in the tree view could prove advantageous. Our quantitative study result shows that there is still an oppor- tunity to improve. These include more accurately understanding the user situation when eliciting contextual knowledge, when to know which question is not answerable, in addition to improving the accuracy of analytical and visual queries. Although the con- versational module is not perfect in figuring out the ambiguous nature of natural languages, our efforts to make responses safe and explanatory still allowed participants to easily recover from mistakes. Participants primarily attempted data queries when no guidance was provided, indicating difficulty in figuring out all four types of queries. This underscores the need for help to bridge the gap in execution. Likewise, one participant (P2) also highlighted the potential benefit of help documentation. Instead of merely offering passive documentation, integrating a real-time help function could be more effective. For example, when a userâ s cursor lands on a category, the system could convey tooltip-like info suggesting pos- sible questions about the current selection. Additionally, suggesting In our study, we compared our system exclusively with another that also assumes the availability of chart specifications, emphasiz- ing reasoning over image understanding. While recent vision-based question-answering systems like ChartQA [25] are noteworthy, public chatbots like Bing and Bard have also started supporting image inputs. | 2310.09611#53 | 2310.09611#55 | 2310.09611 | [
"2303.04048"
] |
2310.09611#55 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | Although these systems are still in the early stages of understanding synthetic images, such as graphic designs and data visualizations, beyond natural scenes [11], a comparison with VizAbility could be insightful. A balanced evaluation approach might involve using an independent image parser to feed data into VizAbility, thereby concentrating on reasoning capabilities. Addi- tionally, to refine VizAbility, we plan to explore various prompting strategies, such as further leveraging user locality information or adjusting the number of examples in query classification. 7.3 Integrating into Existing Visualization Tools Since VizAbility operates under the assumption that a chart spec- ification is available, it may not be directly applicable to charts currently found on the web. Instead, our vision is to integrate Viz- Ability within existing data visualization platforms. Prior research underscores that many data visualization practitioners base their choices on the accessibility features of these platforms [26]. An- other study delves into the extent of accessible design support these tools offer [33]. Exploring the design space to determine how Viz- Ability can seamlessly fit into current data visualization workflows would be compelling. Additionally, considering the degree of cus- tomization for data visualization designers, such as setting default verbosity levels, warrants further investigation. | 2310.09611#54 | 2310.09611#56 | 2310.09611 | [
"2303.04048"
] |
2310.09611#56 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | 8 CONCLUSION In this work, we presented VizAbility, a novel multimodal approach to enhancing accessibility in data visualizations, catering to the needs of the BLV community. By seamlessly integrating structured chart and table navigation via keyboard inputs with conversational interactions through verbal commands, VizAbility offers a comprehensive solution that bridges the gap between traditional visualization tools and the unique requirements of BLV users. Evaluations of the system underscored its potential value, with participants appreciating the integration of modalities and the system's commitment to accessibility and transparency. Based on our evaluations, we've identified several avenues for further refinement, including the need for user-centric customization options and enhanced guidance mechanisms. Additionally, a more comprehensive benchmarking approach is essential to elevate the performance of our question-answering capabilities. REFERENCES [1] [n. d.]. | 2310.09611#55 | 2310.09611#57 | 2310.09611 | [
"2303.04048"
] |
2310.09611#57 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | CSS color codes. https://www.w3.org/wiki/CSS/Properties/color/ keywords. Accessed: October 17, 2023. [2] [n. d.]. LangChain CSV Agent Documentation. https://python.langchain.com/ docs/integrations/toolkits/csv. Accessed: October 17, 2023. [3] [n. d.]. LangChain: Serp API. https://python.langchain.com/docs/integrations/ tools/serpapi. Accessed on Sep 7, 2023. | 2310.09611#56 | 2310.09611#58 | 2310.09611 | [
"2303.04048"
] |
2310.09611#58 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | [4] [n. d.]. Observable Plot. https://observablehq.com/plot/. Accessed on Sep 7, 2023. [5] [n. d.]. Vega View API. https://vega.github.io/vega/docs/api/view/. Accessed: October 17, 2023. [6] [n. d.]. Whisper. https://openai.com/research/whisper. Accessed on Sep 7, 2023. https://www.w3.org/WAI/tutorials/images/ [7] 2023. W3C Complex Images. complex/. [8] Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. Vqa: Visual question answering. In Proceedings of the IEEE international conference on computer vision (Santiago, Chile). | 2310.09611#57 | 2310.09611#59 | 2310.09611 | [
"2303.04048"
] |
2310.09611#59 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | IEEE, 2425â 2433. [9] HK Ault, JW Deloge, RW Lapp, MJ Morgan, and JR Barnett. 2002. Evaluation of long descriptions of statistical graphics for blind and low vision web users. In Computers Helping People with Special Needs: 8th International Conference, ICCHP 2002 Linz, Austria, July 15â 20, 2002 Proceedings 8. Springer, 517â 526. [10] Matt Blanco, Jonathan Zong, and Arvind Satyanarayan. 2022. Olli: | 2310.09611#58 | 2310.09611#60 | 2310.09611 | [
"2303.04048"
] |
2310.09611#60 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | An Extensible Visualization Library for Screen Reader Accessibility. In IEEE VIS Posters. http: //vis.csail.mit.edu/pubs/olli [11] Zoya Bylinskii, Nam Wook Kim, Peter Oâ Donovan, Sami Alsheikh, Spandan Madan, Hanspeter Pfister, Fredo Durand, Bryan Russell, and Aaron Hertzmann. 2017. Learning visual importance for graphic designs and data visualizations. In Proceedings of the 30th Annual ACM symposium on user interface software and technology. 57â 69. [12] Jaemin Cho, Jie Lei, Hao Tan, and Mohit Bansal. 2021. Unifying vision-and- language tasks via text generation. In International Conference on Machine Learn- ing. PMLR, 1931â 1942. | 2310.09611#59 | 2310.09611#61 | 2310.09611 | [
"2303.04048"
] |
2310.09611#61 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | [13] Pierre Dognin, Igor Melnyk, Youssef Mroueh, Inkit Padhi, Mattia Rigotti, Jarret Ross, Yair Schiff, Richard A Young, and Brian Belgodere. 2020. Image captioning as an assistive technology: Lessons learned from vizwiz 2020 challenge. arXiv preprint arXiv:2012.11696 (2020). [14] Frank Elavsky, Lucas Nadolskis, and Dominik Moritz. 2023. | 2310.09611#60 | 2310.09611#62 | 2310.09611 | [
"2303.04048"
] |
2310.09611#62 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | Data Navigator: An accessibility-centered data navigation toolkit. arXiv preprint arXiv:2308.08475 (2023). [15] Christin Engel and Gerhard Weber. 2017. Analysis of tactile chart design. In Proceedings of the 10th International Conference on PErvasive Technologies Related to Assistive Environments. 197â 200. [16] Christin Engel and Gerhard Weber. 2017. Improve the accessibility of tactile charts. In Human-Computer Interaction-INTERACT 2017: 16th IFIP TC 13 International Conference, Mumbai, India, September 25â 29, 2017, Proceedings, Part I 16. Springer, 187â 195. [17] Christin Engel and Gerhard Weber. 2018. | 2310.09611#61 | 2310.09611#63 | 2310.09611 | [
"2303.04048"
] |
2310.09611#63 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | A user study to evaluate tactile charts with blind and visually impaired people. In Computers Helping People with Special Needs: 16th International Conference, ICCHP 2018, Linz, Austria, July 11-13, 2018, Proceedings, Part II 16. Springer, 177â 184. [18] Jean-Daniel Fekete, Jarke J Van Wijk, John T Stasko, and Chris North. 2008. The value of information visualization. Information Visualization: Human-Centered Issues and Perspectives (2008), 1â | 2310.09611#62 | 2310.09611#64 | 2310.09611 | [
"2303.04048"
] |
2310.09611#64 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | 18. Gorniak et al. [19] Leo Ferres, Gitte Lindgaard, Livia Sumegi, and Bruce Tsuji. 2013. Evaluating a tool for improving accessibility to charts and graphs. ACM Transactions on Computer-Human Interaction (TOCHI) 20, 5 (2013), 1â 32. [20] Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, and Pengfei Liu. 2023. Gptscore: Evaluate as you desire. arXiv preprint arXiv:2302.04166 (2023). [21] John A Gardner and Vladimir Bulatov. [n. d.]. Making Scientific Graphics Acces- sible With Viewplus Iveo®. [22] A Jonathan R Godfrey, Paul Murrell, and Volker Sorge. 2018. An accessible interaction model for data visualisation in statistics. In Computers Helping People with Special Needs: 16th International Conference, ICCHP 2018, Linz, Austria, July 11-13, 2018, Proceedings, Part I 16. | 2310.09611#63 | 2310.09611#65 | 2310.09611 | [
"2303.04048"
] |
2310.09611#65 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | Springer, 590â 597. [23] Cagatay Goncu and Kim Marriott. 2011. GraVVITAS: generic multi-touch presen- tation of accessible graphics. In IFIP Conference on Human-Computer Interaction. Springer, 30â 48. https://doi.org//10.1007/978-3-642-23774-4_5 [24] Danna Gurari, Qing Li, Abigale J Stangl, Anhong Guo, Chi Lin, Kristen Grauman, Jiebo Luo, and Jeffrey P Bigham. 2018. Vizwiz grand challenge: Answering visual questions from blind people. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (Salt Lake City, UT, USA). IEEE, 3608â 3617. [25] Enamul Hoque, Parsa Kavehzadeh, and Ahmed Masry. 2022. | 2310.09611#64 | 2310.09611#66 | 2310.09611 | [
"2303.04048"
] |
2310.09611#66 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | Chart Question Answering: State of the Art and Future Directions. arXiv preprint arXiv:2205.03966 (2022). [26] Shakila Cherise S Joyner, Amalia Riegelhuth, Kathleen Garrity, Yea-Seul Kim, and Nam Wook Kim. 2022. Visualization Accessibility in the Wild: Challenges Faced by Visualization Designers. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (New Orleans, LA, USA) (CHI â 22). Association for Computing Machinery, New York, NY, USA, Article 83, 19 pages. https://doi.org/10.1145/3491102.3517630 [27] Crescentia Jung, Shubham Mehta, Atharva Kulkarni, Yuhang Zhao, and Yea-Seul Kim. 2021. Communicating visualizations without visuals: Investigation of visu- alization alternative text for people with visual impairments. IEEE transactions on visualization and computer graphics 28, 1 (2021), 1095â 1105. [28] Kushal Kafle and Christopher Kanan. 2017. | 2310.09611#65 | 2310.09611#67 | 2310.09611 | [
"2303.04048"
] |
2310.09611#67 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | Visual question answering: Datasets, algorithms, and future challenges. Computer Vision and Image Understanding 163 (2017), 3â 20. [29] Kushal Kafle, Brian Price, Scott Cohen, and Christopher Kanan. 2018. Dvqa: Understanding data visualizations via question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition (Salt Lake City, UT, USA). IEEE, 5648â 5656. [30] Samira Ebrahimi Kahou, Adam Atkinson, Vincent Michalski, à kos Kádár, Adam Trischler, and Yoshua Bengio. 2017. FigureQA: An Annotated Figure Dataset for Visual Reasoning. CoRR abs/1710.07300 (2017). arXiv:1710.07300 http: //arxiv.org/abs/1710.07300 [31] Dae Hyun Kim, Enamul Hoque, and Maneesh Agrawala. 2020. | 2310.09611#66 | 2310.09611#68 | 2310.09611 | [
"2303.04048"
] |
2310.09611#68 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | Answering Ques- tions about Charts and Generating Visual Explanations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI â 20). Association for Computing Machinery, New York, NY, USA, 1â 13. https://doi.org/10.1145/3313831.3376467 [32] Jiho Kim, Arjun Srinivasan, Nam Wook Kim, and Yea-Seul Kim. 2023. | 2310.09611#67 | 2310.09611#69 | 2310.09611 | [
"2303.04048"
] |
2310.09611#69 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | Exploring Chart Question Answering for Blind and Low Vision Users. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. 1â 15. [33] N. W. Kim, G. Ataguba, S. C. Joyner, Chuangdian Zhao, and Hyejin Beyond Alternative Text and tables: Comparative Analy- Computer Graph- https://doi.org/10.1111/cgf.14833 Im. 2023. sis of Visualization Tools and Accessibility Methods. ics Forum 42, 3 (2023), 323â 335. arXiv:https://onlinelibrary.wiley.com/doi/pdf/10.1111/cgf.14833 [34] N. W. Kim, S. C. Joyner, A. Riegelhuth, and Y. Kim. 2021. Accessi- ble Visualization: Design Space, Opportunities, and Challenges. Computer Graphics Forum 40, 3 (2021), 173â 188. https://doi.org/10.1111/cgf.14298 arXiv:https://onlinelibrary.wiley.com/doi/pdf/10.1111/cgf.14298 [35] Steven Landau and Karen Gourgey. 2001. | 2310.09611#68 | 2310.09611#70 | 2310.09611 | [
"2303.04048"
] |
2310.09611#70 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | Development of a talking tactile tablet. Information Technology and Disabilities 7, 2 (2001). [36] Bongshin Lee, Eun Kyoung Choe, Petra Isenberg, Kim Marriott, and John Stasko. IEEE Computer 2020. Reaching broader audiences with data visualization. Graphics and Applications 40, 2 (2020), 82â 90. [37] Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. 2023. Gpteval: Nlg evaluation using gpt-4 with better human alignment. arXiv preprint arXiv:2303.16634 (2023). [38] Alan Lundgard and Arvind Satyanarayan. 2021. Accessible visualization via natural language descriptions: A four-level model of semantic content. IEEE transactions on visualization and computer graphics 28, 1 (2021), 1073â 1083. [39] Kim Marriott, Bongshin Lee, Matthew Butler, Ed Cutrell, Kirsten Ellis, Cagatay Goncu, Marti Hearst, Kathleen McCoy, and Danielle Albers Szafir. 2021. Inclusive data visualization for people with disabilities: a call to action. Interactions 28, 3 (2021), 47â 51. [40] Ahmed Masry, Do Xuan Long, Jia Qing Tan, Shafiq Joty, and Enamul Hoque. 2022. ChartQA: A Benchmark for Question Answering about Charts with Visual and Logical Reasoning. arXiv preprint arXiv:2203.10244 (2022). VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction Conference acronym â XX, June 03â 05, 2018, Woodstock, NY | 2310.09611#69 | 2310.09611#71 | 2310.09611 | [
"2303.04048"
] |
2310.09611#71 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | [41] Tomas Murillo-Morales and Klaus Miesenberger. 2017. Non-visually performing analytical tasks on statistical charts. In Harnessing the Power of Technology to Improve Lives. IOS Press, 339â 346. [42] Sabrina Paneels and Jonathan C Roberts. 2009. Review of designs for haptic data visualization. IEEE Transactions on Haptics 3, 2 (2009), 119â 137. [43] Prabodh Sakhardande, Anirudha Joshi, Charudatta Jadhav, and Manjiri Joshi. 2019. | 2310.09611#70 | 2310.09611#72 | 2310.09611 | [
"2303.04048"
] |
2310.09611#72 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | Comparing user performance on parallel-tone, parallel-speech, serial-tone and serial-speech auditory graphs. In Human-Computer Interactionâ INTERACT 2019: 17th IFIP TC 13 International Conference, Paphos, Cyprus, September 2â 6, 2019, Proceedings, Part I 17. Springer, 247â 266. [44] Arvind Satyanarayan, Dominik Moritz, Kanit Wongsuphasawat, and Jeffrey Heer. 2016. | 2310.09611#71 | 2310.09611#73 | 2310.09611 | [
"2303.04048"
] |
2310.09611#73 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | Vega-lite: A grammar of interactive graphics. IEEE transactions on visual- ization and computer graphics 23, 1 (2016), 341â 350. [45] Ather Sharif, Olivia H. Wang, Alida T. Muongchan, Katharina Reinecke, and Jacob O. Wobbrock. 2022. VoxLens: Making Online Data Visualizations Accessible with an Interactive JavaScript Plug-In. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (New Orleans, LA, USA) (CHI â 22). Association for Computing Machinery, New York, NY, USA, Article 478, 19 pages. https://doi.org/10.1145/3491102.3517431 [46] Alexa F. Siu, Danyang Fan, Gene S-H Kim, Hrishikesh V. Rao, Xavier Vazquez, Sile Oâ Modhrain, and Sean Follmer. 2021. | 2310.09611#72 | 2310.09611#74 | 2310.09611 | [
"2303.04048"
] |
2310.09611#74 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | COVID-19 Highlights the Issues Facing Blind and Visually Impaired People in Accessing Data on the Web. In Proceedings of the 18th International Web for All Conference (Ljubljana, Slovenia) (W4A â 21). Association for Computing Machinery, New York, NY, USA, Article 11, 15 pages. https://doi.org/10.1145/3430263.3452432 [47] Marzia Taibbi, Cristian Bernareggi, Andrea Gerino, Dragan Ahmetovic, and Sergio Mascetti. 2014. Audiofunctions: Eyes-free exploration of mathematical functions on tablets. In International Conference on Computers for Handicapped Persons. | 2310.09611#73 | 2310.09611#75 | 2310.09611 | [
"2303.04048"
] |
2310.09611#75 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | Springer, 537â 544. https://doi.org//10.1007/978-3-319-08596-8_84 [48] John R Thompson, Jesse J Martinez, Alper Sarikaya, Edward Cutrell, and Bongshin Lee. 2023. Chart Reader: Accessible Visualization Experiences Designed with Screen Reader Users. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. 1â 18. [49] Alexandra Vtyurina, Adam Fourney, Meredith Ringel Morris, Leah Findlater, and Ryen W White. 2019. Verse: Bridging screen readers and voice assistants for enhanced eyes-free web search. In The 21st International ACM SIGACCESS Conference on Computers and Accessibility. ACM, New York, NY, USA, 414â 426. [50] Jiaan Wang, Yunlong Liang, Fandong Meng, Haoxiang Shi, Zhixu Li, Jinan Xu, Jianfeng Qu, and Jie Zhou. 2023. Is chatgpt a good nlg evaluator? a preliminary study. arXiv preprint arXiv:2303.04048 (2023). [51] Ruobin Wang, Crescentia Jung, and Y Kim. 2022. | 2310.09611#74 | 2310.09611#76 | 2310.09611 | [
"2303.04048"
] |
2310.09611#76 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | Seeing through sounds: Mapping auditory dimensions to data and charts for people with visual impairments. In Computer Graphics Forum, Vol. 41. Wiley Online Library, 71â 83. [52] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems 35 (2022), 24824â 24837. [53] Markus Weninger, Gerald Ortner, Tobias Hahn, Olaf Drümmer, and Klaus Miesen- berger. 2015. ASVG- Accessible Scalable Vector Graphics: intention trees to make charts more accessible and usable. Journal of assistive technologies 9, 4 (2015), 239â 246. [54] Qi Wu, Damien Teney, Peng Wang, Chunhua Shen, Anthony Dick, and Anton van den Hengel. 2017. Visual question answering: A survey of methods and datasets. Computer Vision and Image Understanding 163 (2017), 21â 40. [55] Jonathan Zong, Crystal Lee, Alan Lundgard, JiWoong Jang, Daniel Hajas, and Arvind Satyanarayan. 2022. | 2310.09611#75 | 2310.09611#77 | 2310.09611 | [
"2303.04048"
] |
2310.09611#77 | VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction | Rich Screen Reader Experiences for Accessible Data Visualization. Computer Graphics Forum 41, 3 (2022), 15-27. https://doi.org/10.1111/cgf.14519 arXiv:https://onlinelibrary.wiley.com/doi/pdf/10.1111/cgf.14519 | 2310.09611#76 | 2310.09611 | [
"2303.04048"
] |
|
2310.08118#0 | Can Large Language Models Really Improve by Self-critiquing Their Own Plans? | arXiv:2310.08118v1 [cs.AI] 12 Oct 2023 # Can Large Language Models Really Improve by Self-critiquing Their Own Plans? # Karthik Valmeekam* School of Computing & AI Arizona State University, Tempe. [email protected] # Matthew Marquez* School of Computing & AI Arizona State University, Tempe. [email protected] Subbarao Kambhampati School of Computing & AI Arizona State University, Tempe. [email protected] | 2310.08118#1 | 2310.08118 | [
"2305.10601"
] |
|
2310.08118#1 | Can Large Language Models Really Improve by Self-critiquing Their Own Plans? | # Abstract There have been widespread claims about Large Language Models (LLMs) being able to successfully verify or self-critique their candidate solutions in reasoning problems in an iterative mode. Intrigued by those claims, in this paper we set out to investigate the verification/self-critiquing abilities of large language models in the context of planning. We evaluate a planning system that employs LLMs for both plan generation and verification. We assess the verifier LLMâ s performance against ground-truth verification, the impact of self-critiquing on plan generation, and the influence of varying feedback levels on system performance. Using GPT-4, a state-of-the-art LLM, for both generation and verification, our findings reveal that self-critiquing appears to diminish plan generation performance, especially when compared to systems with external, sound verifiers and the LLM verifiers in that system produce a notable number of false positives, compromising the systemâ s reliability. Additionally, the nature of feedback, whether binary or detailed, showed minimal impact on plan generation. Collectively, our results cast doubt on the effectiveness of LLMs in a self-critiquing, iterative framework for planning tasks. | 2310.08118#0 | 2310.08118#2 | 2310.08118 | [
"2305.10601"
] |
2310.08118#2 | Can Large Language Models Really Improve by Self-critiquing Their Own Plans? | # 1 Introduction Large Language Models have rapidly captured the attention of the AI research community with their exceptional natural language completion capabilities. Trained on web-scale language corpora, these models have demonstrated the ability to generate seemingly valuable completions across a wide range of topics. This led to a surge of interest in determining whether such models were able to perform well on reasoning tasks. Even though initial anecdotal results showed promise, systematic studies revealed their incompetence in reasoning, be it planning [12] or simple arithmetic and logic [3]. These results, which question the robustness of their reasoning abilities, led researchers to explore ways to improve these systems. Of particular interest to us is the emerging research on self-critiquing, where LLMs are used to critique their own candidate generations and iterate. Current works [15, 10, 14] exhibit considerable optimism about using LLMs to critique their own candidate generations, especially in an iterative setting where they keep refining their candidate generations. Additionally, the notion that, for reasoning tasks, verifying correctness is computationally simpler than generation adds to the optimism. However, there are grounds to be skeptical about it, as | 2310.08118#1 | 2310.08118#3 | 2310.08118 | [
"2305.10601"
] |
2310.08118#3 | Can Large Language Models Really Improve by Self-critiquing Their Own Plans? | # *Equal Contribution Preprint. Under Review. the complexity of a reasoning task in the classical sense should be irrelevant to models like LLMs that do approximate retrieval. Intrigued by the prevailing optimism, in this paper we set out to systematically investigate the effectiveness of using LLMs to critique their own generations in the context of planning. We look at the simplest class of planning problems: goal-directed deterministic planning problems, colloquially referred to as classical planning problems. Our methodology employs a planning system that utilizes the same LLM for both generation and verification, which we term the LLM+LLM system, in an iterative setting. Within this setting, the generator LLM continuously produces candidate plans, drawing upon feedback from the verifier LLM, until the verifier LLM either approves a candidate plan as correct or the number of iterations surpasses a predefined threshold. We present an empirical evaluation of (i) the effect of self-critiquing on the plan generation performance of the overall LLM+LLM system, (ii) the performance of the verifier LLM in comparison to ground-truth verification, and finally (iii) the influence of varying feedback levels, while critiquing the LLM's generation, on the overall system performance (an illustrative sketch of the verifier-versus-ground-truth tally in (ii) is given after this record). For our study, we use GPT-4 [9] as both the generator and verifier. Our findings suggest that self-critiquing degrades plan generation performance compared to when an external, sound verifier is utilized. This decline in performance can be directly attributed to the verifier LLM's subpar results. The verifier LLM yields a significant number of false positives, which can severely undermine the system's reliability. Furthermore, we explored whether the nature of feedback on invalid plans influences plan generation performance. Our results indicate that the type of feedback, whether it is merely binary verification or verification combined with detailed feedback on the errors of the generated plan, does not significantly impact plan generation performance. Thus, our systematic investigation offers compelling preliminary evidence to question the efficacy of LLMs as verifiers for planning tasks within an iterative, self-critiquing framework. In the rest of the paper, we first present the related work, then the required background, before delving into the methodology and the evaluation. # 2 Related Work There has been significant interest in investigating the reasoning capabilities of LLMs, spanning from planning [12] to logic and arithmetic [3], and even puzzles [15]. | 2310.08118#2 | 2310.08118#4 | 2310.08118 | [
"2305.10601"
] |
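The sketch below illustrates how the comparison in point (ii) above can be tallied: the verifier LLM's verdict on each candidate plan is matched against ground-truth verification, and false positives (invalid plans the LLM accepts) are counted. This is an illustrative assumption about the bookkeeping, not the authors' evaluation code; the function name and data layout are hypothetical.

```python
from typing import List, Tuple


def verifier_confusion(verdicts: List[Tuple[bool, bool]]) -> dict:
    """Tally verifier-LLM verdicts against ground truth.

    Each item is (llm_says_valid, ground_truth_valid) for one candidate plan.
    A false positive (FP) is an invalid plan that the LLM verifier accepts.
    """
    counts = {"TP": 0, "FP": 0, "TN": 0, "FN": 0}
    for llm_valid, truly_valid in verdicts:
        if llm_valid and truly_valid:
            counts["TP"] += 1
        elif llm_valid and not truly_valid:
            counts["FP"] += 1  # invalid plan accepted: undermines reliability
        elif not llm_valid and truly_valid:
            counts["FN"] += 1  # valid plan rejected: wastes back-prompting budget
        else:
            counts["TN"] += 1
    return counts


# Toy example: three candidate plans, one of which the LLM verifier wrongly accepts.
print(verifier_confusion([(True, True), (True, False), (False, False)]))
# {'TP': 1, 'FP': 1, 'TN': 1, 'FN': 0}
```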
2310.08118#4 | Can Large Language Models Really Improve by Self-critiquing Their Own Plans? | As the initial excitement from triumphant anecdotes about LLMs' reasoning capabilities began to wane with systematic studies [12, 11, 3], researchers proposed that allowing LLMs to verify their own candidate solutions and iterate over this process could enhance their reasoning abilities [10, 7, 6, 14]. Our work systematically investigates the effect of iterative self-critiquing in the context of planning. There have also been studies that utilize multiple LLMs to generate and verify candidate solutions, either in the form of a debate [2] or through cross-examination [1]. However, these studies still rely solely on the verification/self-critiquing abilities of the LLMs, an aspect our work critically examines in the context of planning. Our results provide compelling reasons to question the use of LLMs for self-critiquing in planning. | 2310.08118#3 | 2310.08118#5 | 2310.08118 | [
"2305.10601"
] |
2310.08118#5 | Can Large Language Models Really Improve by Self-critiquing Their Own Plans? | # 3 Background We are specifically interested in classical planning problems represented within the PDDL (Planning Domain Definition Language) framework [8]. Each such problem consists of a domain, an initial state, and a goal state. The domain consists of a set of predicates and a set of actions. The state space of the planning problem is represented by truth assignments over the predicates. Every action in the domain has a set of preconditions, which determine when the action can be applied, and a set of effects, which determine the modifications to the state after the action is applied. A plan is a sequence of actions from the domain that, when executed from the initial state, results in a state satisfying the goal conditions (a minimal code sketch of these notions follows this record). | 2310.08118#4 | 2310.08118#6 | 2310.08118 | [
"2305.10601"
] |
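To make the background above concrete, here is a minimal Python sketch of the classical planning notions: a state as the set of predicates that hold true, actions with preconditions and effects, and a plan checker that executes a candidate plan from the initial state and tests the goal. The Blocksworld-style predicates and actions are hypothetical simplifications for illustration, not the paper's actual PDDL domain files.

```python
from dataclasses import dataclass

State = frozenset  # a state is the set of predicates that currently hold true


@dataclass(frozen=True)
class Action:
    name: str
    preconditions: frozenset
    add_effects: frozenset
    del_effects: frozenset

    def applicable(self, state: State) -> bool:
        return self.preconditions <= state

    def apply(self, state: State) -> State:
        return frozenset((state - self.del_effects) | self.add_effects)


def validate_plan(initial: State, goal: frozenset, plan) -> bool:
    """A plan is valid iff every action's preconditions hold when it is executed
    and the state reached after the last action satisfies the goal conditions."""
    state = initial
    for action in plan:
        if not action.applicable(state):
            return False
        state = action.apply(state)
    return goal <= state


# Toy Blocksworld-style instance: move block a from atop b onto the table.
unstack_a_b = Action("unstack a b",
                     preconditions=frozenset({"on a b", "clear a", "handempty"}),
                     add_effects=frozenset({"holding a", "clear b"}),
                     del_effects=frozenset({"on a b", "clear a", "handempty"}))
putdown_a = Action("putdown a",
                   preconditions=frozenset({"holding a"}),
                   add_effects=frozenset({"ontable a", "clear a", "handempty"}),
                   del_effects=frozenset({"holding a"}))
init = frozenset({"on a b", "ontable b", "clear a", "handempty"})
goal = frozenset({"ontable a", "ontable b"})
print(validate_plan(init, goal, [unstack_a_b, putdown_a]))  # True
```

In the paper's pipeline, this checking role is played by VAL, the external, sound plan validator used for ground-truth verification.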
2310.08118#6 | Can Large Language Models Really Improve by Self-critiquing Their Own Plans? | [Figure 1: Overall evaluation architecture. A prompt generator builds generation and verification prompts from the PDDL instance files; the generator LLM produces candidate plans and is back-prompted with feedback if a plan is invalid, until the plan is valid or back-prompting iterations exceed 15; VAL is used for ground-truth verification.] # 4 Methodology # 4.1 The LLM+LLM planning system The LLM+LLM planning system (as shown in Figure 1) consists of a generator LLM and a verifier LLM. For a given instance, the generator LLM produces a candidate plan, while the verifier LLM determines its correctness. If the plan is found to be incorrect, the verifier provides feedback detailing the reasons for its failure. This feedback is then relayed to the generator LLM, prompting the generation of a new candidate plan (an illustrative sketch of this loop follows this record). | 2310.08118#5 | 2310.08118#7 | 2310.08118 | [
"2305.10601"
] |
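A hypothetical sketch of this back-prompting loop is given below. The callables `generate_plan` and `verify_plan` stand in for GPT-4 calls made with the generation and verification prompts; their names, the feedback wording, and the exact iteration accounting around the figure's ">15" cutoff are assumptions for illustration, not the authors' implementation.

```python
from typing import Callable, Tuple

MAX_BACKPROMPTS = 15  # Figure 1 stops once back-prompting iterations exceed 15


def llm_plus_llm(instance_prompt: str,
                 generate_plan: Callable[[str], str],
                 verify_plan: Callable[[str, str], Tuple[bool, str]]) -> Tuple[str, bool]:
    """Iterate generation and verification until the verifier LLM accepts a
    candidate plan or the back-prompting budget is exhausted."""
    candidate = generate_plan(instance_prompt)
    for _ in range(MAX_BACKPROMPTS):
        is_valid, feedback = verify_plan(instance_prompt, candidate)
        if is_valid:
            return candidate, True
        # Relay the verifier's critique back to the generator and try again.
        back_prompt = (f"{instance_prompt}\n\nYour previous plan:\n{candidate}\n\n"
                       f"The verifier judged it invalid because:\n{feedback}\n"
                       f"Provide a corrected plan.")
        candidate = generate_plan(back_prompt)
    return candidate, False  # budget exhausted; the last candidate is returned unvetted
```

Replacing `verify_plan` with an external, sound checker (a VAL-style validator over the PDDL files) yields the comparison setup with ground-truth verification that the paper evaluates against.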