arxiv:2502.09083

Show Me the Work: Fact-Checkers' Requirements for Explainable Automated Fact-Checking

Published on Feb 13
· Submitted by gretawarren on Feb 18
Authors: Greta Warren, Irina Shklovski, Isabelle Augenstein

Abstract

The pervasiveness of large language models and generative AI in online media has amplified the need for effective automated fact-checking to assist fact-checkers in tackling the increasing volume and sophistication of misinformation. The complex nature of fact-checking demands that automated fact-checking systems provide explanations that enable fact-checkers to scrutinise their outputs. However, it is unclear how these explanations should align with the decision-making and reasoning processes of fact-checkers to be effectively integrated into their workflows. Through semi-structured interviews with fact-checking professionals, we bridge this gap by: (i) providing an account of how fact-checkers assess evidence, make decisions, and explain their processes; (ii) examining how fact-checkers use automated tools in practice; and (iii) identifying fact-checker explanation requirements for automated fact-checking tools. The findings show unmet explanation needs and identify important criteria for replicable fact-checking explanations that trace the model's reasoning path, reference specific evidence, and highlight uncertainty and information gaps.

Community

Comment from the paper author and submitter:

How can explainable AI empower fact-checkers to tackle misinformation? 📰🤖

We interviewed fact-checkers about how they use AI tools and uncovered key criteria and explanation needs for automated fact-checking at each stage of the fact-checking pipeline.

We connect these needs to current work in NLP, HCI, and XAI and highlight design implications for human-centred explainable fact-checking.

Excited to present this work with Isabelle Augenstein and Irina Shklovski at CHI 2025 later this year!


