Show Me the Work: Fact-Checkers' Requirements for Explainable Automated Fact-Checking
Abstract
The pervasiveness of large language models and generative AI in online media has amplified the need for effective automated fact-checking to assist fact-checkers in tackling the increasing volume and sophistication of misinformation. The complex nature of fact-checking demands that automated fact-checking systems provide explanations that enable fact-checkers to scrutinise their outputs. However, it is unclear how these explanations should align with the decision-making and reasoning processes of fact-checkers to be effectively integrated into their workflows. Through semi-structured interviews with fact-checking professionals, we bridge this gap by: (i) providing an account of how fact-checkers assess evidence, make decisions, and explain their processes; (ii) examining how fact-checkers use automated tools in practice; and (iii) identifying fact-checker explanation requirements for automated fact-checking tools. The findings show unmet explanation needs and identify important criteria for replicable fact-checking explanations that trace the model's reasoning path, reference specific evidence, and highlight uncertainty and information gaps.
Community
How can explainable AI empower fact-checkers to tackle misinformation?
We interviewed fact-checkers about how they use AI tools and uncovered key criteria and explanation needs for automated fact-checking at each stage of the fact-checking pipeline.
We connect these needs to current work in NLP, HCI, and XAI and highlight design implications for human-centred explainable fact-checking.
Excited to present this work with Isabelle Augenstein and Irina Shklovski at CHI 2025 later this year!
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Towards Automated Fact-Checking of Real-World Claims: Exploring Task Formulation and Assessment with LLMs (2025)
- Automatic Fact-Checking with Frame-Semantics (2025)
- FactIR: A Real-World Zero-shot Open-Domain Retrieval Benchmark for Fact-Checking (2025)
- Decision Information Meets Large Language Models: The Future of Explainable Operations Research (2025)
- Explaining GitHub Actions Failures with Large Language Models: Challenges, Insights, and Limitations (2025)
- Decoding AI Judgment: How LLMs Assess News Credibility and Bias (2025)
- Explainability-Driven Quality Assessment for Rule-Based Systems (2025)