---
task_categories:
- text-classification
language:
- en
---

# Dataset Card for Twitter Misinformation Dataset

## Dataset Description

### Dataset Summary

This dataset is a compilation of several existing datasets focused on misinformation detection, disaster-related tweets, and fact-checking. It combines data from multiple sources into a single resource for training misinformation detection models. The dataset has been used in research on textual backdoor attacks, notably in "Claim-Guided Textual Backdoor Attack for Practical Applications" (Song et al., 2024).

### Supported Tasks and Leaderboards

The primary supported task is binary text classification for misinformation detection: a model should classify text as either factual (0) or misinformation (1).

### Languages

The dataset is primarily in English.

## Dataset Structure

### Data Instances

Each instance in the dataset contains:
- text: string feature containing the tweet or news content
- label: binary classification (0 for factual, 1 for misinformation)

### Data Fields

- text: text content to be classified
- label: binary label (0: factual, 1: misinformation)

### Data Splits

- Training set: 92,394 examples
- Test set: 10,267 examples

## Dataset Creation

### Curation Rationale

The dataset was created by combining multiple existing datasets to provide a comprehensive resource for misinformation detection. It incorporates both news and social media content to capture different types of misinformation.

### Source Data

#### Initial Data Collection and Normalization

Data was compiled from four main sources:

1. Fake and Real News Dataset
   - Source: https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset
   - License: CC BY-NC-SA 4.0
2. Natural Language Processing with Disaster Tweets
   - Source: https://kaggle.com/competitions/nlp-getting-started
   - Created by: Addison Howard, devrishi, Phil Culliton, and Yufeng Guo
3. Natural Hazards Twitter Dataset
   - Source: https://arxiv.org/abs/2004.14456
   - Created by: Lingyu Meng and Zhijie Sasha Dong
4. MuMiN Dataset
   - Paper: Proceedings of the 45th International ACM SIGIR Conference
   - Created by: Dan Saattrup Nielsen and Ryan McConville

#### Who are the source language producers?

The text content comes from:
- News organizations (for real news)
- Social media users (Twitter)
- Various online sources

### Annotations

#### Annotation process

The binary labels (factual vs. misinformation) were created through the following mapping process:

1. Fake and Real News Dataset:
   - Original "True" news labeled as 0 (factual)
   - Original "Fake" news labeled as 1 (misinformation)
   - Direct mapping based on the existing true/fake labels
2. Disaster Tweets Dataset:
   - Original labels were mapped in preprocessing:
     - Non-disaster tweets (0) → factual (0)
     - Disaster-related tweets (1) → misinformation (1)
3. Natural Hazards Twitter Dataset:
   - All tweets were labeled as 0 (factual), since these are reports of real disaster events
4. MuMiN Dataset:
   - Used claim-tweet relationships from the source dataset
   - Direct mapping: "factual" claims → 0, "misinformation" claims → 1

The final dataset combines these sources under a unified labeling scheme:
- 0 = factual content
- 1 = misinformation

#### Who are the annotators?

The final labels were assigned programmatically based on the original datasets' classifications, with a consistent mapping applied across all sources to create a binary classification task (a minimal sketch of this mapping is shown below).
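As a concrete illustration, here is a minimal Python sketch of the label harmonization described above, assuming pandas-style preprocessing. The file names and column names (`True.csv`, `Fake.csv`, `target`, `verdict`, and so on) are hypothetical placeholders; the actual preprocessing scripts are not distributed with this card.

```python
import pandas as pd

# Unified scheme: 0 = factual, 1 = misinformation.

# 1. Fake and Real News Dataset: true/fake labels map directly.
#    (Assumed layout: separate True.csv / Fake.csv files with a "text" column.)
true_news = pd.read_csv("True.csv").assign(label=0)
fake_news = pd.read_csv("Fake.csv").assign(label=1)

# 2. Disaster Tweets Dataset: the competition's "target" column is
#    carried into the unified labels (0 -> factual, 1 -> misinformation).
disaster = pd.read_csv("disaster_train.csv").rename(columns={"target": "label"})

# 3. Natural Hazards Twitter Dataset: all tweets are real disaster
#    reports, so everything is labeled factual.
hazards = pd.read_csv("natural_hazards.csv").assign(label=0)

# 4. MuMiN: map claim verdicts attached to tweets onto the binary scheme.
mumin = pd.read_csv("mumin_claims.csv")
mumin["label"] = mumin["verdict"].map({"factual": 0, "misinformation": 1})

# Combine all sources, keeping only the unified text/label columns.
combined = pd.concat(
    [df[["text", "label"]] for df in (true_news, fake_news, disaster, hazards, mumin)],
    ignore_index=True,
)
```

Assigning labels in code rather than by hand keeps the scheme consistent across sources, but it also means any labeling bias in the source datasets carries over unchanged (see the bias discussion below).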
### Personal and Sensitive Information

This dataset contains publicly available tweets and news articles. While personal identifiers such as usernames have not been explicitly removed, the content comes from public sources. Users of this dataset should:
- Be aware that tweets may contain usernames, handles, and personal information
- Follow Twitter's terms of service when using tweet data
- Consider privacy implications when using the data for research
- Not attempt to deanonymize or identify individual users

## Considerations for Using the Data

### Social Impact of Dataset

This dataset was created to help develop better misinformation detection systems. Potential impacts include:

Positive impacts:
- Improved automated detection of misinformation
- Better understanding of how misinformation spreads
- Development of tools to combat fake news
- Enhanced public awareness of misinformation

Potential negative impacts:
- Risk of false positives in misinformation detection
- Possible perpetuation of existing biases in labeling
- Privacy concerns for users whose content is included
- Potential misuse for censorship or content control

### Discussion of Biases

The dataset may contain several inherent biases:

1. Source bias:
   - News articles primarily from mainstream media sources
   - Twitter content may not represent all social media platforms
   - Geographic bias towards English-speaking regions
2. Label bias:
   - Binary classification may oversimplify complex truth values
   - Different definitions of "misinformation" across source datasets
   - Potential bias in the original labeling processes
3. Content bias:
   - Temporal bias due to when the data was collected
   - Topic bias towards certain types of news/events
   - Language bias (English-only content)
   - Platform-specific bias (Twitter conventions and culture)

### Other Known Limitations

1. Dataset constraints:
   - Limited to English-language content
   - Binary classification may not capture nuanced cases
   - Historical content may not reflect current misinformation patterns
2. Technical limitations:
   - No multimedia content analysis (images, videos)
   - No context beyond the text content
   - No network/sharing information preserved
   - No temporal ordering of content
3. Practical considerations:
   - May require periodic updates to remain relevant
   - Limited coverage of emerging misinformation topics
   - May not generalize well to other languages or cultures
   - Does not account for the evolving nature of misinformation tactics

## Additional Information

### Dataset Curators

Dataset sources:
1. Natural Hazards Twitter Dataset: Lingyu Meng and Zhijie Sasha Dong
2. MuMiN Dataset: Dan Saattrup Nielsen and Ryan McConville
3. Disaster Tweets Dataset: Addison Howard, devrishi, Phil Culliton, and Yufeng Guo
4. Fake and Real News Dataset: Clément Bisaillon

### Licensing Information

- MuMiN Dataset: CC BY-NC-SA 4.0
- Fake and Real News Dataset: CC BY-NC-SA 4.0
- Natural Hazards Twitter Dataset: CC BY-NC-SA 4.0
- Disaster Tweets Dataset: academic, non-commercial use only (as specified in the competition rules)

Note: This dataset is intended for academic and non-commercial use only, in compliance with the most restrictive license terms of its component datasets.
### Citation Information

If you use this dataset in your research, please cite both the original source datasets and any relevant papers using this compiled dataset.

Research using this dataset:

```bibtex
@misc{song2024claimguidedtextualbackdoorattack,
  title={Claim-Guided Textual Backdoor Attack for Practical Applications},
  author={Minkyoo Song and Hanna Kim and Jaehan Kim and Youngjin Jin and Seungwon Shin},
  year={2024},
  eprint={2409.16618},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2409.16618},
}
```

Source datasets:

```bibtex
@inproceedings{nielsen2022mumin,
  title={MuMiN: A Large-Scale Multilingual Multimodal Fact-Checked Misinformation Social Network Dataset},
  author={Nielsen, Dan Saattrup and McConville, Ryan},
  booktitle={Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval},
  year={2022},
  publisher={ACM}
}

@misc{meng2020naturalhazardstwitterdataset,
  title={Natural Hazards Twitter Dataset},
  author={Meng, Lingyu and Dong, Zhijie Sasha},
  year={2020},
  eprint={2004.14456},
  archivePrefix={arXiv},
  primaryClass={cs.SI},
  url={https://arxiv.org/abs/2004.14456}
}

@misc{howard2019nlp,
  title={Natural Language Processing with Disaster Tweets},
  author={Howard, Addison and devrishi and Culliton, Phil and Guo, Yufeng},
  year={2019},
  publisher={Kaggle},
  howpublished={\url{https://kaggle.com/competitions/nlp-getting-started}}
}
```