---
language:
  - es
license: apache-2.0
multilinguality:
  - monolingual
size_categories:
  - n<1K
source_datasets:
  - original
task_categories:
  - summarization
pretty_name: NoticIA Human Validation
dataset_info:
  features:
    - name: web_url
      dtype: string
    - name: web_headline
      dtype: string
    - name: summary
      dtype: string
    - name: summary2
      dtype: string
    - name: web_text
      dtype: string
  splits:
    - name: test
      num_examples: 100
configs:
  - config_name: default
    data_files:
      - split: test
        path: test.jsonl
tags:
  - summarization
  - clickbait
  - news
---

"A Spanish dataset for Clickbait articles summarization"

This repository contains the annotations produced by a second human annotator in order to validate the test set of the NoticIA dataset.

The full NoticIA dataset is available here: https://huggingface.co/datasets/Iker/NoticIA

## Data explanation

- `web_url` (str): The URL of the news article.
- `web_headline` (str): The headline of the article, which is clickbait.
- `summary` (str): The original summary from the NoticIA dataset.
- `summary2` (str): The second summary, written by another human to validate the quality of `summary`.
- `web_text` (str): The body of the article.
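
A minimal sketch of how a single record can be inspected; it only assumes the field names listed above and the dataset identifier used in the usage section below:

```python
from datasets import load_dataset

# Load the 100-example human-validation split and print the fields of the first record.
dataset = load_dataset("Iker/NoticIA_Human_Validation", split="test")
example = dataset[0]

print(example["web_url"])         # URL of the news article
print(example["web_headline"])    # clickbait headline
print(example["summary"])         # original NoticIA summary
print(example["summary2"])        # second human summary written for validation
print(example["web_text"][:200])  # first characters of the article body
```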

## Dataset Description

### Dataset Usage

```python
# pip install datasets evaluate rouge-score
from datasets import load_dataset
from evaluate import load

# Load the human-validation split (100 examples).
dataset = load_dataset("Iker/NoticIA_Human_Validation", split="test")

# Compute ROUGE between the second human summary and the original NoticIA summary.
rouge = load("rouge")
results = rouge.compute(
    predictions=[x["summary2"] for x in dataset],
    references=[[x["summary"]] for x in dataset],
    use_aggregator=True,
)
print(results)
```
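
These ROUGE scores measure the lexical overlap between the two independently written human summaries; they can serve as a human reference point when comparing model-generated summaries against the original `summary` field.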

## Uses

This dataset is intended for building models, tailored to academic research, that can extract information from long texts. The objective is to study whether current LLMs, given a question formulated as a clickbait headline, can locate the answer within the article body and summarize it in a few words. The dataset also serves as a task for evaluating the performance of current LLMs in Spanish.
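
As a rough illustration of this setup (not the authors' official evaluation pipeline), one can prompt an instruction-tuned, Spanish-capable model with the clickbait headline as a question and the article body as context. The prompt wording and the model name below are assumptions chosen for illustration only:

```python
from datasets import load_dataset
from transformers import pipeline

# Load one validation example.
dataset = load_dataset("Iker/NoticIA_Human_Validation", split="test")
example = dataset[0]

# Illustrative Spanish prompt: answer the question posed by the clickbait headline
# in a few words, using only the article body (truncated here to keep the sketch short).
prompt = (
    "Responde en pocas palabras a la pregunta que plantea este titular clickbait, "
    "usando solo el cuerpo de la noticia.\n\n"
    f"Titular: {example['web_headline']}\n\n"
    f"Noticia: {example['web_text'][:4000]}\n\n"
    "Resumen:"
)

# Any Spanish-capable, instruction-tuned model could be substituted here.
generator = pipeline("text-generation", model="Qwen/Qwen2.5-1.5B-Instruct")
output = generator(prompt, max_new_tokens=64, do_sample=False, return_full_text=False)
print(output[0]["generated_text"])
```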

### Out-of-Scope Use

You cannot use this dataset to develop systems that directly harm the newspapers included in the dataset. This includes using the dataset to train profit-oriented LLMs capable of generating articles from a short text or headline, as well as developing profit-oriented bots that automatically summarize articles without the permission of the article's owner. Additionally, you are not permitted to train a system with this dataset that generates clickbait headlines.

## Dataset Creation

The dataset has been meticulously created by hand. We use two sources to compile clickbait articles:

- The Twitter user @ahorrandoclick1, who reposts clickbait articles along with a hand-crafted summary. Although we use their summaries as a reference, most of them have been rewritten (750 examples from this source).
- The web demo ⚔️ClickbaitFighter⚔️, which runs a model pre-trained on an early iteration of our dataset. We collect all the model inputs/outputs and manually correct them (100 examples from this source).

### Who are the annotators?

The dataset was originally annotated by Iker García-Ferrero and has been validated by Begoña Altura. The annotation process took approximately 40 hours.