Update README.md
README.md CHANGED
@@ -1,7 +1,7 @@
---
language:
- es
license: cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
@@ -52,10 +52,9 @@ tags:
<img src="https://huggingface.co/datasets/Iker/NoticIA/resolve/main/assets/logo.png" style="height: 250px;">
</p>

<h3 align="center">"A Clickbait Article Summarization Dataset in Spanish."</h3>

We present NoticIA, a dataset of 850 Spanish news articles featuring prominent clickbait headlines. For each article, we provide the clickbait headline, the corresponding web text, and a high-quality, single-sentence generative summary written by humans that aims to answer the question posed by the headline in as few words as possible.

- 📖 Paper: [Coming soon]()
- 💻 Baseline Code: [https://github.com/ikergarcia1996/NoticIA](https://github.com/ikergarcia1996/NoticIA)
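For a quick look at the data, you can load NoticIA directly with the `datasets` library. This is a minimal sketch, assuming only the Hub repo id `Iker/NoticIA` (taken from the asset URLs in this card); split and field names are discovered at runtime rather than hard-coded:

```python
# Minimal sketch: inspect NoticIA from the Hugging Face Hub.
# Assumes the repo id "Iker/NoticIA" (it matches the asset URLs above);
# split and field names are printed rather than assumed.
from datasets import load_dataset

dataset = load_dataset("Iker/NoticIA")
print(dataset)               # available splits and row counts

split = next(iter(dataset))  # name of the first available split
example = dataset[split][0]  # one annotated example as a plain dict
print(example.keys())        # headline / web text / summary field names
```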
@@ -85,10 +84,7 @@ Que los estudiantes vuelven a clase.

# Dataset Usage

1. We are working on implementing NoticIA in the Language Model Evaluation Harness library: https://github.com/EleutherAI/lm-evaluation-harness (a hypothetical sketch of such an evaluation follows this list).

2. If you want to train an LLM or reproduce the results in our paper, you can use our code. See the repository for more info: [https://github.com/ikergarcia1996/NoticIA](https://github.com/ikergarcia1996/NoticIA)
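Because that integration is still in progress, there is no official task name yet; the snippet below is only a hypothetical sketch of what a harness-based evaluation could look like once merged, with `noticia` as a placeholder task name and a placeholder model id:

```python
# Hypothetical sketch only: NoticIA is not yet a registered
# lm-evaluation-harness task, so "noticia" and the model id below
# are placeholders rather than working values.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                 # transformers-based backend
    model_args="pretrained=someorg/somemodel",  # placeholder model id
    tasks=["noticia"],                          # placeholder task name
)
print(results["results"])
```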
@@ -183,16 +179,20 @@ print(output_text[0])
# Uses
This dataset is intended for building models, tailored for academic research, that can extract information from long texts. The objective is to study whether current LLMs, given a question formulated as a clickbait headline, can locate the answer within the article body and summarize it in a few words. The dataset also serves as a task for evaluating the performance of current LLMs in Spanish.

# Out-of-Scope Use
You cannot use this dataset to develop systems that directly harm the newspapers included in it. This includes using the dataset to train profit-oriented LLMs capable of generating articles from a short text or headline, as well as developing profit-oriented bots that automatically summarize articles without the permission of the article's owner. Additionally, you are not permitted to train a system with this dataset that generates clickbait headlines.

This dataset contains text and headlines from newspapers; therefore, you cannot use it for commercial purposes unless you have a license for the data.

# Dataset Creation
The dataset has been meticulously created by hand. We use two sources to compile clickbait articles:
- The Twitter user [@ahorrandoclick1](https://twitter.com/ahorrandoclick1), who reposts clickbait articles along with hand-crafted summaries. Although we use their summaries as a reference, most of them have been rewritten (750 examples from this source).
- The web demo [⚔️ClickbaitFighter⚔️](https://iker-clickbaitfighter.hf.space/), which serves a model pre-trained on an early iteration of our dataset. We collect the model inputs/outputs and manually correct them (100 examples from this source).

# Who are the annotators?
The dataset was annotated by [Iker García-Ferrero](https://ikergarcia1996.github.io/Iker-Garcia-Ferrero/) and validated by [Begoña Altuna](https://www.linkedin.com/in/bego%C3%B1a-altuna-78014139).
The annotation took ~40 hours.