AndyReas committed
Commit 84428b3
1 Parent(s): b05b12d

Update README.md

Files changed (1): README.md (+47, -1)
README.md CHANGED
@@ -21,7 +21,53 @@ dataset_info:
  num_examples: 13118041
  download_size: 2268969824
  dataset_size: 4266768296
license: mit
task_categories:
- text-generation
language:
- en
size_categories:
- 10M<n<100M
---

# Dataset Card for "frontpage-news"

## The Data
The data consists of ~13,000,000 English articles from ~90 outlets. The articles were collected from the [Sciride News Mine](http://sciride.org/news.html), after which additional cleaning and processing was performed on the data.

### Data processing
- Removing duplicate articles (a result of staying on the frontpage for multiple days).
- Removing repeated "outlet tags" appearing before or after headlines, such as "| Daily Mail Online".
- Removing dates that were not part of a natural sentence but acted as "tags", such as "\[Some headline\] - 2020-12-03".
- Removing duplicate articles again (dates had made otherwise identical articles unique; once the date tags were stripped, these became 100% identical).
- Removing HTML elements that were missed during the original scraping.
- Unescaping HTML entities, replacing them with "regular" characters.
- Removing "junk" articles, such as empty articles and articles below a certain length threshold.

Note: the cleaning process was not perfect and some "outlet tags" still remain.
For instance, some outlets use "--" instead of "|" before a tag, and those were missed.
There is also the case of uncommon characters, such as the non-breaking space "\u00a0" appearing in place of a regular space; this particular example keeps tokenizers from properly tokenizing sentences that contain it.
There are likely other issues that were overlooked during cleaning. A rough sketch of cleaning steps along these lines is shown below.
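
The cleaning code itself is not published with this card, so the snippet below is only a minimal sketch of steps like the ones above. The regex patterns, the dedup key, and the function names are hypothetical stand-ins; the actual pipeline presumably used per-outlet rules rather than one generic pattern.

```python
import html
import re

# Hypothetical patterns: the card only gives "| Daily Mail Online" and
# "[Some headline] - 2020-12-03" as examples of tags; real cleaning
# likely needed per-outlet rules rather than generic regexes like these.
TRAILING_TAG = re.compile(r"\s*\|\s*[^|]+$")              # e.g. " | Daily Mail Online"
TRAILING_DATE = re.compile(r"\s*-\s*\d{4}-\d{2}-\d{2}$")  # e.g. " - 2020-12-03"

def clean_text(text: str) -> str:
    text = html.unescape(text)          # "&amp;" -> "&", "&quot;" -> '"', ...
    text = text.replace("\u00a0", " ")  # non-breaking space -> regular space
    text = TRAILING_DATE.sub("", text)  # drop trailing date "tags"
    text = TRAILING_TAG.sub("", text)   # drop trailing outlet tags
    return text.strip()

def dedupe(articles):
    # Drop exact duplicates, e.g. articles that only differed by a date tag
    # before the tag was stripped.
    seen = set()
    for article in articles:
        key = (article["title"], article["description"])
        if key not in seen:
            seen.add(key)
            yield article
```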

### Outlets
```
[9news.com.au, abc.net.au, abcnews.go.com, afr.com, aljazeera.com, apnews.com, bbc.com, bostonglobe.com, breakingnews.ie, breitbart.com, businessinsider.com, cbc.ca, cbsnews.com, channel4.com, chicagotribune.com, cnbc.com, csmonitor.com, ctvnews.ca, dailymail.co.uk, dailystar.co.uk, dw.com, economist.com, edition.cnn.com, euronews.com, express.co.uk, foxnews.com, france24.com, globalnews.ca, huffpost.com, independent.co.uk, independent.ie, inquirer.com, irishexaminer.com, irishmirror.ie, irishtimes.com, itv.com, latimes.com, liverpoolecho.co.uk, macleans.ca, metro.co.uk, mirror.co.uk, montrealgazette.com, morningstaronline.co.uk, msnbc.com, nbcnews.com, news.com.au, news.sky.com, news.yahoo.com, newshub.co.nz, newsweek.com, npr.org, nypost.com, nytimes.com, nzherald.co.nz, politico.com, rcinet.ca, reuters.com, rfi.fr, rnz.co.nz, rt.com, rte.ie, sbs.com.au, scoop.co.nz, scotsman.com, slate.com, smh.com.au, standard.co.uk, stuff.co.nz, telegraph.co.uk, theage.com.au, theatlantic.com, theglobeandmail.com, theguardian.com, thehill.com, thejournal.ie, thestar.com, thesun.co.uk, thesun.ie, thetimes.co.uk, thewest.com.au, time.com, torontosun.com, upi.com, usatoday.com, vancouversun.com, walesonline.co.uk, washingtonpost.com, washingtontimes.com, westernjournal.com, wnd.com, wsj.com]
```

## Features (columns)

### title
A news headline.

### description
A news subheader.

### meta
- article_id: The article ID from the original Sciride News Mine, a hash of the original title + description.

- date: The date on which the article appeared on the frontpage.

- outlet: The outlet that published the article on its frontpage.
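
A minimal sketch of reading these fields with the datasets library. The repository id "AndyReas/frontpage-news" and the split name "train" are assumptions based on this card; streaming avoids the full ~2.3 GB download:

```python
from datasets import load_dataset

# Repo id and split name assumed from this dataset card.
ds = load_dataset("AndyReas/frontpage-news", split="train", streaming=True)

article = next(iter(ds))          # a single example as a plain dict
print(article["title"])           # headline
print(article["description"])     # subheader
print(article["meta"]["outlet"])  # e.g. "bbc.com"
print(article["meta"]["date"])    # date the article was on the frontpage
```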

### new_article_id
A new article ID created by hashing the title + description. It can differ from article_id because titles and descriptions changed during "cleaning".
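
The card does not state which hash function is used or how title and description are combined, so the following is only a hypothetical illustration of the idea (MD5 over the plain concatenation):

```python
import hashlib

def make_article_id(title: str, description: str) -> str:
    # Hypothetical: the card only says the IDs are hashes of title + description;
    # the actual hash function and concatenation scheme are not documented.
    return hashlib.md5((title + description).encode("utf-8")).hexdigest()
```

Since new_article_id differs from article_id exactly when the text changed, comparing the two fields is a quick way to find articles that were altered during cleaning.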