---
license: other
configs:
  - config_name: default
    data_files:
      - split: train
        path: '*/*.parquet'
  - config_name: retsinformationdk
    data_files:
      - split: train
        path: retsinformationdk/*.parquet
  - config_name: ep
    data_files:
      - split: train
        path: ep/*.parquet
  - config_name: ft
    data_files:
      - split: train
        path: ft/*.parquet
  - config_name: wikisource
    data_files:
      - split: train
        path: wikisource/*.parquet
  - config_name: spont
    data_files:
      - split: train
        path: spont/*.parquet
  - config_name: tv2r
    data_files:
      - split: train
        path: tv2r/*.parquet
  - config_name: adl
    data_files:
      - split: train
        path: adl/*.parquet
  - config_name: hest
    data_files:
      - split: train
        path: hest/*.parquet
  - config_name: skat
    data_files:
      - split: train
        path: skat/*.parquet
  - config_name: dannet
    data_files:
      - split: train
        path: dannet/*.parquet
  - config_name: retspraksis
    data_files:
      - split: train
        path: retspraksis/*.parquet
  - config_name: wikibooks
    data_files:
      - split: train
        path: wikibooks/*.parquet
  - config_name: jvj
    data_files:
      - split: train
        path: jvj/*.parquet
  - config_name: gutenberg
    data_files:
      - split: train
        path: gutenberg/*.parquet
  - config_name: botxt
    data_files:
      - split: train
        path: botxt/*.parquet
  - config_name: depbank
    data_files:
      - split: train
        path: depbank/*.parquet
  - config_name: naat
    data_files:
      - split: train
        path: naat/*.parquet
  - config_name: synne
    data_files:
      - split: train
        path: synne/*.parquet
  - config_name: wiki
    data_files:
      - split: train
        path: wiki/*.parquet
  - config_name: relig
    data_files:
      - split: train
        path: relig/*.parquet
annotations_creators:
  - no-annotation
language_creators:
  - crowdsourced
language:
  - da
multilinguality:
  - monolingual
source_datasets:
  - original
task_categories:
  - text-generation
task_ids:
  - language-modeling
pretty_name: Danish Gigaword
language_bcp47:
  - da
  - da-bornholm
  - da-synnejyl
---

# Danish Gigaword Corpus

Version: 1.0.0

License: Varies by source; see [Source Data](#source-data) below.

## Table of Contents

- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Loading the dataset](#loading-the-dataset)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Source Data](#source-data)
- [Additional Information](#additional-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

### Dataset Summary

The Danish Gigaword Corpus contains text spanning several domains and forms. This version does not include the sections containing tweets ("General Discussions" and "Parliament Elections"), "danavis", "Common Crawl" and "OpenSubtitles" due to potential privacy, quality and copyright concerns.

### Loading the dataset

```python
from datasets import load_dataset

name = "danish-foundation-models/danish-gigaword"
ds = load_dataset(name, split="train")
sample = ds[1]  # see "Data Instances" below

# or load the dataset by streaming
ds = load_dataset(name, split="train", streaming=True)
sample = next(iter(ds))
```
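
The YAML header above also defines one config per source, so a single section can be loaded on its own. A minimal sketch, using the `wiki` config as an example:

```python
from datasets import load_dataset

# load only the Wikipedia section; any config_name from the header above works
ds_wiki = load_dataset("danish-foundation-models/danish-gigaword", "wiki", split="train")
```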

## Dataset Structure

The dataset contains text from a range of sources, which are described in detail under [Source Data](#source-data). See the homepage or paper for more information.

### Data Instances

Each entry in the dataset consists of a single text with associated metadata:

```python
{
    'text': 'Vimoutiers er en kommune i departementet Orne i Basse-Normandie regionen i det nordvestlige Frankrig.\nCykelløbet Paris-Camembert slutter i Vimoutiers.\nHistorie.\nDen 14. juni 1944, under invasionen i Normandiet blev Vimoutiers bombarderet af allierede styrker. Landsbyen blev ødelagt og 220 civile dræbt.\nPersonligheder.\nPolitikeren Joseph Laniel (1889-1975) var født i Vomoutiers.',
    'source': 'wiki',
    'id': 'wiki_366127',
    'added': '2021-03-28',
    'created': '2019-01-01, 2021-01-01',
    'metadata': {
        'domain': 'Wiki & Books',
        'license': 'Creative Commons Legal Code\n\nCC0 1.0 Universal',
        'source-pretty': 'Wikipedia',
    },
}
```

### Data Fields

An entry in the dataset consists of the following fields:

- `text` (str): The content of the document.
- `source` (str): The source of the document (see [Source Data](#source-data)).
- `id` (str): A unique identifier for each document.
- `added` (str): The date when the document was added to this collection.
- `created` (str): The date range within which the document was originally created.
- `metadata/license` (str): The license of the document. Licenses vary according to the source.
- `metadata/domain` (str): The domain of the source.
- `metadata/source-pretty` (str): The long-form version of the short source name.
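
For example, the fields can be read directly off a loaded sample (reusing `sample` from the loading example above):

```python
# inspect the metadata of a single document
print(sample["id"], sample["source"])   # e.g. wiki_366127 wiki
print(sample["metadata"]["domain"])     # e.g. Wiki & Books
print(sample["metadata"]["license"])
```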

### Data Splits

The entire corpus is provided in the train split.
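
If a held-out set is needed, one can be derived locally. A minimal sketch using the standard `datasets` API, assuming the non-streaming `ds` from the loading example (the 5% test fraction and the seed are arbitrary choices, not part of the dataset):

```python
# carve a local evaluation split out of the single provided train split
splits = ds.train_test_split(test_size=0.05, seed=42)
train_ds, eval_ds = splits["train"], splits["test"]
```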

## Dataset Creation

### Source Data

Below is a brief overview of the sources in the corpus along with their individual licenses.

| Source | License |
| --- | --- |
| adl | Creative Commons Legal Code 1.0 Universal |
| botxt | Creative Commons Legal Code 1.0 Universal |
| dannet | dannet license |
| depbank | Attribution-ShareAlike 4.0 International |
| ep | Creative Commons Legal Code 1.0 Universal |
| ft | Creative Commons Legal Code 1.0 Universal |
| gutenberg | gutenberg license |
| hest | Creative Commons Legal Code 1.0 Universal |
| jvj | Attribution-ShareAlike 4.0 International |
| naat | Creative Commons Legal Code 1.0 Universal |
| relig | Creative Commons Legal Code 1.0 Universal |
| retsinformationdk | Danish Copyright law at https://www.retsinformation.dk/forms/r0710.aspx?id=164796 states "§ 9. Love, administrative forskrifter, retsafgørelser og lignende offentlige aktstykker er ikke genstand for ophavsret. Stk. 2. Bestemmelsen i stk. 1 gælder ikke for værker, der fremtræder som selvstændige bidrag i de i stk. 1 nævnte aktstykker. Sådanne værker må dog gengives i forbindelse med aktstykket. Retten til videre udnyttelse afhænger af de i øvrigt gældende regler." (In English: "§ 9. Laws, administrative regulations, court decisions and similar official documents are not subject to copyright. (2) Subsection 1 does not apply to works appearing as independent contributions within such documents. Such works may, however, be reproduced in connection with the document. The right to further use depends on the otherwise applicable rules.") |
| retspraksis | Creative Commons Legal Code 1.0 Universal |
| skat | Creative Commons Legal Code 1.0 Universal |
| spont | Creative Commons Legal Code 1.0 Universal |
| synne | Creative Commons Legal Code 1.0 Universal |
| tv2r | The owner of this content is TV2 Regionerne, Denmark. Creative Commons Attribution 4.0 International |
| wiki | Creative Commons Legal Code 1.0 Universal |
| wikibooks | Creative Commons Legal Code 1.0 Universal |
| wikisource | Creative Commons Legal Code 1.0 Universal |
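
Because licensing varies per source, it can be useful to keep only documents under a specific license. A minimal sketch filtering on the `metadata/license` field; the substring check is an illustrative heuristic, not an official license parser:

```python
# keep only documents whose license string mentions CC0
cc0_ds = ds.filter(lambda row: "CC0" in row["metadata"]["license"])
```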

These sources correspond to the following top-level domains in the dataset:

```python
# mapping from source to top-level domain
domain_mapping_dict = {
    "retsinformationdk": "Legal",
    "skat": "Legal",
    "retspraksis": "Legal",
    "hest": "Social Media",
    "cc": "Web",  # not present in this version of the dataset
    "adl": "Wiki & Books",
    "botxt": "Other",
    "danavis": "News",  # not present in this version of the dataset
    "dannet": "dannet",
    "depbank": "Other",
    "ep": "Conversation",
    "ft": "Conversation",
    "gutenberg": "Wiki & Books",
    "jvj": "Wiki & Books",
    "naat": "Conversation",
    "opensub": "Conversation",  # not present in this version of the dataset
    "relig": "Wiki & Books",
    "spont": "Conversation",
    "synne": "Other",
    "tv2r": "News",
    "wiki": "Wiki & Books",
    "wikibooks": "Wiki & Books",
    "wikisource": "Wiki & Books",
    "twfv19": "Social Media",  # not present in this version of the dataset
}
```

The following mapping translates between the short-form and long-form source names:

```python
# mapping from source to its long-form name
longname_mapping_dict = {
    "retsinformationdk": "retsinformation.dk (Danish legal information)",
    "skat": "Skat (Danish tax authority)",
    "retspraksis": "retspraksis (Danish legal information)",
    "hest": "Hestenettet (Danish debate forum)",
    "cc": "Common Crawl",  # not present in this version of the dataset
    "adl": "Archive for Danish Literature",
    "botxt": "Bornholmsk (Danish dialect)",
    "danavis": "Danish daily newspapers",  # not present in this version of the dataset
    "dannet": "DanNet (Danish WordNet)",
    "depbank": "Danish Dependency Treebank",
    "ep": "European Parliament",
    "ft": "Folketinget (Danish Parliament)",
    "gutenberg": "Gutenberg",
    "jvj": "Johannes V. Jensen (Danish poet)",
    "naat": "NAAT",
    "opensub": "Open Subtitles",  # not present in this version of the dataset
    "relig": "Religious texts",
    "spont": "Spontaneous speech",
    "synne": "Sønderjysk (Danish dialect)",
    "tv2r": "TV 2 Radio (Danish news)",
    "wiki": "Wikipedia",
    "wikibooks": "Wikibooks",
    "wikisource": "Wikisource",
    "twfv19": "Twitter Folketingsvalget 2019 (Danish election tweets)",  # not present in this version of the dataset
}
```
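
Both mappings key on the `source` field of a document, so they can be combined to annotate samples, e.g.:

```python
# resolve a document's short source name to its domain and long-form name
src = sample["source"]             # e.g. "wiki"
print(domain_mapping_dict[src])    # "Wiki & Books"
print(longname_mapping_dict[src])  # "Wikipedia"
```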

## Additional Information

### Citation Information

Sample attributions:

In a press release:

> Modellen er præ-trænet på et datasæt fra The Danish Gigaword Project (https://gigaword.dk), der er udviklet af forskere fra IT-Universitetet i København

or, in English:

> The model is pre-trained using the Danish Gigaword Corpus (https://gigaword.dk), developed at the IT University of Copenhagen

In academic writing:

Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).

```bibtex
@inproceedings{dagw,
  title = {{The Danish Gigaword Corpus}},
  author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
  year = 2021,
  booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
  publisher = {NEALT}
}
```

In a software product, tool, or service:

> Denne service er lavet med data fra The Danish Gigaword Corpus

(In English: "This service is made with data from The Danish Gigaword Corpus.")

### Contributions

Dataset created by Derczynski et al. (2021). Thanks to @HLasse, @KennethEnevoldsen, and Jan Kostkan for adding this dataset to the Hugging Face Hub.