This view is limited to 50 files because the pull request contains too many changes; see the raw diff for the complete change set.
Files changed (50)
  1. .gitignore +0 -10
  2. .vscode/settings.json +1 -1
  3. CONTRIBUTING.md +0 -78
  4. README.md +135 -199
  5. data/adl/adl.md +35 -77
  6. data/adl/adl.parquet +2 -2
  7. data/adl/descriptive_stats.json +0 -1
  8. data/adl/images/dist_document_length.png +0 -3
  9. data/botxt/botxt.md +36 -77
  10. data/botxt/botxt.parquet +2 -2
  11. data/botxt/descriptive_stats.json +0 -1
  12. data/botxt/images/dist_document_length.png +0 -3
  13. data/dannet/dannet.md +57 -79
  14. data/dannet/dannet.parquet +2 -2
  15. data/dannet/descriptive_stats.json +0 -1
  16. data/dannet/images/dist_document_length.png +0 -3
  17. data/depbank/depbank.md +31 -83
  18. data/depbank/depbank.parquet +2 -2
  19. data/depbank/descriptive_stats.json +0 -1
  20. data/depbank/images/dist_document_length.png +0 -3
  21. data/ep/descriptive_stats.json +0 -1
  22. data/ep/ep.md +32 -78
  23. data/ep/ep.parquet +2 -2
  24. data/ep/images/dist_document_length.png +0 -3
  25. data/ft/descriptive_stats.json +0 -1
  26. data/ft/ft.md +34 -81
  27. data/ft/ft.parquet +2 -2
  28. data/ft/images/dist_document_length.png +0 -3
  29. data/gutenberg/descriptive_stats.json +0 -1
  30. data/gutenberg/gutenberg.md +337 -97
  31. data/gutenberg/gutenberg.parquet +2 -2
  32. data/gutenberg/images/dist_document_length.png +0 -3
  33. data/hest/descriptive_stats.json +0 -1
  34. data/hest/hest.md +34 -80
  35. data/hest/hest.parquet +2 -2
  36. data/hest/images/dist_document_length.png +0 -3
  37. data/jvj/descriptive_stats.json +0 -1
  38. data/jvj/images/dist_document_length.png +0 -3
  39. data/jvj/jvj.md +33 -84
  40. data/jvj/jvj.parquet +2 -2
  41. data/lexdk/create.py +0 -78
  42. data/lexdk/descriptive_stats.json +0 -1
  43. data/lexdk/images/dist_document_length.png +0 -3
  44. data/lexdk/lexdk.md +0 -85
  45. data/lexdk/lexdk.parquet +0 -3
  46. data/naat/descriptive_stats.json +0 -1
  47. data/naat/images/dist_document_length.png +0 -3
  48. data/naat/naat.md +32 -74
  49. data/naat/naat.parquet +2 -2
  50. data/nordjyllandnews/create.py +0 -51
.gitignore CHANGED
@@ -1,13 +1,3 @@
 # Python
 __pycache__/*
 *.pyc
-
-# cSpell
-cspell.json
-
-# debugfile
-.vscode/launch.json
-
-# tmp files
-tmp.py
-tmp.png
.vscode/settings.json CHANGED
@@ -1,6 +1,6 @@
 {
   "python.testing.pytestArgs": [
-    "src/tests"
+    "."
   ],
   "python.testing.unittestEnabled": false,
   "python.testing.pytestEnabled": true
CONTRIBUTING.md DELETED
@@ -1,78 +0,0 @@
-## Working with the dataset locally
-
-A Hugging Face dataset repository is a git repository like any other. You can simply download it like so:
-
-```bash
-git clone https://huggingface.co/datasets/danish-foundation-models/danish-dynaword
-cd danish-dynaword
-```
-
-You can then work with the dataset locally like so:
-
-```py
-from datasets import load_dataset
-
-name = "../." # instead of "danish-foundation-models/danish-dynaword"
-dataset = load_dataset(name, split="train")
-# make transformations here
-```
-
-> Note: Even when the dataset is local, Hugging Face still uses a cache, so after making changes you might need to reset it to see that they took effect. You can do this by deleting the cached files, which you can locate using `dataset.cache_files`.
-
-## Installing dependencies
-
-This repo comes with a few dependencies you need to install to make it run. It uses a [makefile](https://opensource.com/article/18/8/what-how-makefile) to run commands and [uv](https://docs.astral.sh/uv/) for package management. Once you have uv installed you can install the dependencies using:
-
-```bash
-make install
-```
-
-## Running dataset tests
-
-This dataset is special in that it comes with a test suite, e.g. testing that the ids are unique and that the format is consistent. You can run the suite using:
-
-```bash
-make test
-```
-
-## Submitting a PR
-
-Creating a PR on Hugging Face is a bit different from creating one on GitHub.
-
-1) Go to the community tab on Hugging Face, press *new pull request*, and choose *on your machine*. Specify the title of your PR. Then you can simply:
-
-```bash
-git fetch origin refs/pr/{PR NUMBER}:pr/{PR NUMBER}
-git checkout pr/{PR NUMBER}
-# make your changes here
-# push to hub
-git push origin pr/{PR NUMBER}:refs/pr/{PR NUMBER}
-```
-
-Before you make the PR, be sure that you have completed the following checklist.
-
-### Checklist
-
-- [ ] I have run the test suite using `make test` and all tests pass
-- [ ] I have added/changed a dataset and have
-  - [ ] updated the descriptive statistics using `make update-descriptive-statistics`
-  - [ ] bumped the version using `make bump-version`
-
-### Examples of previous PRs
-For example PRs, see the following:
-
-- [Restructuring columns in the dataset](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/11)
-- [Adding a new dataset](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/15)
-- Updated [dataset description and metadata](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/20)
-
-## Frequently asked questions
-
-### Do you accept synthetic datasets?
-
-Yes, we generally accept synthetic datasets, since they are likely to be a promising research direction for low- to mid-resource languages.
-However, you should be aware that a synthetic dataset will probably require a more detailed examination and description.
-We will for instance examine the quality of the synthetic subset and whether the model used for its creation permits resharing of the synthetic data under permissible licenses.
-
-### Do you accept non-Danish data?
-
-Generally this repository is intended for Danish text, though quite broadly defined. For instance, we do accept data containing [code-switching](https://www.google.com/search?client=safari&rls=en&q=code+switching&ie=UTF-8&oe=UTF-8) and historical Danish text.
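The cache note in the deleted guide above can be made concrete. A minimal sketch of resetting the local cache (the helper name `clear_dataset_cache` is ours, not part of the repo; it relies only on the documented `dataset.cache_files` attribute, a list of dicts with a `"filename"` key):

```python
from pathlib import Path

def clear_dataset_cache(dataset) -> int:
    """Delete the Arrow cache files backing `dataset` so the next
    load_dataset() call re-processes the local files.
    Returns the number of files removed."""
    removed = 0
    # cache_files looks like: [{"filename": "/path/to/cache.arrow"}, ...]
    for entry in dataset.cache_files:
        path = Path(entry["filename"])
        if path.exists():
            path.unlink()
            removed += 1
    return removed
```

Because the helper only duck-types on `cache_files`, it works with any object exposing that attribute.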
README.md CHANGED
@@ -5,14 +5,6 @@ configs:
   data_files:
   - split: train
    path: 'data/*/*.parquet'
-- config_name: lexdk
-  data_files:
-  - split: train
-    path: data/lexdk/*.parquet
-- config_name: opensubtitles
-  data_files:
-  - split: train
-    path: data/opensubtitles/*.parquet
 - config_name: retsinformationdk
   data_files:
   - split: train
@@ -89,10 +81,6 @@ configs:
   data_files:
   - split: train
     path: data/wiki/*.parquet
-- config_name: nordjyllandnews
-  data_files:
-  - split: train
-    path: data/nordjyllandnews/*.parquet
 - config_name: relig
   data_files:
   - split: train
@@ -111,125 +99,75 @@ task_categories:
 - text-generation
 task_ids:
 - language-modeling
-pretty_name: Danish Dynaword
+pretty_name: Danish Gigaword
 language_bcp47:
 - da
 - da-bornholm
 - da-synnejyl
 ---
 
-<!--
-readme structure is inspired by:
-https://github.com/huggingface/datasets/blob/main/templates/README_guide.md
--->
-
-
-# 🧨 Danish Dynaword
+# Danish Gigaword 2
 
-|              |                                         |
-| ------------ | --------------------------------------- |
-| **Language** | dan, dansk, Danish                      |
-| **License**  | Permissible, See the respective dataset |
-| **Models**   | For model trained used this data see [danish-foundation-models](https://huggingface.co/danish-foundation-models) |
-| **Contact**  | If you have question about this project please create an issue [here](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions) |
+*Version*: 2.0.0
 
+*License*: See the respective dataset
 
 ## Table of Contents
-- [🧨 Danish Dynaword](#-danish-dynaword)
+- [Danish Gigaword 2](#danish-gigaword-2)
 - [Table of Contents](#table-of-contents)
 - [Dataset Description](#dataset-description)
   - [Dataset Summary](#dataset-summary)
   - [Loading the dataset](#loading-the-dataset)
-  - [Languages:](#languages)
 - [Dataset Structure](#dataset-structure)
   - [Data Instances](#data-instances)
   - [Data Fields](#data-fields)
   - [Data Splits](#data-splits)
 - [Dataset Creation](#dataset-creation)
-  - [Curation Rationale](#curation-rationale)
-  - [Annotations](#annotations)
   - [Source Data](#source-data)
-  - [Dataset Statistics](#dataset-statistics)
 - [Additional Information](#additional-information)
-  - [Contributing to the dataset](#contributing-to-the-dataset)
   - [Citation Information](#citation-information)
-  - [Disclaimer](#disclaimer)
-  - [Notice and take down policy](#notice-and-take-down-policy)
 
 ## Dataset Description
 
-<!-- START-DESC-STATS -->
-- **Language**: dan, dansk, Danish
-- **Number of samples**: 588.48K
-- **Number of tokens (Llama 3)**: 1.84B
-- **Average document length (characters)**: 9222.58
-<!-- END-DESC-STATS -->
-
+This is intended as a second version of the Danish Gigaword corpus. It is intended to be continually updated with new data sources. This is currently a work in progress.
 
 ### Dataset Summary
 
-The Danish dynaword is a continually developed collection of Danish free-form text datasets from various domains. It is intended to be continually updated with new data sources. If you would like to contribute a dataset see the [contribute section](#contributing-to-the-dataset)
-
+The Danish Gigaword Corpus contains text spanning several domains and forms.
 
 ### Loading the dataset
 
 ```py
 from datasets import load_dataset
 
-name = "danish-foundation-models/danish-dynaword"
+name = "danish-foundation-models/danish-gigaword"
 ds = load_dataset(name, split = "train")
 sample = ds[1] # see "Data Instances" below
-```
 
-or load it by streaming the data
-```py
+# or load by streaming the data
 ds = load_dataset(name, split = "train", streaming=True)
-dataset_iter = iter(ds)
-sample = next(iter(dataset_iter))
-```
-
-You can also load a single subset at a time:
-```py
-ds = load_dataset(name, "adl", split = "train")
+sample = next(iter(ds))
 ```
 
-
-As Danish Dynaword is continually expanding and curated you can make sure that you get the same dataset every time by specifying the revision:
-You can also load a single subset at a time:
-```py
-ds = load_dataset(name, revision="{desired revision}")
-```
-
-### Languages:
-This dataset includes the following languages:
-
-- dan-Latn
-- dan-Latn-bornholm
-- dan-Latn-synnejyl
-
-Language is denoted using [BCP-47](https://en.wikipedia.org/wiki/IETF_language_tag), using the langauge code ISO 639-3 and the script code ISO 15924. The last element denote the region variant.
-
 ## Dataset Structure
 
-The dataset contains text from different sources which are thoroughly defined in [Source Data](#source-data).
+The dataset contains text from different sources which are thoroughly defined in [Source Data](#source-data). See the [homepage](https://gigaword.dk) or [paper](https://aclanthology.org/2021.nodalida-main.46.pdf) for more information.
 
 ### Data Instances
 
 Each entry in the dataset consists of a single text with associated metadata
 
-<!-- START-SAMPLE -->
 ```py
 {
-  "text": "SAMLEDE VÆRKER\n\nJEPPE AAKJÆR GYLDENDALSKE BOGHANDEL - NORDISK FORLAG KJØBENHAVN OG\nKRISTIANIA 1919 0[...]",
-  "source": "adl",
-  "id": "adl_aakjaer06val",
-  "added": "2020-09-14",
-  "created": "1700-01-01, 2022-01-01",
-  "license": "Creative Commons Legal Code\n\nCC0 1.0 Universal",
-  "domain": "Wiki & Books",
-  "metadata": {
-    "source-pretty": "Archive for Danish Literature"
-  }
+  'text': 'Vimoutiers er en kommune i departementet Orne i Basse-Normandie regionen i det nordvestlige Frankrig.\nCykelløbet Paris-Camembert slutter i Vimoutiers.\nHistorie.\nDen 14. juni 1944, under invasionen i Normandiet blev Vimoutiers bombarderet af allierede styrker. Landsbyen blev ødelagt og 220 civile dræbt.\nPersonligheder.\nPolitikeren Joseph Laniel (1889-1975) var født i Vomoutiers.',
+  'source': 'wiki',
+  'id': 'wiki_366127',
+  'added': '2021-03-28',
+  'created': '2019-01-01, 2021-01-01',
+  'metadata': {
+    'domain': 'Wiki & Books',
+    'license': 'Creative Commons Legal Code\n\nCC0 1.0 Universal',
+    'source-pretty': 'Wikipedia'
+  }
 }
 ```
@@ -239,14 +177,13 @@ An entry in the dataset consists of the following fields:
 
 - `text`(`str`): The content of the document.
 - `source` (`str`): The source of the document (see [Source Data](#source-data)).
-- `id` (`str`): An unique identifier for each document.
+- `id` (`str`): An unique identifer for each document.
 - `added` (`str`): An date for when the document was added to this collection.
 - `created` (`str`): An date range for when the document was originally created.
-- `license` (`str`): The license of the document. The licenses vary according to the source.
-- `domain` (`str`): The domain of the source
-- `metadata/source-pretty` (`str`): The long form version of the short-form source name
-- `metadata/*`: Potentially additional metadata
-<!-- END-SAMPLE -->
+- `metadata/license` (`str`): The license of the document. The licenses vary according to the source.
+- `metadata/domain` (`str`): The domain of the source
+- `metadata/source-pretty` (`str`): The longform version of the short-form source name
+
 
 ### Data Splits
@@ -254,129 +191,128 @@ The entire corpus is provided in the `train` split.
 
 ## Dataset Creation
 
-### Curation Rationale
-
-These datasets were collected and curated with the intention of making large quantities of Danish text data available. While this was collected with the intention of developing language models it is likely to have multiple other uses such as examining language development and differences across domains.
-
-### Annotations
-
-This data generally contains no annotation besides the metadata attached to each sample such as what domain it belongs to.
-
 ### Source Data
 
 Below follows a brief overview of the sources in the corpus along with their individual license.
 
-<!-- START-MAIN TABLE -->
-| Source              | Description                                                                                                                  | N. Tokens | License                |
-|:--------------------|:-----------------------------------------------------------------------------------------------------------------------------|:----------|:-----------------------|
-| [lexdk]             | Permissible use articles from [lex.dk](https://lex.dk)                                                                       | 5.69M     | [CC-BY-SA 4.0]         |
-| [opensubtitles]     | Danish subsection of [OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles/corpus/version/OpenSubtitles)                        | 271.60M   | [CC-0]                 |
-| [retsinformationdk] | [retsinformation.dk](https://www.retsinformation.dk) (legal-information.dk) the official legal information system of Denmark | 516.54M   | [Danish Copyright Law] |
-| [ep]                | The Danish subsection of [Europarl](https://aclanthology.org/2005.mtsummit-papers.11/)                                       | 100.89M   | [CC-0]                 |
-| [ft]                | Records from all meetings of The Danish parliament (Folketinget) in the parliament hall                                      | 114.09M   | [CC-0]                 |
-| [wikisource]        | The Danish subsection of [Wikisource](https://en.wikisource.org/wiki/Main_Page)                                              | 5.34M     | [CC-0]                 |
-| [spont]             | Conversational samples collected as a part of research projects at Aarhus University                                         | 1.56M     | [CC-0]                 |
-| [tv2r]              | Contemporary Danish newswire articles published between 2010 and 2019                                                        | 21.67M    | [CC-BY-SA 4.0]         |
-| [adl]               | Danish literature from 1700-2023 from the Archive for Danish Literature (ADL)                                                | 58.49M    | [CC-0]                 |
-| [hest]              | Samples from the Danish debate forum www.heste-nettet.dk                                                                     | 389.33M   | [CC-0]                 |
-| [skat]              | Skat is the Danish tax authority. This dataset contains content from its website skat.dk                                     | 122.12M   | [CC-0]                 |
-| [dannet]            | [DanNet](https://cst.ku.dk/projekter/dannet) is a Danish WordNet                                                             | 1.52M     | [DanNet 1.0 License]   |
-| [retspraksis]       | Case law or judical practice in Denmark derived from [Retspraksis](https://da.wikipedia.org/wiki/Retspraksis)                | 57.08M    | [CC-0]                 |
-| [wikibooks]         | The Danish Subsection of [Wikibooks](https://www.wikibooks.org)                                                              | 6.24M     | [CC-0]                 |
-| [jvj]               | The works of the Danish author and poet, [Johannes V. Jensen](https://da.wikipedia.org/wiki/Johannes_V._Jensen)              | 3.55M     | [CC-BY-SA 4.0]         |
-| [gutenberg]         | The Danish subsection from Project [Gutenberg](https://www.gutenberg.org)                                                    | 6.76M     | [Gutenberg License]    |
-| [botxt]             | The Bornholmsk Ordbog Dictionary Projec                                                                                      | 847.97K   | [CC-0]                 |
-| [depbank]           | The Danish subsection of the [Universal Dependencies Treebank](https://github.com/UniversalDependencies/UD_Danish-DDT)       | 185.45K   | [CC-BY-SA 4.0]         |
-| [naat]              | Danish speeches from 1930-2022                                                                                               | 286.68K   | [CC-0]                 |
-| [synne]             | Dataset collected from [synnejysk forening's website](https://www.synnejysk.dk), covering the Danish dialect sønderjysk      | 52.51K    | [CC-0]                 |
-| [wiki]              | The Danish subsection of [wikipedia](https://en.wikipedia.org/wiki/Main_Page)                                                | 122.00M   | [CC-0]                 |
-| [nordjyllandnews]   | Articles from the Danish Newspaper [TV2 Nord](https://www.tv2nord.dk)                                                        | 37.91M    | [CC-0]                 |
-| [relig]             | Danish religious text from the 1700-2022                                                                                     | 1.24M     | [CC-0]                 |
-| **Total**           |                                                                                                                              | 1.84B     |                        |
-
-[lexdk]: data/lexdk/lexdk.md
-[opensubtitles]: data/opensubtitles/opensubtitles.md
-[retsinformationdk]: data/retsinformationdk/retsinformationdk.md
-[ep]: data/ep/ep.md
-[ft]: data/ft/ft.md
-[wikisource]: data/wikisource/wikisource.md
-[spont]: data/spont/spont.md
-[tv2r]: data/tv2r/tv2r.md
-[adl]: data/adl/adl.md
-[hest]: data/hest/hest.md
-[skat]: data/skat/skat.md
-[dannet]: data/dannet/dannet.md
-[retspraksis]: data/retspraksis/retspraksis.md
-[wikibooks]: data/wikibooks/wikibooks.md
-[jvj]: data/jvj/jvj.md
-[gutenberg]: data/gutenberg/gutenberg.md
-[botxt]: data/botxt/botxt.md
-[depbank]: data/depbank/depbank.md
-[naat]: data/naat/naat.md
-[synne]: data/synne/synne.md
-[wiki]: data/wiki/wiki.md
-[nordjyllandnews]: data/nordjyllandnews/nordjyllandnews.md
-[relig]: data/relig/relig.md
-
-
-[CC-0]: https://creativecommons.org/publicdomain/zero/1.0/legalcode.en
-[CC-BY-SA 4.0]: https://creativecommons.org/licenses/by-sa/4.0/deed.en
-[Danish Copyright Law]: ./data/retsinformationdk/retsinformationdk.md#license-information
-[DanNet 1.0 License]: ./data/dannet/dannet.md#license-information
-[Gutenberg License]: ./data/gutenberg/gutenberg.md#license-information
-<!-- END-MAIN TABLE -->
-
-
-You can learn more about each dataset by pressing
-
-<!-- ### Quality Control
-
-Dynaword performs quality checks along with each PR. These quality checks includes:
-- ensuring unique ids
-TODO:
-- checking for duplicates
--->
-
-
-
-### Dataset Statistics
-
-<!-- START-DATASET PLOTS -->
-<img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
-<img>
-<!-- END-DATASET PLOTS -->
+| Source            | License |
+| ----------------- | ------- |
+| adl               | Creative Commons Legal Code 1.0 Universal |
+| botxt             | Creative Commons Legal Code 1.0 Universal |
+| dannet            | [dannet license](https://cst.ku.dk/projekter/dannet/license.txt) |
+| depbank           | Attribution-ShareAlike 4.0 International |
+| ep                | Creative Commons Legal Code 1.0 Universal |
+| ft                | Creative Commons Legal Code 1.0 Universal |
+| gutenberg         | [gutenberg license](https://www.gutenberg.org/policy/license.html) |
+| hest              | Creative Commons Legal Code 1.0 Universal |
+| jvj               | Attribution-ShareAlike 4.0 International |
+| naat              | Creative Commons Legal Code 1.0 Universal |
+| relig             | Creative Commons Legal Code 1.0 Universal |
+| retsinformationdk | Danish Copyright law at https://www.retsinformation.dk/forms/r0710.aspx?id=164796 states "§ 9. Love, administrative forskrifter, retsafgørelser og lignende offentlige aktstykker er ikke genstand for ophavsret. Stk. 2. Bestemmelsen i stk. 1 gælder ikke for værker, der fremtræder som selvstændige bidrag i de i stk. 1 nævnte aktstykker. Sådanne værker må dog gengives i forbindelse med aktstykket. Retten til videre udnyttelse afhænger af de i øvrigt gældende regler." |
+| retspraksis       | Creative Commons Legal Code 1.0 Universal |
+| skat              | Creative Commons Legal Code 1.0 Universal |
+| spont             | Creative Commons Legal Code 1.0 Universal |
+| synne             | Creative Commons Legal Code 1.0 Universal |
+| tv2r              | The owner of this content is TV2 Regionerne, Denmark. Creative Commons Attribution 4.0 International |
+| wiki              | Creative Commons Legal Code 1.0 Universal |
+| wikibooks         | Creative Commons Legal Code 1.0 Universal |
+| wikisource        | Creative Commons Legal Code 1.0 Universal |
+
+These sources corresponds to the following top-level domains in the dataset:
+```python
+# mapping from domain to top-level domain
+domain_mapping_dict = {
+    "retsinformationdk": "Legal",
+    "skat": "Legal",
+    "retspraksis": "Legal",
+    "hest": "Social Media",
+    "cc": "Web",
+    "adl": "Wiki & Books",
+    "botxt": "Other",
+    "danavis": "News",
+    "dannet": "dannet",
+    "depbank": "Other",
+    "ep": "Conversation",
+    "ft": "Conversation",
+    "gutenberg": "Wiki & Books",
+    "jvj": "Wiki & Books",
+    "naat": "Conversation",
+    "opensub": "Conversation",
+    "relig": "Wiki & Books",
+    "spont": "Conversation",
+    "synne": "Other",
+    "tv2r": "News",
+    "wiki": "Wiki & Books",
+    "wikibooks": "Wiki & Books",
+    "wikisource": "Wiki & Books",
+    "twfv19": "Social Media",  # not present in this version of the dataset
+}
+```
+
+And the following mapping translates between the short form and the long form of the source name
+```python
+# mapping from domain to its long name format
+longname_mapping_dict = {
+    "retsinformationdk": "retsinformation.dk (Danish legal information)",
+    "skat": "Skat (Danish tax authority)",
+    "retspraksis": "retspraksis (Danish legal information)",
+    "hest": "Hestenettet (Danish debate forum)",
+    "cc": "Common Crawl",
+    "adl": " Archive for Danish Literature",
+    "botxt": "Bornholmsk (Danish dialect)",
+    "danavis": "Danish daily newspapers",
+    "dannet": "DanNet (Danish WordNet)",
+    "depbank": "Danish Dependency Treebank",
+    "ep": "European Parliament",
+    "ft": "Folketinget (Danish Parliament)",
+    "gutenberg": "Gutenberg",
+    "jvj": "Johannes V. Jensen (Danish author/poet)",
+    "naat": "NAAT",
+    "opensub": "Open Subtitles",
+    "relig": "Religious texts",
+    "spont": "Spontaneous speech",
+    "synne": "Synderjysk (Danish dialect)",
+    "tv2r": "TV 2 Radio (Danish news)",
+    "wiki": "Wikipedia",
+    "wikibooks": "Wikibooks",
+    "wikisource": "Wikisource",
+    "twfv19": "Twitter Folketingsvalget 2019 (Danish election tweets)",  # not present in this version of the dataset
+}
+```
 
 ## Additional Information
 
-### Contributing to the dataset
-
-We welcome contributions to the dataset such as new sources, better data filtering and so on. To get started on contributing please see [the contribution guidelines](CONTRIBUTING.md)
 
 ### Citation Information
 
-This version expand upon existing dataset sources such as the [Danish gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite the source of the dataset when using these datasets.
-
-### Disclaimer
-We do not own any of the text from which the data has been extracted.
-We only offer files that we believe we are free to redistribute. If any doubt occurs about the legality of any of our file downloads we will take them off right away after [contacting us](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/new).
-
-### Notice and take down policy
-Notice: Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
-
-- Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
-- Clearly identify the copyrighted work claimed to be infringed.
-- Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
-
-You can contact us through [this channel](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/new).
-
-Take down: We will comply to legitimate requests by removing the affected sources from the next release of the corpus.
+The original version of Danish Gigawords was created as a part of the following publication.
 
----
+> Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
 
-<h3 style="display: flex; align-items: center;">
-  <a href="https://www.foundationmodels.dk">
-    <img src="./docs/icon.png" width="30" style="margin-right: 10px;" />
-  </a>
-  A&nbsp;<a href=https://www.foundationmodels.dk>Danish Foundation Models</a>&nbsp;dataset
-</h3>
+```
+@inproceedings{dagw,
+  title = {{The Danish Gigaword Corpus}},
+  author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
+  year = 2021,
+  booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
+  publisher = {NEALT}
+}
+```
+
+<!--
+Todo:
+
+add tests
+- unique ids
+- valid metadata
+
+add ci:
+- summary statistics
+- tables
+
+prettify:
+- license as independent column
+- ensure pretty_name is standard
+- potentially remove some columns
+-->
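The `domain_mapping_dict` added in the README above lends itself to per-sample annotation, e.g. via `datasets.Dataset.map`. A minimal sketch, runnable without loading the corpus (the mapping here is an abbreviated copy of the README's dict, and the `top_level_domain` column name is our own, not part of the dataset):

```python
from collections import Counter

# Abbreviated copy of the README's source -> top-level-domain mapping
domain_mapping_dict = {
    "retsinformationdk": "Legal",
    "skat": "Legal",
    "retspraksis": "Legal",
    "hest": "Social Media",
    "adl": "Wiki & Books",
    "wiki": "Wiki & Books",
    "ep": "Conversation",
    "ft": "Conversation",
    "botxt": "Other",
}

def add_top_level_domain(sample: dict) -> dict:
    """Annotate one sample; suitable for ds.map(add_top_level_domain)."""
    sample["top_level_domain"] = domain_mapping_dict.get(sample["source"], "Other")
    return sample

samples = [{"source": "wiki"}, {"source": "skat"}, {"source": "hest"}]
counts = Counter(add_top_level_domain(s)["top_level_domain"] for s in samples)
# counts: one sample each of "Wiki & Books", "Legal", "Social Media"
```

Falling back to `"Other"` for unknown sources mirrors how the README's dict already buckets sources like `botxt` and `depbank`.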
data/adl/adl.md CHANGED
@@ -3,7 +3,7 @@ pretty_name: Archive for Danish Literature
 language:
 - da
 license: cc0-1.0
-license_name: CC-0
 size_categories:
 - 1-10k
 task_categories:
@@ -11,89 +11,47 @@ task_categories:
 - fill-mask
 task_ids:
 - language-modeling
-source_datasets:
-- danish-foundation-models/danish-gigaword
 ---
-
-# Dataset Card for Archive for Danish Literature
-
 ## Dataset Description
-
-<!-- START-SHORT DESCRIPTION -->
-Danish literature from 1700-2023 from the Archive for Danish Literature (ADL).
-<!-- END-SHORT DESCRIPTION -->
-
-See also dataset [entry](https://sprogteknologi.dk/dataset/public-adl-text-sources) on sprogteknologi.dk and their API [here](https://rawgit.com/Det-Kongelige-Bibliotek/access-digital-objects/master/form-demos/adl-form.html).
-
-<!-- START-DESC-STATS -->
-- **Language**: dan, dansk, Danish
-- **Number of samples**: 498
-- **Number of tokens (Llama 3)**: 58.49M
-- **Average document length (characters)**: 324932.24
-<!-- END-DESC-STATS -->
-
-
-
-## Dataset Structure
 An example from the dataset looks as follows.
-
-
-<!-- START-SAMPLE -->
-```py
 {
-  "text": "SAMLEDE VÆRKER\n\nJEPPE AAKJÆR GYLDENDALSKE BOGHANDEL - NORDISK FORLAG KJØBENHAVN OG\nKRISTIANIA 1919 0[...]",
-  "source": "adl",
-  "id": "adl_aakjaer06val",
-  "added": "2020-09-14",
-  "created": "1700-01-01, 2022-01-01",
-  "license": "Creative Commons Legal Code\n\nCC0 1.0 Universal",
-  "domain": "Wiki & Books",
-  "metadata": {
-    "source-pretty": "Archive for Danish Literature"
-  }
 }
 ```
 
-### Data Fields
-
-An entry in the dataset consists of the following fields:
 
-- `text`(`str`): The content of the document.
-- `source` (`str`): The source of the document (see [Source Data](#source-data)).
-- `id` (`str`): An unique identifier for each document.
-- `added` (`str`): An date for when the document was added to this collection.
-- `created` (`str`): An date range for when the document was originally created.
-- `license` (`str`): The license of the document. The licenses vary according to the source.
-- `domain` (`str`): The domain of the source
-- `metadata/source-pretty` (`str`): The long form version of the short-form source name
-- `metadata/*`: Potentially additional metadata
-<!-- END-SAMPLE -->
 
-
-### Dataset Statistics
-
-<!-- START-DATASET PLOTS -->
-<img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
-<img>
-<!-- END-DATASET PLOTS -->
-
-
-## Additional Information
-
-
-### Citation Information
-
-This dataset was initially published as part of the [Danish gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite and reference it if you use this dataset:
-
-> Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
-
-```bash
-@inproceedings{dagw,
-  title = {{The Danish Gigaword Corpus}},
94
- author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
95
- year = 2021,
96
- booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
97
- publisher = {NEALT}
98
- }
99
- ```
 
3
  language:
4
  - da
5
  license: cc0-1.0
6
+ license_name: Creative Commons Zero v1.0 Universal
7
  size_categories:
8
  - 1-10k
9
  task_categories:
 
11
  - fill-mask
12
  task_ids:
13
  - language-modeling
 
 
14
  ---
15
+ # Dataset Card for Archive for Danish Literature
 
 
16
  ## Dataset Description
17
+ - **Number of records:** 498
18
+ - **Languages:** Danish
19
+ ## Dataset Structure
 
 
 
 
 
 
 
 
 
 
 
 
 
 
20
  An example from the dataset looks as follows.
21
+ ```yaml
 
 
 
22
  {
23
+ 'text': 'SAMLEDE VÆRKER
24
+
25
+ JEPPE AAKJÆR GYLDENDALSKE BOGHANDE',
26
+ 'source': 'adl',
27
+ 'id': 'adl_aakjaer06val',
28
+ 'added': '2020-09-14',
29
+ 'created': '1700-01-01, 2022-01-01',
30
+ 'metadata': {
31
+ 'domain': 'Wiki & Books',
32
+ 'license': 'Creative Commons Legal Code
33
+
34
+ CC0 1.0 Universal',
35
+ 'source-pretty': ' Archive for Danish Literature'
36
+ }
37
  }
38
  ```
39
 
40
+ ## Data Fields
 
 
41
 
42
+ - **id**: source-specific identifier.
43
+ - **text**: textual content of the document.
44
+ - **source**: source of the data.
45
+ - **added**: timestamp when ai2 acquired this data.
46
+ - **created**: timestamp when the original document was created (best guess if not available).
47
+ - **metadata**: source-specific metadata.
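Note that `created` holds a comma-separated date range rather than a single timestamp; a minimal sketch of splitting it, assuming the `'YYYY-MM-DD, YYYY-MM-DD'` shape shown in the sample record above:

```python
from datetime import date

def parse_created(created: str) -> tuple[date, date]:
    """Split the `created` range ('start, end') into two dates."""
    start, end = (date.fromisoformat(p.strip()) for p in created.split(","))
    return start, end
```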
 
 
 
 
48
 
49
+ ## License Information
50
+ <details>
51
+ <summary>Creative Commons Zero v1.0 Universal</summary>
52
+ <p>
53
+ Creative Commons Legal Code
54
 
55
+ CC0 1.0 Universal
56
+ </p>
57
+ </details>
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
data/adl/adl.parquet CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:5af9444529d92c37f35161829c652f8b928f9f1dfb5836065f320d1e1d698818
3
- size 106401744
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d51c291d1cf6461a1e59dd45dfd63ee39a5c62cd3c2fd05877489d50aaa5115e
3
+ size 106409966
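The parquet files above are stored as git-LFS pointers with the three `key value` lines shown in this hunk; a small sketch of reading that format (the pointer text is copied from the diff above):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a git-LFS pointer of the 'key value' line format shown above."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    fields["size"] = int(fields["size"])  # size is the remote file size in bytes
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:d51c291d1cf6461a1e59dd45dfd63ee39a5c62cd3c2fd05877489d50aaa5115e
size 106409966"""
```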
data/adl/descriptive_stats.json DELETED
@@ -1 +0,0 @@
1
- {"number_of_samples": 498, "average_document_length": 324932.2429718876, "number_of_tokens": 58493311, "language": "dan, dansk, Danish", "revision": "6a88cbd06a598259a4879ee118c8ab1843c500ff"}
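The deleted `descriptive_stats.json` above carried per-dataset summary fields; the first two can be recomputed from the document texts as sketched below (the token count is omitted here since it depends on the Llama 3 tokenizer):

```python
def descriptive_stats(texts):
    """Recompute the sample count and average character length that
    descriptive_stats.json reported for a dataset."""
    return {
        "number_of_samples": len(texts),
        "average_document_length": sum(map(len, texts)) / len(texts),
    }
```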
 
 
data/adl/images/dist_document_length.png DELETED
Git LFS Details
  • SHA256: 297677d067d7831f90c4d539c1d160af2087a25119691bbfda61e95de62ca5f5
  • Pointer size: 131 Bytes
  • Size of remote file: 539 kB
data/botxt/botxt.md CHANGED
@@ -1,9 +1,9 @@
1
  ---
2
- pretty_name: Bornholmsk
3
  language:
4
  - da
5
  license: cc0-1.0
6
- license_name: CC-0
7
  size_categories:
8
  - 1-10k
9
  task_categories:
@@ -11,88 +11,47 @@ task_categories:
11
  - fill-mask
12
  task_ids:
13
  - language-modeling
14
- source_datasets:
15
- - danish-foundation-models/danish-gigaword
16
  ---
17
-
18
- # Dataset Card for Bornholmsk
19
-
20
  ## Dataset Description
21
-
22
- <!-- START-SHORT DESCRIPTION -->
23
- The Bornholmsk Ordbog Dictionary Project
24
- <!-- END-SHORT DESCRIPTION -->
25
-
26
- Fictional texts of various kinds written in Bornholmsk, the dialect spoken on the Danish island of Bornholm (The language code for Bornholmsk under IETF BCP-47 is da-bornholm), have been digitized (OCR’ed and proofread) by volunteers working within the recently resumed Bornholmsk Ordbog dictionary project (Kjeldsen, 2019). Most of the material included is written by Otto J. Lund in the period 1930-48 (novels, short stories, and poems). The Bornholmsk subcorpus, which in its present state amounts to circa 400 K words, also includes folk stories published by J. P. Kuhre in 1938, and by K. M. Kofoed in 1935, fictional letters by various authors published in the 1930s, as well as poems by Alfred Jensen published in 1948 and various other texts from the same period. The non-standardized orthography varies considerably from source to source. The Bornholmsk part of the Danish Gigaword is a significantly extended dataset, well beyond that studied in earlier NLP work on the dialect [(Derczynski and Kjeldsen, 2019)](https://aclanthology.org/W19-6138/).
27
-
28
-
29
- <!-- START-DESC-STATS -->
30
- - **Language**: dan, dansk, Danish
31
- - **Number of samples**: 106
32
- - **Number of tokens (Llama 3)**: 847.97K
33
- - **Average document length (characters)**: 18972.42
34
- <!-- END-DESC-STATS -->
35
-
36
-
37
-
38
- ## Dataset Structure
39
  An example from the dataset looks as follows.
40
-
41
-
42
- <!-- START-SAMPLE -->
43
- ```py
44
  {
45
- "text": "Ræua-Lârs\n\nRæua-Lârs å hans Konna, Stina, bode uda i Torpabakkana. Hanj hed nok æjla Lârs\nNielsen, m[...]",
46
- "source": "botxt",
47
- "id": "botxt_0000040",
48
- "added": "2024-05-16",
49
- "created": "2000-01-01, 2022-01-01",
50
- "license": "Creative Commons Legal Code\n\nCC0 1.0 Universal",
51
- "domain": "Other",
52
- "metadata": {
53
- "source-pretty": "Bornholmsk (Danish dialect)"
54
- }
 
 
 
 
55
  }
56
  ```
57
 
58
- ### Data Fields
59
 
60
- An entry in the dataset consists of the following fields:
 
 
 
 
 
61
 
62
- - `text`(`str`): The content of the document.
63
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
64
- - `id` (`str`): An unique identifier for each document.
65
- - `added` (`str`): An date for when the document was added to this collection.
66
- - `created` (`str`): An date range for when the document was originally created.
67
- - `license` (`str`): The license of the document. The licenses vary according to the source.
68
- - `domain` (`str`): The domain of the source
69
- - `metadata/source-pretty` (`str`): The long form version of the short-form source name
70
- - `metadata/*`: Potentially additional metadata
71
- <!-- END-SAMPLE -->
72
 
73
- ### Dataset Statistics
74
-
75
- <!-- START-DATASET PLOTS -->
76
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
77
- <img>
78
- <!-- END-DATASET PLOTS -->
79
-
80
-
81
- ## Additional Information
82
-
83
-
84
- ### Citation Information
85
-
86
- This dataset was initially published as part of the [Danish gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite and reference it if you use this dataset:
87
-
88
- > Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
89
-
90
- ```bash
91
- @inproceedings{dagw,
92
- title = {{The Danish Gigaword Corpus}},
93
- author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
94
- year = 2021,
95
- booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
96
- publisher = {NEALT}
97
- }
98
- ```
 
1
  ---
2
+ pretty_name: Bornholmsk (Danish dialect)
3
  language:
4
  - da
5
  license: cc0-1.0
6
+ license_name: Creative Commons Zero v1.0 Universal
7
  size_categories:
8
  - 1-10k
9
  task_categories:
 
11
  - fill-mask
12
  task_ids:
13
  - language-modeling
 
 
14
  ---
15
+ # Dataset Card for Bornholmsk (Danish dialect)
 
 
16
  ## Dataset Description
17
+ - **Number of records:** 106
18
+ - **Languages:** Danish
19
+ ## Dataset Structure
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
20
  An example from the dataset looks as follows.
21
+ ```yaml
 
 
 
22
  {
23
+ 'text': 'Ræua-Lârs
24
+
25
+ Ræua-Lârs å hans Konna, Stina, bode uda',
26
+ 'source': 'botxt',
27
+ 'id': 'botxt_0000040',
28
+ 'added': '2024-05-16',
29
+ 'created': '2000-01-01, 2022-01-01',
30
+ 'metadata': {
31
+ 'domain': 'Other',
32
+ 'license': 'Creative Commons Legal Code
33
+
34
+ CC0 1.0 Universal',
35
+ 'source-pretty': 'Bornholmsk (Danish dialect)'
36
+ }
37
  }
38
  ```
39
 
40
+ ## Data Fields
41
 
42
+ - **id**: source-specific identifier.
43
+ - **text**: textual content of the document.
44
+ - **source**: source of the data.
45
+ - **added**: timestamp when ai2 acquired this data.
46
+ - **created**: timestamp when the original document was created (best guess if not available).
47
+ - **metadata**: source-specific metadata.
48
 
49
+ ## License Information
50
+ <details>
51
+ <summary>Creative Commons Zero v1.0 Universal</summary>
52
+ <p>
53
+ Creative Commons Legal Code
 
 
 
 
 
54
 
55
+ CC0 1.0 Universal
56
+ </p>
57
+ </details>
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
data/botxt/botxt.parquet CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:ec89c1dd57f1987dc6fe059a33a1d16b41b8c87439673a381f9671497f65b017
3
- size 1344033
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b42642896dfda21b23bb8e8ef5ba65f878ebfa5fec2f6d57aec1e06778c75bbf
3
+ size 1353171
data/botxt/descriptive_stats.json DELETED
@@ -1 +0,0 @@
1
- {"number_of_samples": 106, "average_document_length": 18972.415094339623, "number_of_tokens": 847973, "language": "dan, dansk, Danish", "revision": "6a88cbd06a598259a4879ee118c8ab1843c500ff"}
 
 
data/botxt/images/dist_document_length.png DELETED
Git LFS Details
  • SHA256: e98f2f59f8cbe8be5691f1d7c073b2c13361d331546f9451d24b27fcde649f6c
  • Pointer size: 131 Bytes
  • Size of remote file: 541 kB
data/dannet/dannet.md CHANGED
@@ -1,8 +1,8 @@
1
  ---
2
- pretty_name: DanNet
3
  language:
4
  - da
5
- license: other
6
  license_name: DanNet 1.0 License
7
  size_categories:
8
  - 10k-100k
@@ -11,68 +11,74 @@ task_categories:
11
  - fill-mask
12
  task_ids:
13
  - language-modeling
14
- source_datasets:
15
- - danish-foundation-models/danish-gigaword
16
  ---
17
-
18
- # Dataset Card for DanNet
19
-
20
- <!-- START-SHORT DESCRIPTION -->
21
- [DanNet](https://cst.ku.dk/projekter/dannet) is a Danish WordNet.
22
- <!-- END-SHORT DESCRIPTION -->
23
-
24
-
25
- A WordNet is a lexico-semantic network which show the meaning and the relation between words through named connections. It can be considered a machine-readable dictionary.
26
-
27
-
28
  ## Dataset Description
 
 
 
 
 
 
 
 
 
 
 
 
 
 
29
 
 
 
 
30
 
31
- <!-- START-DESC-STATS -->
32
- - **Language**: dan, dansk, Danish
33
- - **Number of samples**: 49.04K
34
- - **Number of tokens (Llama 3)**: 1.52M
35
- - **Average document length (characters)**: 90.80
36
- <!-- END-DESC-STATS -->
37
 
 
38
 
 
 
 
 
 
39
 
40
- ## Dataset Structure
41
- An example from the dataset looks as follows.
 
 
 
 
42
 
 
 
 
 
 
 
 
43
 
44
- <!-- START-SAMPLE -->
45
- ```py
46
- {
47
- "text": "Når fodboldholdet fra 1. division i Ikast spiller hjemmekampe, lyder råbet ud over Ikast Stadion: We[...]",
48
- "source": "dannet",
49
- "id": "dannet_46506",
50
- "added": "2020-09-24",
51
- "created": "2000-01-01, 2022-01-01",
52
- "license": "Commercial Use of DanNet\n\nDanNet may be used in commercial applications in accordance with the follo[...]",
53
- "domain": "dannet",
54
- "metadata": {
55
- "source-pretty": "DanNet (Danish WordNet)"
56
- }
57
  }
58
  ```
59
 
60
- ### Data Fields
61
-
62
- An entry in the dataset consists of the following fields:
63
-
64
- - `text`(`str`): The content of the document.
65
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
66
- - `id` (`str`): An unique identifier for each document.
67
- - `added` (`str`): An date for when the document was added to this collection.
68
- - `created` (`str`): An date range for when the document was originally created.
69
- - `license` (`str`): The license of the document. The licenses vary according to the source.
70
- - `domain` (`str`): The domain of the source
71
- - `metadata/source-pretty` (`str`): The long form version of the short-form source name
72
- - `metadata/*`: Potentially additional metadata
73
- <!-- END-SAMPLE -->
74
-
75
 
 
 
 
 
 
 
76
 
77
  ## License Information
78
  <details>
@@ -119,31 +125,3 @@ LICENSEE agrees to preserve same.
119
  DanNet 2.1 Copyright 2009-12 by University of Copenhagen and Society for Danish
120
  </p>
121
  </details>
122
-
123
-
124
- ### Dataset Statistics
125
-
126
- <!-- START-DATASET PLOTS -->
127
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
128
- <img>
129
- <!-- END-DATASET PLOTS -->
130
-
131
-
132
- ## Additional Information
133
-
134
-
135
- ### Citation Information
136
-
137
- This dataset was initially published as part of the [Danish gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite and reference it if you use this dataset:
138
-
139
- > Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
140
-
141
- ```bash
142
- @inproceedings{dagw,
143
- title = {{The Danish Gigaword Corpus}},
144
- author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
145
- year = 2021,
146
- booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
147
- publisher = {NEALT}
148
- }
149
- ```
 
1
  ---
2
+ pretty_name: DanNet (Danish WordNet)
3
  language:
4
  - da
5
+ license: DanNet 1.0 License
6
  license_name: DanNet 1.0 License
7
  size_categories:
8
  - 10k-100k
 
11
  - fill-mask
12
  task_ids:
13
  - language-modeling
 
 
14
  ---
15
+ # Dataset Card for DanNet (Danish WordNet)
 
 
 
 
 
 
 
 
 
 
16
  ## Dataset Description
17
+ - **Number of records:** 49040
18
+ - **Languages:** Danish
19
+ ## Dataset Structure
20
+ An example from the dataset looks as follows.
21
+ ```yaml
22
+ {
23
+ 'text': 'Når fodboldholdet fra 1. division i Ikast spiller ',
24
+ 'source': 'dannet',
25
+ 'id': 'dannet_46506',
26
+ 'added': '2020-09-24',
27
+ 'created': '2000-01-01, 2022-01-01',
28
+ 'metadata': {
29
+ 'domain': 'dannet',
30
+ 'license': 'Commercial Use of DanNet
31
 
32
+ DanNet may be used in commercial applications in accordance with the following
33
+ license agreement. An attorney representing the commercial interest should
34
+ review this DanNet license with respect to the intended use.
35
 
36
+ DanNet 1.0 License
 
 
 
 
 
37
 
38
+ DanNet Release 2.1
39
 
40
+ This software and database is being provided to you, the LICENSEE, by University
41
+ of Copenhagen and Society for Danish Language and Literature under the following
42
+ license. By obtaining, using and/or copying this software and database, you
43
+ agree that you have read, understood, and will comply with these terms and
44
+ conditions.
45
 
46
+ Permission to use, copy, modify and distribute this software and database and
47
+ its documentation for any purpose and without fee or royalty is hereby granted,
48
+ provided that you agree to comply with the following copyright notice and
49
+ statements, including the disclaimer, and that the same appear on ALL copies of
50
+ the software, database and documentation, including modifications that you make
51
+ for internal use or for distribution.
52
 
53
+ THIS SOFTWARE AND DATABASE IS PROVIDED "AS IS" AND UNIVERSITY OF COPENHAGEN and
54
+ SOCIETY FOR DANISH LANGUAGE AND LITERATURE MAKE NO REPRESENTATIONS OR
55
+ WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION,
56
+ UNIVERSITY OF COPENHAGEN AND SOCIETY FOR DANISH LANGUAGE AND LITERATURE MAKE NO
57
+ REPRESENTATIONS OR WARRANTIES OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR
58
+ PURPOSE OR THAT THE USE OF THE LICENSED SOFTWARE, DATABASE OR DOCUMENTATION WILL
59
+ NOT INFRINGE ANY THIRD PARTY PATENTS, COPYRIGHTS, TRADEMARKS OR OTHER RIGHTS.
60
 
61
+ The names of University of Copenhagen and Society for Danish Language and
62
+ Literature may not be used in advertising or publicity pertaining to
63
+ distribution of the software and/or database. Title to copyright in this
64
+ software, database and any associated documentation shall at all times remain
65
+ with University of Copenhagen and Society for Danish Language and Literature and
66
+ LICENSEE agrees to preserve same.
67
+
68
+ DanNet 2.1 Copyright 2009-12 by University of Copenhagen and Society for Danish',
69
+ 'source-pretty': 'DanNet (Danish WordNet)'
70
+ }
 
 
 
71
  }
72
  ```
73
 
74
+ ## Data Fields
 
 
 
 
 
 
 
 
 
 
 
 
 
 
75
 
76
+ - **id**: source-specific identifier.
77
+ - **text**: textual content of the document.
78
+ - **source**: source of the data.
79
+ - **added**: timestamp when ai2 acquired this data.
80
+ - **created**: timestamp when the original document was created (best guess if not available).
81
+ - **metadata**: source-specific metadata.
82
 
83
  ## License Information
84
  <details>
 
125
  DanNet 2.1 Copyright 2009-12 by University of Copenhagen and Society for Danish
126
  </p>
127
  </details>
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
data/dannet/dannet.parquet CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:b9006617e35f568e7b7e4dacc87c4a490cf0a9170bd4e91488de77e00d3fb38c
3
- size 4487008
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:905c2441a4c242e24d370775e9e035df3c67a7a1d797a615297cb6a1bbf51a96
3
+ size 4743422
data/dannet/descriptive_stats.json DELETED
@@ -1 +0,0 @@
1
- {"number_of_samples": 49040, "average_document_length": 90.80340538336053, "number_of_tokens": 1523416, "language": "dan, dansk, Danish", "revision": "6a88cbd06a598259a4879ee118c8ab1843c500ff"}
 
 
data/dannet/images/dist_document_length.png DELETED
Git LFS Details
  • SHA256: 91dca32a1fd83b3699bb8ebae083dc697a0dac4b703ada720381448216ea0117
  • Pointer size: 131 Bytes
  • Size of remote file: 538 kB
data/depbank/depbank.md CHANGED
@@ -3,7 +3,7 @@ pretty_name: Danish Dependency Treebank
3
  language:
4
  - da
5
  license: cc-by-sa-4.0
6
- license_name: CC-BY-SA 4.0
7
  size_categories:
8
  - 1-10k
9
  task_categories:
@@ -11,93 +11,41 @@ task_categories:
11
  - fill-mask
12
  task_ids:
13
  - language-modeling
14
- source_datasets:
15
- - danish-foundation-models/danish-gigaword
16
  ---
17
-
18
  # Dataset Card for Danish Dependency Treebank
19
-
20
- <!-- START-SHORT DESCRIPTION -->
21
- The Danish subsection of the [Universal Dependencies Treebank](https://github.com/UniversalDependencies/UD_Danish-DDT).
22
- <!-- END-SHORT DESCRIPTION -->
23
-
24
-
25
- The Danish UD treebank has been converted from the Danish Dependency Treebank (Buch-Kromman, 2003) into Universal Dependencies (UD). It consists of 5,512 sentences (100k words). The Danish source texts and the Danish part-of-speech tags were created by the PAROLE-DK project (Keson 1998) by the Danish Society for Language and Literature.
26
-
27
- While the dataset was initially intended as a rich annotation, this corpora only uses the raw text.
28
-
29
  ## Dataset Description
30
-
31
-
32
- <!-- START-DESC-STATS -->
33
- - **Language**: dan, dansk, Danish
34
- - **Number of samples**: 536
35
- - **Number of tokens (Llama 3)**: 185.45K
36
- - **Average document length (characters)**: 1018.90
37
- <!-- END-DESC-STATS -->
38
-
39
-
40
-
41
- ## Dataset Structure
42
  An example from the dataset looks as follows.
43
-
44
-
45
- <!-- START-SAMPLE -->
46
- ```py
47
  {
48
- "text": "\nH.L. Hansen var en usædvanmlig og frodig personlighed. Han skabte \nglæde og munterhed omkring sig o[...]",
49
- "source": "depbank",
50
- "id": "depbank_0375",
51
- "added": "2024-05-16",
52
- "created": "2000-01-01, 2022-01-01",
53
- "license": "Attribution-ShareAlike 4.0 International",
54
- "domain": "Other",
55
- "metadata": {
56
- "source-pretty": "Danish Dependency Treebank"
57
- }
58
  }
59
  ```
60
 
61
- ### Data Fields
62
-
63
- An entry in the dataset consists of the following fields:
64
-
65
- - `text`(`str`): The content of the document.
66
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
67
- - `id` (`str`): An unique identifier for each document.
68
- - `added` (`str`): An date for when the document was added to this collection.
69
- - `created` (`str`): An date range for when the document was originally created.
70
- - `license` (`str`): The license of the document. The licenses vary according to the source.
71
- - `domain` (`str`): The domain of the source
72
- - `metadata/source-pretty` (`str`): The long form version of the short-form source name
73
- - `metadata/*`: Potentially additional metadata
74
- <!-- END-SAMPLE -->
75
-
76
-
77
- ### Dataset Statistics
78
-
79
- <!-- START-DATASET PLOTS -->
80
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
81
- <img>
82
- <!-- END-DATASET PLOTS -->
83
-
84
-
85
-
86
- ## Additional Information
87
-
88
-
89
- ### Citation Information
90
-
91
- This dataset was initially published as part of the [Danish gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite and reference it if you use this dataset:
92
-
93
- > Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
94
-
95
- ```bash
96
- @inproceedings{dagw,
97
- title = {{The Danish Gigaword Corpus}},
98
- author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
99
- year = 2021,
100
- booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
101
- publisher = {NEALT}
102
- }
103
- ```
 
3
  language:
4
  - da
5
  license: cc-by-sa-4.0
6
+ license_name: Creative Commons Attribution Share Alike 4.0
7
  size_categories:
8
  - 1-10k
9
  task_categories:
 
11
  - fill-mask
12
  task_ids:
13
  - language-modeling
 
 
14
  ---
 
15
  # Dataset Card for Danish Dependency Treebank
 
 
 
 
 
 
 
 
 
 
16
  ## Dataset Description
17
+ - **Number of records:** 536
18
+ - **Languages:** Danish
19
+ ## Dataset Structure
 
 
 
 
 
 
 
 
 
20
  An example from the dataset looks as follows.
21
+ ```yaml
 
 
 
22
  {
23
+ 'text': 'H.L. Hansen var en usædvanmlig og frodig personlig',
24
+ 'source': 'depbank',
25
+ 'id': 'depbank_0375',
26
+ 'added': '2024-05-16',
27
+ 'created': '2000-01-01, 2022-01-01',
28
+ 'metadata': {
29
+ 'domain': 'Other',
30
+ 'license': 'Attribution-ShareAlike 4.0 International',
31
+ 'source-pretty': 'Danish Dependency Treebank'
32
+ }
33
  }
34
  ```
35
 
36
+ ## Data Fields
37
+
38
+ - **id**: source-specific identifier.
39
+ - **text**: textual content of the document.
40
+ - **source**: source of the data.
41
+ - **added**: timestamp when ai2 acquired this data.
42
+ - **created**: timestamp when the original document was created (best guess if not available).
43
+ - **metadata**: source-specific metadata.
44
+
45
+ ## License Information
46
+ <details>
47
+ <summary>Creative Commons Attribution Share Alike 4.0</summary>
48
+ <p>
49
+ Attribution-ShareAlike 4.0 International
50
+ </p>
51
+ </details>
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
data/depbank/depbank.parquet CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:3d4172e2ab4d7256ca5b76ad45b4d7326616e6679642056fdef20c5e3a8b1c62
3
- size 392216
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:863aac5735bee6995b665864ea355b488e35bb2cca696ea340d8febc653b8886
3
+ size 394917
data/depbank/descriptive_stats.json DELETED
@@ -1 +0,0 @@
1
- {"number_of_samples": 536, "average_document_length": 1018.8992537313433, "number_of_tokens": 185454, "language": "dan, dansk, Danish", "revision": "6a88cbd06a598259a4879ee118c8ab1843c500ff"}
 
 
data/depbank/images/dist_document_length.png DELETED
Git LFS Details
  • SHA256: b23e81411e3f3b86bbd3990cf2e59f4a08f7dae10b908cf3101487069c0296bc
  • Pointer size: 131 Bytes
  • Size of remote file: 547 kB
data/ep/descriptive_stats.json DELETED
@@ -1 +0,0 @@
1
- {"number_of_samples": 4213, "average_document_length": 74063.40469973891, "number_of_tokens": 100888932, "language": "dan, dansk, Danish", "revision": "6a88cbd06a598259a4879ee118c8ab1843c500ff"}
 
 
data/ep/ep.md CHANGED
@@ -3,7 +3,7 @@ pretty_name: European Parliament
3
  language:
4
  - da
5
  license: cc0-1.0
6
- license_name: CC-0
7
  size_categories:
8
  - 1-10k
9
  task_categories:
@@ -11,91 +11,45 @@ task_categories:
11
  - fill-mask
12
  task_ids:
13
  - language-modeling
14
- source_datasets:
15
- - danish-foundation-models/danish-gigaword
16
  ---
17
-
18
  # Dataset Card for European Parliament
19
-
20
- <!-- START-SHORT DESCRIPTION -->
21
- The Danish subsection of [Europarl](https://aclanthology.org/2005.mtsummit-papers.11/).
22
- <!-- END-SHORT DESCRIPTION -->
23
-
24
-
25
- The europarl is a corpus of parallel text in 11 languages from the proceedings of the European Parliament, which are published on the web. This corpus has found widespread use in the NLP community. It was initially intended as training data for statistical machine translation.
26
-
27
-
28
  ## Dataset Description
29
-
30
-
31
- <!-- START-DESC-STATS -->
32
- - **Language**: dan, dansk, Danish
33
- - **Number of samples**: 4.21K
34
- - **Number of tokens (Llama 3)**: 100.89M
35
- - **Average document length (characters)**: 74063.40
36
- <!-- END-DESC-STATS -->
37
-
38
-
39
-
40
- ## Dataset Structure
41
  An example from the dataset looks as follows.
42
-
43
-
44
- <!-- START-SAMPLE -->
45
- ```py
46
  {
47
- "text": "TALER 6703: Jeg har stemt for henstillingen om godkendelse af opdelingsanordninger til beskyttelse a[...]",
48
- "source": "ep",
49
- "id": "ep_07-02-01-008",
50
- "added": "2019-11-20",
51
- "created": "2004-01-01, 2009-01-01",
52
- "license": "Creative Commons Legal Code\n\nCC0 1.0 Universal",
53
- "domain": "Conversation",
54
- "metadata": {
55
- "source-pretty": "European Parliament"
56
- }
 
 
57
  }
58
  ```
59
 
60
- ### Data Fields
61
-
62
- An entry in the dataset consists of the following fields:
63
-
64
- - `text`(`str`): The content of the document.
65
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
66
- - `id` (`str`): An unique identifier for each document.
67
- - `added` (`str`): An date for when the document was added to this collection.
68
- - `created` (`str`): An date range for when the document was originally created.
69
- - `license` (`str`): The license of the document. The licenses vary according to the source.
70
- - `domain` (`str`): The domain of the source
71
- - `metadata/source-pretty` (`str`): The long form version of the short-form source name
72
- - `metadata/*`: Potentially additional metadata
73
- <!-- END-SAMPLE -->
74
-
75
- ### Dataset Statistics
76
-
77
- <!-- START-DATASET PLOTS -->
78
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
79
- <img>
80
- <!-- END-DATASET PLOTS -->
81
-
82
 
 
 
 
 
 
 
83
 
84
- ## Additional Information
 
 
 
 
85
 
86
-
87
- ### Citation Information
88
-
89
- This dataset was initially published as part of the [Danish gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite and reference it if you use this dataset:
90
-
91
- > Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
92
-
93
- ```bash
94
- @inproceedings{dagw,
95
- title = {{The Danish Gigaword Corpus}},
96
- author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
97
- year = 2021,
98
- booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
99
- publisher = {NEALT}
100
- }
101
- ```
 
3
  language:
4
  - da
5
  license: cc0-1.0
6
+ license_name: Creative Commons Zero v1.0 Universal
7
  size_categories:
8
  - 1-10k
9
  task_categories:
 
11
  - fill-mask
12
  task_ids:
13
  - language-modeling
 
 
14
  ---
 
15
  # Dataset Card for European Parliament
 
 
 
 
 
 
 
 
 
16
  ## Dataset Description
17
+ - **Number of records:** 4213
18
+ - **Languages:** Danish
19
+ ## Dataset Structure
 
 
 
 
 
 
 
 
 
20
  An example from the dataset looks as follows.
21
+ ```yaml
 
 
 
22
  {
23
+ 'text': 'TALER 6703: Jeg har stemt for henstillingen om god',
24
+ 'source': 'ep',
25
+ 'id': 'ep_07-02-01-008',
26
+ 'added': '2019-11-20',
27
+ 'created': '2004-01-01, 2009-01-01',
28
+ 'metadata': {
29
+ 'domain': 'Conversation',
30
+ 'license': 'Creative Commons Legal Code
31
+
32
+ CC0 1.0 Universal',
33
+ 'source-pretty': 'European Parliament'
34
+ }
35
  }
36
  ```
37
 
38
+ ## Data Fields
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
39
 
40
+ - **id**: source-specific identifier.
41
+ - **text**: textual content of the document.
42
+ - **source**: source of the data.
43
+ - **added**: timestamp when ai2 acquired this data.
44
+ - **created**: timestamp when the original document was created (best guess if not available).
45
+ - **metadata**: source-specific metadata.
46
 
47
+ ## License Information
48
+ <details>
49
+ <summary>Creative Commons Zero v1.0 Universal</summary>
50
+ <p>
51
+ Creative Commons Legal Code
52
 
53
+ CC0 1.0 Universal
54
+ </p>
55
+ </details>
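The field list in the card above maps onto plain Python dicts once records are loaded; a minimal illustrative sketch (standard library only) using the sample record shown in this card:

```python
# Sample record copied from the card above (the 'text' value is truncated there)
record = {
    "text": "TALER 6703: Jeg har stemt for henstillingen om god",
    "source": "ep",
    "id": "ep_07-02-01-008",
    "added": "2019-11-20",
    "created": "2004-01-01, 2009-01-01",
    "metadata": {
        "domain": "Conversation",
        "license": "Creative Commons Legal Code\n\nCC0 1.0 Universal",
        "source-pretty": "European Parliament",
    },
}

# 'created' holds a single comma-separated date range, not two fields,
# so consumers have to split it themselves.
start, end = (part.strip() for part in record["created"].split(","))
print(start, end)  # 2004-01-01 2009-01-01
```

Note that `domain`, `license`, and `source-pretty` live under the nested `metadata` key in this layout, unlike the flat layout used by the old cards.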
data/ep/ep.parquet CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:f76e86335bd765b3ff3cf5ccdfe8f220e39349a0344fdf2b9918adbdd96aedeb
3
- size 170796385
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:85c8eb6954522c757ee3e410f7f277a74ecedd8e7507ef00a698a654dc8bea20
3
+ size 171150568
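The parquet change above only rewrites a Git LFS pointer: the repository tracks a small three-line text file (version, oid, size) while the actual parquet bytes live on the LFS server. A minimal sketch of parsing such a pointer (hypothetical helper, standard library only):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its space-separated key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:85c8eb6954522c757ee3e410f7f277a74ecedd8e7507ef00a698a654dc8bea20
size 171150568
"""

info = parse_lfs_pointer(pointer)
print("remote parquet size (bytes):", info["size"])
```

This is why the diff shows only `oid` and `size` changing when the parquet file is regenerated.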
data/ep/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: 8914d9fad81bcbc519c29b7c258a256d4eb7084ed8ff9c9100a93ad87fbb4171
  • Pointer size: 131 Bytes
  • Size of remote file: 545 kB
data/ft/descriptive_stats.json DELETED
@@ -1 +0,0 @@
1
- {"number_of_samples": 1315, "average_document_length": 266745.19163498096, "number_of_tokens": 114087231, "language": "dan, dansk, Danish", "revision": "6a88cbd06a598259a4879ee118c8ab1843c500ff"}
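Each deleted `descriptive_stats.json` held a handful of aggregates. Two of them, the sample count and the average document length in characters, are straightforward to recompute from the records; a minimal illustrative sketch (standard library only, hypothetical helper name). The `number_of_tokens` field depends on the Llama 3 tokenizer and is not reproduced here.

```python
def descriptive_stats(texts):
    """Compute the sample count and average document length (in characters)."""
    n = len(texts)
    avg_len = sum(len(t) for t in texts) / n if n else 0.0
    return {"number_of_samples": n, "average_document_length": avg_len}

stats = descriptive_stats(["abc", "abcde"])
print(stats)  # {'number_of_samples': 2, 'average_document_length': 4.0}
```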
 
 
data/ft/ft.md CHANGED
@@ -1,9 +1,9 @@
1
  ---
2
- pretty_name: Folketinget
3
  language:
4
  - da
5
  license: cc0-1.0
6
- license_name: CC-0
7
  size_categories:
8
  - 1-10k
9
  task_categories:
@@ -11,92 +11,45 @@ task_categories:
11
  - fill-mask
12
  task_ids:
13
  - language-modeling
14
- source_datasets:
15
- - danish-foundation-models/danish-gigaword
16
  ---
17
-
18
- # Dataset Card for Folketinget
19
-
20
  ## Dataset Description
21
-
22
- <!-- START-SHORT DESCRIPTION -->
23
- Records from all meetings of The Danish parliament (Folketinget) in the parliament hall.
24
- <!-- END-SHORT DESCRIPTION -->
25
-
26
-
27
- All records have a transcript produced by commercial Automatic Speech Recognition (ASR) followed by postediting by linguists employed by Folketinget for intelligibility, i.e., edit out dysfluencies, restarts, repairs, and mistakes. The transcript is, therefore, not a representation of spoken Danish but rather information content.
28
-
29
- In the parliament hall, one speaker at a time addresses members of the parliament. Monologues may include rebuttals or other comments to statements in previous monologues. While speakers can read aloud from a prepared statement or speak extemporaneously, we expect no difference to be apparent in the data because of the post-editing. The Folketinget section covers parliament hall sessions between 2009 and 2019. It contains discussions on a wide range of topics, issues, and named entities relevant to Danish society.
30
-
31
-
32
- <!-- START-DESC-STATS -->
33
- - **Language**: dan, dansk, Danish
34
- - **Number of samples**: 1.31K
35
- - **Number of tokens (Llama 3)**: 114.09M
36
- - **Average document length (characters)**: 266745.19
37
- <!-- END-DESC-STATS -->
38
-
39
-
40
-
41
- ## Dataset Structure
42
  An example from the dataset looks as follows.
43
-
44
-
45
- <!-- START-SAMPLE -->
46
- ```py
47
  {
48
- "text": "TALER 50: Mødet er åbnet. I dag er der følgende anmeldelser: Ministeren for by, bolig og landdistrik[...]",
49
- "source": "ft",
50
- "id": "ft_20121M100",
51
- "added": "2021-03-28",
52
- "created": "2009-01-01, 2019-01-01",
53
- "license": "Creative Commons Legal Code\n\nCC0 1.0 Universal",
54
- "domain": "Conversation",
55
- "metadata": {
56
- "source-pretty": "Folketinget (Danish Parliament)"
57
- }
 
 
58
  }
59
  ```
60
 
61
- ### Data Fields
62
-
63
- An entry in the dataset consists of the following fields:
64
-
65
- - `text`(`str`): The content of the document.
66
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
67
- - `id` (`str`): An unique identifier for each document.
68
- - `added` (`str`): An date for when the document was added to this collection.
69
- - `created` (`str`): An date range for when the document was originally created.
70
- - `license` (`str`): The license of the document. The licenses vary according to the source.
71
- - `domain` (`str`): The domain of the source
72
- - `metadata/source-pretty` (`str`): The long form version of the short-form source name
73
- - `metadata/*`: Potentially additional metadata
74
- <!-- END-SAMPLE -->
75
-
76
-
77
- ### Dataset Statistics
78
-
79
- <!-- START-DATASET PLOTS -->
80
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
81
- <img>
82
- <!-- END-DATASET PLOTS -->
83
 
 
 
 
 
 
 
84
 
85
- ## Additional Information
 
 
 
 
86
 
87
-
88
- ### Citation Information
89
-
90
- This dataset was initially published as part of the [Danish gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite and reference it if you use this dataset:
91
-
92
- > Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
93
-
94
- ```bash
95
- @inproceedings{dagw,
96
- title = {{The Danish Gigaword Corpus}},
97
- author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
98
- year = 2021,
99
- booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
100
- publisher = {NEALT}
101
- }
102
- ```
 
1
  ---
2
+ pretty_name: Folketinget (Danish Parliament)
3
  language:
4
  - da
5
  license: cc0-1.0
6
+ license_name: Creative Commons Zero v1.0 Universal
7
  size_categories:
8
  - 1-10k
9
  task_categories:
 
11
  - fill-mask
12
  task_ids:
13
  - language-modeling
 
 
14
  ---
15
+ # Dataset Card for Folketinget (Danish Parliament)
 
 
16
  ## Dataset Description
17
+ - **Number of records:** 1315
18
+ - **Languages:** Danish
19
+ ## Dataset Structure
20
  An example from the dataset looks as follows.
21
+ ```yaml
 
 
 
22
  {
23
+ 'text': 'TALER 50: Mødet er åbnet. I dag er der følgende an',
24
+ 'source': 'ft',
25
+ 'id': 'ft_20121M100',
26
+ 'added': '2021-03-28',
27
+ 'created': '2009-01-01, 2019-01-01',
28
+ 'metadata': {
29
+ 'domain': 'Conversation',
30
+ 'license': 'Creative Commons Legal Code
31
+
32
+ CC0 1.0 Universal',
33
+ 'source-pretty': 'Folketinget (Danish Parliament)'
34
+ }
35
  }
36
  ```
37
 
38
+ ## Data Fields
39
 
40
+ - **id**: source-specific identifier.
41
+ - **text**: textual content of the document.
42
+ - **source**: source of the data.
43
+ - **added**: timestamp when AI2 acquired this data.
44
+ - **created**: timestamp when the original document was created (best guess if not available).
45
+ - **metadata**: source-specific metadata.
46
 
47
+ ## License Information
48
+ <details>
49
+ <summary>Creative Commons Zero v1.0 Universal</summary>
50
+ <p>
51
+ Creative Commons Legal Code
52
 
53
+ CC0 1.0 Universal
54
+ </p>
55
+ </details>
data/ft/ft.parquet CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:e46276c575c7d9ddc30f44111206d250cb02473c992d0087bf0a9a5f4266da18
3
- size 181926375
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:31775c6e84a1542897641712e39d4c6cde2aa69673d7875c6a39f3148c08e0fb
3
+ size 182049520
data/ft/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: e16a1a9de4f1ef8fedd3e85035287a813d5980b25b40b09c54462671eaebcd81
  • Pointer size: 131 Bytes
  • Size of remote file: 550 kB
data/gutenberg/descriptive_stats.json DELETED
@@ -1 +0,0 @@
1
- {"number_of_samples": 66, "average_document_length": 290147.9393939394, "number_of_tokens": 6763317, "language": "dan, dansk, Danish", "revision": "6a88cbd06a598259a4879ee118c8ab1843c500ff"}
 
 
data/gutenberg/gutenberg.md CHANGED
@@ -2,7 +2,7 @@
2
  pretty_name: Gutenberg
3
  language:
4
  - da
5
- license: other
6
  license_name: Gutenberg License
7
  size_categories:
8
  - 1-10k
@@ -11,75 +11,365 @@ task_categories:
11
  - fill-mask
12
  task_ids:
13
  - language-modeling
14
- source_datasets:
15
- - danish-foundation-models/danish-gigaword
16
  ---
17
-
18
  # Dataset Card for Gutenberg
19
-
20
  ## Dataset Description
 
 
21
 
22
- <!-- START-SHORT DESCRIPTION -->
23
- The Danish subsection from Project [Gutenberg](https://www.gutenberg.org).
24
- <!-- END-SHORT DESCRIPTION -->
25
 
26
 
27
- Project Gutenberg is an online library of free eBooks. Project Gutenberg was the first provider of free electronic books, or eBooks.
28
 
 
 
29
 
30
- <!-- START-DESC-STATS -->
31
- - **Language**: dan, dansk, Danish
32
- - **Number of samples**: 66
33
- - **Number of tokens (Llama 3)**: 6.76M
34
- - **Average document length (characters)**: 290147.94
35
- <!-- END-DESC-STATS -->
 
 
 
 
36
 
 
37
 
 
 
38
 
39
- ## Dataset Structure
40
- An example from the dataset looks as follows.
 
41
 
 
42
 
43
- <!-- START-SAMPLE -->
44
- ```py
45
- {
46
- "text": "Afskriverens bemærkninger: Åbenlyse trykfejl er rettet\ni denne e-bog, men forfatterens stavning er f[...]",
47
- "source": "gutenberg",
48
- "id": "gutenberg_43899",
49
- "added": "2020-09-12",
50
- "created": "1700-01-01, 2022-01-01",
51
- "license": "*** START: FULL LICENSE ***\n\nTHE FULL PROJECT GUTENBERG LICENSE\nPLEASE READ THIS BEFORE YOU DISTRIBU[...]",
52
- "domain": "Wiki & Books",
53
- "metadata": {
54
- "source-pretty": "Gutenberg"
55
- }
56
- }
57
- ```
58
 
59
- ### Data Fields
 
 
 
60
 
61
- An entry in the dataset consists of the following fields:
 
62
 
63
- - `text`(`str`): The content of the document.
64
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
65
- - `id` (`str`): An unique identifier for each document.
66
- - `added` (`str`): An date for when the document was added to this collection.
67
- - `created` (`str`): An date range for when the document was originally created.
68
- - `license` (`str`): The license of the document. The licenses vary according to the source.
69
- - `domain` (`str`): The domain of the source
70
- - `metadata/source-pretty` (`str`): The long form version of the short-form source name
71
- - `metadata/*`: Potentially additional metadata
72
- <!-- END-SAMPLE -->
73
 
 
 
 
74
 
 
 
75
 
76
- ## License Information
77
 
 
78
  <details>
79
  <summary>Gutenberg License</summary>
80
  <p>
81
-
82
- ```
83
  *** START: FULL LICENSE ***
84
 
85
  THE FULL PROJECT GUTENBERG LICENSE
@@ -404,56 +694,6 @@ This Web site includes information about Project Gutenberg-tm,
404
  including how to make donations to the Project Gutenberg Literary
405
  Archive Foundation, how to help produce our new eBooks, and how to
406
  subscribe to our email newsletter to hear about new eBooks.
407
- ```
408
 
409
  </p>
410
  </details>
411
-
412
-
413
- ### Dataset Statistics
414
-
415
- <!-- START-DATASET PLOTS -->
416
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
417
- <img>
418
- <!-- END-DATASET PLOTS -->
419
-
420
-
421
-
422
- ## Additional Information
423
-
424
-
425
- ### Citation Information
426
-
427
- This dataset was initially published as part of the [Danish gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite and reference it if you use this dataset:
428
-
429
- > Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
430
-
431
- ```bash
432
- @inproceedings{dagw,
433
- title = {{The Danish Gigaword Corpus}},
434
- author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
435
- year = 2021,
436
- booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
437
- publisher = {NEALT}
438
- }
439
- ```
440
-
441
-
442
- ## Additional Information
443
-
444
-
445
- ### Citation Information
446
-
447
- This dataset was initially published as part of the [Danish gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite and reference it if you use this dataset:
448
-
449
- > Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
450
-
451
- ```bash
452
- @inproceedings{dagw,
453
- title = {{The Danish Gigaword Corpus}},
454
- author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
455
- year = 2021,
456
- booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
457
- publisher = {NEALT}
458
- }
459
- ```
 
2
  pretty_name: Gutenberg
3
  language:
4
  - da
5
+ license: Gutenberg License
6
  license_name: Gutenberg License
7
  size_categories:
8
  - 1-10k
 
11
  - fill-mask
12
  task_ids:
13
  - language-modeling
 
 
14
  ---
 
15
  # Dataset Card for Gutenberg
 
16
  ## Dataset Description
17
+ - **Number of records:** 66
18
+ - **Languages:** Danish
19
+ ## Dataset Structure
20
+ An example from the dataset looks as follows.
21
+ ```yaml
22
+ {
23
+ 'text': 'Afskriverens bemærkninger: Åbenlyse trykfejl er re',
24
+ 'source': 'gutenberg',
25
+ 'id': 'gutenberg_43899',
26
+ 'added': '2020-09-12',
27
+ 'created': '1700-01-01, 2022-01-01',
28
+ 'metadata': {
29
+ 'domain': 'Wiki & Books',
30
+ 'license': '*** START: FULL LICENSE ***
31
 
32
+ THE FULL PROJECT GUTENBERG LICENSE
33
+ PLEASE READ THIS BEFORE YOU DISTRIBUTE OR USE THIS WORK
 
34
 
35
+ To protect the Project Gutenberg-tm mission of promoting the free
36
+ distribution of electronic works, by using or distributing this work
37
+ (or any other work associated in any way with the phrase "Project
38
+ Gutenberg"), you agree to comply with all the terms of the Full Project
39
+ Gutenberg-tm License available with this file or online at
40
+ www.gutenberg.org/license.
41
 
 
42
 
43
+ Section 1. General Terms of Use and Redistributing Project Gutenberg-tm
44
+ electronic works
45
 
46
+ 1.A. By reading or using any part of this Project Gutenberg-tm
47
+ electronic work, you indicate that you have read, understand, agree to
48
+ and accept all the terms of this license and intellectual property
49
+ (trademark/copyright) agreement. If you do not agree to abide by all
50
+ the terms of this agreement, you must cease using and return or destroy
51
+ all copies of Project Gutenberg-tm electronic works in your possession.
52
+ If you paid a fee for obtaining a copy of or access to a Project
53
+ Gutenberg-tm electronic work and you do not agree to be bound by the
54
+ terms of this agreement, you may obtain a refund from the person or
55
+ entity to whom you paid the fee as set forth in paragraph 1.E.8.
56
 
57
+ 1.B. "Project Gutenberg" is a registered trademark. It may only be
58
+ used on or associated in any way with an electronic work by people who
59
+ agree to be bound by the terms of this agreement. There are a few
60
+ things that you can do with most Project Gutenberg-tm electronic works
61
+ even without complying with the full terms of this agreement. See
62
+ paragraph 1.C below. There are a lot of things you can do with Project
63
+ Gutenberg-tm electronic works if you follow the terms of this agreement
64
+ and help preserve free future access to Project Gutenberg-tm electronic
65
+ works. See paragraph 1.E below.
66
 
67
+ 1.C. The Project Gutenberg Literary Archive Foundation ("the Foundation"
68
+ or PGLAF), owns a compilation copyright in the collection of Project
69
+ Gutenberg-tm electronic works. Nearly all the individual works in the
70
+ collection are in the public domain in the United States. If an
71
+ individual work is in the public domain in the United States and you are
72
+ located in the United States, we do not claim a right to prevent you from
73
+ copying, distributing, performing, displaying or creating derivative
74
+ works based on the work as long as all references to Project Gutenberg
75
+ are removed. Of course, we hope that you will support the Project
76
+ Gutenberg-tm mission of promoting free access to electronic works by
77
+ freely sharing Project Gutenberg-tm works in compliance with the terms of
78
+ this agreement for keeping the Project Gutenberg-tm name associated with
79
+ the work. You can easily comply with the terms of this agreement by
80
+ keeping this work in the same format with its attached full Project
81
+ Gutenberg-tm License when you share it without charge with others.
82
 
83
+ 1.D. The copyright laws of the place where you are located also govern
84
+ what you can do with this work. Copyright laws in most countries are in
85
+ a constant state of change. If you are outside the United States, check
86
+ the laws of your country in addition to the terms of this agreement
87
+ before downloading, copying, displaying, performing, distributing or
88
+ creating derivative works based on this work or any other Project
89
+ Gutenberg-tm work. The Foundation makes no representations concerning
90
+ the copyright status of any work in any country outside the United
91
+ States.
92
 
93
+ 1.E. Unless you have removed all references to Project Gutenberg:
94
 
95
+ 1.E.1. The following sentence, with active links to, or other immediate
96
+ access to, the full Project Gutenberg-tm License must appear prominently
97
+ whenever any copy of a Project Gutenberg-tm work (any work on which the
98
+ phrase "Project Gutenberg" appears, or with which the phrase "Project
99
+ Gutenberg" is associated) is accessed, displayed, performed, viewed,
100
+ copied or distributed:
 
 
101
 
102
+ This eBook is for the use of anyone anywhere at no cost and with
103
+ almost no restrictions whatsoever. You may copy it, give it away or
104
+ re-use it under the terms of the Project Gutenberg License included
105
+ with this eBook or online at www.gutenberg.org
106
 
107
+ 1.E.2. If an individual Project Gutenberg-tm electronic work is derived
108
+ from the public domain (does not contain a notice indicating that it is
109
+ posted with permission of the copyright holder), the work can be copied
110
+ and distributed to anyone in the United States without paying any fees
111
+ or charges. If you are redistributing or providing access to a work
112
+ with the phrase "Project Gutenberg" associated with or appearing on the
113
+ work, you must comply either with the requirements of paragraphs 1.E.1
114
+ through 1.E.7 or obtain permission for the use of the work and the
115
+ Project Gutenberg-tm trademark as set forth in paragraphs 1.E.8 or
116
+ 1.E.9.
117
 
118
+ 1.E.3. If an individual Project Gutenberg-tm electronic work is posted
119
+ with the permission of the copyright holder, your use and distribution
120
+ must comply with both paragraphs 1.E.1 through 1.E.7 and any additional
121
+ terms imposed by the copyright holder. Additional terms will be linked
122
+ to the Project Gutenberg-tm License for all works posted with the
123
+ permission of the copyright holder found at the beginning of this work.
 
 
 
 
124
 
125
+ 1.E.4. Do not unlink or detach or remove the full Project Gutenberg-tm
126
+ License terms from this work, or any files containing a part of this
127
+ work or any other work associated with Project Gutenberg-tm.
128
 
129
+ 1.E.5. Do not copy, display, perform, distribute or redistribute this
130
+ electronic work, or any part of this electronic work, without
131
+ prominently displaying the sentence set forth in paragraph 1.E.1 with
132
+ active links or immediate access to the full terms of the Project
133
+ Gutenberg-tm License.
134
+
135
+ 1.E.6. You may convert to and distribute this work in any binary,
136
+ compressed, marked up, nonproprietary or proprietary form, including any
137
+ word processing or hypertext form. However, if you provide access to or
138
+ distribute copies of a Project Gutenberg-tm work in a format other than
139
+ "Plain Vanilla ASCII" or other format used in the official version
140
+ posted on the official Project Gutenberg-tm web site (www.gutenberg.org),
141
+ you must, at no additional cost, fee or expense to the user, provide a
142
+ copy, a means of exporting a copy, or a means of obtaining a copy upon
143
+ request, of the work in its original "Plain Vanilla ASCII" or other
144
+ form. Any alternate format must include the full Project Gutenberg-tm
145
+ License as specified in paragraph 1.E.1.
146
+
147
+ 1.E.7. Do not charge a fee for access to, viewing, displaying,
148
+ performing, copying or distributing any Project Gutenberg-tm works
149
+ unless you comply with paragraph 1.E.8 or 1.E.9.
150
+
151
+ 1.E.8. You may charge a reasonable fee for copies of or providing
152
+ access to or distributing Project Gutenberg-tm electronic works provided
153
+ that
154
+
155
+ - You pay a royalty fee of 20% of the gross profits you derive from
156
+ the use of Project Gutenberg-tm works calculated using the method
157
+ you already use to calculate your applicable taxes. The fee is
158
+ owed to the owner of the Project Gutenberg-tm trademark, but he
159
+ has agreed to donate royalties under this paragraph to the
160
+ Project Gutenberg Literary Archive Foundation. Royalty payments
161
+ must be paid within 60 days following each date on which you
162
+ prepare (or are legally required to prepare) your periodic tax
163
+ returns. Royalty payments should be clearly marked as such and
164
+ sent to the Project Gutenberg Literary Archive Foundation at the
165
+ address specified in Section 4, "Information about donations to
166
+ the Project Gutenberg Literary Archive Foundation."
167
+
168
+ - You provide a full refund of any money paid by a user who notifies
169
+ you in writing (or by e-mail) within 30 days of receipt that s/he
170
+ does not agree to the terms of the full Project Gutenberg-tm
171
+ License. You must require such a user to return or
172
+ destroy all copies of the works possessed in a physical medium
173
+ and discontinue all use of and all access to other copies of
174
+ Project Gutenberg-tm works.
175
+
176
+ - You provide, in accordance with paragraph 1.F.3, a full refund of any
177
+ money paid for a work or a replacement copy, if a defect in the
178
+ electronic work is discovered and reported to you within 90 days
179
+ of receipt of the work.
180
+
181
+ - You comply with all other terms of this agreement for free
182
+ distribution of Project Gutenberg-tm works.
183
+
184
+ 1.E.9. If you wish to charge a fee or distribute a Project Gutenberg-tm
185
+ electronic work or group of works on different terms than are set
186
+ forth in this agreement, you must obtain permission in writing from
187
+ both the Project Gutenberg Literary Archive Foundation and Michael
188
+ Hart, the owner of the Project Gutenberg-tm trademark. Contact the
189
+ Foundation as set forth in Section 3 below.
190
+
191
+ 1.F.
192
+
193
+ 1.F.1. Project Gutenberg volunteers and employees expend considerable
194
+ effort to identify, do copyright research on, transcribe and proofread
195
+ public domain works in creating the Project Gutenberg-tm
196
+ collection. Despite these efforts, Project Gutenberg-tm electronic
197
+ works, and the medium on which they may be stored, may contain
198
+ "Defects," such as, but not limited to, incomplete, inaccurate or
199
+ corrupt data, transcription errors, a copyright or other intellectual
200
+ property infringement, a defective or damaged disk or other medium, a
201
+ computer virus, or computer codes that damage or cannot be read by
202
+ your equipment.
203
+
204
+ 1.F.2. LIMITED WARRANTY, DISCLAIMER OF DAMAGES - Except for the "Right
205
+ of Replacement or Refund" described in paragraph 1.F.3, the Project
206
+ Gutenberg Literary Archive Foundation, the owner of the Project
207
+ Gutenberg-tm trademark, and any other party distributing a Project
208
+ Gutenberg-tm electronic work under this agreement, disclaim all
209
+ liability to you for damages, costs and expenses, including legal
210
+ fees. YOU AGREE THAT YOU HAVE NO REMEDIES FOR NEGLIGENCE, STRICT
211
+ LIABILITY, BREACH OF WARRANTY OR BREACH OF CONTRACT EXCEPT THOSE
212
+ PROVIDED IN PARAGRAPH 1.F.3. YOU AGREE THAT THE FOUNDATION, THE
213
+ TRADEMARK OWNER, AND ANY DISTRIBUTOR UNDER THIS AGREEMENT WILL NOT BE
214
+ LIABLE TO YOU FOR ACTUAL, DIRECT, INDIRECT, CONSEQUENTIAL, PUNITIVE OR
215
+ INCIDENTAL DAMAGES EVEN IF YOU GIVE NOTICE OF THE POSSIBILITY OF SUCH
216
+ DAMAGE.
217
+
218
+ 1.F.3. LIMITED RIGHT OF REPLACEMENT OR REFUND - If you discover a
219
+ defect in this electronic work within 90 days of receiving it, you can
220
+ receive a refund of the money (if any) you paid for it by sending a
221
+ written explanation to the person you received the work from. If you
222
+ received the work on a physical medium, you must return the medium with
223
+ your written explanation. The person or entity that provided you with
224
+ the defective work may elect to provide a replacement copy in lieu of a
225
+ refund. If you received the work electronically, the person or entity
226
+ providing it to you may choose to give you a second opportunity to
227
+ receive the work electronically in lieu of a refund. If the second copy
228
+ is also defective, you may demand a refund in writing without further
229
+ opportunities to fix the problem.
230
+
231
+ 1.F.4. Except for the limited right of replacement or refund set forth
232
+ in paragraph 1.F.3, this work is provided to you 'AS-IS', WITH NO OTHER
233
+ WARRANTIES OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO
234
+ WARRANTIES OF MERCHANTABILITY OR FITNESS FOR ANY PURPOSE.
235
+
236
+ 1.F.5. Some states do not allow disclaimers of certain implied
237
+ warranties or the exclusion or limitation of certain types of damages.
238
+ If any disclaimer or limitation set forth in this agreement violates the
239
+ law of the state applicable to this agreement, the agreement shall be
240
+ interpreted to make the maximum disclaimer or limitation permitted by
241
+ the applicable state law. The invalidity or unenforceability of any
242
+ provision of this agreement shall not void the remaining provisions.
243
+
244
+ 1.F.6. INDEMNITY - You agree to indemnify and hold the Foundation, the
245
+ trademark owner, any agent or employee of the Foundation, anyone
246
+ providing copies of Project Gutenberg-tm electronic works in accordance
247
+ with this agreement, and any volunteers associated with the production,
248
+ promotion and distribution of Project Gutenberg-tm electronic works,
249
+ harmless from all liability, costs and expenses, including legal fees,
250
+ that arise directly or indirectly from any of the following which you do
251
+ or cause to occur: (a) distribution of this or any Project Gutenberg-tm
252
+ work, (b) alteration, modification, or additions or deletions to any
253
+ Project Gutenberg-tm work, and (c) any Defect you cause.
254
 
 
255
 
256
+ Section 2. Information about the Mission of Project Gutenberg-tm
257
+
258
+ Project Gutenberg-tm is synonymous with the free distribution of
259
+ electronic works in formats readable by the widest variety of computers
260
+ including obsolete, old, middle-aged and new computers. It exists
261
+ because of the efforts of hundreds of volunteers and donations from
262
+ people in all walks of life.
263
+
264
+ Volunteers and financial support to provide volunteers with the
265
+ assistance they need are critical to reaching Project Gutenberg-tm's
266
+ goals and ensuring that the Project Gutenberg-tm collection will
267
+ remain freely available for generations to come. In 2001, the Project
268
+ Gutenberg Literary Archive Foundation was created to provide a secure
269
+ and permanent future for Project Gutenberg-tm and future generations.
270
+ To learn more about the Project Gutenberg Literary Archive Foundation
271
+ and how your efforts and donations can help, see Sections 3 and 4
272
+ and the Foundation information page at www.gutenberg.org
273
+
274
+
275
+ Section 3. Information about the Project Gutenberg Literary Archive
276
+ Foundation
277
+
278
+ The Project Gutenberg Literary Archive Foundation is a non profit
279
+ 501(c)(3) educational corporation organized under the laws of the
280
+ state of Mississippi and granted tax exempt status by the Internal
281
+ Revenue Service. The Foundation's EIN or federal tax identification
282
+ number is 64-6221541. Contributions to the Project Gutenberg
283
+ Literary Archive Foundation are tax deductible to the full extent
284
+ permitted by U.S. federal laws and your state's laws.
285
+
286
+ The Foundation's principal office is located at 4557 Melan Dr. S.
287
+ Fairbanks, AK, 99712., but its volunteers and employees are scattered
288
+ throughout numerous locations. Its business office is located at 809
289
+ North 1500 West, Salt Lake City, UT 84116, (801) 596-1887. Email
290
+ contact links and up to date contact information can be found at the
291
+ Foundation's web site and official page at www.gutenberg.org/contact
292
+
293
+ For additional contact information:
294
+ Dr. Gregory B. Newby
295
+ Chief Executive and Director
296
297
+
298
+ Section 4. Information about Donations to the Project Gutenberg
299
+ Literary Archive Foundation
300
+
301
+ Project Gutenberg-tm depends upon and cannot survive without wide
302
+ spread public support and donations to carry out its mission of
303
+ increasing the number of public domain and licensed works that can be
304
+ freely distributed in machine readable form accessible by the widest
305
+ array of equipment including outdated equipment. Many small donations
306
+ ($1 to $5,000) are particularly important to maintaining tax exempt
307
+ status with the IRS.
308
+
309
+ The Foundation is committed to complying with the laws regulating
310
+ charities and charitable donations in all 50 states of the United
311
+ States. Compliance requirements are not uniform and it takes a
312
+ considerable effort, much paperwork and many fees to meet and keep up
313
+ with these requirements. We do not solicit donations in locations
314
+ where we have not received written confirmation of compliance. To
315
+ SEND DONATIONS or determine the status of compliance for any
316
+ particular state visit www.gutenberg.org/donate
317
+
318
+ While we cannot and do not solicit contributions from states where we
319
+ have not met the solicitation requirements, we know of no prohibition
320
+ against accepting unsolicited donations from donors in such states who
321
+ approach us with offers to donate.
322
+
323
+ International donations are gratefully accepted, but we cannot make
324
+ any statements concerning tax treatment of donations received from
325
+ outside the United States. U.S. laws alone swamp our small staff.
326
+
327
+ Please check the Project Gutenberg Web pages for current donation
328
+ methods and addresses. Donations are accepted in a number of other
329
+ ways including checks, online payments and credit card donations.
330
+ To donate, please visit: www.gutenberg.org/donate
331
+
332
+
333
+ Section 5. General Information About Project Gutenberg-tm electronic
334
+ works.
335
+
336
+ Professor Michael S. Hart was the originator of the Project Gutenberg-tm
337
+ concept of a library of electronic works that could be freely shared
338
+ with anyone. For forty years, he produced and distributed Project
339
+ Gutenberg-tm eBooks with only a loose network of volunteer support.
340
+
341
+ Project Gutenberg-tm eBooks are often created from several printed
342
+ editions, all of which are confirmed as Public Domain in the U.S.
343
+ unless a copyright notice is included. Thus, we do not necessarily
344
+ keep eBooks in compliance with any particular paper edition.
345
+
346
+ Most people start at our Web site which has the main PG search facility:
347
+
348
+ www.gutenberg.org
349
+
350
+ This Web site includes information about Project Gutenberg-tm,
351
+ including how to make donations to the Project Gutenberg Literary
352
+ Archive Foundation, how to help produce our new eBooks, and how to
353
+ subscribe to our email newsletter to hear about new eBooks.
354
+ ',
355
+ 'source-pretty': 'Gutenberg'
356
+ }
357
+ }
358
+ ```
359
+
360
+ ## Data Fields
361
+
362
+ - **id**: source-specific identifier.
363
+ - **text**: textual content of the document.
364
+ - **source**: source of the data.
365
+ - **added**: timestamp when ai2 acquired this data.
366
+ - **created**: timestamp when the original document was created (best guess if not available).
367
+ - **metadata**: source-specific metadata.
368
+
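As a quick sanity check, the field list above can be validated programmatically. The sketch below is illustrative only: the record values are hypothetical placeholders shaped like the sample earlier in this card, not taken from the dataset.

```python
# Documented top-level fields of each record in this dataset.
REQUIRED_FIELDS = {"id", "text", "source", "added", "created", "metadata"}

def missing_fields(record: dict) -> list[str]:
    """Return documented fields absent from a record, sorted for stable output."""
    return sorted(REQUIRED_FIELDS - record.keys())

# Hypothetical record shaped like the sample above (all values are placeholders).
record = {
    "id": "gutenberg_example_0",  # placeholder identifier, not a real id
    "text": "...",
    "source": "gutenberg",
    "added": "2020-09-12",
    "created": "1700-01-01, 1950-01-01",
    "metadata": {"source-pretty": "Gutenberg"},
}
assert missing_fields(record) == []
```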
369
+ ## License Information
370
  <details>
371
  <summary>Gutenberg License</summary>
372
  <p>
 
 
373
  *** START: FULL LICENSE ***
374
 
375
  THE FULL PROJECT GUTENBERG LICENSE
 
694
  including how to make donations to the Project Gutenberg Literary
695
  Archive Foundation, how to help produce our new eBooks, and how to
696
  subscribe to our email newsletter to hear about new eBooks.
 
697
 
698
  </p>
699
  </details>
 
data/gutenberg/gutenberg.parquet CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:1e8364195e60b64e285d0c1b8c4b6ae0da7a1b6165de77bb4fc4049c317b445c
3
- size 12342492
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:973df5121d3da73a5915f6dd1da0290ffbaece92b2c7c4dec562155974c0076f
3
+ size 12361984
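The parquet files in this repo are stored as Git LFS pointer files like the hunk above. A minimal sketch of splitting such a pointer into its `version`/`oid`/`size` fields follows (this relies only on the standard LFS pointer layout; it is not a helper shipped by this repo):

```python
def parse_lfs_pointer(text: str) -> dict[str, str]:
    """Split each 'key value' line of a git-lfs pointer file into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:973df5121d3da73a5915f6dd1da0290ffbaece92b2c7c4dec562155974c0076f\n"
    "size 12361984\n"
)
info = parse_lfs_pointer(pointer)
assert info["oid"].startswith("sha256:")
assert int(info["size"]) == 12361984  # matches the new pointer above
```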
data/gutenberg/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: 7211ebb972796ee921e5c9d19cc8a266cc42ccab560d1701464ff2a865268116
  • Pointer size: 131 Bytes
  • Size of remote file: 539 kB
data/hest/descriptive_stats.json DELETED
@@ -1 +0,0 @@
1
- {"number_of_samples": 14391, "average_document_length": 82950.79104996179, "number_of_tokens": 389325153, "language": "dan, dansk, Danish", "revision": "6a88cbd06a598259a4879ee118c8ab1843c500ff"}
 
 
data/hest/hest.md CHANGED
@@ -3,7 +3,7 @@ pretty_name: Hestenettet (Danish debate forum)
3
  language:
4
  - da
5
  license: cc0-1.0
6
- license_name: CC-0
7
  size_categories:
8
  - 10k-100k
9
  task_categories:
@@ -11,92 +11,46 @@ task_categories:
11
  - fill-mask
12
  task_ids:
13
  - language-modeling
14
- source_datasets:
15
- - danish-foundation-models/danish-gigaword
16
  ---
17
-
18
- # Dataset Card for Hestenettet
19
-
20
- <!-- START-SHORT DESCRIPTION -->
21
- Samples from the Danish debate forum www.heste-nettet.dk.
22
- <!-- END-SHORT DESCRIPTION -->
23
-
24
-
25
- The forum have been in use since 1997 and it is used as a debate forum covering a wide range of everyday topics.
26
-
27
- Its inclusion as training data for large language models have multiple times reached [national news](https://www.dr.dk/nyheder/viden/teknologi/heste-nettet-kan-blive-grundlag-kunstig-intelligens-paa-dansk).
28
-
29
  ## Dataset Description
30
-
31
-
32
- <!-- START-DESC-STATS -->
33
- - **Language**: dan, dansk, Danish
34
- - **Number of samples**: 14.39K
35
- - **Number of tokens (Llama 3)**: 389.33M
36
- - **Average document length (characters)**: 82950.79
37
- <!-- END-DESC-STATS -->
38
-
39
-
40
-
41
- ## Dataset Structure
42
  An example from the dataset looks as follows.
43
-
44
-
45
- <!-- START-SAMPLE -->
46
- ```py
47
  {
48
- "text": "Er den ikke kær? \nJeg kan ikke forstå at der altid er nogle der åbenbart ser alle indlæg her på HN ,[...]",
49
- "source": "hest",
50
- "id": "hest_forum112802271280227_0",
51
- "added": "2020-10-05",
52
- "created": "2000-01-01, 2022-01-01",
53
- "license": "Creative Commons Legal Code\n\nCC0 1.0 Universal",
54
- "domain": "Social Media",
55
- "metadata": {
56
- "source-pretty": "Hestenettet (Danish debate forum)"
57
- }
 
 
 
58
  }
59
  ```
60
 
61
- ### Data Fields
62
-
63
- An entry in the dataset consists of the following fields:
64
-
65
- - `text`(`str`): The content of the document.
66
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
67
- - `id` (`str`): An unique identifier for each document.
68
- - `added` (`str`): An date for when the document was added to this collection.
69
- - `created` (`str`): An date range for when the document was originally created.
70
- - `license` (`str`): The license of the document. The licenses vary according to the source.
71
- - `domain` (`str`): The domain of the source
72
- - `metadata/source-pretty` (`str`): The long form version of the short-form source name
73
- - `metadata/*`: Potentially additional metadata
74
- <!-- END-SAMPLE -->
75
-
76
-
77
- ### Dataset Statistics
78
-
79
- <!-- START-DATASET PLOTS -->
80
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
81
- <img>
82
- <!-- END-DATASET PLOTS -->
83
 
 
 
 
 
 
 
84
 
85
- ## Additional Information
 
 
 
 
86
 
87
-
88
- ### Citation Information
89
-
90
- This dataset was initially published as part of the [Danish gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite and reference it if you use this dataset:
91
-
92
- > Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
93
-
94
- ```bash
95
- @inproceedings{dagw,
96
- title = {{The Danish Gigaword Corpus}},
97
- author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
98
- year = 2021,
99
- booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
100
- publisher = {NEALT}
101
- }
102
- ```
 
3
  language:
4
  - da
5
  license: cc0-1.0
6
+ license_name: Creative Commons Zero v1.0 Universal
7
  size_categories:
8
  - 10k-100k
9
  task_categories:
 
11
  - fill-mask
12
  task_ids:
13
  - language-modeling
 
 
14
  ---
15
+ # Dataset Card for Hestenettet (Danish debate forum)
 
 
 
 
 
 
 
 
 
 
 
16
  ## Dataset Description
17
+ - **Number of records:** 14391
18
+ - **Languages:** Danish
19
+ ## Dataset Structure
 
 
 
 
 
 
 
 
 
20
  An example from the dataset looks as follows.
21
+ ```yaml
 
 
 
22
  {
23
+ 'text': 'Er den ikke kær?
24
+ Jeg kan ikke forstå at der altid',
25
+ 'source': 'hest',
26
+ 'id': 'hest_forum112802271280227_0',
27
+ 'added': '2020-10-05',
28
+ 'created': '2000-01-01, 2022-01-01',
29
+ 'metadata': {
30
+ 'domain': 'Social Media',
31
+ 'license': 'Creative Commons Legal Code
32
+
33
+ CC0 1.0 Universal',
34
+ 'source-pretty': 'Hestenettet (Danish debate forum)'
35
+ }
36
  }
37
  ```
38
 
39
+ ## Data Fields
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
40
 
41
+ - **id**: source-specific identifier.
42
+ - **text**: textual content of the document.
43
+ - **source**: source of the data.
44
+ - **added**: timestamp when ai2 acquired this data.
45
+ - **created**: timestamp when the original document was created (best guess if not available).
46
+ - **metadata**: source-specific metadata.
47
 
48
+ ## License Information
49
+ <details>
50
+ <summary>Creative Commons Zero v1.0 Universal</summary>
51
+ <p>
52
+ Creative Commons Legal Code
53
 
54
+ CC0 1.0 Universal
55
+ </p>
56
+ </details>
 
 
 
 
 
 
 
 
 
 
 
 
 
data/hest/hest.parquet CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:258c9263b68b8d8573eab1eaa8221c557e9259aa1a222911fdff41f5cbbda66b
3
- size 747678214
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9b85d658074ebec3eb95da8f8e522d83707b646b5f3b8b706279496eec3b31c3
3
+ size 748670544
data/hest/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: 721ef6123a43f89bca03351e7a6459d6e40906024bcd2bc9e0a1fa377c37d60b
  • Pointer size: 131 Bytes
  • Size of remote file: 545 kB
data/jvj/descriptive_stats.json DELETED
@@ -1 +0,0 @@
1
- {"number_of_samples": 42, "average_document_length": 254893.66666666666, "number_of_tokens": 3549181, "language": "dan, dansk, Danish", "revision": "6a88cbd06a598259a4879ee118c8ab1843c500ff"}
 
 
data/jvj/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: 842b2aff42b3efabe2ec7dd425a9b41f836ca21f1f6332561dcc90e6bb7db62e
  • Pointer size: 131 Bytes
  • Size of remote file: 534 kB
data/jvj/jvj.md CHANGED
@@ -1,9 +1,9 @@
1
  ---
2
- pretty_name: Johannes V. Jensen
3
  language:
4
  - da
5
  license: cc-by-sa-4.0
6
- license_name: CC-BY-SA 4.0
7
  size_categories:
8
  - 1-10k
9
  task_categories:
@@ -11,92 +11,41 @@ task_categories:
11
  - fill-mask
12
  task_ids:
13
  - language-modeling
14
- source_datasets:
15
- - danish-foundation-models/danish-gigaword
16
  ---
17
-
18
- # Dataset Card for Johannes V. Jensen
19
-
20
- <!-- START-SHORT DESCRIPTION -->
21
- The works of the Danish author and poet, [Johannes V. Jensen](https://da.wikipedia.org/wiki/Johannes_V._Jensen).
22
- <!-- END-SHORT DESCRIPTION -->
23
-
24
-
25
-
26
-
27
-
28
-
29
  ## Dataset Description
30
-
31
-
32
- <!-- START-DESC-STATS -->
33
- - **Language**: dan, dansk, Danish
34
- - **Number of samples**: 42
35
- - **Number of tokens (Llama 3)**: 3.55M
36
- - **Average document length (characters)**: 254893.67
37
- <!-- END-DESC-STATS -->
38
-
39
-
40
-
41
- ## Dataset Structure
42
  An example from the dataset looks as follows.
43
-
44
-
45
- <!-- START-SAMPLE -->
46
- ```py
47
  {
48
- "text": "JØRGINE JØRGINE KØBENHAVN HAGE & CLAUSENS FORLAG (J. FR. CLAUSEN) 1926 JOHANNES V. JENSEN COPYRIGHT [...]",
49
- "source": "jvj",
50
- "id": "jvj_Jørgine",
51
- "added": "2020-06-26",
52
- "created": "1873-01-01, 1951-01-01",
53
- "license": "Attribution-ShareAlike 4.0 International",
54
- "domain": "Wiki & Books",
55
- "metadata": {
56
- "source-pretty": "Johannes V. Jensen (Danish poet)"
57
- }
58
  }
59
  ```
60
 
61
- ### Data Fields
62
-
63
- An entry in the dataset consists of the following fields:
64
-
65
- - `text`(`str`): The content of the document.
66
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
67
- - `id` (`str`): An unique identifier for each document.
68
- - `added` (`str`): An date for when the document was added to this collection.
69
- - `created` (`str`): An date range for when the document was originally created.
70
- - `license` (`str`): The license of the document. The licenses vary according to the source.
71
- - `domain` (`str`): The domain of the source
72
- - `metadata/source-pretty` (`str`): The long form version of the short-form source name
73
- - `metadata/*`: Potentially additional metadata
74
- <!-- END-SAMPLE -->
75
-
76
-
77
- ### Dataset Statistics
78
-
79
- <!-- START-DATASET PLOTS -->
80
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
81
- <img>
82
- <!-- END-DATASET PLOTS -->
83
-
84
-
85
- ## Additional Information
86
-
87
-
88
- ### Citation Information
89
-
90
- This dataset was initially published as part of the [Danish gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite and reference it if you use this dataset:
91
-
92
- > Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
93
-
94
- ```bash
95
- @inproceedings{dagw,
96
- title = {{The Danish Gigaword Corpus}},
97
- author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
98
- year = 2021,
99
- booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
100
- publisher = {NEALT}
101
- }
102
- ```
 
1
  ---
2
+ pretty_name: Johannes V. Jensen (Danish poet)
3
  language:
4
  - da
5
  license: cc-by-sa-4.0
6
+ license_name: Creative Commons Attribution Share Alike 4.0
7
  size_categories:
8
  - 1-10k
9
  task_categories:
 
11
  - fill-mask
12
  task_ids:
13
  - language-modeling
 
 
14
  ---
15
+ # Dataset Card for Johannes V. Jensen (Danish poet)
 
 
 
 
 
 
 
 
 
 
 
16
  ## Dataset Description
17
+ - **Number of records:** 42
18
+ - **Languages:** Danish
19
+ ## Dataset Structure
 
 
 
 
 
 
 
 
 
20
  An example from the dataset looks as follows.
21
+ ```yaml
 
 
 
22
  {
23
+ 'text': 'JØRGINE JØRGINE KØBENHAVN HAGE & CLAUSENS FORLAG (',
24
+ 'source': 'jvj',
25
+ 'id': 'jvj_Jørgine',
26
+ 'added': '2020-06-26',
27
+ 'created': '1873-01-01, 1951-01-01',
28
+ 'metadata': {
29
+ 'domain': 'Wiki & Books',
30
+ 'license': 'Attribution-ShareAlike 4.0 International',
31
+ 'source-pretty': 'Johannes V. Jensen (Danish poet)'
32
+ }
33
  }
34
  ```
35
 
36
+ ## Data Fields
37
+
38
+ - **id**: source-specific identifier.
39
+ - **text**: textual content of the document.
40
+ - **source**: source of the data.
41
+ - **added**: timestamp when ai2 acquired this data.
42
+ - **created**: timestamp when the original document was created (best guess if not available).
43
+ - **metadata**: source-specific metadata.
44
+
45
+ ## License Information
46
+ <details>
47
+ <summary>Creative Commons Attribution Share Alike 4.0</summary>
48
+ <p>
49
+ Attribution-ShareAlike 4.0 International
50
+ </p>
51
+ </details>
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
data/jvj/jvj.parquet CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:5706ac4ddb20ce41ac198d3a603c80a7ab76e8a84d028bf145934a704401e17d
3
- size 6824089
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7a524aafe8fe1ba86bc09c091b10aacf55e558124fef59e68f60bed03816636a
3
+ size 6829395
data/lexdk/create.py DELETED
@@ -1,78 +0,0 @@
1
- """download lexdk from alexandrainst/lexdk-open"""
2
-
3
- from datetime import datetime
4
- from pathlib import Path
5
- from typing import cast
6
-
7
- import pandas as pd
8
- from datasets import Dataset, load_dataset
9
-
10
- column_order = [
11
- "text",
12
- "source",
13
- "id",
14
- "added",
15
- "created",
16
- "license",
17
- "domain",
18
- "metadata",
19
- ]
20
-
21
-
22
- def convert_sample(example: dict) -> dict:
23
- # from sample:
24
- # {
25
- # "url": "https://denstoredanske.lex.dk/Kullmanns_M%C3%B8lle",
26
- # "title": "Kullmanns Mølle",
27
- # "clarification": "",
28
- # "authors": ["https://brugere.lex.dk/6929"],
29
- # "date": "2021-01-20T13:23:20+01:00",
30
- # "license": "fri anvendelse",
31
- # "text": "Kullmanns Mølle er en mølle i Gudhjem, opkaldt efter Matts Kullmann, der byggede møllen i 1893 til sin søn, Christian Kullmann, se Gudhjem Mølle.",
32
- # }
33
- date = datetime.fromisoformat(example["date"])
34
- text = f"{example["title"]}\n\npubliceret: {date}\n{example["text"]}"
35
-
36
- new_example = dict(
37
- text_new=text,
38
- id=example["url"],
39
- source="lexdk",
40
- domain="Conversation",
41
- license="cc-by-sa-4.0",
42
- added="2025-01-04",
43
- created=f"{date.date()}, {date.date()}",
44
- metadata={"source-pretty": "Lex.dk"},
45
- )
46
-
47
- return new_example
48
-
49
-
50
- def main():
51
- ds = load_dataset("alexandrainst/lexdk-open", split="train")
52
- ds = cast(Dataset, ds)
53
-
54
- dates = [datetime.fromisoformat(date).date() for date in ds["date"]]
55
- print(str(min(dates)), ",", str(max(dates))) # 2009-01-28, 2023-09-05
56
-
57
- assert len(set(ds["url"])) == len(ds)
58
-
59
- ds = ds.map(convert_sample, num_proc=4)
60
- ds = ds.select_columns(column_order[1:] + ["text_new"])
61
- ds = ds.rename_columns({"text_new": "text"})
62
- # ensure order
63
- ds = ds.select_columns(column_order)
64
-
65
- df = ds.to_pandas()
66
- df = cast(pd.DataFrame, df)
67
- dedup_df = df.drop_duplicates(keep="first", subset=["text"])
68
- print("N. duplicates: ", df.shape[0] - dedup_df.shape[0]) # 0
69
-
70
- ds = ds.select(dedup_df.index)
71
- assert len(set(ds["text"])) == len(ds)
72
-
73
- save_path = Path(__file__).parent / "lexdk.parquet"
74
- ds.to_parquet(save_path)
75
-
76
-
77
- if __name__ == "__main__":
78
- main()
 
data/lexdk/descriptive_stats.json DELETED
@@ -1 +0,0 @@
1
- {"number_of_samples": 11887, "average_document_length": 1405.6435601918063, "number_of_tokens": 5688613, "language": "dan, dansk, Danish", "revision": "6a88cbd06a598259a4879ee118c8ab1843c500ff"}
 
 
data/lexdk/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: 9aead97c97d52f9b4b9fced8eea7827d764a6a91f2af23ddc4e90607d23c0076
  • Pointer size: 131 Bytes
  • Size of remote file: 552 kB
data/lexdk/lexdk.md DELETED
@@ -1,85 +0,0 @@
1
- ---
2
- pretty_name: OpenSubtitles
3
- language:
4
- - da
5
- license: cc-by-sa-4.0
6
- license_name: CC-BY-SA 4.0
7
- task_categories:
8
- - text-generation
9
- - fill-mask
10
- task_ids:
11
- - language-modeling
12
- source_datasets:
13
- - alexandrainst/lexdk-open
14
- ---
15
-
16
- # Dataset Card for OpenSubtitles
17
-
18
- <!-- START-SHORT DESCRIPTION -->
19
- Permissible use articles from [lex.dk](https://lex.dk).
20
- <!-- END-SHORT DESCRIPTION -->
21
-
22
- Lex.dk is a Danish online encyclopedia platform providing access to reliable and authoritative knowledge on a wide range of topics. It is created and curated by experts, ensuring high-quality, accurate content. The platform serves as a central hub for general and specialized information in Danish, making it a valuable resource for education, research, and general learning.
23
-
24
-
25
-
26
-
27
- ## Dataset Description
28
-
29
- <!-- START-DESC-STATS -->
30
- - **Language**: dan, dansk, Danish
31
- - **Number of samples**: 11.89K
32
- - **Number of tokens (Llama 3)**: 5.69M
33
- - **Average document length (characters)**: 1405.64
34
- <!-- END-DESC-STATS -->
35
-
36
-
37
- ## Dataset Structure
38
- An example from the dataset looks as follows.
39
-
40
- <!-- START-SAMPLE -->
41
- ```py
42
- {
43
- "text": "Oluf Høst Museet\n\npubliceret: 2014-04-23 03:42:33+02:00\nOluf Høst Museet, kunstmuseum i Gudhjem, Bor[...]",
44
- "source": "lexdk",
45
- "id": "https://denstoredanske.lex.dk/Oluf_H%C3%B8st_Museet",
46
- "added": "2025-01-04",
47
- "created": "2014-04-23, 2014-04-23",
48
- "license": "cc-by-sa-4.0",
49
- "domain": "Conversation",
50
- "metadata": {
51
- "source-pretty": "Lex.dk"
52
- }
53
- }
54
- ```
55
-
56
- ### Data Fields
57
-
58
- An entry in the dataset consists of the following fields:
59
-
60
- - `text`(`str`): The content of the document.
61
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
62
- - `id` (`str`): An unique identifier for each document.
63
- - `added` (`str`): An date for when the document was added to this collection.
64
- - `created` (`str`): An date range for when the document was originally created.
65
- - `license` (`str`): The license of the document. The licenses vary according to the source.
66
- - `domain` (`str`): The domain of the source
67
- - `metadata/source-pretty` (`str`): The long form version of the short-form source name
68
- - `metadata/*`: Potentially additional metadata
69
- <!-- END-SAMPLE -->
70
-
71
-
72
- ### Dataset Statistics
73
-
74
- <!-- START-DATASET PLOTS -->
75
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
76
- <img>
77
- <!-- END-DATASET PLOTS -->
78
-
79
-
80
- ## Additional Information
81
-
82
-
83
- ### Citation Information
84
-
85
- This dataset is derived from the publicly available dataset [alexandrainst/lexdk-open](https://huggingface.co/datasets/alexandrainst/lexdk-open).
 
data/lexdk/lexdk.parquet DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:5c4779881f575d6f612c8603ed4896f10ebc7293c59637fa8a0773ee4545fce3
3
- size 10007743
 
 
 
 
data/naat/descriptive_stats.json DELETED
@@ -1 +0,0 @@
1
- {"number_of_samples": 129, "average_document_length": 6832.387596899225, "number_of_tokens": 286677, "language": "dan, dansk, Danish", "revision": "6a88cbd06a598259a4879ee118c8ab1843c500ff"}
 
 
data/naat/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: e4f14416631cbf0b8a6fe2dc260e6be69155313af1f93c94bd435a60413e4836
  • Pointer size: 131 Bytes
  • Size of remote file: 537 kB
data/naat/naat.md CHANGED
@@ -3,7 +3,7 @@ pretty_name: NAAT
3
  language:
4
  - da
5
  license: cc0-1.0
6
- license_name: CC-0
7
  size_categories:
8
  - 1-10k
9
  task_categories:
@@ -11,87 +11,45 @@ task_categories:
11
  - fill-mask
12
  task_ids:
13
  - language-modeling
14
- source_datasets:
15
- - danish-foundation-models/danish-gigaword
16
  ---
17
-
18
  # Dataset Card for NAAT
19
-
20
- <!-- START-SHORT DESCRIPTION -->
21
- Danish speeches from 1930-2022.
22
- <!-- END-SHORT DESCRIPTION -->
23
-
24
-
25
  ## Dataset Description
26
-
27
-
28
- <!-- START-DESC-STATS -->
29
- - **Language**: dan, dansk, Danish
30
- - **Number of samples**: 129
31
- - **Number of tokens (Llama 3)**: 286.68K
32
- - **Average document length (characters)**: 6832.39
33
- <!-- END-DESC-STATS -->
34
-
35
-
36
-
37
- ## Dataset Structure
38
  An example from the dataset looks as follows.
39
-
40
-
41
- <!-- START-SAMPLE -->
42
- ```py
43
  {
44
- "text": "Naar jeg i aften sender min nytaarshilsen til det danske folk og tænker tilbage paa det aar, der sva[...]",
45
- "source": "naat",
46
- "id": "naat_1958kongfrederikix",
47
- "added": "2020-02-11",
48
- "created": "1930-01-01, 2022-01-01",
49
- "license": "Creative Commons Legal Code\n\nCC0 1.0 Universal",
50
- "domain": "Conversation",
51
- "metadata": {
52
- "source-pretty": "NAAT"
53
- }
 
 
54
  }
55
  ```
56
 
57
- ### Data Fields
58
 
59
- An entry in the dataset consists of the following fields:
 
 
 
 
 
60
 
61
- - `text`(`str`): The content of the document.
62
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
63
- - `id` (`str`): An unique identifier for each document.
64
- - `added` (`str`): An date for when the document was added to this collection.
65
- - `created` (`str`): An date range for when the document was originally created.
66
- - `license` (`str`): The license of the document. The licenses vary according to the source.
67
- - `domain` (`str`): The domain of the source
68
- - `metadata/source-pretty` (`str`): The long form version of the short-form source name
69
- - `metadata/*`: Potentially additional metadata
70
- <!-- END-SAMPLE -->
71
 
72
- ### Dataset Statistics
73
-
74
- <!-- START-DATASET PLOTS -->
75
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
76
- <img>
77
- <!-- END-DATASET PLOTS -->
78
-
79
-
80
- ## Additional Information
81
-
82
-
83
- ### Citation Information
84
-
85
- This dataset was initially published as part of the [Danish gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite and reference it if you use this dataset:
86
-
87
- > Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
88
-
89
- ```bash
90
- @inproceedings{dagw,
91
- title = {{The Danish Gigaword Corpus}},
92
- author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
93
- year = 2021,
94
- booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
95
- publisher = {NEALT}
96
- }
97
- ```
 
3
  language:
4
  - da
5
  license: cc0-1.0
6
+ license_name: Creative Commons Zero v1.0 Universal
7
  size_categories:
8
  - 1-10k
9
  task_categories:
 
11
  - fill-mask
12
  task_ids:
13
  - language-modeling
 
 
14
  ---
 
15
  # Dataset Card for NAAT
 
 
 
 
 
 
16
  ## Dataset Description
17
+ - **Number of records:** 129
18
+ - **Languages:** Danish
19
+ ## Dataset Structure
 
 
 
 
 
 
 
 
 
20
  An example from the dataset looks as follows.
21
+ ```yaml
 
 
 
22
  {
23
+ 'text': 'Naar jeg i aften sender min nytaarshilsen til det ',
24
+ 'source': 'naat',
25
+ 'id': 'naat_1958kongfrederikix',
26
+ 'added': '2020-02-11',
27
+ 'created': '1930-01-01, 2022-01-01',
28
+ 'metadata': {
29
+ 'domain': 'Conversation',
30
+ 'license': 'Creative Commons Legal Code
31
+
32
+ CC0 1.0 Universal',
33
+ 'source-pretty': 'NAAT'
34
+ }
35
  }
36
  ```
37
 
38
+ ## Data Fields
39
 
40
+ - **id**: source-specific identifier.
41
+ - **text**: textual content of the document.
42
+ - **source**: source of the data.
43
+ - **added**: timestamp when ai2 acquired this data.
44
+ - **created**: timestamp when the original document was created (best guess if not available).
45
+ - **metadata**: source-specific metadata.
46
 
47
+ ## License Information
48
+ <details>
49
+ <summary>Creative Commons Zero v1.0 Universal</summary>
50
+ <p>
51
+ Creative Commons Legal Code
 
 
 
 
 
52
 
53
+ CC0 1.0 Universal
54
+ </p>
55
+ </details>
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
data/naat/naat.parquet CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:fc7c4b8640c72a20abba667d9630fe8d234266a7d42f50a9a20be28b1e0ecff6
3
- size 544392
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6958784a0c4039e9357dee0dedc6bd010e7dd3573d2d9a4db45ce5e4a6608feb
3
+ size 545253
data/nordjyllandnews/create.py DELETED
@@ -1,51 +0,0 @@
1
- """
2
- This scripts download nordjylland news and converts it to the format of danish dynaword
3
- """
4
-
5
- import random
6
- from pathlib import Path
7
- from typing import cast
8
-
9
- from datasets import Dataset, load_dataset
10
-
11
- schemas = [
12
- "{summary}\n\n{text}",
13
- "{text}\n\nOpsummering:\n{summary}",
14
- "{text}\n\nReferat:\n{summary}",
15
- "Lav et referat af nedenstående tekst:\n\nTekst:\n{text}\n\nReferat:\n{summary}",
16
- ]
17
- source = "nordjyllandnews"
18
-
19
-
20
- def convert_sample(example):
21
- schema = random.choice(schemas)
22
- new_example = dict(
23
- text_new=schema.format(text=example["text"], summary=example["summary"]),
24
- source=source,
25
- domain="News",
26
- license="Creative Commons Legal Code\n\nCC0 1.0 Universal",
27
- added="2024-12-16",
28
- created="2000-01-01, 2024-01-01", # best guess
29
- metadata={"source-pretty": "Nordjylland News"},
30
- )
31
-
32
- return new_example
33
-
34
-
35
- def main():
36
- ds = load_dataset("alexandrainst/nordjylland-news-summarization", split="train")
37
- ds = cast(Dataset, ds)
38
-
39
- ds = ds.map(convert_sample, remove_columns=ds.column_names)
40
- ds = ds.rename_columns({"text_new": "text"})
41
- ds = ds.add_column("id", [f"{source}_{i}" for i in range(len(ds))]) # type: ignore
42
- ds = ds.select_columns(
43
- ["text", "source", "id", "added", "created", "license", "domain", "metadata"]
44
- )
45
-
46
- save_path = Path(__file__).parent / f"{source}.parquet"
47
- ds.to_parquet(save_path)
48
-
49
-
50
- if __name__ == "__main__":
51
- main()
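The deleted script above combines each article with its summary using a randomly chosen template. The formatting step in isolation looks like this (one fixed template copied from the script, without the `random.choice`; purely illustrative):

```python
# One of the four templates from the deleted create.py above.
schema = "Lav et referat af nedenstående tekst:\n\nTekst:\n{text}\n\nReferat:\n{summary}"

def apply_schema(text: str, summary: str) -> str:
    """Fill the template with an article body and its summary."""
    return schema.format(text=text, summary=summary)

example = apply_schema("Artiklen handler om ...", "Kort referat.")
assert example.endswith("Referat:\nKort referat.")
```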