This view is limited to 50 files because it contains too many changes.  See the raw diff here.
Files changed (50)
  1. .gitignore +1 -1
  2. CONTRIBUTING.md +2 -22
  3. README.md +54 -98
  4. data/adl/adl.md +3 -41
  5. data/adl/descriptive_stats.json +1 -1
  6. data/adl/images/dist_document_length.png +0 -3
  7. data/botxt/botxt.md +3 -39
  8. data/botxt/descriptive_stats.json +1 -1
  9. data/botxt/images/dist_document_length.png +0 -3
  10. data/dannet/dannet.md +3 -41
  11. data/dannet/descriptive_stats.json +1 -1
  12. data/dannet/images/dist_document_length.png +0 -3
  13. data/depbank/depbank.md +3 -41
  14. data/depbank/descriptive_stats.json +1 -1
  15. data/depbank/images/dist_document_length.png +0 -3
  16. data/ep/descriptive_stats.json +1 -1
  17. data/ep/ep.md +3 -39
  18. data/ep/images/dist_document_length.png +0 -3
  19. data/ft/descriptive_stats.json +1 -1
  20. data/ft/ft.md +4 -41
  21. data/ft/images/dist_document_length.png +0 -3
  22. data/gutenberg/descriptive_stats.json +1 -1
  23. data/gutenberg/gutenberg.md +3 -41
  24. data/gutenberg/images/dist_document_length.png +0 -3
  25. data/hest/descriptive_stats.json +1 -1
  26. data/hest/hest.md +3 -41
  27. data/hest/images/dist_document_length.png +0 -3
  28. data/jvj/descriptive_stats.json +1 -1
  29. data/jvj/images/dist_document_length.png +0 -3
  30. data/jvj/jvj.md +3 -40
  31. data/lexdk/create.py +0 -78
  32. data/lexdk/descriptive_stats.json +0 -1
  33. data/lexdk/images/dist_document_length.png +0 -3
  34. data/lexdk/lexdk.md +0 -85
  35. data/lexdk/lexdk.parquet +0 -3
  36. data/naat/descriptive_stats.json +1 -1
  37. data/naat/images/dist_document_length.png +0 -3
  38. data/naat/naat.md +4 -41
  39. data/nordjyllandnews/descriptive_stats.json +1 -1
  40. data/nordjyllandnews/images/dist_document_length.png +0 -3
  41. data/nordjyllandnews/nordjyllandnews.md +3 -41
  42. data/opensubtitles/create.py +0 -123
  43. data/opensubtitles/descriptive_stats.json +0 -1
  44. data/opensubtitles/images/dist_document_length.png +0 -3
  45. data/opensubtitles/opensubtitles.md +0 -159
  46. data/opensubtitles/opensubtitles.parquet +0 -3
  47. data/relig/descriptive_stats.json +1 -1
  48. data/relig/images/dist_document_length.png +0 -3
  49. data/relig/relig.md +4 -40
  50. data/retsinformationdk/descriptive_stats.json +1 -1
.gitignore CHANGED
@@ -10,4 +10,4 @@ cspell.json
 
 # tmp files
 tmp.py
-tmp.png
+

CONTRIBUTING.md CHANGED
@@ -49,30 +49,10 @@ git checkout pr/{PR NUMBER}
 git push origin pr/{PR NUMBER}:refs/pr/{PR NUMBER}
 ```
 
-Before you make the PR do be sure to make sure that you have completed the following checklist.
+Before you make the PR do be sure to make sure that the tests have been run.
 
-### Checklist
-
-- [ ] I have run the test suite using `make test` and all tests pass
-- [ ] I have added/changed a dataset and have
-- [ ] I have updated descriptive statistics using `make update-descriptive-statistics`
-- [ ] I have bumped the version use `make bump-version`
-
-### Examples of Previous PRs
 To see example PR you can see the following:
 
 - [Restructuring columns in the dataset](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/11)
 - [Adding a new dataset](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/15)
-- Updated [dataset description and metadata](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/20)
-
-## Frequently asked questions
-
-### Do you accept synthetic dataets
-
-Yes we do generally accept synthetic datasets since it will likely be a promising research direction for low- to mid-resource languages.
-However, you should be aware that synthetic dataset will probably require a more detailed examination and description.
-We will for instance examine the quality of the synthetic subset and whether the model used for the creation permits resharing of the synthetic data under permissible licenses.
-
-### Do you accept non-Danish data
-
-Generally this repository is intended for Danish text, however quite broadly defined. For instance, we do accept data containing [code-switching](https://www.google.com/search?client=safari&rls=en&q=code+switching&ie=UTF-8&oe=UTF-8) and historical Danish text.
+- Updated [dataset description and metadata](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/20)

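Since the checklist above is reduced to "the tests have been run", the following is a minimal sketch of a pre-PR sanity check run directly against the Hub. It is not the project's test suite (`make test` remains the canonical gate); it assumes the `datasets` library and the Hub id used throughout this PR, and only mirrors the unique-id check referenced elsewhere in the repo.

```py
# Sketch of a pre-PR sanity check (assumption: run outside the repo, against the Hub).
from datasets import load_dataset

ds = load_dataset("danish-foundation-models/danish-dynaword", split="train")

# ids must be unique across the whole corpus
ids = ds["id"]
assert len(ids) == len(set(ids)), "duplicate ids found"

# spot-check that the documented fields are present on a few rows
for row in ds.select(range(5)):
    assert row["text"] and row["source"] and row["license"]
```
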
README.md CHANGED
@@ -5,14 +5,6 @@ configs:
5
  data_files:
6
  - split: train
7
  path: 'data/*/*.parquet'
8
- - config_name: lexdk
9
- data_files:
10
- - split: train
11
- path: data/lexdk/*.parquet
12
- - config_name: opensubtitles
13
- data_files:
14
- - split: train
15
- path: data/opensubtitles/*.parquet
16
  - config_name: retsinformationdk
17
  data_files:
18
  - split: train
@@ -120,23 +112,18 @@ language_bcp47:
120
 
121
  <!--
122
  readme structure is inspired by:
123
- https://github.com/huggingface/datasets/blob/main/templates/README_guide.md
124
- -->
125
 
126
 
127
  # 🧨 Danish Dynaword
128
 
129
-
130
- <!-- START README TABLE -->
131
  | | |
132
  | ------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------- |
133
- | **Version** | 1.0.7 |
134
  | **Language** | dan, dansk, Danish |
135
  | **License** | Permissible, See the respective dataset |
136
  | **Models** | For model trained used this data see [danish-foundation-models](https://huggingface.co/danish-foundation-models) |
137
  | **Contact** | If you have question about this project please create an issue [here](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions) |
138
 
139
- <!-- END README TABLE -->
140
 
141
  ## Table of Contents
142
  - [🧨 Danish Dynaword](#-danish-dynaword)
@@ -153,22 +140,20 @@ https://github.com/huggingface/datasets/blob/main/templates/README_guide.md
153
  - [Curation Rationale](#curation-rationale)
154
  - [Annotations](#annotations)
155
  - [Source Data](#source-data)
156
- - [Dataset Statistics](#dataset-statistics)
157
  - [Additional Information](#additional-information)
158
  - [Contributing to the dataset](#contributing-to-the-dataset)
159
  - [Citation Information](#citation-information)
160
- - [Disclaimer](#disclaimer)
161
- - [Notice and take down policy](#notice-and-take-down-policy)
162
 
163
  ## Dataset Description
164
 
165
  <!-- START-DESC-STATS -->
 
166
  - **Language**: dan, dansk, Danish
167
- - **Number of samples**: 588.48K
168
- - **Number of tokens (Llama 3)**: 1.84B
169
- - **Average document length (characters)**: 9222.58
170
- <!-- END-DESC-STATS -->
171
 
 
172
 
173
  ### Dataset Summary
174
 
@@ -221,19 +206,21 @@ The dataset contains text from different sources which are thoroughly defined in
221
 
222
  Each entry in the dataset consists of a single text with associated metadata
223
 
 
224
  <!-- START-SAMPLE -->
 
 
 
225
  ```py
226
  {
227
- "text": "SAMLEDE VÆRKER\n\nJEPPE AAKJÆR GYLDENDALSKE BOGHANDEL - NORDISK FORLAG KJØBENHAVN OG\nKRISTIANIA 1919 0[...]",
228
- "source": "adl",
229
- "id": "adl_aakjaer06val",
230
- "added": "2020-09-14",
231
- "created": "1700-01-01, 2022-01-01",
232
- "license": "Creative Commons Legal Code\n\nCC0 1.0 Universal",
233
- "domain": "Wiki & Books",
234
- "metadata": {
235
- "source-pretty": "Archive for Danish Literature"
236
- }
237
  }
238
  ```
239
 
@@ -250,7 +237,7 @@ An entry in the dataset consists of the following fields:
250
  - `domain` (`str`): The domain of the source
251
  - `metadata/source-pretty` (`str`): The long form version of the short-form source name
252
  - `metadata/*`: Potentially additional metadata
253
- <!-- END-SAMPLE -->
254
 
255
  ### Data Splits
256
 
@@ -270,86 +257,69 @@ This data generally contains no annotation besides the metadata attached to each
270
 
271
  Below follows a brief overview of the sources in the corpus along with their individual license.
272
 
273
  <!-- START-MAIN TABLE -->
274
  | Source | Description | N. Tokens | License |
275
  | :------------------ | :--------------------------------------------------------------------------------------------------------------------------- | :-------- | :--------------------- |
276
- | [lexdk] | Permissible use articles from [lex.dk](https://lex.dk) | 5.69M | [CC-BY-SA 4.0] |
277
- | [opensubtitles] | Danish subsection of [OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles/corpus/version/OpenSubtitles) | 271.60M | [CC-0] |
278
  | [retsinformationdk] | [retsinformation.dk](https://www.retsinformation.dk) (legal-information.dk) the official legal information system of Denmark | 516.54M | [Danish Copyright Law] |
279
- | [ep] | The Danish subsection of [Europarl](https://aclanthology.org/2005.mtsummit-papers.11/) | 100.89M | [CC-0] |
280
- | [ft] | Records from all meetings of The Danish parliament (Folketinget) in the parliament hall | 114.09M | [CC-0] |
281
- | [wikisource] | The Danish subsection of [Wikisource](https://en.wikisource.org/wiki/Main_Page) | 5.34M | [CC-0] |
282
  | [spont] | Conversational samples collected as a part of research projects at Aarhus University | 1.56M | [CC-0] |
283
  | [tv2r] | Contemporary Danish newswire articles published between 2010 and 2019 | 21.67M | [CC-BY-SA 4.0] |
284
- | [adl] | Danish literature from 1700-2023 from the Archive for Danish Literature (ADL) | 58.49M | [CC-0] |
285
- | [hest] | Samples from the Danish debate forum www.heste-nettet.dk | 389.33M | [CC-0] |
286
- | [skat] | Skat is the Danish tax authority. This dataset contains content from its website skat.dk | 122.12M | [CC-0] |
287
- | [dannet] | [DanNet](https://cst.ku.dk/projekter/dannet) is a Danish WordNet | 1.52M | [DanNet 1.0 License] |
288
- | [retspraksis] | Case law or judical practice in Denmark derived from [Retspraksis](https://da.wikipedia.org/wiki/Retspraksis) | 57.08M | [CC-0] |
289
- | [wikibooks] | The Danish Subsection of [Wikibooks](https://www.wikibooks.org) | 6.24M | [CC-0] |
290
- | [jvj] | The works of the Danish author and poet, [Johannes V. Jensen](https://da.wikipedia.org/wiki/Johannes_V._Jensen) | 3.55M | [CC-BY-SA 4.0] |
291
  | [gutenberg] | The Danish subsection from Project [Gutenberg](https://www.gutenberg.org) | 6.76M | [Gutenberg License] |
292
- | [botxt] | The Bornholmsk Ordbog Dictionary Projec | 847.97K | [CC-0] |
293
  | [depbank] | The Danish subsection of the [Universal Dependencies Treebank](https://github.com/UniversalDependencies/UD_Danish-DDT) | 185.45K | [CC-BY-SA 4.0] |
294
- | [naat] | Danish speeches from 1930-2022 | 286.68K | [CC-0] |
295
- | [synne] | Dataset collected from [synnejysk forening's website](https://www.synnejysk.dk), covering the Danish dialect sønderjysk | 52.51K | [CC-0] |
296
- | [wiki] | The Danish subsection of [wikipedia](https://en.wikipedia.org/wiki/Main_Page) | 122.00M | [CC-0] |
 
297
  | [nordjyllandnews] | Articles from the Danish Newspaper [TV2 Nord](https://www.tv2nord.dk) | 37.91M | [CC-0] |
 
 
298
  | [relig] | Danish religious text from the 1700-2022 | 1.24M | [CC-0] |
299
- | **Total** | | 1.84B | |
 
300
 
301
- [lexdk]: data/lexdk/lexdk.md
302
- [opensubtitles]: data/opensubtitles/opensubtitles.md
303
  [retsinformationdk]: data/retsinformationdk/retsinformationdk.md
304
- [ep]: data/ep/ep.md
305
- [ft]: data/ft/ft.md
306
- [wikisource]: data/wikisource/wikisource.md
307
  [spont]: data/spont/spont.md
308
  [tv2r]: data/tv2r/tv2r.md
309
- [adl]: data/adl/adl.md
310
- [hest]: data/hest/hest.md
311
- [skat]: data/skat/skat.md
312
- [dannet]: data/dannet/dannet.md
313
- [retspraksis]: data/retspraksis/retspraksis.md
314
- [wikibooks]: data/wikibooks/wikibooks.md
315
- [jvj]: data/jvj/jvj.md
316
  [gutenberg]: data/gutenberg/gutenberg.md
317
- [botxt]: data/botxt/botxt.md
318
  [depbank]: data/depbank/depbank.md
319
- [naat]: data/naat/naat.md
320
- [synne]: data/synne/synne.md
321
  [wiki]: data/wiki/wiki.md
 
322
  [nordjyllandnews]: data/nordjyllandnews/nordjyllandnews.md
 
 
323
  [relig]: data/relig/relig.md
 
324
 
325
 
326
  [CC-0]: https://creativecommons.org/publicdomain/zero/1.0/legalcode.en
327
  [CC-BY-SA 4.0]: https://creativecommons.org/licenses/by-sa/4.0/deed.en
328
  [Danish Copyright Law]: ./data/retsinformationdk/retsinformationdk.md#license-information
329
- [DanNet 1.0 License]: ./data/dannet/dannet.md#license-information
330
  [Gutenberg License]: ./data/gutenberg/gutenberg.md#license-information
 
331
  <!-- END-MAIN TABLE -->
332
 
333
 
334
- You can learn more about each dataset by pressing
335
-
336
- <!-- ### Quality Control
337
-
338
- Dynaword performs quality checks along with each PR. These quality checks includes:
339
- - ensuring unique ids
340
- TODO:
341
- - checking for duplicates
342
- -->
343
-
344
-
345
-
346
- ### Dataset Statistics
347
-
348
- <!-- START-DATASET PLOTS -->
349
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
350
- <img>
351
- <!-- END-DATASET PLOTS -->
352
-
353
 
354
  ## Additional Information
355
 
@@ -361,20 +331,6 @@ We welcome contributions to the dataset such as new sources, better data filteri
361
 
362
  This version expands upon existing dataset sources such as the [Danish gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite the source of the dataset when using these datasets.
363
 
364
- ### Disclaimer
365
- We do not own any of the text from which the data has been extracted.
366
- We only offer files that we believe we are free to redistribute. If any doubt occurs about the legality of any of our file downloads we will take them off right away after [contacting us](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/new).
367
-
368
- ### Notice and take down policy
369
- Notice: Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
370
-
371
- - Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
372
- - Clearly identify the copyrighted work claimed to be infringed.
373
- - Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
374
-
375
- You can contact us through [this channel](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/new).
376
-
377
- Take down: We will comply to legitimate requests by removing the affected sources from the next release of the corpus.
378
 
379
  ---
380
 
 
5
  data_files:
6
  - split: train
7
  path: 'data/*/*.parquet'
 
8
  - config_name: retsinformationdk
9
  data_files:
10
  - split: train
 
112
 
113
  <!--
114
  readme structure is inspired by:
115
+ https://github.com/huggingface/datasets/blob/main/templates/README_guide.md -->
 
116
 
117
 
118
  # 🧨 Danish Dynaword
119
 
 
 
120
  | | |
121
  | ------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------- |
 
122
  | **Language** | dan, dansk, Danish |
123
  | **License** | Permissible, See the respective dataset |
124
  | **Models** | For model trained used this data see [danish-foundation-models](https://huggingface.co/danish-foundation-models) |
125
  | **Contact** | If you have question about this project please create an issue [here](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions) |
126
 
 
127
 
128
  ## Table of Contents
129
  - [🧨 Danish Dynaword](#-danish-dynaword)
 
140
  - [Curation Rationale](#curation-rationale)
141
  - [Annotations](#annotations)
142
  - [Source Data](#source-data)
 
143
  - [Additional Information](#additional-information)
144
  - [Contributing to the dataset](#contributing-to-the-dataset)
145
  - [Citation Information](#citation-information)
 
 
146
 
147
  ## Dataset Description
148
 
149
  <!-- START-DESC-STATS -->
150
+
151
  - **Language**: dan, dansk, Danish
152
+ - **Number of samples**: 546.77K
153
+ - **Number of tokens (Llama 3)**: 1.57B
154
+ - **Average document length (characters)**: 8461.25
 
155
 
156
+ <!-- END-DESC-STATS -->
157
 
158
  ### Dataset Summary
159
 
 
206
 
207
  Each entry in the dataset consists of a single text with associated metadata
208
 
209
+
210
  <!-- START-SAMPLE -->
211
+ <!-- END-SAMPLE -->
212
+
213
+
214
  ```py
215
  {
216
+ "text": "SAMLEDE VÆRKER\n\nJEPPE AAKJÆR GYLDENDALSKE BOGHANDEL...",
217
+ "source": "adl",
218
+ "id": "adl_aakjaer06val",
219
+ "added": "2020-09-14",
220
+ "created": "1700-01-01, 2022-01-01",
221
+ "license": "Creative Commons Legal Code\n\nCC0 1.0 Universal",
222
+ "domain": "Wiki & Books",
223
+ "metadata": {"source-pretty": "Archive for Danish Literature"},
 
 
224
  }
225
  ```
226
 
 
237
  - `domain` (`str`): The domain of the source
238
  - `metadata/source-pretty` (`str`): The long form version of the short-form source name
239
  - `metadata/*`: Potentially additional metadata
240
+
241
 
242
  ### Data Splits
243
 
 
257
 
258
  Below follows a brief overview of the sources in the corpus along with their individual license.
259
 
260
+
261
+
262
+
263
+
264
+
265
+
266
  <!-- START-MAIN TABLE -->
267
  | Source | Description | N. Tokens | License |
268
  | :------------------ | :--------------------------------------------------------------------------------------------------------------------------- | :-------- | :--------------------- |
 
 
269
  | [retsinformationdk] | [retsinformation.dk](https://www.retsinformation.dk) (legal-information.dk) the official legal information system of Denmark | 516.54M | [Danish Copyright Law] |
270
+ | [hest] | Samples from the Danish debate forum www.heste-nettet.dk | 389.33M | [CC-0] |
 
 
271
  | [spont] | Conversational samples collected as a part of research projects at Aarhus University | 1.56M | [CC-0] |
272
  | [tv2r] | Contemporary Danish newswire articles published between 2010 and 2019 | 21.67M | [CC-BY-SA 4.0] |
273
+ | [ep] | The Danish subsection of [Europarl](https://aclanthology.org/2005.mtsummit-papers.11/) | 100.89M | [CC-0] |
 
274
  | [gutenberg] | The Danish subsection from Project [Gutenberg](https://www.gutenberg.org) | 6.76M | [Gutenberg License] |
 
275
  | [depbank] | The Danish subsection of the [Universal Dependencies Treebank](https://github.com/UniversalDependencies/UD_Danish-DDT) | 185.45K | [CC-BY-SA 4.0] |
276
+ | [jvj] | The works of the Danish author and poet, [Johannes V. Jensen](https://da.wikipedia.org/wiki/Johannes_V._Jensen) | 3.55M | [CC-BY-SA 4.0] |
277
+ | [wikisource] | The Danish subsection of [Wikisource](https://en.wikisource.org/wiki/Main_Page) | 5.34M | [CC-0] |
278
+ | [wiki] | The Danish subsection of [wikipedia](https://en.wikipedia.org/wiki/Main_Page) | 122.00M | [CC-0] |
279
+ | [wikibooks] | The Danish Subsection of [Wikibooks](https://www.wikibooks.org) | 6.24M | [CC-0] |
280
  | [nordjyllandnews] | Articles from the Danish Newspaper [TV2 Nord](https://www.tv2nord.dk) | 37.91M | [CC-0] |
281
+ | [adl] | Danish literature from 1700-2023 from the Archive for Danish Literature (ADL) | 58.49M | [CC-0] |
282
+ | [retspraksis] | Case law or judicial practice in Denmark derived from [Retspraksis](https://da.wikipedia.org/wiki/Retspraksis) | 57.08M | [CC-0] |
283
  | [relig] | Danish religious text from the 1700-2022 | 1.24M | [CC-0] |
284
+ | [dannet] | [DanNet](https://cst.ku.dk/projekter/dannet) is a Danish WordNet | 1.52M | [DanNet 1.0 License] |
285
+ | [synne] | Dataset collected from [synnejysk forening's website](https://www.synnejysk.dk), covering the Danish dialect sønderjysk | 52.51K | [CC-0] |
286
+ | [naat] | A dataset of Danish speeches from 1930-2022 | 286.68K | [CC-0] |
287
+ | [botxt] | The Bornholmsk Ordbog Dictionary Project | 847.97K | [CC-0] |
288
+ | [ft] | This dataset consists of records from all meetings of The Danish parliament (Folketinget) in the parliament hall | 114.09M | [CC-0] |
289
+ | [skat] | Skat is the Danish tax authority. This dataset contains content from its website skat.dk | 122.12M | [CC-0] |
290
+ | **Total** | | 1.57B | |
291
 
 
 
292
  [retsinformationdk]: data/retsinformationdk/retsinformationdk.md
293
+ [hest]: data/hest/hest.md
 
 
294
  [spont]: data/spont/spont.md
295
  [tv2r]: data/tv2r/tv2r.md
296
+ [ep]: data/ep/ep.md
 
297
  [gutenberg]: data/gutenberg/gutenberg.md
 
298
  [depbank]: data/depbank/depbank.md
299
+ [jvj]: data/jvj/jvj.md
300
+ [wikisource]: data/wikisource/wikisource.md
301
  [wiki]: data/wiki/wiki.md
302
+ [wikibooks]: data/wikibooks/wikibooks.md
303
  [nordjyllandnews]: data/nordjyllandnews/nordjyllandnews.md
304
+ [adl]: data/adl/adl.md
305
+ [retspraksis]: data/retspraksis/retspraksis.md
306
  [relig]: data/relig/relig.md
307
+ [dannet]: data/dannet/dannet.md
308
+ [synne]: data/synne/synne.md
309
+ [naat]: data/naat/naat.md
310
+ [botxt]: data/botxt/botxt.md
311
+ [ft]: data/ft/ft.md
312
+ [skat]: data/skat/skat.md
313
 
314
 
315
  [CC-0]: https://creativecommons.org/publicdomain/zero/1.0/legalcode.en
316
  [CC-BY-SA 4.0]: https://creativecommons.org/licenses/by-sa/4.0/deed.en
317
  [Danish Copyright Law]: ./data/retsinformationdk/retsinformationdk.md#license-information
 
318
  [Gutenberg License]: ./data/gutenberg/gutenberg.md#license-information
319
+ [DanNet 1.0 License]: ./data/dannet/dannet.md#license-information
320
  <!-- END-MAIN TABLE -->
321
 
322
 
 
323
 
324
  ## Additional Information
325
 
 
331
 
332
  This version expands upon existing dataset sources such as the [Danish gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite the source of the dataset when using these datasets.
333
 
 
334
 
335
  ---
336
 
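For context on how the configs edited above are consumed, here is a rough sketch using the 🤗 `datasets` API. The Hub id and the per-source config name come from the YAML header shown in this diff; the variable names and printed fields are illustrative only.

```py
from datasets import load_dataset

# Default config: every data/*/*.parquet file, single "train" split.
dynaword = load_dataset("danish-foundation-models/danish-dynaword", split="train")

# One source only, via its per-source config (config names follow the
# `config_name` entries in the YAML front matter, e.g. "retsinformationdk").
retsinfo = load_dataset(
    "danish-foundation-models/danish-dynaword", "retsinformationdk", split="train"
)

# Each row follows the schema shown in the sample entry above.
row = dynaword[0]
print(row["id"], row["source"], row["domain"])
```

With the `lexdk` and `opensubtitles` configs removed in this PR, loading either name should now fail; the remaining configs are unchanged.
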
data/adl/adl.md CHANGED
@@ -23,62 +23,24 @@ source_datasets:
23
  Danish literature from 1700-2023 from the Archive for Danish Literature (ADL).
24
  <!-- END-SHORT DESCRIPTION -->
25
 
26
- See also dataset [entry](https://sprogteknologi.dk/dataset/public-adl-text-sources) on sprogteknologi.dk and their API [here](https://rawgit.com/Det-Kongelige-Bibliotek/access-digital-objects/master/form-demos/adl-form.html).
27
 
28
  <!-- START-DESC-STATS -->
 
29
  - **Language**: dan, dansk, Danish
30
  - **Number of samples**: 498
31
  - **Number of tokens (Llama 3)**: 58.49M
32
  - **Average document length (characters)**: 324932.24
33
- <!-- END-DESC-STATS -->
34
 
 
35
 
36
 
37
- ## Dataset Structure
38
  An example from the dataset looks as follows.
39
 
40
-
41
  <!-- START-SAMPLE -->
42
- ```py
43
- {
44
- "text": "SAMLEDE VÆRKER\n\nJEPPE AAKJÆR GYLDENDALSKE BOGHANDEL - NORDISK FORLAG KJØBENHAVN OG\nKRISTIANIA 1919 0[...]",
45
- "source": "adl",
46
- "id": "adl_aakjaer06val",
47
- "added": "2020-09-14",
48
- "created": "1700-01-01, 2022-01-01",
49
- "license": "Creative Commons Legal Code\n\nCC0 1.0 Universal",
50
- "domain": "Wiki & Books",
51
- "metadata": {
52
- "source-pretty": "Archive for Danish Literature"
53
- }
54
- }
55
- ```
56
-
57
- ### Data Fields
58
-
59
- An entry in the dataset consists of the following fields:
60
-
61
- - `text`(`str`): The content of the document.
62
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
63
- - `id` (`str`): An unique identifier for each document.
64
- - `added` (`str`): An date for when the document was added to this collection.
65
- - `created` (`str`): An date range for when the document was originally created.
66
- - `license` (`str`): The license of the document. The licenses vary according to the source.
67
- - `domain` (`str`): The domain of the source
68
- - `metadata/source-pretty` (`str`): The long form version of the short-form source name
69
- - `metadata/*`: Potentially additional metadata
70
  <!-- END-SAMPLE -->
71
 
72
 
73
-
74
- ### Dataset Statistics
75
-
76
- <!-- START-DATASET PLOTS -->
77
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
78
- <img>
79
- <!-- END-DATASET PLOTS -->
80
-
81
-
82
  ## Additional Information
83
 
84
 
 
23
  Danish literature from 1700-2023 from the Archive for Danish Literature (ADL).
24
  <!-- END-SHORT DESCRIPTION -->
25
 
 
26
 
27
  <!-- START-DESC-STATS -->
28
+
29
  - **Language**: dan, dansk, Danish
30
  - **Number of samples**: 498
31
  - **Number of tokens (Llama 3)**: 58.49M
32
  - **Average document length (characters)**: 324932.24
 
33
 
34
+ <!-- END-DESC-STATS -->
35
 
36
 
37
+ ## Dataset Sturcture
38
  An example from the dataset looks as follows.
39
 
 
40
  <!-- START-SAMPLE -->
 
 
 
 
41
  <!-- END-SAMPLE -->
42
 
43
 
 
 
 
44
  ## Additional Information
45
 
46
 
data/adl/descriptive_stats.json CHANGED
@@ -1 +1 @@
- {"number_of_samples": 498, "average_document_length": 324932.2429718876, "number_of_tokens": 58493311, "language": "dan, dansk, Danish", "revision": "6a88cbd06a598259a4879ee118c8ab1843c500ff"}
 
+ {"number_of_samples": 498, "average_document_length": 324932.2429718876, "number_of_tokens": 58493311, "language": "dan, dansk, Danish", "revision": "ab78b9132d5697343896be76ff8f99a6b544b74b"}
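This and the other `descriptive_stats.json` changes in this PR only update the `revision` field; the remaining numbers can be reproduced roughly as sketched below. This is not the project's tooling (`make update-descriptive-statistics`, referenced in CONTRIBUTING.md, is the canonical route); the `adl` config name and the Llama 3 tokenizer are assumptions based on the dataset card.

```py
from datasets import load_dataset
from transformers import AutoTokenizer

ds = load_dataset("danish-foundation-models/danish-dynaword", "adl", split="train")
tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")  # gated model; any Llama 3 tokenizer

doc_lengths = [len(text) for text in ds["text"]]
n_tokens = sum(len(tok(text)["input_ids"]) for text in ds["text"])  # slow for large sources

stats = {
    "number_of_samples": len(ds),
    "average_document_length": sum(doc_lengths) / len(doc_lengths),
    "number_of_tokens": n_tokens,
    "language": "dan, dansk, Danish",
}
print(stats)  # compare against data/adl/descriptive_stats.json
```
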
data/adl/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: 297677d067d7831f90c4d539c1d160af2087a25119691bbfda61e95de62ca5f5
  • Pointer size: 131 Bytes
  • Size of remote file: 539 kB
data/botxt/botxt.md CHANGED
@@ -25,58 +25,22 @@ The Bornholmsk Ordbog Dictionary Project
25
 
26
  Fictional texts of various kinds written in Bornholmsk, the dialect spoken on the Danish island of Bornholm (The language code for Bornholmsk under IETF BCP-47 is da-bornholm), have been digitized (OCR’ed and proofread) by volunteers working within the recently resumed Bornholmsk Ordbog dictionary project (Kjeldsen, 2019). Most of the material included is written by Otto J. Lund in the period 1930-48 (novels, short stories, and poems). The Bornholmsk subcorpus, which in its present state amounts to circa 400 K words, also includes folk stories published by J. P. Kuhre in 1938, and by K. M. Kofoed in 1935, fictional letters by various authors published in the 1930s, as well as poems by Alfred Jensen published in 1948 and various other texts from the same period. The non-standardized orthography varies considerably from source to source. The Bornholmsk part of the Danish Gigaword is a significantly extended dataset, well beyond that studied in earlier NLP work on the dialect [(Derczynski and Kjeldsen, 2019)](https://aclanthology.org/W19-6138/).
27
 
28
-
29
  <!-- START-DESC-STATS -->
 
30
  - **Language**: dan, dansk, Danish
31
  - **Number of samples**: 106
32
  - **Number of tokens (Llama 3)**: 847.97K
33
  - **Average document length (characters)**: 18972.42
34
- <!-- END-DESC-STATS -->
35
 
 
36
 
37
 
38
- ## Dataset Structure
39
  An example from the dataset looks as follows.
40
 
41
-
42
  <!-- START-SAMPLE -->
43
- ```py
44
- {
45
- "text": "Ræua-Lârs\n\nRæua-Lârs å hans Konna, Stina, bode uda i Torpabakkana. Hanj hed nok æjla Lârs\nNielsen, m[...]",
46
- "source": "botxt",
47
- "id": "botxt_0000040",
48
- "added": "2024-05-16",
49
- "created": "2000-01-01, 2022-01-01",
50
- "license": "Creative Commons Legal Code\n\nCC0 1.0 Universal",
51
- "domain": "Other",
52
- "metadata": {
53
- "source-pretty": "Bornholmsk (Danish dialect)"
54
- }
55
- }
56
- ```
57
-
58
- ### Data Fields
59
-
60
- An entry in the dataset consists of the following fields:
61
-
62
- - `text`(`str`): The content of the document.
63
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
64
- - `id` (`str`): An unique identifier for each document.
65
- - `added` (`str`): An date for when the document was added to this collection.
66
- - `created` (`str`): An date range for when the document was originally created.
67
- - `license` (`str`): The license of the document. The licenses vary according to the source.
68
- - `domain` (`str`): The domain of the source
69
- - `metadata/source-pretty` (`str`): The long form version of the short-form source name
70
- - `metadata/*`: Potentially additional metadata
71
  <!-- END-SAMPLE -->
72
 
73
- ### Dataset Statistics
74
-
75
- <!-- START-DATASET PLOTS -->
76
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
77
- <img>
78
- <!-- END-DATASET PLOTS -->
79
-
80
 
81
  ## Additional Information
82
 
 
25
 
26
  Fictional texts of various kinds written in Bornholmsk, the dialect spoken on the Danish island of Bornholm (The language code for Bornholmsk under IETF BCP-47 is da-bornholm), have been digitized (OCR’ed and proofread) by volunteers working within the recently resumed Bornholmsk Ordbog dictionary project (Kjeldsen, 2019). Most of the material included is written by Otto J. Lund in the period 1930-48 (novels, short stories, and poems). The Bornholmsk subcorpus, which in its present state amounts to circa 400 K words, also includes folk stories published by J. P. Kuhre in 1938, and by K. M. Kofoed in 1935, fictional letters by various authors published in the 1930s, as well as poems by Alfred Jensen published in 1948 and various other texts from the same period. The non-standardized orthography varies considerably from source to source. The Bornholmsk part of the Danish Gigaword is a significantly extended dataset, well beyond that studied in earlier NLP work on the dialect [(Derczynski and Kjeldsen, 2019)](https://aclanthology.org/W19-6138/).
27
 
 
28
  <!-- START-DESC-STATS -->
29
+
30
  - **Language**: dan, dansk, Danish
31
  - **Number of samples**: 106
32
  - **Number of tokens (Llama 3)**: 847.97K
33
  - **Average document length (characters)**: 18972.42
 
34
 
35
+ <!-- END-DESC-STATS -->
36
 
37
 
38
+ ## Dataset Structure
39
  An example from the dataset looks as follows.
40
 
 
41
  <!-- START-SAMPLE -->
 
 
 
42
  <!-- END-SAMPLE -->
43
 
 
 
 
44
 
45
  ## Additional Information
46
 
data/botxt/descriptive_stats.json CHANGED
@@ -1 +1 @@
- {"number_of_samples": 106, "average_document_length": 18972.415094339623, "number_of_tokens": 847973, "language": "dan, dansk, Danish", "revision": "6a88cbd06a598259a4879ee118c8ab1843c500ff"}
 
+ {"number_of_samples": 106, "average_document_length": 18972.415094339623, "number_of_tokens": 847973, "language": "dan, dansk, Danish", "revision": "ab78b9132d5697343896be76ff8f99a6b544b74b"}
data/botxt/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: e98f2f59f8cbe8be5691f1d7c073b2c13361d331546f9451d24b27fcde649f6c
  • Pointer size: 131 Bytes
  • Size of remote file: 541 kB
data/dannet/dannet.md CHANGED
@@ -27,53 +27,23 @@ A WordNet is a lexico-semantic network which show the meaning and the relation b
27
 
28
  ## Dataset Description
29
 
30
-
31
  <!-- START-DESC-STATS -->
 
32
  - **Language**: dan, dansk, Danish
33
  - **Number of samples**: 49.04K
34
  - **Number of tokens (Llama 3)**: 1.52M
35
  - **Average document length (characters)**: 90.80
36
- <!-- END-DESC-STATS -->
37
 
 
38
 
39
 
40
- ## Dataset Structure
41
  An example from the dataset looks as follows.
42
 
43
-
44
  <!-- START-SAMPLE -->
45
- ```py
46
- {
47
- "text": "Når fodboldholdet fra 1. division i Ikast spiller hjemmekampe, lyder råbet ud over Ikast Stadion: We[...]",
48
- "source": "dannet",
49
- "id": "dannet_46506",
50
- "added": "2020-09-24",
51
- "created": "2000-01-01, 2022-01-01",
52
- "license": "Commercial Use of DanNet\n\nDanNet may be used in commercial applications in accordance with the follo[...]",
53
- "domain": "dannet",
54
- "metadata": {
55
- "source-pretty": "DanNet (Danish WordNet)"
56
- }
57
- }
58
- ```
59
-
60
- ### Data Fields
61
-
62
- An entry in the dataset consists of the following fields:
63
-
64
- - `text`(`str`): The content of the document.
65
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
66
- - `id` (`str`): An unique identifier for each document.
67
- - `added` (`str`): An date for when the document was added to this collection.
68
- - `created` (`str`): An date range for when the document was originally created.
69
- - `license` (`str`): The license of the document. The licenses vary according to the source.
70
- - `domain` (`str`): The domain of the source
71
- - `metadata/source-pretty` (`str`): The long form version of the short-form source name
72
- - `metadata/*`: Potentially additional metadata
73
  <!-- END-SAMPLE -->
74
 
75
 
76
-
77
  ## License Information
78
  <details>
79
  <summary>DanNet 1.0 License</summary>
@@ -121,14 +91,6 @@ DanNet 2.1 Copyright 2009-12 by University of Copenhagen and Society for Danish
121
  </details>
122
 
123
 
124
- ### Dataset Statistics
125
-
126
- <!-- START-DATASET PLOTS -->
127
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
128
- <img>
129
- <!-- END-DATASET PLOTS -->
130
-
131
-
132
  ## Additional Information
133
 
134
 
 
27
 
28
  ## Dataset Description
29
 
 
30
  <!-- START-DESC-STATS -->
31
+
32
  - **Language**: dan, dansk, Danish
33
  - **Number of samples**: 49.04K
34
  - **Number of tokens (Llama 3)**: 1.52M
35
  - **Average document length (characters)**: 90.80
 
36
 
37
+ <!-- END-DESC-STATS -->
38
 
39
 
40
+ ## Dataset Structure
41
  An example from the dataset looks as follows.
42
 
 
43
  <!-- START-SAMPLE -->
 
 
 
44
  <!-- END-SAMPLE -->
45
 
46
 
 
47
  ## License Information
48
  <details>
49
  <summary>DanNet 1.0 License</summary>
 
91
  </details>
92
 
93
 
 
 
 
94
  ## Additional Information
95
 
96
 
data/dannet/descriptive_stats.json CHANGED
@@ -1 +1 @@
- {"number_of_samples": 49040, "average_document_length": 90.80340538336053, "number_of_tokens": 1523416, "language": "dan, dansk, Danish", "revision": "6a88cbd06a598259a4879ee118c8ab1843c500ff"}
 
+ {"number_of_samples": 49040, "average_document_length": 90.80340538336053, "number_of_tokens": 1523416, "language": "dan, dansk, Danish", "revision": "ab78b9132d5697343896be76ff8f99a6b544b74b"}
data/dannet/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: 91dca32a1fd83b3699bb8ebae083dc697a0dac4b703ada720381448216ea0117
  • Pointer size: 131 Bytes
  • Size of remote file: 538 kB
data/depbank/depbank.md CHANGED
@@ -28,61 +28,23 @@ While the dataset was initially intended as a rich annotation, this corpora only
28
 
29
  ## Dataset Description
30
 
31
-
32
  <!-- START-DESC-STATS -->
 
33
  - **Language**: dan, dansk, Danish
34
  - **Number of samples**: 536
35
  - **Number of tokens (Llama 3)**: 185.45K
36
  - **Average document length (characters)**: 1018.90
37
- <!-- END-DESC-STATS -->
38
 
 
39
 
40
 
41
- ## Dataset Structure
42
  An example from the dataset looks as follows.
43
 
44
-
45
  <!-- START-SAMPLE -->
46
- ```py
47
- {
48
- "text": "\nH.L. Hansen var en usædvanmlig og frodig personlighed. Han skabte \nglæde og munterhed omkring sig o[...]",
49
- "source": "depbank",
50
- "id": "depbank_0375",
51
- "added": "2024-05-16",
52
- "created": "2000-01-01, 2022-01-01",
53
- "license": "Attribution-ShareAlike 4.0 International",
54
- "domain": "Other",
55
- "metadata": {
56
- "source-pretty": "Danish Dependency Treebank"
57
- }
58
- }
59
- ```
60
-
61
- ### Data Fields
62
-
63
- An entry in the dataset consists of the following fields:
64
-
65
- - `text`(`str`): The content of the document.
66
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
67
- - `id` (`str`): An unique identifier for each document.
68
- - `added` (`str`): An date for when the document was added to this collection.
69
- - `created` (`str`): An date range for when the document was originally created.
70
- - `license` (`str`): The license of the document. The licenses vary according to the source.
71
- - `domain` (`str`): The domain of the source
72
- - `metadata/source-pretty` (`str`): The long form version of the short-form source name
73
- - `metadata/*`: Potentially additional metadata
74
  <!-- END-SAMPLE -->
75
 
76
 
77
- ### Dataset Statistics
78
-
79
- <!-- START-DATASET PLOTS -->
80
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
81
- <img>
82
- <!-- END-DATASET PLOTS -->
83
-
84
-
85
-
86
  ## Additional Information
87
 
88
 
 
28
 
29
  ## Dataset Description
30
 
 
31
  <!-- START-DESC-STATS -->
32
+
33
  - **Language**: dan, dansk, Danish
34
  - **Number of samples**: 536
35
  - **Number of tokens (Llama 3)**: 185.45K
36
  - **Average document length (characters)**: 1018.90
 
37
 
38
+ <!-- END-DESC-STATS -->
39
 
40
 
41
+ ## Dataset Structure
42
  An example from the dataset looks as follows.
43
 
 
44
  <!-- START-SAMPLE -->
 
 
 
45
  <!-- END-SAMPLE -->
46
 
47
 
 
 
 
48
  ## Additional Information
49
 
50
 
data/depbank/descriptive_stats.json CHANGED
@@ -1 +1 @@
- {"number_of_samples": 536, "average_document_length": 1018.8992537313433, "number_of_tokens": 185454, "language": "dan, dansk, Danish", "revision": "6a88cbd06a598259a4879ee118c8ab1843c500ff"}
 
+ {"number_of_samples": 536, "average_document_length": 1018.8992537313433, "number_of_tokens": 185454, "language": "dan, dansk, Danish", "revision": "ab78b9132d5697343896be76ff8f99a6b544b74b"}
data/depbank/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: b23e81411e3f3b86bbd3990cf2e59f4a08f7dae10b908cf3101487069c0296bc
  • Pointer size: 131 Bytes
  • Size of remote file: 547 kB
data/ep/descriptive_stats.json CHANGED
@@ -1 +1 @@
- {"number_of_samples": 4213, "average_document_length": 74063.40469973891, "number_of_tokens": 100888932, "language": "dan, dansk, Danish", "revision": "6a88cbd06a598259a4879ee118c8ab1843c500ff"}
 
+ {"number_of_samples": 4213, "average_document_length": 74063.40469973891, "number_of_tokens": 100888932, "language": "dan, dansk, Danish", "revision": "ab78b9132d5697343896be76ff8f99a6b544b74b"}
data/ep/ep.md CHANGED
@@ -27,58 +27,22 @@ The europarl is a corpus of parallel text in 11 languages from the proceedings o
27
 
28
  ## Dataset Description
29
 
30
-
31
  <!-- START-DESC-STATS -->
 
32
  - **Language**: dan, dansk, Danish
33
  - **Number of samples**: 4.21K
34
  - **Number of tokens (Llama 3)**: 100.89M
35
  - **Average document length (characters)**: 74063.40
36
- <!-- END-DESC-STATS -->
37
 
 
38
 
39
 
40
- ## Dataset Structure
41
  An example from the dataset looks as follows.
42
 
43
-
44
  <!-- START-SAMPLE -->
45
- ```py
46
- {
47
- "text": "TALER 6703: Jeg har stemt for henstillingen om godkendelse af opdelingsanordninger til beskyttelse a[...]",
48
- "source": "ep",
49
- "id": "ep_07-02-01-008",
50
- "added": "2019-11-20",
51
- "created": "2004-01-01, 2009-01-01",
52
- "license": "Creative Commons Legal Code\n\nCC0 1.0 Universal",
53
- "domain": "Conversation",
54
- "metadata": {
55
- "source-pretty": "European Parliament"
56
- }
57
- }
58
- ```
59
-
60
- ### Data Fields
61
-
62
- An entry in the dataset consists of the following fields:
63
-
64
- - `text`(`str`): The content of the document.
65
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
66
- - `id` (`str`): An unique identifier for each document.
67
- - `added` (`str`): An date for when the document was added to this collection.
68
- - `created` (`str`): An date range for when the document was originally created.
69
- - `license` (`str`): The license of the document. The licenses vary according to the source.
70
- - `domain` (`str`): The domain of the source
71
- - `metadata/source-pretty` (`str`): The long form version of the short-form source name
72
- - `metadata/*`: Potentially additional metadata
73
  <!-- END-SAMPLE -->
74
 
75
- ### Dataset Statistics
76
-
77
- <!-- START-DATASET PLOTS -->
78
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
79
- <img>
80
- <!-- END-DATASET PLOTS -->
81
-
82
 
83
 
84
  ## Additional Information
 
27
 
28
  ## Dataset Description
29
 
 
30
  <!-- START-DESC-STATS -->
31
+
32
  - **Language**: dan, dansk, Danish
33
  - **Number of samples**: 4.21K
34
  - **Number of tokens (Llama 3)**: 100.89M
35
  - **Average document length (characters)**: 74063.40
 
36
 
37
+ <!-- END-DESC-STATS -->
38
 
39
 
40
+ ## Dataset Structure
41
  An example from the dataset looks as follows.
42
 
 
43
  <!-- START-SAMPLE -->
 
 
 
44
  <!-- END-SAMPLE -->
45
 
 
 
 
46
 
47
 
48
  ## Additional Information
data/ep/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: 8914d9fad81bcbc519c29b7c258a256d4eb7084ed8ff9c9100a93ad87fbb4171
  • Pointer size: 131 Bytes
  • Size of remote file: 545 kB
data/ft/descriptive_stats.json CHANGED
@@ -1 +1 @@
- {"number_of_samples": 1315, "average_document_length": 266745.19163498096, "number_of_tokens": 114087231, "language": "dan, dansk, Danish", "revision": "6a88cbd06a598259a4879ee118c8ab1843c500ff"}
 
+ {"number_of_samples": 1315, "average_document_length": 266745.19163498096, "number_of_tokens": 114087231, "language": "dan, dansk, Danish", "revision": "ab78b9132d5697343896be76ff8f99a6b544b74b"}
data/ft/ft.md CHANGED
@@ -20,7 +20,7 @@ source_datasets:
20
  ## Dataset Description
21
 
22
  <!-- START-SHORT DESCRIPTION -->
23
- Records from all meetings of The Danish parliament (Folketinget) in the parliament hall.
24
  <!-- END-SHORT DESCRIPTION -->
25
 
26
 
@@ -28,60 +28,23 @@ All records have a transcript produced by commercial Automatic Speech Recognitio
28
 
29
  In the parliament hall, one speaker at a time addresses members of the parliament. Monologues may include rebuttals or other comments to statements in previous monologues. While speakers can read aloud from a prepared statement or speak extemporaneously, we expect no difference to be apparent in the data because of the post-editing. The Folketinget section covers parliament hall sessions between 2009 and 2019. It contains discussions on a wide range of topics, issues, and named entities relevant to Danish society.
30
 
31
-
32
  <!-- START-DESC-STATS -->
 
33
  - **Language**: dan, dansk, Danish
34
  - **Number of samples**: 1.31K
35
  - **Number of tokens (Llama 3)**: 114.09M
36
  - **Average document length (characters)**: 266745.19
37
- <!-- END-DESC-STATS -->
38
 
 
39
 
40
 
41
- ## Dataset Structure
42
  An example from the dataset looks as follows.
43
 
44
-
45
  <!-- START-SAMPLE -->
46
- ```py
47
- {
48
- "text": "TALER 50: Mødet er åbnet. I dag er der følgende anmeldelser: Ministeren for by, bolig og landdistrik[...]",
49
- "source": "ft",
50
- "id": "ft_20121M100",
51
- "added": "2021-03-28",
52
- "created": "2009-01-01, 2019-01-01",
53
- "license": "Creative Commons Legal Code\n\nCC0 1.0 Universal",
54
- "domain": "Conversation",
55
- "metadata": {
56
- "source-pretty": "Folketinget (Danish Parliament)"
57
- }
58
- }
59
- ```
60
-
61
- ### Data Fields
62
-
63
- An entry in the dataset consists of the following fields:
64
-
65
- - `text`(`str`): The content of the document.
66
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
67
- - `id` (`str`): An unique identifier for each document.
68
- - `added` (`str`): An date for when the document was added to this collection.
69
- - `created` (`str`): An date range for when the document was originally created.
70
- - `license` (`str`): The license of the document. The licenses vary according to the source.
71
- - `domain` (`str`): The domain of the source
72
- - `metadata/source-pretty` (`str`): The long form version of the short-form source name
73
- - `metadata/*`: Potentially additional metadata
74
  <!-- END-SAMPLE -->
75
 
76
 
77
- ### Dataset Statistics
78
-
79
- <!-- START-DATASET PLOTS -->
80
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
81
- <img>
82
- <!-- END-DATASET PLOTS -->
83
-
84
-
85
  ## Additional Information
86
 
87
 
 
20
  ## Dataset Description
21
 
22
  <!-- START-SHORT DESCRIPTION -->
23
+ This dataset consists of records from all meetings of The Danish parliament (Folketinget) in the parliament hall.
24
  <!-- END-SHORT DESCRIPTION -->
25
 
26
 
 
28
 
29
  In the parliament hall, one speaker at a time addresses members of the parliament. Monologues may include rebuttals or other comments to statements in previous monologues. While speakers can read aloud from a prepared statement or speak extemporaneously, we expect no difference to be apparent in the data because of the post-editing. The Folketinget section covers parliament hall sessions between 2009 and 2019. It contains discussions on a wide range of topics, issues, and named entities relevant to Danish society.
30
 
 
31
  <!-- START-DESC-STATS -->
32
+
33
  - **Language**: dan, dansk, Danish
34
  - **Number of samples**: 1.31K
35
  - **Number of tokens (Llama 3)**: 114.09M
36
  - **Average document length (characters)**: 266745.19
 
37
 
38
+ <!-- END-DESC-STATS -->
39
 
40
 
41
+ ## Dataset Structure
42
  An example from the dataset looks as follows.
43
 
 
44
  <!-- START-SAMPLE -->
 
 
 
45
  <!-- END-SAMPLE -->
46
 
47
 
 
 
 
48
  ## Additional Information
49
 
50
 
data/ft/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: e16a1a9de4f1ef8fedd3e85035287a813d5980b25b40b09c54462671eaebcd81
  • Pointer size: 131 Bytes
  • Size of remote file: 550 kB
data/gutenberg/descriptive_stats.json CHANGED
@@ -1 +1 @@
- {"number_of_samples": 66, "average_document_length": 290147.9393939394, "number_of_tokens": 6763317, "language": "dan, dansk, Danish", "revision": "6a88cbd06a598259a4879ee118c8ab1843c500ff"}
 
+ {"number_of_samples": 66, "average_document_length": 290147.9393939394, "number_of_tokens": 6763317, "language": "dan, dansk, Danish", "revision": "ab78b9132d5697343896be76ff8f99a6b544b74b"}
data/gutenberg/gutenberg.md CHANGED
@@ -26,53 +26,23 @@ The Danish subsection from Project [Gutenberg](https://www.gutenberg.org).
26
 
27
  Project Gutenberg is an online library of free eBooks. Project Gutenberg was the first provider of free electronic books, or eBooks.
28
 
29
-
30
  <!-- START-DESC-STATS -->
 
31
  - **Language**: dan, dansk, Danish
32
  - **Number of samples**: 66
33
  - **Number of tokens (Llama 3)**: 6.76M
34
  - **Average document length (characters)**: 290147.94
35
- <!-- END-DESC-STATS -->
36
 
 
37
 
38
 
39
- ## Dataset Structure
40
  An example from the dataset looks as follows.
41
 
42
-
43
  <!-- START-SAMPLE -->
44
- ```py
45
- {
46
- "text": "Afskriverens bemærkninger: Åbenlyse trykfejl er rettet\ni denne e-bog, men forfatterens stavning er f[...]",
47
- "source": "gutenberg",
48
- "id": "gutenberg_43899",
49
- "added": "2020-09-12",
50
- "created": "1700-01-01, 2022-01-01",
51
- "license": "*** START: FULL LICENSE ***\n\nTHE FULL PROJECT GUTENBERG LICENSE\nPLEASE READ THIS BEFORE YOU DISTRIBU[...]",
52
- "domain": "Wiki & Books",
53
- "metadata": {
54
- "source-pretty": "Gutenberg"
55
- }
56
- }
57
- ```
58
-
59
- ### Data Fields
60
-
61
- An entry in the dataset consists of the following fields:
62
-
63
- - `text`(`str`): The content of the document.
64
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
65
- - `id` (`str`): An unique identifier for each document.
66
- - `added` (`str`): An date for when the document was added to this collection.
67
- - `created` (`str`): An date range for when the document was originally created.
68
- - `license` (`str`): The license of the document. The licenses vary according to the source.
69
- - `domain` (`str`): The domain of the source
70
- - `metadata/source-pretty` (`str`): The long form version of the short-form source name
71
- - `metadata/*`: Potentially additional metadata
72
  <!-- END-SAMPLE -->
73
 
74
 
75
-
76
  ## License Information
77
 
78
  <details>
@@ -410,14 +380,6 @@ subscribe to our email newsletter to hear about new eBooks.
410
  </details>
411
 
412
 
413
- ### Dataset Statistics
414
-
415
- <!-- START-DATASET PLOTS -->
416
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
417
- <img>
418
- <!-- END-DATASET PLOTS -->
419
-
420
-
421
 
422
  ## Additional Information
423
 
 
26
 
27
  Project Gutenberg is an online library of free eBooks. Project Gutenberg was the first provider of free electronic books, or eBooks.
28
 
 
29
  <!-- START-DESC-STATS -->
30
+
31
  - **Language**: dan, dansk, Danish
32
  - **Number of samples**: 66
33
  - **Number of tokens (Llama 3)**: 6.76M
34
  - **Average document length (characters)**: 290147.94
 
35
 
36
+ <!-- END-DESC-STATS -->
37
 
38
 
39
+ ## Dataset Structure
40
  An example from the dataset looks as follows.
41
 
 
42
  <!-- START-SAMPLE -->
 
 
 
43
  <!-- END-SAMPLE -->
44
 
45
 
 
46
  ## License Information
47
 
48
  <details>
 
380
  </details>
381
 
382
 
 
 
 
383
 
384
  ## Additional Information
385
 
data/gutenberg/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: 7211ebb972796ee921e5c9d19cc8a266cc42ccab560d1701464ff2a865268116
  • Pointer size: 131 Bytes
  • Size of remote file: 539 kB
data/hest/descriptive_stats.json CHANGED
@@ -1 +1 @@
- {"number_of_samples": 14391, "average_document_length": 82950.79104996179, "number_of_tokens": 389325153, "language": "dan, dansk, Danish", "revision": "6a88cbd06a598259a4879ee118c8ab1843c500ff"}
 
+ {"number_of_samples": 14391, "average_document_length": 82950.79104996179, "number_of_tokens": 389325153, "language": "dan, dansk, Danish", "revision": "ab78b9132d5697343896be76ff8f99a6b544b74b"}
data/hest/hest.md CHANGED
@@ -28,60 +28,22 @@ Its inclusion as training data for large language models have multiple times rea
28
 
29
  ## Dataset Description
30
 
31
-
32
  <!-- START-DESC-STATS -->
 
33
  - **Language**: dan, dansk, Danish
34
  - **Number of samples**: 14.39K
35
  - **Number of tokens (Llama 3)**: 389.33M
36
  - **Average document length (characters)**: 82950.79
37
- <!-- END-DESC-STATS -->
38
 
 
39
 
40
 
41
- ## Dataset Structure
42
  An example from the dataset looks as follows.
43
 
44
-
45
  <!-- START-SAMPLE -->
46
- ```py
47
- {
48
- "text": "Er den ikke kær? \nJeg kan ikke forstå at der altid er nogle der åbenbart ser alle indlæg her på HN ,[...]",
49
- "source": "hest",
50
- "id": "hest_forum112802271280227_0",
51
- "added": "2020-10-05",
52
- "created": "2000-01-01, 2022-01-01",
53
- "license": "Creative Commons Legal Code\n\nCC0 1.0 Universal",
54
- "domain": "Social Media",
55
- "metadata": {
56
- "source-pretty": "Hestenettet (Danish debate forum)"
57
- }
58
- }
59
- ```
60
-
61
- ### Data Fields
62
-
63
- An entry in the dataset consists of the following fields:
64
-
65
- - `text`(`str`): The content of the document.
66
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
67
- - `id` (`str`): An unique identifier for each document.
68
- - `added` (`str`): An date for when the document was added to this collection.
69
- - `created` (`str`): An date range for when the document was originally created.
70
- - `license` (`str`): The license of the document. The licenses vary according to the source.
71
- - `domain` (`str`): The domain of the source
72
- - `metadata/source-pretty` (`str`): The long form version of the short-form source name
73
- - `metadata/*`: Potentially additional metadata
74
  <!-- END-SAMPLE -->
75
 
76
-
77
- ### Dataset Statistics
78
-
79
- <!-- START-DATASET PLOTS -->
80
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
81
- <img>
82
- <!-- END-DATASET PLOTS -->
83
-
84
-
85
  ## Additional Information
86
 
87
 
 
28
 
29
  ## Dataset Description
30
 
 
31
  <!-- START-DESC-STATS -->
32
+
33
  - **Language**: dan, dansk, Danish
34
  - **Number of samples**: 14.39K
35
  - **Number of tokens (Llama 3)**: 389.33M
36
  - **Average document length (characters)**: 82950.79
 
37
 
38
+ <!-- END-DESC-STATS -->
39
 
40
 
41
+ ## Dataset Structure
42
  An example from the dataset looks as follows.
43
 
 
44
  <!-- START-SAMPLE -->
 
 
 
45
  <!-- END-SAMPLE -->
46
 
 
 
 
47
  ## Additional Information
48
 
49
 
data/hest/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: 721ef6123a43f89bca03351e7a6459d6e40906024bcd2bc9e0a1fa377c37d60b
  • Pointer size: 131 Bytes
  • Size of remote file: 545 kB
data/jvj/descriptive_stats.json CHANGED
@@ -1 +1 @@
- {"number_of_samples": 42, "average_document_length": 254893.66666666666, "number_of_tokens": 3549181, "language": "dan, dansk, Danish", "revision": "6a88cbd06a598259a4879ee118c8ab1843c500ff"}
 
+ {"number_of_samples": 42, "average_document_length": 254893.66666666666, "number_of_tokens": 3549181, "language": "dan, dansk, Danish", "revision": "ab78b9132d5697343896be76ff8f99a6b544b74b"}
data/jvj/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: 842b2aff42b3efabe2ec7dd425a9b41f836ca21f1f6332561dcc90e6bb7db62e
  • Pointer size: 131 Bytes
  • Size of remote file: 534 kB
data/jvj/jvj.md CHANGED
@@ -28,60 +28,23 @@ The works of the Danish author and poet, [Johannes V. Jensen](https://da.wikiped
28
 
29
  ## Dataset Description
30
 
31
-
32
  <!-- START-DESC-STATS -->
 
33
  - **Language**: dan, dansk, Danish
34
  - **Number of samples**: 42
35
  - **Number of tokens (Llama 3)**: 3.55M
36
  - **Average document length (characters)**: 254893.67
37
- <!-- END-DESC-STATS -->
38
 
 
39
 
40
 
41
- ## Dataset Structure
42
  An example from the dataset looks as follows.
43
 
44
-
45
  <!-- START-SAMPLE -->
46
- ```py
47
- {
48
- "text": "JØRGINE JØRGINE KØBENHAVN HAGE & CLAUSENS FORLAG (J. FR. CLAUSEN) 1926 JOHANNES V. JENSEN COPYRIGHT [...]",
49
- "source": "jvj",
50
- "id": "jvj_Jørgine",
51
- "added": "2020-06-26",
52
- "created": "1873-01-01, 1951-01-01",
53
- "license": "Attribution-ShareAlike 4.0 International",
54
- "domain": "Wiki & Books",
55
- "metadata": {
56
- "source-pretty": "Johannes V. Jensen (Danish poet)"
57
- }
58
- }
59
- ```
60
-
61
- ### Data Fields
62
-
63
- An entry in the dataset consists of the following fields:
64
-
65
- - `text`(`str`): The content of the document.
66
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
67
- - `id` (`str`): An unique identifier for each document.
68
- - `added` (`str`): An date for when the document was added to this collection.
69
- - `created` (`str`): An date range for when the document was originally created.
70
- - `license` (`str`): The license of the document. The licenses vary according to the source.
71
- - `domain` (`str`): The domain of the source
72
- - `metadata/source-pretty` (`str`): The long form version of the short-form source name
73
- - `metadata/*`: Potentially additional metadata
74
  <!-- END-SAMPLE -->
75
 
76
 
77
- ### Dataset Statistics
78
-
79
- <!-- START-DATASET PLOTS -->
80
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
81
- <img>
82
- <!-- END-DATASET PLOTS -->
83
-
84
-
85
  ## Additional Information
86
 
87
 
 
28
 
29
  ## Dataset Description
30
 
 
31
  <!-- START-DESC-STATS -->
32
+
33
  - **Language**: dan, dansk, Danish
34
  - **Number of samples**: 42
35
  - **Number of tokens (Llama 3)**: 3.55M
36
  - **Average document length (characters)**: 254893.67
 
37
 
38
+ <!-- END-DESC-STATS -->
39
 
40
 
41
+ ## Dataset Structure
42
  An example from the dataset looks as follows.
43
 
 
44
  <!-- START-SAMPLE -->
 
 
 
45
  <!-- END-SAMPLE -->
46
 
47
 
 
 
 
48
  ## Additional Information
49
 
50
 
data/lexdk/create.py DELETED
@@ -1,78 +0,0 @@
1
- """download lexdk from alexandrainst/lexdk-open"""
2
-
3
- from datetime import datetime
4
- from pathlib import Path
5
- from typing import cast
6
-
7
- import pandas as pd
8
- from datasets import Dataset, load_dataset
9
-
10
- column_order = [
11
- "text",
12
- "source",
13
- "id",
14
- "added",
15
- "created",
16
- "license",
17
- "domain",
18
- "metadata",
19
- ]
20
-
21
-
22
- def convert_sample(example: dict) -> dict:
23
- # from sample:
24
- # {
25
- # "url": "https://denstoredanske.lex.dk/Kullmanns_M%C3%B8lle",
26
- # "title": "Kullmanns Mølle",
27
- # "clarification": "",
28
- # "authors": ["https://brugere.lex.dk/6929"],
29
- # "date": "2021-01-20T13:23:20+01:00",
30
- # "license": "fri anvendelse",
31
- # "text": "Kullmanns Mølle er en mølle i Gudhjem, opkaldt efter Matts Kullmann, der byggede møllen i 1893 til sin søn, Christian Kullmann, se Gudhjem Mølle.",
32
- # }
33
- date = datetime.fromisoformat(example["date"])
34
- text = f"{example["title"]}\n\npubliceret: {date}\n{example["text"]}"
35
-
36
- new_example = dict(
37
- text_new=text,
38
- id=example["url"],
39
- source="lexdk",
40
- domain="Conversation",
41
- license="cc-by-sa-4.0",
42
- added="2025-01-04",
43
- created=f"{date.date()}, {date.date()}",
44
- metadata={"source-pretty": "Lex.dk"},
45
- )
46
-
47
- return new_example
48
-
49
-
50
- def main():
51
- ds = load_dataset("alexandrainst/lexdk-open", split="train")
52
- ds = cast(Dataset, ds)
53
-
54
- dates = [datetime.fromisoformat(date).date() for date in ds["date"]]
55
- print(str(min(dates)), ",", str(max(dates))) # 2009-01-28, 2023-09-05
56
-
57
- assert len(set(ds["url"])) == len(ds)
58
-
59
- ds = ds.map(convert_sample, num_proc=4)
60
- ds = ds.select_columns(column_order[1:] + ["text_new"])
61
- ds = ds.rename_columns({"text_new": "text"})
62
- # ensure order
63
- ds = ds.select_columns(column_order)
64
-
65
- df = ds.to_pandas()
66
- df = cast(pd.DataFrame, df)
67
- dedup_df = df.drop_duplicates(keep="first", subset=["text"])
68
- print("N. duplicates: ", df.shape[0] - dedup_df.shape[0]) # 0
69
-
70
- ds = ds.select(dedup_df.index)
71
- assert len(set(ds["text"])) == len(ds)
72
-
73
- save_path = Path(__file__).parent / "lexdk.parquet"
74
- ds.to_parquet(save_path)
75
-
76
-
77
- if __name__ == "__main__":
78
- main()
data/lexdk/descriptive_stats.json DELETED
@@ -1 +0,0 @@
- {"number_of_samples": 11887, "average_document_length": 1405.6435601918063, "number_of_tokens": 5688613, "language": "dan, dansk, Danish", "revision": "6a88cbd06a598259a4879ee118c8ab1843c500ff"}
data/lexdk/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: 9aead97c97d52f9b4b9fced8eea7827d764a6a91f2af23ddc4e90607d23c0076
  • Pointer size: 131 Bytes
  • Size of remote file: 552 kB
data/lexdk/lexdk.md DELETED
@@ -1,85 +0,0 @@
1
- ---
2
- pretty_name: OpenSubtitles
3
- language:
4
- - da
5
- license: cc-by-sa-4.0
6
- license_name: CC-BY-SA 4.0
7
- task_categories:
8
- - text-generation
9
- - fill-mask
10
- task_ids:
11
- - language-modeling
12
- source_datasets:
13
- - alexandrainst/lexdk-open
14
- ---
15
-
16
- # Dataset Card for OpenSubtitles
17
-
18
- <!-- START-SHORT DESCRIPTION -->
19
- Permissible use articles from [lex.dk](https://lex.dk).
20
- <!-- END-SHORT DESCRIPTION -->
21
-
22
- Lex.dk is a Danish online encyclopedia platform providing access to reliable and authoritative knowledge on a wide range of topics. It is created and curated by experts, ensuring high-quality, accurate content. The platform serves as a central hub for general and specialized information in Danish, making it a valuable resource for education, research, and general learning.
23
-
24
-
25
-
26
-
27
- ## Dataset Description
28
-
29
- <!-- START-DESC-STATS -->
30
- - **Language**: dan, dansk, Danish
31
- - **Number of samples**: 11.89K
32
- - **Number of tokens (Llama 3)**: 5.69M
33
- - **Average document length (characters)**: 1405.64
34
- <!-- END-DESC-STATS -->
35
-
36
-
37
- ## Dataset Structure
38
- An example from the dataset looks as follows.
39
-
40
- <!-- START-SAMPLE -->
41
- ```py
42
- {
43
- "text": "Oluf Høst Museet\n\npubliceret: 2014-04-23 03:42:33+02:00\nOluf Høst Museet, kunstmuseum i Gudhjem, Bor[...]",
44
- "source": "lexdk",
45
- "id": "https://denstoredanske.lex.dk/Oluf_H%C3%B8st_Museet",
46
- "added": "2025-01-04",
47
- "created": "2014-04-23, 2014-04-23",
48
- "license": "cc-by-sa-4.0",
49
- "domain": "Conversation",
50
- "metadata": {
51
- "source-pretty": "Lex.dk"
52
- }
53
- }
54
- ```
55
-
56
- ### Data Fields
57
-
58
- An entry in the dataset consists of the following fields:
59
-
60
- - `text`(`str`): The content of the document.
61
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
62
- - `id` (`str`): An unique identifier for each document.
63
- - `added` (`str`): An date for when the document was added to this collection.
64
- - `created` (`str`): An date range for when the document was originally created.
65
- - `license` (`str`): The license of the document. The licenses vary according to the source.
66
- - `domain` (`str`): The domain of the source
67
- - `metadata/source-pretty` (`str`): The long form version of the short-form source name
68
- - `metadata/*`: Potentially additional metadata
69
- <!-- END-SAMPLE -->
70
-
71
-
72
- ### Dataset Statistics
73
-
74
- <!-- START-DATASET PLOTS -->
75
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
76
- <img>
77
- <!-- END-DATASET PLOTS -->
78
-
79
-
80
- ## Additional Information
81
-
82
-
83
- ### Citation Information
84
-
85
- This dataset is derived from the publicly availabe dataset [alexandrainst/lexdk-open](https://huggingface.co/datasets/alexandrainst/lexdk-open).
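With the processed `lexdk` subset removed from this repo, documents in the same layout can still be assembled from the upstream dataset named above. A hedged sketch that mirrors the title/date/body formatting used in the deleted `data/lexdk/create.py`:

```py
# Sketch: rebuild lexdk-style documents from alexandrainst/lexdk-open,
# following the text layout of the deleted data/lexdk/create.py.
from datetime import datetime
from datasets import load_dataset

ds = load_dataset("alexandrainst/lexdk-open", split="train")

def to_document(example: dict) -> dict:
    date = datetime.fromisoformat(example["date"])
    return {"text": f"{example['title']}\n\npubliceret: {date}\n{example['text']}"}

docs = ds.map(to_document)
print(docs[0]["text"][:100])
```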
data/lexdk/lexdk.parquet DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:5c4779881f575d6f612c8603ed4896f10ebc7293c59637fa8a0773ee4545fce3
- size 10007743
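The deleted parquet is only a Git LFS pointer in the diff; the underlying file normally remains reachable at earlier revisions of the dataset repo. A sketch, assuming the revision hash recorded in the old `descriptive_stats.json` files is still available on the Hub:

```py
# Sketch: fetch the removed parquet from a pre-removal revision of the dataset repo.
# The revision hash is the one recorded in the old descriptive_stats.json files (an assumption
# that this commit is still resolvable on the Hub).
import pandas as pd
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="danish-foundation-models/danish-dynaword",
    filename="data/lexdk/lexdk.parquet",
    repo_type="dataset",
    revision="6a88cbd06a598259a4879ee118c8ab1843c500ff",
)
df = pd.read_parquet(path)
print(df.columns.tolist())
```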
data/naat/descriptive_stats.json CHANGED
@@ -1 +1 @@
- {"number_of_samples": 129, "average_document_length": 6832.387596899225, "number_of_tokens": 286677, "language": "dan, dansk, Danish", "revision": "6a88cbd06a598259a4879ee118c8ab1843c500ff"}
+ {"number_of_samples": 129, "average_document_length": 6832.387596899225, "number_of_tokens": 286677, "language": "dan, dansk, Danish", "revision": "ab78b9132d5697343896be76ff8f99a6b544b74b"}
data/naat/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: e4f14416631cbf0b8a6fe2dc260e6be69155313af1f93c94bd435a60413e4836
  • Pointer size: 131 Bytes
  • Size of remote file: 537 kB
data/naat/naat.md CHANGED
@@ -18,65 +18,28 @@ source_datasets:
  # Dataset Card for NAAT

  <!-- START-SHORT DESCRIPTION -->
- Danish speeches from 1930-2022.
+ A dataset of Danish speeches from 1930-2022.
  <!-- END-SHORT DESCRIPTION -->


  ## Dataset Description

-
  <!-- START-DESC-STATS -->
+
  - **Language**: dan, dansk, Danish
  - **Number of samples**: 129
  - **Number of tokens (Llama 3)**: 286.68K
  - **Average document length (characters)**: 6832.39
- <!-- END-DESC-STATS -->

+ <!-- END-DESC-STATS -->


- ## Dataset Structure
+ ## Dataset Sturcture
  An example from the dataset looks as follows.

-
  <!-- START-SAMPLE -->
- ```py
- {
- "text": "Naar jeg i aften sender min nytaarshilsen til det danske folk og tænker tilbage paa det aar, der sva[...]",
- "source": "naat",
- "id": "naat_1958kongfrederikix",
- "added": "2020-02-11",
- "created": "1930-01-01, 2022-01-01",
- "license": "Creative Commons Legal Code\n\nCC0 1.0 Universal",
- "domain": "Conversation",
- "metadata": {
- "source-pretty": "NAAT"
- }
- }
- ```
-
- ### Data Fields
-
- An entry in the dataset consists of the following fields:
-
- - `text`(`str`): The content of the document.
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
- - `id` (`str`): An unique identifier for each document.
- - `added` (`str`): An date for when the document was added to this collection.
- - `created` (`str`): An date range for when the document was originally created.
- - `license` (`str`): The license of the document. The licenses vary according to the source.
- - `domain` (`str`): The domain of the source
- - `metadata/source-pretty` (`str`): The long form version of the short-form source name
- - `metadata/*`: Potentially additional metadata
  <!-- END-SAMPLE -->

- ### Dataset Statistics
-
- <!-- START-DATASET PLOTS -->
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
- <img>
- <!-- END-DATASET PLOTS -->
-
-
  ## Additional Information

data/nordjyllandnews/descriptive_stats.json CHANGED
@@ -1 +1 @@
- {"number_of_samples": 75219, "average_document_length": 1540.2673659580691, "number_of_tokens": 37905944, "language": "dan, dansk, Danish", "revision": "6a88cbd06a598259a4879ee118c8ab1843c500ff"}
+ {"number_of_samples": 75219, "average_document_length": 1540.2673659580691, "number_of_tokens": 37905944, "language": "dan, dansk, Danish", "revision": "ab78b9132d5697343896be76ff8f99a6b544b74b"}
data/nordjyllandnews/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: 96ed628dc507036a6b09c82a04b01fee2f79c78ece535f4890cb30db731525fb
  • Pointer size: 131 Bytes
  • Size of remote file: 560 kB
data/nordjyllandnews/nordjyllandnews.md CHANGED
@@ -26,60 +26,22 @@ The data is derived from the Huggingface dataset [alexandrainst/nordjylland-news

  ## Dataset Description

-
  <!-- START-DESC-STATS -->
+
  - **Language**: dan, dansk, Danish
  - **Number of samples**: 75.22K
  - **Number of tokens (Llama 3)**: 37.91M
  - **Average document length (characters)**: 1540.27
- <!-- END-DESC-STATS -->

+ <!-- END-DESC-STATS -->

- ## Dataset Structure
+ ## Dataset Sturcture
  An example from the dataset looks as follows.

-
  <!-- START-SAMPLE -->
- ```py
- {
- "text": "Lav et referat af nedenstående tekst:\n\nTekst:\nOpdatering: Manden er nu fundet af Nordjyllands Politi[...]",
- "source": "nordjyllandnews",
- "id": "nordjyllandnews_0",
- "added": "2024-12-16",
- "created": "2000-01-01, 2024-01-01",
- "license": "Creative Commons Legal Code\n\nCC0 1.0 Universal",
- "domain": "News",
- "metadata": {
- "source-pretty": "Nordjylland News"
- }
- }
- ```
-
- ### Data Fields
-
- An entry in the dataset consists of the following fields:
-
- - `text`(`str`): The content of the document.
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
- - `id` (`str`): An unique identifier for each document.
- - `added` (`str`): An date for when the document was added to this collection.
- - `created` (`str`): An date range for when the document was originally created.
- - `license` (`str`): The license of the document. The licenses vary according to the source.
- - `domain` (`str`): The domain of the source
- - `metadata/source-pretty` (`str`): The long form version of the short-form source name
- - `metadata/*`: Potentially additional metadata
  <!-- END-SAMPLE -->


- ### Dataset Statistics
-
- <!-- START-DATASET PLOTS -->
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
- <img>
- <!-- END-DATASET PLOTS -->
-
-
-
  ## Additional Information

data/opensubtitles/create.py DELETED
@@ -1,123 +0,0 @@
1
- from pathlib import Path
2
- from typing import cast
3
-
4
- import pandas as pd
5
- import spacy
6
- from datasets import Dataset, load_dataset
7
-
8
- # KCE: mail from Leon
9
- sample_to_redact = {
10
- # Der kommer en dag
11
- "opensub_6726481",
12
- "opensub_6732371",
13
- # Kollektivet
14
- "opensub_6645818",
15
- # Flaskepost fra P
16
- "opensub_6666922",
17
- "opensub_6720216",
18
- "opensub_6958711",
19
- # Fasandræberne
20
- "opensub_6036947",
21
- "opensub_6008622",
22
- # En du elsker
23
- "opensub_5828376",
24
- "opensub_5828378",
25
- # En chance til
26
- "opensub_6177523",
27
- # Lev stærkt
28
- "opensub_6467655",
29
- # Nymphomaniac
30
- "opensub_5604391",
31
- "opensub_5748340",
32
- "opensub_5748494",
33
- "opensub_5629516",
34
- # Kvinden i buret
35
- "opensub_5636248",
36
- "opensub_5514603",
37
- "opensub_5504932",
38
- # Den skaldede frisør
39
- "opensub_5084880",
40
- "opensub_5031826",
41
- # Jagten
42
- "opensub_6929419",
43
- "opensub_4885548",
44
- # Melancholia
45
- "opensub_4421330",
46
- "opensub_4406991",
47
- "opensub_4418817",
48
- # Ambassadøren
49
- "opensub_4557721",
50
- # Antichrist
51
- "opensub_5511502",
52
- "opensub_3938655",
53
- "opensub_3636940",
54
- "opensub_3564521",
55
- "opensub_3562215",
56
- # En kongelig affære
57
- "opensub_4725493",
58
- "opensub_4725160",
59
- "opensub_4725159",
60
- "opensub_4916871",
61
- "opensub_5186746",
62
- # Brødre
63
- "opensub_233943",
64
- "opensub_87475",
65
- }
66
-
67
- column_order = [
68
- "text",
69
- "source",
70
- "id",
71
- "added",
72
- "created",
73
- "license",
74
- "domain",
75
- "metadata",
76
- ]
77
-
78
-
79
- def convert_sample(example: dict) -> dict:
80
- text = example["text"]
81
- if example["doc_id"] in sample_to_redact:
82
- nlp = spacy.blank("da")
83
- doc = nlp(text)
84
- text = doc[:200].text # first 200 words
85
-
86
- new_example = dict(
87
- text_new=text,
88
- id=example["doc_id"],
89
- source="opensubtitles",
90
- domain="Conversation",
91
- license="Creative Commons Legal Code\n\nCC0 1.0 Universal",
92
- added="2025-01-02",
93
- created="1920-01-01, 2018-01-01", # assuming v2018
94
- metadata={"source-pretty": "OpenSubtitles"},
95
- )
96
-
97
- return new_example
98
-
99
-
100
- def main():
101
- ds = load_dataset("DDSC/partial-danish-gigaword-no-twitter", split="train")
102
- ds = cast(Dataset, ds)
103
- ds = ds.filter(lambda x: x["source"] == "opensub", num_proc=4)
104
- ds = ds.map(convert_sample, num_proc=4)
105
- ds = ds.select_columns(column_order[1:] + ["text_new"])
106
- ds = ds.rename_columns({"text_new": "text"})
107
- # ensure order
108
- ds = ds.select_columns(column_order)
109
-
110
- df = ds.to_pandas()
111
- df = cast(pd.DataFrame, df)
112
- dedup_df = df.drop_duplicates(keep="first", subset=["text"])
113
- print("N. duplicates: ", df.shape[0] - dedup_df.shape[0]) # 2422
114
-
115
- ds = ds.select(dedup_df.index)
116
- assert len(set(ds["text"])) == len(ds)
117
-
118
- save_path = Path(__file__).parent / "opensubtitles.parquet"
119
- ds.to_parquet(save_path)
120
-
121
-
122
- if __name__ == "__main__":
123
- main()
data/opensubtitles/descriptive_stats.json DELETED
@@ -1 +0,0 @@
- {"number_of_samples": 29820, "average_document_length": 26298.017572099263, "number_of_tokens": 271599443, "language": "dan, dansk, Danish", "revision": "6a88cbd06a598259a4879ee118c8ab1843c500ff"}
data/opensubtitles/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: cc0439ba8c58215d1cf1dcfa3dab4dd28c9f4d00065a44bba25757ee605f6425
  • Pointer size: 131 Bytes
  • Size of remote file: 550 kB
data/opensubtitles/opensubtitles.md DELETED
@@ -1,159 +0,0 @@
1
- ---
2
- pretty_name: OpenSubtitles
3
- language:
4
- - da
5
- license: cc0-1.0
6
- license_name: CC-0
7
- task_categories:
8
- - text-generation
9
- - fill-mask
10
- task_ids:
11
- - language-modeling
12
- source_datasets:
13
- - DDSC/partial-danish-gigaword-no-twitter
14
- ---
15
-
16
- # Dataset Card for OpenSubtitles
17
-
18
- <!-- START-SHORT DESCRIPTION -->
19
- Danish subsection of [OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles/corpus/version/OpenSubtitles).
20
- <!-- END-SHORT DESCRIPTION -->
21
-
22
-
23
- ## Dataset Description
24
-
25
- <!-- START-DESC-STATS -->
26
- - **Language**: dan, dansk, Danish
27
- - **Number of samples**: 29.82K
28
- - **Number of tokens (Llama 3)**: 271.60M
29
- - **Average document length (characters)**: 26298.02
30
- <!-- END-DESC-STATS -->
31
-
32
-
33
- ## Dataset Structure
34
- An example from the dataset looks as follows.
35
-
36
- <!-- START-SAMPLE -->
37
- ```py
38
- {
39
- "text": "Tidligere i vikingerne...\nJeg skal gå tilbage til England.\nBurde være gået tilbage for lang tid side[...]",
40
- "source": "opensubtitles",
41
- "id": "opensub_6822913",
42
- "added": "2025-01-02",
43
- "created": "1920-01-01, 2018-01-01",
44
- "license": "Creative Commons Legal Code\n\nCC0 1.0 Universal",
45
- "domain": "Conversation",
46
- "metadata": {
47
- "source-pretty": "OpenSubtitles"
48
- }
49
- }
50
- ```
51
-
52
- ### Data Fields
53
-
54
- An entry in the dataset consists of the following fields:
55
-
56
- - `text`(`str`): The content of the document.
57
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
58
- - `id` (`str`): An unique identifier for each document.
59
- - `added` (`str`): An date for when the document was added to this collection.
60
- - `created` (`str`): An date range for when the document was originally created.
61
- - `license` (`str`): The license of the document. The licenses vary according to the source.
62
- - `domain` (`str`): The domain of the source
63
- - `metadata/source-pretty` (`str`): The long form version of the short-form source name
64
- - `metadata/*`: Potentially additional metadata
65
- <!-- END-SAMPLE -->
66
-
67
-
68
- ### Additional Processing
69
-
70
- Due to copyright concern additional documents have been removed due to copyright concerns. These include:
71
-
72
- ```py
73
- {
74
- # Der kommer en dag
75
- "opensub_6726481",
76
- "opensub_6732371",
77
- # Kollektivet
78
- "opensub_6645818",
79
- # Flaskepost fra P
80
- "opensub_6666922",
81
- "opensub_6720216",
82
- "opensub_6958711",
83
- # Fasandræberne
84
- "opensub_6036947",
85
- "opensub_6008622",
86
- # En du elsker
87
- "opensub_5828376",
88
- "opensub_5828378",
89
- # En chance til
90
- "opensub_6177523",
91
- # Lev stærkt
92
- "opensub_6467655",
93
- # Nymphomaniac
94
- "opensub_5604391",
95
- "opensub_5748340",
96
- "opensub_5748494",
97
- "opensub_5629516",
98
- # Kvinden i buret
99
- "opensub_5636248",
100
- "opensub_5514603",
101
- "opensub_5504932",
102
- # Den skaldede frisør
103
- "opensub_5084880",
104
- "opensub_5031826",
105
- # Jagten
106
- "opensub_6929419",
107
- "opensub_4885548",
108
- # Melancholia
109
- "opensub_4421330",
110
- "opensub_4406991",
111
- "opensub_4418817",
112
- # Ambassadøren
113
- "opensub_4557721",
114
- # Antichrist
115
- "opensub_5511502",
116
- "opensub_3938655",
117
- "opensub_3636940",
118
- "opensub_3564521",
119
- "opensub_3562215",
120
- # En kongelig affære
121
- "opensub_4725493",
122
- "opensub_4725160",
123
- "opensub_4725159",
124
- "opensub_4916871",
125
- "opensub_5186746",
126
- # Brødre
127
- "opensub_233943",
128
- "opensub_87475",
129
- }
130
- ```
131
-
132
- We have additionally removed duplicate entries from the original dataset.
133
-
134
- ### Dataset Statistics
135
-
136
- <!-- START-DATASET PLOTS -->
137
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
138
- <img>
139
- <!-- END-DATASET PLOTS -->
140
-
141
-
142
- ## Additional Information
143
-
144
-
145
- ### Citation Information
146
-
147
- This dataset was initially published as part of the [Danish gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite and reference it if you use this dataset:
148
-
149
- > Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
150
-
151
- ```bash
152
- @inproceedings{dagw,
153
- title = {{The Danish Gigaword Corpus}},
154
- author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
155
- year = 2021,
156
- booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
157
- publisher = {NEALT}
158
- }
159
- ```
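Since the processed OpenSubtitles subset and its parquet are removed here, the raw material can still be pulled from the upstream dataset cited in the deleted card. A minimal sketch following the deleted `create.py` (the `source == "opensub"` filter, the `doc_id` field, and the redaction ID set all come from that script); unlike the original script, which truncated the listed titles to their first 200 tokens, this sketch simply drops them:

```py
# Sketch: recover the Danish OpenSubtitles documents from the upstream corpus,
# excluding the titles that the deleted create.py redacted for copyright reasons.
from datasets import load_dataset

redacted_ids = {"opensub_6726481", "opensub_6732371"}  # abbreviated; the full set is listed above

ds = load_dataset("DDSC/partial-danish-gigaword-no-twitter", split="train")
ds = ds.filter(lambda x: x["source"] == "opensub" and x["doc_id"] not in redacted_ids)
print(len(ds))
```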
data/opensubtitles/opensubtitles.parquet DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:1c80228f2095281e8e1ce2339a071873299dee2912f83706bf271ea782a94b39
- size 496269823
data/relig/descriptive_stats.json CHANGED
@@ -1 +1 @@
- {"number_of_samples": 66, "average_document_length": 53873.56060606061, "number_of_tokens": 1243970, "language": "dan, dansk, Danish", "revision": "6a88cbd06a598259a4879ee118c8ab1843c500ff"}
+ {"number_of_samples": 66, "average_document_length": 53873.56060606061, "number_of_tokens": 1243970, "language": "dan, dansk, Danish", "revision": "ab78b9132d5697343896be76ff8f99a6b544b74b"}
data/relig/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: 6d49a67bfbc7b886985a767045b17b229bca49c3024af37e693bffd711aa45cc
  • Pointer size: 131 Bytes
  • Size of remote file: 531 kB
data/relig/relig.md CHANGED
@@ -24,58 +24,22 @@ Danish religious text from the 1700-2022.

  ## Dataset Description

-
- <!-- START-DESC-STATS -->
+ <!-- START-DESC-STATS -->
+
  - **Language**: dan, dansk, Danish
  - **Number of samples**: 66
  - **Number of tokens (Llama 3)**: 1.24M
  - **Average document length (characters)**: 53873.56
- <!-- END-DESC-STATS -->

+ <!-- END-DESC-STATS -->


- ## Dataset Structure
+ ## Dataset Sturcture
  An example from the dataset looks as follows.

-
  <!-- START-SAMPLE -->
- ```py
- {
- "text": "Salomos Højsang\nKys mig, giv mig Kys af din mund thi din Kærlighed er bedre end Vin.\nLifligt dufter [...]",
- "source": "relig",
- "id": "relig_SON",
- "added": "2020-09-14",
- "created": "1700-01-01, 2022-01-01",
- "license": "Creative Commons Legal Code\n\nCC0 1.0 Universal",
- "domain": "Wiki & Books",
- "metadata": {
- "source-pretty": "Religious texts"
- }
- }
- ```
-
- ### Data Fields
-
- An entry in the dataset consists of the following fields:
-
- - `text`(`str`): The content of the document.
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
- - `id` (`str`): An unique identifier for each document.
- - `added` (`str`): An date for when the document was added to this collection.
- - `created` (`str`): An date range for when the document was originally created.
- - `license` (`str`): The license of the document. The licenses vary according to the source.
- - `domain` (`str`): The domain of the source
- - `metadata/source-pretty` (`str`): The long form version of the short-form source name
- - `metadata/*`: Potentially additional metadata
  <!-- END-SAMPLE -->

- ### Dataset Statistics
-
- <!-- START-DATASET PLOTS -->
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
- <img>
- <!-- END-DATASET PLOTS -->
-

  ## Additional Information

data/retsinformationdk/descriptive_stats.json CHANGED
@@ -1 +1 @@
- {"number_of_samples": 64043, "average_document_length": 22248.525506300455, "number_of_tokens": 516537034, "language": "dan, dansk, Danish", "revision": "6a88cbd06a598259a4879ee118c8ab1843c500ff"}
+ {"number_of_samples": 64043, "average_document_length": 22248.525506300455, "number_of_tokens": 516537034, "language": "dan, dansk, Danish", "revision": "ab78b9132d5697343896be76ff8f99a6b544b74b"}