KennethEnevoldsen committed · unverified
Commit 6a88cbd · Parent(s): 546c3b3

Added lex.dk
README.md CHANGED
@@ -5,6 +5,10 @@ configs:
  data_files:
  - split: train
  path: 'data/*/*.parquet'
  - config_name: opensubtitles
  data_files:
  - split: train
@@ -116,7 +120,8 @@ language_bcp47:

  <!--
  readme structure is inspired by:
- https://github.com/huggingface/datasets/blob/main/templates/README_guide.md -->


  # 🧨 Danish Dynaword
@@ -138,6 +143,7 @@ https://github.com/huggingface/datasets/blob/main/templates/README_guide.md -->
  - [Languages:](#languages)
  - [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
@@ -151,12 +157,6 @@ https://github.com/huggingface/datasets/blob/main/templates/README_guide.md -->

  ## Dataset Description

-
-
-
-
-
-
  <!-- START-DESC-STATS -->
  - **Language**: dan, dansk, Danish
  - **Number of samples**: 576.59K
@@ -165,11 +165,6 @@ https://github.com/huggingface/datasets/blob/main/templates/README_guide.md -->
  <!-- END-DESC-STATS -->


-
-
-
-
-
  ### Dataset Summary

  The Danish Dynaword is a continually developed collection of Danish free-form text datasets from various domains, intended to be updated with new data sources over time. If you would like to contribute a dataset, see the [contribute section](#contributing-to-the-dataset).
@@ -221,13 +216,6 @@ The dataset contains text from different sources which are thoroughly defined in

  Each entry in the dataset consists of a single text with associated metadata.

-
-
-
-
-
-
-
  <!-- START-SAMPLE -->
  ```py
  {
@@ -259,13 +247,6 @@ An entry in the dataset consists of the following fields:
  - `metadata/*`: Potentially additional metadata
  <!-- END-SAMPLE -->

-
-
-
-
-
-
-
  ### Data Splits

  The entire corpus is provided in the `train` split.
@@ -284,52 +265,35 @@ This data generally contains no annotation besides the metadata attached to each

  Below follows a brief overview of the sources in the corpus along with their individual license.

-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
  <!-- START-MAIN TABLE -->
- | Source | Description | N. Tokens | License |
- |:--------------------|:-----------------------------------------------------------------------------------------------------------------------------|:------------|:-----------------------|
- | [opensubtitles] | Danish subsection of [OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles/corpus/version/OpenSubtitles) | 271.89M | [CC-0] |
- | [retsinformationdk] | [retsinformation.dk](https://www.retsinformation.dk) (legal-information.dk) the official legal information system of Denmark | 516.54M | [Danish Copyright Law] |
- | [ep] | The Danish subsection of [Europarl](https://aclanthology.org/2005.mtsummit-papers.11/) | 100.89M | [CC-0] |
- | [ft] | Records from all meetings of The Danish parliament (Folketinget) in the parliament hall | 114.09M | [CC-0] |
- | [wikisource] | The Danish subsection of [Wikisource](https://en.wikisource.org/wiki/Main_Page) | 5.34M | [CC-0] |
- | [spont] | Conversational samples collected as a part of research projects at Aarhus University | 1.56M | [CC-0] |
- | [tv2r] | Contemporary Danish newswire articles published between 2010 and 2019 | 21.67M | [CC-BY-SA 4.0] |
- | [adl] | Danish literature from 1700-2023 from the Archive for Danish Literature (ADL) | 58.49M | [CC-0] |
- | [hest] | Samples from the Danish debate forum www.heste-nettet.dk | 389.33M | [CC-0] |
- | [skat] | Skat is the Danish tax authority. This dataset contains content from its website skat.dk | 122.12M | [CC-0] |
- | [dannet] | [DanNet](https://cst.ku.dk/projekter/dannet) is a Danish WordNet | 1.52M | [DanNet 1.0 License] |
- | [retspraksis] | Case law or judical practice in Denmark derived from [Retspraksis](https://da.wikipedia.org/wiki/Retspraksis) | 57.08M | [CC-0] |
- | [wikibooks] | The Danish Subsection of [Wikibooks](https://www.wikibooks.org) | 6.24M | [CC-0] |
- | [jvj] | The works of the Danish author and poet, [Johannes V. Jensen](https://da.wikipedia.org/wiki/Johannes_V._Jensen) | 3.55M | [CC-BY-SA 4.0] |
- | [gutenberg] | The Danish subsection from Project [Gutenberg](https://www.gutenberg.org) | 6.76M | [Gutenberg License] |
- | [botxt] | The Bornholmsk Ordbog Dictionary Projec | 847.97K | [CC-0] |
- | [depbank] | The Danish subsection of the [Universal Dependencies Treebank](https://github.com/UniversalDependencies/UD_Danish-DDT) | 185.45K | [CC-BY-SA 4.0] |
- | [naat] | Danish speeches from 1930-2022 | 286.68K | [CC-0] |
- | [synne] | Dataset collected from [synnejysk forening's website](https://www.synnejysk.dk), covering the Danish dialect sønderjysk | 52.51K | [CC-0] |
- | [wiki] | The Danish subsection of [wikipeadia](https://en.wikipedia.org/wiki/Main_Page) | 122.00M | [CC-0] |
- | [nordjyllandnews] | Articles from the Danish Newspaper [TV2 Nord](https://www.tv2nord.dk) | 37.91M | [CC-0] |
- | [relig] | Danish religious text from the 1700-2022 | 1.24M | [CC-0] |
- | **Total** | | 1.84B | |
-

  [opensubtitles]: data/opensubtitles/opensubtitles.md
  [retsinformationdk]: data/retsinformationdk/retsinformationdk.md
  [ep]: data/ep/ep.md
@@ -362,18 +326,6 @@ Below follows a brief overview of the sources in the corpus along with their individual license.
  <!-- END-MAIN TABLE -->


-
-
-
-
-
-
-
-
-
-
-
-
  You can learn more about each dataset by pressing

  <!-- ### Quality Control
 
  data_files:
  - split: train
  path: 'data/*/*.parquet'
+ - config_name: lexdk
+ data_files:
+ - split: train
+ path: data/lexdk/*.parquet
  - config_name: opensubtitles
  data_files:
  - split: train
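For reference, the new `lexdk` config narrows the default glob. A standalone sketch using Python's `fnmatch` (close to, but not identical to, the Hub's data-file glob resolution) checks that the added parquet file is matched by both patterns:

```python
from fnmatch import fnmatch

# Globs from the config above; the file path is the parquet added in this commit.
default_glob = "data/*/*.parquet"    # default config: all sources
lexdk_glob = "data/lexdk/*.parquet"  # new lexdk-only config
added_file = "data/lexdk/lexdk.parquet"

print(fnmatch(added_file, default_glob))  # True: lexdk is part of the default config
print(fnmatch(added_file, lexdk_glob))    # True: and loadable on its own
```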
 

  <!--
  readme structure is inspired by:
+ https://github.com/huggingface/datasets/blob/main/templates/README_guide.md
+ -->


  # 🧨 Danish Dynaword
 
  - [Languages:](#languages)
  - [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
+ - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
 
 
 

  Below follows a brief overview of the sources in the corpus along with their individual license.

  <!-- START-MAIN TABLE -->
+ | Source | Description | N. Tokens | License |
+ | :------------------ | :--------------------------------------------------------------------------------------------------------------------------- | :-------- | :--------------------- |
+ | [lexdk] | Permissible use articles from [lex.dk](https://lex.dk) | 5.69M | [CC-BY-SA 4.0] |
+ | [opensubtitles] | Danish subsection of [OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles/corpus/version/OpenSubtitles) | 271.89M | [CC-0] |
+ | [retsinformationdk] | [retsinformation.dk](https://www.retsinformation.dk) (legal-information.dk), the official legal information system of Denmark | 516.54M | [Danish Copyright Law] |
+ | [ep] | The Danish subsection of [Europarl](https://aclanthology.org/2005.mtsummit-papers.11/) | 100.89M | [CC-0] |
+ | [ft] | Records from all meetings of the Danish parliament (Folketinget) in the parliament hall | 114.09M | [CC-0] |
+ | [wikisource] | The Danish subsection of [Wikisource](https://en.wikisource.org/wiki/Main_Page) | 5.34M | [CC-0] |
+ | [spont] | Conversational samples collected as a part of research projects at Aarhus University | 1.56M | [CC-0] |
+ | [tv2r] | Contemporary Danish newswire articles published between 2010 and 2019 | 21.67M | [CC-BY-SA 4.0] |
+ | [adl] | Danish literature from 1700-2023 from the Archive for Danish Literature (ADL) | 58.49M | [CC-0] |
+ | [hest] | Samples from the Danish debate forum www.heste-nettet.dk | 389.33M | [CC-0] |
+ | [skat] | Skat is the Danish tax authority. This dataset contains content from its website skat.dk | 122.12M | [CC-0] |
+ | [dannet] | [DanNet](https://cst.ku.dk/projekter/dannet) is a Danish WordNet | 1.52M | [DanNet 1.0 License] |
+ | [retspraksis] | Case law or judicial practice in Denmark, derived from [Retspraksis](https://da.wikipedia.org/wiki/Retspraksis) | 57.08M | [CC-0] |
+ | [wikibooks] | The Danish subsection of [Wikibooks](https://www.wikibooks.org) | 6.24M | [CC-0] |
+ | [jvj] | The works of the Danish author and poet [Johannes V. Jensen](https://da.wikipedia.org/wiki/Johannes_V._Jensen) | 3.55M | [CC-BY-SA 4.0] |
+ | [gutenberg] | The Danish subsection of Project [Gutenberg](https://www.gutenberg.org) | 6.76M | [Gutenberg License] |
+ | [botxt] | The Bornholmsk Ordbog Dictionary Project | 847.97K | [CC-0] |
+ | [depbank] | The Danish subsection of the [Universal Dependencies Treebank](https://github.com/UniversalDependencies/UD_Danish-DDT) | 185.45K | [CC-BY-SA 4.0] |
+ | [naat] | Danish speeches from 1930-2022 | 286.68K | [CC-0] |
+ | [synne] | Dataset collected from [synnejysk forening's website](https://www.synnejysk.dk), covering the Danish dialect sønderjysk | 52.51K | [CC-0] |
+ | [wiki] | The Danish subsection of [Wikipedia](https://en.wikipedia.org/wiki/Main_Page) | 122.00M | [CC-0] |
+ | [nordjyllandnews] | Articles from the Danish news outlet [TV2 Nord](https://www.tv2nord.dk) | 37.91M | [CC-0] |
+ | [relig] | Danish religious texts from 1700-2022 | 1.24M | [CC-0] |
+ | **Total** | | 1.85B | |
+
+ [lexdk]: data/lexdk/lexdk.md
  [opensubtitles]: data/opensubtitles/opensubtitles.md
  [retsinformationdk]: data/retsinformationdk/retsinformationdk.md
  [ep]: data/ep/ep.md
 
  <!-- END-MAIN TABLE -->


  You can learn more about each dataset by pressing

  <!-- ### Quality Control
data/lexdk/create.py ADDED
@@ -0,0 +1,78 @@
+ """Download lexdk from alexandrainst/lexdk-open."""
+
+ from datetime import datetime
+ from pathlib import Path
+ from typing import cast
+
+ import pandas as pd
+ from datasets import Dataset, load_dataset
+
+ column_order = [
+     "text",
+     "source",
+     "id",
+     "added",
+     "created",
+     "license",
+     "domain",
+     "metadata",
+ ]
+
+
+ def convert_sample(example: dict) -> dict:
+     # Example input:
+     # {
+     #     "url": "https://denstoredanske.lex.dk/Kullmanns_M%C3%B8lle",
+     #     "title": "Kullmanns Mølle",
+     #     "clarification": "",
+     #     "authors": ["https://brugere.lex.dk/6929"],
+     #     "date": "2021-01-20T13:23:20+01:00",
+     #     "license": "fri anvendelse",
+     #     "text": "Kullmanns Mølle er en mølle i Gudhjem, opkaldt efter Matts Kullmann, der byggede møllen i 1893 til sin søn, Christian Kullmann, se Gudhjem Mølle.",
+     # }
+     date = datetime.fromisoformat(example["date"])
+     text = f"{example['title']}\n\npubliceret: {date}\n{example['text']}"
+
+     new_example = dict(
+         text_new=text,
+         id=example["url"],
+         source="lexdk",
+         domain="Conversation",
+         license="cc-by-sa-4.0",
+         added="2025-01-04",
+         created=f"{date.date()}, {date.date()}",
+         metadata={"source-pretty": "Lex.dk"},
+     )
+
+     return new_example
+
+
+ def main():
+     ds = load_dataset("alexandrainst/lexdk-open", split="train")
+     ds = cast(Dataset, ds)
+
+     dates = [datetime.fromisoformat(date).date() for date in ds["date"]]
+     print(str(min(dates)), ",", str(max(dates)))  # 2009-01-28, 2023-09-05
+
+     # URLs are unique and serve as document IDs.
+     assert len(set(ds["url"])) == len(ds)
+
+     ds = ds.map(convert_sample, num_proc=4)
+     ds = ds.select_columns(column_order[1:] + ["text_new"])
+     ds = ds.rename_columns({"text_new": "text"})
+     # ensure column order
+     ds = ds.select_columns(column_order)
+
+     df = ds.to_pandas()
+     df = cast(pd.DataFrame, df)
+     dedup_df = df.drop_duplicates(keep="first", subset=["text"])
+     print("N. duplicates: ", df.shape[0] - dedup_df.shape[0])  # 0
+
+     ds = ds.select(dedup_df.index)
+     assert len(set(ds["text"])) == len(ds)
+
+     save_path = Path(__file__).parent / "lexdk.parquet"
+     ds.to_parquet(save_path)
+
+
+ if __name__ == "__main__":
+     main()
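As a quick sanity check, `convert_sample` can be exercised on the sample documented in its comment. This is a standalone sketch restating the function (with the f-string quoting fix) so it runs without the Hub dependencies:

```python
from datetime import datetime

def convert_sample(example: dict) -> dict:
    # Mirrors convert_sample in create.py: the text is prefixed with the
    # article title and its publication date.
    date = datetime.fromisoformat(example["date"])
    text = f"{example['title']}\n\npubliceret: {date}\n{example['text']}"
    return dict(
        text_new=text,
        id=example["url"],
        source="lexdk",
        domain="Conversation",
        license="cc-by-sa-4.0",
        added="2025-01-04",
        created=f"{date.date()}, {date.date()}",
        metadata={"source-pretty": "Lex.dk"},
    )

# Sample taken from the comment in create.py (abbreviated text).
sample = {
    "url": "https://denstoredanske.lex.dk/Kullmanns_M%C3%B8lle",
    "title": "Kullmanns Mølle",
    "date": "2021-01-20T13:23:20+01:00",
    "text": "Kullmanns Mølle er en mølle i Gudhjem.",
}
converted = convert_sample(sample)
print(converted["created"])  # 2021-01-20, 2021-01-20
```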
data/lexdk/descriptive_stats.json ADDED
@@ -0,0 +1 @@
+ {"number_of_samples": 11887, "average_document_length": 1405.6435601918063, "number_of_tokens": 5688613, "language": "dan, dansk, Danish", "revision": "546c3b35e0e37fe1f9eff91da9f73e5672833489"}
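These numbers line up with the dataset card: a small sketch (the `human` helper is hypothetical, written to mimic the card's formatting) reproduces the card's 11.89K samples and 5.69M tokens from the raw counts:

```python
import json

stats = json.loads(
    '{"number_of_samples": 11887, "average_document_length": 1405.6435601918063, '
    '"number_of_tokens": 5688613, "language": "dan, dansk, Danish", '
    '"revision": "546c3b35e0e37fe1f9eff91da9f73e5672833489"}'
)

def human(n: float) -> str:
    # Hypothetical helper: format counts the way the card does (e.g. 5.69M, 11.89K).
    for unit, scale in (("B", 1e9), ("M", 1e6), ("K", 1e3)):
        if n >= scale:
            return f"{n / scale:.2f}{unit}"
    return str(n)

print(human(stats["number_of_tokens"]))   # 5.69M
print(human(stats["number_of_samples"]))  # 11.89K
```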
data/lexdk/lexdk.md ADDED
@@ -0,0 +1,77 @@
+ ---
+ pretty_name: Lex.dk
+ language:
+ - da
+ license: cc-by-sa-4.0
+ license_name: CC-BY-SA 4.0
+ task_categories:
+ - text-generation
+ - fill-mask
+ task_ids:
+ - language-modeling
+ source_datasets:
+ - alexandrainst/lexdk-open
+ ---
+
+ # Dataset Card for Lex.dk
+
+ <!-- START-SHORT DESCRIPTION -->
+ Permissible use articles from [lex.dk](https://lex.dk).
+ <!-- END-SHORT DESCRIPTION -->
+
+ Lex.dk is a Danish online encyclopedia platform providing access to reliable and authoritative knowledge on a wide range of topics. It is created and curated by experts, ensuring high-quality, accurate content. The platform serves as a central hub for general and specialized information in Danish, making it a valuable resource for education, research, and general learning.
+
+
+ ## Dataset Description
+
+ <!-- START-DESC-STATS -->
+ - **Language**: dan, dansk, Danish
+ - **Number of samples**: 11.89K
+ - **Number of tokens (Llama 3)**: 5.69M
+ - **Average document length (characters)**: 1405.64
+ <!-- END-DESC-STATS -->
+
+
+ ## Dataset Structure
+ An example from the dataset looks as follows.
+
+ <!-- START-SAMPLE -->
+ ```py
+ {
+     "text": "Oluf Høst Museet\n\npubliceret: 2014-04-23 03:42:33+02:00\nOluf Høst Museet, kunstmuseum i Gudhjem, Bor[...]",
+     "source": "lexdk",
+     "id": "https://denstoredanske.lex.dk/Oluf_H%C3%B8st_Museet",
+     "added": "2025-01-04",
+     "created": "2014-04-23, 2014-04-23",
+     "license": "cc-by-sa-4.0",
+     "domain": "Conversation",
+     "metadata": {
+         "source-pretty": "Lex.dk"
+     }
+ }
+ ```
+
+ ### Data Fields
+
+ An entry in the dataset consists of the following fields:
+
+ - `text` (`str`): The content of the document.
+ - `source` (`str`): The source of the document (see [Source Data](#source-data)).
+ - `id` (`str`): A unique identifier for each document.
+ - `added` (`str`): The date when the document was added to this collection.
+ - `created` (`str`): The date range over which the document was originally created.
+ - `license` (`str`): The license of the document. The licenses vary according to the source.
+ - `domain` (`str`): The domain of the source.
+ - `metadata/source-pretty` (`str`): The long-form version of the short-form source name.
+ - `metadata/*`: Potentially additional metadata
+ <!-- END-SAMPLE -->
+
+
+ ## Additional Information
+
+
+ ### Citation Information
+
+ This dataset is derived from the publicly available dataset [alexandrainst/lexdk-open](https://huggingface.co/datasets/alexandrainst/lexdk-open).
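The field list above can be turned into a lightweight check of the sample entry (a sketch; the expected field set is taken directly from the card):

```python
# Minimal schema check for the sample entry shown in the card above.
sample = {
    "text": "Oluf Høst Museet\n\npubliceret: 2014-04-23 03:42:33+02:00\n...",
    "source": "lexdk",
    "id": "https://denstoredanske.lex.dk/Oluf_H%C3%B8st_Museet",
    "added": "2025-01-04",
    "created": "2014-04-23, 2014-04-23",
    "license": "cc-by-sa-4.0",
    "domain": "Conversation",
    "metadata": {"source-pretty": "Lex.dk"},
}

expected_fields = {"text", "source", "id", "added", "created", "license", "domain", "metadata"}
assert set(sample) == expected_fields
# All fields are strings except metadata, which is a nested mapping.
assert all(isinstance(sample[f], str) for f in expected_fields - {"metadata"})
assert isinstance(sample["metadata"], dict)
print("schema ok")
```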
data/lexdk/lexdk.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5c4779881f575d6f612c8603ed4896f10ebc7293c59637fa8a0773ee4545fce3
+ size 10007743
data/opensubtitles/opensubtitles.md CHANGED
@@ -22,18 +22,6 @@ Danish subsection of [OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles/corpus/version/OpenSubtitles)

  ## Dataset Description

-
-
-
-
-
-
-
-
-
-
-
-
  <!-- START-DESC-STATS -->
  - **Language**: dan, dansk, Danish
  - **Number of samples**: 29.82K
@@ -42,31 +30,9 @@ Danish subsection of [OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles/corpus/version/OpenSubtitles)
  <!-- END-DESC-STATS -->


-
-
-
-
-
-
-
-
-
-
-
-
  ## Dataset Structure
  An example from the dataset looks as follows.

-
-
-
-
-
-
-
-
-
-
  <!-- START-SAMPLE -->
  ```py
  {
@@ -99,15 +65,6 @@ An entry in the dataset consists of the following fields:
  <!-- END-SAMPLE -->


-
-
-
-
-
-
-
-
-
  ### Additional Processing

  Due to copyright concerns, additional documents have been removed. These include:
 
src/tests/readme_parsing.py CHANGED
@@ -41,4 +41,4 @@ def replace_tag(markdown: str, package: str, tag: str) -> str:
  start_md, _, remainder = markdown.partition(tag_start)
  _, _, end_md = remainder.partition(tag_end)

- return f"{start_md}\n{tag_start}\n{package.strip()}\n{tag_end}\n{end_md}"
+ return f"{start_md}{tag_start}\n{package.strip()}\n{tag_end}{end_md}"
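This small change matters: the old version inserted a fresh newline before `tag_start` and after `tag_end` on every run, which is what produced the accumulated blank lines removed throughout this commit. A standalone sketch (assuming tag markers of the form `<!-- START-… -->` / `<!-- END-… -->`, as used in the README; the marker format is not shown in this hunk) demonstrates that the fixed version is idempotent:

```python
def replace_tag(markdown: str, package: str, tag: str) -> str:
    # Sketch of src/tests/readme_parsing.py with the fix applied: reuse the
    # document's existing whitespace instead of inserting fresh newlines
    # around the tags on every run.
    tag_start = f"<!-- START-{tag} -->"
    tag_end = f"<!-- END-{tag} -->"
    start_md, _, remainder = markdown.partition(tag_start)
    _, _, end_md = remainder.partition(tag_end)
    return f"{start_md}{tag_start}\n{package.strip()}\n{tag_end}{end_md}"

doc = "intro\n<!-- START-DESC-STATS -->\nold stats\n<!-- END-DESC-STATS -->\noutro"
once = replace_tag(doc, "new stats", "DESC-STATS")
twice = replace_tag(once, "new stats", "DESC-STATS")
assert once == twice  # idempotent: no blank lines accumulate across runs
```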