parquet-converter committed
Commit d287943 · 1 Parent(s): e254179

Update parquet files
.gitattributes DELETED
@@ -1,37 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text
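The deleted `.gitattributes` told git-LFS which paths to store as pointers. As a rough illustration (not part of this commit, and only an approximation: real gitattributes matching has extra path rules that `fnmatch` does not implement), the glob patterns can be checked in Python:

```python
from fnmatch import fnmatch

# A few of the glob patterns from the deleted .gitattributes
LFS_PATTERNS = ["*.parquet", "*.7z", "*.tar.*", "*tfevents*"]

def is_lfs_tracked(path: str) -> bool:
    """Rough check: does any LFS glob pattern match this path?"""
    return any(fnmatch(path, pattern) for pattern in LFS_PATTERNS)

print(is_lfs_tracked("10k/nordic_langid-test.parquet"))  # True
print(is_lfs_tracked("README.md"))                       # False
```

Note that `fnmatch`'s `*` also crosses `/`, which is why the path in the subdirectory matches here.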
nordic_dsl_10000test.csv → 10k/nordic_langid-test.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:59bcbe09179b6b6007710291750d258b586e53ddf0b3dd16df521bb6b25c7d4a
- size 301970
+ oid sha256:9be26dd3f0d24f31fee196ee3262a9a9e0f1b9b171d864864121ebb7a9eb1ce6
+ size 214281
nordic_dsl_10000train.csv → 10k/nordic_langid-train.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:85aa0e96ad94f1eb02a341e03db0e0c8ed986d6faedbddf8538d969719e109df
- size 5753499
+ oid sha256:104fc33e6f0547858245911c5b29440abc0b60ac7719f71aa9d7cca5e9f9169f
+ size 4072480
nordic_dsl_50000test.csv → 50k/nordic_langid-test.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:e2e06503670905fdc5324e23c70d4bb3a77c59aaf43eb071fdc8f4c28fccc9fa
- size 1958240
+ oid sha256:309f054f2fa0bfe804b68e234559dd273c1a954f4e7609671120c1788e306e23
+ size 1383259
nordic_dsl_50000train.csv → 50k/nordic_langid-train.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:5b3e31ace17411a501eb0138c33c1d811a233cd09e918badec4abd81486e556c
- size 37159623
+ oid sha256:bd4d5d3f88c51b88e53a9e6c7924e10d4b0383101298b15acca842552c79904a
+ size 26259329
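Each RENAMED entry above is a git-LFS pointer file, not the data itself: a `version` line, an `oid` line, and a `size` line. A minimal parser sketch (illustrative, not part of the repo), fed with the new pointer for `10k/nordic_langid-test.parquet` from the diff above:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a git-LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:9be26dd3f0d24f31fee196ee3262a9a9e0f1b9b171d864864121ebb7a9eb1ce6\n"
    "size 214281\n"
)
info = parse_lfs_pointer(pointer)
print(info["size"])  # 214281
```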
README.md DELETED
@@ -1,207 +0,0 @@
- ---
- annotations_creators:
- - found
- language_creators:
- - found
- language:
- - da
- - nn
- - nb
- - fo
- - is
- - sv
- license:
- - cc-by-sa-3.0
- multilinguality:
- - multilingual
- size_categories:
- - 100K<n<1M
- source_datasets:
- - original
- task_categories:
- - text-classification
- task_ids: []
- paperswithcode_id: nordic-langid
- pretty_name: Nordic Language ID for Distinguishing between Similar Languages
- tags:
- - language-identification
- ---
-
- # Dataset Card for nordic_langid
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-
- ## Dataset Description
-
- - **Homepage:** [https://github.com/StrombergNLP/NordicDSL](https://github.com/StrombergNLP/NordicDSL)
- - **Repository:** [https://github.com/StrombergNLP/NordicDSL](https://github.com/StrombergNLP/NordicDSL)
- - **Paper:** [https://aclanthology.org/2021.vardial-1.8/](https://aclanthology.org/2021.vardial-1.8/)
- - **Leaderboard:** [Needs More Information]
- - **Point of Contact:** [René Haas](mailto:[email protected])
-
- ### Dataset Summary
-
- Automatic language identification is a challenging problem. Discriminating
- between closely related languages is especially difficult. This paper presents
- a machine learning approach for automatic language identification for the
- Nordic languages, which often suffer miscategorisation by existing
- state-of-the-art tools. Concretely we will focus on discrimination between six
- Nordic languages: Danish, Swedish, Norwegian (Nynorsk), Norwegian (Bokmål),
- Faroese and Icelandic.
-
- This is the data for the tasks. Two variants are provided: 10K and 50K,
- holding 10,000 and 50,000 examples for each language respectively.
-
- For more info, see the paper: [Discriminating Between Similar Nordic Languages](https://aclanthology.org/2021.vardial-1.8/).
-
- ### Supported Tasks and Leaderboards
-
- *
-
- ### Languages
-
- This dataset is in six similar Nordic languages:
-
- - Danish, `da`
- - Faroese, `fo`
- - Icelandic, `is`
- - Norwegian Bokmål, `nb`
- - Norwegian Nynorsk, `nn`
- - Swedish, `sv`
-
- ## Dataset Structure
-
- The dataset has two parts, one with 10K samples per language and another with 50K per language.
- The original splits and data allocation used in the paper are presented here.
-
- ### Data Instances
-
- [Needs More Information]
-
- ### Data Fields
-
- - `id`: the sentence's unique identifier, a `string`
- - `sentence`: the text to be classified, a `string`
- - `language`: the class, one of `da`, `fo`, `is`, `nb`, `nn`, `sv`.
-
- ### Data Splits
-
- Train and Test splits are provided, divided using the code provided with the paper.
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- Data is taken from Wikipedia and Tatoeba in each of these six languages.
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- **Data collection** Data was scraped from Wikipedia. We downloaded summaries for randomly chosen Wikipedia
- articles in each of the languages, saved as raw text
- to six .txt files of about 10MB each.
- The 50K section is extended with Tatoeba data, which provides a different register to Wikipedia text, and then topped up with more Wikipedia data.
-
- **Extracting sentences** The first pass in sentence
- tokenisation is splitting by line breaks. We then extract shorter sentences with the sentence tokenizer
- (`sent_tokenize`) function from NLTK (Loper
- and Bird, 2002). This does a better job than just
- splitting by '.' because abbreviations,
- which can appear in a legitimate sentence, typically
- include a period symbol.
-
- **Cleaning characters** The initial dataset has
- many characters that do not belong to the alphabets of the languages we work with. Often the
- Wikipedia pages for people or places contain names
- in foreign languages. For example a summary
- might contain Chinese or Russian characters which
- are not strong signals for the purpose of discriminating between the target languages.
- Further, some characters in the
- target languages may be mis-encoded. These mis-encodings are also not likely to be intrinsically
- strong or stable signals.
- To simplify feature extraction, and to reduce the
- size of the vocabulary, the raw data is converted
- to lowercase and stripped of all characters which
- are not part of the standard alphabet of the six
- languages using a character whitelist.
-
- #### Who are the source language producers?
-
- The source language is from Wikipedia contributors and Tatoeba contributors.
-
- ### Annotations
-
- #### Annotation process
-
- The annotations were found.
-
- #### Who are the annotators?
-
- The annotations were found. They are determined by which language section a contributor posts their content to.
-
- ### Personal and Sensitive Information
-
- The data hasn't been checked for PII, and is already all public. Tatoeba is based on translations of synthetic conversational turns and is unlikely to bear personal or sensitive information.
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- This dataset is intended to help correctly identify content in six minority languages. Existing systems often confuse these, especially Bokmål and Danish or Icelandic and Faroese. However, some dialects are missed (for example Bornholmsk) and the closed nature of the classification task thus excludes speakers of these languages without recognising their existence.
-
- ### Discussion of Biases
-
- The text comes from only two genres, so might not transfer well to other domains.
-
- ### Other Known Limitations
-
- [Needs More Information]
-
- ## Additional Information
-
- ### Dataset Curators
-
- [Needs More Information]
-
- ### Licensing Information
-
- The data here is licensed CC-BY-SA 3.0. If you use this data, you MUST state its origin.
-
- ### Citation Information
-
- ```
- @inproceedings{haas-derczynski-2021-discriminating,
-     title = "Discriminating Between Similar Nordic Languages",
-     author = "Haas, Ren{\'e} and
-       Derczynski, Leon",
-     booktitle = "Proceedings of the Eighth Workshop on NLP for Similar Languages, Varieties and Dialects",
-     month = apr,
-     year = "2021",
-     address = "Kiyv, Ukraine",
-     publisher = "Association for Computational Linguistics",
-     url = "https://aclanthology.org/2021.vardial-1.8",
-     pages = "67--75",
- }
- ```
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"10k": {"description": "Automatic language identification is a challenging problem. Discriminating\nbetween closely related languages is especially difficult. This paper presents\na machine learning approach for automatic language identification for the\nNordic languages, which often suffer miscategorisation by existing \nstate-of-the-art tools. Concretely we will focus on discrimination between six \nNordic languages: Danish, Swedish, Norwegian (Nynorsk), Norwegian (Bokm\u00e5l), \nFaroese and Icelandic.\n\nThis is the data for the tasks. Two variants are provided: 10K and 50K, with\nholding 10,000 and 50,000 examples for each language respectively.\n\n", "citation": "@inproceedings{haas-derczynski-2021-discriminating,\n    title = \"Discriminating Between Similar Nordic Languages\",\n    author = \"Haas, Ren{'e} and\n      Derczynski, Leon\",\n    booktitle = \"Proceedings of the Eighth Workshop on NLP for Similar Languages, Varieties and Dialects\",\n    month = apr,\n    year = \"2021\",\n    address = \"Kiyv, Ukraine\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https://aclanthology.org/2021.vardial-1.8\",\n    pages = \"67--75\",\n}\n\n", "homepage": "https://aclanthology.org/2021.vardial-1.8/", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "sentence": {"dtype": "string", "id": null, "_type": "Value"}, "language": {"num_classes": 6, "names": ["dk", "sv", "nb", "nn", "fo", "is"], "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "nordic_lang_id", "config_name": "10k", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 5856359, "num_examples": 56985, "dataset_name": "nordic_lang_id"}, "test": {"name": "test", "num_bytes": 303860, "num_examples": 3000, "dataset_name": "nordic_lang_id"}}, "download_checksums": {"nordic_dsl_10000train.csv": {"num_bytes": 5753499, "checksum": "85aa0e96ad94f1eb02a341e03db0e0c8ed986d6faedbddf8538d969719e109df"}, "nordic_dsl_10000test.csv": {"num_bytes": 301970, "checksum": "59bcbe09179b6b6007710291750d258b586e53ddf0b3dd16df521bb6b25c7d4a"}}, "download_size": 6055469, "post_processing_size": null, "dataset_size": 6160219, "size_in_bytes": 12215688}, "50k": {"description": "Automatic language identification is a challenging problem. Discriminating\nbetween closely related languages is especially difficult. This paper presents\na machine learning approach for automatic language identification for the\nNordic languages, which often suffer miscategorisation by existing \nstate-of-the-art tools. Concretely we will focus on discrimination between six \nNordic languages: Danish, Swedish, Norwegian (Nynorsk), Norwegian (Bokm\u00e5l), \nFaroese and Icelandic.\n\nThis is the data for the tasks. Two variants are provided: 10K and 50K, with\nholding 10,000 and 50,000 examples for each language respectively.\n\n", "citation": "@inproceedings{haas-derczynski-2021-discriminating,\n    title = \"Discriminating Between Similar Nordic Languages\",\n    author = \"Haas, Ren{'e} and\n      Derczynski, Leon\",\n    booktitle = \"Proceedings of the Eighth Workshop on NLP for Similar Languages, Varieties and Dialects\",\n    month = apr,\n    year = \"2021\",\n    address = \"Kiyv, Ukraine\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https://aclanthology.org/2021.vardial-1.8\",\n    pages = \"67--75\",\n}\n\n", "homepage": "https://aclanthology.org/2021.vardial-1.8/", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "sentence": {"dtype": "string", "id": null, "_type": "Value"}, "language": {"num_classes": 6, "names": ["dk", "sv", "nb", "nn", "fo", "is"], "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "nordic_lang_id", "config_name": "50k", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 37901206, "num_examples": 284231, "dataset_name": "nordic_lang_id"}, "test": {"name": "test", "num_bytes": 1977050, "num_examples": 14960, "dataset_name": "nordic_lang_id"}}, "download_checksums": {"nordic_dsl_50000train.csv": {"num_bytes": 37159623, "checksum": "5b3e31ace17411a501eb0138c33c1d811a233cd09e918badec4abd81486e556c"}, "nordic_dsl_50000test.csv": {"num_bytes": 1958240, "checksum": "e2e06503670905fdc5324e23c70d4bb3a77c59aaf43eb071fdc8f4c28fccc9fa"}}, "download_size": 39117863, "post_processing_size": null, "dataset_size": 39878256, "size_in_bytes": 78996119}}
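The deleted `dataset_infos.json` declares `language` as a six-way `ClassLabel` with names `["dk", "sv", "nb", "nn", "fo", "is"]`; integer class ids follow list order. A small sketch of the mapping that implies (plain dicts here, not the `datasets` API):

```python
# ClassLabel names, in order, from the deleted dataset_infos.json
NAMES = ["dk", "sv", "nb", "nn", "fo", "is"]

str2int = {name: idx for idx, name in enumerate(NAMES)}
int2str = dict(enumerate(NAMES))

print(str2int["nn"])  # 3
print(int2str[4])     # fo
```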
 
 
nordic_langid.py DELETED
@@ -1,152 +0,0 @@
- # coding=utf-8
- # Copyright 2020 HuggingFace Datasets Authors.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
-
- # Lint as: python3
- """NordicDSL: A language identification dataset for Nordic languages"""
-
- import csv
- import os
-
- import datasets
-
-
- logger = datasets.logging.get_logger(__name__)
-
-
- _CITATION = """\
- @inproceedings{haas-derczynski-2021-discriminating,
-     title = "Discriminating Between Similar Nordic Languages",
-     author = "Haas, Ren{\'e} and
-       Derczynski, Leon",
-     booktitle = "Proceedings of the Eighth Workshop on NLP for Similar Languages, Varieties and Dialects",
-     month = apr,
-     year = "2021",
-     address = "Kiyv, Ukraine",
-     publisher = "Association for Computational Linguistics",
-     url = "https://aclanthology.org/2021.vardial-1.8",
-     pages = "67--75",
- }
-
- """
-
- _DESCRIPTION = """\
- Automatic language identification is a challenging problem. Discriminating
- between closely related languages is especially difficult. This paper presents
- a machine learning approach for automatic language identification for the
- Nordic languages, which often suffer miscategorisation by existing
- state-of-the-art tools. Concretely we will focus on discrimination between six
- Nordic languages: Danish, Swedish, Norwegian (Nynorsk), Norwegian (Bokmål),
- Faroese and Icelandic.
-
- This is the data for the tasks. Two variants are provided: 10K and 50K, with
- holding 10,000 and 50,000 examples for each language respectively.
-
- """
-
- _URLS = {
-     "10K": "nordic_dsl_10000",
-     "50K": "nordic_dsl_50000",
- }
-
-
- class NordicLangIdConfig(datasets.BuilderConfig):
-     """BuilderConfig for NordicLangId"""
-
-     def __init__(self, **kwargs):
-         """BuilderConfig NordicLangId.
-
-         Args:
-             **kwargs: keyword arguments forwarded to super.
-         """
-         super(NordicLangIdConfig, self).__init__(**kwargs)
-
-
- class NordicLangId(datasets.GeneratorBasedBuilder):
-     """NordicLangId dataset."""
-
-     VERSION = datasets.Version("1.0.0")
-
-     BUILDER_CONFIGS = [
-         NordicLangIdConfig(
-             name="10k",
-             description="Data for distinguishing between similar Nordic languages: 10k examples per class",
-             version=VERSION,
-         ),
-         NordicLangIdConfig(
-             name="50k",
-             description="Data for distinguishing between similar Nordic languages: 50k examples per class",
-             version=VERSION,
-         ),
-     ]
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {
-                     "id": datasets.Value("string"),
-                     "sentence": datasets.Value("string"),
-                     "language": datasets.features.ClassLabel(
-                         names=[
-                             "dk",
-                             "sv",
-                             "nb",
-                             "nn",
-                             "fo",
-                             "is",
-                         ]
-                     ),
-                 }
-             ),
-             supervised_keys=None,
-             homepage="https://aclanthology.org/2021.vardial-1.8/",
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         if self.config.name == "10k":
-             downloaded_train = dl_manager.download(_URLS["10K"] + 'train.csv')
-             downloaded_test = dl_manager.download(_URLS["10K"] + 'test.csv')
-         elif self.config.name == "50k":
-             downloaded_train = dl_manager.download(_URLS["50K"] + 'train.csv')
-             downloaded_test = dl_manager.download(_URLS["50K"] + 'test.csv')
-
-         return [
-             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_train}),
-             datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": downloaded_test}),
-         ]
-
-     def _generate_examples(self, filepath):
-         logger.info("⏳ Generating examples from = %s", filepath)
-         with open(filepath, encoding="utf-8") as f:
-             guid = 0
-             for line in f:
-                 line = line.strip()
-                 if not line:
-                     continue
-                 if self.config.name == "10k":
-                     line = line.replace('dataset10000, ', '')
-                 if self.config.name == "50k":
-                     line = line.replace('dataset50000, ', '')
-
-                 instance = {
-                     "id": str(guid),
-                     "language": line[-2:],
-                     "sentence": line[:-3],
-                 }
-
-                 yield guid, instance
-                 guid += 1
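The deleted script's `_generate_examples` recovers each example by string slicing: after stripping an optional `dataset10000, ` / `dataset50000, ` prefix, the last two characters of a line are the language code, and everything before the three-character tail (separator plus code) is the sentence. A self-contained sketch of that logic (the sample line below is invented for illustration):

```python
def parse_line(line: str, config: str = "10k") -> dict:
    """Mirror the slicing logic in the deleted _generate_examples."""
    line = line.strip()
    if config == "10k":
        line = line.replace("dataset10000, ", "")
    elif config == "50k":
        line = line.replace("dataset50000, ", "")
    # Last two chars = language code; drop the 3-char tail for the sentence.
    return {"sentence": line[:-3], "language": line[-2:]}

example = parse_line("dataset10000, hej med dig da")
print(example)  # {'sentence': 'hej med dig', 'language': 'da'}
```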