Datasets:

parquet-converter committed on
Commit c406397
1 Parent(s): 708662e

Update parquet files
.gitattributes ADDED
@@ -0,0 +1,3 @@
+ ner/hr500k-train.parquet filter=lfs diff=lfs merge=lfs -text
+ ud/hr500k-train.parquet filter=lfs diff=lfs merge=lfs -text
+ upos/hr500k-train.parquet filter=lfs diff=lfs merge=lfs -text
README.md DELETED
@@ -1,47 +0,0 @@
- ---
- language:
- - hr
- license:
- - cc-by-sa-4.0
- task_categories:
- - other
- task_ids:
- - lemmatization
- - named-entity-recognition
- - part-of-speech
- tags:
- - structure-prediction
- - normalization
- - tokenization
- ---
-
- The hr500k training corpus contains 506,457 Croatian tokens manually annotated on the levels of tokenisation, sentence segmentation, morphosyntactic tagging, lemmatisation, named entities and dependency syntax.
-
- On the sentence level, the dataset contains 20159 training samples, 1963 validation samples and 2672 test samples
- across the respective data splits. Each sample represents a sentence and includes the following features:
- sentence ID ('sent\_id'), sentence text ('text'), list of tokens ('tokens'), list of lemmas ('lemmas'),
- list of MULTEXT-East tags ('xpos\_tags'), list of UPOS tags ('upos\_tags'), list of morphological features ('feats'),
- and list of IOB tags ('iob\_tags'). A subset of the data also contains universal dependencies ('ud') and consists of
- 7498 training samples, 649 validation samples, and 742 test samples.
-
- Three dataset configurations are available, namely 'ner', 'upos', and 'ud', with the corresponding features
- encoded as class labels. If the configuration is not specified, it defaults to 'ner'.
-
- If you use this dataset in your research, please cite the following paper:
-
- ```
- @InProceedings{LJUBEI16.340,
-   author = {Nikola Ljubešić and Filip Klubička and Željko Agić and Ivo-Pavao Jazbec},
-   title = {New Inflectional Lexicons and Training Corpora for Improved Morphosyntactic Annotation of Croatian and Serbian},
-   booktitle = {Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)},
-   year = {2016},
-   month = {may},
-   date = {23-28},
-   location = {Portorož, Slovenia},
-   editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Sara Goggi and Marko Grobelnik and Bente Maegaard and Joseph Mariani and Helene Mazo and Asuncion Moreno and Jan Odijk and Stelios Piperidis},
-   publisher = {European Language Resources Association (ELRA)},
-   address = {Paris, France},
-   isbn = {978-2-9517408-9-1},
-   language = {english}
- }
- ```
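The deleted README describes a per-sentence schema in which every token-level feature list is aligned with 'tokens'. A minimal sketch of what one such sample looks like (the sentence, tags, and feature values here are invented for illustration, not taken from the corpus):

```python
# A hypothetical sample mirroring the per-sentence schema from the README.
# All token-level lists must have one entry per token.
sample = {
    'sent_id': 'example.1',
    'text': 'Dobar dan.',
    'tokens': ['Dobar', 'dan', '.'],
    'lemmas': ['dobar', 'dan', '.'],
    'xpos_tags': ['Agpmsnn', 'Ncmsn', 'Z'],   # illustrative MULTEXT-East tags
    'upos_tags': ['ADJ', 'NOUN', 'PUNCT'],
    'feats': ['Case=Nom', 'Case=Nom', '_'],   # illustrative morphological features
    'iob_tags': ['O', 'O', 'O'],
}

token_level = ['tokens', 'lemmas', 'xpos_tags', 'upos_tags', 'feats', 'iob_tags']
lengths = {key: len(sample[key]) for key in token_level}
# Sanity check: all token-level feature lists are aligned.
assert len(set(lengths.values())) == 1
```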
data_ner.zip DELETED
Binary file (5.11 MB)
 
data_ud.zip DELETED
Binary file (2.16 MB)
 
hr500k.py DELETED
@@ -1,316 +0,0 @@
- # coding=utf-8
- # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
- #
- # Licensed under the Apache License, Version 2.0 (the 'License');
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an 'AS IS' BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
-
-
- import os
-
- import datasets
-
-
- _CITATION = ''
- _DESCRIPTION = """The hr500k training corpus contains about 500,000 tokens manually annotated on the levels of
- tokenisation, sentence segmentation, morphosyntactic tagging, lemmatisation and named entities.
-
- On the sentence level, the dataset contains 20159 training samples, 1963 validation samples and 2672 test samples
- across the respective data splits. Each sample represents a sentence and includes the following features:
- sentence ID ('sent_id'), sentence text ('text'), list of tokens ('tokens'), list of lemmas ('lemmas'),
- list of Multext-East tags ('xpos_tags'), list of UPOS tags ('upos_tags'),
- list of morphological features ('feats'), and list of IOB tags ('iob_tags'). The 'upos_tags' and 'iob_tags' features
- are encoded as class labels.
- """
- _HOMEPAGE = 'https://www.clarin.si/repository/xmlui/handle/11356/1183#'
- _LICENSE = ''
-
- _URLs = {
-     'ner': 'https://huggingface.co/datasets/classla/hr500k/raw/main/data_ner.zip',
-     'upos': 'https://huggingface.co/datasets/classla/hr500k/raw/main/data_ner.zip',
-     'ud': 'https://huggingface.co/datasets/classla/hr500k/raw/main/data_ud.zip'
- }
-
- _DATA_DIRS = {
-     'ner': 'data_ner',
-     'upos': 'data_ner',
-     'ud': 'data_ud'
- }
-
-
- class Hr500K(datasets.GeneratorBasedBuilder):
-     VERSION = datasets.Version('1.0.1')
-
-     BUILDER_CONFIGS = [
-         datasets.BuilderConfig(
-             name='upos',
-             version=VERSION,
-             description=''
-         ),
-         datasets.BuilderConfig(
-             name='ner',
-             version=VERSION,
-             description=''
-         ),
-         datasets.BuilderConfig(
-             name='ud',
-             version=VERSION,
-             description=''
-         )
-     ]
-
-     DEFAULT_CONFIG_NAME = 'ner'
-
-     def _info(self):
-         if self.config.name == "upos":
-             features = datasets.Features(
-                 {
-                     'sent_id': datasets.Value('string'),
-                     'text': datasets.Value('string'),
-                     'tokens': datasets.Sequence(datasets.Value('string')),
-                     'lemmas': datasets.Sequence(datasets.Value('string')),
-                     'xpos_tags': datasets.Sequence(datasets.Value('string')),
-                     'upos_tags': datasets.Sequence(
-                         datasets.features.ClassLabel(
-                             names=[
-                                 'X',
-                                 'INTJ',
-                                 'VERB',
-                                 'PROPN',
-                                 'ADV',
-                                 'ADJ',
-                                 'PUNCT',
-                                 'PRON',
-                                 'DET',
-                                 'NUM',
-                                 'SYM',
-                                 'SCONJ',
-                                 'NOUN',
-                                 'AUX',
-                                 'PART',
-                                 'CCONJ',
-                                 'ADP'
-                             ]
-                         )
-                     ),
-                     'feats': datasets.Sequence(datasets.Value('string')),
-                     'iob_tags': datasets.Sequence(datasets.Value('string'))
-                 }
-             )
-         elif self.config.name == "ner":
-             features = datasets.Features(
-                 {
-                     'sent_id': datasets.Value('string'),
-                     'text': datasets.Value('string'),
-                     'tokens': datasets.Sequence(datasets.Value('string')),
-                     'lemmas': datasets.Sequence(datasets.Value('string')),
-                     'xpos_tags': datasets.Sequence(datasets.Value('string')),
-                     'upos_tags': datasets.Sequence(datasets.Value('string')),
-                     'feats': datasets.Sequence(datasets.Value('string')),
-                     'iob_tags': datasets.Sequence(
-                         datasets.features.ClassLabel(
-                             names=[
-                                 'I-org',
-                                 'B-misc',
-                                 'B-per',
-                                 'B-deriv-per',
-                                 'B-org',
-                                 'B-loc',
-                                 'I-deriv-per',
-                                 'I-misc',
-                                 'I-loc',
-                                 'I-per',
-                                 'O'
-                             ]
-                         )
-                     )
-                 }
-             )
-         else:
-             features = datasets.Features(
-                 {
-                     'sent_id': datasets.Value('string'),
-                     'text': datasets.Value('string'),
-                     'tokens': datasets.Sequence(datasets.Value('string')),
-                     'lemmas': datasets.Sequence(datasets.Value('string')),
-                     'xpos_tags': datasets.Sequence(datasets.Value('string')),
-                     'upos_tags': datasets.Sequence(datasets.Value('string')),
-                     'feats': datasets.Sequence(datasets.Value('string')),
-                     'iob_tags': datasets.Sequence(datasets.Value('string')),
-                     'uds': datasets.Sequence(
-                         datasets.features.ClassLabel(
-                             names=[
-                                 'det', 'aux_pass', 'list', 'cc', 'csubj', 'xcomp', 'nmod', 'dislocated', 'acl', 'fixed',
-                                 'obj', 'dep', 'advmod_emph', 'goeswith', 'advmod', 'nsubj', 'punct', 'amod', 'expl_pv',
-                                 'mark', 'obl', 'flat_foreign', 'conj', 'compound', 'expl', 'csubj_pass', 'appos',
-                                 'case', 'advcl', 'parataxis', 'iobj', 'root', 'cop', 'aux', 'orphan', 'discourse',
-                                 'nummod', 'nsubj_pass', 'vocative', 'flat', 'ccomp'
-                             ]
-                         )
-                     )
-                 }
-             )
-
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=features,
-             supervised_keys=None,
-             homepage=_HOMEPAGE,
-             license=_LICENSE,
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         data_dir = os.path.join(dl_manager.download_and_extract(_URLs[self.config.name]), _DATA_DIRS[self.config.name])
-
-         if self.config.name == 'ud':
-             training_file = 'train_ner_ud.conllup'
-             dev_file = 'dev_ner_ud.conllup'
-             test_file = 'test_ner_ud.conllup'
-         else:
-             training_file = 'train_ner.conllu'
-             dev_file = 'dev_ner.conllu'
-             test_file = 'test_ner.conllu'
-
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN, gen_kwargs={
-                     'filepath': os.path.join(data_dir, training_file),
-                     'split': 'train'}
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION, gen_kwargs={
-                     'filepath': os.path.join(data_dir, dev_file),
-                     'split': 'dev'}
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST, gen_kwargs={
-                     'filepath': os.path.join(data_dir, test_file),
-                     'split': 'test'}
-             ),
-         ]
-
-     def _generate_examples(self, filepath, split):
-         if self.config.name == 'ud':
-             with open(filepath, encoding='utf-8') as f:
-                 sent_id = ''
-                 text = ''
-                 tokens = []
-                 lemmas = []
-                 xpos_tags = []
-                 upos_tags = []
-                 feats = []
-                 iob_tags = []
-                 uds = []
-                 data_id = 0
-                 for line in f:
-                     if line and not line == '\n' and not line.startswith('# global.columns'):
-                         if line.startswith('#'):
-                             if line.startswith('# sent_id'):
-                                 if tokens:
-                                     yield data_id, {
-                                         'sent_id': sent_id,
-                                         'text': text,
-                                         'tokens': tokens,
-                                         'lemmas': lemmas,
-                                         'upos_tags': upos_tags,
-                                         'xpos_tags': xpos_tags,
-                                         'feats': feats,
-                                         'iob_tags': iob_tags,
-                                         'uds': uds
-                                     }
-                                     tokens = []
-                                     lemmas = []
-                                     upos_tags = []
-                                     xpos_tags = []
-                                     feats = []
-                                     iob_tags = []
-                                     uds = []
-                                     data_id += 1
-                                 sent_id = line.split(' = ')[1].strip()
-                             elif line.startswith('# text'):
-                                 text = line.split(' = ')[1].strip()
-                         elif not line.startswith('_'):
-                             splits = line.split('\t')
-                             tokens.append(splits[1].strip())
-                             lemmas.append(splits[2].strip())
-                             upos_tags.append(splits[3].strip())
-                             xpos_tags.append(splits[4].strip())
-                             feats.append(splits[5].strip())
-                             uds.append(splits[7].strip())
-
-                 yield data_id, {
-                     'sent_id': sent_id,
-                     'text': text,
-                     'tokens': tokens,
-                     'lemmas': lemmas,
-                     'upos_tags': upos_tags,
-                     'xpos_tags': xpos_tags,
-                     'feats': feats,
-                     'iob_tags': iob_tags,
-                     'uds': uds
-                 }
-         else:
-             with open(filepath, encoding='utf-8') as f:
-                 sent_id = ''
-                 text = ''
-                 tokens = []
-                 lemmas = []
-                 xpos_tags = []
-                 upos_tags = []
-                 feats = []
-                 iob_tags = []
-                 data_id = 0
-                 for line in f:
-                     if line and not line == '\n':
-                         if line.startswith('#'):
-                             if line.startswith('# sent_id'):
-                                 if tokens:
-                                     yield data_id, {
-                                         'sent_id': sent_id,
-                                         'text': text,
-                                         'tokens': tokens,
-                                         'lemmas': lemmas,
-                                         'upos_tags': upos_tags,
-                                         'xpos_tags': xpos_tags,
-                                         'feats': feats,
-                                         'iob_tags': iob_tags
-                                     }
-                                     tokens = []
-                                     lemmas = []
-                                     upos_tags = []
-                                     xpos_tags = []
-                                     feats = []
-                                     iob_tags = []
-                                     data_id += 1
-                                 sent_id = line.split(' = ')[1].strip()
-                             elif line.startswith('# text'):
-                                 text = line.split(' = ')[1].strip()
-                         elif not line.startswith('_'):
-                             splits = line.split('\t')
-                             tokens.append(splits[1].strip())
-                             lemmas.append(splits[2].strip())
-                             upos_tags.append(splits[3].strip())
-                             xpos_tags.append(splits[4].strip())
-                             feats.append(splits[5].strip())
-                             iob_tags.append(splits[9].strip())
-
-                 yield data_id, {
-                     'sent_id': sent_id,
-                     'text': text,
-                     'tokens': tokens,
-                     'lemmas': lemmas,
-                     'upos_tags': upos_tags,
-                     'xpos_tags': xpos_tags,
-                     'feats': feats,
-                     'iob_tags': iob_tags
-                 }
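The deleted loading script above groups hand-parsed CoNLL-U lines into per-sentence examples. Its core grouping logic can be distilled into a small standalone function (a simplified sketch keeping only a few columns; `parse_conllu_sentences` and the sample sentence are illustrative, not part of the original script):

```python
def parse_conllu_sentences(lines):
    """Group CoNLL-U lines into sentence dicts, mirroring the deleted
    script's logic: comment lines carry sent_id/text metadata, tab-separated
    lines carry one token each."""
    sent_id, text = '', ''
    tokens, lemmas, upos_tags = [], [], []
    for line in lines:
        if not line.strip():
            continue
        if line.startswith('# sent_id'):
            if tokens:  # flush the previous sentence
                yield {'sent_id': sent_id, 'text': text, 'tokens': tokens,
                       'lemmas': lemmas, 'upos_tags': upos_tags}
                tokens, lemmas, upos_tags = [], [], []
            sent_id = line.split(' = ')[1].strip()
        elif line.startswith('# text'):
            text = line.split(' = ')[1].strip()
        elif not line.startswith('#'):
            cols = line.split('\t')  # ID, FORM, LEMMA, UPOS, ...
            tokens.append(cols[1].strip())
            lemmas.append(cols[2].strip())
            upos_tags.append(cols[3].strip())
    if tokens:  # flush the final sentence
        yield {'sent_id': sent_id, 'text': text, 'tokens': tokens,
               'lemmas': lemmas, 'upos_tags': upos_tags}

# Invented two-token sample in the same layout.
sample = """\
# sent_id = s1
# text = Dobar dan
1\tDobar\tdobar\tADJ
2\tdan\tdan\tNOUN
""".splitlines()

sents = list(parse_conllu_sentences(sample))
```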
ner/hr500k-test.parquet ADDED
Binary file (833 kB)
 
ner/hr500k-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3d5fea550d85524a33b38dfdf9e385e63dbd48fe925244515b6b3c968a8db470
+ size 6632450
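Because the train parquet files are tracked by Git LFS, the diffs show the three-line pointer file rather than the data itself: the pointer spec version, the SHA-256 of the stored object, and its size in bytes. Such a pointer is trivial to parse (a minimal sketch; a real LFS client also validates the version line and key ordering):

```python
def parse_lfs_pointer(text):
    """Split a Git LFS pointer file into a {key: value} dict.
    Each line is 'key value' separated by a single space."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(' ')
        fields[key] = value
    return fields

# The pointer content shown in the diff above.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:3d5fea550d85524a33b38dfdf9e385e63dbd48fe925244515b6b3c968a8db470
size 6632450
"""
info = parse_lfs_pointer(pointer)
```

The `oid` is the SHA-256 digest of the actual parquet file, so a downloaded object can be verified by hashing it and comparing against the pointer.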
ner/hr500k-validation.parquet ADDED
Binary file (637 kB)
 
ud/hr500k-test.parquet ADDED
Binary file (287 kB)
 
ud/hr500k-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:556a4cfa1581490381722b940ec525b3be78422a0d635f8f369865623ab30bd3
+ size 2815194
ud/hr500k-validation.parquet ADDED
Binary file (244 kB)
 
upos/hr500k-test.parquet ADDED
Binary file (833 kB)
 
upos/hr500k-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:761456583dd078e7fb8c47600b724e9189614579a187867c9c9e25e9cfcc9ca9
+ size 6632773
upos/hr500k-validation.parquet ADDED
Binary file (637 kB)