Tasks: Token Classification
Modalities: Text
Formats: parquet
Languages: Russian
Size: 10K - 100K
Tags: nlp
License:
valerysukmanyuk committed: Update README.md
README.md (changed):
```diff
@@ -23,7 +23,7 @@ dataset_info:
     num_bytes: 377773.65490797546
     num_examples: 3782
   download_size: 3039375
-  dataset_size: 7554674
+  dataset_size: 7554674
 configs:
 - config_name: default
   data_files:
@@ -35,4 +35,17 @@ configs:
       path: data/test-*
 language:
 - ru
+task_categories:
+- token-classification
+tags:
+- nlp
+pretty_name: 'Twilight Tokenized: Russian NLP Dataset'
+size_categories:
+- 10K<n<100K
 ---
+
+# Dataset Details
+
+#### This dataset is made for fun and for ✨educational✨ purposes.
+
+#### It includes the tokenized text of Stephenie Meyer's novel "Breaking Dawn", with PoS tags, NER labels, token lemmas, and syntactic annotation.
```
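Since the updated card advertises a parquet-backed token-classification dataset with a `data/test-*` split, a minimal loading sketch may be useful. Note the repo id below is an assumption inferred from the committer's username and the card's `pretty_name`; the commit itself does not state it.

```python
# Minimal sketch: loading this dataset with the Hugging Face `datasets` library.
# ASSUMPTION: the repo id "valerysukmanyuk/twilight-tokenized" is inferred from
# the committer's username and the card's pretty_name; replace it with the
# dataset's actual id on the Hub.
from datasets import load_dataset

ds = load_dataset("valerysukmanyuk/twilight-tokenized", split="test")

print(ds)      # column names and row count (the card lists num_examples: 3782)
print(ds[0])   # one example: tokens plus the PoS / NER / lemma / syntax fields
```

Because the card's `configs` section maps the `test` split to `data/test-*`, passing `split="test"` should resolve directly to those parquet shards without a custom loading script.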