---
license: mit
language_bcp47:
- ru-RU
tags:
- spellchecking
language:
- ru
size_categories:
- 100K<n<1M
task_categories:
- text2text-generation
---
## Dataset Summary

This dataset is a set of samples for training and testing spell-checking, grammar-error-correction, and ungrammatical-text-detection models.

The dataset contains two splits:

- `test.json` contains samples hand-selected for evaluating model quality.
- `train.json` contains synthetic samples generated in various ways.

The dataset was originally created to test an internal spellchecker for a generative-poetry project, but it can also be useful in other projects, since it has no explicit specialization for poetry.
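Both splits are plain JSON files, so they can be read with the standard library alone. Below is a minimal loading sketch; the assumption that each file stores a single JSON array of record objects is based on the examples in this card, so adjust it if the actual layout is, for example, JSON Lines:

```python
import json


def load_split(path):
    """Load one split (test.json or train.json) into a list of dicts.

    Assumes the file holds a JSON array of sample objects, matching the
    record format shown in this card.
    """
    with open(path, encoding="utf-8") as f:
        return json.load(f)


# Usage sketch:
# samples = load_split("test.json")
# defective = [s for s in samples if s["label"] == 0]
```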
## Example

```json
{
    "id": 1483,
    "text": "Разучи стихов по больше",
    "fixed_text": "Разучи стихов побольше",
    "label": 0,
    "error_type": "Tokenization",
    "domain": "prose"
}
```
## Notes

- Using "е" instead of "ё" is not considered a text defect, so both "Зеленый клен еще цветет" and "Зелёный клён ещё цветёт" are acceptable.
- Incorrect letter case is not considered a defect. In particular, the first word of a sentence does not have to begin with a capital letter, so "Пушкин был поэтом" and "пушкин был поэтом" are equally acceptable. Likewise, emphasizing text through capitalization is not a defect, e.g. "Не говори ни ДА, ни НЕТ".
- The absence of a period, exclamation mark, or question mark at the end of a single sentence is not considered a defect.
- The test split contains only mistakes made by people; it includes no synthetic errors.
- The errors in the test split come from people who differ in gender, age, education, and social context.
- The input and output text is not necessarily a single sentence; it may be 1) part of a sentence, 2) an incomplete dialogue response, 3) several sentences, e.g. a paragraph, or 4) a fragment of a poem, usually one or two quatrains.
- The texts may include offensive phrases, phrases that offend religious or political feelings, fragments that contradict moral norms, etc. Such samples are included only to make the corpus as representative as possible for processing messages in media such as blogs and comments.
- One sample may contain several errors of different types.
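The leniency rules above (interchangeable "е"/"ё", case-insensitive comparison) matter when scoring a model's output against `fixed_text`. A minimal sketch of such a comparison; this is an illustration of the convention described in the notes, not the authors' official scoring code:

```python
def texts_equivalent(a: str, b: str) -> bool:
    """Compare two texts under the dataset's leniency rules:
    'е'/'ё' are interchangeable and letter case is ignored.
    """
    def norm(s: str) -> str:
        return s.lower().replace("ё", "е")
    return norm(a) == norm(b)


# Both spellings from the notes above compare as equivalent:
texts_equivalent("Зелёный клён ещё цветёт", "зеленый клен еще цветет")  # True
```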
## Poetry samples

The test part of the dataset includes texts of poems, which makes it unique among similar datasets for the Russian language:

```json
{
    "id": 24,
    "text": "Чему научит забытьё?\nСмерть формы д'арует литьё.\nРезец мгновенье любит стружка...\nСмерть безобидная подружка!",
    "fixed_text": null,
    "label": 0,
    "error_type": "Grammar",
    "domain": "poetry"
}
```
## Dataset fields

- `id` (int64): the sample's id, starting from 1.
- `text` (str): the original text (part of a sentence, a whole sentence, or several sentences).
- `fixed_text` (str): the corrected version of the original text.
- `label` (int64): the target class: `1` for "no defects", `0` for "contains defects" (stored as an integer in the JSON examples above).
- `error_type` (str): the violation category: Spelling, Grammar, Tokenization, Punctuation, Mixture, or Unknown.
- `domain` (str): the domain: "prose" or "poetry".
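A small sanity check for one record, derived from the field list above; the type expectations (integer `id` and `label`) follow the JSON examples in this card and are an assumption, not an official schema:

```python
EXPECTED_FIELDS = {"id", "text", "fixed_text", "label", "error_type", "domain"}


def validate_sample(sample: dict) -> bool:
    """Check that a record carries the documented fields with plausible values."""
    return (
        EXPECTED_FIELDS <= sample.keys()
        and isinstance(sample["id"], int)
        and sample["label"] in (0, 1)
        and sample["domain"] in ("prose", "poetry")
    )
```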
## Error types

**Tokenization**: a word is split into two tokens, or two words are merged into one.

```json
{
    "id": 6,
    "text": "Я подбираю по проще слова",
    "fixed_text": "Я подбираю попроще слова",
    "label": 0,
    "error_type": "Tokenization",
    "domain": "prose"
}
```
**Punctuation**: a missing or extra comma, hyphen, or other punctuation mark.

```json
{
    "id": 5,
    "text": "И швырнуть по-дальше",
    "fixed_text": "И швырнуть подальше",
    "label": 0,
    "error_type": "Punctuation",
    "domain": "prose"
}
```
**Spelling**: a word is misspelled.

```json
{
    "id": 38,
    "text": "И ведь что интересно, русские официально ни в одном крестовом позоде не участвовали.",
    "fixed_text": "И ведь что интересно, русские официально ни в одном крестовом походе не участвовали.",
    "label": 0,
    "error_type": "Spelling",
    "domain": "prose"
}
```
**Grammar**: one of the words is in the wrong grammatical form, for example a verb in the infinitive instead of a personal form.

```json
{
    "id": 61,
    "text": "на него никто не польститься",
    "fixed_text": "на него никто не польстится",
    "label": 0,
    "error_type": "Grammar",
    "domain": "prose"
}
```
Please note that the error categories are not always assigned accurately, so you should not use the `error_type` field to train classifiers.
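Given that caveat, a safer preprocessing step is to keep only the text and the binary label. A minimal sketch, assuming the record format shown in this card:

```python
def to_detection_pairs(samples):
    """Reduce records to (text, label) pairs for binary defect detection,
    deliberately discarding the noisy error_type field.
    """
    return [(s["text"], s["label"]) for s in samples]
```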
## Uncensoring samples

A number of samples contain obscenities masked with symbols; the corrected text restores the explicit form:

```json
{
    "id": 1,
    "text": "Но не простого - с лёгкой еб@нцой.",
    "fixed_text": "Но не простого - с лёгкой ебанцой.",
    "label": 0,
    "error_type": "Misspelling",
    "domain": "prose"
}
```
## Statistics for the test split

Number of samples per domain:

```
prose     25012
poetry     2500
```
Fix categories for the 'poetry' domain:

```
+-----------------------------+-------+-------+
| Category                    | Count | Share |
+-----------------------------+-------+-------+
| punctuation:redundant_comma |   955 |  0.35 |
|                             |   465 |  0.17 |
| tokenization:prefix↦↤word   |   420 |  0.15 |
| punctuation:missing_comma   |   354 |  0.13 |
| punctuation                 |   201 |  0.07 |
| spelling                    |   135 |  0.05 |
| grammar                     |   132 |  0.05 |
| не ↔ ни                     |    31 |  0.01 |
| spelling:ться ↔ тся         |    30 |  0.01 |
| tokenization:не|ни          |     5 |  0.0  |
| letter casing               |     2 |  0.0  |
+-----------------------------+-------+-------+
```

Number of edits required to obtain the corrected version of the text:

```
+-----------------+-------------------+------------------+
| Number of edits | Number of samples | Share of samples |
+-----------------+-------------------+------------------+
|               1 |               646 |             0.5  |
|               2 |               303 |             0.23 |
|               3 |               154 |             0.12 |
|               4 |                79 |             0.06 |
|               5 |                45 |             0.03 |
|               0 |                 2 |             0.0  |
|              >5 |                63 |             0.05 |
+-----------------+-------------------+------------------+
```
Fix categories for the 'prose' domain:

```
+-----------------------------+-------+-------+
| Category                    | Count | Share |
+-----------------------------+-------+-------+
|                             |  2592 |  0.34 |
| tokenization:prefix↦↤word   |  1691 |  0.22 |
| grammar                     |  1264 |  0.16 |
| spelling                    |   918 |  0.12 |
| punctuation                 |   447 |  0.06 |
| punctuation:missing_comma   |   429 |  0.06 |
| punctuation:redundant_comma |   147 |  0.02 |
| spelling:ться ↔ тся         |   118 |  0.02 |
| не ↔ ни                     |    77 |  0.01 |
| tokenization:не|ни          |    30 |  0.0  |
| letter casing               |    23 |  0.0  |
+-----------------------------+-------+-------+
```

Number of edits required to obtain the corrected version of the text:

```
+-----------------+-------------------+------------------+
| Number of edits | Number of samples | Share of samples |
+-----------------+-------------------+------------------+
|               1 |              5974 |             0.89 |
|               2 |               570 |             0.08 |
|               3 |               126 |             0.02 |
|               4 |                41 |             0.01 |
|               0 |                18 |             0.0  |
|               5 |                 9 |             0.0  |
|              >5 |                 5 |             0.0  |
+-----------------+-------------------+------------------+
```
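The card does not specify how its edit counts were computed. One plausible approximation, offered purely as an assumption, counts word-level replace/insert/delete operations with the standard library's `difflib`:

```python
import difflib


def count_edits(text: str, fixed_text: str) -> int:
    """Approximate the number of edits between the original and corrected
    text as word-level non-equal opcodes. This is an assumption about the
    metric, not the procedure used to build the tables above.
    """
    matcher = difflib.SequenceMatcher(a=text.split(), b=fixed_text.split())
    return sum(1 for op, *_ in matcher.get_opcodes() if op != "equal")


# One tokenization fix counts as a single edit:
count_edits("Я подбираю по проще слова", "Я подбираю попроще слова")  # 1
```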