Dataset Card for GEM/opusparcus
Link to Main Data Card
You can find the main data card on the GEM Website.
Dataset Summary
Opusparcus is a paraphrase corpus for six European languages: German, English, Finnish, French, Russian, and Swedish. The paraphrases consist of subtitles from movies and TV shows.
You can load the dataset via:
import datasets
data = datasets.load_dataset('GEM/opusparcus')
The data loader can be found here.
Dataset Overview
Where to find the Data and its Documentation
Webpage
Download
Paper
BibTex
@InProceedings{creutz:lrec2018,
title = {Open Subtitles Paraphrase Corpus for Six Languages},
author={Mathias Creutz},
booktitle={Proceedings of the 11th edition of the Language Resources and Evaluation Conference (LREC 2018)},
year={2018},
month = {May 7-12},
address = {Miyazaki, Japan},
editor = {Nicoletta Calzolari (Conference chair) and Khalid Choukri and Christopher Cieri and Thierry Declerck and Sara Goggi and Koiti Hasida and Hitoshi Isahara and Bente Maegaard and Joseph Mariani and Hélène Mazo and Asuncion Moreno and Jan Odijk and Stelios Piperidis and Takenobu Tokunaga},
publisher = {European Language Resources Association (ELRA)},
isbn = {979-10-95546-00-9},
language = {english},
url={http://www.lrec-conf.org/proceedings/lrec2018/pdf/131.pdf}
}
Contact Name
Mathias Creutz
Contact Email
firstname dot lastname at helsinki dot fi
Has a Leaderboard?
no
Languages and Intended Use
Multilingual?
yes
Covered Languages
German, English, Finnish, French, Russian, Swedish
Whose Language?
Opusparcus is a paraphrase corpus for six European languages: German, English, Finnish, French, Russian, and Swedish. The paraphrases consist of subtitles from movies and TV shows.
The data in Opusparcus has been extracted from OpenSubtitles2016, which is in turn based on data from OpenSubtitles.
License
cc-by-nc-4.0: Creative Commons Attribution Non Commercial 4.0 International
Intended Use
Opusparcus is a sentential paraphrase corpus for multiple languages containing colloquial language.
Primary Task
Paraphrasing
Communicative Goal
Models can be trained, e.g., for paraphrase detection and generation, that is, determining whether two given sentences mean the same thing or generating new paraphrases for a given sentence.
Credit
Who added the Dataset to GEM?
Mathias Creutz (University of Helsinki)
Dataset Structure
Data Fields
- sent1: a tokenized sentence
- sent2: another tokenized sentence, which is potentially a paraphrase of sent1
- annot_score: a value between 1.0 and 4.0 indicating how good a paraphrase pair sent1 and sent2 are (for the training sets, the value is 0.0, which indicates that no manual annotation has taken place)
- lang: language of this dataset
- gem_id: unique identifier of this entry

All fields are strings except annot_score, which is a float.
Reason for Structure
For each target language, the Opusparcus data have been partitioned into three types of data sets: training, validation and test sets. The training sets are large, consisting of millions of sentence pairs, and have been compiled automatically, with the help of probabilistic ranking functions. The development and test sets consist of sentence pairs that have been annotated manually; each set contains approximately 1000 sentence pairs that have been verified to be acceptable paraphrases by two independent annotators.
When you download Opusparcus, you must always indicate the language you want to retrieve, for instance:
data = load_dataset("GEM/opusparcus", lang="de")
The above command will download the validation and test sets for German. If, in addition, you want to retrieve training data, you also need to specify the desired level of quality, such as "French, with 90% quality of the training data":
data = load_dataset("GEM/opusparcus", lang="fr", quality=90)
The entries in the training sets have been ranked automatically by how likely they are paraphrases, best first, worst last. The quality parameter indicates the estimated proportion (in percent) of true
paraphrases in the training set. Allowed quality values range between 60 and 100, in increments of 5 (60, 65, 70, ..., 100). A value of 60 means that 60% of the sentence pairs in the training set are estimated to be true paraphrases (and the remaining 40% are not). A higher value produces a smaller but cleaner set. The smaller sets are subsets of the larger sets, such that the quality=95
set is a subset of quality=90
, which is a subset of quality=85
, and so on.
If omitted, the quality value defaults to 100, which matches no training data at all. This can be convenient if you are only interested in the validation and test sets, which are considerably smaller but manually annotated.
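The nested-subset behavior described above can be pictured with a small sketch: since the training pairs are ranked best-first, each quality level simply keeps a prefix of the ranking. The cutoff counts and pair names below are invented for illustration; the real sizes come from the corpus release.

```python
def subset_for_quality(ranked_pairs, cutoffs, quality):
    """Return the best-first prefix of the ranking for a quality level.

    `cutoffs` maps a quality level to the number of top-ranked pairs it
    keeps; a higher quality level keeps fewer (but cleaner) pairs.
    """
    return ranked_pairs[:cutoffs[quality]]

# Illustrative cutoffs only -- not the real corpus sizes.
cutoffs = {60: 8, 75: 5, 95: 2}
ranked = ["pair1", "pair2", "pair3", "pair4",
          "pair5", "pair6", "pair7", "pair8"]

# Every higher-quality set is a subset of every lower-quality set.
assert set(subset_for_quality(ranked, cutoffs, 95)) <= set(subset_for_quality(ranked, cutoffs, 75))
assert set(subset_for_quality(ranked, cutoffs, 75)) <= set(subset_for_quality(ranked, cutoffs, 60))
```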
Note that, as an alternative to typing the parameter values explicitly, you can use configuration names instead. The following commands are equivalent to the ones above:
data = load_dataset("GEM/opusparcus", "de.100")
data = load_dataset("GEM/opusparcus", "fr.90")
How were labels chosen?
Annotators have used the following scores to label sentence pairs in the test and validation sets:
4: Good example of paraphrases (Dark green button in the annotation tool): The two sentences can be used in the same situation and essentially "mean the same thing".
3: Mostly good example of paraphrases (Light green button in the annotation tool): It is acceptable to think that the two sentences refer to the same thing, although one sentence might be more specific than the other one, or there are differences in style, such as polite form versus familiar form.
2: Mostly bad example of paraphrases (Yellow button in the annotation tool): There is some connection between the sentences that explains why they occur together, but one would not really consider them to mean the same thing.
1: Bad example of paraphrases (Red button in the annotation tool): There is no obvious connection. The sentences mean different things.
If the two annotators fully agreed on the category, the value in the annot_score
field is 4.0, 3.0, 2.0 or 1.0. If the two annotators chose adjacent categories, the value in this field will be 3.5, 2.5 or
1.5. For instance, a value of 2.5 means that one annotator gave a score of 3 ("mostly good"), indicating a possible paraphrase pair, whereas the other annotator scored this as a 2 ("mostly bad"), that is, unlikely to be a paraphrase pair. If the annotators disagreed by more than one category, the sentence pair was discarded and won't show up in the datasets.
The training sets were not annotated manually. This is indicated by
the value 0.0 in the annot_score
field.
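The scoring rule above can be expressed in a few lines. This is an illustrative reimplementation of the stated rule, not the annotation tool's actual code:

```python
def combine_annotations(score1, score2):
    """Combine two annotator categories (integers 1-4) into an annot_score.

    Returns the mean when the two categories agree or are adjacent, and
    None when they differ by more than one category (such pairs were
    discarded and do not appear in the released sets).
    """
    if abs(score1 - score2) > 1:
        return None
    return (score1 + score2) / 2

combine_annotations(3, 2)   # 2.5: "mostly good" vs. "mostly bad"
combine_annotations(4, 4)   # 4.0: full agreement
combine_annotations(4, 2)   # None: pair discarded
```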
For an assessment of inter-annotator agreement, see Aulamo et al. (2019). Annotation of subtitle paraphrases using a new web tool. In Proceedings of the Digital Humanities in the Nordic Countries 4th Conference, Copenhagen, Denmark.
Example Instance
{'annot_score': 4.0, 'gem_id': 'gem-opusparcus-test-1587', 'lang': 'en', 'sent1': "I haven 't been contacted by anybody .", 'sent2': "Nobody 's contacted me ."}
Data Splits
The data is split into training, validation and test sets. The validation and test sets come in two versions, the regular validation and test sets and the full sets, called validation.full and test.full. The full sets contain all sentence pairs successfully annotated by the annotators, including the sentence pairs that were rejected as paraphrases. The annotation scores of the full sets thus range between 1.0 and 4.0. The regular validation and test sets only contain sentence pairs that qualify as paraphrases, scored between 3.0 and 4.0 by the annotators.
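In other words, a regular set can be recovered from the corresponding full set by thresholding on annot_score. A minimal sketch, assuming examples are plain dicts shaped like the loader's output (the second example pair below is made up for illustration):

```python
def regular_split(full_examples):
    """Keep only pairs annotated as acceptable paraphrases (score >= 3.0)."""
    return [ex for ex in full_examples if ex["annot_score"] >= 3.0]

full = [
    {"sent1": "No promises , okay ?",
     "sent2": "I 'm not promising anything .", "annot_score": 3.0},
    {"sent1": "Where are you ?",          # invented example of a rejected pair
     "sent2": "Who are you ?", "annot_score": 1.5},
]
regular = regular_split(full)  # keeps only the first pair
```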
The number of sentence pairs in each data split is as follows for each language. For the train split, the range from the smallest (quality=95) to the largest (quality=60) configuration is shown.

| lang | train | valid | test | valid.full | test.full |
|---|---|---|---|---|---|
| de | 0.59M .. 13M | 1013 | 1047 | 1582 | 1586 |
| en | 1.0M .. 35M | 1015 | 982 | 1455 | 1445 |
| fi | 0.48M .. 8.9M | 963 | 958 | 1760 | 1749 |
| fr | 0.94M .. 22M | 997 | 1007 | 1630 | 1674 |
| ru | 0.15M .. 15M | 1020 | 1068 | 1854 | 1855 |
| sv | 0.24M .. 4.5M | 984 | 947 | 1887 | 1901 |
As a concrete example, loading the English data requesting 95% quality of the train split produces the following:
>>> data = load_dataset("GEM/opusparcus", lang="en", quality=95)
>>> data
DatasetDict({
test: Dataset({
features: ['lang', 'sent1', 'sent2', 'annot_score', 'gem_id'],
num_rows: 982
})
validation: Dataset({
features: ['lang', 'sent1', 'sent2', 'annot_score', 'gem_id'],
num_rows: 1015
})
test.full: Dataset({
features: ['lang', 'sent1', 'sent2', 'annot_score', 'gem_id'],
num_rows: 1445
})
validation.full: Dataset({
features: ['lang', 'sent1', 'sent2', 'annot_score', 'gem_id'],
num_rows: 1455
})
train: Dataset({
features: ['lang', 'sent1', 'sent2', 'annot_score', 'gem_id'],
num_rows: 1000000
})
})
>>> data["test"][0]
{'annot_score': 4.0, 'gem_id': 'gem-opusparcus-test-1587', 'lang': 'en', 'sent1': "I haven 't been contacted by anybody .", 'sent2': "Nobody 's contacted me ."}
>>> data["validation"][2]
{'annot_score': 3.0, 'gem_id': 'gem-opusparcus-validation-1586', 'lang': 'en', 'sent1': 'No promises , okay ?', 'sent2': "I 'm not promising anything ."}
>>> data["train"][1000]
{'annot_score': 0.0, 'gem_id': 'gem-opusparcus-train-12501001', 'lang': 'en', 'sent1': 'Am I beautiful ?', 'sent2': 'Am I pretty ?'}
Splitting Criteria
The validation and test sets have been annotated manually, but the training sets have been produced using automatic scoring and come in different size configurations depending on the desired quality level. (See above descriptions and examples for more details.)
Please note that previous work suggests that a larger and noisier training set is better than a smaller and clean set. See Sjöblom et al. (2018). Paraphrase Detection on Noisy Subtitles in Six Languages. In Proceedings of the 2018 EMNLP Workshop W-NUT: The 4th Workshop on Noisy User-generated Text, and Vahtola et al. (2021). Coping with Noisy Training Data Labels in Paraphrase Detection. In Proceedings of the 7th Workshop on Noisy User-generated Text.
Dataset in GEM
Rationale for Inclusion in GEM
Why is the Dataset in GEM?
Opusparcus provides examples of sentences that mean the same thing or have very similar meaning. Sentences are available in six languages and the style is colloquial language.
Similar Datasets
yes
Unique Language Coverage
yes
Difference from other GEM datasets
There is another data set containing manually labeled Finnish paraphrases.
Ability that the Dataset measures
Sentence meaning
GEM-Specific Curation
Modified for GEM?
yes
GEM Modifications
other
Modification Details
Training sets have been prepared for each of the quality levels 60% – 95%.
In the original release, this task was left to the user of the data.
Additional Splits?
yes
Split Information
There are two versions of the validation and test sets: the regular sets, which contain only positive examples of paraphrases, and the full sets, which contain all examples.
Split Motivation
In the original release, only the full validation and test sets were supplied. The "regular sets" have been added in order to make it easier to test on true paraphrases only.
Getting Started with the Task
Pointers to Resources
Creutz (2018). Open Subtitles Paraphrase Corpus for Six Languages, Proceedings of the 11th edition of the Language Resources and Evaluation Conference (LREC 2018).
Sjöblom et al. (2018). Paraphrase Detection on Noisy Subtitles in Six Languages. In Proceedings of the 2018 EMNLP Workshop W-NUT: The 4th Workshop on Noisy User-generated Text.
Aulamo et al. (2019). Annotation of subtitle paraphrases using a new web tool. In Proceedings of the Digital Humanities in the Nordic Countries 4th Conference.
Sjöblom et al. (2020). Paraphrase Generation and Evaluation on Colloquial-Style Sentences, Proceedings of the 12th Language Resources and Evaluation Conference (LREC).
Vahtola et al. (2021). Coping with Noisy Training Data Labels in Paraphrase Detection. In Proceedings of the 7th Workshop on Noisy User-generated Text.
Previous Results
Previous Results
Measured Model Abilities
Sentence meaning
In a scenario of paraphrase detection, the model determines whether two given sentences carry approximately the same meaning.
In a scenario of paraphrase generation, the model generates a potential paraphrase of a given sentence.
Metrics
BLEU
, BERT-Score
, Other: Other Metrics
Other Metrics
PINC
Proposed Evaluation
The metrics mentioned above can be used to assess how well a generated paraphrase corresponds to a given reference sentence. The PINC score additionally assesses how different the surface forms are.
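As a rough sketch of the idea behind PINC (not the metric's reference implementation): for each n-gram order, compute the fraction of the candidate's n-grams that do not occur in the source sentence, and average over orders. Higher scores indicate more dissimilar surface forms:

```python
def ngrams(tokens, n):
    """Set of n-grams (as tuples) over a token list."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def pinc(source, candidate, max_n=4):
    """PINC-style score: average, over n-gram orders, of the fraction of
    the candidate's n-grams that do NOT appear in the source.
    Ranges from 0.0 (identical surface form) to 1.0 (no shared n-grams)."""
    src, cand = source.split(), candidate.split()
    scores = []
    for n in range(1, max_n + 1):
        cand_ngrams = ngrams(cand, n)
        if not cand_ngrams:
            continue  # candidate shorter than n
        overlap = len(cand_ngrams & ngrams(src, n))
        scores.append(1 - overlap / len(cand_ngrams))
    return sum(scores) / len(scores) if scores else 0.0

pinc("a b c d", "a b c d")   # 0.0: identical sentences
pinc("a b", "c d")           # 1.0: nothing shared
```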
Previous results available?
yes
Other Evaluation Approaches
See publications on using Opusparcus
Relevant Previous Results
Sjöblom et al. (2020). Paraphrase Generation and Evaluation on Colloquial-Style Sentences, Proceedings of the 12th Language Resources and Evaluation Conference (LREC).
Dataset Curation
Original Curation
Original Curation Rationale
Opusparcus was created in order to produce a sentential paraphrase corpus for multiple languages containing colloquial language (as opposed to news or religious text, for instance).
Communicative Goal
Opusparcus provides labeled examples of pairs of sentences that have similar (or dissimilar) meanings.
Sourced from Different Sources
no
Language Data
How was Language Data Obtained?
Crowdsourced
Where was it crowdsourced?
Other crowdworker platform
Language Producers
The data in Opusparcus has been extracted from OpenSubtitles2016, which is in turn based on data from OpenSubtitles.org.
The texts consist of subtitles that have been produced using crowdsourcing.
Topics Covered
The language is representative of movies and TV shows. Domains covered include comedy, drama, relationships, suspense, etc.
Data Validation
validated by data curator
Data Preprocessing
Sentence and word tokenization was performed.
Was Data Filtered?
algorithmically
Filter Criteria
The sentence pairs in the training sets were ordered automatically based on the estimated likelihood that the sentences were paraphrases, most likely paraphrases on the top, and least likely paraphrases on the bottom.
The validation and test sets were checked and annotated manually, but the sentence pairs selected for annotation had to be different enough in terms of minimum edit distance (Levenshtein distance). This ensured that annotators would not spend their time annotating pairs of more or less identical sentences.
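A minimal sketch of such a pre-filter: the token-level edit distance computation is standard, but the threshold value here is an assumption for illustration, not the one used for the corpus.

```python
def levenshtein(a, b):
    """Minimum edit distance between two token sequences (single-row DP)."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + (x != y)))  # substitution
        prev = curr
    return prev[-1]

def different_enough(sent1, sent2, min_dist=3):
    """Hypothetical pre-filter: pass a pair on to annotators only if the
    tokenized sentences differ by at least `min_dist` edits."""
    return levenshtein(sent1.split(), sent2.split()) >= min_dist

different_enough("a b c", "a b c")      # False: identical, skip annotation
different_enough("a b c d", "w x y z")  # True: clearly different surface forms
```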
Structured Annotations
Additional Annotations?
expert created
Number of Raters
11<n<50
Rater Qualifications
Students and staff at the University of Helsinki (native or very proficient speakers of the target languages)
Raters per Training Example
0
Raters per Test Example
2
Annotation Service?
no
Annotation Values
The development and test sets consist of sentence pairs that have been annotated manually; each set contains approximately 1000 sentence pairs that have been verified to be acceptable paraphrases by two independent annotators.
The annot_score
field reflects the judgments made by the annotators. If the annotators fully agreed on the category (4.0: dark green, 3.0: light green, 2.0: yellow, 1.0: red), the value of annot_score
is 4.0, 3.0, 2.0 or 1.0. If the annotators chose adjacent categories, the value in this field will be 3.5, 2.5 or 1.5. For instance, a value of 2.5 means that one annotator gave a score of 3 ("mostly good"), indicating a possible paraphrase pair, whereas the other annotator scored this as a 2 ("mostly bad"), that is, unlikely to be a paraphrase pair. If the annotators disagreed by more than one category, the sentence pair was discarded and won't show up in the datasets.
Annotators could also reject a sentence pair as being corrupted data.
Any Quality Control?
validated by another rater
Quality Control Details
If the annotators disagreed by more than one category, the sentence pair was discarded and is not part of the final dataset.
Consent
Any Consent Policy?
no
Private Identifying Information (PII)
Contains PII?
yes/very likely
Any PII Identification?
no identification
Maintenance
Any Maintenance Plan?
no
Broader Social Context
Previous Work on the Social Impact of the Dataset
Usage of Models based on the Data
no
Impact on Under-Served Communities
Addresses needs of underserved Communities?
no
Discussion of Biases
Any Documented Social Biases?
no
Are the Language Producers Representative of the Language?
What social bias there may be in the subtitles in this dataset has not been studied.
Considerations for Using the Data
PII Risks and Liability
Potential PII Risk
The data only contains subtitles of publicly available movies and TV shows.
Licenses
Copyright Restrictions on the Dataset
non-commercial use only
Copyright Restrictions on the Language Data
non-commercial use only
Known Technical Limitations
Technical Limitations
Some subtitles contain typos that are caused by inaccurate OCR.
Unsuited Applications
The models might memorize individual subtitles of existing movies and TV shows, but there is no context across sentence boundaries in the data.
Discouraged Use Cases
A general issue with paraphrasing is that very small modifications in the surface form might produce valid paraphrases, which are however rather uninteresting. It is more valuable to produce paraphrases with clearly different surface realizations (e.g., measured using minimum edit distance).