---
dataset_info:
features:
- name: lp
dtype: large_string
- name: src
dtype: large_string
- name: mt
dtype: large_string
- name: ref
dtype: large_string
- name: raw
dtype: float64
- name: domain
dtype: large_string
- name: year
dtype: int64
- name: sents
dtype: int32
splits:
- name: train
num_bytes: 36666470784
num_examples: 7650287
- name: test
num_bytes: 283829719
num_examples: 59235
download_size: 23178699933
dataset_size: 36950300503
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
license: apache-2.0
size_categories:
- 1M<n<10M
language:
- bn
- cs
- de
- en
- et
- fi
- fr
- gu
- ha
- hi
- is
- ja
- kk
- km
- lt
- lv
- pl
- ps
- ru
- ta
- tr
- uk
- xh
- zh
- zu
tags:
- mt-evaluation
- WMT
- 41-lang-pairs
---
## Dataset Summary
A long-context, document-level dataset for Quality Estimation of Machine Translation.
It is an augmented variant of the sentence-level WMT DA Human Evaluation dataset: in addition to individual sentences, it contains augmented examples of 2, 4, 8, 16, and 32 concatenated sentences, constructed within each language pair `lp` and `domain`.
For augmented examples, the `raw` column is a weighted average of the sentence-level scores, using the character lengths of `src` and `mt` as weights.
The code used to apply the augmentation can be found here.
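As a rough illustration of that weighting, here is a minimal sketch (not the official augmentation code; treating each sentence's weight as the sum of its `src` and `mt` character lengths is an assumption):

```python
# Minimal sketch of the character-length weighting described above.
# Assumption: each sentence's weight is the combined character length
# of its source and its machine translation.
def weighted_doc_score(scores, src_sents, mt_sents):
    weights = [len(s) + len(m) for s, m in zip(src_sents, mt_sents)]
    return sum(w * x for w, x in zip(weights, scores)) / sum(weights)
```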
This dataset contains all DA human annotations from previous WMT News Translation shared tasks.
It extends the sentence-level dataset RicardoRei/wmt-da-human-evaluation and is split into `train` and `test`.
Moreover, the `raw` column is normalized to be between 0 and 1 using this function.
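The linked function is the authoritative normalization; as a purely hypothetical stand-in, a simple min-max scaling would look like this:

```python
# Hypothetical min-max scaling into [0, 1]; the dataset's actual
# normalization function (linked above) may differ.
def min_max_normalize(raw_scores):
    lo, hi = min(raw_scores), max(raw_scores)
    return [(x - lo) / (hi - lo) for x in raw_scores]
```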
The data is organised into 8 columns (see the illustrative record after this list):
- `lp`: language pair
- `src`: input (source) text
- `mt`: machine translation
- `ref`: reference translation
- `raw`: direct assessment (DA) score
- `domain`: domain of the input text (e.g. news)
- `year`: collection year
- `sents`: number of sentences in the text
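A hypothetical record illustrating the schema (all values below are invented):

```python
example = {
    "lp": "en-de",                            # language pair
    "src": "Hello world. How are you?",       # source text
    "mt": "Hallo Welt. Wie geht es dir?",     # machine translation
    "ref": "Hallo Welt. Wie geht es Ihnen?",  # reference translation
    "raw": 0.82,                              # normalized DA score in [0, 1]
    "domain": "news",                         # domain of the source text
    "year": 2022,                             # collection year
    "sents": 2,                               # number of sentences in the text
}
```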
You can also find the original data for each year on the corresponding results page, https://www.statmt.org/wmt{YEAR}/results.html, e.g. for the 2020 data: https://www.statmt.org/wmt20/results.html
## Python usage

```python
from datasets import load_dataset

dataset = load_dataset("ymoslem/wmt-da-human-evaluation-long-context")
```
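Both predefined splits can be accessed directly:

```python
train = dataset["train"]  # 7,650,287 examples
test = dataset["test"]    # 59,235 examples
```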
You can also easily filter the dataset by year, language pair, or domain, e.g.:
```python
# filter by year
data = dataset.filter(lambda example: example["year"] == 2022)

# filter by language pair
data = dataset.filter(lambda example: example["lp"] == "en-de")

# filter by domain
data = dataset.filter(lambda example: example["domain"] == "news")
```
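Since each example records its length in sentences, you can also select a specific context length, e.g. the 32-sentence augmentations:

```python
# keep only the longest (32-sentence) augmented documents
long_docs = dataset.filter(lambda example: example["sents"] == 32)
```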
Note that most data is from the News domain.
## Citation Information

If you use this data, please cite the WMT findings from previous years:
- Findings of the 2017 Conference on Machine Translation (WMT17)
- Findings of the 2018 Conference on Machine Translation (WMT18)
- Findings of the 2019 Conference on Machine Translation (WMT19)
- Findings of the 2020 Conference on Machine Translation (WMT20)
- Findings of the 2021 Conference on Machine Translation (WMT21)
- Findings of the 2022 Conference on Machine Translation (WMT22)