---
dataset_info:
  features:
  - name: lp
    dtype: large_string
  - name: src
    dtype: large_string
  - name: mt
    dtype: large_string
  - name: ref
    dtype: large_string
  - name: raw
    dtype: float64
  - name: domain
    dtype: large_string
  - name: year
    dtype: int64
  - name: sents
    dtype: int32
  splits:
  - name: train
    num_bytes: 36666470784
    num_examples: 7650287
  - name: test
    num_bytes: 283829719
    num_examples: 59235
  download_size: 23178699933
  dataset_size: 36950300503
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
license: apache-2.0
size_categories:
- 1M<n<10M
language:
- bn
- cs
- de
- en
- et
- fi
- fr
- gu
- ha
- hi
- is
- ja
- kk
- km
- lt
- lv
- pl
- ps
- ru
- ta
- tr
- uk
- xh
- zh
- zu
tags:
- mt-evaluation
- WMT
- 41-lang-pairs
---

# Dataset Summary

**Long-context / document-level** dataset for Quality Estimation of Machine Translation.
It is an augmented variant of the sentence-level WMT DA Human Evaluation dataset.
In addition to individual sentences, it contains augmented examples of 2, 4, 8, 16, and 32 concatenated sentences, constructed within each language pair (`lp`) and `domain`.
The `raw` column is a weighted average of the scores of the concatenated sentences, using the character lengths of `src` and `mt` as weights.
The code used to apply the augmentation can be found [here](https://github.com/ymoslem/datasets/blob/main/LongContextQE/Long-Context-MT-QE-WMT.ipynb).
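As a rough illustration of that weighting, the combination can be sketched as follows (a sketch only, not the code from the notebook above; treating each segment's weight as `len(src) + len(mt)` is an assumption here):

```python
# Sketch of the length-weighted score averaging described above.
# Assumption: each segment's weight is len(src) + len(mt).
def weighted_raw(srcs, mts, raws):
    weights = [len(s) + len(m) for s, m in zip(srcs, mts)]
    return sum(w * r for w, r in zip(weights, raws)) / sum(weights)

# Two segments: the longer one dominates the combined score.
score = weighted_raw(
    ["Hi.", "A noticeably longer source segment."],
    ["Hallo.", "Ein deutlich laengeres Segment."],
    [1.0, 0.5],
)
```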

This dataset contains all DA human annotations from previous WMT News Translation shared tasks.
It extends the sentence-level dataset [RicardoRei/wmt-da-human-evaluation](https://huggingface.co/datasets/RicardoRei/wmt-da-human-evaluation), split into `train` and `test`.
Moreover, the `raw` column is normalized to the range 0–1.

The data is organised into 8 columns:
- `lp`: language pair
- `src`: source (input) text
- `mt`: machine translation
- `ref`: reference translation
- `raw`: direct assessment (DA) score
- `domain`: domain of the input text (e.g. news)
- `year`: collection year
- `sents`: number of sentences in the text
  
You can also find the original data for each year on the corresponding WMT results page: https://www.statmt.org/wmt{YEAR}/results.html. For example, for the 2020 data: [https://www.statmt.org/wmt20/results.html](https://www.statmt.org/wmt20/results.html)
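A single record therefore has the following shape (the values below are illustrative, not taken from the dataset):

```python
# Illustrative record matching the schema above; all values are made up.
example = {
    "lp": "en-de",
    "src": "The weather is nice today. We went for a walk.",
    "mt": "Das Wetter ist heute schoen. Wir sind spazieren gegangen.",
    "ref": "Das Wetter ist heute schoen. Wir gingen spazieren.",
    "raw": 0.87,    # normalized DA score in [0, 1]
    "domain": "news",
    "year": 2022,
    "sents": 2,     # number of sentences concatenated in this example
}
```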

## Python usage:

```python
from datasets import load_dataset
dataset = load_dataset("ymoslem/wmt-da-human-evaluation-long-context")
```

In addition to the provided `train`/`test` split, you can easily filter the data by year, language pair, or domain, e.g.:

```python
# split by year
data = dataset.filter(lambda example: example["year"] == 2022)

# split by LP
data = dataset.filter(lambda example: example["lp"] == "en-de")

# split by domain
data = dataset.filter(lambda example: example["domain"] == "news")
```
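The same pattern works for the `sents` column, e.g. to keep only longer, document-level examples (shown here on plain Python records so the snippet runs without downloading the dataset; in practice use `dataset.filter` as above):

```python
# Toy records mimicking the schema above.
records = [
    {"lp": "en-de", "sents": 1},
    {"lp": "en-de", "sents": 8},
    {"lp": "cs-en", "sents": 32},
]

# keep only examples built from 8 or more sentences
long_docs = [r for r in records if r["sents"] >= 8]
```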

Note that most of the data comes from the news domain.


## Citation Information

If you use this data, please cite the WMT findings papers from the relevant years:
- [Findings of the 2017 Conference on Machine Translation (WMT17)](https://aclanthology.org/W17-4717.pdf)
- [Findings of the 2018 Conference on Machine Translation (WMT18)](https://aclanthology.org/W18-6401.pdf)
- [Findings of the 2019 Conference on Machine Translation (WMT19)](https://aclanthology.org/W19-5301.pdf)
- [Findings of the 2020 Conference on Machine Translation (WMT20)](https://aclanthology.org/2020.wmt-1.1.pdf)
- [Findings of the 2021 Conference on Machine Translation (WMT21)](https://aclanthology.org/2021.wmt-1.1.pdf)
- [Findings of the 2022 Conference on Machine Translation (WMT22)](https://aclanthology.org/2022.wmt-1.1.pdf)