---
pipeline_tag: translation
language:
  - multilingual
  - af
  - am
  - ar
  - en
  - fr
  - ha
  - ig
  - mg
  - ny
  - om
  - pcm
  - rn
  - rw
  - sn
  - so
  - st
  - sw
  - xh
  - yo
  - zu
license: apache-2.0
---

This is an [AfriCOMET-STL (single-task)](https://github.com/masakhane-io/africomet) evaluation model: it receives a triplet (source sentence, translation, reference translation) and returns a score that reflects the quality of the translation compared to both the source and the reference.

# Paper

[AfriMTE and AfriCOMET: Empowering COMET to Embrace Under-resourced African Languages](https://arxiv.org/abs/2311.09828) (Wang et al., arXiv 2023)

# License

Apache-2.0

# Usage (AfriCOMET)

Using this model requires unbabel-comet to be installed:

```bash
pip install --upgrade pip  # ensures that pip is current 
pip install unbabel-comet
```

Then you can use it through the comet CLI:

```bash
comet-score -s {source-inputs}.txt -t {translation-outputs}.txt -r {references}.txt --model masakhane/africomet-stl
```

Or you can score directly from Python:

```python
from comet import download_model, load_from_checkpoint

model_path = download_model("masakhane/africomet-stl")
model = load_from_checkpoint(model_path)
data = [
    {
        "src": "Nadal sàkọọ́lẹ̀ ìforígbárí o ní àmì méje sóódo pẹ̀lú ilẹ̀ Canada.",
        "mt": "Nadal's head to head record against the Canadian is 7–2.",
        "ref": "Nadal scored seven unanswered points against Canada."
    },
    {
        "src": "Laipe yi o padanu si Raoniki ni ere Sisi Brisbeni.",
        "mt": "He recently lost against Raonic in the Brisbane Open.",
        "ref": "He recently lost to Raoniki in the game Sisi Brisbeni."
    }
]
model_output = model.predict(data, batch_size=8, gpus=1)
print(model_output)
```

# Intended uses

Our model is intended to be used for **MT evaluation**.

Given a triplet (source sentence, translation, reference translation), it outputs a single score between 0 and 1, where 1 represents a perfect translation.
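As an illustration, the prediction object returned by `model.predict` in recent `unbabel-comet` releases exposes segment-level scores alongside a corpus-level score. A minimal sketch, assuming that standard `Prediction` interface (`scores` and `system_score`):

```python
from comet import download_model, load_from_checkpoint

model = load_from_checkpoint(download_model("masakhane/africomet-stl"))
data = [
    {
        "src": "Laipe yi o padanu si Raoniki ni ere Sisi Brisbeni.",
        "mt": "He recently lost against Raonic in the Brisbane Open.",
        "ref": "He recently lost to Raoniki in the game Sisi Brisbeni.",
    }
]
output = model.predict(data, batch_size=8, gpus=0)  # gpus=0 runs on CPU

for sample, score in zip(data, output.scores):
    # Each segment score lies in [0, 1]; higher means a better translation.
    print(f"{score:.3f}\t{sample['mt']}")

# Corpus-level quality (typically the mean of the segment scores).
print("system score:", output.system_score)
```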

# Languages Covered

This model builds on top of AfroXLMR, which covers the following languages:

Afrikaans, Arabic, Amharic, English, French, Hausa, Igbo, Malagasy, Chichewa, Oromo, Nigerian-Pidgin, Kinyarwanda, Kirundi, Shona, Somali, Sesotho, Swahili, isiXhosa, Yoruba, and isiZulu. 

Thus, results for language pairs containing uncovered languages are unreliable!
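As a simple guard, the language pair can be checked against this list before the scores are used. A minimal sketch with a hypothetical helper (`COVERED` and `is_covered` are not part of `unbabel-comet`):

```python
# Hypothetical helper: reject language pairs outside the AfroXLMR coverage listed above.
COVERED = {
    "af", "am", "ar", "en", "fr", "ha", "ig", "mg", "ny", "om",
    "pcm", "rn", "rw", "sn", "so", "st", "sw", "xh", "yo", "zu",
}

def is_covered(src_lang: str, tgt_lang: str) -> bool:
    """Return True only if both sides of the pair are covered languages."""
    return src_lang in COVERED and tgt_lang in COVERED

print(is_covered("yo", "en"))  # True: Yoruba-English is covered
print(is_covered("de", "en"))  # False: German is uncovered, so scores are unreliable
```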