---
library_name: transformers
language:
- ru
- lez
license: apache-2.0
datasets:
- leks-forever/bible-lezghian-russian
metrics:
- bleu
base_model:
- google/mt5-base
pipeline_tag: translation
tags:
- translation
- lezghian
- caucasus
- mt5-base
---

# Model Card for leks-forever/mt5-base

This version of Google's mT5-base model has been fine-tuned on a bilingual dataset of Russian and Lezghian sentences to improve translation quality in both directions (Russian to Lezghian and Lezghian to Russian). The model is designed to produce accurate, high-quality translations between these two languages.

* Architecture: sequence-to-sequence Transformer (mT5-base).
* Languages Supported: Russian and Lezghian; fine-tuning targets translation accuracy in both directions.
* Use Cases: machine translation between Russian and Lezghian, as well as applications requiring automated translation for this language pair, such as support systems, chatbots, or content localization.



### Model Description


- **Developed by:** Leks Forever Team
- **Language(s) (NLP):** Lezghian, Russian 
- **License:** Apache 2.0
- **Finetuned from model:** [google/mt5-base](https://huggingface.co/google/mt5-base)


### Model Sources

- **Repository:** https://github.com/leks-forever/mt5-tuning

### Model Prefixes

- `"translate Russian to Lezghian: "` for Russian to Lezghian (Ru-Lez)
- `"translate Lezghian to Russian: "` for Lezghian to Russian (Lez-Ru)

## How to Get Started with the Model

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained("leks-forever/mt5-base")
tokenizer = AutoTokenizer.from_pretrained("leks-forever/mt5-base")

def predict(text, prefix, a=32, b=3, max_input_length=1024, num_beams=1, **kwargs):
    # Prepend the task prefix and tokenize the input sentence.
    inputs = tokenizer(prefix + text, return_tensors='pt', padding=True, truncation=True, max_length=max_input_length)
    result = model.generate(
        **inputs.to(model.device),
        # Output budget grows linearly with input length: a + b * input_tokens.
        max_new_tokens=int(a + b * inputs.input_ids.shape[1]),
        num_beams=num_beams,
        **kwargs
    )
    return tokenizer.batch_decode(result, skip_special_tokens=True)

sentence: str = "Римдин аскерар ва гьакӀни чӀехи хахамрини  фарисейри ракъурнавай нуькерар Ягьуд галаз багъдиз атана. Абурув виридав яракьар, чирагъар ва шемгьалар гвай."

translation = predict(sentence, prefix="translate Lezghian to Russian: ")

print(translation)

# ['Когда римские воины и вожди, а также главные священнослужители и блюстители Закона пришли в Иудею, они дали ему вооружённые оружие, браслеты и серьги.']
```
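
The same `predict` helper works in the opposite direction; only the prefix changes. A minimal usage sketch (the Russian input sentence here is only an illustration):

```python
# Russian -> Lezghian: same helper, opposite prefix.
sentence_ru = "Воины пришли в сад вместе с Иудой."  # illustrative Russian input

translation = predict(sentence_ru, prefix="translate Russian to Lezghian: ")
print(translation)
```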

## Training Details

### Training Data

The model was fine-tuned on the [bible-lezghian-russian](https://huggingface.co/datasets/leks-forever/bible-lezghian-russian) dataset, which contains 13,800 parallel sentence pairs in Russian and Lezghian. The dataset was split into three parts: 90% for training, 5% for validation, and 5% for testing.
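
A minimal sketch of reproducing this split with the `datasets` library (the 90/5/5 proportions follow the description above; the split name and random seed are assumptions):

```python
from datasets import load_dataset

# Load the parallel corpus from the Hugging Face Hub (assumes a single "train" split).
dataset = load_dataset("leks-forever/bible-lezghian-russian", split="train")

# Hold out 10%, then halve the held-out part into validation and test (5% each).
split = dataset.train_test_split(test_size=0.1, seed=42)
heldout = split["test"].train_test_split(test_size=0.5, seed=42)

train_ds = split["train"]   # 90%
val_ds = heldout["train"]   # 5%
test_ds = heldout["test"]   # 5%
```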


#### Training Hyperparameters

- **Training regime:** fp32
- **Batch size:** 16
- **Training steps:** the model converged after roughly 14k of the planned 110k steps
- **Optimizer:** Adafactor with the following settings (see the sketch below):
  - **lr:** 1e-4
  - **scale_parameter:** False
  - **relative_step:** False
  - **clip_threshold:** 1.0
  - **weight_decay:** 1e-3
- **Scheduler:** cosine scheduler with a warmup of 1,000 steps
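
A minimal sketch of this optimizer and scheduler setup with `transformers` (`model` is the loaded model from the snippet above; the total step count follows the figure above):

```python
from transformers import Adafactor, get_cosine_schedule_with_warmup

optimizer = Adafactor(
    model.parameters(),
    lr=1e-4,
    scale_parameter=False,  # use the fixed lr above instead of parameter-scaled updates
    relative_step=False,    # disable Adafactor's built-in step-dependent lr
    clip_threshold=1.0,
    weight_decay=1e-3,
)

# Cosine decay with 1,000 warmup steps over the planned training horizon.
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=1_000,
    num_training_steps=110_000,
)
```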

#### Speeds, Sizes, Times

- **Training time:** 2 hours on a single NVIDIA RTX 5000 (24 GB) GPU.


## Evaluation

The evaluation was conducted on the validation split of the [bible-lezghian-russian](https://huggingface.co/datasets/leks-forever/bible-lezghian-russian) dataset, which comprises 5% of the 13,800 parallel sentence pairs.

#### Factors

The evaluation considered translations in both directions:
* Lezghian to Russian
* Russian to Lezghian

#### Metrics

The following metrics were used to evaluate the model’s performance:
* BLEU (n-grams = 4): measures translation accuracy by comparing n-gram overlap between the model output and reference translations; a higher score indicates better performance.
* chrF: a character-level metric that evaluates translation quality by comparing the overlap of character n-grams between the hypothesis and the reference; it is effective for morphologically rich languages.
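
Both scores can be computed with the `sacrebleu` package, for instance; a minimal sketch with placeholder hypothesis and reference lists:

```python
import sacrebleu

hyps = ["..."]    # placeholder: decoded model translations
refs = [["..."]]  # placeholder: one reference stream, aligned with hyps

bleu = sacrebleu.corpus_bleu(hyps, refs)  # 4-gram BLEU by default
chrf = sacrebleu.corpus_chrf(hyps, refs)  # character n-gram F-score

print(f"BLEU = {bleu.score:.0f}, chrF = {chrf.score:.0f}")
```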

### Results

* Lezghian to Russian: BLEU = 27, chrF = 61
* Russian to Lezghian: BLEU = 27, chrF = 67

#### Summary
These results indicate that the model produces reasonably accurate translations in both directions. Further improvements are planned: aligning the parallel corpora to refine sentence-pair matching, and collecting more training data so the model can better handle diverse and complex linguistic structures.

