---
language:
- it
license: apache-2.0
datasets:
- mozilla-foundation/common_voice_8_0
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
- it
- mozilla-foundation/common_voice_8_0
- speech
- wav2vec2
model-index:
- name: XLS-R Wav2Vec2 Italian by radiogroup crits
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 8.0 Italian
      type: mozilla-foundation/common_voice_8_0
      args: it
    metrics:
    - name: Test WER
      type: wer
      value: 9.04
    - name: Test CER
      type: cer
      value: 2.2
    - name: Test WER (+LM)
      type: wer
      value: 6.24
    - name: Test CER (+LM)
      type: cer
      value: 1.67
---
# XLS-R-1B-ITALIAN-DOC4LM-5GRAM

## Fine-tuned XLS-R 1B model for speech recognition in Italian

Fine-tuned [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on Italian using the train and validation splits of [Common Voice 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0), [Multilingual TEDx](http://www.openslr.org/100), [Multilingual LibriSpeech](https://www.openslr.org/94/), and [Voxpopuli](https://github.com/facebookresearch/voxpopuli).

When using this model, make sure that your speech input is sampled at 16 kHz.
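
The snippet below is a minimal transcription sketch using the `transformers` ASR pipeline; `audio.wav` is a placeholder path for your own recording.

```python
# Minimal inference sketch with the transformers ASR pipeline
# ("audio.wav" is a placeholder for your own recording).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="radiogroup-crits/wav2vec2-xls-r-1b-italian-doc4lm-5gram",
)

# When given a file path, the pipeline resamples the audio to the 16 kHz rate
# the model expects; raw arrays must be resampled to 16 kHz beforehand.
print(asr("audio.wav")["text"])
```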


## Language model information

Our language model was built from a corpus of Italian Wikipedia articles and manual transcriptions of radio news bulletins and television programmes.
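
If the repository bundles a pyctcdecode-compatible decoder for this 5-gram model (an assumption based on the model name), LM-boosted decoding can be sketched with `Wav2Vec2ProcessorWithLM` as follows; `audio.wav` is again a placeholder path.

```python
# Sketch of CTC decoding boosted by the bundled 5-gram LM, assuming the repository
# ships a pyctcdecode-compatible decoder (requires the pyctcdecode and kenlm packages).
import torch
import torchaudio
from transformers import AutoModelForCTC, Wav2Vec2ProcessorWithLM

model_id = "radiogroup-crits/wav2vec2-xls-r-1b-italian-doc4lm-5gram"
processor = Wav2Vec2ProcessorWithLM.from_pretrained(model_id)
model = AutoModelForCTC.from_pretrained(model_id)

# Load a local recording ("audio.wav" is a placeholder), downmix to mono,
# and resample to the 16 kHz rate the model expects.
waveform, sr = torchaudio.load("audio.wav")
waveform = torchaudio.functional.resample(waveform.mean(dim=0), sr, 16_000)

inputs = processor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# batch_decode of the with-LM processor runs a beam search that rescores
# hypotheses with the 5-gram language model.
print(processor.batch_decode(logits.numpy()).text[0])
```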


## Download the Common Voice 8.0 dataset for Italian
```python
from datasets import load_dataset

# Common Voice 8.0 is gated: log in to the Hugging Face Hub and accept the
# dataset's terms of use before downloading.
dataset = load_dataset("mozilla-foundation/common_voice_8_0", "it", use_auth_token=True)
```

## Evaluation Commands

To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`, run the commands below. The first invocation (`--greedy`) decodes without the language model; the second applies the 5-gram LM. The `mv` commands keep the two sets of outputs separate:

```bash
python eval.py --model_id radiogroup-crits/wav2vec2-xls-r-1b-italian-doc4lm-5gram --dataset mozilla-foundation/common_voice_8_0 --config it --split test --log_outputs --greedy

mv log_mozilla-foundation_common_voice_8_0_it_test_predictions.txt log_mozilla-foundation_common_voice_8_0_it_test_predictions_greedy.txt

mv log_mozilla-foundation_common_voice_8_0_it_test_targets.txt log_mozilla-foundation_common_voice_8_0_it_test_targets_greedy.txt

mv mozilla-foundation_common_voice_8_0_it_test_eval_results.txt mozilla-foundation_common_voice_8_0_it_test_eval_results_greedy.txt

python eval.py --model_id radiogroup-crits/wav2vec2-xls-r-1b-italian-doc4lm-5gram --dataset mozilla-foundation/common_voice_8_0 --config it --split test --log_outputs

mv log_mozilla-foundation_common_voice_8_0_it_test_predictions.txt log_mozilla-foundation_common_voice_8_0_it_test_predictions_lm.txt

mv log_mozilla-foundation_common_voice_8_0_it_test_targets.txt log_mozilla-foundation_common_voice_8_0_it_test_targets_lm.txt

mv mozilla-foundation_common_voice_8_0_it_test_eval_results.txt mozilla-foundation_common_voice_8_0_it_test_eval_results_lm.txt
```

## Citation
If you want to cite this model, you can use the following BibTeX entry:

```bibtex
@misc{crits2022wav2vec2-xls-r-1b-italian-doc4lm-5gram,
  title={XLS-R Wav2Vec2 Italian by radiogroup crits},
  author={Teraoni Prioletti, Raffaele and Casagranda, Paolo and Russo, Francesco},
  publisher={Hugging Face},
  journal={Hugging Face Hub},
  howpublished={\url{https://huggingface.co/radiogroup-crits/wav2vec2-xls-r-1b-italian-doc4lm-5gram}},
  year={2022}
}
```