---
language: rw
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Large Kinyarwanda with apostrophes
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice rw
      type: common_voice
      args: rw
    metrics:
    - name: Test WER
      type: wer
      value: 39.92
---

# Wav2Vec2-Large-XLSR-53-rw

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Kinyarwanda using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset, using about 25% of the training data (limited to utterances without downvotes and shorter than 9.5 seconds), and validated on 2048 utterances from the validation set. In contrast to the [lucio/wav2vec2-large-xlsr-kinyarwanda](https://huggingface.co/lucio/wav2vec2-large-xlsr-kinyarwanda) model, which does not predict any punctuation, this model attempts to predict the apostrophes that mark contractions of pronouns with vowel-initial words, but may overgeneralize.
When using this model, make sure that your speech input is sampled at 16kHz.
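
The contraction pattern described above can be illustrated with a minimal sketch (the `normalize_apostrophes` helper is hypothetical, written here for illustration; the evaluation script on this card applies the same substitutions during text normalization):

```python
import re

def normalize_apostrophes(text):
    """Unify apostrophe glyphs, then rejoin contractions split by a space."""
    text = re.sub(r"[ʻʽʼ‘’´`]", "'", text)  # normalize apostrophe variants
    # An apostrophe after a consonant marks a deleted vowel, so
    # "consonant' vowel" is rejoined into a single contracted word.
    text = re.sub(r"([b-df-hj-np-tv-z])' ([aeiou])", r"\1'\2", text)
    return text

print(normalize_apostrophes("nk’iki n' umudugudu"))  # nk'iki n'umudugudu
```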

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# WARNING! This will download and extract to use about 80GB on disk.
test_dataset = load_dataset("common_voice", "rw", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("lucio/wav2vec2-large-xlsr-kinyarwanda-apostrophied")
model = Wav2Vec2ForCTC.from_pretrained("lucio/wav2vec2-large-xlsr-kinyarwanda-apostrophied")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```

Result:
```
Prediction: ['yaherukago gukora igitaramo yiki mujyiwa na mor mu bubiligi', "ibi rero ntibizashoboka kandi n'umudabizi"]
Reference: ['Yaherukaga gukora igitaramo nk’iki mu Mujyi wa Namur mu Bubiligi.', 'Ibi rero, ntibizashoboka, kandi nawe arabizi.']
```

## Evaluation

The model can be evaluated as follows on the Kinyarwanda test data of Common Voice. Note that to even load the test data, the whole 40GB Kinyarwanda dataset will be downloaded and extracted into another 40GB directory, so you will need that space available on disk (e.g. not possible in the free tier of Google Colab). This script uses the `chunked_wer` function from [pcuenq](https://huggingface.co/pcuenq/wav2vec2-large-xlsr-53-es).

```python
import jiwer
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
import unidecode

test_dataset = load_dataset("common_voice", "rw", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("lucio/wav2vec2-large-xlsr-kinyarwanda-apostrophied")
model = Wav2Vec2ForCTC.from_pretrained("lucio/wav2vec2-large-xlsr-kinyarwanda-apostrophied")
model.to("cuda")

chars_to_ignore_regex = r'[!"#$%&()*+,./:;<=>?@\[\]\\_{}|~£¤¨©ª«¬®¯°·¸»¼½¾ðʺ˜˝ˮ‐–—―‚“”„‟•…″‽₋€™−√�]'

def remove_special_characters(batch):
    batch["text"] = re.sub(r'[ʻʽʼ‘’´`]', r"'", batch["sentence"])  # normalize apostrophes
    batch["text"] = re.sub(chars_to_ignore_regex, "", batch["text"]).lower().strip()  # remove all other punctuation
    batch["text"] = re.sub(r"([b-df-hj-np-tv-z])' ([aeiou])", r"\1'\2", batch["text"])  # remove spaces where apostrophe marks a deleted vowel
    batch["text"] = re.sub(r"(-| '|' | +)", " ", batch["text"])  # treat dash and other apostrophes as word boundary
    batch["text"] = unidecode.unidecode(batch["text"])  # strip accents from loanwords
    return batch

## Audio pre-processing
resampler = torchaudio.transforms.Resample(48_000, 16_000)

def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    batch["sampling_rate"] = 16_000
    return batch

def cv_prepare(batch):
    batch = remove_special_characters(batch)
    batch = speech_file_to_array_fn(batch)
    return batch

test_dataset = test_dataset.map(cv_prepare)

# Run batched inference and decode the predictions.
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits

    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

def chunked_wer(targets, predictions, chunk_size=None):
    if chunk_size is None:
        return jiwer.wer(targets, predictions)
    start = 0
    end = chunk_size
    H, S, D, I = 0, 0, 0, 0
    while start < len(targets):
        chunk_metrics = jiwer.compute_measures(targets[start:end], predictions[start:end])
        H = H + chunk_metrics["hits"]
        S = S + chunk_metrics["substitutions"]
        D = D + chunk_metrics["deletions"]
        I = I + chunk_metrics["insertions"]
        start += chunk_size
        end += chunk_size
    return float(S + D + I) / float(H + S + D)

print("WER: {:.2f}".format(100 * chunked_wer(result["sentence"], result["pred_strings"], chunk_size=4000)))
```

**Test Result**: 39.92 %

## Training

Examples from the Common Voice training dataset were used for training, after filtering out utterances that had any `down_vote` or were longer than 9.5 seconds. The data used totals about 125k examples, 25% of the available data, trained on 1 V100 GPU provided by OVHcloud, for a total of about 60 hours: 20 epochs on one block of 32k examples and then 10 epochs each on 3 more blocks of 32k examples. For validation, 2048 examples of the validation dataset were used.
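
The filtering step above can be sketched as a predicate over dataset rows (a minimal illustration only: `keep_utterance` and the `num_samples` field are hypothetical, and in practice the duration would be computed from the loaded waveform):

```python
def keep_utterance(example, max_seconds=9.5):
    # Keep only clips with no down-votes and at most 9.5 seconds of audio.
    duration = example["num_samples"] / example["sampling_rate"]
    return example["down_votes"] == 0 and duration <= max_seconds

examples = [
    {"down_votes": 0, "num_samples": 48_000 * 5, "sampling_rate": 48_000},   # 5 s, kept
    {"down_votes": 2, "num_samples": 48_000 * 5, "sampling_rate": 48_000},   # down-voted, dropped
    {"down_votes": 0, "num_samples": 48_000 * 12, "sampling_rate": 48_000},  # too long, dropped
]
kept = [ex for ex in examples if keep_utterance(ex)]
print(len(kept))  # 1
```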

The [script used for training](https://github.com/serapio/transformers/blob/feature/xlsr-finetune/examples/research_projects/wav2vec2/run_common_voice.py) is adapted from the [example script provided in the transformers repo](https://github.com/huggingface/transformers/blob/master/examples/research_projects/wav2vec2/run_common_voice.py).