Update README.md
README.md
CHANGED
````diff
@@ -23,7 +23,7 @@ model-index:
     metrics:
     - name: Test WER
       type: wer
-      value:
+      value: 57.004356
 ---
 
 # Wav2Vec2-Large-XLSR-53-Tamil
````
````diff
@@ -37,10 +37,15 @@ When using this model, make sure that your speech input is sampled at 16kHz.
 The model can be used directly (without a language model) as follows:
 
 ```python
+
+!pip install datasets
+!pip install transformers
+
+from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
+
 import torch
-import torchaudio
+import librosa
 from datasets import load_dataset
-from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
 
 test_dataset = load_dataset("common_voice", "ta", split="test[:2%]")
 
````
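This hunk shows only the import changes, so the snippet is hard to read in isolation. Below is a minimal sketch of the full usage example it belongs to, assuming the standard Wav2Vec2 CTC inference pattern from the XLSR fine-tuning examples; the `speech_file_to_array_fn` helper name and the `path` column are assumptions (only `sentence` and the model id `Gobee/Wav2vec2-Large-XLSR-Tamil` appear elsewhere in this diff):

```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "ta", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("Gobee/Wav2vec2-Large-XLSR-Tamil")
model = Wav2Vec2ForCTC.from_pretrained("Gobee/Wav2vec2-Large-XLSR-Tamil")

# Read the audio files as 16 kHz float arrays (librosa resamples on load).
def speech_file_to_array_fn(batch):
    speech_array, _ = librosa.load(batch["path"], sr=16_000)
    batch["speech"] = speech_array
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```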
````diff
@@ -76,10 +81,16 @@ The model can be evaluated as follows on the Tamil test data of Common Voice.
 
 
 ```python
+
+!pip install datasets
+!pip install transformers
+!pip install jiwer
+
+from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
+
 import torch
-import torchaudio
+import librosa
 from datasets import load_dataset, load_metric
-from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
 import re
 
 test_dataset = load_dataset("common_voice", "ta", split="test")
````
````diff
@@ -90,7 +101,6 @@ model = Wav2Vec2ForCTC.from_pretrained("Gobee/Wav2vec2-Large-XLSR-Tamil")
 model.to("cuda")
 
 chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\(\)]'
-resampler = torchaudio.transforms.Resample(48_000, 16_000)
 
 # Preprocessing the datasets.
 # We need to read the audio files as arrays
````
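For readers wondering why this line can simply be dropped: the `librosa` import added in the earlier hunks takes over the resampling at load time. A side-by-side sketch, with `path` as a hypothetical placeholder for a Common Voice clip:

```python
import torchaudio
import librosa

path = "clip.mp3"  # hypothetical placeholder for a Common Voice audio file

# Before: load at the native 48 kHz, then resample explicitly.
resampler = torchaudio.transforms.Resample(48_000, 16_000)
speech_array, sampling_rate = torchaudio.load(path)
speech_16k = resampler(speech_array).squeeze().numpy()

# After: librosa resamples while loading, so no separate resampler object.
speech_16k, _ = librosa.load(path, sr=16_000)
```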
````diff
@@ -118,3 +128,15 @@ result = test_dataset.map(evaluate, batched=True, batch_size=8)
 
 print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
 ```
+
+**Test Result**: 57.004356 %
+
+## Usage and Evaluation script
+
+The script used for usage and evaluation can be found [here](https://colab.research.google.com/drive/1dyDe14iOmoNoVHDJTkg-hAgLnrGdI-Dk?usp=share_link).
+
+## Training
+
+The Common Voice `train` and `validation` datasets were used for training.
+
+The script used for training can be found [here](https://colab.research.google.com/drive/1-Klkgr4f-C9SanHfVC5RhP0ELUH6TYlN?usp=sharing).
````
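Since the evaluation code is split across three hunks, here is a minimal end-to-end sketch of the evaluation script after this change, again assuming the standard Common Voice evaluation loop from the XLSR fine-tuning examples (the punctuation set in `chars_to_ignore_regex` is partially mis-encoded in the diff above, so the usual XLSR character set is substituted; `speech_file_to_array_fn` is an assumed helper name, while `evaluate` appears in the diff):

```python
import re
import torch
import librosa
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "ta", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("Gobee/Wav2vec2-Large-XLSR-Tamil")
model = Wav2Vec2ForCTC.from_pretrained("Gobee/Wav2vec2-Large-XLSR-Tamil")
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\(\)]'

# Normalize transcripts and load the audio as 16 kHz arrays.
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, "", batch["sentence"]).lower()
    batch["speech"], _ = librosa.load(batch["path"], sr=16_000)
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Batched greedy decoding on the GPU.
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```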