---
license: cc-by-4.0
language:
- bzd
metrics:
- cer
pipeline_tag: automatic-speech-recognition
datasets:
- ivangtorre/second_americas_nlp_2022
tags:
- audio
- automatic-speech-recognition
- speech
- bribri
- xlsr-fine-tuning
model-index:
- name: Wav2Vec2 XLSR 300M Bribri Model by M Romero and Ivan G Torre
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Americas NLP 2022 Bribri
      type: second_americas_nlp_2022
      args: Bribri
    metrics:
    - name: Test CER
      type: cer
      value: 29.34
---

This model was fine-tuned from a wav2vec2.0 XLS-R 300M model on the Bribri train partition of the Americas NLP 2022 dataset. The challenge took place during NeurIPS 2022.

## Example of usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torch.nn.functional as F
import soundfile as sf
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Load model and processor
processor = Wav2Vec2Processor.from_pretrained("ivangtorre/wav2vec2-xlsr-300m-bribri")
model = Wav2Vec2ForCTC.from_pretrained("ivangtorre/wav2vec2-xlsr-300m-bribri")

# Path to the wav file
pathfile = "/path/to/wavfile"

# Load the audio and apply the layer normalization used during fine-tuning
wav, curr_sample_rate = sf.read(pathfile, dtype="float32")
feats = torch.from_numpy(wav).float()
with torch.no_grad():
    feats = F.layer_norm(feats, feats.shape)
    feats = torch.unsqueeze(feats, 0)
    logits = model(feats).logits

# Take the argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
print("HF prediction: ", transcription)
```
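
Like other wav2vec2/XLS-R checkpoints, the model expects 16 kHz mono audio. If your recording uses a different sample rate, a minimal resampling sketch with torchaudio could look like this (the wav path is a placeholder):

```python
import torch
import torchaudio
import soundfile as sf

# Placeholder path; substitute your own recording
wav, sr = sf.read("/path/to/wavfile", dtype="float32")
feats = torch.from_numpy(wav).float()
if sr != 16000:
    # Resample the time dimension to the 16 kHz rate the model expects
    feats = torchaudio.functional.resample(feats, orig_freq=sr, new_freq=16000)
```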

The following snippet shows how to evaluate wav2vec2-xlsr-300m-bribri on the [Second Americas NLP 2022 Bribri dev set](https://huggingface.co/datasets/ivangtorre/second_americas_nlp_2022):

```python
import torch
import torch.nn.functional as F
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from jiwer import cer

americasnlp = load_dataset("ivangtorre/second_americas_nlp_2022", "bribri", split="dev")
bribri = americasnlp.filter(lambda example: example["subset"] == "bribri")

model = Wav2Vec2ForCTC.from_pretrained("ivangtorre/wav2vec2-xlsr-300m-bribri")
processor = Wav2Vec2Processor.from_pretrained("ivangtorre/wav2vec2-xlsr-300m-bribri")

def map_to_pred(batch):
    wav = batch["audio"][0]["array"]
    feats = torch.from_numpy(wav).float()
    feats = F.layer_norm(feats, feats.shape)  # Normalization performed during fine-tuning
    feats = torch.unsqueeze(feats, 0)
    logits = model(feats).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    batch["transcription"] = processor.batch_decode(predicted_ids)
    return batch

result = bribri.map(map_to_pred, batched=True, batch_size=1)

print("CER:", cer(result["source_processed"], result["transcription"]))
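
The call above aggregates a single corpus-level CER over the whole dev set. `jiwer.cer` also accepts individual string pairs, so per-utterance scores can be printed with a small loop (a sketch reusing `result` from the snippet above):

```python
# Per-utterance CER, reusing `result` from the evaluation snippet above
for ref, hyp in zip(result["source_processed"], result["transcription"]):
    print(f"{cer(ref, hyp):.3f}  {hyp}")
```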

## Citation

```bibtex
@article{romero2024asr,
  title={ASR advancements for indigenous languages: Quechua, Guarani, Bribri, Kotiria, and Wa'ikhana},
  author={Romero, Monica and Gomez, Sandra and Torre, Iv{\'a}n G},
  journal={arXiv preprint arXiv:2404.08368},
  year={2024}
}
```