---
license: cc-by-4.0
language:
  - gvc
metrics:
  - cer
pipeline_tag: automatic-speech-recognition
datasets:
  - ivangtorre/second_americas_nlp_2022
tags:
  - audio
  - automatic-speech-recognition
  - speech
  - kotiria
  - xlsr-fine-tuning
model-index:
  - name: Wav2Vec2 XLSR 300M Kotiria Model by M Romero and Ivan G Torre
    results:
      - task:
          name: Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: Americas NLP 2022 Kotiria
          type: second_americas_nlp_2022
          args: Kotiria
        metrics:
          - name: Test CER
            type: cer
            value: 36
---

This model was fine-tuned from a wav2vec 2.0 XLS-R 300M model on the Kotiria train partition of the Americas NLP 2022 dataset. The challenge took place during NeurIPS 2022.

## Example of usage

The model can be used directly (without a language model) as follows:

```python
import soundfile as sf
import torch
import torch.nn.functional as F
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# load model and processor
processor = Wav2Vec2Processor.from_pretrained("ivangtorre/wav2vec2-xlsr-300m-kotiria")
model = Wav2Vec2ForCTC.from_pretrained("ivangtorre/wav2vec2-xlsr-300m-kotiria")

# path to wav file
pathfile = "/path/to/wavfile"

# load and normalize the file (the same layer normalization used during fine-tuning)
wav, curr_sample_rate = sf.read(pathfile, dtype="float32")
feats = torch.from_numpy(wav).float()
with torch.no_grad():
    feats = F.layer_norm(feats, feats.shape)
    feats = torch.unsqueeze(feats, 0)
    logits = model(feats).logits

# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
print("HF prediction: ", transcription)
```
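
Note that wav2vec 2.0 XLS-R models expect 16 kHz mono audio. If your recording uses a different sampling rate, resample it before running the snippet above; here is a minimal sketch using torchaudio (the file path and variable names are illustrative):

```python
import torchaudio

# load a recording at whatever sample rate it was stored with
wav, sr = torchaudio.load("/path/to/wavfile")

# resample to the 16 kHz expected by wav2vec 2.0 XLS-R models
if sr != 16000:
    wav = torchaudio.functional.resample(wav, orig_freq=sr, new_freq=16000)

# mono waveform as a 1-D tensor, ready for the layer_norm step shown above
feats = wav.squeeze(0)
```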

The following snippet shows how to evaluate wav2vec2-xlsr-300m-kotiria on the Second Americas NLP 2022 Kotiria dev set:

```python
import torch
import torch.nn.functional as F
from datasets import load_dataset
from jiwer import cer
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

americasnlp = load_dataset("ivangtorre/second_americas_nlp_2022", "kotiria", split="dev")
kotiria = americasnlp.filter(lambda language: language['subset']=='kotiria')

model = Wav2Vec2ForCTC.from_pretrained("ivangtorre/wav2vec2-xlsr-300m-kotiria")
processor = Wav2Vec2Processor.from_pretrained("ivangtorre/wav2vec2-xlsr-300m-kotiria")

def map_to_pred(batch):
    wav = batch["audio"][0]["array"]
    feats = torch.from_numpy(wav).float()
    feats = F.layer_norm(feats, feats.shape)  # normalization performed during fine-tuning
    feats = torch.unsqueeze(feats, 0)
    logits = model(feats).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    batch["transcription"] = processor.batch_decode(predicted_ids)
    return batch

result = kotiria.map(map_to_pred, batched=True, batch_size=1)

print("CER:", cer(result["source_processed"], result["transcription"]))
```
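
For reference, jiwer's `cer` computes the character-level edit distance (substitutions, deletions, insertions) between hypothesis and reference, divided by the number of characters in the reference. A quick sanity check with made-up strings:

```python
from jiwer import cer

# one substitution in a 7-character reference -> CER of 1/7 ≈ 0.14
print(cer("kotiria", "kotirio"))  # hypothetical strings, for illustration only
```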

## Citation

```bibtex
@article{romero2024asr,
  title={ASR advancements for indigenous languages: Quechua, Guarani, Bribri, Kotiria, and Wa'ikhana},
  author={Romero, Monica and Gomez, Sandra and Torre, Iv{\'a}n G},
  journal={arXiv preprint arXiv:2404.08368},
  year={2024}
}
```