
whisper-large-faroese-8k-steps-100h-ct2

This is a faster-whisper version of carlosdanielhernandezmena/whisper-large-faroese-8k-steps-100h.

The specific dataset used to create the model is called "Ravnursson Faroese Speech and Transcripts" and it is available at http://hdl.handle.net/20.500.12537/276.

The model was converted to the CTranslate2 format as described in the faster-whisper documentation:

ct2-transformers-converter --model carlosdanielhernandezmena/whisper-large-faroese-8k-steps-100h \
    --output_dir whisper-large-faroese-8k-steps-100h-ct2 \
    --quantization float16

Usage

from faster_whisper import WhisperModel

model_size = "whisper-large-faroese-8k-steps-100h-ct2"

# Run on GPU with FP16
model = WhisperModel(model_size, device="cuda", compute_type="float16")

# or run on GPU with INT8
# model = WhisperModel(model_size, device="cuda", compute_type="int8_float16")
# or run on CPU with INT8
# model = WhisperModel(model_size, device="cpu", compute_type="int8")

segments, info = model.transcribe("audio.mp3", beam_size=5)

print("Detected language '%s' with probability %f" % (info.language, info.language_probability))

for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
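Each yielded segment carries `start` and `end` times as floats in seconds. As an illustration of consuming the segments iterator, the helper below (not part of faster-whisper; the function names are our own) formats a transcription as an SRT subtitle file:

```python
def srt_timestamp(seconds: float) -> str:
    """Format a time in seconds as an SRT-style HH:MM:SS,mmm timestamp."""
    ms = round(seconds * 1000)
    hours, ms = divmod(ms, 3_600_000)
    minutes, ms = divmod(ms, 60_000)
    secs, ms = divmod(ms, 1_000)
    return f"{hours:02d}:{minutes:02d}:{secs:02d},{ms:03d}"

def segments_to_srt(segments) -> str:
    """Render segments from model.transcribe(...) as SRT subtitle text."""
    lines = []
    for i, segment in enumerate(segments, start=1):
        lines.append(str(i))
        lines.append(f"{srt_timestamp(segment.start)} --> {srt_timestamp(segment.end)}")
        lines.append(segment.text.strip())
        lines.append("")  # blank line separates SRT cues
    return "\n".join(lines)

# e.g. open("audio.srt", "w").write(segments_to_srt(segments))
```

Note that `segments` is a generator, so transcription only runs as the segments are iterated; converting them to a list or writing them out as above consumes the whole file.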

BibTeX entry and citation info

  • When publishing results based on this model, please refer to:
@misc{mena2023whisperlargefaroesect2,
      title={Acoustic Model in Faroese: whisper-large-faroese-8k-steps-100h-ct2.}, 
      author={Hernandez Mena, Carlos Daniel},
      url={https://huggingface.co/language-and-voice-lab/whisper-large-icelandic-30k-steps-1000h-ct2},
      year={2023}
}

Acknowledgements

We want to thank Jón Guðnason, head of the Language and Voice Lab, for providing the computational power that made this model possible. We also want to thank the "Language Technology Programme for Icelandic 2019-2023", which is managed and coordinated by Almannarómur and funded by the Icelandic Ministry of Education, Science and Culture.

Thanks to Annika Simonsen and to The Ravnur Project for making their "Basic Language Resource Kit" (BLARK 1.0) publicly available through the research paper "Creating a Basic Language Resource Kit for Faroese" https://aclanthology.org/2022.lrec-1.495.pdf

Special thanks to Björn Ingi Stefánsson for setting up the configuration of the server where this model was trained.
