---
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
- generated_from_trainer
- verbatim
metrics:
- wer
model-index:
- name: whisper-large-v3-ft-btb-cv-cy
results: []
datasets:
- techiaith/banc-trawsgrifiadau-bangor
- techiaith/commonvoice_18_0_cy
language:
- cy
pipeline_tag: automatic-speech-recognition
---
# whisper-large-v3-ft-btb-cv-cy
This model is a version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) fine-tuned on
transcriptions of spontaneous Welsh-language speech from [Banc Trawsgrifiadau Bangor (btb)](https://huggingface.co/datasets/techiaith/banc-trawsgrifiadau-bangor),
as well as recordings of read speech from [Welsh Common Voice version 18 (cv)](https://huggingface.co/datasets/techiaith/commonvoice_18_0_cy)
for additional training.

As such, this model is suited to more verbatim transcription of spontaneous or unplanned speech.

It achieves the following results on the [Banc Trawsgrifiadau Bangor test set](https://huggingface.co/datasets/techiaith/banc-trawsgrifiadau-bangor/viewer/default/test):
- WER: 29.72
- CER: 11.01
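
These figures can be reproduced approximately with the `evaluate` library and the same `transformers` pipeline shown in the Usage section below. The sketch that follows is illustrative only: the dataset column names (`audio`, `sentence`) and the absence of any extra text normalisation are assumptions.

```python
from datasets import load_dataset
from transformers import pipeline
import evaluate

# Test split of Banc Trawsgrifiadau Bangor
test_set = load_dataset("techiaith/banc-trawsgrifiadau-bangor", split="test")

transcriber = pipeline(
    "automatic-speech-recognition",
    model="techiaith/whisper-large-v3-ft-btb-cv-cy",
)

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

predictions, references = [], []
for example in test_set:
    # "audio" and "sentence" column names are assumptions about the dataset schema
    audio = example["audio"]
    result = transcriber({"array": audio["array"], "sampling_rate": audio["sampling_rate"]})
    predictions.append(result["text"])
    references.append(example["sentence"])

print("WER:", 100 * wer_metric.compute(predictions=predictions, references=references))
print("CER:", 100 * cer_metric.compute(predictions=predictions, references=references))
```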
## Usage
```python
from transformers import pipeline

transcriber = pipeline("automatic-speech-recognition", model="techiaith/whisper-large-v3-ft-btb-cv-cy")

# Replace with a local path or URL to a sound file
result = transcriber("path/to/audio.mp3")
print(result)
```
`{'text': 'ymm, yn y pum mlynadd dwitha 'ma ti 'di... Ie. ...bod drw dipyn felly do?'}`
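
Whisper transcribes audio in 30-second windows, so for longer recordings the pipeline's built-in chunking can be enabled. This is a minimal sketch; the chunk length shown is an arbitrary, assumed value rather than a recommendation from the model authors.

```python
from transformers import pipeline

# Chunked inference for recordings longer than Whisper's 30-second window.
# chunk_length_s=30 is an illustrative value, not a tuned recommendation.
transcriber = pipeline(
    "automatic-speech-recognition",
    model="techiaith/whisper-large-v3-ft-btb-cv-cy",
    chunk_length_s=30,
)

result = transcriber("path/to/long_recording.mp3", return_timestamps=True)
print(result["text"])
```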