DewiBrynJones committed
Commit 7f1f778
Parent: 85e2816

Update README.md

Files changed (1)
README.md +2 -2
README.md CHANGED
@@ -25,8 +25,6 @@ transcriptions of Welsh language spontaneous speech [Banc Trawsgrifiadau Bangor
 ac well as recordings of read speach from [Welsh Common Voice version 18 (cv)](https://huggingface.co/datasets/techiaith/commonvoice_18_0_cy)
 for additional training.
 
-As such this model is suitable for more verbatim transcribing of spontaneous or unplanned speech.
-
 The Whisper large-v3-turbo pre-trained model is a finetuned version of a pruned Whisper large-v3. In other words, this model is the
 same model as [techiaith/whisper-large-v3-ft-btb-cv-cy](https://huggingface.co/techiaith/whisper-large-v3-ft-btb-cv-cy),
 except that the number of decoding layers have been reduced. As a result, the model is way faster, at the expense
@@ -37,6 +35,8 @@ It achieves the following results on the [Banc Trawsgrifiadau Bangor'r test set]
 - WER: 30.27
 - CER: 11.14
 
+As such this model is suitable for faster verbatim transcribing of spontaneous or unplanned speech.
+
 ## Usage
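The WER and CER figures carried through the second hunk are percentage scores on the Banc Trawsgrifiadau Bangor test set. As a minimal sketch of how such scores are typically computed with the Hugging Face `evaluate` library (the sentence pair below is illustrative only, not drawn from that test set):

```python
# Minimal sketch: computing WER/CER with the Hugging Face `evaluate` library.
# The prediction/reference pair is illustrative, not from the Banc
# Trawsgrifiadau Bangor test set referenced in the diff.
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

predictions = ["mae'r model yn trawsgrifio lleferydd Cymraeg"]
references = ["mae'r model yn trawsgrifio lleferydd digymell Cymraeg"]

# Both metrics return a fraction; multiply by 100 to match the README's scale.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
cer = 100 * cer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}  CER: {cer:.2f}")
```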
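The commit stops at the `## Usage` heading without showing that section. A minimal sketch of how a Whisper fine-tune like this is commonly loaded with the `transformers` ASR pipeline; note the model id below is an assumption inferred from the sibling repository linked in the diff, and the audio path is hypothetical:

```python
# Minimal sketch: transcription with the `transformers` ASR pipeline.
# The model id is an assumption inferred from the sibling model linked in the
# diff; the turbo variant's exact repository name is not shown in this commit.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="techiaith/whisper-large-v3-turbo-ft-btb-cv-cy",  # assumed model id
    chunk_length_s=30,  # Whisper operates on 30-second audio windows
)

result = asr("recording.wav")  # hypothetical local audio file
print(result["text"])
```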