carlosdanielhernandezmena committed
Commit 17e5fa4 • Parent(s): 9756524
Adding info to the README file

README.md CHANGED

---
license: cc-by-4.0
language: mt
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- maltese
- whisper-large
- whisper-large-v1
- masri-project
- malta
- university-of-malta
- faster-whisper
---
# whisper-large-maltese-8k-steps-64h-ct2

This is a faster-whisper version of [carlosdanielhernandezmena/whisper-large-maltese-8k-steps-64h](https://huggingface.co/carlosdanielhernandezmena/whisper-large-maltese-8k-steps-64h).

Most of the data used to create this model is available at the [MASRI Project](https://www.um.edu.mt/projects/masri/) homepage.

The model was created as described in [faster-whisper](https://github.com/guillaumekln/faster-whisper/tree/master):

```bash
ct2-transformers-converter --model carlosdanielhernandezmena/whisper-large-maltese-8k-steps-64h \
    --output_dir whisper-large-maltese-8k-steps-64h-ct2 \
    --quantization float16
```
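
The compute type used at load time does not have to match the `--quantization` chosen at conversion; CTranslate2 can convert the weights when the model is loaded. A minimal sketch, assuming the local `--output_dir` produced by the command above:

```python
from faster_whisper import WhisperModel

# "whisper-large-maltese-8k-steps-64h-ct2" is the local --output_dir from the
# conversion command above. Although the weights were stored with
# --quantization float16, they can be loaded with another compute type,
# e.g. int8 for CPU-only inference.
model = WhisperModel("whisper-large-maltese-8k-steps-64h-ct2", device="cpu", compute_type="int8")
```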

# Usage

```python
from faster_whisper import WhisperModel

model_size = "whisper-large-maltese-8k-steps-64h-ct2"

# Run on GPU with FP16
model = WhisperModel(model_size, device="cuda", compute_type="float16")

# or run on GPU with INT8
# model = WhisperModel(model_size, device="cuda", compute_type="int8_float16")
# or run on CPU with INT8
# model = WhisperModel(model_size, device="cpu", compute_type="int8")

segments, info = model.transcribe("audio.mp3", beam_size=5)

print("Detected language '%s' with probability %f" % (info.language, info.language_probability))

for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
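
Since the model is a Maltese fine-tune, it can help to pin the transcription language instead of relying on automatic detection. A minimal sketch, assuming the same model directory as above and a placeholder `audio.mp3`:

```python
from faster_whisper import WhisperModel

# Same local model directory as in the usage example above; "audio.mp3" is a placeholder.
model = WhisperModel("whisper-large-maltese-8k-steps-64h-ct2", device="cpu", compute_type="int8")

# Skip language detection and decode directly in Maltese ("mt").
segments, _ = model.transcribe("audio.mp3", language="mt", beam_size=5)

for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```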

# BibTeX entry and citation info
*When publishing results based on this model, please refer to:*
```bibtex
@misc{mena2023whisperlargemaltesect2,
  title={Acoustic Model in Maltese: whisper-large-maltese-8k-steps-64h-ct2.},
  author={Hernandez Mena, Carlos Daniel},
  year={2023},
  url={https://huggingface.co/carlosdanielhernandezmena/whisper-large-maltese-8k-steps-64h-ct2},
}
```

# Acknowledgements

The MASRI Project is funded by the University of Malta Research Fund Awards. We want to thank Merlin Publishers (Malta) for providing the audiobooks used to create the MASRI-MERLIN Corpus.

Thanks to Jón Guðnason, head of the Language and Voice Lab, for providing the computational power that made this model possible. We also want to thank the "Language Technology Programme for Icelandic 2019-2023", which is managed and coordinated by Almannarómur and funded by the Icelandic Ministry of Education, Science and Culture.

Special thanks to Björn Ingi Stefánsson for setting up the configuration of the server where this model was trained.