---
license: apache-2.0
---
# Model Card

## Model Description

This model is part of a series of fine-tuned versions of [OpenAI's Whisper models](https://github.com/openai/whisper).

The models have been fine-tuned for dynamic audio context robustness, allowing shorter audio contexts to be used for better performance on short audio inputs. The method is detailed [in our GitHub repo](https://github.com/futo-org/whisper-acft).

- **Developed by:** FUTO
- **License:** Apache-2.0
- **Finetuned from model:** OpenAI Whisper

## Uses

These models are not useful by themselves under default Whisper runtime configurations.

The easiest way to experiment with different audio context lengths is to use whisper.cpp with its `--audio-context` parameter. We provide converted whisper.cpp models in our [GitHub README](https://github.com/futo-org/whisper-acft?tab=readme-ov-file#finetuning-whisper-for-dynamic-audio-context-robustness).
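
A minimal sketch of testing a reduced audio context with the whisper.cpp CLI is shown below. The model and audio file names are placeholders, and the binary is named `main` in older whisper.cpp builds:

```bash
# Transcribe a short clip with a reduced audio context of 256 frames
# (the full context is 1500 frames, i.e. 30 seconds of audio).
# "ggml-base-acft.bin" and "short_clip.wav" are placeholder file names.
./whisper-cli -m ggml-base-acft.bin -f short_clip.wav --audio-context 256
```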

## Other Information

More information can be found in our [GitHub README](https://github.com/futo-org/whisper-acft?tab=readme-ov-file#finetuning-whisper-for-dynamic-audio-context-robustness).