Spaces: Running on T4
Kushal Agrawal committed 9bbc276 (parent: 1442da5): Update README.md

README.md CHANGED
@@ -14,11 +14,11 @@ Please create an issue if you find a bug, have a question, or a feature suggestion
## OpenAI API Compatibility ++

See the [OpenAI API reference](https://platform.openai.com/docs/api-reference/audio) for more information.

- Audio file transcription via the `POST /v1/audio/transcriptions` endpoint.
  - Unlike OpenAI's API, `faster-whisper-server` also supports streaming transcriptions (and translations). This is useful when you want to process a large audio file and would rather receive the transcription in chunks as it is processed than wait for the whole file to be transcribed. It works similarly to streamed chat messages when chatting with LLMs.
- Audio file translation via the `POST /v1/audio/translations` endpoint.
- Live audio transcription via the `WS /v1/audio/transcriptions` endpoint.
  - The LocalAgreement2 ([paper](https://aclanthology.org/2023.ijcnlp-demo.3.pdf) | [original implementation](https://github.com/ufal/whisper_streaming)) algorithm is used for live transcription.
  - Only transcription of single-channel, 16000 Hz sample-rate, raw, 16-bit little-endian audio is supported.
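As a sketch of the raw-audio requirement above: the snippet below packs audio samples into the single-channel, 16000 Hz, 16-bit little-endian PCM layout the live endpoint expects. The 440 Hz test tone is purely illustrative (a stand-in for real microphone capture), and the endpoint itself is not contacted here.

```python
import math
import struct

SAMPLE_RATE = 16_000  # the live endpoint expects 16 kHz, mono


def to_pcm16le(samples):
    """Pack float samples in [-1.0, 1.0] as raw 16-bit little-endian PCM."""
    clamped = (max(-1.0, min(1.0, s)) for s in samples)
    return b"".join(struct.pack("<h", int(s * 32767)) for s in clamped)


# One second of a 440 Hz test tone, mono.
tone = [math.sin(2 * math.pi * 440 * t / SAMPLE_RATE) for t in range(SAMPLE_RATE)]
payload = to_pcm16le(tone)
print(len(payload))  # 2 bytes per sample, single channel -> 32000 bytes
```

Bytes in this layout can then be sent over the websocket in chunks; anything else (stereo, other sample rates, containers such as WAV or MP3) would need to be converted first, for example with ffmpeg.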
## Quick Start

[Hugging Face Space](https://huggingface.co/spaces/Iatalking/fast-whisper-server)
@@ -42,7 +42,7 @@ docker compose up --detach faster-whisper-server-cpu

Using Kubernetes: [tutorial](https://substratus.ai/blog/deploying-faster-whisper-on-k8s)

## Usage

If you are looking for a step-by-step walkthrough, check out [this YouTube video](https://www.youtube.com/watch?app=desktop&v=vSN-oAl6LVs).

### OpenAI API CLI

```bash