
Whisper large-v3-turbo in CTranslate2

This repo contains the CTranslate2 format of the openai/whisper-large-v3-turbo model.

Conversion

ct2-transformers-converter --model openai/whisper-large-v3-turbo --output_dir whisper-large-v3-turbo --copy_files tokenizer.json preprocessor_config.json --quantization float16
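
If whisperx is not needed, the converted weights can also be run with faster-whisper, which loads CTranslate2 Whisper models directly. A minimal sketch (the audio path is a placeholder; the device and compute type are assumptions, not requirements):

from faster_whisper import WhisperModel

# Load the converted weights, either from the local output directory
# or by pulling this repo from the Hugging Face Hub.
model = WhisperModel("Capy-AI/whisper-v3-large-turbo-ct2", device="cuda", compute_type="float16")

segments, info = model.transcribe("audio.mp3")  # placeholder audio file
for segment in segments:
    print(f"[{segment.start:.2f}s - {segment.end:.2f}s] {segment.text}")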

Example with whisperx

import whisperx
from whisperx.asr import WhisperModel

device = "cuda"           # or "cpu"
compute_type = "float16"  # must be supported by the chosen device

# Build the underlying CTranslate2 model directly from this repo
model = WhisperModel(
    model_size_or_path="Capy-AI/whisper-v3-large-turbo-ct2",
    device=device,
    compute_type=compute_type,
    cpu_threads=8,
)

# Wrap it in a whisperx pipeline; the model name argument is unused
# when an already-instantiated model is passed in
model = whisperx.load_model("", device, model=model, compute_type=compute_type)

audio = whisperx.load_audio("audio.mp3")  # placeholder audio file
transcription = model.transcribe(audio)

for segment in transcription["segments"]:
    print(f"[{segment['start']:.2f}s - {segment['end']:.2f}s] {segment['text']}")

Note: The model weights are saved in FP16. The compute type can be changed at load time using the compute_type option in CTranslate2 or whisperx.
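
For example, a lower-precision compute type can be requested when loading; a short sketch assuming CPU inference with int8 quantization:

from whisperx.asr import WhisperModel

# Weights stored in FP16 are converted on the fly to the requested compute type.
model_int8 = WhisperModel(
    model_size_or_path="Capy-AI/whisper-v3-large-turbo-ct2",
    device="cpu",
    compute_type="int8",
)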
