Whisper large-v3-singlish-DRAFT

Whisper large-v3-singlish-DRAFT is a fine-tuned automatic speech recognition (ASR) model tailored to Singlish. Based on OpenAI’s Whisper architecture, it has been fine-tuned on Singlish-centric data to better capture the distinctive phonetic patterns and vocabulary of Singlish speech. It is designed to serve as a lightweight draft model in speculative decoding pipelines, working in tandem with Whisper large-v3-singlish as the target model to improve transcription speed while maintaining accuracy.

Model Details

  • Developed by: Ming Jie Wong
  • Base Model: distil-whisper/distil-large-v3.5
  • Model Type: Encoder-decoder
  • Metrics: Word Error Rate (WER)
  • Languages Supported: English (with a focus on Singlish)
  • License: MIT

Description

Whisper-large-v3-singlish-DRAFT was trained using pseudo-labels generated by its target model, Whisper-large-v3-singlish. The target model transcribed 66.9k audio recordings sourced from Part 3 of the Same Room Environment Close-talk Microphone section of IMDA’s National Speech Corpus (NSC). This self-distillation approach keeps the draft model closely aligned with the target model, enabling effective speculative decoding for Singlish speech.
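For illustration, the pseudo-labeling step can be sketched as follows: the target model simply transcribes each raw segment, and its output is stored as the training label for the draft model. This is a minimal sketch, not the published labeling script; the file paths are placeholders.

import torch
from transformers import pipeline

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

# The target model transcribes each raw segment; its output becomes the
# pseudo-label used to train the draft model.
labeler = pipeline(
    "automatic-speech-recognition",
    model="mjwong/whisper-large-v3-singlish",
    torch_dtype=torch_dtype,
    device=device,
)

# Placeholder paths standing in for the 66.9k extracted NSC segments.
audio_paths = ["segment_0001.wav", "segment_0002.wav"]
pseudo_labels = [labeler(path)["text"] for path in audio_paths]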

The original Part 3 of the National Speech Corpus comprises approximately 1,000 hours of conversational speech from around 1,000 local English speakers, recorded in pairs. These conversations cover everyday topics and include interactive game-based dialogues. Recordings were conducted in two environments:

  • Same Room, where speakers shared a room and were recorded using a close-talk mic and a boundary mic.
  • Separate Room, where each speaker was recorded individually using a standing mic and a telephone (IVR).

Audio segments for the internal dataset were extracted using the following criteria (a code sketch of this filtering follows the list):

  • Minimum Word Count: 10 words

    This threshold was chosen to ensure that each audio segment contains sufficient linguistic context for the model to learn the patterns of Singlish speech. Shorter segments risk biasing the model towards specific utterances or phrases, limiting its overall comprehension.

  • Maximum Duration: 20 seconds

    This threshold was chosen to provide enough context for accurate transcription while limiting the noise and computational cost that longer segments introduce.

  • Sampling Rate: All audio segments are down-sampled to 16 kHz.
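As a rough illustration of these criteria, the filtering and resampling could be written with the datasets library as follows. This is a sketch under assumed column names ("audio", "transcript") and an assumed local data layout; the actual preprocessing code is not published.

from datasets import Audio, load_dataset

# Hypothetical raw corpus with an "audio" column and a "transcript" column.
ds = load_dataset("audiofolder", data_dir="nsc_part3_segments", split="train")

# Sampling rate: down-sample every segment to 16 kHz on the fly.
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

def keep_segment(example):
    # Minimum word count: at least 10 words in the transcript.
    if len(example["transcript"].split()) < 10:
        return False
    # Maximum duration: at most 20 seconds of audio.
    audio = example["audio"]
    return len(audio["array"]) / audio["sampling_rate"] <= 20.0

ds = ds.filter(keep_segment)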

Full experimental details will be added soon.

Fine-Tuning Details

We fine-tuned the model on a single A100 80GB GPU.

Training Hyperparameters

The following hyperparameters were used (a sketch of the corresponding training arguments follows the list):

  • batch_size: 16
  • gradient_accumulation_steps: 1
  • learning_rate: 1e-6
  • warmup_steps: 300
  • max_steps: 5000
  • fp16: true
  • eval_batch_size: 16
  • eval_step: 300
  • max_grad_norm: 1.0
  • generation_max_length: 225
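For reference, these settings map naturally onto transformers' Seq2SeqTrainingArguments. The snippet below is a sketch of that mapping, not the exact training script; output_dir and eval_strategy are assumptions.

from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-large-v3-singlish-DRAFT",  # assumed
    per_device_train_batch_size=16,
    gradient_accumulation_steps=1,
    learning_rate=1e-6,
    warmup_steps=300,
    max_steps=5000,
    fp16=True,
    per_device_eval_batch_size=16,
    eval_strategy="steps",  # assumed; named `evaluation_strategy` on older versions
    eval_steps=300,
    max_grad_norm=1.0,
    predict_with_generate=True,  # required for generation_max_length to apply
    generation_max_length=225,
)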

Benchmark Performance

We evaluated the speculative decoding setup for Whisper-large-v3-singlish on the following datasets (a WER evaluation sketch follows the list):

  • SASRBench-v1: A benchmark dataset for evaluating ASR performance on Singlish.

  • AMI: A widely used dataset for meeting transcription and diarization tasks. This work specifically uses the IHM (Individual Headset Microphone) recordings.
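For reference, a WER evaluation along these lines can be sketched with the evaluate library. The reference column name is an assumption, and any text normalization applied to the reported numbers is omitted here.

import evaluate
from datasets import load_dataset
from transformers import pipeline

wer_metric = evaluate.load("wer")
asr = pipeline("automatic-speech-recognition", model="mjwong/whisper-large-v3-singlish")

ds = load_dataset("mjwong/SASRBench-v1", split="test")
predictions = [asr(sample["audio"])["text"] for sample in ds]
references = [sample["text"] for sample in ds]  # reference column name assumed

print(f"WER: {wer_metric.compute(predictions=predictions, references=references):.2%}")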

Model Performance

Dataset        Model Variant          Model                               Rel. RTFx   WER
SASRBench-v1   Large                  Whisper-large-v3-singlish           1.00        16.41%
SASRBench-v1   Large-Turbo            Whisper-large-v3-turbo-singlish     2.36        13.35%
SASRBench-v1   Draft-enhanced Large   Whisper-large-v3-singlish + DRAFT   2.20        14.84%
AMI            Large                  Whisper-large-v3-singlish           1.00        23.72%
AMI            Large-Turbo            Whisper-large-v3-turbo-singlish     1.53        16.99%
AMI            Draft-enhanced Large   Whisper-large-v3-singlish + DRAFT   2.27        22.06%
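Rel. RTFx appears to report throughput relative to the Large baseline (1.00), where RTFx is conventionally the inverse real-time factor: seconds of audio transcribed per second of wall-clock time, so higher is faster. A minimal measurement sketch, assuming samples is a list of decoded audio examples and pipe is an ASR pipeline:

import time

def relative_rtfx(pipe, samples, baseline_rtfx=1.0):
    # Total seconds of audio across the evaluation samples.
    audio_seconds = sum(
        len(s["audio"]["array"]) / s["audio"]["sampling_rate"] for s in samples
    )
    # Wall-clock time to transcribe them all.
    start = time.perf_counter()
    for s in samples:
        pipe(s["audio"])
    elapsed = time.perf_counter() - start
    # RTFx, normalized against a baseline system's RTFx.
    return (audio_seconds / elapsed) / baseline_rtfx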

Speculative Acceptance Rates (DRAFT-enhanced Large Model)

Dataset        Micro Avg Acceptance   Macro Avg Acceptance
SASRBench-v1   38.00%                 42.00%
AMI            38.00%                 43.00%
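Assuming the usual convention: the micro average pools accepted and proposed draft tokens across all utterances before dividing, while the macro average takes the mean of per-utterance acceptance rates. A toy example:

# Micro vs. macro averaging of draft-token acceptance, on toy counts.
accepted = [30, 5, 40]   # draft tokens accepted per utterance
proposed = [60, 20, 80]  # draft tokens proposed per utterance

micro = sum(accepted) / sum(proposed)  # pooled over all tokens
macro = sum(a / p for a, p in zip(accepted, proposed)) / len(accepted)  # mean per-utterance rate

print(f"micro: {micro:.2%}, macro: {macro:.2%}")  # micro: 46.88%, macro: 41.67%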

Conclusion

While it does not outperform Large-Turbo in WER, the Draft-enhanced Large model demonstrates strong speculative acceptance rates (~38–43%), indicating meaningful potential for runtime gains through early prediction acceptance. In latency-sensitive applications, it offers a compelling middle ground between the high accuracy of Large-Turbo and the slower inference of standard decoding.

Disclaimer

While this model has been fine-tuned to better recognize Singlish, users may experience inaccuracies, biases, or unexpected outputs, particularly in challenging audio conditions or with speakers using non-standard variations. Use of this model is at your own risk; the developers and distributors are not liable for any consequences arising from its use. Please validate results before deploying in any sensitive or production environment.

How to use the model

Whisper-large-v3-singlish-DRAFT can be leveraged as an assistant model in a speculative decoding setup with Whisper-large-v3-singlish as the target. The assistant model proposes initial tokens, which are selectively verified by the target model to accelerate inference without sacrificing accuracy.

import torch
from transformers import (
    pipeline,
    AutoModelForCausalLM,
    AutoModelForSpeechSeq2Seq,
    AutoProcessor
)

TARGET_REPO_NAME = "mjwong/whisper-large-v3-singlish"
DRAFT_REPO_NAME = "mjwong/whisper-large-v3-singlish-DRAFT"

# Select appropriate device and precision
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

# Load the draft model as the assistant in speculative decoding
# (AutoModelForCausalLM loads only the Whisper decoder; the target model's
# encoder outputs are reused during assisted generation)
assistant_model = AutoModelForCausalLM.from_pretrained(
    DRAFT_REPO_NAME, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
assistant_model.to(device)

# Load the main target model (the high-accuracy decoder)
model = AutoModelForSpeechSeq2Seq.from_pretrained(
    TARGET_REPO_NAME, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)

# Load processor (tokenizer + feature extractor)
processor = AutoProcessor.from_pretrained(TARGET_REPO_NAME)

# Create the ASR pipeline with speculative decoding
pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    max_new_tokens=128,
    generate_kwargs={"assistant_model": assistant_model},
    torch_dtype=torch_dtype,
    device=device,
)

You can then use this pipeline to transcribe audio of arbitrary length.

from datasets import load_dataset
dataset = load_dataset("mjwong/SASRBench-v1", split="test")
sample = dataset[0]["audio"]

result = pipe(sample)
print(result["text"])
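For long recordings, the same pipeline can run chunked long-form inference; the path and chunk length below are illustrative.

# Chunked long-form transcription of a local audio file.
result = pipe(
    "path/to/long_recording.wav",  # placeholder path
    chunk_length_s=30,
    return_timestamps=True,
)
print(result["text"])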

Contact

For more information, please reach out to [email protected].
