Quantized Whisper Large V2 with calibration on Ukrainian speech data

Quantized using llmcompressor: https://pypi.org/project/llmcompressor/

Data used for calibration: https://huggingface.co/datasets/Yehor/cv10-uk-testset-clean-punctuated

How to quantize: https://colab.research.google.com/drive/1TsCMxwq9kqsWV8jabihFN7J78RKgyvnD?usp=sharing
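
The Colab notebook above contains the exact recipe used for this checkpoint. For orientation only, a one-shot calibration run with llmcompressor roughly follows the sketch below; the GPTQ W8A8 recipe, the 256-sample calibration size, the dataset column names and the preprocessing are illustrative assumptions, not a reproduction of this model's settings.

import torch
from datasets import load_dataset
from transformers import WhisperForConditionalGeneration, WhisperProcessor

from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import GPTQModifier

MODEL_ID = "openai/whisper-large-v2"
SAVE_DIR = "whisper-large-v2-quantized-uk"

model = WhisperForConditionalGeneration.from_pretrained(MODEL_ID)
processor = WhisperProcessor.from_pretrained(MODEL_ID)

# Calibration audio: the Ukrainian set linked above (column names are assumptions).
ds = load_dataset("Yehor/cv10-uk-testset-clean-punctuated", split="train[:256]")

def preprocess(example):
    audio = example["audio"]
    return processor(
        audio=audio["array"],
        sampling_rate=audio["sampling_rate"],
        text=example["transcription"],
        return_tensors="pt",
    )

ds = ds.map(preprocess, remove_columns=ds.column_names)

def data_collator(batch):
    # Calibration samples are fed to the model one at a time.
    assert len(batch) == 1
    return {key: torch.tensor(value) for key, value in batch[0].items()}

# Assumed recipe: GPTQ weight/activation quantization of Linear layers, lm_head left alone.
recipe = GPTQModifier(targets="Linear", scheme="W8A8", ignore=["lm_head"])

oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    num_calibration_samples=256,
    data_collator=data_collator,
)

model.save_pretrained(SAVE_DIR, save_compressed=True)
processor.save_pretrained(SAVE_DIR)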

Usage

Install required packages:

pip install vllm polars
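
If vLLM reports missing audio dependencies when loading the model, installing the optional audio extra usually covers them (only needed on some installs):

pip install "vllm[audio]"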

Run inference:

import io
import wave

import numpy as np
import polars as pl

from vllm import LLM, SamplingParams


def wav_bytes_to_numpy(wav_bytes):
    """Decode 16-bit mono PCM WAV bytes into float32 samples in [-1.0, 1.0)."""
    with wave.open(io.BytesIO(wav_bytes), "rb") as wr:
        if (num_channels := wr.getnchannels()) != 1:
            raise ValueError(f"num_channels must be 1, got {num_channels}")
        if (sample_width := wr.getsampwidth()) != 2:
            raise ValueError(f"sample_width must be 2, got {sample_width}")

        audio_data = wr.readframes(wr.getnframes())

        # int16 PCM -> float32, normalized by 2**15
        return np.frombuffer(audio_data, dtype=np.int16).astype(np.float32) / 32768.0


# Load the quantized model; Whisper's decoder context is at most 448 tokens.
llm = LLM(
    model="Yehor/whisper-large-v2-quantized-uk",
    max_model_len=448,
    max_num_seqs=400,
    gpu_memory_utilization=0.8,
    limit_mm_per_prompt={"audio": 1},
)

# Evaluation set, read straight from the Hugging Face Hub (audio is stored as WAV bytes).
df = pl.read_parquet("hf://datasets/Yehor/cv10-uk-testset-clean/data/train-*.parquet")


# Greedy decoding; 200 tokens is plenty for short Common Voice utterances.
sampling_params = SamplingParams(
    temperature=0,
    top_p=1.0,
    max_tokens=200,
)

for row in df.iter_rows(named=True):
    # vLLM expects audio as a (waveform, sample_rate) tuple.
    current_sample = (
        wav_bytes_to_numpy(row["audio"]["bytes"]),
        16_000,
    )

    # Explicit encoder/decoder prompt: the audio feeds the encoder,
    # the Whisper task tokens seed the decoder.
    inputs = {
        "encoder_prompt": {
            "prompt": "",
            "multi_modal_data": {
                "audio": current_sample,
            },
        },
        "decoder_prompt": "<|startoftranscript|><|uk|><|transcribe|><|notimestamps|>",
    }

    outputs = llm.generate(inputs, sampling_params)

    print(f"PROMPT       : {outputs[0].prompt}")
    print(f"TRANSCRIPTION: {row['transcription']}")
    print(f"PREDICTION   : {outputs[0].outputs[0].text}")
    print("==========================================")