Kartoffel German Logo

SmolKartoffel-135M-v0.1-GRPO

Model Overview

This text-to-speech (TTS) model was trained on a custom dataset of 7,000 hours of high-quality audio. The audio consists of permissively licensed podcasts, lectures, and other OER data.

Training Details

  • Base Model: HuggingFaceTB/SmolLM2-135M
  • Dataset: A custom dataset comprising 7,000 hours of data.
  • Compute Resources: The base training was performed using 2x RTX 3090 GPUs for 2 epochs.
  • Raw Training Time: Approximately 3 days and 2 hours, not including the data preprocessing with xcodec2 (see the sketch after this list).
  • GRPO: The GRPO training was performed using 2x RTX 3090 GPUs for ~650 steps on 5k examples.
  • GRPO Training Time: A single GRPO training run took ~19 hours.
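
The xcodec2 preprocessing mentioned above turns raw audio into the discrete speech tokens the model operates on. Below is a minimal sketch of that step, mirroring the encode_code call used in the usage examples further down; the input file name and the token formatting helper are illustrative, not part of this repository.

import torch
import torchaudio
from xcodec2.modeling_xcodec2 import XCodec2Model

# Load the XCodec2 codec (same checkpoint as in the usage examples below)
codec_model = XCodec2Model.from_pretrained("HKUST-Audio/xcodec2")
codec_model.cuda()

# Load an example clip, downmix to mono, and resample to 16 kHz
waveform, sr = torchaudio.load("example.wav")  # placeholder file name
if waveform.size(0) > 1:
    waveform = waveform.mean(dim=0, keepdim=True)
waveform = torchaudio.transforms.Resample(orig_freq=sr, new_freq=16000)(waveform)

# Encode to discrete codes (shape [1, 1, T]) and format them as <|s_N|> tokens
with torch.no_grad():
    codes = codec_model.encode_code(input_waveform=waveform)
speech_token_str = "".join(f"<|s_{c}|>" for c in codes[0, 0, :].tolist())
print(speech_token_str[:80])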

πŸ‘¨β€πŸ’» Installation

First install the following pip packages:

pip install xcodec2 torch torchaudio transformers soundfile
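
Both usage examples below move the models onto the GPU with .to('cuda') / .cuda(), so it is worth confirming that a CUDA device is visible before running them. This is a generic check, not specific to this model:

import torch

# The usage examples below assume a CUDA-capable GPU is available.
if not torch.cuda.is_available():
    raise RuntimeError("No CUDA device found - the examples below expect a GPU.")
print("Using GPU:", torch.cuda.get_device_name(0))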

πŸ› οΈ Usage

🎲 Random voice

A basic example using Hugging Face Transformers:

import os
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
import soundfile as sf

llasa_1b_german = 'SebastianBodza/SmolKartoffel-135M-v0.1-GRPO'

# Loading the model 
tokenizer = AutoTokenizer.from_pretrained(llasa_1b_german)
model = AutoModelForCausalLM.from_pretrained(llasa_1b_german)
model.to('cuda')

# Load XCodec2 model
from xcodec2.modeling_xcodec2 import XCodec2Model
model_path = "HKUST-Audio/xcodec2"
Codec_model = XCodec2Model.from_pretrained(model_path)
Codec_model.cuda()

input_text = "\"Weißt du was, Hoppi\", sagte der weise Uhu, \"manchmal ist es gar nicht so wichtig, das Ende des Regenbogens zu finden. Das Schönste ist doch, dass wir alle zusammen dieses Abenteuer erleben!\""


def extract_speech_ids(speech_tokens_str):
    speech_ids = []
    for token_str in speech_tokens_str:
        if token_str.startswith('<|s_') and token_str.endswith('|>'):
            num_str = token_str[4:-2]
            num = int(num_str)
            speech_ids.append(num)
        else:
            print(f"Unexpected token: {token_str}")
    return speech_ids

with torch.no_grad():
    formatted_text = f"<|TEXT_UNDERSTANDING_START|>{input_text}<|TEXT_UNDERSTANDING_END|>"
    
    chat = [
        {"role": "user", "content": "Convert the text to speech:" + formatted_text},
        {"role": "assistant", "content": "<|SPEECH_GENERATION_START|>"}
    ]

    input_ids = tokenizer.apply_chat_template(
        chat,
        tokenize=True,
        return_tensors='pt',
        continue_final_message=True
    )
    input_ids = input_ids.to('cuda')
    speech_end_id = tokenizer.convert_tokens_to_ids('<|SPEECH_GENERATION_END|>')

    outputs = model.generate(
        input_ids,
        max_length=2048,
        eos_token_id=speech_end_id,
        do_sample=True,
        top_p=1,
        temperature=0.8,
    )

    generated_ids = outputs[0][input_ids.shape[1]:-1]
    speech_tokens = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
    speech_tokens = extract_speech_ids(speech_tokens)
    speech_tokens = torch.tensor(speech_tokens).cuda().unsqueeze(0).unsqueeze(0)
    gen_wav = Codec_model.decode_code(speech_tokens)
    
    
sf.write("generation.wav", gen_wav[0, 0, :].cpu().numpy(), 16000)

🎯 Using a specific speaker

An example with speaker reference:

import torch
import torchaudio
import tempfile
import soundfile as sf
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

# Provide your reference audio and, optionally, its transcription
sample_audio_path = "male.wav"
sample_audio_text = None  # Set to None to use Whisper for transcription
# Input the target text here
target_text = "Und apropos Spannungen und UnfΓ€lle, in Stuttgart gibt es auch einige Schlagzeilen. Die Polizei sucht Zeugen, nachdem in der Stadt mehrere Autoscheiben eingeschlagen wurden. Und gestern kam es im Stuttgarter Osten zu einer Verfolgungsjagd mit einer jungen BMW-Fahrerin, die vor einer Polizeistreife geflΓΌchtet ist."
output_filename = "no_speaker_example.wav"


#### Do not edit below ####
llasa_model_name = "SebastianBodza/SmolKartoffel-135M-v0.1-GRPO"
tokenizer = AutoTokenizer.from_pretrained(llasa_model_name)
model = AutoModelForCausalLM.from_pretrained(llasa_model_name)
model.to("cuda")

from xcodec2.modeling_xcodec2 import XCodec2Model
codec_model_path = "HKUST-Audio/xcodec2"
Codec_model = XCodec2Model.from_pretrained(codec_model_path)
Codec_model.cuda()

whisper_turbo_pipe = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v3-turbo",
    torch_dtype=torch.float16,
    device="cuda",
)

def ids_to_speech_tokens(speech_ids):
    speech_tokens_str = []
    for speech_id in speech_ids:
        speech_tokens_str.append(f"<|s_{speech_id}|>")
    return speech_tokens_str

def extract_speech_ids(speech_tokens_str):
    # Convert generated '<|s_N|>' tokens back into integer codec IDs
    speech_ids = []
    for token_str in speech_tokens_str:
        if token_str.startswith('<|s_') and token_str.endswith('|>'):
            speech_ids.append(int(token_str[4:-2]))
        else:
            print(f"Unexpected token: {token_str}")
    return speech_ids

waveform, sample_rate = torchaudio.load(sample_audio_path)

max_secs = 15
if waveform.shape[1] / sample_rate > max_secs:
    print(f"Warning: Trimming audio to the first {max_secs} seconds.")
    waveform = waveform[:, : sample_rate * max_secs]
    waveform = torch.nn.functional.pad(waveform, (0, int(sample_rate * 0.5)), "constant", 0)

if waveform.size(0) > 1:
    waveform = torch.mean(waveform, dim=0, keepdim=True)

prompt_wav = torchaudio.transforms.Resample(orig_freq=sample_rate, new_freq=16000)(waveform)

if sample_audio_text is None:
    print("Transcribing audio...")
    transcription = whisper_turbo_pipe(prompt_wav[0].numpy())["text"].strip()  # prompt_wav is 16 kHz, as Whisper expects
else:
    transcription = sample_audio_text

print("Transcription:", transcription)

if len(target_text) == 0:
    raise ValueError("Target text must be provided!")
elif len(target_text) > 500:
    print("Text is too long; trimming to first 500 characters.")
    target_text = target_text[:500]

input_text = transcription + " " + target_text

with torch.no_grad():
    vq_code_prompt = Codec_model.encode_code(input_waveform=prompt_wav)
    vq_code_prompt = vq_code_prompt[0, 0, :]
    speech_ids_prefix = ids_to_speech_tokens(vq_code_prompt)

    formatted_text = f"<|TEXT_UNDERSTANDING_START|>{input_text}<|TEXT_UNDERSTANDING_END|>"

    chat = [
        {"role": "user", "content": "Convert the text to speech:" + formatted_text},
        {"role": "assistant", "content": "<|SPEECH_GENERATION_START|>" + "".join(speech_ids_prefix)}
        ]

    input_ids = tokenizer.apply_chat_template(chat, tokenize=True, return_tensors="pt", continue_final_message=True)
    input_ids = input_ids.to("cuda")
    speech_end_id = tokenizer.convert_tokens_to_ids("<|SPEECH_GENERATION_END|>")

    outputs = model.generate(
        input_ids,
        max_length=2048, 
        eos_token_id=speech_end_id,
        do_sample=True,
        top_p=1,
        temperature=0.8,
        min_new_tokens=4,  # prevent the model from stopping immediately
    )

    generated_ids = outputs[0][input_ids.shape[1] - len(speech_ids_prefix) : -1]

    speech_tokens = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
    speech_tokens = extract_speech_ids(speech_tokens)
    speech_tokens = torch.tensor(speech_tokens).cuda().unsqueeze(0).unsqueeze(0)

    gen_wav = Codec_model.decode_code(speech_tokens)
    gen_wav = gen_wav[:, :, prompt_wav.shape[1] :]
    sf.write(output_filename, gen_wav[0, 0, :].cpu().numpy(), 16000)

Tips

  • With a reference speaker, audio glitches can occur. Try increasing the temperature to get better results, as shown in the sketch below.
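
For example, reusing the variables from the generate call in the speaker example above, a slightly higher temperature can be passed like this; the value 1.0 is only a starting point to experiment with, not a recommended setting:

outputs = model.generate(
    input_ids,
    max_length=2048,
    eos_token_id=speech_end_id,
    do_sample=True,
    top_p=1,
    temperature=1.0,  # raised from 0.8; tune per input if glitches occur
    min_new_tokens=4,
)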

License

This project is licensed under the CC-BY-NC-4.0 license.

Acknowledgments

  • Hugging Face: Thanks for the GPU grant that made training on top of the multilingual LLaSA 1B base model possible.
  • HKUST-Audio: For open-sourcing the model and providing great inference, training, and preprocessing (xcodec2) scripts!