YarnGPT


Table of Contents

  1. Model Summary
  2. Model Description
  3. Bias, Risks, and Limitations
  4. Speech Samples
  5. Training
  6. Future Improvements
  7. Citation
  8. Credits & References

Model Summary

YarnGPT is a text-to-speech (TTS) model that synthesizes Nigerian-accented English using pure language modelling, without external adapters or complex architectures, offering high-quality, natural, and culturally relevant speech synthesis for diverse applications.

How to use (Colab)

The model can generate audio on its own, but it is better to prompt it with a voice (see the short example after the list below). There are 11 voices supported by default (6 male and 5 female):

  • zainab
  • jude
  • tayo
  • remi
  • idera (default and best voice)
  • regina
  • chinenye
  • umar
  • osagie
  • joke
  • emma (the names do not correlate to any tribe or accent)
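The speaker name is an optional argument when creating a prompt; if it is omitted, a voice is chosen at random. A minimal sketch, assuming the `audio_tokenizer` object constructed in the full example below:

# explicit voice
prompt = audio_tokenizer.create_prompt("How una dey?", "idera")

# no voice given: one of the 11 voices is chosen at random
prompt = audio_tokenizer.create_prompt("How una dey?")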

Prompt YarnGPT

# clone the YarnGPT repo to get access to the `audiotokenizer`
!git clone https://github.com/saheedniyi02/yarngpt.git


# install some necessary libraries
!pip install outetts==0.2.3 uroman

#import some important packages 
import os
import re
import json
import torch
import inflect
import random
import uroman as ur
import numpy as np
import torchaudio
import IPython
from transformers import AutoModelForCausalLM, AutoTokenizer
from outetts.wav_tokenizer.decoder import WavTokenizer
from yarngpt.audiotokenizer import AudioTokenizer


# download the wavtokenizer weights and config (to encode and decode the audio)
!wget https://huggingface.co/novateur/WavTokenizer-medium-speech-75token/resolve/main/wavtokenizer_mediumdata_frame75_3s_nq1_code4096_dim512_kmeans200_attn.yaml
!wget https://huggingface.co/novateur/WavTokenizer-large-speech-75token/resolve/main/wavtokenizer_large_speech_320_24k.ckpt

# model path and wavtokenizer weight paths (these paths assume Google Colab; a different environment might save the weights to a different location)
hf_path="saheedniyi/YarnGPT"
wav_tokenizer_config_path="/content/wavtokenizer_mediumdata_frame75_3s_nq1_code4096_dim512_kmeans200_attn.yaml"
wav_tokenizer_model_path = "/content/wavtokenizer_large_speech_320_24k.ckpt"

# create the AudioTokenizer object 
audio_tokenizer=AudioTokenizer(
    hf_path,wav_tokenizer_model_path,wav_tokenizer_config_path
)

#load the model weights

model = AutoModelForCausalLM.from_pretrained(hf_path,torch_dtype="auto").to(audio_tokenizer.device)

# your input text
text="Uhm, so, what was the inspiration behind your latest project? Like, was there a specific moment where you were like, 'Yeah, this is it!' Or, you know, did it just kind of, uh, come together naturally over time?"

# create a prompt; `create_prompt` takes an optional `speaker_name` parameter
# possible speakers: "idera", "emma", "jude", "osagie", "tayo", "zainab", "joke", "regina", "remi", "umar", "chinenye"
# if no speaker is selected, one is chosen at random
prompt=audio_tokenizer.create_prompt(text,"idera")

# tokenize the prompt
input_ids=audio_tokenizer.tokenize_prompt(prompt)

# generate output from the model, you can tune the `.generate` parameters as you wish
output  = model.generate(
            input_ids=input_ids,
            temperature=0.1,
            repetition_penalty=1.1,
            max_length=4000,
        )

# convert the output to "audio codes"
codes=audio_tokenizer.get_codes(output)

# converts the codes to audio 
audio=audio_tokenizer.get_audio(codes)

# play the audio
IPython.display.Audio(audio,rate=24000)

# save the audio 
torchaudio.save(f"audio.wav", audio, sample_rate=24000)

Simple Nigerian Accented-NewsReader

!git clone https://github.com/saheedniyi02/yarngpt.git

# install some necessary libraries
!pip install outetts uroman trafilatura pydub

import os
import re
import json
import torch
import inflect
import random
import requests
import trafilatura
import uroman as ur
import numpy as np
import torchaudio
import IPython
from pydub import AudioSegment
from pydub.effects import normalize
from transformers import AutoModelForCausalLM, AutoTokenizer
from outetts.wav_tokenizer.decoder import WavTokenizer


!wget https://huggingface.co/novateur/WavTokenizer-medium-speech-75token/resolve/main/wavtokenizer_mediumdata_frame75_3s_nq1_code4096_dim512_kmeans200_attn.yaml
!wget https://huggingface.co/novateur/WavTokenizer-large-speech-75token/resolve/main/wavtokenizer_large_speech_320_24k.ckpt

from yarngpt.audiotokenizer import AudioTokenizer

tokenizer_path="saheedniyi/YarnGPT"
wav_tokenizer_config_path="/content/wavtokenizer_mediumdata_frame75_3s_nq1_code4096_dim512_kmeans200_attn.yaml"
wav_tokenizer_model_path = "/content/wavtokenizer_large_speech_320_24k.ckpt"



audio_tokenizer=AudioTokenizer(
    tokenizer_path,wav_tokenizer_model_path,wav_tokenizer_config_path
       )


model = AutoModelForCausalLM.from_pretrained(tokenizer_path,torch_dtype="auto").to(audio_tokenizer.device)


def split_text_into_chunks(text, word_limit=25):
  """ 
  Function to split a long web page into reasonable chunks
  """
  sentences=[sentence.strip() for sentence in text.split('.') if sentence.strip()]
  chunks=[]
  for sentence in sentences:
    chunks.append(".")
    sentence_splitted=sentence.split(" ")
    num_words=len(sentence_splitted)
    start_index=0
    if num_words>word_limit:
      while start_index<num_words:
        end_index=min(num_words,start_index+word_limit)
        chunks.append(" ".join(sentence_splitted[start_index:start_index+word_limit]))
        start_index=end_index
    else:
      chunks.append(sentence)
  return chunks
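# illustrative check of the chunking behaviour: short sentences stay whole,
# sentences longer than `word_limit` are split into <=25-word pieces, and a
# "." marker precedes every sentence (used later to insert silence)
example_text = "Burna Boy is a Nigerian artiste. " + " ".join(["word"] * 30) + "."
for piece in split_text_into_chunks(example_text):
    print(repr(piece))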

#Extracting the content of a webpage
page=requests.get("https://punchng.com/expensive-feud-how-burna-boy-cubana-chief-priests-fight-led-to-dollar-rain/")
content=trafilatura.extract(page.text)
chunks=split_text_into_chunks(content)

#Looping over the chunks and creating a large `all_codes` list
all_codes=[]
for i,chunk in enumerate(chunks):
  print(i)
  print("\n")
  print(chunk)
  if chunk==".":
    #add roughly 0.25 seconds of silence (20 codes at ~75 codes/s) when we encounter a full stop
    all_codes.extend([453]*20)
  else:
    prompt=audio_tokenizer.create_prompt(chunk,"chinenye")
    input_ids=audio_tokenizer.tokenize_prompt(prompt)
    output  = model.generate(
            input_ids=input_ids,
            temperature=0.1,
            repetition_penalty=1.1,
            max_length=4000,
        )
    codes=audio_tokenizer.get_codes(output)
    all_codes.extend(codes)


# Converting to audio
audio=audio_tokenizer.get_audio(all_codes)
IPython.display.Audio(audio,rate=24000)
torchaudio.save(f"news1.wav", audio, sample_rate=24000)

Model Description

Uses

Generate Nigerian-accented English speech for experimental purposes.

Out-of-Scope Use

The model is not suitable for generating speech in languages other than English or in accents other than Nigerian-accented English.

Bias, Risks, and Limitations

The model may not capture the full diversity of Nigerian accents and could exhibit biases based on the training dataset. In addition, much of the text the model was trained on was automatically generated, which could impact performance.

Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. Feedback and diverse training data contributions are encouraged.

Speech Samples

Listen to samples generated by YarnGPT:

  • Input: "Hello world! I am Saheed Azeez and I am excited to announce the release of his project, I have been gathering data and learning how to build Audio-based models over the last two months, but thanks to God, I have been able to come up with something" (temperature=0.1, repetition_penalty=1.1; voice: idera)
  • Input: "Wizkid, Davido, Burna Boy perform at same event in Lagos. This event has sparked many reactions across social media, with fans and critics alike praising the artistes' performances and the rare opportunity to see the three music giants on the same stage." (temperature=0.1, repetition_penalty=1.1; voice: jude)
  • Input: "Since Nigeria became a republic in 1963, 14 individuals have served as head of state of Nigeria under different titles. The incumbent president Bola Tinubu is the nation's 16th head of state." (temperature=0.1, repetition_penalty=1.1; voice: zainab; the model struggled to pronounce "in 1963")
  • Input: "I visited the President, who has shown great concern for the security of Plateau State, especially considering that just a year ago, our state was in mourning. The President’s commitment to addressing these challenges has been steadfast." (temperature=0.1, repetition_penalty=1.1; voice: emma)
  • Input: "Scientists have discovered a new planet that may be capable of supporting life!" (temperature=0.1, repetition_penalty=1.1)

Training

Data

Trained on a dataset of publicly available Nigerian movies, podcasts (using the subtitle-audio pairs), and open-source Nigerian-related audio data on Hugging Face.

Preprocessing

Audio files were preprocessed, resampled to 24 kHz, and tokenized using WavTokenizer.
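A minimal sketch of the resampling step using torchaudio; the file names are illustrative and the actual preprocessing pipeline is not published.

import torchaudio

# load an audio file and resample it to the 24 kHz rate expected by WavTokenizer
waveform, sample_rate = torchaudio.load("clip.wav")
resampler = torchaudio.transforms.Resample(orig_freq=sample_rate, new_freq=24000)
waveform_24k = resampler(waveform)
torchaudio.save("clip_24k.wav", waveform_24k, sample_rate=24000)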

Training Hyperparameters

  • Number of epochs: 5
  • Batch size: 4
  • Scheduler: linear schedule with warmup for 4 epochs, then linear decay to zero over the last epoch
  • Optimizer: AdamW (betas=(0.9, 0.95), weight_decay=0.01)
  • Learning rate: 1e-3 (a minimal sketch of this setup is shown below)
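A minimal sketch of this optimizer and scheduler configuration in PyTorch, assuming `model` is the YarnGPT causal LM being trained; the step counts are illustrative, since the dataset size is not published.

import torch
from transformers import get_linear_schedule_with_warmup

# illustrative step counts: 4 of 5 epochs for warmup, the last epoch for decay
steps_per_epoch = 1000  # placeholder value
total_steps = 5 * steps_per_epoch

optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-3, betas=(0.9, 0.95), weight_decay=0.01
)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=4 * steps_per_epoch,
    num_training_steps=total_steps,
)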

Hardware

  • GPUs: 1× A100 (Google Colab, 50 hours)

Software

  • Training framework: PyTorch

Future Improvements

  • Scaling up the model size and using more human-annotated/reviewed training data
  • Wrapping the model in an API endpoint
  • Adding support for local Nigerian languages
  • Voice cloning
  • Potential expansion into speech-to-speech assistant models

Citation

BibTeX:

@misc{yarngpt2025,
  author = {Saheed Azeez},
  title = {YarnGPT: Nigerian-Accented English Text-to-Speech Model},
  year = {2025},
  publisher = {Hugging Face},
  url = {https://huggingface.co/saheedniyi/YarnGPT}
}

APA:

Saheed Azeez. (2025). YarnGPT: Nigerian-Accented English Text-to-Speech Model. Hugging Face. Available at: https://huggingface.co/saheedniyi/YarnGPT

Credits & References
