YarnGPT2
Table of Contents
- Model Summary
- Model Description
- Bias, Risks, and Limitations
- Speech Samples
- Training
- Future Improvements
- Citation
- Credits & References
Model Summary
YarnGPT2 is a text-to-speech (TTS) model that synthesizes Nigerian-accented speech in Yoruba, Igbo, Hausa, and English using pure language modelling, without external adapters or complex architectures, offering high-quality, natural, and culturally relevant speech synthesis for diverse applications.
How to use (Colab)
The model can generate audio on its own, but it is better to prompt the model with a voice:
Voices (arranged in order of performance and stability)
- English: idera, chinenye, jude, emma, umar, joke, zainab, osagie, remi, tayo
- Yoruba: yoruba_male2, yoruba_female2, yoruba_female1
- Igbo: igbo_female2, igbo_male2, igbo_female1
- Hausa: hausa_female1, hausa_female2, hausa_male2, hausa_male1
Prompt YarnGPT2
# clone the YarnGPT repo and install dependencies
!git clone https://github.com/saheedniyi02/yarngpt.git
!pip install outetts uroman
import os
import re
import json
import torch
import inflect
import random
import uroman as ur
import numpy as np
import torchaudio
import IPython
from transformers import AutoModelForCausalLM, AutoTokenizer
from outetts.wav_tokenizer.decoder import WavTokenizer
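# download the WavTokenizer config (yaml) and model checkpoint (ckpt)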
!wget https://huggingface.co/novateur/WavTokenizer-medium-speech-75token/resolve/main/wavtokenizer_mediumdata_frame75_3s_nq1_code4096_dim512_kmeans200_attn.yaml
!gdown 1-ASeEkrn4HY49yZWHTASgfGFNXdVnLTt
from yarngpt.audiotokenizer import AudioTokenizerV2
tokenizer_path = "saheedniyi/YarnGPT2"
wav_tokenizer_config_path = "/content/wavtokenizer_mediumdata_frame75_3s_nq1_code4096_dim512_kmeans200_attn.yaml"
wav_tokenizer_model_path = "/content/wavtokenizer_large_speech_320_24k.ckpt"
audio_tokenizer = AudioTokenizerV2(
    tokenizer_path, wav_tokenizer_model_path, wav_tokenizer_config_path
)
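# load the fine-tuned model (SmolLM2-360M backbone) onto the same device as the audio tokenizer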
model = AutoModelForCausalLM.from_pretrained(tokenizer_path, torch_dtype="auto").to(audio_tokenizer.device)
# change the text to whatever you want to synthesize
text = "The election was won by businessman and politician, Moshood Abiola, but Babangida annulled the results, citing concerns over national security."
# change the language and voice
prompt = audio_tokenizer.create_prompt(text, lang="english", speaker_name="idera")
input_ids = audio_tokenizer.tokenize_prompt(prompt)
output = model.generate(
    input_ids=input_ids,
    temperature=0.1,
    repetition_penalty=1.1,
    max_length=4000,
    # num_beams=5,  # beam search helps for the local languages but not English
)
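# convert the generated tokens to audio codes, then decode to a waveform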
codes = audio_tokenizer.get_codes(output)
audio = audio_tokenizer.get_audio(codes)
IPython.display.Audio(audio, rate=24000)
torchaudio.save("Sample.wav", audio, sample_rate=24000)
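The same pipeline works for the local languages; as the commented-out num_beams line above notes, beam search helps there. A minimal sketch reusing the objects defined above (the sample text and voice come from the Speech Samples table below):
# Yoruba example: same pipeline, with beam search enabled
yoruba_text = "Habeeb Okikiọla Olalomi Badmus ti ọpọ awọn ololufẹ rẹ mọ si Portable ti sọ fun ile ẹjọ majisireeti ti ipinlẹ Ogun wi pe ṣaka lara oun da, oun ko ni aisan tabi arun kankan lara."
prompt = audio_tokenizer.create_prompt(yoruba_text, lang="yoruba", speaker_name="yoruba_male2")
input_ids = audio_tokenizer.tokenize_prompt(prompt)
output = model.generate(
    input_ids=input_ids,
    temperature=0.1,
    repetition_penalty=1.1,
    max_length=4000,
    num_beams=5,  # beam search improves output for the local languages
)
audio = audio_tokenizer.get_audio(audio_tokenizer.get_codes(output))
torchaudio.save("sample_yoruba.wav", audio, sample_rate=24000)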
Model Description
- Developed by: Saheedniyi
- Model type: Text-to-Speech
- Language(s): Yoruba, Igbo, Hausa, and Nigerian-accented English
- Finetuned from: HuggingFaceTB/SmolLM2-360M
- Repository: YarnGPT GitHub Repository
- Paper: in progress
- Demo: 1) Prompt YarnGPT2 notebook 2) Simple news reader
Uses
Generate Nigerian-accented speech in English, Yoruba, Igbo, and Hausa for experimental purposes.
Out-of-Scope Use
The model is not suitable for generating speech in languages other than English, Yoruba, Igbo, and Hausa, or for accents outside those represented in the training data.
Bias, Risks, and Limitations
The model may not capture the full diversity of Nigerian accents and could exhibit biases based on the training dataset. In addition, much of the text the model was trained on was automatically generated, which may affect performance.
Recommendations
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. Feedback and diverse training data contributions are encouraged.
Speech Samples
Listen to samples generated by YarnGPT2 (audio players are embedded on the model card page):
Input | Notes
---|---
Uhm, so, what was the inspiration behind your latest project? Like, was there a specific moment where you were like, 'Yeah, this is it!' Or, you know, did it just kind of, uh, come together naturally over time | (temperature=0.1, repetition_penalty=1.1), language: english, voice: idera
The election was won by businessman and politician, Moshood Abiola, but Babangida annulled the results, citing concerns over national security. | (temperature=0.1, repetition_penalty=1.1), language: english, voice: zainab
Habeeb Okikiọla Olalomi Badmus ti ọpọ awọn ololufẹ rẹ mọ si Portable ti sọ fun ile ẹjọ majisireeti ti ipinlẹ Ogun wi pe ṣaka lara oun da, oun ko ni aisan tabi arun kankan lara. | (temperature=0.1, repetition_penalty=1.1), language: yoruba, voice: yoruba_male2
Gómìnà náà fẹ̀sùn kàn pé àwọn alága àná gbìyànjú láti fi ipá gba àwọn ìjọba ìbílẹ̀ lọ́nà àìtọ́, tó sì jẹ̀ pé ó yẹ kí àwọn ìjọba ìbílẹ̀ náà wà ní títì | (temperature=0.1, repetition_penalty=1.1), language: yoruba, voice: yoruba_female2
Ọ bụ oge ha si Enugwu steeti eme njem aga Anambra ka ndị omekome ahụ wakporo ụgbọala ha. | (temperature=0.1, repetition_penalty=1.1), language: igbo, voice: igbo_male2
Isi ụlọorụ Shell dị na Lọndọn na gọọmenti Naịjirịa ekwuputala ugboro ugboro na ọrụ ọsacha ogbe ndị lara n'iyi n'Ogoni bụ nke malitere ihe dịka afọ asatọ gara aga na-aga nke ọma. | (temperature=0.1, repetition_penalty=1.1), language: igbo, voice: igbo_female1
Gwamnatin Najeriya ta sake maka shafin hada-hadar kuɗin kirifto na Binance a kotu, inda take buƙatar ya biya ta diyyar kuɗi dalar Amurka biliyan 81.5 | (temperature=0.1, repetition_penalty=1.1), language: hausa, voice: hausa_female1
Bisa ga dukkan alamu, haƙata cimma ruwa, dangane da koke-koken da tsofaffin ma'aikatan tarayya ke ta yi, a kan dimbin basukan wasu hakkokinsu da suke bi shekara da shekaru. | (temperature=0.1, repetition_penalty=1.1), language: hausa, voice: hausa_male2
Training
Data
Trained on a dataset of publicly available Nigerian movies, podcasts (using the subtitle-audio pairs), and open-source Nigerian-related audio data on Hugging Face.
Preprocessing
Audio files were preprocessed, resampled to 24 kHz, and tokenized using WavTokenizer.
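The full preprocessing pipeline is not published; below is a minimal sketch of the resampling step using torchaudio (file names are placeholders):
import torchaudio

# load a raw clip and resample it to the 24 kHz rate the tokenizer expects
waveform, sr = torchaudio.load("raw_clip.wav")  # placeholder input file
if sr != 24000:
    waveform = torchaudio.transforms.Resample(orig_freq=sr, new_freq=24000)(waveform)
torchaudio.save("clip_24khz.wav", waveform, sample_rate=24000)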
Training Hyperparameters
- Number of epochs: 5
- Batch size: 4
- Scheduler: linear warmup for the first 4 epochs, then linear decay to zero over the last epoch
- Optimizer: AdamW (betas=(0.9, 0.95), weight_decay=0.01)
- Learning rate: 1e-3
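A sketch of how these settings map to code, assuming the Hugging Face transformers scheduler helper; steps_per_epoch is hypothetical and depends on the dataset size:
import torch
from transformers import AutoModelForCausalLM, get_linear_schedule_with_warmup

model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM2-360M")  # base model per the description above
steps_per_epoch = 1000  # hypothetical; the true value depends on dataset size and batch_size=4
num_epochs = 5

optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-3, betas=(0.9, 0.95), weight_decay=0.01
)
# linear warmup over the first 4 epochs, then linear decay to zero over the last epoch
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=4 * steps_per_epoch,
    num_training_steps=num_epochs * steps_per_epoch,
)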
Hardware
- GPUs: 1× A100 (Google Colab: 50 hours)
Software
- Training Framework: PyTorch
Future Improvements
- Scaling up the model size and the amount of human-annotated/reviewed training data
- Wrapping the model in an API endpoint
- Voice cloning
- Potential expansion into speech-to-speech assistant models
Citation
BibTeX:
@misc{yarngpt2025,
  author = {Saheed Azeez},
  title = {YarnGPT: Nigerian-Accented English Text-to-Speech Model},
  year = {2025},
  publisher = {Hugging Face},
  url = {https://huggingface.co/SaheedAzeez/yarngpt}
}
APA:
Saheed Azeez. (2025). YarnGPT: Nigerian-Accented English Text-to-Speech Model. Hugging Face. https://huggingface.co/saheedniyi/YarnGPT
Credits & References