---
dataset_info:
  features:
    - name: speaker
      dtype: string
    - name: prompt_text
      dtype: string
    - name: chosen_text
      dtype: string
    - name: rejected_text
      dtype: string
    - name: prompt
      dtype: audio
    - name: chosen
      dtype: audio
    - name: rejected
      dtype: audio
    - name: auto_bleu2
      dtype: float64
  splits:
    - name: validation
      num_bytes: 12199479621.038
      num_examples: 20006
    - name: train
      num_bytes: 28797300145.392
      num_examples: 47928
  download_size: 36106016770
  dataset_size: 40996779766.43
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
license: mit
task_categories:
  - audio-to-audio
language:
  - en
size_categories:
  - 10K<n<100K
---

# SpokenSwag

We present SpokenSwag, as described in the paper "Slamming: Training a Speech Language Model on One GPU in a Day". This dataset is based on allenai/swag and synthesised with 4 speakers from hexgrad/Kokoro-82M. We show that performing DPO over this dataset notably improves the performance of Speech Language Models. We encourage you to also see the following resources for further information:

- Project Page: https://pages.cs.huji.ac.il/adiyoss-lab/slamming/
- Paper: https://arxiv.org/abs/2502.15814
- Code: https://github.com/slp-rl/slamkit

If you use our dataset, please cite the paper as follows:

```bibtex
@misc{maimon2025slamming,
      title={Slamming: Training a Speech Language Model on One GPU in a Day},
      author={Gallil Maimon and Avishai Elmakies and Yossi Adi},
      year={2025},
      eprint={2502.15814},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2502.15814},
}
```

## Dataset Summary

A dataset used for post-training spoken language models with DPO, which was shown to notably improve their semantic abilities. Specifically, the dataset is based on the text-only dataset allenai/swag, taking the correct answer as the chosen continuation and a random wrong answer as the rejected one. These texts were then synthesised into speech with the TTS model hexgrad/Kokoro-82M, using 4 speakers - 2 male and 2 female. We generate both train and validation splits from the original dataset.
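For intuition, here is a minimal sketch (not the authors' exact pipeline) of how such preference pairs can be derived from SWAG, assuming the field names (`startphrase`, `ending0`-`ending3`, `label`) from the allenai/swag card:

```python
import random
from datasets import load_dataset

# Load the text-only source dataset; the "regular" config includes gold labels.
swag = load_dataset("allenai/swag", "regular", split="train")

def to_preference_pair(example):
    # The gold ending becomes the chosen continuation; a randomly drawn
    # incorrect ending becomes the rejected one.
    endings = [example[f"ending{i}"] for i in range(4)]
    gold = example["label"]
    wrong = random.choice([i for i in range(4) if i != gold])
    return {
        "prompt_text": example["startphrase"],
        "chosen_text": endings[gold],
        "rejected_text": endings[wrong],
    }

pairs = swag.map(to_preference_pair)
```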

## Download

### Using 🤗 Datasets

```python
from datasets import load_dataset

# entire dataset
spoken_swag = load_dataset('slprl/SpokenSwag')
```
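
The full download is roughly 36 GB, so streaming can be handy for a quick look. A small sketch using the standard 🤗 Datasets streaming API:

```python
from datasets import load_dataset

# Stream instead of downloading the full ~36 GB to disk.
spoken_swag = load_dataset('slprl/SpokenSwag', split='validation', streaming=True)

example = next(iter(spoken_swag))
print(example['speaker'], '|', example['prompt_text'])
# Audio fields decode to {'array', 'sampling_rate', 'path'}.
print(example['chosen']['sampling_rate'], len(example['chosen']['array']))
```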

We refer you to the SlamKit codebase to see how to train a SpeechLM with DPO over this dataset.
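
The DPO objective itself is the standard one of Rafailov et al. (2023), here applied to paired spoken continuations. The sketch below shows the loss over summed sequence log-probabilities; it is illustrative only, not the SlamKit implementation:

```python
import torch.nn.functional as F
from torch import Tensor

def dpo_loss(policy_chosen_logps: Tensor, policy_rejected_logps: Tensor,
             ref_chosen_logps: Tensor, ref_rejected_logps: Tensor,
             beta: float = 0.1) -> Tensor:
    """Standard DPO loss; each input is a (batch,) tensor of sequence log-probs."""
    # Implicit rewards are log-prob ratios against the frozen reference model.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the margin between chosen and rejected continuations.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```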

## Data Fields

The data has several fields:

- speaker: One of the Kokoro voices - https://huggingface.co/hexgrad/Kokoro-82M/blob/main/VOICES.md
- prompt_text: The text of the prompt recording.
- chosen_text: The text of the chosen recording.
- rejected_text: The text of the rejected recording.
- prompt: The prompt audio sample
  - array: array of audio samples
  - sampling_rate: audio sampling rate
  - path: path to the saved audio file
- chosen: The chosen audio sample
  - array: array of audio samples
  - sampling_rate: audio sampling rate
  - path: path to the saved audio file
- rejected: The rejected audio sample
  - array: array of audio samples
  - sampling_rate: audio sampling rate
  - path: path to the saved audio file
- auto_bleu2: The Auto-BLEU score computed with bi-grams, used to detect and filter repetitive samples (see the sketch after this list)
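
As a rough illustration of the filtering signal, here is a simple bigram repetition score in the spirit of Auto-BLEU: the fraction of a transcript's bigrams that also occur elsewhere in the same transcript. The exact metric behind `auto_bleu2` may differ:

```python
def auto_bleu2(text: str) -> float:
    """Fraction of bigrams that repeat within the text (1.0 = fully repetitive)."""
    tokens = text.split()
    bigrams = list(zip(tokens, tokens[1:]))
    if not bigrams:
        return 0.0
    repeated = sum(1 for i, bg in enumerate(bigrams)
                   if bg in bigrams[:i] or bg in bigrams[i + 1:])
    return repeated / len(bigrams)

print(auto_bleu2("the cat sat the cat sat the cat sat"))   # 1.0 - highly repetitive
print(auto_bleu2("a quick brown fox jumps over the dog"))  # 0.0 - no repeats
```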