PHBJT committed (verified) · commit fcdb460 · 1 parent: 9d16de2

Update README.md

Files changed (1): README.md (+99 −0)
@@ -503,4 +503,103 @@ configs:
   path: spanish/dev-*
   - split: test
   path: spanish/test-*
+ license: cc-by-4.0
+ task_categories:
+ - text-to-speech
+ language:
+ - fr
+ - de
+ - it
+ - es
+ - pl
+ - pt
+ - nl
  ---
+
+
+ # Dataset Card for Filtered and Annotated CML-TTS
+
+
+ **This dataset is an annotated and filtered version of [CML-TTS](https://huggingface.co/datasets/ylacombe/cml-tts) [1].**
+
+
+ [CML-TTS](https://huggingface.co/datasets/ylacombe/cml-tts) [1] is a recursive acronym for CML-Multi-Lingual-TTS, a Text-to-Speech (TTS) dataset developed at the Center of Excellence in Artificial Intelligence (CEIA) of the Federal University of Goiás (UFG). It comprises audiobooks sourced from the public-domain books of Project Gutenberg, read by volunteers from the LibriVox project. The dataset includes recordings in Dutch, German, French, Italian, Polish, Portuguese, and Spanish, all at a sampling rate of 24 kHz.
+
+ The [original dataset](https://huggingface.co/datasets/ylacombe/cml-tts) has been [cleaned](https://huggingface.co/datasets/PHBJT/cml-tts-cleaned-levenshtein) by removing all rows with a Levenshtein score below 0.9.
+ The `text_description` column provides natural language annotations on the characteristics of speakers and utterances, generated using [the Data-Speech repository](https://github.com/huggingface/dataspeech).
+
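The 0.9 cutoff refers to a normalized Levenshtein similarity between the reference transcript and a recognition hypothesis. A minimal sketch of such a filter is shown below; the function names and the exact normalization are illustrative assumptions, not the actual cleaning script used for this dataset.

```python
# Sketch of a Levenshtein-ratio filter (illustrative only; the actual
# cleaning pipeline for this dataset may differ in its normalization).

def levenshtein(a: str, b: str) -> int:
    """Classic edit distance via dynamic programming (two-row variant)."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def similarity(a: str, b: str) -> float:
    """Normalized similarity in [0, 1]; 1.0 means identical strings."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

def keep_row(transcript: str, hypothesis: str, threshold: float = 0.9) -> bool:
    """Keep only rows whose transcript/hypothesis similarity clears the cutoff."""
    return similarity(transcript, hypothesis) >= threshold
```

With this scheme, a row survives only when at most about 10% of its characters differ between the two strings.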
+ This dataset was used alongside the [LibriTTS-R English dataset](https://huggingface.co/datasets/blabble-io/libritts_r) and the [non-English subset of MLS](https://huggingface.co/datasets/facebook/multilingual_librispeech) to train [Parler-TTS Multilingual Mini v1.1](https://huggingface.co/ylacombe/p-m-e).
+ A training recipe is available in [the Parler-TTS library](https://github.com/huggingface/parler-tts).
+
+
+ ## Motivation
+
+ This dataset is a reproduction of work from the paper [Natural language guidance of high-fidelity text-to-speech with synthetic annotations](https://www.text-description-to-speech.com) by Dan Lyth and Simon King, from Stability AI and the University of Edinburgh respectively.
+ It was designed to fine-tune [Parler-TTS Mini v1.1](https://huggingface.co/parler-tts/parler-tts-mini-v1) on 8 European languages (including English).
+
+ Unlike other TTS models, Parler-TTS is a **fully open-source** release. All of the datasets, pre-processing, training code and weights are released publicly under permissive licenses, enabling the community to build on our work and develop their own powerful TTS models.
+ Parler-TTS was released alongside:
+ * [The Parler-TTS repository](https://github.com/huggingface/parler-tts) - you can train and fine-tune your own version of the model.
+ * [The Data-Speech repository](https://github.com/huggingface/dataspeech) - a suite of utility scripts designed to annotate speech datasets.
+ * [The Parler-TTS organization](https://huggingface.co/parler-tts) - where you can find the annotated datasets as well as future checkpoints.
+
+
+ ## Usage
+
+ Here is an example of how to load the `french` config with only the `train` split:
+
+ ```py
+ from datasets import load_dataset
+
+ load_dataset("PHBJT/cml-tts-cleaned-levenshtein", "french", split="train")
+ ```
+
+
+ **Note:** This dataset doesn't keep track of the audio column of the original version. You can merge it back into the original dataset using [this script](https://github.com/huggingface/dataspeech/blob/main/scripts/merge_audio_to_metadata.py) from Data-Speech or, even better, take inspiration from [the training script](https://github.com/huggingface/parler-tts/blob/main/training/run_parler_tts_training.py) of Parler-TTS, which efficiently processes multiple annotated datasets.
+ You can find the original dataset [here](https://huggingface.co/datasets/PHBJT/cml-tts-cleaned-levenshtein).
+
+ ### Dataset Description
+
+ - **License:** CC BY 4.0
+
+ ### Dataset Sources
+
+ - **Homepage:** https://www.openslr.org/141/
+ - **Paper:** https://arxiv.org/abs/2305.18802
+
+
+ ## Citation
+
+ <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
+
+ ```
+ @misc{oliveira2023cmltts,
+ title={CML-TTS A Multilingual Dataset for Speech Synthesis in Low-Resource Languages},
+ author={Frederico S. Oliveira and Edresson Casanova and Arnaldo Cândido Júnior and Anderson S. Soares and Arlindo R. Galvão Filho},
+ year={2023},
+ eprint={2306.10097},
+ archivePrefix={arXiv},
+ primaryClass={eess.AS}
+ }
+ ```
+
+ ```
+ @misc{lacombe-etal-2024-dataspeech,
+ author = {Yoach Lacombe and Vaibhav Srivastav and Sanchit Gandhi},
+ title = {Data-Speech},
+ year = {2024},
+ publisher = {GitHub},
+ journal = {GitHub repository},
+ howpublished = {\url{https://github.com/ylacombe/dataspeech}}
+ }
+ ```
+
+ ```
+ @misc{lyth2024natural,
+ title={Natural language guidance of high-fidelity text-to-speech with synthetic annotations},
+ author={Dan Lyth and Simon King},
+ year={2024},
+ eprint={2402.01912},
+ archivePrefix={arXiv},
+ primaryClass={cs.SD}
+ }
+ ```