---
license: cc-by-4.0
task_categories:
- audio-classification
language:
- de
- en
- es
- fr
- it
- nl
- pl
- sv
tags:
- speech
- speech-classification
- text-to-speech
- spoofing
- multilingualism

pretty_name: FLEURS-HS
size_categories:
- 10K<n<100K
---

# FLEURS-HS

An extension of the [FLEURS](https://huggingface.co/datasets/google/fleurs) dataset for synthetic speech detection using text-to-speech, featured in the paper **Synthetic speech detection with Wav2Vec 2.0 in various language settings**.

This dataset is 1 of 3 used in the paper, the others being:
- [FLEURS-HS VITS](https://huggingface.co/datasets/realnetworks-kontxt/fleurs-hs-vits)
  - test set containing (generally) more difficult synthetic samples
  - separated due to different licensing
- [ARCTIC-HS](https://huggingface.co/datasets/realnetworks-kontxt/arctic-hs)
  - extension of the [CMU_ARCTIC](http://festvox.org/cmu_arctic/) and [L2-ARCTIC](https://psi.engr.tamu.edu/l2-arctic-corpus/) sets in a similar manner

## Dataset Details

### Dataset Description

The dataset features 8 languages originally seen in FLEURS:

- German
- English
- Spanish
- French
- Italian
- Dutch
- Polish
- Swedish

The original FLEURS samples are used as `human` samples, while `synthetic` samples are generated using:

- [Google Cloud Text-To-Speech](https://cloud.google.com/text-to-speech)
- [Azure Text-To-Speech](https://azure.microsoft.com/en-us/products/ai-services/text-to-speech)
- [Amazon Polly](https://aws.amazon.com/polly/)

The resulting dataset features roughly twice as many samples per language as FLEURS (most `human` samples have a `synthetic` counterpart).


- **Curated by:** [KONTXT by RealNetworks](https://realnetworks.com/kontxt)
- **Funded by:** [RealNetworks](https://realnetworks.com/)
- **Language(s) (NLP):** English, German, Spanish, French, Italian, Dutch, Polish, Swedish
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) for the code, [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) for the dataset

### Dataset Sources

The original FLEURS dataset was downloaded from [HuggingFace](https://huggingface.co/datasets/google/fleurs).

- **FLEURS Repository:** [HuggingFace](https://huggingface.co/datasets/google/fleurs)
- **FLEURS Paper:** [arXiv](https://arxiv.org/abs/2205.12446)

- **Paper:** Synthetic speech detection with Wav2Vec 2.0 in various language settings

## Uses

This dataset is best used to train synthetic speech detection. Each sample contains an `Audio` feature, and a label: `human` or `synthetic`.

### Direct Use

The following snippet of code demonstrates loading the training split for English:

```python
from datasets import load_dataset

fleurs_hs = load_dataset(
    "realnetworks-kontxt/fleurs-hs",
    "en_us",
    split="train",
    trust_remote_code=True,
)
```

To load a different language, change `en_us` into one of the following:
- `de_de` for German
- `es_419` for Spanish
- `fr_fr` for French
- `it_it` for Italian
- `nl_nl` for Dutch
- `pl_pl` for Polish
- `sv_se` for Swedish

To load a different split, change the `split` value to `dev` or `test`.
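The language-to-config mapping above can be collected into a small helper; a minimal sketch (the `load_split` function is a hypothetical convenience wrapper, not part of the dataset's API):

```python
# Mapping from language name to FLEURS-HS config name, as listed above.
CONFIGS = {
    "German": "de_de",
    "English": "en_us",
    "Spanish": "es_419",
    "French": "fr_fr",
    "Italian": "it_it",
    "Dutch": "nl_nl",
    "Polish": "pl_pl",
    "Swedish": "sv_se",
}


def load_split(language, split="train"):
    """Load one language/split pair of FLEURS-HS (hypothetical helper)."""
    # Imported lazily so the mapping is usable without `datasets` installed.
    from datasets import load_dataset

    return load_dataset(
        "realnetworks-kontxt/fleurs-hs",
        CONFIGS[language],
        split=split,
        trust_remote_code=True,
    )
```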

The `trust_remote_code=True` parameter is necessary because this dataset uses a custom loader. To see exactly which code is run, check out the [loading script](./fleurs-hs.py).

## Dataset Structure

The dataset's data is contained in the [data directory](https://huggingface.co/datasets/realnetworks-kontxt/fleurs-hs/tree/main/data).

There is one directory per language.

Within those directories, there is a directory named `splits`; it contains 1 file per split:
- `train.tar.gz`
- `dev.tar.gz`
- `test.tar.gz`

Those `.tar.gz` files contain 2 directories:
- `human`
- `synthetic`

Each of these directories contains the `.wav` files for that label (and split). Keep in mind that the two directories can't be merged, as they share most of their file names: an identical file name indicates a human-synthetic pair, e.g. `human/123.wav` and `synthetic/123.wav`.
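This naming convention can be illustrated with a small helper (hypothetical, for illustration only):

```python
from pathlib import PurePosixPath


def counterpart(path: str) -> str:
    """Return the paired file for a sample path: the synthetic
    rendition of a human recording, or vice versa."""
    p = PurePosixPath(path)
    other = {"human": "synthetic", "synthetic": "human"}[p.parent.name]
    return str(PurePosixPath(other) / p.name)
```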

Finally, each language directory also contains 4 metadata files, which are not used by the loader but might be useful to researchers:
- `recording-metadata.csv`
  - contains the transcript ID, file name, split and gender of the original FLEURS samples
- `recording-transcripts.csv`
  - contains the transcripts of the original FLEURS samples
- `voice-distribution.csv`
  - contains the TTS vendor, TTS name, TTS engine, FLEURS gender and TTS gender for each ID-file name pair
  - useful for tracking what models were used to get specific synthetic samples
- `voice-metadata.csv`
  - contains the grouping of TTS voices used, alongside the splits they were used for
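Since the metadata files are plain CSV, they can be inspected with the standard library alone; a minimal sketch (the column names come from each file's header row, which this sketch does not assume):

```python
import csv


def read_metadata(csv_path):
    """Load one of the per-language metadata CSVs as a list of dicts,
    keyed by the column names in the file's header row."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))
```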

### Sample

Each sample contains an `Audio` feature (`audio`) and a string label (`label`).

```
{
  'audio': {
    'path': 'human/10004088536354799741.wav',
    'array': array([0., 0., 0., ..., 0., 0., 0.]),
    'sampling_rate': 16000
  },
  'label': 'human'
}
```
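With records of this shape, the label balance of a split can be checked in a few lines (a sketch; the argument would be a split loaded as shown earlier):

```python
from collections import Counter


def label_distribution(samples):
    """Count `human` vs. `synthetic` labels over an iterable of samples."""
    return Counter(sample["label"] for sample in samples)
```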

## Citation

The dataset is featured alongside our paper, **Synthetic speech detection with Wav2Vec 2.0 in various language settings**, published at the IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW) 2024.

**BibTeX:**

If you use this work, please cite us by including the following BibTeX reference:

```
@inproceedings{dropuljic-ssdww2v2ivls,
  author={Dropuljić, Branimir and Šuflaj, Miljenko and Jertec, Andrej and Obadić, Leo},
  booktitle={{IEEE} International Conference on Acoustics, Speech, and Signal Processing, {ICASSP} 2024 - Workshops, Seoul, Republic of Korea, April 14-19, 2024},
  title={Synthetic Speech Detection with Wav2vec 2.0 in Various Language Settings},
  year={2024},
  month={04},
  pages={585-589},
  publisher={{IEEE}},
  volume={},
  number={},
  keywords={Synthetic speech detection;text-to-speech;wav2vec 2.0;spoofing attack;multilingualism},
  url={https://doi.org/10.1109/ICASSPW62465.2024.10627750},
  doi={10.1109/ICASSPW62465.2024.10627750}
}
```

## Dataset Card Authors

- [Miljenko Šuflaj](https://huggingface.co/suflaj)

## Dataset Card Contact

- [Miljenko Šuflaj](mailto:[email protected])