---
annotations_creators:
- other
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 10K<n<100K
source_datasets:
- extended|speech_commands
task_categories:
- audio-classification
task_ids:
- keyword-spotting
pretty_name: SpeechCommands
config_names:
- v0.01
- v0.02
tags:
- spotlight
- enriched
- renumics
- enhanced
- audio
- classification
- extended
---

# Dataset Card for SpeechCommands

## Dataset Description

- **Homepage:** [Renumics Homepage](https://renumics.com/?hf-dataset-card=speech-commands-enriched)
- **GitHub:** [Spotlight](https://github.com/Renumics/spotlight)
- **Dataset Homepage:** [tensorflow.org/datasets](https://www.tensorflow.org/datasets/catalog/speech_commands)
- **Paper:** [Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition](https://arxiv.org/pdf/1804.03209.pdf)
- **Leaderboard:** [More Information Needed]

### Dataset Summary

📊 [Data-centric AI](https://datacentricai.org) principles have become increasingly important for real-world use cases.
At [Renumics](https://renumics.com/?hf-dataset-card=speech-commands-enriched) we believe that classical benchmark datasets and competitions should be extended to reflect this development.

🔍 This is why we are publishing benchmark datasets with application-specific enrichments (e.g. embeddings, baseline results, uncertainties, label error scores). We hope this helps the ML community in the following ways:
1. Enable new researchers to quickly develop a profound understanding of the dataset.
2. Popularize data-centric AI principles and tooling in the ML community.
3. Encourage the sharing of meaningful qualitative insights in addition to traditional quantitative metrics.

📚 This dataset is an enriched version of the [SpeechCommands Dataset](https://huggingface.co/datasets/speech_commands).

### Explore the Dataset

![Analyze SpeechCommands with Spotlight](https://spotlight.renumics.com/resources/hf-speech-commands-enriched.png)

The enrichments allow you to quickly gain insights into the dataset. The open source data curation tool [Renumics Spotlight](https://github.com/Renumics/spotlight) enables that with just a few lines of code:

Install datasets and Spotlight via [pip](https://packaging.python.org/en/latest/key_projects/#pip):

```python
!pip install renumics-spotlight datasets
```

Load the dataset from the Hugging Face Hub in your notebook:

```python
import datasets

dataset = datasets.load_dataset("renumics/speech_commands_enriched", "v0.01")
```

[//]: <> (TODO: Update this!)
Start exploring with a simple view:

```python
from renumics import spotlight

# `load_dataset` without a `split` argument returns a DatasetDict,
# so select a split before converting to pandas.
df = dataset["train"].to_pandas()
df_show = df.drop(columns=["audio"])
spotlight.show(df_show, port=8000, dtype={"file": spotlight.Audio})
```
You can use the UI to interactively configure the view on the data. Depending on the concrete task (e.g. model comparison, debugging, outlier detection) you might want to leverage different enrichments and metadata.
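
For instance, if the enriched split ships an embedding column, it can be rendered alongside the audio. This is only a sketch; the column name `embedding` is an assumption, so adjust it to the columns the dataset actually provides:

```python
# Sketch: view a hypothetical "embedding" enrichment column next to the audio.
spotlight.show(
    df_show,
    port=8000,
    dtype={"file": spotlight.Audio, "embedding": spotlight.Embedding},
)
```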

### SpeechCommands Dataset

This is a set of one-second .wav audio files, each containing a single spoken
English word or background noise. These words are from a small set of commands and are spoken by a
variety of different speakers. This dataset is designed to help train simple
machine learning models. It is covered in more detail at [https://arxiv.org/abs/1804.03209](https://arxiv.org/abs/1804.03209).

Version 0.01 of the dataset (configuration `"v0.01"`) was released on August 3rd, 2017 and contains
64,727 audio files.

Version 0.02 of the dataset (configuration `"v0.02"`) was released on April 11th, 2018 and
contains 105,829 audio files.
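
Either configuration can be loaded by passing its name to `load_dataset`. A quick sketch for checking the per-split sizes (assuming the enriched dataset mirrors the original splits):

```python
import datasets

# Load the second configuration and print the number of rows per split.
ds = datasets.load_dataset("renumics/speech_commands_enriched", "v0.02")
print({split: ds[split].num_rows for split in ds})
```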

### Supported Tasks and Leaderboards

* `keyword-spotting`: the dataset can be used to train and evaluate keyword
spotting systems. The task is to detect preregistered keywords by classifying utterances
into a predefined set of words. The task is usually performed on-device for
fast response time; thus, accuracy, model size, and inference time are all crucial (see the sketch below).
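
As an illustration of the task, a pretrained audio-classification checkpoint can be run over a test example. This sketch assumes `transformers` is installed; `MIT/ast-finetuned-speech-commands-v2` is just one public checkpoint, not part of this card:

```python
from transformers import pipeline

# Any Hub checkpoint for audio classification works here.
classifier = pipeline(
    "audio-classification", model="MIT/ast-finetuned-speech-commands-v2"
)

sample = dataset["test"][0]["audio"]  # dict with "array" and "sampling_rate"
predictions = classifier(
    {"raw": sample["array"], "sampling_rate": sample["sampling_rate"]}
)
print(predictions)  # list of {"score": ..., "label": ...} dicts
```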

### Languages

The language data in SpeechCommands is in English (BCP-47 `en`).

## Dataset Structure

### Data Instances

Example of a core word (`"label"` is a word, `"is_unknown"` is `False`):
```python
{
  "file": "no/7846fd85_nohash_0.wav",
  "audio": {
    "path": "no/7846fd85_nohash_0.wav",
    "array": array([-0.00021362, -0.00027466, -0.00036621, ...,  0.00079346,
                     0.00091553,  0.00079346]),
    "sampling_rate": 16000
  },
  "label": 1,  # "no"
  "is_unknown": False,
  "speaker_id": "7846fd85",
  "utterance_id": 0
}
```

Example of an auxiliary word (`"label"` is a word, `"is_unknown"` is `True`):
```python
{
  "file": "tree/8b775397_nohash_0.wav",
  "audio": {
    "path": "tree/8b775397_nohash_0.wav",
    "array": array([-0.00854492, -0.01339722, -0.02026367, ...,  0.00274658,
                     0.00335693,  0.0005188 ]),
    "sampling_rate": 16000
  },
  "label": 28,  # "tree"
  "is_unknown": True,
  "speaker_id": "1b88bf70",
  "utterance_id": 0
}
```

Example of the background noise (`_silence_`) class:

```python
{
  "file": "_silence_/doing_the_dishes.wav",
  "audio": {
    "path": "_silence_/doing_the_dishes.wav",
    "array": array([ 0.        ,  0.        ,  0.        , ..., -0.00592041,
                    -0.00405884, -0.00253296]),
    "sampling_rate": 16000
  },
  "label": 30,  # "_silence_"
  "is_unknown": False,
  "speaker_id": "None",
  "utterance_id": 0  # doesn't make sense here
}
```

### Data Fields

* `file`: relative audio filename inside the original archive.
* `audio`: dictionary containing a relative audio filename,
a decoded audio array, and the sampling rate. Note that when accessing
the audio column (`dataset[0]["audio"]`), the audio is automatically decoded
and resampled to `dataset.features["audio"].sampling_rate`.
Decoding and resampling a large number of audio files might take a significant
amount of time. Thus, it is important to first query the sample index before
the `"audio"` column, i.e. `dataset[0]["audio"]` should always be preferred
over `dataset["audio"][0]`.
* `label`: either the word pronounced in an audio sample or the background noise (`_silence_`) class.
Note that it is an integer value corresponding to the class name (see the snippet below).
* `is_unknown`: whether a word is auxiliary. `False` if a word is a core word or `_silence_`,
`True` if a word is an auxiliary word.
* `speaker_id`: unique id of a speaker. `None` if the label is `_silence_`.
* `utterance_id`: incremental id of a word utterance within the same speaker.
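
A minimal sketch for resolving the integer `label` to its class name and for the preferred access pattern (assuming the `"train"` split of the configuration loaded above):

```python
train = dataset["train"]
label_feature = train.features["label"]  # ClassLabel with int2str/str2int

example = train[0]        # query the sample index first ...
audio = example["audio"]  # ... so only this one file is decoded and resampled
print(label_feature.int2str(example["label"]), audio["sampling_rate"])
```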

### Data Splits

The dataset has two versions (= configurations): `"v0.01"` and `"v0.02"`. `"v0.02"`
contains more words (see section [Source Data](#source-data) for more details).

|       | train | validation | test |
|-------|------:|-----------:|-----:|
| v0.01 | 51093 |       6799 | 3081 |
| v0.02 | 84848 |       9982 | 4890 |

Note that in the train and validation sets, examples of the `_silence_` class are longer than 1 second.
You can use the following code to sample 1-second examples from the longer ones:

```python
def sample_noise(example):
    # Use this function to extract random 1-second slices of each _silence_
    # utterance, e.g. inside `torch.utils.data.Dataset.__getitem__()`.
    # In this dataset the waveform lives under `example["audio"]["array"]`
    # and its rate under `example["audio"]["sampling_rate"]`.
    from random import randint

    audio = example["audio"]
    sampling_rate = audio["sampling_rate"]
    # Only _silence_ examples are longer than one second in train/validation.
    if len(audio["array"]) > sampling_rate:
        random_offset = randint(0, len(audio["array"]) - sampling_rate)
        audio["array"] = audio["array"][random_offset : random_offset + sampling_rate]

    return example
```
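
Because the offset is drawn anew on every call, applying this eagerly (e.g. via `datasets.Dataset.map`) would freeze a single crop. A hedged usage sketch that re-crops on each access through a thin PyTorch wrapper:

```python
import torch

class NoiseSampledDataset(torch.utils.data.Dataset):
    """Thin wrapper that re-crops _silence_ clips on every access."""

    def __init__(self, hf_dataset):
        self.ds = hf_dataset

    def __len__(self):
        return len(self.ds)

    def __getitem__(self, idx):
        return sample_noise(self.ds[idx])
```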

## Dataset Creation

### Curation Rationale

The primary goal of the dataset is to provide a way to build and test small
models that can detect a single word from a set of target words and differentiate it
from background noise or unrelated speech with as few false positives as possible.

### Source Data

#### Initial Data Collection and Normalization

The audio files were collected using crowdsourcing; see
[aiyprojects.withgoogle.com/open_speech_recording](https://aiyprojects.withgoogle.com/open_speech_recording)
for some of the open source audio collection code that was used. The goal was to gather examples of
people speaking single-word commands, rather than conversational sentences, so
they were prompted for individual words over the course of a five-minute
session.

In version 0.01, thirty different words were recorded: "Yes", "No", "Up", "Down", "Left",
"Right", "On", "Off", "Stop", "Go", "Zero", "One", "Two", "Three", "Four", "Five", "Six", "Seven", "Eight", "Nine",
"Bed", "Bird", "Cat", "Dog", "Happy", "House", "Marvin", "Sheila", "Tree", "Wow".

In version 0.02, more words were added: "Backward", "Forward", "Follow", "Learn", "Visual".

In both versions, ten of them are used as commands by convention: "Yes", "No", "Up", "Down", "Left",
"Right", "On", "Off", "Stop", "Go". The other words are considered auxiliary (in the current implementation
this is marked by a `True` value of the `"is_unknown"` feature). Their function is to teach a model to distinguish core words
from unrecognized ones.

The `_silence_` label contains a set of longer audio clips that are either recordings or
a mathematical simulation of noise.

#### Who are the source language producers?

The audio files were collected using crowdsourcing.

### Annotations

#### Annotation process

Labels are drawn from a list of words prepared in advance.
Speakers were prompted for individual words over the course of a five-minute
session.

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

The dataset consists of recordings from people who have donated their voices online. You agree not to attempt to determine the identity of speakers in this dataset.

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

Creative Commons BY 4.0 License ([CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/legalcode)).

### Citation Information

```bibtex
@article{speechcommandsv2,
  author = {{Warden}, P.},
  title = "{Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition}",
  journal = {ArXiv e-prints},
  archivePrefix = "arXiv",
  eprint = {1804.03209},
  primaryClass = "cs.CL",
  keywords = {Computer Science - Computation and Language, Computer Science - Human-Computer Interaction},
  year = 2018,
  month = apr,
  url = {https://arxiv.org/abs/1804.03209},
}
```

### Contributions

[More Information Needed]