|
--- |
|
annotations_creators: |
|
- expert-generated |
|
- crowdsourced |
|
- machine-generated |
|
language: |
|
- en |
|
language_creators: |
|
- crowdsourced |
|
- expert-generated |
|
license: |
|
- cc-by-4.0 |
|
- apache-2.0 |
|
- cc0-1.0 |
|
- cc-by-nc-3.0 |
|
- other |
|
multilinguality: |
|
- monolingual |
|
pretty_name: ESB Diagnostic Dataset |
|
size_categories: |
|
- 100K<n<1M |
|
- 1M<n<10M |
|
source_datasets: |
|
- original |
|
- extended|librispeech_asr |
|
- extended|common_voice |
|
tags: |
|
- asr |
|
- benchmark |
|
- speech |
|
- esc |
|
task_categories: |
|
- automatic-speech-recognition |
|
task_ids: [] |
|
extra_gated_prompt: |- |
|
Three of the ESB datasets have specific terms of usage that must be agreed to before using the data. |
|
To do so, fill in the access forms on the specific datasets' pages: |
|
* Common Voice: https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0 |
|
* GigaSpeech: https://huggingface.co/datasets/speechcolab/gigaspeech |
|
* SPGISpeech: https://huggingface.co/datasets/kensho/spgispeech |
|
extra_gated_fields: |
|
I hereby confirm that I have registered on the original Common Voice page and agree to not attempt to determine the identity of speakers in the Common Voice dataset: checkbox |
|
I hereby confirm that I have accepted the terms of usage on the GigaSpeech page: checkbox

I hereby confirm that I have accepted the terms of usage on the SPGISpeech page: checkbox
|
--- |
|
## Dataset Description |
|
- **Dataset authors:** [Suno.ai](https://www.suno.ai) |
|
- **Point of contact:** [email protected] |
|
|
|
As part of the ESB benchmark, we provide a small, 8-hour diagnostic dataset of in-domain validation data with newly annotated transcriptions. The audio data is sampled from each of the ESB validation sets, covering a range of domains and speaking styles. The transcriptions are annotated according to a consistent style guide in two formats: normalised and un-normalised. The dataset is structured in the same way as the ESB dataset, grouping audio-transcription samples according to the dataset from which they were taken. We encourage participants to use this dataset when evaluating their systems to quickly assess performance across a range of speech recognition conditions.
|
|
|
The diagnostic dataset can be downloaded and prepared in much the same way as the ESB datasets: |
|
|
|
```python |
|
from datasets import load_dataset |
|
|
|
esb_diagnostic_ami = load_dataset("esb/diagnostic-dataset", "ami") |
|
``` |
|
|
|
### Data Selection |
|
#### Audio |
|
To provide an adequate representation of all ESB datasets, we use at least 1 hour of audio from the validation sets of each of the 8 constituent ESB datasets. Following the convention of LibriSpeech, we then used a public ASR model to further split each dataset into `clean`/`other` subsets based on WER (for LibriSpeech itself we kept the existing `clean`/`other` splits). The `clean` subset contains the 'easier' 50% of samples, and the `other` subset the more difficult 50%.
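The WER-based split can be sketched in plain Python. The `wer` helper and the median threshold below are illustrative assumptions, not the exact procedure used:

```python
def wer(reference, hypothesis):
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / max(len(ref), 1)


def split_clean_other(samples, transcribe):
    """Sort samples by per-sample WER of an ASR model's output against the
    reference text; the easier half becomes `clean`, the rest `other`."""
    scored = sorted(samples, key=lambda s: wer(s["text"], transcribe(s)))
    mid = len(scored) // 2
    return scored[:mid], scored[mid:]  # (clean, other)
```

Here `transcribe` stands in for any public ASR model's inference function.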
|
|
|
To obtain the `clean` diagnostic subset of AMI, either index the loaded dataset by split name:
|
|
|
```python |
|
ami_diagnostic_clean = esb_diagnostic_ami["clean"]
|
``` |
|
|
|
Or download the `clean` subset standalone: |
|
```python |
|
ami_diagnostic_clean = load_dataset("esb/diagnostic-dataset", "ami", split="clean") |
|
``` |
|
|
|
#### Transcriptions |
|
First, the transcriptions were generated by a human _without_ the bias of the original transcript. The transcriptions follow a strict orthographic and verbatim style guide, in which every word, disfluency and partial word is transcribed. Punctuation and formatting follow standard English print orthography (e.g. ‘July 10th in 2021.’). Breaks in thought and partial words are indicated with ‘--’. In addition to the **orthographic** transcriptions, a **normalised** format was produced, with all punctuation removed and non-standard words such as dates, currencies and abbreviations verbalised exactly as they are spoken (e.g. ‘july tenth in twenty twenty one’).
|
|
|
Although great care was taken to standardise the orthography, some ambiguity in transcription remains, especially around the use of commas and the choice of introducing sentence breaks for utterances starting with ‘And’. Each sample was therefore checked by a second human with access to both the original ground truth and the independently produced style-consistent transcript. The two versions were merged to produce new high-quality ground truths in both the normalised and orthographic formats.
|
|
|
## Dataset Information |
|
|
|
A data point can be accessed by indexing the dataset object loaded through `load_dataset`: |
|
|
|
```python |
|
print(ami_diagnostic_clean[0]) |
|
``` |
|
|
|
A typical data point comprises the path to the audio file and its transcription. Also included are the name of the dataset from which the sample derives and a unique identifier:
|
|
|
```python |
|
{ |
|
'audio': {'path': None, |
|
'array': array([ 7.01904297e-04, 7.32421875e-04, 7.32421875e-04, ..., |
|
-2.74658203e-04, -1.83105469e-04, -3.05175781e-05]), |
|
'sampling_rate': 16000}, |
|
'ortho_transcript': 'So, I guess we have to reflect on our experiences with remote controls to decide what, um, we would like to see in a convenient practical', |
|
'norm_transcript': 'so i guess we have to reflect on our experiences with remote controls to decide what um we would like to see in a convenient practical', |
|
'id': 'AMI_ES2011a_H00_FEE041_0062835_0064005', |
|
'dataset': 'ami', |
|
} |
|
``` |
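Since the `audio` field already contains the decoded waveform, simple quantities such as clip duration can be computed directly. A minimal sketch, shown here with a synthetic sample rather than a real data point:

```python
def clip_duration_seconds(sample):
    """Duration of a sample in seconds, from the decoded audio array."""
    audio = sample["audio"]
    return len(audio["array"]) / audio["sampling_rate"]


# synthetic one-second clip at 16 kHz in the same layout as a real sample
fake_sample = {"audio": {"array": [0.0] * 16000, "sampling_rate": 16000}}
```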
|
|
|
### Data Fields |
|
|
|
- `audio`: a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. |
|
|
|
- `ortho_transcript`: the **orthographic** transcription of the audio file. |
|
|
|
- `norm_transcript`: the **normalised** transcription of the audio file. |
|
|
|
- `id`: unique id of the data sample. |
|
|
|
- `dataset`: string name of the dataset the sample belongs to.
|
|
|
We encourage participants to train their ASR system on the [AMI dataset](https://huggingface.co/datasets/esb/datasets#ami), the smallest of the 8 ESB datasets, and then evaluate their system on the `ortho_transcript` for **all** of the datasets in the diagnostic dataset. This indicates how the system is likely to fare on other audio domains. The predictions can then be _normalised_ by removing casing and punctuation, converting numbers to their spelled-out forms and expanding abbreviations, and assessed against the `norm_transcript`. This shows the effect of orthography on system performance.
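A minimal sketch of such a prediction normaliser, handling only casing, punctuation and whitespace (verbalising numbers, dates and abbreviations requires additional rules and is not shown):

```python
import string


def normalise(text):
    """Lowercase, strip ASCII punctuation and collapse whitespace.

    Note: unlike the full ESB normalisation, this sketch does not
    verbalise numbers, dates, currencies or abbreviations.
    """
    text = text.lower()
    text = text.translate(str.maketrans("", "", string.punctuation))
    return " ".join(text.split())
```

Predictions passed through this function can then be scored against the `norm_transcript` field.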
|
|
|
### Access |
|
All eight of the ESB datasets are accessible, and licensing information is freely available. Three of the ESB datasets have specific terms of usage that must be agreed to before using the data. To do so, fill in the access forms on the specific datasets' pages:
|
* Common Voice: https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0 |
|
* GigaSpeech: https://huggingface.co/datasets/speechcolab/gigaspeech |
|
* SPGISpeech: https://huggingface.co/datasets/kensho/spgispeech |
|
|
|
### Contributions |
|
We extend our greatest appreciation to Georg Kucsko, Keenan Freyberg and Michael Shulman from [Suno.ai](https://www.suno.ai) for creating and annotating the diagnostic dataset.
|
|