Dataset Card for the Buckeye Corpus (buckeye_asr)

Dataset Summary

The Buckeye Corpus of conversational speech contains high-quality recordings of 40 speakers in Columbus, OH, conversing freely with an interviewer. The speech has been orthographically transcribed and phonetically labeled.

Supported Tasks and Leaderboards

[Needs More Information]

Languages

American English (en-US)

Dataset Structure

Data Instances

[Needs More Information]

Data Fields

  • file: path to the audio file containing the utterance.
  • audio: audio data of the utterance.
  • text: orthographic transcription of the utterance.
  • phonetic_detail: list of phonetic annotations for the utterance (start, stop, and label of each phone).
  • word_detail: list of word annotations for the utterance (start, stop, label, broad and narrow transcriptions, and syntactic class of each word).
  • speaker_id: string identifying the speaker.
  • id: string identifying the utterance.
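
A minimal sketch of how these fields can be inspected, assuming a local copy of the corpus has been downloaded and loaded as described in the Usage section below, and assuming the splits are exposed as train, validation, and test (the path is a placeholder):

from datasets import load_dataset

# "<path_to_the_dataset>" is a placeholder for the folder holding your local copy
# of the corpus (see the Usage section below).
dataset = load_dataset("bhigy/buckeye_asr", data_dir="<path_to_the_dataset>")

sample = dataset["train"][0]
print(sample["speaker_id"], sample["id"])  # speaker and utterance identifiers
print(sample["text"])                      # orthographic transcription
print(sample["file"])                      # path to the audio file
# time-aligned phone and word annotations
print(sample["phonetic_detail"])
print(sample["word_detail"])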

Data Splits

The data is split into training, validation, and test sets with disjoint speakers (32, 4, and 4 speakers, respectively). All three sets are balanced for speaker gender and age.
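
A short sketch of how these per-split speaker counts could be verified, assuming the splits are exposed as train, validation, and test and that a local copy has been loaded as in the Usage section below (the path is a placeholder):

from datasets import load_dataset

# "<path_to_the_dataset>" is a placeholder for your local copy of the corpus.
dataset = load_dataset("bhigy/buckeye_asr", data_dir="<path_to_the_dataset>")

# Count distinct speakers in each split; 32 / 4 / 4 speakers are expected.
for split in ["train", "validation", "test"]:
    speakers = set(dataset[split]["speaker_id"])
    print(split, len(speakers), "speakers")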

Dataset Creation

Curation Rationale

[Needs More Information]

Source Data

Initial Data Collection and Normalization

[Needs More Information]

Who are the source language producers?

[Needs More Information]

Annotations

Annotation process

[Needs More Information]

Who are the annotators?

[Needs More Information]

Personal and Sensitive Information

[Needs More Information]

Considerations for Using the Data

Social Impact of Dataset

[Needs More Information]

Discussion of Biases

[Needs More Information]

Other Known Limitations

[Needs More Information]

Additional Information

Dataset Curators

[Needs More Information]

Licensing Information

Free for noncommercial use.

Citation Information

@misc{pitt2007Buckeye,
  title = {Buckeye {Corpus} of {Conversational} {Speech} (2nd release).},
  url = {www.buckeyecorpus.osu.edu},
  publisher = {Columbus, OH: Department of Psychology, Ohio State University (Distributor)},
  author = {Pitt, M.A. and Dilley, L. and Johnson, K. and Kiesling, S. and Raymond, W. and Hume, E. and Fosler-Lussier, E.},
  year = {2007},
}

Usage

First, download a copy of the dataset from the official website. The dataset can then be loaded through the datasets library by running:

from datasets import load_dataset

# <path_to_the_dataset> is the folder where your local copy of the corpus is stored.
dataset = load_dataset("bhigy/buckeye_asr", data_dir="<path_to_the_dataset>")

where <path_to_the_dataset> points to the folder where the dataset is stored. The path to one of the audio files would then be, for example, <path_to_the_dataset>/s01/s0101a.wav.
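
As a further example, a single split can be loaded by passing the split argument of load_dataset; the path below is again a placeholder:

from datasets import load_dataset

# Load only the test split; "<path_to_the_dataset>" is a placeholder for your local copy.
test_set = load_dataset("bhigy/buckeye_asr", data_dir="<path_to_the_dataset>", split="test")
print(test_set)  # shows the available fields and the number of utterances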
