---
configs:
  - config_name: whisper_v1
    data_files:
      - split: original
        path: whisper_v1/**
      - split: en
        path: whisper_v1_en/**
  - config_name: whisper_v2
    data_files:
      - split: original
        path: whisper_v2/**
      - split: en
        path: whisper_v2_en/**
  - config_name: whisper_v3
    data_files:
      - split: original
        path: whisper_v3/**
      - split: en
        path: whisper_v3_en/**
task_categories:
  - summarization
  - translation
license: cc-by-4.0
size_categories:
  - 10M<n<100M
language:
  - en
  - es
  - ru
  - de
  - fr
  - tr
  - it
  - pl
  - bs
  - hu
tags:
  - SoccerNet
  - synthetic
---

[Paper](https://arxiv.org/abs/2405.07354) | [GitHub](https://github.com/SoccerNet/sn-echoes)

Dataset Card for SoccerNet-Echoes

This dataset card aims to provide comprehensive details for the SoccerNet-Echoes dataset, an audio commentary dataset for soccer games.

Dataset Details

Dataset Description

SoccerNet-Echoes is an audio commentary dataset for soccer games, curated by SimulaMet under the AI-Storyteller project. It is funded by the Research Council of Norway (project number 346671) and shared by the SoccerNet team. The dataset supports multiple languages, including English, Spanish, Russian, German, French, Turkish, Italian, Polish, Bosnian, and Hungarian, and is licensed under CC BY 4.0.

  • Curated by: SimulaMet, HOST Department (AI-Storyteller project)
  • Funded by: Research Council of Norway, project number 346671
  • Shared by: SoccerNet Team
  • Language(s) (NLP): English, Spanish, Russian, German, French, Turkish, Italian, Polish, Bosnian, Hungarian
  • License: CC BY 4.0

Dataset Sources

  • Repository: https://github.com/SoccerNet/sn-echoes
  • Paper: SoccerNet-Echoes: A Soccer Game Audio Commentary Dataset (arXiv:2405.07354)

Uses

Direct Use

The dataset is primarily intended for:

  • Multimodal Event Detection: Combining audio cues with visual data for improved event detection in sports videos.
  • Game Summarization: Using automatic speech recognition (ASR) transcriptions to aid in summarizing soccer games (see the sketch below).
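
As a rough sketch of the summarization use case, commentary segments can be grouped per game and concatenated into one document before being passed to a summarization model. The repository id ("SoccerNet/SN-echoes") and the column names ("game", "text") below are assumptions and should be checked against the actual schema:

```python
from collections import defaultdict
from datasets import load_dataset

# Sketch: collect commentary text per game so it can be summarized.
# Repository id and column names ("game", "text") are assumptions;
# verify them with dataset.column_names before use.
dataset = load_dataset("SoccerNet/SN-echoes", "whisper_v1", split="en")

commentary = defaultdict(list)
for row in dataset:
    commentary[row["game"]].append(row["text"])

for game, segments in commentary.items():
    document = " ".join(segments)  # input document for any summarization model
    print(game, len(document.split()), "words of commentary")
```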

Out-of-Scope Use

The dataset is not intended for:

  • Medical Diagnosis: The dataset is not suitable for medical applications.
  • Non-Sporting Event Analysis: The dataset is tailored for soccer and might not generalize well to other types of events without further modification.

Dataset Structure

The dataset comprises transcriptions of soccer game commentary produced with several Whisper ASR models, along with English translations of the non-English commentary generated with Google Translate. On Hugging Face, each row has five columns: the segment index, the start and end times of the segment, the text (either transcribed or translated), and the game path (represented as a string). The dataset is divided into three configurations (whisper_v1, whisper_v2, and whisper_v3, one per Whisper model version), and each configuration provides an "original" split (raw ASR output) and an "en" split (English translations).
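
For illustration, a configuration and split can be loaded with the datasets library as sketched below. The repository id "SoccerNet/SN-echoes" is an assumption; replace it with the actual Hugging Face path of this dataset if it differs.

```python
from datasets import load_dataset

# Load the Whisper large-v3 transcriptions and their English translations.
# The repository id is assumed; the configuration names (whisper_v1,
# whisper_v2, whisper_v3) and splits (original, en) come from this card.
original = load_dataset("SoccerNet/SN-echoes", "whisper_v3", split="original")
english = load_dataset("SoccerNet/SN-echoes", "whisper_v3", split="en")

print(original.column_names)  # segment index, start/end times, text, game path
print(original[0])            # inspect a single commentary segment
```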

The same data, organized in a hierarchical directory and JSON format that follows other SoccerNet data resources, is available at https://github.com/SoccerNet/sn-echoes. Please note that this Hugging Face dataset is mirrored from the Dataset folder of that GitHub repository using a conversion script.

Dataset Creation

Curation Rationale

The dataset was curated to enhance the SoccerNet dataset with automatic speech recognition (ASR) transcriptions and translations of non-English commentaries into English using Google Translate, enabling a richer and more integrated understanding of soccer games.

Source Data

Data Collection and Processing

Audio was extracted from the soccer game broadcast videos in the SoccerNet dataset. It was transcribed with multiple Whisper ASR models (large-v1, large-v2, and large-v3) to create a comprehensive transcription dataset, and Google Translate was used to translate the non-English commentary into English.
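
As a simplified illustration of the transcription step (not the authors' exact pipeline), the openai-whisper package can be used as follows; the audio file name is a placeholder.

```python
import whisper  # pip install openai-whisper

# Illustrative sketch of the ASR step; the exact processing pipeline is
# described in the paper and the GitHub repository.
model = whisper.load_model("large-v3")        # also "large-v1", "large-v2"
result = model.transcribe("game_half_1.wav")  # placeholder audio file

for segment in result["segments"]:
    # Each segment carries an index, start/end times in seconds, and text,
    # mirroring the fields of this dataset. Non-English text was then
    # translated into English with Google Translate in a separate step.
    print(segment["id"], segment["start"], segment["end"], segment["text"])
```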

Who are the source data producers?

The source data producers are soccer game broadcasters and commentators.

Annotations

Annotation Process

The transcriptions were generated automatically by Whisper ASR models, and Google Translate was used to translate the non-English commentary into English. Human verification and correction of the transcriptions are planned as future work.

Who are the annotators?

  • Whisper ASR models (transcriptions)
  • Google Translate (translations)
  • Authors/Humans (for verifying game halves lacking game audio or commentary)

Personal and Sensitive Information

The dataset contains publicly available soccer game commentary, which is not considered sensitive. It does not include personal data about individuals outside of the context of the game.

Bias, Risks, and Limitations

  • Transcription Accuracy: ASR models may introduce errors in transcription.
  • Hallucinations: Repetition of phrases, especially in noisy environments, can degrade transcription quality.
  • Audio Quality: Variability in audio quality can impact transcription accuracy.
  • Human Verification: Lack of human-verified annotations in the current dataset.

Recommendations

Users should be aware of potential biases and limitations, such as transcription errors and hallucinations. Advanced audio pre-processing and human verification can help mitigate these issues.

Filtering Hallucinations

Users should be aware of transcription errors and hallucinations. The most common problem is the unwarranted repetition of phrases and words, especially for audio that lacks human speech, is excessively noisy, or contains music. These conditions challenge the models' transcription accuracy, but they can be mitigated by a simple filtering approach: removing consecutive entries with the same text and keeping only the first occurrence of each repeated text, as sketched below. It is strongly advised to combine this consecutive-entry filtering with the Mixed Selection approach to obtain better ASR output for downstream applications. Please refer to the related section on GitHub for our suggested way of accomplishing this.
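
A minimal sketch of that filtering step, assuming the segments of one game half are ordered by start time and that each segment exposes a "text" field (the actual field name may differ):

```python
# Keep only the first occurrence of each run of consecutive identical texts.
# Assumes `segments` is an ordered list of dicts with a "text" field.
def drop_consecutive_duplicates(segments):
    filtered = []
    previous_text = None
    for segment in segments:
        text = segment["text"].strip()
        if text != previous_text:
            filtered.append(segment)
        previous_text = text
    return filtered

# Usage: clean_segments = drop_consecutive_duplicates(noisy_segments)
```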

Citation

BibTeX:

@article{gautam2024soccernet,
    author = {Gautam, Sushant and Sarkhoosh, Mehdi Houshmand and Held, Jan and Midoglu, Cise and Cioppa, Anthony and Giancola, Silvio and Thambawita, Vajira and Riegler, Michael A. and Halvorsen, P{\aa}l and Shah, Mubarak},
    title = {{SoccerNet-Echoes: A Soccer Game Audio Commentary Dataset}},
    journal = {arXiv},
    year = {2024},
    month = may,
    eprint = {2405.07354},
    doi = {10.48550/arXiv.2405.07354}
}

APA:

Gautam, S., Sarkhoosh, M. H., Held, J., Midoglu, C., Cioppa, A., Giancola, S., ... Shah, M. (2024). SoccerNet-Echoes: A Soccer Game Audio Commentary Dataset. arXiv, 2405.07354. Retrieved from https://arxiv.org/abs/2405.07354v1

Glossary

  • ASR (Automatic Speech Recognition): Technology that converts spoken language into text.
  • Multimodal Analysis: Combining multiple types of data, such as audio and visual, for more comprehensive analysis.
  • Whisper ASR Models: A set of automatic speech recognition models developed by OpenAI.

More Information

For additional details, visit the SoccerNet-Echoes GitHub repository (https://github.com/SoccerNet/sn-echoes) or contact the authors of the dataset.

Dataset Card Authors

Dataset Card Contact