Jilt committed on
Commit
2eacc00
1 Parent(s): 386d5fe

Update README.md

Files changed (1): README.md +3 -3
README.md CHANGED
@@ -34,7 +34,7 @@ This subset is for English language evaluations.
 
 ## Dataset Structure
 
-The dataset consists of 96 video links, transcriptions, and normalized transcriptions (around 42 hours) of age-appropriate audios with a minimum word count of 300. With a normal speaking rate of 2.5 words per second, this corresponds to a minimum duration of 2 minutes. Minimum duration of the dataset is 128 seconds and maximum is 02:45 hours. Average duration per file is a little over 26 minutes and standard deviation is 29 minutes. The notable variability in audio duration, as indicated by the standard deviation, mirrors typical real-time environments.
+The dataset consists of 94 video links, transcriptions, and normalized transcriptions (around 38 hours) of age-appropriate audios with a minimum word count of 300. With a normal speaking rate of 2.5 words per second, this corresponds to a minimum duration of 2 minutes. Minimum duration of the dataset is 128 seconds and maximum is 02:08 hours. The average duration per file is a little over 24 minutes and the standard deviation is 25 minutes. The notable variability in audio duration, as indicated by the standard deviation, mirrors typical real-time environments.
 
 
 ### Data Fields
@@ -46,8 +46,8 @@ Evaluation data
 ## Dataset Creation
 
 Normalization is done via EnglishTextNormalizer from open_asr_eval [https://github.com/huggingface/open_asr_leaderboard/blob/main/normalizer/normalizer.py]
-Dataset is created by selecting the first 100 files from Youtube-Commons, with a minimum of 300 transcription words and age-appropriate content. One file is manually removed owing to high errors in the transcription observed in visual inspection.
+The dataset is created by selecting the first 100 files from Youtube-Commons, with a minimum of 300 transcription words and age-appropriate content. Three files are manually removed owing to high errors in the transcription observed in visual inspection and also verified with high WER on different ASR implementations.
 
 ### Licensing Information
 
-All the transcripts are part of a video shared under a CC-By license on youtube. All the licensing terms are same as the original dataset [PleIAs/YouTube-Commons].
+All the transcripts are part of a video shared under a CC-By license on YouTube. All the licensing terms are the same as the original dataset [PleIAs/YouTube-Commons].
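The selection criteria in the diff above (a 300-word transcription floor, which at an assumed 2.5 words/second implies the stated 2-minute minimum duration) can be sketched as follows. This is a minimal illustration, not the dataset's actual tooling: the `transcription` field name and the helper functions are hypothetical.

```python
# Sketch of the word-count filter and implied-duration arithmetic from the
# dataset card. Field and helper names are hypothetical illustrations.

MIN_WORDS = 300          # minimum transcription word count (from the card)
SPEAKING_RATE_WPS = 2.5  # assumed average speaking rate, words per second

def meets_minimum(transcription: str, min_words: int = MIN_WORDS) -> bool:
    """True if the transcript has at least `min_words` whitespace-separated words."""
    return len(transcription.split()) >= min_words

def implied_min_duration_s(min_words: int = MIN_WORDS,
                           rate_wps: float = SPEAKING_RATE_WPS) -> float:
    """Duration implied by the word-count floor at the assumed speaking rate."""
    return min_words / rate_wps

# 300 words at 2.5 words/second -> 120 seconds, the 2-minute floor in the card
print(implied_min_duration_s())  # 120.0
```

With the Datasets library listed on this page, the same predicate could be applied via `Dataset.filter`, e.g. `ds.filter(lambda r: meets_minimum(r["transcription"]))`, assuming the column is named `transcription`.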