Update README.md
README.md
CHANGED
@@ -34,7 +34,7 @@ This subset is for English language evaluations.
 
 ## Dataset Structure
 
-The dataset consists of
+The dataset consists of 94 video links, transcriptions, and normalized transcriptions (around 38 hours) of age-appropriate audio with a minimum transcript length of 300 words. At a normal speaking rate of 2.5 words per second, 300 words corresponds to a minimum duration of 2 minutes. The shortest file is 128 seconds and the longest is 2 hours 8 minutes; the average duration per file is a little over 24 minutes, with a standard deviation of 25 minutes. This notable variability in audio duration mirrors typical real-world conditions.
 
 
 ### Data Fields
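The duration arithmetic in the hunk above (300 words at 2.5 words per second gives roughly 2 minutes) is straightforward to sanity-check against the data. A minimal sketch follows; the dataset repo id and the `text` column name are illustrative assumptions, not taken from this README.

```python
# Sanity-check word counts and estimated durations for the split described
# above. The repo id and column name below are hypothetical placeholders.
from datasets import load_dataset

ds = load_dataset("your-org/youtube-commons-asr-eval", split="test")  # hypothetical id

WORDS_PER_SECOND = 2.5  # speaking rate assumed in the README

word_counts = [len(row["text"].split()) for row in ds]  # column name assumed
est_hours = sum(wc / WORDS_PER_SECOND for wc in word_counts) / 3600

print(f"files: {len(word_counts)}")          # expected: 94
print(f"min words: {min(word_counts)}")      # expected: >= 300
print(f"estimated total hours: {est_hours:.1f}")
```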
@@ -46,8 +46,8 @@ Evaluation data
 ## Dataset Creation
 
 Normalization is done via EnglishTextNormalizer from open_asr_leaderboard [https://github.com/huggingface/open_asr_leaderboard/blob/main/normalizer/normalizer.py]
-
+The dataset was created by selecting the first 100 files from YouTube-Commons that have at least 300 transcription words and age-appropriate content. Three files were removed manually because of high transcription error rates seen on visual inspection and confirmed by high WER across different ASR implementations.
 
 ### Licensing Information
 
-All the transcripts are part of a video shared under a CC-By license on
+All the transcripts are part of videos shared under a CC-BY license on YouTube. All the licensing terms are the same as those of the original dataset [PleIAs/YouTube-Commons].
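For the normalization step referenced above: normalizer.py in open_asr_leaderboard is a mostly self-contained module, so one way to reproduce the normalization is to copy the normalizer folder next to your script and call the class directly. A minimal sketch, assuming that local copy and the no-argument constructor used in the upstream repo:

```python
# Apply the same text normalization used for the normalized transcriptions.
# Assumes the normalizer/ folder from open_asr_leaderboard (normalizer.py and
# its English spelling file) has been copied alongside this script.
from normalizer import EnglishTextNormalizer

normalizer = EnglishTextNormalizer()

raw = "Mr. Smith bought 2 apples, didn't he?"
print(normalizer(raw))  # lowercased, punctuation stripped, spellings standardized
```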
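The selection procedure described in the second hunk can also be sketched in code. The following is illustrative only: the transcript column name and the streaming access pattern are assumptions, and the age-appropriateness screening and the manual WER-based removal of three files are noted in comments rather than implemented, since the README does not specify the tooling used for them.

```python
# Rough sketch of the selection step: take the first 100 files from
# YouTube-Commons whose transcripts have at least 300 words.
from datasets import load_dataset

source = load_dataset("PleIAs/YouTube-Commons", split="train", streaming=True)

selected = []
for row in source:
    transcript = row.get("transcript") or ""  # column name assumed
    # The age-appropriateness check applied by the authors is not
    # reproduced here; the README does not describe how it was done.
    if len(transcript.split()) >= 300:
        selected.append(row)
    if len(selected) == 100:
        break

# The authors then manually dropped three files whose transcripts looked
# wrong on visual inspection and showed high WER under several ASR
# systems, leaving the 94 files in this dataset.
print(len(selected))
```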