---
dataset_info:
  features:
  - name: filename
    dtype:
      audio:
        sampling_rate: 16000
  - name: url
    dtype: string
  splits:
  - name: train
    num_bytes: 1068464089483.938
    num_examples: 59879
  download_size: 16395869337
  dataset_size: 1068464089483.938
---
# Malaysian Youtube
Audio crawled from Malaysian and Singaporean YouTube channels, roughly 60k audio files totalling about 18.7k hours.

URL lists are at https://github.com/mesolitica/malaya-speech/tree/master/data/youtube/data

Crawling notebooks are at https://github.com/mesolitica/malaya-speech/tree/master/data/youtube
## How to load the data efficiently?
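The `Train` loader below reads the parquet shards from local disk, so the repository files need to be downloaded first (about 16 GB, per the download size above). A minimal sketch using `huggingface_hub`; the repo id is a placeholder for this dataset's actual repository id:
```python
from huggingface_hub import snapshot_download

# Placeholder repo id: substitute this dataset's actual repository id.
snapshot_download(
    repo_id='<org>/<this-dataset>',
    repo_type='dataset',
    local_dir='./crawl-youtube',
)
```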
```python
import pandas as pd
import json
from datasets import Audio
from torch.utils.data import DataLoader, Dataset
chunks = 30  # seconds per chunk
sr = 16000   # sampling rate

class Train(Dataset):
    def __init__(self, indices, maxlen_cache_df=5, maxlen_cache_audio=50):
        # Expand the global indices: every 30-second chunk gets its own integer
        # key pointing back to the parquet file, row, and chunk range it came from.
        self.indices = {}
        for k, v in indices.items():
            for i in range(int(k), v['start'] + v['end'], 1):
                self.indices[i] = v
        self.max_index = len(self.indices)
        # Small caches so consecutive chunks do not re-read the same parquet
        # file or re-decode the same audio file.
        self.cache_df = {}
        self.cache_audio = {}
        self.maxlen_cache_df = maxlen_cache_df
        self.maxlen_cache_audio = maxlen_cache_audio
        self.audio = Audio(sampling_rate=16000)

    def __len__(self):
        return self.max_index

    def __getitem__(self, item):
        if item < 0:
            item = self.max_index + item
        v = self.indices[item]
        key_row = f"{v['filename']}-{v['i']}"
        chunk_index = item - v['start']
        if key_row not in self.cache_audio:
            if v['filename'] not in self.cache_df:
                df = pd.read_parquet(v['filename'])
                if len(self.cache_df) >= self.maxlen_cache_df:
                    # Evict the oldest parquet dataframe.
                    keys = list(self.cache_df.keys())
                    self.cache_df.pop(sorted(keys)[0], None)
                self.cache_df[v['filename']] = df
            else:
                df = self.cache_df[v['filename']]
            row = df.iloc[int(v['i'])]
            # Decode the whole audio file once, then slice 30-second chunks out of it.
            audio = self.audio.decode_example(self.audio.encode_example(row['filename']))
            if len(self.cache_audio) >= self.maxlen_cache_audio:
                # Evict the oldest decoded audio.
                keys = list(self.cache_audio.keys())
                self.cache_audio.pop(sorted(keys)[0], None)
            self.cache_audio[key_row] = audio
        else:
            audio = self.cache_audio[key_row]
        return {
            'array': audio['array'][(chunks * sr) * chunk_index: (chunks * sr) * (chunk_index + 1)]
        }

with open('crawl-youtube-global-indices.json') as fopen:
    global_indices = json.load(fopen)

train = Train(global_indices)
train[0]
```
```
{'array': array([ 0. , 0. , 0. , ..., -0.00845753,
0.00168016, -0.00606468])}
```
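Each item is a plain dict containing a numpy array, so the dataset can be wrapped in the `DataLoader` imported above. A minimal sketch; the batch size and list-style collate function are illustrative choices, not part of the original pipeline:
```python
from torch.utils.data import DataLoader

def collate_fn(batch):
    # Chunks can differ in length (the last chunk of a file is usually shorter
    # than 30 seconds), so keep them as a list instead of stacking into a tensor.
    return [b['array'] for b in batch]

loader = DataLoader(
    train,
    batch_size=4,
    shuffle=True,
    num_workers=0,   # workers > 0 means each worker keeps its own caches
    collate_fn=collate_fn,
)

batch = next(iter(loader))
print(len(batch), batch[0].shape)
```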
`crawl-youtube-global-indices.json` contains the global indices computed by chunking each audio file into 30-second segments; read more at https://github.com/mesolitica/malaysian-dataset/tree/master/speech-to-text-semisupervised/pseudolabel-whisper
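The field names used by the loader (`filename`, `i`, `start`, `end`) suggest that each entry maps a starting global index to the parquet shard, row number, and number of 30-second chunks it covers. A rough sketch of how such an index could be built; the shard name is hypothetical and it assumes the `filename` column holds a local audio path readable by `soundfile`:
```python
import json
import math

import pandas as pd
import soundfile as sf

chunks = 30  # seconds per chunk
global_indices = {}
start = 0
for parquet_file in ['shard-0.parquet']:  # hypothetical shard name
    df = pd.read_parquet(parquet_file)
    for i, row in df.iterrows():
        n_chunks = math.ceil(sf.info(row['filename']).duration / chunks)
        # The loader expands range(start, start + n_chunks) back to this entry.
        global_indices[str(start)] = {
            'filename': parquet_file,
            'i': int(i),
            'start': start,
            'end': n_chunks,
        }
        start += n_chunks

with open('crawl-youtube-global-indices.json', 'w') as fopen:
    json.dump(global_indices, fopen)
```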
## Licensing
```
All the videos, songs, images, and graphics used in the video belong to their respective owners, and we do not claim any right over them.
Copyright Disclaimer under section 107 of the Copyright Act of 1976, allowance is made for "fair use" for purposes such as criticism, comment, news reporting, teaching, scholarship, education and research. Fair use is a use permitted by copyright statute that might otherwise be infringing.
```