Dataset Viewer issue: StreamingRowsError
The dataset viewer is not working.
Error details:

```
Error code:   StreamingRowsError
Exception:    LibsndfileError
Message:      Error opening <File-like object HfFileSystem, datasets/akhikhan123/BanglaEnglishMixedAsrDataset@22fbb3db3dac41518e4f2aeb2a027120c718c11b/Speaker1/audio/recording 10.wav>: Format not recognised.
```

Traceback:

```
Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/split/first_rows.py", line 320, in compute
    compute_first_rows_from_parquet_response(
  File "/src/services/worker/src/worker/job_runners/split/first_rows.py", line 88, in compute_first_rows_from_parquet_response
    rows_index = indexer.get_rows_index(
  File "/src/libs/libcommon/src/libcommon/parquet_utils.py", line 631, in get_rows_index
    return RowsIndex(
  File "/src/libs/libcommon/src/libcommon/parquet_utils.py", line 512, in __init__
    self.parquet_index = self._init_parquet_index(
  File "/src/libs/libcommon/src/libcommon/parquet_utils.py", line 529, in _init_parquet_index
    response = get_previous_step_or_raise(
  File "/src/libs/libcommon/src/libcommon/simple_cache.py", line 564, in get_previous_step_or_raise
    raise CachedArtifactNotFoundError(kind=kind, dataset=dataset, config=config, split=split)
libcommon.simple_cache.CachedArtifactNotFoundError: The cache entry has not been found.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/src/services/worker/src/worker/utils.py", line 91, in get_rows_or_raise
    return get_rows(
  File "/src/libs/libcommon/src/libcommon/utils.py", line 183, in decorator
    return func(*args, **kwargs)
  File "/src/services/worker/src/worker/utils.py", line 68, in get_rows
    rows_plus_one = list(itertools.islice(ds, rows_max_number + 1))
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 1393, in __iter__
    example = _apply_feature_types_on_example(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 1082, in _apply_feature_types_on_example
    decoded_example = features.decode_example(encoded_example, token_per_repo_id=token_per_repo_id)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1975, in decode_example
    return {
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1976, in <dictcomp>
    column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1341, in decode_nested_example
    return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/audio.py", line 184, in decode_example
    array, sampling_rate = sf.read(f)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/soundfile.py", line 285, in read
    with SoundFile(file, 'r', samplerate, channels,
  File "/src/services/worker/.venv/lib/python3.9/site-packages/soundfile.py", line 658, in __init__
    self._file = self._open(file, mode_int, closefd)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/soundfile.py", line 1216, in _open
    raise LibsndfileError(err, prefix="Error opening {0!r}: ".format(self.name))
soundfile.LibsndfileError: Error opening <File-like object HfFileSystem, datasets/akhikhan123/BanglaEnglishMixedAsrDataset@22fbb3db3dac41518e4f2aeb2a027120c718c11b/Speaker1/audio/recording 10.wav>: Format not recognised.
```
Weird: the file https://huggingface.co/datasets/akhikhan123/BanglaEnglishMixedAsrDataset/blob/main/Speaker1/audio/recording%2010.wav seems to be a valid WAV file. cc @polinaeterna, maybe?
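For anyone trying to narrow this down locally: libsndfile reports "Format not recognised" when a file's leading bytes are not a valid header, so a cheap header check can tell apparently-corrupt files from files that only fail during streaming. A minimal sketch, using only the standard library (`looks_like_wav` is an illustrative helper name, not part of any library):

```python
def looks_like_wav(path):
    """Return True if the file starts with a canonical RIFF/WAVE header."""
    with open(path, "rb") as f:
        header = f.read(12)
    # A canonical WAV file starts with b"RIFF" and has b"WAVE" at offset 8.
    return len(header) == 12 and header[:4] == b"RIFF" and header[8:12] == b"WAVE"
```

Note that this only checks the container header; a file can pass it and still hold truncated or malformed audio data, so decoding with `soundfile.read` remains the definitive test.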
I get this error when loading the dataset locally:

```
LibsndfileError: Error opening <_io.BufferedReader name='/root/.cache/huggingface/datasets/downloads/839de944793d4e7d337a9f0968a73d2d655c6cf041dfd03323a75a5a9c8c5f07'>: Format not recognised.
```

What kind of problem is this? Can you help me with the correct way of uploading an ASR dataset to Hugging Face?
Maybe linked to https://github.com/huggingface/dataset-viewer/pull/2792. cc @albertvillanova @polinaeterna, do you want to investigate?
Hi @severo and @akhikhan123,
I had a similar issue in another dataset of mine; it stemmed from the fact that some samples in the dataset were corrupted. I was able to fix it by iterating over the dataset, identifying those samples, and removing them.
Here's the script I used:

```python
from datasets import load_dataset
from tqdm import tqdm

ds = load_dataset("not-lain/lofiHipHop")
n = len(ds["train"])

# Slow process: decode every sample to find the corrupted ones.
errors = []
for i in tqdm(range(n)):
    try:
        # If an item cannot be retrieved (decoded), it's erroneous.
        ds["train"][i]
    except Exception:
        errors.append(i)

# Keep only the indices of the healthy samples.
keep = [i for i in range(n) if i not in errors]
data = ds["train"].select(keep)

# Push the filtered data back to the Hub.
data.push_to_hub("not-lain/lofiHipHop")
```
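On the upload question above: a common way to publish an ASR dataset is the AudioFolder layout, where a `metadata.csv` maps each audio file (in a `file_name` column) to its transcription. A minimal sketch of generating that CSV; the folder name, the `transcription` column name, the repo id, and the `build_metadata_csv` helper are illustrative assumptions, not a fixed API:

```python
# Target layout that `datasets` can load as an "audiofolder":
#
#   my_asr_dataset/
#   ├── metadata.csv
#   └── audio/
#       ├── recording_01.wav
#       └── recording_02.wav
import csv
import io

def build_metadata_csv(rows):
    """rows: list of (relative_audio_path, transcription) pairs."""
    buf = io.StringIO()
    writer = csv.writer(buf, lineterminator="\n")
    writer.writerow(["file_name", "transcription"])  # "file_name" is required
    writer.writerows(rows)
    return buf.getvalue()

print(build_metadata_csv([
    ("audio/recording_01.wav", "bangla english mixed sentence"),
]))

# Then, roughly (not executed here):
# from datasets import load_dataset, Audio
# ds = load_dataset("audiofolder", data_dir="my_asr_dataset")
# ds = ds.cast_column("audio", Audio(sampling_rate=16000))
# ds.push_to_hub("akhikhan123/BanglaEnglishMixedAsrDataset")
```

One practical tip: avoid spaces in audio file names (e.g. prefer `recording_10.wav` over `recording 10.wav`), as they make paths harder to handle across tools.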