# VoxCelebSpoof
VoxCelebSpoof is a dataset for detecting spoofing attacks on automatic speaker verification (ASV) systems. It is part of a broader effort to improve the security of voice biometric systems against spoofing attacks such as replay, voice synthesis, and voice conversion.
## Dataset Details

### Dataset Description
The VoxCelebSpoof dataset contains audio samples spanning several types of synthesis-based spoofs. Its goal is to support the development of systems that can accurately distinguish between genuine and spoofed audio.
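As a quick orientation, the data can be pulled with the Hugging Face `datasets` library. The sketch below is a hedged illustration, not an official loading script: the repository id `MattyB95/VoxCelebSpoof` comes from this card's hosting page, while the split and column names are assumptions about what the standard audiofolder builder emits.

```python
from datasets import load_dataset

# Stream the dataset so the audio archives are not downloaded up front.
ds = load_dataset("MattyB95/VoxCelebSpoof", streaming=True)

# Peek at the first example of the first available split.  Decoding the
# "audio" column requires an audio backend such as soundfile; the exact
# schema ("audio", "label", ...) is an assumption, so inspect the keys.
split = next(iter(ds.values()))
example = next(iter(split))
print(example.keys())
```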
Key features and objectives of VoxCelebSpoof include:
- **Data Diversity:** The dataset is derived from VoxCeleb, a large-scale speaker identification dataset built from celebrity interviews. Spoofing detection models trained on VoxCelebSpoof are therefore exposed to a variety of accents, languages, and acoustic environments.
- **Synthetic Varieties:** The spoofs comprise a range of text-to-speech (TTS) attacks, including high-quality synthetic speech generated by AI-based voice cloning, challenging systems to recognise and defend against diverse synthesis techniques.
- **Benchmarking:** VoxCelebSpoof can serve as a benchmark for comparing the performance of spoofing detection systems under standardised conditions (a sketch of the usual scoring metric follows this list).
- **Research and Development:** The dataset encourages the research community to innovate in anti-spoofing for voice biometric systems, promoting advances in feature extraction, classification algorithms, and deep learning.
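Spoofing-detection benchmarks are usually scored with the equal error rate (EER): the operating point where the false-acceptance and false-rejection rates cross. The snippet below is a generic, minimal sketch of that metric, assuming a detector that emits one score per utterance (higher meaning more likely spoofed); it is not a scoring script shipped with this dataset.

```python
import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(labels: np.ndarray, scores: np.ndarray) -> float:
    """EER: where false-acceptance and false-rejection rates are equal.

    labels: 1 for spoofed, 0 for genuine; scores: spoof likelihood.
    """
    fpr, tpr, _ = roc_curve(labels, scores, pos_label=1)
    fnr = 1.0 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))  # closest crossing point
    return float((fpr[idx] + fnr[idx]) / 2.0)

# Toy usage with random scores (replace with real detector outputs).
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=100)
scores = rng.random(100)
print(f"EER: {equal_error_rate(labels, scores):.3f}")
```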
- **Curated by:** Matthew Boakes
- **Funded by:** Bill & Melinda Gates Foundation
- **Shared by:** Alan Turing Institute
- **Language(s) (NLP):** English
- **License:** MIT
### Dataset Sources [optional]
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses

### Direct Use
[More Information Needed]
### Out-of-Scope Use
[More Information Needed]
## Dataset Structure
[More Information Needed]
## Dataset Creation

### Curation Rationale
[More Information Needed]
### Source Data

#### Data Collection and Processing
[More Information Needed]
#### Who are the source data producers?
[More Information Needed]
### Annotations [optional]

#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Bias, Risks, and Limitations
[More Information Needed]
### Recommendations
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation [optional]

**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed]