## Loading the dataset with a specific configuration
There are 3 different OCR versions to choose from, each available in its original format or the standardized DUE format, as well as the option to load the documents as filepaths or as binaries (PDF). To load a specific configuration, pass one of the following config names:
```python
#{bin_}{Amazon,Azure,Tesseract}_{original,due}
['Amazon_due', 'Amazon_original', 'Azure_due', 'Azure_original', 'Tesseract_due', 'Tesseract_original',
 'bin_Amazon_due', 'bin_Amazon_original', 'bin_Azure_due', 'bin_Azure_original', 'bin_Tesseract_due', 'bin_Tesseract_original']
```
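The naming scheme above can also be expanded programmatically, e.g. to iterate over every configuration. A minimal sketch (the variable names here are illustrative, not part of the loader):

```python
from itertools import product

# Expand the {bin_}{Amazon,Azure,Tesseract}_{original,due} pattern
# into the twelve config names listed above.
ocr_engines = ["Amazon", "Azure", "Tesseract"]
formats = ["original", "due"]
prefixes = ["", "bin_"]

configs = sorted(
    f"{prefix}{engine}_{fmt}"
    for prefix, engine, fmt in product(prefixes, ocr_engines, formats)
)
print(configs)
```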
Loading the dataset:

```python
from datasets import load_dataset

ds = load_dataset("jordyvl/DUDE_loader", "Amazon_original")
```
This dataset repository contains helper functions to convert the dataset to ImDB (image database) format. We advise cloning the repository and running the conversion according to your preferences (OCR version, lowercasing, ...). When running the above data loading script, you should be able to find the extracted binaries under the HF cache at `HF_CACHE/datasets/downloads/extracted/<hash>/DUDE_train-val-test_binaries`, which can be reused for the `data_dir` argument.
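Since the `<hash>` directory name is not known in advance, the extracted path can be located programmatically. A sketch, assuming the default cache layout; `find_extracted_binaries` is a hypothetical helper, not part of this repository:

```python
from pathlib import Path

def find_extracted_binaries(cache_dir) -> list:
    """Glob the HF cache for extracted DUDE binary folders.

    `cache_dir` is assumed to be the root of the Hugging Face
    cache (typically ~/.cache/huggingface); the <hash> segment
    is matched with a wildcard.
    """
    pattern = "datasets/downloads/extracted/*/DUDE_train-val-test_binaries"
    return sorted(Path(cache_dir).glob(pattern))

# Example: pass the first match as --data_dir to DUDE_imdb_loader.py
matches = find_extracted_binaries(Path.home() / ".cache/huggingface")
```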
For example:

```shell
python3 DUDE_imdb_loader.py \
  --data_dir ~/.cache/huggingface/datasets/downloads/extracted/7adde0ed7b0150b7f6b32e52bcad452991fde0f3407c8a87e74b1cb475edaa5b/DUDE_train-val-test_binaries/
```
For baselines, we recommend having a look at the MP-DocVQA repository.

We strongly encourage you to benchmark your best models and submit test set predictions on the DUDE competition leaderboard. To help with test set predictions, we have included a sample submission file, `RRC_DUDE_testset_submission_example.json`.