To load a specific configuration, pass a config from one of the following:

```
'bin_Amazon_due', 'bin_Amazon_original', 'bin_Azure_due', 'bin_Azure_original', 'bin_Tesseract_due', 'bin_Tesseract_original']
```

Loading the dataset:

```python
from datasets import load_dataset

ds = load_dataset("jordyvl/DUDE_loader", 'Amazon_original')
```
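The configuration names listed above follow a `bin_<OCR engine>_<variant>` pattern. As a minimal sketch, a helper could build and validate such a name before loading; the `config_name` function below is hypothetical, not part of the loader:

```python
# Config names exposed by the loader, per the list above.
CONFIGS = [
    'bin_Amazon_due', 'bin_Amazon_original',
    'bin_Azure_due', 'bin_Azure_original',
    'bin_Tesseract_due', 'bin_Tesseract_original',
]

def config_name(ocr_engine: str, variant: str = "original") -> str:
    """Build a config name such as 'bin_Azure_due' and check it is valid."""
    name = f"bin_{ocr_engine}_{variant}"
    if name not in CONFIGS:
        raise ValueError(f"Unknown config: {name!r}; choose from {CONFIGS}")
    return name
```

Check the loader itself for the exact config string it expects, since the example above passes the name without the `bin_` prefix.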
This dataset repository contains helper functions to convert the dataset to ImDB (image database) format.
We advise cloning the repository and running it according to your preferences (OCR version, lowercasing, ...).

When running the above data loading script, you should be able to find the extracted binaries under the [HF_CACHE](https://huggingface.co/docs/datasets/cache):
`HF_CACHE/datasets/downloads/extracted/<hash>/DUDE_train-val-test_binaries`, which can be reused for the `data_dir` argument.
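As a sketch, that binaries path can be assembled programmatically. This assumes the default cache root (overridable via the `HF_DATASETS_CACHE` environment variable); the `binaries_dir` helper name is hypothetical:

```python
import os

# Default Hugging Face datasets cache root, overridable via HF_DATASETS_CACHE.
CACHE_ROOT = os.environ.get(
    "HF_DATASETS_CACHE",
    os.path.expanduser("~/.cache/huggingface/datasets"),
)

def binaries_dir(extraction_hash: str) -> str:
    """Path to the extracted DUDE binaries for a given download hash."""
    return os.path.join(
        CACHE_ROOT, "downloads", "extracted",
        extraction_hash, "DUDE_train-val-test_binaries",
    )
```

The returned path is what would be supplied as the `data_dir` argument to the conversion script.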
For example:

```bash
python3 DUDE_imdb_loader.py \
  --data_dir ~/.cache/huggingface/datasets/downloads/extracted/7adde0ed7b0150b7f6b32e52bcad452991fde0f3407c8a87e74b1cb475edaa5b/DUDE_train-val-test_binaries/
```
For baselines, we recommend having a look at the [MP-DocVQA repository](https://github.com/rubenpt91/MP-DocVQA-Framework).