# Working with the Metadata

## Downloading all the metadata files at once

Install the `huggingface-cli` utility (`pip install -U "huggingface_hub[cli]"`). You may then use the following command:

    huggingface-cli download Spawning/PD12M --repo-type dataset --local-dir metadata --include "metadata/*"

## Metadata format

The metadata files are in parquet format and contain the following attributes:
- `id`: A unique identifier for the image.
- `url`: The URL of the image.
- `s3_key`: The S3 file key of the image.
- `caption`: A caption for the image.
- `hash`: The MD5 hash of the image file.
- `width`: The width of the image in pixels.
- `height`: The height of the image in pixels.
- `mime_type`: The MIME type of the image file.
- `license`: The URL of the license.
- `source`: The source organization of the image.

### Open a metadata file
The files can be opened in Python with a library like `pandas`:
```python
import pandas as pd
df = pd.read_parquet('pd12m.000.parquet')
```

### Get URLs from metadata
Once you have opened a metadata file with pandas, you can get the URLs of the images with the following command:
```python
urls = df['url']
```
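Because the metadata also records each image's dimensions, you can filter rows before collecting URLs. A sketch in plain pandas (the 512-pixel threshold is an arbitrary example, not a recommendation from the dataset authors):

```python
import pandas as pd


def select_urls(df: pd.DataFrame, min_side: int = 512) -> list:
    """Return URLs of images whose shorter side is at least `min_side` pixels."""
    mask = (df["width"] >= min_side) & (df["height"] >= min_side)
    return df.loc[mask, "url"].tolist()
```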

### Download all files mentioned in metadata

To download all images referenced by a metadata file, you can use `img2dataset` (adjust the options to taste):

    img2dataset --url_list $file --input_format "parquet" \
    --url_col "url" --caption_col "caption" --output_format files \
    --output_folder $dir --processes_count 16 --thread_count 64 \
    --skip_reencode true --min_image_size 654 --max_aspect_ratio 1.77
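Each metadata record carries the MD5 `hash` of the original file, so downloads can be verified after the fact. A minimal sketch with the standard library (file paths are illustrative):

```python
import hashlib
from pathlib import Path


def md5_matches(path: str, expected_md5: str) -> bool:
    """Check a downloaded file against the `hash` value from the metadata."""
    digest = hashlib.md5(Path(path).read_bytes()).hexdigest()
    return digest == expected_md5


# e.g. md5_matches("images/000000001.jpg", row["hash"])
```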