---
dataset_info:
  features:
    - name: image
      dtype: image
    - name: caption
      dtype: string
  splits:
    - name: train
      num_bytes: 171055893.125
      num_examples: 1087
  download_size: 170841790
  dataset_size: 171055893.125
language:
  - en
task_categories:
  - text-to-image
annotations_creators:
  - machine-generated
size_categories:
  - 1K<n<10K
---

## Disclaimer

This dataset was inspired by https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions.

# Dataset Card for a subset of Vivian Maier's photographs with BLIP captions

The captions were generated with the pre-trained BLIP model.
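
For reference, the sketch below shows one way such captions can be produced with the `transformers` library. The card does not say which BLIP checkpoint or generation settings were used, so the `Salesforce/blip-image-captioning-base` checkpoint and the parameters here are assumptions for illustration only.

```python
# Minimal BLIP captioning sketch (assumed checkpoint; not necessarily the one
# used to build this dataset).
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("photo.jpg").convert("RGB")          # any RGB photograph
inputs = processor(images=image, return_tensors="pt")   # preprocess for the model
output_ids = model.generate(**inputs, max_new_tokens=30)
caption = processor.decode(output_ids[0], skip_special_tokens=True)
print(caption)
```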

Each row contains an `image` key and a `caption` key: `image` is a variable-size PIL JPEG image, and `caption` is the accompanying text caption. Only a `train` split is provided.
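
A minimal loading sketch with the `datasets` library is shown below; the repository id is a placeholder, since the actual Hub id is not stated in this card.

```python
from datasets import load_dataset

# NOTE: "username/vivian-maier-blip-captions" is a placeholder repository id;
# replace it with this dataset's actual id on the Hugging Face Hub.
ds = load_dataset("username/vivian-maier-blip-captions", split="train")

row = ds[0]
image = row["image"]      # PIL.Image.Image; dimensions vary per photograph
caption = row["caption"]  # str; the BLIP-generated caption
print(image.size, caption)
```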

[More Information Needed]