---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: caption
    dtype: string
  splits:
  - name: train
    num_bytes: 171055893.125
    num_examples: 1087
  download_size: 170841790
  dataset_size: 171055893.125
language:
- en
task_categories:
- text-to-image
annotations_creators:
- machine-generated
size_categories:
- 1K<n<10K
---

# Disclaimer

This dataset was inspired by https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions

# Dataset Card for a subset of Vivian Maier's photographs with BLIP captions

The captions are generated with the [pre-trained BLIP model](https://github.com/salesforce/BLIP).
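
As a reproducibility aid, here is a minimal sketch of how such captions can be produced with a BLIP captioning checkpoint distributed through `transformers`. The checkpoint name and generation settings below are assumptions for illustration, not a record of the exact pipeline used for this dataset.

```python
# Minimal BLIP captioning sketch.
# Assumption: the Salesforce/blip-image-captioning-base checkpoint; the original
# captions may have been produced with different weights or generation settings.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("photo.jpg").convert("RGB")  # any local photograph
inputs = processor(images=image, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=30)
caption = processor.decode(output_ids[0], skip_special_tokens=True)
print(caption)
```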

For each row the dataset contains `image` and `caption` keys. `image` is a PIL JPEG image of varying size, and `caption` is the accompanying text caption. Only a train split is provided.
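
The snippet below is a minimal loading sketch with the 🤗 Datasets library; the repository id is taken from the citation URL at the bottom of this card.

```python
from datasets import load_dataset

# Load the single train split (1,087 image/caption pairs).
ds = load_dataset("cQueenccc/Vivian-Blip-Captions", split="train")

example = ds[0]
example["image"]           # PIL image of varying size
print(example["caption"])  # the BLIP-generated caption
```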

## Examples

![vv1.jpg](https://raw.githubusercontent.com/CQUEEN-lpy/cqueenccc.github.io/main/imgs/vivian_a%20group%20of%20people.jpg)

> a group of people

![vv10.jpg](https://raw.githubusercontent.com/CQUEEN-lpy/cqueenccc.github.io/main/imgs/vivian_a%20person%20floating%20in%20the%20water.jpg)

> a person floating in the water

![vv100.jpg](https://raw.githubusercontent.com/CQUEEN-lpy/cqueenccc.github.io/main/imgs/vivian_a%20person%20standing%20next%20to%20a%20refrigerator.jpg)

> a person standing next to a refrigerator

## Citation

If you use this dataset, please cite it as:

```
@misc{cqueenccc2023vivian,
    author       = {cQueenccc},
    title        = {Vivian Maier's photograph split BLIP captions},
    year         = {2023},
    howpublished = {\url{https://huggingface.co/datasets/cQueenccc/Vivian-Blip-Captions/}}
}
```