---
dataset_info:
  features:
  - name: sentence1
    dtype: image
  - name: sentence2
    dtype: image
  - name: score
    dtype: float64
  splits:
  - name: test
    num_bytes: 13930539.5
    num_examples: 1186
  download_size: 10439022
  dataset_size: 13930539.5
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
---
## Dataset Summary

This dataset contains the sentence pairs of STS-16 rendered into images. It is intended to assess vision encoders' ability to understand text: a natural way to do so is to follow the standard STS evaluation protocol, with the texts rendered as images rather than given as strings.
## Examples of Use

Load the test split:

```python
from datasets import load_dataset

dataset = load_dataset("Pixel-Linguist/rendered-sts16", split="test")
```
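Each example holds two rendered-sentence images (`sentence1`, `sentence2`) and a gold similarity `score`. Under the STS protocol, a model's predicted pair similarities are compared against the gold scores with Spearman correlation. The sketch below illustrates that final step only, using random placeholder vectors and hypothetical gold scores in place of real vision-encoder embeddings and dataset labels:

```python
import numpy as np
from scipy.stats import spearmanr

# Placeholder embeddings: in practice, encode each image of the pair
# with a vision encoder. Random vectors are used here purely to
# illustrate the evaluation protocol.
rng = np.random.default_rng(0)
emb1 = rng.normal(size=(5, 16))
emb2 = rng.normal(size=(5, 16))
gold = np.array([0.2, 4.8, 3.1, 1.0, 2.5])  # hypothetical gold scores

# Cosine similarity between each pair of embeddings
cos = (emb1 * emb2).sum(axis=1) / (
    np.linalg.norm(emb1, axis=1) * np.linalg.norm(emb2, axis=1)
)

# STS evaluation: Spearman correlation of predicted vs. gold scores
rho, _ = spearmanr(cos, gold)
print(f"Spearman rho: {rho:.3f}")
```

With real embeddings, `cos` would be computed over the full test split and the resulting correlation reported as the STS-16 score.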
## Languages

English only. For multilingual and cross-lingual counterparts, see Pixel-Linguist/rendered-stsb and Pixel-Linguist/rendered-sts17.
## Citation Information

```bibtex
@article{xiao2024pixel,
  title={Pixel Sentence Representation Learning},
  author={Xiao, Chenghao and Huang, Zhuoxu and Chen, Danlu and Hudson, G Thomas and Li, Yizhi and Duan, Haoran and Lin, Chenghua and Fu, Jie and Han, Jungong and Moubayed, Noura Al},
  journal={arXiv preprint arXiv:2402.08183},
  year={2024}
}
```