Dataset preview (viewer table omitted): each row contains two rendered sentence images, sentence1 and sentence2 (448 px wide), and a similarity score (float64, ranging from 0 to 5).
Dataset Summary
This dataset renders the sentence pairs of STS-14 into images. We envision the need to assess vision encoders' ability to understand text; a natural way to do so is to follow the STS protocol, with the texts rendered into images.
Examples of Use
Load the test split:
from datasets import load_dataset
dataset = load_dataset("Pixel-Linguist/rendered-sts14", split="test")
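The STS protocol mentioned above scores a model by rank-correlating its predicted similarities with the gold scores (0 to 5) using Spearman correlation. A minimal sketch of that metric, with illustrative scores rather than real model outputs (the `spearman` helper and the sample values are ours, not part of the dataset):

```python
# Sketch of the STS evaluation step: Spearman rank correlation between
# gold similarity scores and model predictions. Uses the no-ties
# rank-difference formula; real evaluations typically use scipy.stats.spearmanr.

def spearman(gold, pred):
    """Spearman correlation assuming no tied values."""
    def ranks(xs):
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        r = [0] * len(xs)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rg, rp = ranks(gold), ranks(pred)
    n = len(gold)
    d2 = sum((a - b) ** 2 for a, b in zip(rg, rp))
    return 1 - 6 * d2 / (n * (n * n - 1))

gold = [3.0, 0.8, 3.8, 1.0, 0.4]        # gold scores on the 0-5 scale
pred = [0.71, 0.12, 0.80, 0.25, 0.05]   # hypothetical cosine similarities
print(spearman(gold, pred))  # 1.0 when the predicted ranking matches the gold ranking
```

In practice, `pred` would be the cosine similarities between the vision encoder's embeddings of the `sentence1` and `sentence2` images for each row.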
Languages
English-only; for multilingual and cross-lingual datasets, see Pixel-Linguist/rendered-stsb
and Pixel-Linguist/rendered-sts17
Citation Information
@article{xiao2024pixel,
title={Pixel Sentence Representation Learning},
author={Xiao, Chenghao and Huang, Zhuoxu and Chen, Danlu and Hudson, G Thomas and Li, Yizhi and Duan, Haoran and Lin, Chenghua and Fu, Jie and Han, Jungong and Moubayed, Noura Al},
journal={arXiv preprint arXiv:2402.08183},
year={2024}
}