---
license: cc-by-nc-nd-4.0
---
The **Git-10M** dataset is a global-scale remote sensing image-text dataset consisting of over **10 million** image-text pairs, annotated with geographical location and spatial-resolution information.
## CC-BY-NC-ND-4.0 License: This dataset may not be modified or redistributed without authorization.
<h1>
<a href="https://chen-yang-liu.github.io/Text2Earth/">Project Page: https://chen-yang-liu.github.io/Text2Earth/ </a>
</h1>
<div align="center">
<img src="https://github.com/Chen-Yang-Liu/Text2Earth/raw/main/images/dataset.png" width="1000"/>
</div>
## Load Dataset
```python
from modelscope.msdatasets import MsDataset

# Load Git-10M through the ModelScope hub
ds = MsDataset.load('lcybuaa/Git-10M')
```
## View samples from the dataset
```python
from datasets import load_dataset

save_path = 'xxxxx'  # local cache directory
ds = load_dataset('lcybuaa/Git-10M', cache_dir=save_path)
train_dataset = ds["train"]

for i, example in enumerate(train_dataset):
    image = example["image"]
    # Text description: the part after the '_GOOGLE_LEVEL_' marker
    text = example["text"].split('_GOOGLE_LEVEL_')[-1]
    # Image resolution: the Google zoom level precedes the marker
    level = int(example["text"].split('_GOOGLE_LEVEL_')[0])
    if level != 0:
        resolution = 2 ** (17 - level)
    else:
        print('This image comes from a public dataset. There is no available resolution metadata.')
    # Save the image and print its description
    image.save(f"image_{i}.png")
    print('text:', text)
```
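The parsing above can be isolated into a small helper (a minimal sketch; `parse_sample` is a hypothetical function, not part of the dataset API): the text field has the form `<level>_GOOGLE_LEVEL_<caption>`, and a non-zero level maps to resolution via `2 ** (17 - level)`:

```python
def parse_sample(raw_text):
    """Split a Git-10M text field into (resolution, caption).

    Returns None for the resolution when the level is 0
    (public-dataset images without resolution metadata).
    """
    level_str, _, caption = raw_text.partition('_GOOGLE_LEVEL_')
    level = int(level_str)
    resolution = 2 ** (17 - level) if level != 0 else None
    return resolution, caption

# A level-17 image corresponds to resolution 2**(17-17) = 1;
# level 0 means no resolution metadata is available.
print(parse_sample('17_GOOGLE_LEVEL_an airport with two runways'))
# (1, 'an airport with two runways')
```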
## Git-RSCLIP: Remote Sensing Vision-Language Contrastive Pre-training Foundation Model
Git-RSCLIP is pre-trained with a contrastive learning framework on the Git-10M dataset.
Git-RSCLIP is available at: [[Huggingface](https://huggingface.co/lcybuaa/Git-RSCLIP) | [Modelscope](https://modelscope.cn/models/lcybuaa1111/Git-RSCLIP)]
Comparison of Top-1 accuracy for zero-shot classification on multiple image classification datasets:
| Method | OPTIMAL31 | RSC11 | RSICB128 | WHURS19 | RS2800/RSSCN7 | CLRS | Average score |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| CLIP | 0.6 | 0.45 | 0.25 | 0.77 | 0.52 | 0.56 | 0.52 |
| RemoteCLIP | 0.82 | 0.67 | 0.34 | 0.93 | 0.52 | 0.66 | 0.65 |
| GeoRSCLIP | 0.83 | 0.67 | 0.35 | 0.89 | 0.63 | 0.69 | 0.68 |
| SkyCLIP50 | 0.77 | 0.60 | 0.38 | 0.78 | 0.55 | 0.61 | 0.62 |
| (Git-RSCLIP) Ours | **0.95** | **0.67** | **0.52** | **0.94** | **0.64** | 0.65 | **0.73** |
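The Average score column is the arithmetic mean of the six per-dataset accuracies, rounded to two decimals; a quick check for the Git-RSCLIP row:

```python
# Git-RSCLIP per-dataset Top-1 accuracy, in table order
scores = [0.95, 0.67, 0.52, 0.94, 0.64, 0.65]
average = round(sum(scores) / len(scores), 2)
print(average)  # 0.73
```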
# BibTeX entry and citation info
```bibtex
@misc{liu2025text2earthunlockingtextdrivenremote,
title={Text2Earth: Unlocking Text-driven Remote Sensing Image Generation with a Global-Scale Dataset and a Foundation Model},
author={Chenyang Liu and Keyan Chen and Rui Zhao and Zhengxia Zou and Zhenwei Shi},
year={2025},
eprint={2501.00895},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2501.00895},
}
```