---
license: cc-by-nc-nd-4.0
---
Git-10M is a global-scale remote sensing image-text dataset consisting of over 10 million image-text pairs, annotated with geographical location and spatial resolution information.
Project Page: https://chen-yang-liu.github.io/Text2Earth/

## Load Dataset

```python
from modelscope.msdatasets import MsDataset

ds = MsDataset.load('lcybuaa/Git-10M')
```
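If you only need one split or want to control where the files are cached, `MsDataset.load` accepts additional arguments. The sketch below assumes the `split` and `cache_dir` parameters of the ModelScope SDK; check the version you have installed:

```python
from modelscope.msdatasets import MsDataset

# Load only the training split into a custom cache directory
# (parameter names assumed from the ModelScope SDK).
train_ds = MsDataset.load('lcybuaa/Git-10M', split='train', cache_dir='./git10m_cache')
```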
## View samples from the dataset

```python
from datasets import load_dataset

save_path = 'xxxxx'  # local cache directory
ds = load_dataset('lcybuaa/Git-10M', cache_dir=save_path)
train_dataset = ds["train"]

for i, example in enumerate(train_dataset):
    image = example["image"]

    # Text description
    text = example["text"].split('_GOOGLE_LEVEL_')[-1]

    # Image resolution: the Google zoom level is stored before the
    # '_GOOGLE_LEVEL_' marker; 0 means no resolution metadata is available.
    Level = int(example["text"].split('_GOOGLE_LEVEL_')[0])
    if Level != 0:
        Resolution = 2**(17 - Level)
    else:
        print('This image comes from a public dataset. There is no available resolution metadata.')

    # Save image
    image.save(f"image_{i}.png")
    print('text:', text)
```
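For example, an image tagged with Google level 15 gives a resolution value of `2**(17-15) = 4`, while level 17 gives 1. Since the full dataset contains over 10 million pairs, streaming mode can be used to inspect a few samples without downloading everything first; below is a minimal sketch using the standard `datasets` streaming API (field names as in the loop above):

```python
from datasets import load_dataset

# Stream the training split instead of downloading all ~10M pairs up front.
ds_stream = load_dataset('lcybuaa/Git-10M', split='train', streaming=True)

for example in ds_stream.take(3):
    print(example["text"])
```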
## Git-RSCLIP: Remote Sensing Vision-Language Contrastive Pre-training Foundation Model

Git-RSCLIP is pre-trained with a contrastive learning framework on the Git-10M dataset. Git-RSCLIP is available here: [Huggingface | Modelscope]
Top-1 accuracy of zero-shot classification on multiple remote sensing image classification datasets:

| Method | OPTIMAL31 | RSC11 | RSICB128 | WHURS19 | RS2800/RSSCN7 | CLRS | Average score |
|---|---|---|---|---|---|---|---|
| CLIP | 0.60 | 0.45 | 0.25 | 0.77 | 0.52 | 0.56 | 0.52 |
| RemoteCLIP | 0.82 | 0.67 | 0.34 | 0.93 | 0.52 | 0.66 | 0.65 |
| GeoRSCLIP | 0.83 | 0.67 | 0.35 | 0.89 | 0.63 | 0.69 | 0.68 |
| SkyCLIP50 | 0.77 | 0.60 | 0.38 | 0.78 | 0.55 | 0.61 | 0.62 |
| Git-RSCLIP (Ours) | 0.95 | 0.67 | 0.52 | 0.94 | 0.64 | 0.65 | 0.73 |
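As a rough sketch of how such zero-shot scores are obtained, the snippet below compares an image against candidate class prompts with a CLIP-style model. The repository id, prompt template, and class names are assumptions (not taken from this card), and the exact interface may differ; consult the Git-RSCLIP model card for the authoritative usage.

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

# Hypothetical repository id; see the Git-RSCLIP model card for the actual one.
model = AutoModel.from_pretrained("lcybuaa/Git-RSCLIP")
processor = AutoProcessor.from_pretrained("lcybuaa/Git-RSCLIP")

# Illustrative scene classes and a simple prompt template.
classes = ["airport", "forest", "harbor", "residential area"]
prompts = [f"a satellite image of a {c}" for c in classes]

image = Image.open("image_0.png")
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds the similarity of the image to each text prompt.
probs = outputs.logits_per_image.softmax(dim=-1)
print("predicted class:", classes[probs.argmax().item()])
```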
CC-BY-NC-ND-4.0 License: this dataset may not be modified or redistributed without authorization.
## BibTeX entry and citation info

```bibtex
@misc{liu2025text2earthunlockingtextdrivenremote,
      title={Text2Earth: Unlocking Text-driven Remote Sensing Image Generation with a Global-Scale Dataset and a Foundation Model},
      author={Chenyang Liu and Keyan Chen and Rui Zhao and Zhengxia Zou and Zhenwei Shi},
      year={2025},
      eprint={2501.00895},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2501.00895},
}
```