HySCDG

We provide the models used in our HySCDG pipeline, which generates hybrid semantic change detection datasets, as presented in our CVPR paper The Change You Want To Detect.

The pipeline consists of a main Stable Diffusion model and a ControlNet.

The Stable Diffusion core was specifically trained for inpainting on remote sensing images. Starting from the Stable Diffusion 2 Inpainting checkpoint, we sequentially trained the VAE and then the U-Net on aerial images.

We then added a ControlNet alongside the Stable Diffusion core and trained it while keeping the weights of the core frozen. Training followed the inpainting setup (with random masks), using images from the FLAIR dataset and feeding their semantic maps to the ControlNet.
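To make the random-mask inpainting setup concrete, here is a minimal sketch of how such training masks could be drawn. This is our illustration only: the actual mask-sampling scheme used for HySCDG training may differ, and `random_rect_mask` is a hypothetical helper, not part of the released code.

```python
import numpy as np

def random_rect_mask(height, width, rng=None):
    """Return a binary inpainting mask with one random rectangle set to 1.

    Illustrative only: HySCDG's real mask distribution may be different.
    """
    rng = rng or np.random.default_rng()
    mask = np.zeros((height, width), dtype=np.uint8)
    # Top-left corner somewhere in the upper-left quadrant...
    y0, x0 = rng.integers(0, height // 2), rng.integers(0, width // 2)
    # ...and a bottom-right corner below/right of it, so the box is non-empty.
    y1, x1 = rng.integers(y0 + 1, height), rng.integers(x0 + 1, width)
    mask[y0:y1, x0:x1] = 1
    return mask

mask = random_rect_mask(512, 512, np.random.default_rng(0))
print(mask.shape, mask.min(), mask.max())  # (512, 512) 0 1
```

The masked region is the area the model learns to inpaint, conditioned on the semantic map fed to the ControlNet.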

This makes the model semantically guided: inpainting can be steered by selecting the classes to generate, provided as an RGB semantic map to the ControlNet.
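An RGB semantic map of this kind can be built from an integer label map with a class-to-color palette. The palette values below are placeholders we made up for illustration; the actual FLAIR color coding is defined by the dataset, not by this snippet.

```python
import numpy as np

# Hypothetical class -> RGB palette (NOT the real FLAIR colors).
PALETTE = {0: (0, 0, 0), 1: (219, 14, 154), 2: (147, 142, 123)}

def labels_to_rgb(label_map, palette):
    """Convert an (H, W) integer label map to an (H, W, 3) RGB semantic map."""
    h, w = label_map.shape
    rgb = np.zeros((h, w, 3), dtype=np.uint8)
    for cls, color in palette.items():
        # Paint every pixel of this class with its palette color.
        rgb[label_map == cls] = color
    return rgb

labels = np.array([[0, 1], [2, 1]])
semantic_map = labels_to_rgb(labels, PALETTE)
print(semantic_map.shape)  # (2, 2, 3)
```

Editing the label map before conversion (e.g. painting a building class into an empty area) is what lets you choose which classes the model generates in the masked region.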

Usage

You can use the model with the Diffusers library, or download the weights directly.
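As a rough sketch of the Diffusers route, the setup would pair a ControlNet with an inpainting pipeline. The repository IDs below are placeholders, not the actual published paths; check this model card and the HySCDG repository for the real ones, and note that the exact pipeline class to use is our assumption.

```python
def load_hyscdg_pipeline(controlnet_repo="ORG/hyscdg-controlnet",
                         base_repo="ORG/hyscdg-sd2-inpainting"):
    """Build a ControlNet inpainting pipeline (downloads weights when called).

    Repo IDs are placeholders; replace them with the real weight locations.
    """
    # Imports are kept inside the function so merely defining it is cheap.
    import torch
    from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline

    controlnet = ControlNetModel.from_pretrained(
        controlnet_repo, torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
        base_repo, controlnet=controlnet, torch_dtype=torch.float16
    )
    return pipe

# Typical inference call (not executed here):
# pipe = load_hyscdg_pipeline().to("cuda")
# out = pipe(prompt="aerial view", image=img, mask_image=mask,
#            control_image=semantic_map).images[0]
```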

To use the model within our HySCDG pipeline, follow the instructions in the HySCDG pipeline repository.
