---
license: cc-by-nc-4.0
---
<h1> <img src="https://raw.githubusercontent.com/IPL-UV/cloudsen12_models/main/notebooks/logo.webp" alt="Logo" width='5%'> CloudSEN12 trained models</h1>
This repository contains the trained models from the following publications:
> Aybar, C., Ysuhuaylas, L., Loja, J., Gonzales, K., Herrera, F., Bautista, L., Yali, R., Flores, A., Diaz, L., Cuenca, N., Espinoza, W., Prudencio, F., Llactayo, V., Montero, D., Sudmanns, M., Tiede, D., Mateo-García, G., & Gómez-Chova, L. (2022). **CloudSEN12, a global dataset for semantic understanding of cloud and cloud shadow in Sentinel-2**. Scientific Data, 9(1), Article 1. [DOI: 10.1038/s41597-022-01878-2](https://doi.org/10.1038/s41597-022-01878-2)
> Aybar, C., Montero, D., Mateo-García, G., & Gómez-Chova, L. (2023). **Lessons Learned From Cloudsen12 Dataset: Identifying Incorrect Annotations in Cloud Semantic Segmentation Datasets**. IGARSS 2023 - 2023 IEEE International Geoscience and Remote Sensing Symposium, 892–895. [DOI: 10.1109/IGARSS52108.2023.10282381](https://doi.org/10.1109/IGARSS52108.2023.10282381)
> Mateo-García, G., Aybar, C., Acciarini, G., Růžička, V., Meoni, G., Longépé, N., & Gómez-Chova, L. (2023). **Onboard Cloud Detection and Atmospheric Correction with Deep Learning Emulators**. IGARSS 2023 - 2023 IEEE International Geoscience and Remote Sensing Symposium, 1875–1878. [DOI: 10.1109/IGARSS52108.2023.10282605](https://doi.org/10.1109/IGARSS52108.2023.10282605)
> Aybar, C., Bautista, L., Montero, D., Contreras, J., Ayala, D., Prudencio, F., Loja, J., Ysuhuaylas, L., Herrera, F., Gonzales, K., Valladares, J., Flores, L. A., Mamani, E., Quiñonez, M., Fajardo, R., Espinoza, W., Limas, A., Yali, R., Alcántara, A., Leyva, M., Loayza-Muro, R., Willems, B., Mateo-García, G. & Gómez-Chova, L. (2024). **CloudSEN12+: The largest dataset of expert-labeled pixels for cloud and cloud shadow detection in Sentinel-2**. Data in Brief, 110852. [DOI: 10.1016/j.dib.2024.110852](https://doi.org/10.1016/j.dib.2024.110852)
We include the following trained models (the band sets they expect are sketched after the list):
* **cloudsen12**: Model trained on the 13 bands of Sentinel-2 L1C using the CloudSEN12 dataset
* **cloudsen12l2a**: Model trained on the 12 bands of Sentinel-2 L2A using the CloudSEN12 dataset
* **dtacs4bands**: Model trained on the NIR, RED, GREEN and BLUE bands of Sentinel-2 L1C using the CloudSEN12 dataset
* **landsat30**: Model trained on the bands common to Sentinel-2 L1C and Landsat 8/9 using the CloudSEN12 dataset
* **UNetMobV2_V1**: Model trained on the 13 bands of Sentinel-2 L1C using the CloudSEN12 dataset included in CloudSEN12+
* **UNetMobV2_V2**: Model trained on the 13 bands of Sentinel-2 L1C using the CloudSEN12+ dataset
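For orientation, the sketch below spells out the Sentinel-2 band sets implied by the descriptions above. Treat it as an assumption: the exact band selection and ordering used at training time should be checked against the tutorial linked below.

```python
# Assumed Sentinel-2 band sets per model variant (illustrative only; verify
# the exact selection and ordering in the cloudsen12_models tutorial).
S2_L1C_BANDS = ["B01", "B02", "B03", "B04", "B05", "B06", "B07",
                "B08", "B8A", "B09", "B10", "B11", "B12"]   # 13 bands: cloudsen12, UNetMobV2_V1/V2
S2_L2A_BANDS = [b for b in S2_L1C_BANDS if b != "B10"]      # 12 bands (B10/cirrus is absent in L2A): cloudsen12l2a
DTACS_4BANDS = ["B08", "B04", "B03", "B02"]                 # NIR, RED, GREEN, BLUE: dtacs4bands
```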
To run any of these models on a Sentinel-2 scene, see the tutorial [*Run CloudSEN12 model*](https://github.com/IPL-UV/cloudsen12_models/blob/main/notebooks/run_in_gee_image.ipynb) in the [cloudsen12_models](https://github.com/IPL-UV/cloudsen12_models) package.
<img src="https://raw.githubusercontent.com/IPL-UV/cloudsen12_models/main/notebooks/example_flood_dubai_2024.png" alt="Example of running a CloudSEN12 model on a Sentinel-2 scene">
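The tutorial notebook covers the end-to-end workflow (reading the Sentinel-2 bands, scaling to reflectance, and running the model). As a rough orientation only, the sketch below shows how a checkpoint of this kind could be driven with plain PyTorch and `segmentation_models_pytorch`; it is **not** the `cloudsen12_models` API, and the checkpoint file name, its format (a plain `state_dict`), and the class ordering are assumptions to be checked against the tutorial.

```python
# Illustrative sketch only -- not the cloudsen12_models package API.
import numpy as np
import torch
import segmentation_models_pytorch as smp

# U-Net with a MobileNetV2 encoder, 13 Sentinel-2 L1C input bands,
# 4 CloudSEN12 classes (clear, thick cloud, thin cloud, cloud shadow).
model = smp.Unet(encoder_name="mobilenet_v2", encoder_weights=None,
                 in_channels=13, classes=4)
state_dict = torch.load("UNetMobV2_V2.pt", map_location="cpu")  # hypothetical file name and format
model.load_state_dict(state_dict)
model.eval()

# Dummy 13-band patch (C, H, W); replace with real L1C reflectances.
patch = np.random.rand(13, 512, 512).astype("float32")
with torch.no_grad():
    logits = model(torch.from_numpy(patch)[None])  # (1, 4, 512, 512)
mask = logits.argmax(dim=1).squeeze(0)             # per-pixel class indices (assumed order: 0 clear, 1 thick, 2 thin, 3 shadow)
```

In practice the tutorial is the reference workflow; the sketch is only meant to illustrate the expected input and output shapes.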
If you find this work useful, please cite:
```
@article{aybar_cloudsen12_2024,
title = {{CloudSEN12}+: {The} largest dataset of expert-labeled pixels for cloud and cloud shadow detection in {Sentinel}-2},
issn = {2352-3409},
url = {https://www.sciencedirect.com/science/article/pii/S2352340924008163},
doi = {10.1016/j.dib.2024.110852},
journal = {Data in Brief},
author = {Aybar, Cesar and Bautista, Lesly and Montero, David and Contreras, Julio and Ayala, Daryl and Prudencio, Fernando and Loja, Jhomira and Ysuhuaylas, Luis and Herrera, Fernando and Gonzales, Karen and Valladares, Jeanett and Flores, Lucy A. and Mamani, Evelin and Quiñonez, Maria and Fajardo, Rai and Espinoza, Wendy and Limas, Antonio and Yali, Roy and Alcántara, Alejandro and Leyva, Martin and Loayza-Muro, Raúl and Willems, Bram and Mateo-García, Gonzalo and Gómez-Chova, Luis},
month = aug,
year = {2024},
pages = {110852},
}
@article{aybar_cloudsen12_2022,
title = {{CloudSEN12}, a global dataset for semantic understanding of cloud and cloud shadow in {Sentinel}-2},
volume = {9},
issn = {2052-4463},
url = {https://www.nature.com/articles/s41597-022-01878-2},
doi = {10.1038/s41597-022-01878-2},
number = {1},
urldate = {2023-01-02},
journal = {Scientific Data},
author = {Aybar, Cesar and Ysuhuaylas, Luis and Loja, Jhomira and Gonzales, Karen and Herrera, Fernando and Bautista, Lesly and Yali, Roy and Flores, Angie and Diaz, Lissette and Cuenca, Nicole and Espinoza, Wendy and Prudencio, Fernando and Llactayo, Valeria and Montero, David and Sudmanns, Martin and Tiede, Dirk and Mateo-García, Gonzalo and Gómez-Chova, Luis},
month = dec,
year = {2022},
pages = {782},
}
```
## Licence
<img src="https://mirrors.creativecommons.org/presskit/buttons/88x31/png/by-nc.png" alt="licence" width="60"/>
All pre-trained models in this repository are released under a [Creative Commons Attribution-NonCommercial 4.0 (CC BY-NC 4.0) licence](https://creativecommons.org/licenses/by-nc/4.0/legalcode.txt).
The `cloudsen12_models` Python package is published under a [GNU Lesser GPL v3 licence](https://www.gnu.org/licenses/lgpl-3.0.en.html).
## Acknowledgments
This research has been supported by the DEEPCLOUD project (PID2019-109026RB-I00, University of Valencia) funded by the Spanish Ministry of Science and Innovation (MCIN/AEI/10.13039/501100011033) and the European Union (NextGenerationEU).
> <img src="https://www.uv.es/chovago/logos/logoMICIN.jpg" alt="DEEPCLOUD project (PID2019-109026RB-I00, University of Valencia) funded by MCIN/AEI/10.13039/501100011033." title="DEEPCLOUD project (PID2019-109026RB-I00, University of Valencia) funded by MCIN/AEI/10.13039/501100011033." width="300"/>