Note: the Hugging Face dataset viewer is not available for this repository; only a preview of the rows is shown. Dataset generation fails with a DatasetGenerationCastError because the JSON files in the repo do not share a schema: optimized_mmflood_sar_train/metadata.json (at revision 1728b0f1e552b60c0c0cbbd954b92617d9e1cbfa) has the columns {Dataset, split, num_samples, mean, std, classes, ignore_index, attributes}, while the chunk index files have {chunks, config, updated_at}. The viewer suggests either editing the data files to have matching columns, or separating them into different configurations (see https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations).
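As the message suggests, one possible fix would be to point the viewer at each JSON file through separate configurations in the dataset card's YAML header. A hypothetical sketch (the config names are illustrative, and the chunk-index filename is an assumption, not confirmed by the error):

configs:
  - config_name: metadata
    data_files: "optimized_mmflood_sar_train/metadata.json"
  - config_name: chunk_index
    data_files: "optimized_mmflood_sar_train/index.json"  # assumed filename of the chunk index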

Preview of the two JSON files in optimized_mmflood_sar_train/ (the source of the column mismatch above):

metadata.json:

Dataset: MMFlood_SAR
split: train
num_samples: 6,181
mean: [0.049329374, 0.011776519, 142.41237]
std: [0.0391287043, 0.0103687926, 81.1010422]
classes: { "0": "background", "1": "flood" }
ignore_index: 255
attributes: { "image": { "dtype": "float16", "bands": [ "vv", "vh", "dem" ] }, "label": { "dtype": "uint8" } }

Chunk index file:

chunks: 88 binary chunks across 8 shards (chunk-0-0.bin through chunk-7-10.bin); every full chunk is 255,593,700 bytes and holds 75 samples, and the last chunk of each shard holds the remaining 22-23 samples (8 x 10 x 75 + 3 x 22 + 5 x 23 = 6,181 samples in total)
config: { "chunk_bytes": 256000000, "chunk_size": null, "compression": null, "data_format": ["numpy", "numpy"], "data_spec": "...", "encryption": null, "item_loader": "PyTreeLoader" }
updated_at: 1733781944.399247

How to use it

Install Dataset4EO

git clone --branch streaming https://github.com/EarthNets/Dataset4EO.git
cd Dataset4EO
pip install -e .

Then download the dataset files from this Hugging Face repo.
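One way to do this programmatically is with huggingface_hub; a sketch that downloads only the optimized training split into the working directory, so the input_dir below resolves:

from huggingface_hub import snapshot_download

# Fetch only the optimized SAR training split from this dataset repo.
snapshot_download(
    repo_id="JessicaYuan/EarthNets_MMFlood",
    repo_type="dataset",
    allow_patterns="optimized_mmflood_sar_train/*",
    local_dir=".",
)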

import dataset4eo as eodata

# Stream the optimized MMFlood SAR training split (3 channels: vv, vh, dem).
train_dataset = eodata.StreamingDataset(
    input_dir="optimized_mmflood_sar_train",
    num_channels=3,
    shuffle=True,
    drop_last=True,
)

# Each sample is a dict with an "image" array and a "label" mask.
sample = train_dataset[101]
print(sample.keys())
print(sample["image"])
print(sample["image"].shape)
print(sample["label"])

We acknowledge and give full credit to the original authors of MMFlood for their effort in creating this dataset. It is re-hosted here in compliance with its original license to facilitate further research. Please cite the original paper:

@article{montello2022mmflood,
  title={{MMFlood}: A multimodal dataset for flood delineation from satellite imagery},
  author={Montello, Fabio and Arnaudo, Edoardo and Rossi, Claudio},
  journal={IEEE Access},
  volume={10},
  pages={96774--96787},
  year={2022},
  publisher={IEEE}
}