---
configs:
  - config_name: Kenya
    data_files:
      - split: train
        path: Kenya/train_filtered.csv
      - split: val
        path: Kenya/valid_filtered.csv
      - split: test
        path: Kenya/test_filtered.csv
    default: true
  - config_name: South_Africa
    data_files:
      - split: train
        path: South_Africa/train_filtered.csv
      - split: val
        path: South_Africa/valid_filtered.csv
      - split: test
        path: South_Africa/test_filtered.csv
  - config_name: USA_Summer
    data_files:
      - split: train
        path: USA_Summer/train_filtered.csv
      - split: val
        path: USA_Summer/valid_filtered.csv
      - split: test
        path: USA_Summer/test_filtered.csv
  - config_name: USA_Winter
    data_files:
      - split: train
        path: USA_Winter/train_filtered.csv
      - split: val
        path: USA_Winter/valid_filtered.csv
      - split: test
        path: USA_Winter/test_filtered.csv
license: cc-by-nc-4.0
---

# BATIS: Bayesian Approaches for Targeted Improvement of Species Distribution Models

This repository contains the dataset used in the experiments presented in *BATIS: Bayesian Approaches for Targeted Improvement of Species Distribution Models*. To download the dataset, you can use the `load_dataset` function from Hugging Face. For example:

```python
from datasets import load_dataset

# Training split for Kenya
training_kenya = load_dataset("cathv/BATIS", name="Kenya", split="train")

# Validation split for South Africa
validation_south_africa = load_dataset("cathv/BATIS", name="South_Africa", split="val")

# Test split for USA-Summer
test_usa_summer = load_dataset("cathv/BATIS", name="USA_Summer", split="test")
```

## Licenses

The BATIS Benchmark is released under a Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) License.

The use of our dataset should also comply with the following:

## Dataset Configurations and Splits

The dataset contains the following four configurations:

- **Kenya**: the data used to train our models for predicting bird species distribution in Kenya.
- **South_Africa**: the data used to train our models for predicting bird species distribution in South Africa.
- **USA_Winter**: the data used to train our models for predicting bird species distribution in the United States of America during the winter season.
- **USA_Summer**: the data used to train our models for predicting bird species distribution in the United States of America during the summer season.

Each configuration is further divided into train, valid and test splits. These are the same splits we used in our paper; they were generated by the pre-processing pipeline described there and can be easily reproduced by re-using our code.

## Dataset Structure

```
/BATIS/
    Kenya/
        images.tar.gz
        environmental.tar.gz
        targets.tar.gz
        train_filtered.csv
        test_filtered.csv
        valid_filtered.csv
    South_Africa/
        images.tar.gz
        environmental.tar.gz
        targets.tar.gz
        train_filtered.csv
        test_filtered.csv
        valid_filtered.csv
    USA_Winter/
        images/
            images_{aa}
            ...
            images_{ad}
        environmental.tar.gz
        targets.tar.gz
        train_filtered.csv
        test_filtered.csv
        valid_filtered.csv
    USA_Summer/
        images/
            images_{aa}
            ...
            images_{af}
        images.tar.gz
        environmental.tar.gz
        targets.tar.gz
        train_filtered.csv
        test_filtered.csv
        valid_filtered.csv
    Species_ID/
        species_list_kenya.csv
        species_list_south_africa.csv
        species_list_usa.csv
```

The files `train_filtered.csv`, `test_filtered.csv` and `valid_filtered.csv` contain the information shown in the Dataset Viewer. The `targets`, `images` and `environmental` archives respectively contain the target vectors (i.e., the estimated ground-truth encounter rate probabilities), the satellite images (in `.tif` format) and the environmental rasters from WorldClim (in `.npy` format) associated with each hotspot. The `Species_ID/` folder contains the species list files for each subset.
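Once the archives are extracted, each hotspot's files can be loaded by ID. The sketch below is illustrative (the helper name and directory root are assumptions, not part of the released code); it reads the environmental raster and target vector with standard libraries and only locates the GeoTIFF, which should be opened with a raster library such as rasterio:

```python
import json
from pathlib import Path

import numpy as np


def load_hotspot(root, hotspot_id):
    """Load the per-hotspot files extracted from the archives.

    Assumes targets.tar.gz, environmental.tar.gz and images.tar.gz
    have been extracted under `root` into the targets/, environmental/
    and images/ folders shown above.
    """
    root = Path(root)
    # Environmental rasters from WorldClim are stored as .npy arrays.
    env = np.load(root / "environmental" / f"{hotspot_id}.npy")
    # Target vectors (estimated encounter rate probabilities) are JSON.
    with open(root / "targets" / f"{hotspot_id}.json") as f:
        target = json.load(f)
    # The satellite image is a GeoTIFF; open it with a raster library
    # of your choice (e.g. rasterio) rather than np.load.
    image_path = root / "images" / f"{hotspot_id}.tif"
    return env, target, image_path
```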

## Data Fields

- **hotspot_id**: The unique ID associated with a given hotspot. The `hotspot_id` value can be used to load the data from the `targets`, `environmental` and `images` archives, as they are all organized as:

  ```
  /BATIS/
      images/
          {hotspot_id_1}.tif
          ...
          {hotspot_id_n}.tif
      environmental/
          {hotspot_id_1}.npy
          ...
          {hotspot_id_n}.npy
      targets/
          {hotspot_id_1}.json
          ...
          {hotspot_id_n}.json
  ```
- **lon**: Longitude coordinate of the hotspot
- **latitude**: Latitude coordinate of the hotspot
- **num_complete_checklists**: Number of complete checklists collected at that hotspot
- **bio_1** to **bio_19**: Environmental covariate values associated with that hotspot, extracted from the WorldClim model. For more details on each of these variables, please refer to the appendix.
- **split**: The split associated with that hotspot (either train, valid or test)
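The split CSVs can also be worked with directly as DataFrames. The snippet below is a minimal sketch using a made-up two-row sample that mirrors the columns above (the values are invented for illustration; the real files contain all of bio_1 through bio_19):

```python
import io

import pandas as pd

# Hypothetical two-row sample mirroring the split CSV columns
# (hotspot IDs and values are made up for illustration).
sample_csv = io.StringIO(
    "hotspot_id,lon,latitude,num_complete_checklists,bio_1,bio_2,split\n"
    "L0001,36.82,-1.29,42,24.1,11.3,train\n"
    "L0002,18.42,-33.93,17,16.8,9.9,valid\n"
)
df = pd.read_csv(sample_csv)

# Collect the environmental covariate columns (bio_1 ... bio_19 in the real files).
bio_cols = [c for c in df.columns if c.startswith("bio_")]

# Filter rows by split, exactly as with the real train/valid/test CSVs.
train_df = df[df["split"] == "train"]
```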

## Reconstructing Satellite Image Archive Files for the USA Subsets

The satellite image archive files for the USA_Summer and USA_Winter subsets are very large. To facilitate download through Hugging Face, we split these archives into multiple binary chunks. You can reconstruct the original `.tar.gz` archive with the `cat` command, which joins the chunks in alphabetical order.

To reconstruct the archive for the USA-Winter subset, run:

```shell
cat images_chunk_aa images_chunk_ab images_chunk_ac images_chunk_ad > images.tar.gz
```

To reconstruct the archive for the USA-Summer subset, run:

```shell
cat images_chunk_aa images_chunk_ab images_chunk_ac images_chunk_ad images_chunk_ae images_chunk_af > images.tar.gz
```
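If you prefer to stay in Python (e.g. on Windows, where `cat` is unavailable), the same reconstruction can be done by concatenating the chunk files in sorted order. This is an equivalent sketch, not part of the released code; the function name and defaults are illustrative:

```python
import shutil
from pathlib import Path


def join_chunks(chunk_dir, prefix="images_chunk_", out_name="images.tar.gz"):
    """Concatenate the split binary chunks back into one archive.

    Equivalent to the `cat` commands above: chunks are joined in
    lexicographic order (aa, ab, ...), which sorted() guarantees.
    """
    chunk_dir = Path(chunk_dir)
    chunks = sorted(chunk_dir.glob(prefix + "*"))
    out_path = chunk_dir / out_name
    with open(out_path, "wb") as dst:
        for chunk in chunks:
            with open(chunk, "rb") as src:
                shutil.copyfileobj(src, dst)
    return out_path
```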