Commit 197dc46 (parent 9759d6b) by Iulia Elisa: Update README

README.md (changed):
The XAMI dataset contains 1000 annotated images of observations from diverse sky ...

### Artefacts

A particularity of the dataset, compared to everyday images, is the locations where artefacts usually appear.

<img src="https://huggingface.co/datasets/iulia-elisa/XAMI-dataset/resolve/main/plots/artefact_distributions.png" alt="Examples of an image with multiple artefacts." />

Here are some examples of common artefacts in the dataset:

# Annotation platforms

The images have been annotated using the following platforms:

- [Zooniverse](https://www.zooniverse.org/projects/ori-j/ai-for-artefacts-in-sky-images), where the resulting annotations are not externally visible.
- [Roboflow](https://universe.roboflow.com/iuliaelisa/xmm_om_artefacts_512/), which allows for more interactive and visual annotation tools.

# The dataset format

The dataset is split into train and validation sets and contains annotated artefacts in COCO format for instance segmentation. We use multilabel Stratified K-fold (**k=4**) to balance class distributions across splits. We work with a single split version (out of 4) but also provide the means to work with all 4 versions.
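As a rough illustration of what the stratification buys (a sketch, not the dataset's own tooling; class names and counts below are made up), a stratified split keeps per-class annotation fractions similar across folds:

```python
from collections import Counter

def class_fractions(labels):
    """Fraction of annotations per class in a list of class labels."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {cls: n / total for cls, n in counts.items()}

# Hypothetical per-annotation class labels for one train/validation split
train_labels = ['smoke-ring'] * 80 + ['central-ring'] * 60 + ['star-loop'] * 20
val_labels   = ['smoke-ring'] * 20 + ['central-ring'] * 15 + ['star-loop'] * 5

train_frac = class_fractions(train_labels)
val_frac = class_fractions(val_labels)

# A well-stratified split keeps these fractions close for every class
balanced = all(abs(train_frac[c] - val_frac[c]) < 0.05 for c in train_frac)
```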

Please check [Dataset Structure](Datasets-Structure.md) for a more detailed structure of our dataset in COCO-IS and YOLOv8-Seg format.
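For orientation, a COCO instance-segmentation annotations file is a single JSON object with `images`, `annotations`, and `categories` lists. The sketch below shows the shape only; file names, IDs, and the category name are illustrative, not taken from the dataset:

```python
import json

# Minimal COCO-style instance-segmentation structure (illustrative values only)
coco = {
    "images": [
        {"id": 0, "file_name": "obs_0001.png", "width": 512, "height": 512},
    ],
    "annotations": [
        {
            "id": 0,
            "image_id": 0,           # refers to images[...]["id"]
            "category_id": 1,        # refers to categories[...]["id"]
            "segmentation": [[10, 10, 60, 10, 60, 40, 10, 40]],  # polygon x,y pairs
            "bbox": [10, 10, 50, 30],  # x, y, width, height
            "area": 1500,
            "iscrowd": 0,
        },
    ],
    "categories": [
        {"id": 1, "name": "central-ring", "supercategory": "artefact"},
    ],
}

# Round-trips through JSON like an _annotations.coco.json file
payload = json.dumps(coco)
```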

# Downloading the dataset

### *(Option 1)* Downloading the dataset **archive** from HuggingFace

- using a Python script:

```python
import os
import json

import pandas as pd  # used by the optional dataset-split snippet
from huggingface_hub import hf_hub_download

dataset_name = 'xami_dataset' # the dataset name on Huggingface
images_dir = '.' # the output directory of the dataset images

hf_hub_download(
    repo_id="iulia-elisa/XAMI-dataset", # the Huggingface repo ID
    repo_type='dataset',
    filename=dataset_name + '.zip',
    local_dir=images_dir
);

# Unzip file (notebook syntax)
!unzip -q "xami_dataset.zip"

# Read the train json annotations file
annotations_path = os.path.join(images_dir, dataset_name, 'train/', '_annotations.coco.json')

with open(annotations_path) as f:
    data_in = json.load(f)

data_in['images'][0]
```
or
- using a CLI command:

```bash
huggingface-cli download iulia-elisa/XAMI-dataset xami_dataset.zip --repo-type dataset --local-dir '/path/to/local/dataset/dir'
```

### *(Option 2)* Cloning the repository for more visualization tools

```bash
# Github
git clone https://github.com/ESA-Datalabs/XAMI-dataset.git
cd XAMI-dataset
```

<!--
# Dataset Split with SKF (Optional)

- The method below allows for dataset splitting using the pre-generated splits in CSV files. This step is useful when training on multiple dataset split versions to gain a more generalised view on metrics.

```python
csv_files = ['mskf_0.csv', 'mskf_1.csv', 'mskf_2.csv', 'mskf_3.csv']

for idx, csv_file in enumerate(csv_files):
    mskf = pd.read_csv(csv_file)
    utils.create_directories_and_copy_files(images_dir, data_in, mskf, idx)
```
-->

## Licence
...