---
license: apache-2.0
language:
- en
metrics:
- accuracy
library_name: keras
pipeline_tag: image-classification
tags:
- astronomy
---
# Model Card for Galaxy Morphology Classifier
This model classifies RGB galaxy images into two classes: Spheroid or Spiral.
## Model Details
### Model Description
- **Developed by:** Jeroen den Otter
- **Funded by:** NASA
- **Shared by:** Michael Rutkowski
- **Model type:** Keras Sequential
- **Language(s) (NLP):** English
- **License:** Apache 2.0
### Model Sources
- **Repository (Kaggle challenge):** https://www.kaggle.com/c/galaxy-zoo-the-galaxy-challenge/overview
- **Paper:** In progress
## Uses
The model can be used to classify galaxies from cutout images. It does not produce bounding boxes, so images containing more than one galaxy should be avoided.
## How to Get Started with the Model
Use the code below to get started with the model. The image path is a placeholder, and `img_height` and `img_width` must match the input size the model was trained with.
```python
import numpy as np
import tensorflow as tf
model = tf.keras.models.load_model('model.keras')
# Load one RGB cutout and add a batch dimension (placeholder path)
image = tf.keras.utils.load_img('galaxy.png', target_size=(img_height, img_width))
image = np.expand_dims(tf.keras.utils.img_to_array(image), axis=0)
prediction = model.predict(image)  # softmax probabilities per class
print(prediction)
```
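The two output probabilities correspond to the class indices used at training time. Assuming the classes were indexed alphabetically (`one_one` before `one_two`; an assumption, not confirmed by this card), the prediction maps back to a label like this:
```python
class_names = ['one_one (Spheroid)', 'one_two (Spiral)']  # assumed ordering
print(class_names[int(np.argmax(prediction[0]))])
```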
## Training Details
### Training Data
From the Kaggle Galaxy Zoo challenge, images are selected whose vote fraction exceeds 80% for class one_one (Spheroid) or 90% for class one_two (Spiral). In addition, the images are segmented to remove background noise. The selection step is sketched below.
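A minimal sketch of that filtering, assuming the Kaggle challenge's solutions file and its column names (`training_solutions_rev1.csv`, `Class1.1`, `Class1.2`); these names are assumptions about the challenge data, not files shipped with this model:
```python
import pandas as pd

# Assumed file/column names from the Kaggle Galaxy Zoo challenge data
solutions = pd.read_csv('training_solutions_rev1.csv')
spheroid_ids = solutions.loc[solutions['Class1.1'] > 0.80, 'GalaxyID']
spiral_ids = solutions.loc[solutions['Class1.2'] > 0.90, 'GalaxyID']
```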
### Training Procedure
```python
import tensorflow as tf

# On-the-fly augmentation; operates on raw pixel values in [0, 255]
data_augmentation = tf.keras.Sequential([
    tf.keras.layers.RandomFlip('horizontal'),
    tf.keras.layers.RandomRotation(0.2),
    tf.keras.layers.RandomZoom(0.2),
    tf.keras.layers.RandomContrast(0.2),
    tf.keras.layers.RandomBrightness(0.2),
    tf.keras.layers.GaussianNoise(0.1),
])

# Cache, shuffle, and prefetch the input pipelines
AUTOTUNE = tf.data.AUTOTUNE
train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)

# Four convolutional blocks followed by a dense classification head
model = tf.keras.Sequential([
    tf.keras.Input(shape=(img_height, img_width, 3)),
    data_augmentation,
    tf.keras.layers.Rescaling(1./255),
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(128, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(128, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(num_classes, activation='softmax'),
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```
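The card does not record the actual fit call; a minimal sketch, assuming the `train_ds` and `val_ds` pipelines above and an illustrative epoch count:
```python
# Illustrative only: the real number of epochs is not documented
history = model.fit(train_ds, validation_data=val_ds, epochs=20)
model.save('model.keras')  # the file loaded in the quick-start example
```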
#### Training Hyperparameters
- **Optimizer:** Adam, learning rate 0.001
- **Loss:** sparse categorical cross-entropy
- **Regularization:** dropout 0.5 before the dense head
- **Augmentation:** horizontal flips, rotation 0.2, zoom 0.2, contrast 0.2, brightness 0.2, Gaussian noise 0.1
#### Speeds, Sizes, Times
[More Information Needed]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
The test data consists of manually retrieved cutouts from Hubble and James Webb observations; the manually manipulated data and the accuracy on each set are included in the repository files. Each image is used with both a log and a linear intensity scaling.
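For illustration, the two scalings could be produced along these lines; this is a hedged sketch, not the repository's actual preprocessing code:
```python
import numpy as np

def linear_scale(data):
    # Map raw pixel values linearly onto [0, 255]
    lo, hi = float(data.min()), float(data.max())
    return (data - lo) / (hi - lo) * 255.0

def log_scale(data):
    # Compress the dynamic range with a log stretch before mapping to [0, 255]
    shifted = data - data.min()  # start the range at zero
    return np.log1p(shifted) / np.log1p(shifted.max()) * 255.0
```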
### Results
|                    | precision | recall | f1-score | support |
|--------------------|-----------|--------|----------|---------|
| one_one (Spheroid) | 0.96      | 0.98   | 0.96     | 1637    |
| one_two (Spiral)   | 0.98      | 0.93   | 0.96     | 1740    |
| accuracy           |           |        | 0.96     | 3377    |
| macro avg          | 0.96      | 0.96   | 0.96     | 3377    |
| weighted avg       | 0.96      | 0.96   | 0.96     | 3377    |
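A report in this format comes from scikit-learn's `classification_report`; given arrays of true and predicted labels (`y_true` and `y_pred` are placeholder names), it can be reproduced with:
```python
from sklearn.metrics import classification_report

# y_true and y_pred are placeholder names for the test labels and predictions
print(classification_report(y_true, y_pred, target_names=['one_one', 'one_two']))
```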
## Environmental Impact
- **Hardware Type:** Apple M3 Pro
- **Hours used:** 0.5 (30 minutes)