# -*- coding: utf-8 -*-
"""UpsideDown.ipynb

Automatically generated by Colaboratory.

Original file is located at
    https://colab.research.google.com/drive/12aS57Wk69CKJyuVVNvXjyAU6mPo1dP4_

# Fatima Fellowship Quick Coding Challenge (Pick 1)

Thank you for applying to the Fatima Fellowship. To help us select the Fellows and assess your ability to do machine learning research, we are asking that you complete a short coding challenge. Please pick **1 of these 5** coding challenges, whichever is most aligned with your interests. 

**Due date: 1 week**

**How to submit**: Please make a copy of this colab notebook, add your code and results, and submit your colab notebook to the submission link below. If you have never used a colab notebook, [check out this video](https://www.youtube.com/watch?v=i-HnvsehuSw).

**Submission link**: https://airtable.com/shrXy3QKSsO2yALd3

# 1. Deep Learning for Vision

**Upside down detector**: Train a model to detect if images are upside down

* Pick a dataset of natural images (we suggest looking at datasets on the [Hugging Face Hub](https://huggingface.co/datasets?task_categories=task_categories:image-classification&sort=downloads))
* Synthetically turn some of the images upside down. Create a training and test set.
* Build a neural network (using TensorFlow, PyTorch, or any framework you like)
* Train it to classify image orientation until a reasonable accuracy is reached
* [Upload the model to the Hugging Face Hub](https://huggingface.co/docs/hub/adding-a-model), and add a link to your model below.
* Look at some of the images that were classified incorrectly. Please explain what you might do to improve your model's performance on these images in the future (you do not need to implement these suggestions)

**Submission instructions**: Please write your code below and include some examples of images that were classified

# Upside down detector: Train a model to detect if images are upside down

* **Pick a dataset of natural images.** We chose a fine-grained dataset of face images.

My model is an upside-down image detector: it classifies whether an input face image has been flipped, reaching about 99% accuracy on the held-out set. I trained it on ~300 images of unmasked faces and tested it on ~60 images from the same distribution.

Dataset citation:

    author = {Prasoon Kottarathil},
    title = {Face Mask Lite Dataset},
    year = {2020},
    publisher = {Kaggle},
    journal = {Kaggle Data},
    howpublished = {https://www.kaggle.com/prasoonkottarathil/face-mask-lite-dataset}

All images in this dataset were generated with StyleGAN-2; there are 10,000 HD images in each folder (with mask and without mask).

* **Synthetically turn some of the images upside down.** A training and test set are created below.
* **Build a neural network** (using TensorFlow, PyTorch, or any framework you like).
* **Train it to classify image orientation** until a reasonable accuracy is reached.
* **[Upload the model to the Hugging Face Hub](https://huggingface.co/docs/hub/adding-a-model)**, and add a link to your model below.

Link to the model: https://huggingface.co/DIANKHA/upside-down/tree/main
"""

import os
def makedir(path):
    if not os.path.exists(path):
        os.makedirs(path)

makedir('./without_mask')

from google.colab import drive
drive.mount('/content/drive')
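# The rest of the notebook expects the "without mask" PNGs from the Face Mask Lite
# Dataset to be present in ./without_mask, but no staging code appears here. Below is
# a minimal sketch of copying them from a mounted Drive folder; the Drive path is an
# assumption (a hypothetical location), adjust it to wherever the Kaggle download lives.
import shutil

drive_src = '/content/drive/MyDrive/face-mask-lite-dataset/without_mask'  # hypothetical path
if os.path.isdir(drive_src):
    # A few hundred images are enough for this experiment.
    for fname in os.listdir(drive_src)[:400]:
        shutil.copy(os.path.join(drive_src, fname), './without_mask')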

# Commented out IPython magic to ensure Python compatibility.
### WRITE YOUR CODE TO TRAIN THE MODEL HERE

#Imports
import os
import sys
from glob import glob
import torch
import torchvision

import numpy    as np
import datetime as dt


import torch.nn as nn
import torch.nn.functional as F
import matplotlib.pyplot   as plt
import cv2

from PIL               import Image
from torch.utils.data  import Dataset
from torch.autograd    import Variable
from torch.optim       import lr_scheduler

from torch.utils.data  import Dataset, DataLoader
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision       import transforms, datasets, models
from os                import listdir, makedirs, getcwd, remove
from os.path           import isfile, join, abspath, exists, isdir, expanduser
from torchvision.transforms.functional import vflip
from PIL import Image
from torch.utils.data import random_split
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
np.random.seed(42)


# %matplotlib inline


data_path = "./without_mask"
images = glob("./without_mask/*.png", recursive=True)

import shutil
!rm -rf ./train_data
!rm -rf ./test_data
makedir('./train_data')
makedir('./test_data')
split = int(np.floor(len(images) * .8))
np.random.seed(22)
N = len(images)
K = split 

dst_dir = 'train_data'
for pngfile in images[:split]:
    shutil.copy(pngfile, dst_dir)
dst_dir = 'test_data'
for pngfile in images[split:]:
    shutil.copy(pngfile, dst_dir)

# Transformations for both the training and testing data
mean=[0.5894, 0.4595, 0.3966]
std=[0.2404, 0.2020, 0.1959]

# Do data transforms here, Try many others

train_transforms = transforms.Compose([transforms.Resize(500),
                                       transforms.ToTensor(),
                                       transforms.Normalize(mean,std)])

test_transforms = transforms.Compose([ transforms.Resize(500),
                                       transforms.ToTensor(),
                                       transforms.Normalize(mean,std)])
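# The per-channel mean/std above are hard-coded. As a sketch (an assumption, not
# necessarily how the author derived them), comparable statistics can be estimated
# directly from the training images before normalization is applied:
stats_tf = transforms.Compose([transforms.Resize(500), transforms.ToTensor()])
channel_sum = torch.zeros(3)
channel_sq_sum = torch.zeros(3)
n_pixels = 0
for pth in glob('./train_data/*.png'):
    t = stats_tf(Image.open(pth).convert('RGB'))  # (3, H, W), values in [0, 1]
    channel_sum += t.sum(dim=(1, 2))
    channel_sq_sum += (t ** 2).sum(dim=(1, 2))
    n_pixels += t.shape[1] * t.shape[2]
if n_pixels:
    est_mean = channel_sum / n_pixels
    est_std = torch.sqrt(channel_sq_sum / n_pixels - est_mean ** 2)
    print('estimated mean:', est_mean, 'estimated std:', est_std)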

class FlipDataset(Dataset):
    """Dataset that vertically flips half of the images in `path` (in place) and
    labels them 1 ('flip'); the untouched half is labelled 0 ('no_flip')."""

    def __init__(self, path, transform=None):
        self.file_list = glob(path + "*.png", recursive=True)
        self.transform = transform

        files = []
        split = int(np.floor(len(self.file_list) * .5))

        N = len(self.file_list)
        K = split  # K zeros (upright), N-K ones (flipped)
        indices = np.array([0] * K + [1] * (N - K))
        np.random.shuffle(indices)

        final_pth = path
        for idx, img_pth in enumerate(self.file_list):
            img = Image.open(img_pth)
            if indices[idx]:
                # Turn this image upside down before saving it back.
                img = vflip(img)
            img_name = img_pth.split('/')[-1]
            img.save(final_pth + img_name, format="png")
            files.append([indices[idx], final_pth + img_name])
        self.file_list = files


    def __len__(self):
        return len(self.file_list)

    def __getitem__(self, idx):
        fileName = self.file_list[idx][1]
        classCategory = self.file_list[idx][0]
        im = Image.open(fileName)
        if self.transform:
            im = self.transform(im)
            
        return im.view(3, 500, 500), classCategory

train_data = FlipDataset('./train_data/', transform=train_transforms)

test_data = FlipDataset('./test_data/', transform=test_transforms)

train_loader = torch.utils.data.DataLoader(train_data, batch_size=32)

test_loader = torch.utils.data.DataLoader(test_data, batch_size=32)


# Device configuration
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

# Define Models

class Classifier(nn.Module):
    def __init__(self, num_classes):
        super(Classifier, self).__init__()
        # Block 1
        self.conv1 = nn.Conv2d(in_channels = 3, out_channels = 32, kernel_size = 5, stride = 2, padding = 2)
        self.relu1 = nn.ReLU()
        self.maxpool1 = nn.MaxPool2d(kernel_size = 2)

        #Block 2
        self.conv2 = nn.Conv2d(in_channels = 32, out_channels = 64, kernel_size = 5, stride = 2, padding = 2)
        self.relu2 = nn.ReLU()
        self.maxpool2 = nn.MaxPool2d(kernel_size=2)

        #Block 3
        self.conv3 = nn.Conv2d(in_channels = 64, out_channels = 64, kernel_size = 3, stride = 2, padding = 2)
        self.relu3 = nn.ReLU()
        self.maxpool3 = nn.MaxPool2d(kernel_size=2)

        #Block 4
        self.conv4 = nn.Conv2d(in_channels = 64, out_channels = 64, kernel_size = 3, stride = 2, padding = 2)
        self.relu4 = nn.ReLU()
        self.maxpool4 = nn.MaxPool2d(kernel_size=2)

        #Block 5
        self.conv5 = nn.Conv2d(in_channels = 64, out_channels = 32, kernel_size = 3, stride = 2, padding = 2)
        self.relu5 = nn.ReLU()
        self.maxpool5 = nn.MaxPool2d(kernel_size=2)

        # last fully-connected layer
        self.fc = nn.Linear(32, num_classes)
        self.dropout = nn.Dropout(.1)


    def forward(self, input):

        x = self.maxpool1(self.relu1(self.conv1(input)))
        x = self.dropout(x)
        x = self.maxpool2(self.relu2(self.conv2(x)))
        x = self.dropout(x)
        x = self.maxpool3(self.relu3(self.conv3(x)))
        x = self.dropout(x)
        x = self.maxpool4(self.relu4(self.conv4(x)))
        x = self.dropout(x)
        x = self.maxpool5(self.relu5(self.conv5(x)))
        x = self.dropout(x)

        x = x.view(x.size(0), -1)
        x = self.fc(x)
        return x
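
# Quick sanity check (a sketch added for clarity, not part of the original notebook):
# with a 3x500x500 input, the five conv(stride 2) + max-pool(2) blocks shrink the
# spatial size down to 1x1, so the flattened feature vector has 32 elements, which
# is why the classification head is nn.Linear(32, num_classes).
with torch.no_grad():
    _m = Classifier(num_classes=1)
    _x = torch.zeros(1, 3, 500, 500)
    for _conv, _relu, _pool in [(_m.conv1, _m.relu1, _m.maxpool1),
                                (_m.conv2, _m.relu2, _m.maxpool2),
                                (_m.conv3, _m.relu3, _m.maxpool3),
                                (_m.conv4, _m.relu4, _m.maxpool4),
                                (_m.conv5, _m.relu5, _m.maxpool5)]:
        _x = _pool(_relu(_conv(_x)))
    print('flattened features:', _x.view(1, -1).shape[1])  # expected: 32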

def train(model, criterion, data_loader, optimizer, num_epochs):
    """Simple training loop for a PyTorch model."""

    # Make sure model is in training mode.
    model.train()

    # Move model to the device (CPU or GPU).
    model.to(device)

    # Exponential moving average of the loss.
    ema_loss = None

    print('----- Training Loop -----')
    # Loop over epochs.
    for epoch in range(num_epochs):

        # Loop over data.
        for batch_idx, (features, target) in enumerate(data_loader):

            # Forward pass.
            output = model(features.to(device))
            output = output.squeeze()
            target = target.float()
            loss = criterion(output.to(device), target.to(device))

            # Backward pass.
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

            # NOTE: It is important to call .item() on the loss before accumulating.
            if ema_loss is None:
                ema_loss = loss.item()
            else:
                ema_loss += (loss.item() - ema_loss) * 0.01

        # Print out progress at the end of each epoch.
        print('Epoch: {} \tLoss: {:.6f}'.format(epoch, ema_loss))

def test(model, data_loader):
    """Measures the accuracy of a model on a data set."""
    # Make sure the model is in evaluation mode and on the right device.
    model.eval()
    model.to(device)
    correct = 0
    print('----- Model Evaluation -----')
    # We do not need to maintain intermediate activations while testing.
    with torch.no_grad():

        # Loop over test data.
        for features, target in data_loader:

            # Forward pass.
            output = model(features.to(device))

            # The model outputs logits (it is trained with BCEWithLogitsLoss),
            # so predict class 1 when the sigmoid probability exceeds 0.5.
            pred = (torch.sigmoid(output) > 0.5).float()
            # Count number of correct predictions.
            correct += pred.cpu().eq(target.view_as(pred).cpu()).sum().item()

    # Print test accuracy.
    percent = 100. * correct / len(data_loader.dataset)
    print(f'Test accuracy: {correct} / {len(data_loader.dataset)} ({percent:.0f}%)')
    torch.save(model.state_dict(), 'model.ckpt')
    return percent

num_epochs = 10
model = Classifier(1)
criterion = torch.nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

train(model, criterion, train_loader, optimizer, num_epochs=num_epochs)

test(model, test_loader)

test(model, train_loader)

"""**Write up**: 
* Link to the model on Hugging Face Hub: 
* Include some examples of misclassified images. Please explain what you might do to improve your model's performance on these images in the future (you do not need to impelement these suggestions)
"""

model1 = Classifier(1)
model1.load_state_dict(torch.load('./model.ckpt'))
model1.eval()  # make sure the model is in evaluation mode
classes = ['no_flip','flip']

test(model1, train_loader)
test(model1, test_loader)

import random
from IPython import display

def random_predictions(data):
    """Show predictions for three randomly chosen examples from `data`."""
    for i in random.sample(range(len(data)), 3):
        tensor, target = data[i]

        # Move the example to the same device as the model.
        tensor = tensor.to(device)

        # The model outputs a logit, so predict the 'flip' class when it is positive.
        logit = model1(tensor.unsqueeze(0)).item()
        prediction = 1 if logit > 0 else 0

        print(
            "Img %d. Expected class %s, predicted class %s."
            % (
                i,
                classes[target],
                classes[prediction],
            )
        )
        img = Image.open(data.file_list[i][1])
        img = transforms.Resize(224)(img)
        display.display(img)

random_predictions(test_data)

def false_predictions(data):
    """Show every example in `data` that the model misclassifies."""
    for i in range(len(data)):
        tensor, target = data[i]

        # Move the example to the same device as the model.
        tensor = tensor.to(device)

        # The model outputs a logit, so predict the 'flip' class when it is positive.
        logit = model1(tensor.unsqueeze(0)).item()
        prediction = 1 if logit > 0 else 0

        if prediction != target:
            print(
                "Img id=%d. Expected class %s, but predicted class %s."
                % (
                    i,
                    classes[target],
                    classes[prediction],
                )
            )
            img = Image.open(data.file_list[i][1])
            img = transforms.Resize(224)(img)
            display.display(img)

"""* Include some examples of poorly ranked images. Please explain what you could do to improve the performance of your model on these images.
We do not have enough information to comment on this misclassified image in the dataset.
This model was trained on a small dataset of ~300 images and tested on ~60 HD images generated with a GAN.
This dataset has the same distribution and is assumed to be not representive of the wide diversity of images in the real world.
We hope that our model can be useful in the context of creating a profile on a dating site or social network.
Our model could be improved by increasing the size and diversity of the training dataset. 
"""

false_predictions(train_data)

addition_data = FlipDataset('./subset_without_mask/', transform=train_transforms)

false_predictions(addition_data)