Thala007Dhoni committed on
Commit
f373797
1 Parent(s): 04498b4

Upload 27 files

.gitattributes CHANGED
@@ -33,3 +33,12 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
33
  *.zip filter=lfs diff=lfs merge=lfs -text
34
  *.zst filter=lfs diff=lfs merge=lfs -text
35
  *tfevents* filter=lfs diff=lfs merge=lfs -text
36
+ examples/fake_frame_10.png filter=lfs diff=lfs merge=lfs -text
37
+ examples/fake_frame_6.png filter=lfs diff=lfs merge=lfs -text
38
+ examples/fake_frame_7.png filter=lfs diff=lfs merge=lfs -text
39
+ examples/real_frame_1.png filter=lfs diff=lfs merge=lfs -text
40
+ examples/real_frame_13.png filter=lfs diff=lfs merge=lfs -text
41
+ examples/real_frame_19.png filter=lfs diff=lfs merge=lfs -text
42
+ examples/real_frame_20.png filter=lfs diff=lfs merge=lfs -text
43
+ examples/real_frame_3.png filter=lfs diff=lfs merge=lfs -text
44
+ examples/real_frame_8.png filter=lfs diff=lfs merge=lfs -text
.gitignore ADDED
@@ -0,0 +1,132 @@
1
+ # Byte-compiled / optimized / DLL files
2
+ __pycache__/
3
+ *.py[cod]
4
+ *$py.class
5
+
6
+ # C extensions
7
+ *.so
8
+
9
+
10
+ .git
11
+
12
+ # Distribution / packaging
13
+ .Python
14
+ build/
15
+ develop-eggs/
16
+ dist/
17
+ downloads/
18
+ eggs/
19
+ .eggs/
20
+ lib/
21
+ lib64/
22
+ parts/
23
+ sdist/
24
+ var/
25
+ wheels/
26
+ pip-wheel-metadata/
27
+ share/python-wheels/
28
+ *.egg-info/
29
+ .installed.cfg
30
+ *.egg
31
+ MANIFEST
32
+
33
+ # PyInstaller
34
+ # Usually these files are written by a python script from a template
35
+ # before PyInstaller builds the exe, so as to inject date/other infos into it.
36
+ *.manifest
37
+ *.spec
38
+
39
+ # Installer logs
40
+ pip-log.txt
41
+ pip-delete-this-directory.txt
42
+
43
+ # Unit test / coverage reports
44
+ htmlcov/
45
+ .tox/
46
+ .nox/
47
+ .coverage
48
+ .coverage.*
49
+ .cache
50
+ nosetests.xml
51
+ coverage.xml
52
+ *.cover
53
+ *.py,cover
54
+ .hypothesis/
55
+ .pytest_cache/
56
+
57
+ # Translations
58
+ *.mo
59
+ *.pot
60
+
61
+ # Django stuff:
62
+ *.log
63
+ local_settings.py
64
+ db.sqlite3
65
+ db.sqlite3-journal
66
+
67
+ # Flask stuff:
68
+ instance/
69
+ .webassets-cache
70
+
71
+ # Scrapy stuff:
72
+ .scrapy
73
+
74
+ # Sphinx documentation
75
+ docs/_build/
76
+
77
+ # PyBuilder
78
+ target/
79
+
80
+ # Jupyter Notebook
81
+ .ipynb_checkpoints
82
+
83
+ # IPython
84
+ profile_default/
85
+ ipython_config.py
86
+
87
+ # pyenv
88
+ .python-version
89
+
90
+ # pipenv
91
+ # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
92
+ # However, in case of collaboration, if having platform-specific dependencies or dependencies
93
+ # having no cross-platform support, pipenv may install dependencies that don't work, or not
94
+ # install all needed dependencies.
95
+ #Pipfile.lock
96
+
97
+ # PEP 582; used by e.g. github.com/David-OConnor/pyflow
98
+ __pypackages__/
99
+
100
+ # Celery stuff
101
+ celerybeat-schedule
102
+ celerybeat.pid
103
+
104
+ # SageMath parsed files
105
+ *.sage.py
106
+
107
+ # Environments
108
+ .env
109
+ .venv
110
+ env/
111
+ venv/
112
+ ENV/
113
+ env.bak/
114
+ venv.bak/
115
+
116
+ # Spyder project settings
117
+ .spyderproject
118
+ .spyproject
119
+
120
+ # Rope project settings
121
+ .ropeproject
122
+
123
+ # mkdocs documentation
124
+ /site
125
+
126
+ # mypy
127
+ .mypy_cache/
128
+ .dmypy.json
129
+ dmypy.json
130
+
131
+ # Pyre type checker
132
+ .pyre/
Deepfake_detection.ipynb ADDED
@@ -0,0 +1,939 @@
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "markdown",
5
+ "id": "a2220df6",
6
+ "metadata": {},
7
+ "source": [
8
+ "# Import Libraries"
9
+ ]
10
+ },
11
+ {
12
+ "cell_type": "code",
13
+ "execution_count": 1,
14
+ "id": "7249bea4",
15
+ "metadata": {},
16
+ "outputs": [],
17
+ "source": [
18
+ "import gradio as gr\n",
19
+ "import torch\n",
20
+ "import torch.nn.functional as F\n",
21
+ "from facenet_pytorch import MTCNN, InceptionResnetV1\n",
22
+ "import numpy as np\n",
23
+ "from PIL import Image\n",
24
+ "import cv2\n",
25
+ "from pytorch_grad_cam import GradCAM\n",
26
+ "from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget\n",
27
+ "from pytorch_grad_cam.utils.image import show_cam_on_image\n",
28
+ "import warnings\n",
29
+ "warnings.filterwarnings(\"ignore\")"
30
+ ]
31
+ },
32
+ {
33
+ "cell_type": "markdown",
34
+ "id": "d25e1c5d",
35
+ "metadata": {},
36
+ "source": [
37
+ "# Download and Load Model"
38
+ ]
39
+ },
40
+ {
41
+ "cell_type": "code",
42
+ "execution_count": 2,
43
+ "id": "237fbf44",
44
+ "metadata": {},
45
+ "outputs": [],
46
+ "source": [
47
+ "DEVICE = 'cuda:0' if torch.cuda.is_available() else 'cpu'\n",
48
+ "\n",
49
+ "mtcnn = MTCNN(\n",
50
+ " select_largest=False,\n",
51
+ " post_process=False,\n",
52
+ " device=DEVICE\n",
53
+ ").to(DEVICE).eval()"
54
+ ]
55
+ },
56
+ {
57
+ "cell_type": "code",
58
+ "execution_count": 3,
59
+ "id": "f3ef2b4f",
60
+ "metadata": {},
61
+ "outputs": [
62
+ {
63
+ "data": {
64
+ "text/plain": [
65
+ "InceptionResnetV1(\n",
66
+ " (conv2d_1a): BasicConv2d(\n",
67
+ " (conv): Conv2d(3, 32, kernel_size=(3, 3), stride=(2, 2), bias=False)\n",
68
+ " (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
69
+ " (relu): ReLU()\n",
70
+ " )\n",
71
+ " (conv2d_2a): BasicConv2d(\n",
72
+ " (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), bias=False)\n",
73
+ " (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
74
+ " (relu): ReLU()\n",
75
+ " )\n",
76
+ " (conv2d_2b): BasicConv2d(\n",
77
+ " (conv): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
78
+ " (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
79
+ " (relu): ReLU()\n",
80
+ " )\n",
81
+ " (maxpool_3a): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)\n",
82
+ " (conv2d_3b): BasicConv2d(\n",
83
+ " (conv): Conv2d(64, 80, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
84
+ " (bn): BatchNorm2d(80, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
85
+ " (relu): ReLU()\n",
86
+ " )\n",
87
+ " (conv2d_4a): BasicConv2d(\n",
88
+ " (conv): Conv2d(80, 192, kernel_size=(3, 3), stride=(1, 1), bias=False)\n",
89
+ " (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
90
+ " (relu): ReLU()\n",
91
+ " )\n",
92
+ " (conv2d_4b): BasicConv2d(\n",
93
+ " (conv): Conv2d(192, 256, kernel_size=(3, 3), stride=(2, 2), bias=False)\n",
94
+ " (bn): BatchNorm2d(256, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
95
+ " (relu): ReLU()\n",
96
+ " )\n",
97
+ " (repeat_1): Sequential(\n",
98
+ " (0): Block35(\n",
99
+ " (branch0): BasicConv2d(\n",
100
+ " (conv): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
101
+ " (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
102
+ " (relu): ReLU()\n",
103
+ " )\n",
104
+ " (branch1): Sequential(\n",
105
+ " (0): BasicConv2d(\n",
106
+ " (conv): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
107
+ " (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
108
+ " (relu): ReLU()\n",
109
+ " )\n",
110
+ " (1): BasicConv2d(\n",
111
+ " (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
112
+ " (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
113
+ " (relu): ReLU()\n",
114
+ " )\n",
115
+ " )\n",
116
+ " (branch2): Sequential(\n",
117
+ " (0): BasicConv2d(\n",
118
+ " (conv): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
119
+ " (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
120
+ " (relu): ReLU()\n",
121
+ " )\n",
122
+ " (1): BasicConv2d(\n",
123
+ " (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
124
+ " (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
125
+ " (relu): ReLU()\n",
126
+ " )\n",
127
+ " (2): BasicConv2d(\n",
128
+ " (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
129
+ " (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
130
+ " (relu): ReLU()\n",
131
+ " )\n",
132
+ " )\n",
133
+ " (conv2d): Conv2d(96, 256, kernel_size=(1, 1), stride=(1, 1))\n",
134
+ " (relu): ReLU()\n",
135
+ " )\n",
136
+ " (1): Block35(\n",
137
+ " (branch0): BasicConv2d(\n",
138
+ " (conv): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
139
+ " (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
140
+ " (relu): ReLU()\n",
141
+ " )\n",
142
+ " (branch1): Sequential(\n",
143
+ " (0): BasicConv2d(\n",
144
+ " (conv): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
145
+ " (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
146
+ " (relu): ReLU()\n",
147
+ " )\n",
148
+ " (1): BasicConv2d(\n",
149
+ " (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
150
+ " (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
151
+ " (relu): ReLU()\n",
152
+ " )\n",
153
+ " )\n",
154
+ " (branch2): Sequential(\n",
155
+ " (0): BasicConv2d(\n",
156
+ " (conv): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
157
+ " (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
158
+ " (relu): ReLU()\n",
159
+ " )\n",
160
+ " (1): BasicConv2d(\n",
161
+ " (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
162
+ " (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
163
+ " (relu): ReLU()\n",
164
+ " )\n",
165
+ " (2): BasicConv2d(\n",
166
+ " (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
167
+ " (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
168
+ " (relu): ReLU()\n",
169
+ " )\n",
170
+ " )\n",
171
+ " (conv2d): Conv2d(96, 256, kernel_size=(1, 1), stride=(1, 1))\n",
172
+ " (relu): ReLU()\n",
173
+ " )\n",
174
+ " (2): Block35(\n",
175
+ " (branch0): BasicConv2d(\n",
176
+ " (conv): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
177
+ " (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
178
+ " (relu): ReLU()\n",
179
+ " )\n",
180
+ " (branch1): Sequential(\n",
181
+ " (0): BasicConv2d(\n",
182
+ " (conv): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
183
+ " (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
184
+ " (relu): ReLU()\n",
185
+ " )\n",
186
+ " (1): BasicConv2d(\n",
187
+ " (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
188
+ " (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
189
+ " (relu): ReLU()\n",
190
+ " )\n",
191
+ " )\n",
192
+ " (branch2): Sequential(\n",
193
+ " (0): BasicConv2d(\n",
194
+ " (conv): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
195
+ " (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
196
+ " (relu): ReLU()\n",
197
+ " )\n",
198
+ " (1): BasicConv2d(\n",
199
+ " (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
200
+ " (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
201
+ " (relu): ReLU()\n",
202
+ " )\n",
203
+ " (2): BasicConv2d(\n",
204
+ " (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
205
+ " (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
206
+ " (relu): ReLU()\n",
207
+ " )\n",
208
+ " )\n",
209
+ " (conv2d): Conv2d(96, 256, kernel_size=(1, 1), stride=(1, 1))\n",
210
+ " (relu): ReLU()\n",
211
+ " )\n",
212
+ " (3): Block35(\n",
213
+ " (branch0): BasicConv2d(\n",
214
+ " (conv): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
215
+ " (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
216
+ " (relu): ReLU()\n",
217
+ " )\n",
218
+ " (branch1): Sequential(\n",
219
+ " (0): BasicConv2d(\n",
220
+ " (conv): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
221
+ " (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
222
+ " (relu): ReLU()\n",
223
+ " )\n",
224
+ " (1): BasicConv2d(\n",
225
+ " (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
226
+ " (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
227
+ " (relu): ReLU()\n",
228
+ " )\n",
229
+ " )\n",
230
+ " (branch2): Sequential(\n",
231
+ " (0): BasicConv2d(\n",
232
+ " (conv): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
233
+ " (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
234
+ " (relu): ReLU()\n",
235
+ " )\n",
236
+ " (1): BasicConv2d(\n",
237
+ " (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
238
+ " (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
239
+ " (relu): ReLU()\n",
240
+ " )\n",
241
+ " (2): BasicConv2d(\n",
242
+ " (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
243
+ " (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
244
+ " (relu): ReLU()\n",
245
+ " )\n",
246
+ " )\n",
247
+ " (conv2d): Conv2d(96, 256, kernel_size=(1, 1), stride=(1, 1))\n",
248
+ " (relu): ReLU()\n",
249
+ " )\n",
250
+ " (4): Block35(\n",
251
+ " (branch0): BasicConv2d(\n",
252
+ " (conv): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
253
+ " (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
254
+ " (relu): ReLU()\n",
255
+ " )\n",
256
+ " (branch1): Sequential(\n",
257
+ " (0): BasicConv2d(\n",
258
+ " (conv): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
259
+ " (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
260
+ " (relu): ReLU()\n",
261
+ " )\n",
262
+ " (1): BasicConv2d(\n",
263
+ " (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
264
+ " (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
265
+ " (relu): ReLU()\n",
266
+ " )\n",
267
+ " )\n",
268
+ " (branch2): Sequential(\n",
269
+ " (0): BasicConv2d(\n",
270
+ " (conv): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
271
+ " (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
272
+ " (relu): ReLU()\n",
273
+ " )\n",
274
+ " (1): BasicConv2d(\n",
275
+ " (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
276
+ " (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
277
+ " (relu): ReLU()\n",
278
+ " )\n",
279
+ " (2): BasicConv2d(\n",
280
+ " (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
281
+ " (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
282
+ " (relu): ReLU()\n",
283
+ " )\n",
284
+ " )\n",
285
+ " (conv2d): Conv2d(96, 256, kernel_size=(1, 1), stride=(1, 1))\n",
286
+ " (relu): ReLU()\n",
287
+ " )\n",
288
+ " )\n",
289
+ " (mixed_6a): Mixed_6a(\n",
290
+ " (branch0): BasicConv2d(\n",
291
+ " (conv): Conv2d(256, 384, kernel_size=(3, 3), stride=(2, 2), bias=False)\n",
292
+ " (bn): BatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
293
+ " (relu): ReLU()\n",
294
+ " )\n",
295
+ " (branch1): Sequential(\n",
296
+ " (0): BasicConv2d(\n",
297
+ " (conv): Conv2d(256, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
298
+ " (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
299
+ " (relu): ReLU()\n",
300
+ " )\n",
301
+ " (1): BasicConv2d(\n",
302
+ " (conv): Conv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
303
+ " (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
304
+ " (relu): ReLU()\n",
305
+ " )\n",
306
+ " (2): BasicConv2d(\n",
307
+ " (conv): Conv2d(192, 256, kernel_size=(3, 3), stride=(2, 2), bias=False)\n",
308
+ " (bn): BatchNorm2d(256, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
309
+ " (relu): ReLU()\n",
310
+ " )\n",
311
+ " )\n",
312
+ " (branch2): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)\n",
313
+ " )\n",
314
+ " (repeat_2): Sequential(\n",
315
+ " (0): Block17(\n",
316
+ " (branch0): BasicConv2d(\n",
317
+ " (conv): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
318
+ " (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
319
+ " (relu): ReLU()\n",
320
+ " )\n",
321
+ " (branch1): Sequential(\n",
322
+ " (0): BasicConv2d(\n",
323
+ " (conv): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
324
+ " (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
325
+ " (relu): ReLU()\n",
326
+ " )\n",
327
+ " (1): BasicConv2d(\n",
328
+ " (conv): Conv2d(128, 128, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)\n",
329
+ " (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
330
+ " (relu): ReLU()\n",
331
+ " )\n",
332
+ " (2): BasicConv2d(\n",
333
+ " (conv): Conv2d(128, 128, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)\n",
334
+ " (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
335
+ " (relu): ReLU()\n",
336
+ " )\n",
337
+ " )\n",
338
+ " (conv2d): Conv2d(256, 896, kernel_size=(1, 1), stride=(1, 1))\n",
339
+ " (relu): ReLU()\n",
340
+ " )\n",
341
+ " (1): Block17(\n",
342
+ " (branch0): BasicConv2d(\n",
343
+ " (conv): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
344
+ " (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
345
+ " (relu): ReLU()\n",
346
+ " )\n",
347
+ " (branch1): Sequential(\n",
348
+ " (0): BasicConv2d(\n",
349
+ " (conv): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
350
+ " (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
351
+ " (relu): ReLU()\n",
352
+ " )\n",
353
+ " (1): BasicConv2d(\n",
354
+ " (conv): Conv2d(128, 128, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)\n",
355
+ " (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
356
+ " (relu): ReLU()\n",
357
+ " )\n",
358
+ " (2): BasicConv2d(\n",
359
+ " (conv): Conv2d(128, 128, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)\n",
360
+ " (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
361
+ " (relu): ReLU()\n",
362
+ " )\n",
363
+ " )\n",
364
+ " (conv2d): Conv2d(256, 896, kernel_size=(1, 1), stride=(1, 1))\n",
365
+ " (relu): ReLU()\n",
366
+ " )\n",
367
+ " (2): Block17(\n",
368
+ " (branch0): BasicConv2d(\n",
369
+ " (conv): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
370
+ " (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
371
+ " (relu): ReLU()\n",
372
+ " )\n",
373
+ " (branch1): Sequential(\n",
374
+ " (0): BasicConv2d(\n",
375
+ " (conv): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
376
+ " (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
377
+ " (relu): ReLU()\n",
378
+ " )\n",
379
+ " (1): BasicConv2d(\n",
380
+ " (conv): Conv2d(128, 128, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)\n",
381
+ " (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
382
+ " (relu): ReLU()\n",
383
+ " )\n",
384
+ " (2): BasicConv2d(\n",
385
+ " (conv): Conv2d(128, 128, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)\n",
386
+ " (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
387
+ " (relu): ReLU()\n",
388
+ " )\n",
389
+ " )\n",
390
+ " (conv2d): Conv2d(256, 896, kernel_size=(1, 1), stride=(1, 1))\n",
391
+ " (relu): ReLU()\n",
392
+ " )\n",
393
+ " (3): Block17(\n",
394
+ " (branch0): BasicConv2d(\n",
395
+ " (conv): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
396
+ " (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
397
+ " (relu): ReLU()\n",
398
+ " )\n",
399
+ " (branch1): Sequential(\n",
400
+ " (0): BasicConv2d(\n",
401
+ " (conv): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
402
+ " (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
403
+ " (relu): ReLU()\n",
404
+ " )\n",
405
+ " (1): BasicConv2d(\n",
406
+ " (conv): Conv2d(128, 128, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)\n",
407
+ " (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
408
+ " (relu): ReLU()\n",
409
+ " )\n",
410
+ " (2): BasicConv2d(\n",
411
+ " (conv): Conv2d(128, 128, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)\n",
412
+ " (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
413
+ " (relu): ReLU()\n",
414
+ " )\n",
415
+ " )\n",
416
+ " (conv2d): Conv2d(256, 896, kernel_size=(1, 1), stride=(1, 1))\n",
417
+ " (relu): ReLU()\n",
418
+ " )\n",
419
+ " (4): Block17(\n",
420
+ " (branch0): BasicConv2d(\n",
421
+ " (conv): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
422
+ " (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
423
+ " (relu): ReLU()\n",
424
+ " )\n",
425
+ " (branch1): Sequential(\n",
426
+ " (0): BasicConv2d(\n",
427
+ " (conv): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
428
+ " (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
429
+ " (relu): ReLU()\n",
430
+ " )\n",
431
+ " (1): BasicConv2d(\n",
432
+ " (conv): Conv2d(128, 128, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)\n",
433
+ " (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
434
+ " (relu): ReLU()\n",
435
+ " )\n",
436
+ " (2): BasicConv2d(\n",
437
+ " (conv): Conv2d(128, 128, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)\n",
438
+ " (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
439
+ " (relu): ReLU()\n",
440
+ " )\n",
441
+ " )\n",
442
+ " (conv2d): Conv2d(256, 896, kernel_size=(1, 1), stride=(1, 1))\n",
443
+ " (relu): ReLU()\n",
444
+ " )\n",
445
+ " (5): Block17(\n",
446
+ " (branch0): BasicConv2d(\n",
447
+ " (conv): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
448
+ " (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
449
+ " (relu): ReLU()\n",
450
+ " )\n",
451
+ " (branch1): Sequential(\n",
452
+ " (0): BasicConv2d(\n",
453
+ " (conv): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
454
+ " (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
455
+ " (relu): ReLU()\n",
456
+ " )\n",
457
+ " (1): BasicConv2d(\n",
458
+ " (conv): Conv2d(128, 128, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)\n",
459
+ " (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
460
+ " (relu): ReLU()\n",
461
+ " )\n",
462
+ " (2): BasicConv2d(\n",
463
+ " (conv): Conv2d(128, 128, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)\n",
464
+ " (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
465
+ " (relu): ReLU()\n",
466
+ " )\n",
467
+ " )\n",
468
+ " (conv2d): Conv2d(256, 896, kernel_size=(1, 1), stride=(1, 1))\n",
469
+ " (relu): ReLU()\n",
470
+ " )\n",
471
+ " (6): Block17(\n",
472
+ " (branch0): BasicConv2d(\n",
473
+ " (conv): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
474
+ " (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
475
+ " (relu): ReLU()\n",
476
+ " )\n",
477
+ " (branch1): Sequential(\n",
478
+ " (0): BasicConv2d(\n",
479
+ " (conv): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
480
+ " (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
481
+ " (relu): ReLU()\n",
482
+ " )\n",
483
+ " (1): BasicConv2d(\n",
484
+ " (conv): Conv2d(128, 128, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)\n",
485
+ " (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
486
+ " (relu): ReLU()\n",
487
+ " )\n",
488
+ " (2): BasicConv2d(\n",
489
+ " (conv): Conv2d(128, 128, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)\n",
490
+ " (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
491
+ " (relu): ReLU()\n",
492
+ " )\n",
493
+ " )\n",
494
+ " (conv2d): Conv2d(256, 896, kernel_size=(1, 1), stride=(1, 1))\n",
495
+ " (relu): ReLU()\n",
496
+ " )\n",
497
+ " (7): Block17(\n",
498
+ " (branch0): BasicConv2d(\n",
499
+ " (conv): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
500
+ " (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
501
+ " (relu): ReLU()\n",
502
+ " )\n",
503
+ " (branch1): Sequential(\n",
504
+ " (0): BasicConv2d(\n",
505
+ " (conv): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
506
+ " (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
507
+ " (relu): ReLU()\n",
508
+ " )\n",
509
+ " (1): BasicConv2d(\n",
510
+ " (conv): Conv2d(128, 128, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)\n",
511
+ " (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
512
+ " (relu): ReLU()\n",
513
+ " )\n",
514
+ " (2): BasicConv2d(\n",
515
+ " (conv): Conv2d(128, 128, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)\n",
516
+ " (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
517
+ " (relu): ReLU()\n",
518
+ " )\n",
519
+ " )\n",
520
+ " (conv2d): Conv2d(256, 896, kernel_size=(1, 1), stride=(1, 1))\n",
521
+ " (relu): ReLU()\n",
522
+ " )\n",
523
+ " (8): Block17(\n",
524
+ " (branch0): BasicConv2d(\n",
525
+ " (conv): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
526
+ " (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
527
+ " (relu): ReLU()\n",
528
+ " )\n",
529
+ " (branch1): Sequential(\n",
530
+ " (0): BasicConv2d(\n",
531
+ " (conv): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
532
+ " (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
533
+ " (relu): ReLU()\n",
534
+ " )\n",
535
+ " (1): BasicConv2d(\n",
536
+ " (conv): Conv2d(128, 128, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)\n",
537
+ " (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
538
+ " (relu): ReLU()\n",
539
+ " )\n",
540
+ " (2): BasicConv2d(\n",
541
+ " (conv): Conv2d(128, 128, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)\n",
542
+ " (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
543
+ " (relu): ReLU()\n",
544
+ " )\n",
545
+ " )\n",
546
+ " (conv2d): Conv2d(256, 896, kernel_size=(1, 1), stride=(1, 1))\n",
547
+ " (relu): ReLU()\n",
548
+ " )\n",
549
+ " (9): Block17(\n",
550
+ " (branch0): BasicConv2d(\n",
551
+ " (conv): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
552
+ " (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
553
+ " (relu): ReLU()\n",
554
+ " )\n",
555
+ " (branch1): Sequential(\n",
556
+ " (0): BasicConv2d(\n",
557
+ " (conv): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
558
+ " (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
559
+ " (relu): ReLU()\n",
560
+ " )\n",
561
+ " (1): BasicConv2d(\n",
562
+ " (conv): Conv2d(128, 128, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)\n",
563
+ " (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
564
+ " (relu): ReLU()\n",
565
+ " )\n",
566
+ " (2): BasicConv2d(\n",
567
+ " (conv): Conv2d(128, 128, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)\n",
568
+ " (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
569
+ " (relu): ReLU()\n",
570
+ " )\n",
571
+ " )\n",
572
+ " (conv2d): Conv2d(256, 896, kernel_size=(1, 1), stride=(1, 1))\n",
573
+ " (relu): ReLU()\n",
574
+ " )\n",
575
+ " )\n",
576
+ " (mixed_7a): Mixed_7a(\n",
577
+ " (branch0): Sequential(\n",
578
+ " (0): BasicConv2d(\n",
579
+ " (conv): Conv2d(896, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
580
+ " (bn): BatchNorm2d(256, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
581
+ " (relu): ReLU()\n",
582
+ " )\n",
583
+ " (1): BasicConv2d(\n",
584
+ " (conv): Conv2d(256, 384, kernel_size=(3, 3), stride=(2, 2), bias=False)\n",
585
+ " (bn): BatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
586
+ " (relu): ReLU()\n",
587
+ " )\n",
588
+ " )\n",
589
+ " (branch1): Sequential(\n",
590
+ " (0): BasicConv2d(\n",
591
+ " (conv): Conv2d(896, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
592
+ " (bn): BatchNorm2d(256, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
593
+ " (relu): ReLU()\n",
594
+ " )\n",
595
+ " (1): BasicConv2d(\n",
596
+ " (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), bias=False)\n",
597
+ " (bn): BatchNorm2d(256, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
598
+ " (relu): ReLU()\n",
599
+ " )\n",
600
+ " )\n",
601
+ " (branch2): Sequential(\n",
602
+ " (0): BasicConv2d(\n",
603
+ " (conv): Conv2d(896, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
604
+ " (bn): BatchNorm2d(256, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
605
+ " (relu): ReLU()\n",
606
+ " )\n",
607
+ " (1): BasicConv2d(\n",
608
+ " (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
609
+ " (bn): BatchNorm2d(256, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
610
+ " (relu): ReLU()\n",
611
+ " )\n",
612
+ " (2): BasicConv2d(\n",
613
+ " (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), bias=False)\n",
614
+ " (bn): BatchNorm2d(256, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
615
+ " (relu): ReLU()\n",
616
+ " )\n",
617
+ " )\n",
618
+ " (branch3): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)\n",
619
+ " )\n",
620
+ " (repeat_3): Sequential(\n",
621
+ " (0): Block8(\n",
622
+ " (branch0): BasicConv2d(\n",
623
+ " (conv): Conv2d(1792, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
624
+ " (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
625
+ " (relu): ReLU()\n",
626
+ " )\n",
627
+ " (branch1): Sequential(\n",
628
+ " (0): BasicConv2d(\n",
629
+ " (conv): Conv2d(1792, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
630
+ " (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
631
+ " (relu): ReLU()\n",
632
+ " )\n",
633
+ " (1): BasicConv2d(\n",
634
+ " (conv): Conv2d(192, 192, kernel_size=(1, 3), stride=(1, 1), padding=(0, 1), bias=False)\n",
635
+ " (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
636
+ " (relu): ReLU()\n",
637
+ " )\n",
638
+ " (2): BasicConv2d(\n",
639
+ " (conv): Conv2d(192, 192, kernel_size=(3, 1), stride=(1, 1), padding=(1, 0), bias=False)\n",
640
+ " (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
641
+ " (relu): ReLU()\n",
642
+ " )\n",
643
+ " )\n",
644
+ " (conv2d): Conv2d(384, 1792, kernel_size=(1, 1), stride=(1, 1))\n",
645
+ " (relu): ReLU()\n",
646
+ " )\n",
647
+ " (1): Block8(\n",
648
+ " (branch0): BasicConv2d(\n",
649
+ " (conv): Conv2d(1792, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
650
+ " (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
651
+ " (relu): ReLU()\n",
652
+ " )\n",
653
+ " (branch1): Sequential(\n",
654
+ " (0): BasicConv2d(\n",
655
+ " (conv): Conv2d(1792, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
656
+ " (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
657
+ " (relu): ReLU()\n",
658
+ " )\n",
659
+ " (1): BasicConv2d(\n",
660
+ " (conv): Conv2d(192, 192, kernel_size=(1, 3), stride=(1, 1), padding=(0, 1), bias=False)\n",
661
+ " (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
662
+ " (relu): ReLU()\n",
663
+ " )\n",
664
+ " (2): BasicConv2d(\n",
665
+ " (conv): Conv2d(192, 192, kernel_size=(3, 1), stride=(1, 1), padding=(1, 0), bias=False)\n",
666
+ " (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
667
+ " (relu): ReLU()\n",
668
+ " )\n",
669
+ " )\n",
670
+ " (conv2d): Conv2d(384, 1792, kernel_size=(1, 1), stride=(1, 1))\n",
671
+ " (relu): ReLU()\n",
672
+ " )\n",
673
+ " (2): Block8(\n",
674
+ " (branch0): BasicConv2d(\n",
675
+ " (conv): Conv2d(1792, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
676
+ " (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
677
+ " (relu): ReLU()\n",
678
+ " )\n",
679
+ " (branch1): Sequential(\n",
680
+ " (0): BasicConv2d(\n",
681
+ " (conv): Conv2d(1792, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
682
+ " (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
683
+ " (relu): ReLU()\n",
684
+ " )\n",
685
+ " (1): BasicConv2d(\n",
686
+ " (conv): Conv2d(192, 192, kernel_size=(1, 3), stride=(1, 1), padding=(0, 1), bias=False)\n",
687
+ " (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
688
+ " (relu): ReLU()\n",
689
+ " )\n",
690
+ " (2): BasicConv2d(\n",
691
+ " (conv): Conv2d(192, 192, kernel_size=(3, 1), stride=(1, 1), padding=(1, 0), bias=False)\n",
692
+ " (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
693
+ " (relu): ReLU()\n",
694
+ " )\n",
695
+ " )\n",
696
+ " (conv2d): Conv2d(384, 1792, kernel_size=(1, 1), stride=(1, 1))\n",
697
+ " (relu): ReLU()\n",
698
+ " )\n",
699
+ " (3): Block8(\n",
700
+ " (branch0): BasicConv2d(\n",
701
+ " (conv): Conv2d(1792, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
702
+ " (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
703
+ " (relu): ReLU()\n",
704
+ " )\n",
705
+ " (branch1): Sequential(\n",
706
+ " (0): BasicConv2d(\n",
707
+ " (conv): Conv2d(1792, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
708
+ " (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
709
+ " (relu): ReLU()\n",
710
+ " )\n",
711
+ " (1): BasicConv2d(\n",
712
+ " (conv): Conv2d(192, 192, kernel_size=(1, 3), stride=(1, 1), padding=(0, 1), bias=False)\n",
713
+ " (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
714
+ " (relu): ReLU()\n",
715
+ " )\n",
716
+ " (2): BasicConv2d(\n",
717
+ " (conv): Conv2d(192, 192, kernel_size=(3, 1), stride=(1, 1), padding=(1, 0), bias=False)\n",
718
+ " (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
719
+ " (relu): ReLU()\n",
720
+ " )\n",
721
+ " )\n",
722
+ " (conv2d): Conv2d(384, 1792, kernel_size=(1, 1), stride=(1, 1))\n",
723
+ " (relu): ReLU()\n",
724
+ " )\n",
725
+ " (4): Block8(\n",
726
+ " (branch0): BasicConv2d(\n",
727
+ " (conv): Conv2d(1792, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
728
+ " (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
729
+ " (relu): ReLU()\n",
730
+ " )\n",
731
+ " (branch1): Sequential(\n",
732
+ " (0): BasicConv2d(\n",
733
+ " (conv): Conv2d(1792, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
734
+ " (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
735
+ " (relu): ReLU()\n",
736
+ " )\n",
737
+ " (1): BasicConv2d(\n",
738
+ " (conv): Conv2d(192, 192, kernel_size=(1, 3), stride=(1, 1), padding=(0, 1), bias=False)\n",
739
+ " (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
740
+ " (relu): ReLU()\n",
741
+ " )\n",
742
+ " (2): BasicConv2d(\n",
743
+ " (conv): Conv2d(192, 192, kernel_size=(3, 1), stride=(1, 1), padding=(1, 0), bias=False)\n",
744
+ " (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
745
+ " (relu): ReLU()\n",
746
+ " )\n",
747
+ " )\n",
748
+ " (conv2d): Conv2d(384, 1792, kernel_size=(1, 1), stride=(1, 1))\n",
749
+ " (relu): ReLU()\n",
750
+ " )\n",
751
+ " )\n",
752
+ " (block8): Block8(\n",
753
+ " (branch0): BasicConv2d(\n",
754
+ " (conv): Conv2d(1792, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
755
+ " (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
756
+ " (relu): ReLU()\n",
757
+ " )\n",
758
+ " (branch1): Sequential(\n",
759
+ " (0): BasicConv2d(\n",
760
+ " (conv): Conv2d(1792, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)\n",
761
+ " (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
762
+ " (relu): ReLU()\n",
763
+ " )\n",
764
+ " (1): BasicConv2d(\n",
765
+ " (conv): Conv2d(192, 192, kernel_size=(1, 3), stride=(1, 1), padding=(0, 1), bias=False)\n",
766
+ " (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
767
+ " (relu): ReLU()\n",
768
+ " )\n",
769
+ " (2): BasicConv2d(\n",
770
+ " (conv): Conv2d(192, 192, kernel_size=(3, 1), stride=(1, 1), padding=(1, 0), bias=False)\n",
771
+ " (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
772
+ " (relu): ReLU()\n",
773
+ " )\n",
774
+ " )\n",
775
+ " (conv2d): Conv2d(384, 1792, kernel_size=(1, 1), stride=(1, 1))\n",
776
+ " )\n",
777
+ " (avgpool_1a): AdaptiveAvgPool2d(output_size=1)\n",
778
+ " (dropout): Dropout(p=0.6, inplace=False)\n",
779
+ " (last_linear): Linear(in_features=1792, out_features=512, bias=False)\n",
780
+ " (last_bn): BatchNorm1d(512, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
781
+ " (logits): Linear(in_features=512, out_features=1, bias=True)\n",
782
+ ")"
783
+ ]
784
+ },
785
+ "execution_count": 3,
786
+ "metadata": {},
787
+ "output_type": "execute_result"
788
+ }
789
+ ],
790
+ "source": [
791
+ "model = InceptionResnetV1(\n",
792
+ " pretrained=\"vggface2\",\n",
793
+ " classify=True,\n",
794
+ " num_classes=1,\n",
795
+ " device=DEVICE\n",
796
+ ")\n",
797
+ "\n",
798
+ "checkpoint = torch.load(\"resnetinceptionv1_epoch_32.pth\", map_location=torch.device('cpu'))\n",
799
+ "model.load_state_dict(checkpoint['model_state_dict'])\n",
800
+ "model.to(DEVICE)\n",
801
+ "model.eval()"
802
+ ]
803
+ },
804
+ {
805
+ "cell_type": "markdown",
806
+ "id": "a499194a",
807
+ "metadata": {},
808
+ "source": [
809
+ "# Model Inference "
810
+ ]
811
+ },
812
+ {
813
+ "cell_type": "code",
814
+ "execution_count": 4,
815
+ "id": "376e6cd6",
816
+ "metadata": {},
817
+ "outputs": [],
818
+ "source": [
819
+ "def predict(input_image:Image.Image):\n",
820
+ " \"\"\"Predict the label of the input_image\"\"\"\n",
821
+ " face = mtcnn(input_image)\n",
822
+ " if face is None:\n",
823
+ " raise Exception('No face detected')\n",
824
+ " face = face.unsqueeze(0) # add the batch dimension\n",
825
+ " face = F.interpolate(face, size=(256, 256), mode='bilinear', align_corners=False)\n",
826
+ " \n",
827
+ " # convert the face into a numpy array to be able to plot it\n",
828
+ " prev_face = face.squeeze(0).permute(1, 2, 0).cpu().detach().int().numpy()\n",
829
+ " prev_face = prev_face.astype('uint8')\n",
830
+ "\n",
831
+ " face = face.to(DEVICE)\n",
832
+ " face = face.to(torch.float32)\n",
833
+ " face = face / 255.0\n",
834
+ " face_image_to_plot = face.squeeze(0).permute(1, 2, 0).cpu().detach().int().numpy()\n",
835
+ "\n",
836
+ " target_layers=[model.block8.branch1[-1]]\n",
837
+ " use_cuda = True if torch.cuda.is_available() else False\n",
838
+ " cam = GradCAM(model=model, target_layers=target_layers, use_cuda=use_cuda)\n",
839
+ " targets = [ClassifierOutputTarget(0)]\n",
840
+ "\n",
841
+ " grayscale_cam = cam(input_tensor=face, targets=targets, eigen_smooth=True)\n",
842
+ " grayscale_cam = grayscale_cam[0, :]\n",
843
+ " visualization = show_cam_on_image(face_image_to_plot, grayscale_cam, use_rgb=True)\n",
844
+ " face_with_mask = cv2.addWeighted(prev_face, 1, visualization, 0.5, 0)\n",
845
+ "\n",
846
+ " with torch.no_grad():\n",
847
+ " output = torch.sigmoid(model(face).squeeze(0))\n",
848
+ " prediction = \"real\" if output.item() < 0.5 else \"fake\"\n",
849
+ " \n",
850
+ " real_prediction = 1 - output.item()\n",
851
+ " fake_prediction = output.item()\n",
852
+ " \n",
853
+ " confidences = {\n",
854
+ " 'real': real_prediction,\n",
855
+ " 'fake': fake_prediction\n",
856
+ " }\n",
857
+ " return confidences, face_with_mask\n"
858
+ ]
859
+ },
860
+ {
861
+ "cell_type": "markdown",
862
+ "id": "14f47b5a",
863
+ "metadata": {},
864
+ "source": [
865
+ "# Gradio Interface"
866
+ ]
867
+ },
868
+ {
869
+ "cell_type": "code",
870
+ "execution_count": 5,
871
+ "id": "d62177b5",
872
+ "metadata": {},
873
+ "outputs": [
874
+ {
875
+ "name": "stdout",
876
+ "output_type": "stream",
877
+ "text": [
878
+ "Running on local URL: http://127.0.0.1:7860\n",
879
+ "\n",
880
+ "To create a public link, set `share=True` in `launch()`.\n"
881
+ ]
882
+ },
883
+ {
884
+ "data": {
885
+ "text/html": [
886
+ "<div><iframe src=\"http://127.0.0.1:7860/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
887
+ ],
888
+ "text/plain": [
889
+ "<IPython.core.display.HTML object>"
890
+ ]
891
+ },
892
+ "metadata": {},
893
+ "output_type": "display_data"
894
+ }
895
+ ],
896
+ "source": [
897
+ "interface = gr.Interface(\n",
898
+ " fn=predict,\n",
899
+ " inputs=[\n",
900
+ " gr.inputs.Image(label=\"Input Image\", type=\"pil\")\n",
901
+ " ],\n",
902
+ " outputs=[\n",
903
+ " gr.outputs.Label(label=\"Class\"),\n",
904
+ " gr.outputs.Image(label=\"Face with Explainability\", type=\"pil\")\n",
905
+ " ],\n",
906
+ ").launch()"
907
+ ]
908
+ },
909
+ {
910
+ "cell_type": "code",
911
+ "execution_count": null,
912
+ "id": "0c0b293c",
913
+ "metadata": {},
914
+ "outputs": [],
915
+ "source": []
916
+ }
917
+ ],
918
+ "metadata": {
919
+ "kernelspec": {
920
+ "display_name": "Python 3 (ipykernel)",
921
+ "language": "python",
922
+ "name": "python3"
923
+ },
924
+ "language_info": {
925
+ "codemirror_mode": {
926
+ "name": "ipython",
927
+ "version": 3
928
+ },
929
+ "file_extension": ".py",
930
+ "mimetype": "text/x-python",
931
+ "name": "python",
932
+ "nbconvert_exporter": "python",
933
+ "pygments_lexer": "ipython3",
934
+ "version": "3.9.8"
935
+ }
936
+ },
937
+ "nbformat": 4,
938
+ "nbformat_minor": 5
939
+ }
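
As a quick sanity check, the classifier in the notebook can also be driven without the Gradio UI. The snippet below is a minimal illustrative sketch, not part of the commit: it assumes the notebook cells above have been executed (so `mtcnn`, `model`, and `predict` are defined) and uses one of the example frames added in this commit.

```python
# Minimal sketch (not part of the commit): call the notebook's predict()
# directly on an example frame, assuming the cells above have been run so
# that mtcnn, model, and predict are already defined.
from PIL import Image

image = Image.open("examples/real_frame_1.png").convert("RGB")
confidences, face_with_mask = predict(image)

print(confidences)  # e.g. {'real': 0.97, 'fake': 0.03} -- the higher score wins
Image.fromarray(face_with_mask).save("explained_face.png")  # Grad-CAM overlay
```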
LICENSE ADDED
@@ -0,0 +1,201 @@
1
+ Apache License
2
+ Version 2.0, January 2004
3
+ http://www.apache.org/licenses/
4
+
5
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6
+
7
+ 1. Definitions.
8
+
9
+ "License" shall mean the terms and conditions for use, reproduction,
10
+ and distribution as defined by Sections 1 through 9 of this document.
11
+
12
+ "Licensor" shall mean the copyright owner or entity authorized by
13
+ the copyright owner that is granting the License.
14
+
15
+ "Legal Entity" shall mean the union of the acting entity and all
16
+ other entities that control, are controlled by, or are under common
17
+ control with that entity. For the purposes of this definition,
18
+ "control" means (i) the power, direct or indirect, to cause the
19
+ direction or management of such entity, whether by contract or
20
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
21
+ outstanding shares, or (iii) beneficial ownership of such entity.
22
+
23
+ "You" (or "Your") shall mean an individual or Legal Entity
24
+ exercising permissions granted by this License.
25
+
26
+ "Source" form shall mean the preferred form for making modifications,
27
+ including but not limited to software source code, documentation
28
+ source, and configuration files.
29
+
30
+ "Object" form shall mean any form resulting from mechanical
31
+ transformation or translation of a Source form, including but
32
+ not limited to compiled object code, generated documentation,
33
+ and conversions to other media types.
34
+
35
+ "Work" shall mean the work of authorship, whether in Source or
36
+ Object form, made available under the License, as indicated by a
37
+ copyright notice that is included in or attached to the work
38
+ (an example is provided in the Appendix below).
39
+
40
+ "Derivative Works" shall mean any work, whether in Source or Object
41
+ form, that is based on (or derived from) the Work and for which the
42
+ editorial revisions, annotations, elaborations, or other modifications
43
+ represent, as a whole, an original work of authorship. For the purposes
44
+ of this License, Derivative Works shall not include works that remain
45
+ separable from, or merely link (or bind by name) to the interfaces of,
46
+ the Work and Derivative Works thereof.
47
+
48
+ "Contribution" shall mean any work of authorship, including
49
+ the original version of the Work and any modifications or additions
50
+ to that Work or Derivative Works thereof, that is intentionally
51
+ submitted to Licensor for inclusion in the Work by the copyright owner
52
+ or by an individual or Legal Entity authorized to submit on behalf of
53
+ the copyright owner. For the purposes of this definition, "submitted"
54
+ means any form of electronic, verbal, or written communication sent
55
+ to the Licensor or its representatives, including but not limited to
56
+ communication on electronic mailing lists, source code control systems,
57
+ and issue tracking systems that are managed by, or on behalf of, the
58
+ Licensor for the purpose of discussing and improving the Work, but
59
+ excluding communication that is conspicuously marked or otherwise
60
+ designated in writing by the copyright owner as "Not a Contribution."
61
+
62
+ "Contributor" shall mean Licensor and any individual or Legal Entity
63
+ on behalf of whom a Contribution has been received by Licensor and
64
+ subsequently incorporated within the Work.
65
+
66
+ 2. Grant of Copyright License. Subject to the terms and conditions of
67
+ this License, each Contributor hereby grants to You a perpetual,
68
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
69
+ copyright license to reproduce, prepare Derivative Works of,
70
+ publicly display, publicly perform, sublicense, and distribute the
71
+ Work and such Derivative Works in Source or Object form.
72
+
73
+ 3. Grant of Patent License. Subject to the terms and conditions of
74
+ this License, each Contributor hereby grants to You a perpetual,
75
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
76
+ (except as stated in this section) patent license to make, have made,
77
+ use, offer to sell, sell, import, and otherwise transfer the Work,
78
+ where such license applies only to those patent claims licensable
79
+ by such Contributor that are necessarily infringed by their
80
+ Contribution(s) alone or by combination of their Contribution(s)
81
+ with the Work to which such Contribution(s) was submitted. If You
82
+ institute patent litigation against any entity (including a
83
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
84
+ or a Contribution incorporated within the Work constitutes direct
85
+ or contributory patent infringement, then any patent licenses
86
+ granted to You under this License for that Work shall terminate
87
+ as of the date such litigation is filed.
88
+
89
+ 4. Redistribution. You may reproduce and distribute copies of the
90
+ Work or Derivative Works thereof in any medium, with or without
91
+ modifications, and in Source or Object form, provided that You
92
+ meet the following conditions:
93
+
94
+ (a) You must give any other recipients of the Work or
95
+ Derivative Works a copy of this License; and
96
+
97
+ (b) You must cause any modified files to carry prominent notices
98
+ stating that You changed the files; and
99
+
100
+ (c) You must retain, in the Source form of any Derivative Works
101
+ that You distribute, all copyright, patent, trademark, and
102
+ attribution notices from the Source form of the Work,
103
+ excluding those notices that do not pertain to any part of
104
+ the Derivative Works; and
105
+
106
+ (d) If the Work includes a "NOTICE" text file as part of its
107
+ distribution, then any Derivative Works that You distribute must
108
+ include a readable copy of the attribution notices contained
109
+ within such NOTICE file, excluding those notices that do not
110
+ pertain to any part of the Derivative Works, in at least one
111
+ of the following places: within a NOTICE text file distributed
112
+ as part of the Derivative Works; within the Source form or
113
+ documentation, if provided along with the Derivative Works; or,
114
+ within a display generated by the Derivative Works, if and
115
+ wherever such third-party notices normally appear. The contents
116
+ of the NOTICE file are for informational purposes only and
117
+ do not modify the License. You may add Your own attribution
118
+ notices within Derivative Works that You distribute, alongside
119
+ or as an addendum to the NOTICE text from the Work, provided
120
+ that such additional attribution notices cannot be construed
121
+ as modifying the License.
122
+
123
+ You may add Your own copyright statement to Your modifications and
124
+ may provide additional or different license terms and conditions
125
+ for use, reproduction, or distribution of Your modifications, or
126
+ for any such Derivative Works as a whole, provided Your use,
127
+ reproduction, and distribution of the Work otherwise complies with
128
+ the conditions stated in this License.
129
+
130
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
131
+ any Contribution intentionally submitted for inclusion in the Work
132
+ by You to the Licensor shall be under the terms and conditions of
133
+ this License, without any additional terms or conditions.
134
+ Notwithstanding the above, nothing herein shall supersede or modify
135
+ the terms of any separate license agreement you may have executed
136
+ with Licensor regarding such Contributions.
137
+
138
+ 6. Trademarks. This License does not grant permission to use the trade
139
+ names, trademarks, service marks, or product names of the Licensor,
140
+ except as required for reasonable and customary use in describing the
141
+ origin of the Work and reproducing the content of the NOTICE file.
142
+
143
+ 7. Disclaimer of Warranty. Unless required by applicable law or
144
+ agreed to in writing, Licensor provides the Work (and each
145
+ Contributor provides its Contributions) on an "AS IS" BASIS,
146
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
147
+ implied, including, without limitation, any warranties or conditions
148
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
149
+ PARTICULAR PURPOSE. You are solely responsible for determining the
150
+ appropriateness of using or redistributing the Work and assume any
151
+ risks associated with Your exercise of permissions under this License.
152
+
153
+ 8. Limitation of Liability. In no event and under no legal theory,
154
+ whether in tort (including negligence), contract, or otherwise,
155
+ unless required by applicable law (such as deliberate and grossly
156
+ negligent acts) or agreed to in writing, shall any Contributor be
157
+ liable to You for damages, including any direct, indirect, special,
158
+ incidental, or consequential damages of any character arising as a
159
+ result of this License or out of the use or inability to use the
160
+ Work (including but not limited to damages for loss of goodwill,
161
+ work stoppage, computer failure or malfunction, or any and all
162
+ other commercial damages or losses), even if such Contributor
163
+ has been advised of the possibility of such damages.
164
+
165
+ 9. Accepting Warranty or Additional Liability. While redistributing
166
+ the Work or Derivative Works thereof, You may choose to offer,
167
+ and charge a fee for, acceptance of support, warranty, indemnity,
168
+ or other liability obligations and/or rights consistent with this
169
+ License. However, in accepting such obligations, You may act only
170
+ on Your own behalf and on Your sole responsibility, not on behalf
171
+ of any other Contributor, and only if You agree to indemnify,
172
+ defend, and hold each Contributor harmless for any liability
173
+ incurred by, or claims asserted against, such Contributor by reason
174
+ of your accepting any such warranty or additional liability.
175
+
176
+ END OF TERMS AND CONDITIONS
177
+
178
+ APPENDIX: How to apply the Apache License to your work.
179
+
180
+ To apply the Apache License to your work, attach the following
181
+ boilerplate notice, with the fields enclosed by brackets "[]"
182
+ replaced with your own identifying information. (Don't include
183
+ the brackets!) The text should be enclosed in the appropriate
184
+ comment syntax for the file format. We also recommend that a
185
+ file or class name and description of purpose be included on the
186
+ same "printed page" as the copyright notice for easier
187
+ identification within third-party archives.
188
+
189
+ Copyright [yyyy] [name of copyright owner]
190
+
191
+ Licensed under the Apache License, Version 2.0 (the "License");
192
+ you may not use this file except in compliance with the License.
193
+ You may obtain a copy of the License at
194
+
195
+ http://www.apache.org/licenses/LICENSE-2.0
196
+
197
+ Unless required by applicable law or agreed to in writing, software
198
+ distributed under the License is distributed on an "AS IS" BASIS,
199
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200
+ See the License for the specific language governing permissions and
201
+ limitations under the License.
README.md CHANGED
@@ -1,3 +1,2 @@
1
- ---
2
- license: unknown
3
- ---
 
1
+ # deepfake-detection
2
+ Identify images as real or fake using an InceptionResnetV1-based deepfake detector with Grad-CAM explainability
 
examples/fake_dot_csv.jpeg ADDED
examples/fake_frame_1.png ADDED
examples/fake_frame_10.png ADDED

Git LFS Details

  • SHA256: b64c312c7e00e5ce672e5eb5380434705840a9e898bac3f39995b554441d94df
  • Pointer size: 132 Bytes
  • Size of remote file: 1.1 MB
examples/fake_frame_2.png ADDED
examples/fake_frame_3.png ADDED
examples/fake_frame_4.png ADDED
examples/fake_frame_5.png ADDED
examples/fake_frame_6.png ADDED

Git LFS Details

  • SHA256: 4743d592246d6afe7b9bcb0f487da24857099da7fa327ecbf8ec00261ca8afbd
  • Pointer size: 132 Bytes
  • Size of remote file: 1.23 MB
examples/fake_frame_7.png ADDED

Git LFS Details

  • SHA256: aa2690acaa91280a5dba662cb4a4c8c61c50f2aebde1c542a7a642d914b29f67
  • Pointer size: 132 Bytes
  • Size of remote file: 1.12 MB
examples/fake_frame_8.png ADDED
examples/fake_frame_9.png ADDED
examples/real_frame_1.png ADDED

Git LFS Details

  • SHA256: 4d1cc053e52722ba3851f557b21f465a8edc3dcfe7eed99ce06513efd5687994
  • Pointer size: 132 Bytes
  • Size of remote file: 1.27 MB
examples/real_frame_13.png ADDED

Git LFS Details

  • SHA256: 06d0a57e7211441ce7833ed222a62c74e0f10169b4f6f27354a8198aa26f8343
  • Pointer size: 132 Bytes
  • Size of remote file: 1.03 MB
examples/real_frame_16.png ADDED
examples/real_frame_19.png ADDED

Git LFS Details

  • SHA256: 7462696b8aa4759a5ed64725f62c113e82a01aa30a5883e2d37e7e66b0bd7b1c
  • Pointer size: 132 Bytes
  • Size of remote file: 1.24 MB
examples/real_frame_20.png ADDED

Git LFS Details

  • SHA256: be03d83828189848f12033da32d7eeac17bb1a7ebbf4398919c210aef6a0e989
  • Pointer size: 132 Bytes
  • Size of remote file: 1.1 MB
examples/real_frame_3.png ADDED

Git LFS Details

  • SHA256: 1f59adf74470e7bfaf1373bb17f4533079375ef5e4ad353aa9b7bd9ca94afe48
  • Pointer size: 132 Bytes
  • Size of remote file: 1.44 MB
examples/real_frame_8.png ADDED

Git LFS Details

  • SHA256: 09900e6005200db62cdd962885f2256649a36898a6603d94ff334ee8d889f1e5
  • Pointer size: 132 Bytes
  • Size of remote file: 1.15 MB
examples/real_frame_9.png ADDED
examples/real_lucia.jpeg ADDED
examples/real_mercedes.jpeg ADDED
kit_installer.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:35b848850c1d74b16f6b5c85e9be74eaf6006d8c8a297013b91591f20363d8dd
3
+ size 3690
requirements.txt ADDED
@@ -0,0 +1,7 @@
1
+ jupyter==1.0.0
2
+ gradio==3.23.0
3
+ Pillow==9.4.0
4
+ facenet-pytorch==2.5.2
5
+ torch==1.11.0
6
+ opencv-python==4.7.0.72
7
+ grad-cam==1.4.6
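
Note that the pinned `gradio==3.23.0` still exposes the deprecated `gr.inputs`/`gr.outputs` namespaces used in the notebook; later Gradio releases removed them. The following is a minimal sketch of the same interface written against the top-level components instead, assuming `predict` is defined as in `Deepfake_detection.ipynb`.

```python
# Sketch only: equivalent Gradio interface using top-level components instead
# of the deprecated gr.inputs/gr.outputs namespaces. Assumes predict() is
# defined as in Deepfake_detection.ipynb.
import gradio as gr

interface = gr.Interface(
    fn=predict,
    inputs=gr.Image(label="Input Image", type="pil"),
    outputs=[
        gr.Label(label="Class"),
        gr.Image(label="Face with Explainability"),
    ],
)
interface.launch()
```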