AItool committed
Commit a983ebc (parent: 9565e59)

Upload 127 files

This view is limited to 50 files because the commit contains too many changes; see the raw diff for the full change set.
Files changed (50):
  1. fastai/__init__.py +2 -0
  2. fastai/__pycache__/__init__.cpython-310.pyc +0 -0
  3. fastai/__pycache__/basics.cpython-310.pyc +0 -0
  4. fastai/__pycache__/fp16_utils.cpython-310.pyc +0 -0
  5. fastai/__pycache__/imports.cpython-310.pyc +0 -0
  6. fastai/__pycache__/interpret.cpython-310.pyc +0 -0
  7. fastai/__pycache__/layers.cpython-310.pyc +0 -0
  8. fastai/__pycache__/learner.cpython-310.pyc +0 -0
  9. fastai/__pycache__/losses.cpython-310.pyc +0 -0
  10. fastai/__pycache__/metrics.cpython-310.pyc +0 -0
  11. fastai/__pycache__/optimizer.cpython-310.pyc +0 -0
  12. fastai/__pycache__/torch_basics.cpython-310.pyc +0 -0
  13. fastai/__pycache__/torch_core.cpython-310.pyc +0 -0
  14. fastai/__pycache__/torch_imports.cpython-310.pyc +0 -0
  15. fastai/_modidx.py +0 -0
  16. fastai/_nbdev.py +899 -0
  17. fastai/_pytorch_doc.py +46 -0
  18. fastai/basics.py +6 -0
  19. fastai/callback/__init__.py +1 -0
  20. fastai/callback/__pycache__/__init__.cpython-310.pyc +0 -0
  21. fastai/callback/__pycache__/all.cpython-310.pyc +0 -0
  22. fastai/callback/__pycache__/channelslast.cpython-310.pyc +0 -0
  23. fastai/callback/__pycache__/core.cpython-310.pyc +0 -0
  24. fastai/callback/__pycache__/data.cpython-310.pyc +0 -0
  25. fastai/callback/__pycache__/fp16.cpython-310.pyc +0 -0
  26. fastai/callback/__pycache__/hook.cpython-310.pyc +0 -0
  27. fastai/callback/__pycache__/mixup.cpython-310.pyc +0 -0
  28. fastai/callback/__pycache__/preds.cpython-310.pyc +0 -0
  29. fastai/callback/__pycache__/progress.cpython-310.pyc +0 -0
  30. fastai/callback/__pycache__/rnn.cpython-310.pyc +0 -0
  31. fastai/callback/__pycache__/schedule.cpython-310.pyc +0 -0
  32. fastai/callback/__pycache__/tracker.cpython-310.pyc +0 -0
  33. fastai/callback/__pycache__/training.cpython-310.pyc +0 -0
  34. fastai/callback/all.py +12 -0
  35. fastai/callback/azureml.py +72 -0
  36. fastai/callback/captum.py +113 -0
  37. fastai/callback/channelslast.py +41 -0
  38. fastai/callback/comet.py +91 -0
  39. fastai/callback/core.py +187 -0
  40. fastai/callback/data.py +71 -0
  41. fastai/callback/fp16.py +217 -0
  42. fastai/callback/hook.py +281 -0
  43. fastai/callback/mixup.py +111 -0
  44. fastai/callback/neptune.py +80 -0
  45. fastai/callback/preds.py +18 -0
  46. fastai/callback/progress.py +124 -0
  47. fastai/callback/rnn.py +42 -0
  48. fastai/callback/schedule.py +314 -0
  49. fastai/callback/tensorboard.py +172 -0
  50. fastai/callback/tracker.py +139 -0
fastai/__init__.py ADDED
@@ -0,0 +1,2 @@
+ __version__ = "2.7.13"
+
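The two-line `fastai/__init__.py` above only pins the package version string. As a quick stand-alone sketch (re-declaring the string shown in the diff rather than importing the package), the version can be split into comparable integer components:

```python
# Sketch only: the version string is copied from the fastai/__init__.py
# diff above; we do not import fastai here.
__version__ = "2.7.13"

# Parse "major.minor.patch" into integers so versions compare numerically.
major, minor, patch = (int(part) for part in __version__.split("."))
assert (major, minor, patch) == (2, 7, 13)
```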
fastai/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (207 Bytes).

fastai/__pycache__/basics.cpython-310.pyc ADDED
Binary file (313 Bytes).

fastai/__pycache__/fp16_utils.cpython-310.pyc ADDED
Binary file (7.05 kB).

fastai/__pycache__/imports.cpython-310.pyc ADDED
Binary file (4.69 kB).

fastai/__pycache__/interpret.cpython-310.pyc ADDED
Binary file (7.66 kB).

fastai/__pycache__/layers.cpython-310.pyc ADDED
Binary file (32.5 kB).

fastai/__pycache__/learner.cpython-310.pyc ADDED
Binary file (32.4 kB).

fastai/__pycache__/losses.cpython-310.pyc ADDED
Binary file (11.9 kB).

fastai/__pycache__/metrics.cpython-310.pyc ADDED
Binary file (21.8 kB).

fastai/__pycache__/optimizer.cpython-310.pyc ADDED
Binary file (20.9 kB).

fastai/__pycache__/torch_basics.cpython-310.pyc ADDED
Binary file (507 Bytes).

fastai/__pycache__/torch_core.cpython-310.pyc ADDED
Binary file (39 kB).

fastai/__pycache__/torch_imports.cpython-310.pyc ADDED
Binary file (754 Bytes).
 
fastai/_modidx.py ADDED
The diff for this file is too large to render; see the raw diff.
 
fastai/_nbdev.py ADDED
@@ -0,0 +1,899 @@
+ # AUTOGENERATED BY NBDEV! DO NOT EDIT!
+
+ __all__ = ["index", "modules", "custom_doc_links", "git_url"]
+
+ index = {"defaults.benchmark": "00_torch_core.ipynb",
+ "setup_cuda": "00_torch_core.ipynb",
+ "subplots": "00_torch_core.ipynb",
+ "show_image": "00_torch_core.ipynb",
+ "show_titled_image": "00_torch_core.ipynb",
+ "show_images": "00_torch_core.ipynb",
+ "ArrayBase": "00_torch_core.ipynb",
+ "ArrayImageBase": "00_torch_core.ipynb",
+ "ArrayImage": "00_torch_core.ipynb",
+ "ArrayImageBW": "00_torch_core.ipynb",
+ "ArrayMask": "00_torch_core.ipynb",
+ "Tensor.__array_eq__": "00_torch_core.ipynb",
+ "tensor": "00_torch_core.ipynb",
+ "set_seed": "00_torch_core.ipynb",
+ "get_random_states": "00_torch_core.ipynb",
+ "set_random_states": "00_torch_core.ipynb",
+ "no_random": "00_torch_core.ipynb",
+ "unsqueeze": "00_torch_core.ipynb",
+ "unsqueeze_": "00_torch_core.ipynb",
+ "apply": "00_torch_core.ipynb",
+ "maybe_gather": "00_torch_core.ipynb",
+ "to_detach": "00_torch_core.ipynb",
+ "to_half": "00_torch_core.ipynb",
+ "to_float": "00_torch_core.ipynb",
+ "defaults.use_cuda": "00_torch_core.ipynb",
+ "default_device": "00_torch_core.ipynb",
+ "to_device": "00_torch_core.ipynb",
+ "to_cpu": "00_torch_core.ipynb",
+ "to_np": "00_torch_core.ipynb",
+ "to_concat": "00_torch_core.ipynb",
+ "Tensor.set_meta": "00_torch_core.ipynb",
+ "Tensor.as_subclass": "00_torch_core.ipynb",
+ "TensorBase": "00_torch_core.ipynb",
+ "TensorImageBase": "00_torch_core.ipynb",
+ "TensorImage": "00_torch_core.ipynb",
+ "TensorImageBW": "00_torch_core.ipynb",
+ "TensorMask": "00_torch_core.ipynb",
+ "TensorFlowField": "00_torch_core.ipynb",
+ "TensorCategory": "00_torch_core.ipynb",
+ "TensorMultiCategory": "00_torch_core.ipynb",
+ "TitledTensorScalar": "00_torch_core.ipynb",
+ "L.tensored": "00_torch_core.ipynb",
+ "L.stack": "00_torch_core.ipynb",
+ "L.cat": "00_torch_core.ipynb",
+ "concat": "00_torch_core.ipynb",
+ "Chunks": "00_torch_core.ipynb",
+ "show_title": "00_torch_core.ipynb",
+ "ShowTitle": "00_torch_core.ipynb",
+ "TitledInt": "00_torch_core.ipynb",
+ "TitledFloat": "00_torch_core.ipynb",
+ "TitledStr": "00_torch_core.ipynb",
+ "TitledTuple": "00_torch_core.ipynb",
+ "TitledStr.truncate": "00_torch_core.ipynb",
+ "pd.DataFrame.__init__": "00_torch_core.ipynb",
+ "get_empty_df": "00_torch_core.ipynb",
+ "display_df": "00_torch_core.ipynb",
+ "get_first": "00_torch_core.ipynb",
+ "one_param": "00_torch_core.ipynb",
+ "item_find": "00_torch_core.ipynb",
+ "find_device": "00_torch_core.ipynb",
+ "find_bs": "00_torch_core.ipynb",
+ "np_func": "00_torch_core.ipynb",
+ "Module": "00_torch_core.ipynb",
+ "get_model": "00_torch_core.ipynb",
+ "one_hot": "00_torch_core.ipynb",
+ "one_hot_decode": "00_torch_core.ipynb",
+ "params": "00_torch_core.ipynb",
+ "trainable_params": "00_torch_core.ipynb",
+ "norm_types": "00_torch_core.ipynb",
+ "norm_bias_params": "00_torch_core.ipynb",
+ "batch_to_samples": "00_torch_core.ipynb",
+ "Tensor.interp_1d": "00_torch_core.ipynb",
+ "Tensor.pca": "00_torch_core.ipynb",
+ "logit": "00_torch_core.ipynb",
+ "num_distrib": "00_torch_core.ipynb",
+ "rank_distrib": "00_torch_core.ipynb",
+ "distrib_barrier": "00_torch_core.ipynb",
+ "Path.save_array": "00_torch_core.ipynb",
+ "Path.load_array": "00_torch_core.ipynb",
+ "base_doc": "00_torch_core.ipynb",
+ "doc": "00_torch_core.ipynb",
+ "nested_reorder": "00_torch_core.ipynb",
+ "make_cross_image": "00_torch_core.ipynb",
+ "show_image_batch": "00_torch_core.ipynb",
+ "requires_grad": "00_torch_core.ipynb",
+ "init_default": "01_layers.ipynb",
+ "cond_init": "00_torch_core.ipynb",
+ "apply_leaf": "00_torch_core.ipynb",
+ "apply_init": "00_torch_core.ipynb",
+ "script_use_ctx": "00_torch_core.ipynb",
+ "script_save_ctx": "00_torch_core.ipynb",
+ "script_fwd": "00_torch_core.ipynb",
+ "script_bwd": "00_torch_core.ipynb",
+ "grad_module": "00_torch_core.ipynb",
+ "ismin_torch": "00_torch_core.ipynb",
+ "notmax_torch": "00_torch_core.ipynb",
+ "module": "01_layers.ipynb",
+ "Identity": "01_layers.ipynb",
+ "Lambda": "01_layers.ipynb",
+ "PartialLambda": "01_layers.ipynb",
+ "Flatten": "01_layers.ipynb",
+ "ToTensorBase": "01_layers.ipynb",
+ "View": "01_layers.ipynb",
+ "ResizeBatch": "01_layers.ipynb",
+ "Debugger": "01_layers.ipynb",
+ "sigmoid_range": "01_layers.ipynb",
+ "SigmoidRange": "01_layers.ipynb",
+ "AdaptiveConcatPool1d": "01_layers.ipynb",
+ "AdaptiveConcatPool2d": "01_layers.ipynb",
+ "PoolType": "01_layers.ipynb",
+ "adaptive_pool": "01_layers.ipynb",
+ "PoolFlatten": "01_layers.ipynb",
+ "NormType": "01_layers.ipynb",
+ "BatchNorm": "01_layers.ipynb",
+ "InstanceNorm": "01_layers.ipynb",
+ "BatchNorm1dFlat": "01_layers.ipynb",
+ "LinBnDrop": "01_layers.ipynb",
+ "sigmoid": "01_layers.ipynb",
+ "sigmoid_": "01_layers.ipynb",
+ "vleaky_relu": "01_layers.ipynb",
+ "init_linear": "01_layers.ipynb",
+ "defaults.activation": "01_layers.ipynb",
+ "ConvLayer": "01_layers.ipynb",
+ "AdaptiveAvgPool": "01_layers.ipynb",
+ "MaxPool": "01_layers.ipynb",
+ "AvgPool": "01_layers.ipynb",
+ "trunc_normal_": "01_layers.ipynb",
+ "Embedding": "01_layers.ipynb",
+ "SelfAttention": "01_layers.ipynb",
+ "PooledSelfAttention2d": "01_layers.ipynb",
+ "SimpleSelfAttention": "01_layers.ipynb",
+ "icnr_init": "01_layers.ipynb",
+ "PixelShuffle_ICNR": "01_layers.ipynb",
+ "sequential": "01_layers.ipynb",
+ "SequentialEx": "01_layers.ipynb",
+ "MergeLayer": "01_layers.ipynb",
+ "Cat": "01_layers.ipynb",
+ "SimpleCNN": "01_layers.ipynb",
+ "ProdLayer": "01_layers.ipynb",
+ "inplace_relu": "01_layers.ipynb",
+ "SEModule": "01_layers.ipynb",
+ "ResBlock": "01_layers.ipynb",
+ "SEBlock": "01_layers.ipynb",
+ "SEResNeXtBlock": "01_layers.ipynb",
+ "SeparableBlock": "01_layers.ipynb",
+ "TimeDistributed": "01_layers.ipynb",
+ "swish": "01_layers.ipynb",
+ "Swish": "01_layers.ipynb",
+ "MishJitAutoFn": "01_layers.ipynb",
+ "mish": "01_layers.ipynb",
+ "Mish": "01_layers.ipynb",
+ "ParameterModule": "01_layers.ipynb",
+ "children_and_parameters": "01_layers.ipynb",
+ "has_children": "01_layers.ipynb",
+ "flatten_model": "01_layers.ipynb",
+ "NoneReduce": "01_layers.ipynb",
+ "in_channels": "01_layers.ipynb",
+ "BaseLoss": "01a_losses.ipynb",
+ "CrossEntropyLossFlat": "01a_losses.ipynb",
+ "FocalLoss": "01a_losses.ipynb",
+ "FocalLossFlat": "01a_losses.ipynb",
+ "BCEWithLogitsLossFlat": "01a_losses.ipynb",
+ "BCELossFlat": "01a_losses.ipynb",
+ "MSELossFlat": "01a_losses.ipynb",
+ "L1LossFlat": "01a_losses.ipynb",
+ "LabelSmoothingCrossEntropy": "01a_losses.ipynb",
+ "LabelSmoothingCrossEntropyFlat": "01a_losses.ipynb",
+ "DiceLoss": "01a_losses.ipynb",
+ "fa_collate": "02_data.load.ipynb",
+ "fa_convert": "02_data.load.ipynb",
+ "SkipItemException": "02_data.load.ipynb",
+ "collate_error": "02_data.load.ipynb",
+ "DataLoader": "02_data.load.ipynb",
+ "TfmdDL": "03_data.core.ipynb",
+ "DataLoaders": "03_data.core.ipynb",
+ "FilteredBase": "03_data.core.ipynb",
+ "TfmdLists": "03_data.core.ipynb",
+ "decode_at": "03_data.core.ipynb",
+ "show_at": "03_data.core.ipynb",
+ "Datasets": "03_data.core.ipynb",
+ "test_set": "03_data.core.ipynb",
+ "DataLoaders.test_dl": "03_data.core.ipynb",
+ "fastai_cfg": "04_data.external.ipynb",
+ "fastai_path": "04_data.external.ipynb",
+ "URLs": "04_data.external.ipynb",
+ "untar_data": "04_data.external.ipynb",
+ "get_files": "05_data.transforms.ipynb",
+ "FileGetter": "05_data.transforms.ipynb",
+ "image_extensions": "05_data.transforms.ipynb",
+ "get_image_files": "05_data.transforms.ipynb",
+ "ImageGetter": "05_data.transforms.ipynb",
+ "get_text_files": "05_data.transforms.ipynb",
+ "ItemGetter": "05_data.transforms.ipynb",
+ "AttrGetter": "05_data.transforms.ipynb",
+ "RandomSplitter": "05_data.transforms.ipynb",
+ "TrainTestSplitter": "05_data.transforms.ipynb",
+ "IndexSplitter": "05_data.transforms.ipynb",
+ "EndSplitter": "05_data.transforms.ipynb",
+ "GrandparentSplitter": "05_data.transforms.ipynb",
+ "FuncSplitter": "05_data.transforms.ipynb",
+ "MaskSplitter": "05_data.transforms.ipynb",
+ "FileSplitter": "05_data.transforms.ipynb",
+ "ColSplitter": "05_data.transforms.ipynb",
+ "RandomSubsetSplitter": "05_data.transforms.ipynb",
+ "parent_label": "05_data.transforms.ipynb",
+ "RegexLabeller": "05_data.transforms.ipynb",
+ "ColReader": "05_data.transforms.ipynb",
+ "CategoryMap": "05_data.transforms.ipynb",
+ "Categorize": "05_data.transforms.ipynb",
+ "Category": "05_data.transforms.ipynb",
+ "MultiCategorize": "05_data.transforms.ipynb",
+ "MultiCategory": "05_data.transforms.ipynb",
+ "OneHotEncode": "05_data.transforms.ipynb",
+ "EncodedMultiCategorize": "05_data.transforms.ipynb",
+ "RegressionSetup": "05_data.transforms.ipynb",
+ "get_c": "05_data.transforms.ipynb",
+ "ToTensor": "05_data.transforms.ipynb",
+ "IntToFloatTensor": "05_data.transforms.ipynb",
+ "broadcast_vec": "05_data.transforms.ipynb",
+ "Normalize": "05_data.transforms.ipynb",
+ "TransformBlock": "06_data.block.ipynb",
+ "CategoryBlock": "06_data.block.ipynb",
+ "MultiCategoryBlock": "06_data.block.ipynb",
+ "RegressionBlock": "06_data.block.ipynb",
+ "DataBlock": "06_data.block.ipynb",
+ "DataBlock.summary": "06_data.block.ipynb",
+ "imagenet_stats": "07_vision.core.ipynb",
+ "cifar_stats": "07_vision.core.ipynb",
+ "mnist_stats": "07_vision.core.ipynb",
+ "n_px": "07_vision.core.ipynb",
+ "shape": "60_medical.imaging.ipynb",
+ "aspect": "07_vision.core.ipynb",
+ "Image.Image.reshape": "07_vision.core.ipynb",
+ "Image.Image.to_bytes_format": "07_vision.core.ipynb",
+ "Image.Image.to_thumb": "07_vision.core.ipynb",
+ "Image.Image.resize_max": "07_vision.core.ipynb",
+ "to_image": "07_vision.core.ipynb",
+ "load_image": "07_vision.core.ipynb",
+ "image2tensor": "07_vision.core.ipynb",
+ "PILBase": "07_vision.core.ipynb",
+ "PILImage": "07_vision.core.ipynb",
+ "PILImageBW": "07_vision.core.ipynb",
+ "PILMask": "07_vision.core.ipynb",
+ "OpenMask": "07_vision.core.ipynb",
+ "OpenMask.loss_func": "07_vision.core.ipynb",
+ "PILMask.create": "07_vision.core.ipynb",
+ "AddMaskCodes": "07_vision.core.ipynb",
+ "TensorPoint": "07_vision.core.ipynb",
+ "TensorPointCreate": "07_vision.core.ipynb",
+ "TensorPointCreate.loss_func": "07_vision.core.ipynb",
+ "TensorPoint.create": "07_vision.core.ipynb",
+ "get_annotations": "07_vision.core.ipynb",
+ "TensorBBox": "07_vision.core.ipynb",
+ "LabeledBBox": "07_vision.core.ipynb",
+ "encodes": "40_tabular.core.ipynb",
+ "PointScaler": "07_vision.core.ipynb",
+ "BBoxLabeler": "07_vision.core.ipynb",
+ "decodes": "40_tabular.core.ipynb",
+ "get_grid": "08_vision.data.ipynb",
+ "clip_remove_empty": "08_vision.data.ipynb",
+ "bb_pad": "08_vision.data.ipynb",
+ "ImageBlock": "08_vision.data.ipynb",
+ "MaskBlock": "08_vision.data.ipynb",
+ "PointBlock": "08_vision.data.ipynb",
+ "BBoxBlock": "08_vision.data.ipynb",
+ "PointBlock.__doc__": "08_vision.data.ipynb",
+ "BBoxBlock.__doc__": "08_vision.data.ipynb",
+ "BBoxLblBlock": "08_vision.data.ipynb",
+ "ImageDataLoaders": "08_vision.data.ipynb",
+ "ImageDataLoaders.from_csv": "08_vision.data.ipynb",
+ "ImageDataLoaders.from_name_func": "08_vision.data.ipynb",
+ "ImageDataLoaders.from_path_re": "08_vision.data.ipynb",
+ "ImageDataLoaders.from_name_re": "08_vision.data.ipynb",
+ "SegmentationDataLoaders": "08_vision.data.ipynb",
+ "RandTransform": "09_vision.augment.ipynb",
+ "TensorTypes": "09_vision.augment.ipynb",
+ "Image.Image.flip_lr": "09_vision.augment.ipynb",
+ "TensorImageBase.flip_lr": "09_vision.augment.ipynb",
+ "TensorPoint.flip_lr": "09_vision.augment.ipynb",
+ "TensorBBox.flip_lr": "09_vision.augment.ipynb",
+ "FlipItem": "09_vision.augment.ipynb",
+ "PILImage.dihedral": "09_vision.augment.ipynb",
+ "TensorImage.dihedral": "09_vision.augment.ipynb",
+ "TensorPoint.dihedral": "09_vision.augment.ipynb",
+ "TensorBBox.dihedral": "09_vision.augment.ipynb",
+ "DihedralItem": "09_vision.augment.ipynb",
+ "TensorBBox.crop_pad": "09_vision.augment.ipynb",
+ "TensorPoint.crop_pad": "09_vision.augment.ipynb",
+ "Image.Image.crop_pad": "09_vision.augment.ipynb",
+ "CropPad": "09_vision.augment.ipynb",
+ "RandomCrop": "09_vision.augment.ipynb",
+ "OldRandomCrop": "09_vision.augment.ipynb",
+ "Resize": "09_vision.augment.ipynb",
+ "RandomResizedCrop": "09_vision.augment.ipynb",
+ "RatioResize": "09_vision.augment.ipynb",
+ "affine_grid": "09_vision.augment.ipynb",
+ "TensorImage.affine_coord": "09_vision.augment.ipynb",
+ "TensorMask.affine_coord": "09_vision.augment.ipynb",
+ "TensorPoint.affine_coord": "09_vision.augment.ipynb",
+ "TensorBBox.affine_coord": "09_vision.augment.ipynb",
+ "AffineCoordTfm": "09_vision.augment.ipynb",
+ "RandomResizedCropGPU": "09_vision.augment.ipynb",
+ "mask_tensor": "09_vision.augment.ipynb",
+ "affine_mat": "09_vision.augment.ipynb",
+ "flip_mat": "09_vision.augment.ipynb",
+ "TensorImage.flip_batch": "09_vision.augment.ipynb",
+ "TensorMask.flip_batch": "09_vision.augment.ipynb",
+ "TensorPoint.flip_batch": "09_vision.augment.ipynb",
+ "TensorBBox.flip_batch": "09_vision.augment.ipynb",
+ "Flip": "09_vision.augment.ipynb",
+ "DeterministicDraw": "09_vision.augment.ipynb",
+ "DeterministicFlip": "09_vision.augment.ipynb",
+ "dihedral_mat": "09_vision.augment.ipynb",
+ "TensorImage.dihedral_batch": "09_vision.augment.ipynb",
+ "TensorMask.dihedral_batch": "09_vision.augment.ipynb",
+ "TensorPoint.dihedral_batch": "09_vision.augment.ipynb",
+ "TensorBBox.dihedral_batch": "09_vision.augment.ipynb",
+ "Dihedral": "09_vision.augment.ipynb",
+ "DeterministicDihedral": "09_vision.augment.ipynb",
+ "rotate_mat": "09_vision.augment.ipynb",
+ "TensorImage.rotate": "09_vision.augment.ipynb",
+ "TensorMask.rotate": "09_vision.augment.ipynb",
+ "TensorPoint.rotate": "09_vision.augment.ipynb",
+ "TensorBBox.rotate": "09_vision.augment.ipynb",
+ "Rotate": "09_vision.augment.ipynb",
+ "zoom_mat": "09_vision.augment.ipynb",
+ "TensorImage.zoom": "09_vision.augment.ipynb",
+ "TensorMask.zoom": "09_vision.augment.ipynb",
+ "TensorPoint.zoom": "09_vision.augment.ipynb",
+ "TensorBBox.zoom": "09_vision.augment.ipynb",
+ "Zoom": "09_vision.augment.ipynb",
+ "find_coeffs": "09_vision.augment.ipynb",
+ "apply_perspective": "09_vision.augment.ipynb",
+ "TensorImage.warp": "09_vision.augment.ipynb",
+ "TensorMask.warp": "09_vision.augment.ipynb",
+ "TensorPoint.warp": "09_vision.augment.ipynb",
+ "TensorBBox.warp": "09_vision.augment.ipynb",
+ "Warp": "09_vision.augment.ipynb",
+ "TensorImage.lighting": "09_vision.augment.ipynb",
+ "SpaceTfm": "09_vision.augment.ipynb",
+ "LightingTfm": "09_vision.augment.ipynb",
+ "TensorImage.brightness": "09_vision.augment.ipynb",
+ "Brightness": "09_vision.augment.ipynb",
+ "TensorImage.contrast": "09_vision.augment.ipynb",
+ "Contrast": "09_vision.augment.ipynb",
+ "grayscale": "09_vision.augment.ipynb",
+ "TensorImage.saturation": "09_vision.augment.ipynb",
+ "Saturation": "09_vision.augment.ipynb",
+ "rgb2hsv": "09_vision.augment.ipynb",
+ "hsv2rgb": "09_vision.augment.ipynb",
+ "TensorImage.hsv": "09_vision.augment.ipynb",
+ "HSVTfm": "09_vision.augment.ipynb",
+ "TensorImage.hue": "09_vision.augment.ipynb",
+ "Hue": "09_vision.augment.ipynb",
+ "cutout_gaussian": "09_vision.augment.ipynb",
+ "norm_apply_denorm": "09_vision.augment.ipynb",
+ "RandomErasing": "09_vision.augment.ipynb",
+ "setup_aug_tfms": "09_vision.augment.ipynb",
+ "aug_transforms": "09_vision.augment.ipynb",
+ "download_images": "09b_vision.utils.ipynb",
+ "resize_to": "09b_vision.utils.ipynb",
+ "verify_image": "09b_vision.utils.ipynb",
+ "verify_images": "09b_vision.utils.ipynb",
+ "resize_image": "09b_vision.utils.ipynb",
+ "resize_images": "09b_vision.utils.ipynb",
+ "Box.__getitem__": "09c_vision.widgets.ipynb",
+ "widget": "09c_vision.widgets.ipynb",
+ "carousel": "09c_vision.widgets.ipynb",
+ "ImagesCleaner": "09c_vision.widgets.ipynb",
+ "ImageClassifierCleaner": "09c_vision.widgets.ipynb",
+ "init_cnn": "11_vision.models.xresnet.ipynb",
+ "XResNet": "11_vision.models.xresnet.ipynb",
+ "xresnet18": "11_vision.models.xresnet.ipynb",
+ "xresnet34": "11_vision.models.xresnet.ipynb",
+ "xresnet50": "11_vision.models.xresnet.ipynb",
+ "xresnet101": "11_vision.models.xresnet.ipynb",
+ "xresnet152": "11_vision.models.xresnet.ipynb",
+ "xresnet18_deep": "11_vision.models.xresnet.ipynb",
+ "xresnet34_deep": "11_vision.models.xresnet.ipynb",
+ "xresnet50_deep": "11_vision.models.xresnet.ipynb",
+ "xresnet18_deeper": "11_vision.models.xresnet.ipynb",
+ "xresnet34_deeper": "11_vision.models.xresnet.ipynb",
+ "xresnet50_deeper": "11_vision.models.xresnet.ipynb",
+ "se_kwargs1": "11_vision.models.xresnet.ipynb",
+ "se_kwargs2": "11_vision.models.xresnet.ipynb",
+ "se_kwargs3": "11_vision.models.xresnet.ipynb",
+ "g0": "11_vision.models.xresnet.ipynb",
+ "g1": "11_vision.models.xresnet.ipynb",
+ "g2": "11_vision.models.xresnet.ipynb",
+ "g3": "11_vision.models.xresnet.ipynb",
+ "xse_resnet18": "11_vision.models.xresnet.ipynb",
+ "xse_resnext18": "11_vision.models.xresnet.ipynb",
+ "xresnext18": "11_vision.models.xresnet.ipynb",
+ "xse_resnet34": "11_vision.models.xresnet.ipynb",
+ "xse_resnext34": "11_vision.models.xresnet.ipynb",
+ "xresnext34": "11_vision.models.xresnet.ipynb",
+ "xse_resnet50": "11_vision.models.xresnet.ipynb",
+ "xse_resnext50": "11_vision.models.xresnet.ipynb",
+ "xresnext50": "11_vision.models.xresnet.ipynb",
+ "xse_resnet101": "11_vision.models.xresnet.ipynb",
+ "xse_resnext101": "11_vision.models.xresnet.ipynb",
+ "xresnext101": "11_vision.models.xresnet.ipynb",
+ "xse_resnet152": "11_vision.models.xresnet.ipynb",
+ "xsenet154": "11_vision.models.xresnet.ipynb",
+ "xse_resnext18_deep": "11_vision.models.xresnet.ipynb",
+ "xse_resnext34_deep": "11_vision.models.xresnet.ipynb",
+ "xse_resnext50_deep": "11_vision.models.xresnet.ipynb",
+ "xse_resnext18_deeper": "11_vision.models.xresnet.ipynb",
+ "xse_resnext34_deeper": "11_vision.models.xresnet.ipynb",
+ "xse_resnext50_deeper": "11_vision.models.xresnet.ipynb",
+ "Optimizer": "12_optimizer.ipynb",
+ "sgd_step": "12_optimizer.ipynb",
+ "weight_decay": "12_optimizer.ipynb",
+ "weight_decay.defaults": "12_optimizer.ipynb",
+ "l2_reg": "12_optimizer.ipynb",
+ "l2_reg.defaults": "12_optimizer.ipynb",
+ "average_grad": "12_optimizer.ipynb",
+ "average_grad.defaults": "12_optimizer.ipynb",
+ "average_sqr_grad": "12_optimizer.ipynb",
+ "average_sqr_grad.defaults": "12_optimizer.ipynb",
+ "momentum_step": "12_optimizer.ipynb",
+ "SGD": "12_optimizer.ipynb",
+ "rms_prop_step": "12_optimizer.ipynb",
+ "rms_prop_step.defaults": "12_optimizer.ipynb",
+ "RMSProp": "12_optimizer.ipynb",
+ "step_stat": "12_optimizer.ipynb",
+ "debias": "12_optimizer.ipynb",
+ "adam_step": "12_optimizer.ipynb",
+ "Adam": "12_optimizer.ipynb",
+ "radam_step": "12_optimizer.ipynb",
+ "RAdam": "12_optimizer.ipynb",
+ "qhadam_step": "12_optimizer.ipynb",
+ "QHAdam": "12_optimizer.ipynb",
+ "larc_layer_lr": "12_optimizer.ipynb",
+ "larc_layer_lr.defaults": "12_optimizer.ipynb",
+ "larc_step": "12_optimizer.ipynb",
+ "Larc": "12_optimizer.ipynb",
+ "lamb_step": "12_optimizer.ipynb",
+ "Lamb": "12_optimizer.ipynb",
+ "Lookahead": "12_optimizer.ipynb",
+ "ranger": "12_optimizer.ipynb",
+ "detuplify_pg": "12_optimizer.ipynb",
+ "set_item_pg": "12_optimizer.ipynb",
+ "pytorch_hp_map": "12_optimizer.ipynb",
+ "OptimWrapper": "12_optimizer.ipynb",
+ "Callback": "13_callback.core.ipynb",
+ "TrainEvalCallback": "13_callback.core.ipynb",
+ "GatherPredsCallback": "13_callback.core.ipynb",
+ "FetchPredsCallback": "13_callback.core.ipynb",
+ "defaults.lr": "13a_learner.ipynb",
+ "replacing_yield": "13a_learner.ipynb",
+ "mk_metric": "13a_learner.ipynb",
+ "save_model": "13a_learner.ipynb",
+ "load_model": "13a_learner.ipynb",
+ "SkipToEpoch": "13a_learner.ipynb",
+ "Learner": "13a_learner.ipynb",
+ "before_batch_cb": "13a_learner.ipynb",
+ "Learner.save": "13a_learner.ipynb",
+ "Learner.load": "13a_learner.ipynb",
+ "Learner.export": "13a_learner.ipynb",
+ "load_learner": "13a_learner.ipynb",
+ "Metric": "13a_learner.ipynb",
+ "AvgMetric": "13a_learner.ipynb",
+ "AvgLoss": "13a_learner.ipynb",
+ "AvgSmoothLoss": "13a_learner.ipynb",
+ "ValueMetric": "13a_learner.ipynb",
+ "Recorder": "13a_learner.ipynb",
+ "CastToTensor": "13a_learner.ipynb",
+ "Learner.freeze_to": "13a_learner.ipynb",
+ "Learner.freeze": "13a_learner.ipynb",
+ "Learner.unfreeze": "13a_learner.ipynb",
+ "Learner.tta": "13a_learner.ipynb",
+ "flatten_check": "13b_metrics.ipynb",
+ "AccumMetric": "13b_metrics.ipynb",
+ "skm_to_fastai": "13b_metrics.ipynb",
+ "optim_metric": "13b_metrics.ipynb",
+ "accuracy": "13b_metrics.ipynb",
+ "error_rate": "13b_metrics.ipynb",
+ "top_k_accuracy": "13b_metrics.ipynb",
+ "APScoreBinary": "13b_metrics.ipynb",
+ "BalancedAccuracy": "13b_metrics.ipynb",
+ "BrierScore": "13b_metrics.ipynb",
+ "CohenKappa": "13b_metrics.ipynb",
+ "F1Score": "13b_metrics.ipynb",
+ "FBeta": "13b_metrics.ipynb",
+ "HammingLoss": "13b_metrics.ipynb",
+ "Jaccard": "13b_metrics.ipynb",
+ "Precision": "13b_metrics.ipynb",
+ "Recall": "13b_metrics.ipynb",
+ "RocAuc": "13b_metrics.ipynb",
+ "RocAucBinary": "13b_metrics.ipynb",
+ "MatthewsCorrCoef": "13b_metrics.ipynb",
+ "accuracy_multi": "13b_metrics.ipynb",
+ "APScoreMulti": "13b_metrics.ipynb",
+ "BrierScoreMulti": "13b_metrics.ipynb",
+ "F1ScoreMulti": "13b_metrics.ipynb",
+ "FBetaMulti": "13b_metrics.ipynb",
+ "HammingLossMulti": "13b_metrics.ipynb",
+ "JaccardMulti": "13b_metrics.ipynb",
+ "MatthewsCorrCoefMulti": "13b_metrics.ipynb",
+ "PrecisionMulti": "13b_metrics.ipynb",
+ "RecallMulti": "13b_metrics.ipynb",
+ "RocAucMulti": "13b_metrics.ipynb",
+ "mse": "13b_metrics.ipynb",
+ "rmse": "13b_metrics.ipynb",
+ "rmse.__doc__": "13b_metrics.ipynb",
+ "mae": "13b_metrics.ipynb",
+ "msle": "13b_metrics.ipynb",
+ "exp_rmspe": "13b_metrics.ipynb",
+ "exp_rmspe.__doc__": "13b_metrics.ipynb",
+ "ExplainedVariance": "13b_metrics.ipynb",
+ "R2Score": "13b_metrics.ipynb",
+ "PearsonCorrCoef": "13b_metrics.ipynb",
+ "SpearmanCorrCoef": "13b_metrics.ipynb",
+ "foreground_acc": "13b_metrics.ipynb",
+ "Dice": "13b_metrics.ipynb",
+ "DiceMulti": "13b_metrics.ipynb",
+ "JaccardCoeff": "13b_metrics.ipynb",
+ "CorpusBLEUMetric": "13b_metrics.ipynb",
+ "Perplexity": "13b_metrics.ipynb",
+ "perplexity": "13b_metrics.ipynb",
+ "LossMetric": "13b_metrics.ipynb",
+ "LossMetrics": "13b_metrics.ipynb",
+ "annealer": "14_callback.schedule.ipynb",
+ "sched_lin": "14_callback.schedule.ipynb",
+ "sched_cos": "14_callback.schedule.ipynb",
+ "sched_no": "14_callback.schedule.ipynb",
+ "sched_exp": "14_callback.schedule.ipynb",
+ "SchedLin": "14_callback.schedule.ipynb",
+ "SchedCos": "14_callback.schedule.ipynb",
+ "SchedNo": "14_callback.schedule.ipynb",
+ "SchedExp": "14_callback.schedule.ipynb",
+ "SchedLin.__doc__": "14_callback.schedule.ipynb",
+ "SchedCos.__doc__": "14_callback.schedule.ipynb",
+ "SchedExp.__doc__": "14_callback.schedule.ipynb",
+ "SchedPoly": "14_callback.schedule.ipynb",
+ "combine_scheds": "14_callback.schedule.ipynb",
+ "combined_cos": "14_callback.schedule.ipynb",
+ "ParamScheduler": "14_callback.schedule.ipynb",
+ "Learner.fit_one_cycle": "14_callback.schedule.ipynb",
+ "Recorder.plot_sched": "14_callback.schedule.ipynb",
+ "Learner.fit_flat_cos": "14_callback.schedule.ipynb",
+ "Learner.fit_sgdr": "14_callback.schedule.ipynb",
+ "Learner.fine_tune": "14_callback.schedule.ipynb",
+ "LRFinder": "14_callback.schedule.ipynb",
+ "valley": "14_callback.schedule.ipynb",
+ "slide": "14_callback.schedule.ipynb",
+ "minimum": "14_callback.schedule.ipynb",
+ "steep": "14_callback.schedule.ipynb",
+ "Recorder.plot_lr_find": "14_callback.schedule.ipynb",
+ "Learner.lr_find": "14_callback.schedule.ipynb",
+ "CollectDataCallback": "14a_callback.data.ipynb",
+ "WeightedDL": "14a_callback.data.ipynb",
+ "Datasets.weighted_dataloaders": "14a_callback.data.ipynb",
+ "DataBlock.weighted_dataloaders": "14a_callback.data.ipynb",
+ "PartialDL": "14a_callback.data.ipynb",
+ "FilteredBase.partial_dataloaders": "14a_callback.data.ipynb",
+ "Hook": "15_callback.hook.ipynb",
+ "hook_output": "15_callback.hook.ipynb",
+ "Hooks": "15_callback.hook.ipynb",
+ "hook_outputs": "15_callback.hook.ipynb",
+ "dummy_eval": "15_callback.hook.ipynb",
+ "model_sizes": "15_callback.hook.ipynb",
+ "num_features_model": "15_callback.hook.ipynb",
+ "has_params": "15_callback.hook.ipynb",
+ "HookCallback": "15_callback.hook.ipynb",
+ "total_params": "15_callback.hook.ipynb",
+ "layer_info": "15_callback.hook.ipynb",
+ "module_summary": "15_callback.hook.ipynb",
+ "Learner.summary": "15_callback.hook.ipynb",
+ "ActivationStats": "15_callback.hook.ipynb",
+ "UnetBlock": "15a_vision.models.unet.ipynb",
+ "ResizeToOrig": "15a_vision.models.unet.ipynb",
+ "DynamicUnet": "15a_vision.models.unet.ipynb",
+ "ProgressCallback": "16_callback.progress.ipynb",
+ "Learner.no_bar": "16_callback.progress.ipynb",
+ "ShowGraphCallback": "16_callback.progress.ipynb",
+ "CSVLogger": "16_callback.progress.ipynb",
+ "TerminateOnNaNCallback": "17_callback.tracker.ipynb",
+ "TrackerCallback": "17_callback.tracker.ipynb",
+ "EarlyStoppingCallback": "17_callback.tracker.ipynb",
+ "SaveModelCallback": "17_callback.tracker.ipynb",
+ "ReduceLROnPlateau": "17_callback.tracker.ipynb",
+ "MixedPrecision": "18_callback.fp16.ipynb",
+ "FP16TestCallback": "18_callback.fp16.ipynb",
+ "Learner.to_fp16": "18_callback.fp16.ipynb",
+ "Learner.to_fp32": "18_callback.fp16.ipynb",
+ "get_master": "18_callback.fp16.ipynb",
+ "to_master_grads": "18_callback.fp16.ipynb",
+ "to_model_params": "18_callback.fp16.ipynb",
+ "test_overflow": "18_callback.fp16.ipynb",
+ "grad_overflow": "18_callback.fp16.ipynb",
+ "copy_clone": "18_callback.fp16.ipynb",
+ "ModelToHalf": "18_callback.fp16.ipynb",
+ "NonNativeMixedPrecision": "18_callback.fp16.ipynb",
+ "Learner.to_non_native_fp16": "18_callback.fp16.ipynb",
+ "Learner.to_non_native_fp32": "18_callback.fp16.ipynb",
+ "ShortEpochCallback": "18a_callback.training.ipynb",
603
+ "GradientAccumulation": "18a_callback.training.ipynb",
604
+ "GradientClip": "18a_callback.training.ipynb",
605
+ "set_bn_eval": "18a_callback.training.ipynb",
606
+ "BnFreeze": "18a_callback.training.ipynb",
607
+ "bn_types": "18a_callback.training.ipynb",
608
+ "ChannelsLast": "18a_callback.training.ipynb",
609
+ "MCDropoutCallback": "18b_callback.preds.ipynb",
610
+ "reduce_loss": "19_callback.mixup.ipynb",
611
+ "MixHandler": "19_callback.mixup.ipynb",
612
+ "MixUp": "19_callback.mixup.ipynb",
613
+ "CutMix": "19_callback.mixup.ipynb",
614
+ "Interpretation": "20_interpret.ipynb",
615
+ "ClassificationInterpretation": "20_interpret.ipynb",
616
+ "SegmentationInterpretation": "20_interpret.ipynb",
617
+ "DataParallel.reset": "20a_distributed.ipynb",
618
+ "ParallelTrainer": "20a_distributed.ipynb",
619
+ "Learner.to_parallel": "20a_distributed.ipynb",
620
+ "Learner.detach_parallel": "20a_distributed.ipynb",
621
+ "Learner.parallel_ctx": "20a_distributed.ipynb",
622
+ "DistributedDataParallel.reset": "20a_distributed.ipynb",
623
+ "setup_distrib": "20a_distributed.ipynb",
624
+ "teardown_distrib": "20a_distributed.ipynb",
625
+ "DistributedDL": "20a_distributed.ipynb",
626
+ "DistributedTrainer": "20a_distributed.ipynb",
627
+ "Learner.to_distributed": "20a_distributed.ipynb",
628
+ "Learner.detach_distributed": "20a_distributed.ipynb",
629
+ "Learner.distrib_ctx": "20a_distributed.ipynb",
630
+ "rank0_first": "20a_distributed.ipynb",
631
+ "has_pool_type": "21_vision.learner.ipynb",
632
+ "cut_model": "21_vision.learner.ipynb",
633
+ "create_body": "21_vision.learner.ipynb",
634
+ "create_head": "21_vision.learner.ipynb",
635
+ "default_split": "21_vision.learner.ipynb",
636
+ "model_meta": "21_vision.learner.ipynb",
637
+ "add_head": "21_vision.learner.ipynb",
638
+ "create_vision_model": "21_vision.learner.ipynb",
639
+ "TimmBody": "21_vision.learner.ipynb",
640
+ "create_timm_model": "21_vision.learner.ipynb",
641
+ "vision_learner": "21_vision.learner.ipynb",
642
+ "create_unet_model": "21_vision.learner.ipynb",
643
+ "unet_learner": "21_vision.learner.ipynb",
644
+ "create_cnn_model": "21_vision.learner.ipynb",
645
+ "cnn_learner": "21_vision.learner.ipynb",
646
+ "GANModule": "24_vision.gan.ipynb",
647
+ "basic_critic": "24_vision.gan.ipynb",
648
+ "AddChannels": "24_vision.gan.ipynb",
649
+ "basic_generator": "24_vision.gan.ipynb",
650
+ "DenseResBlock": "24_vision.gan.ipynb",
651
+ "gan_critic": "24_vision.gan.ipynb",
652
+ "GANLoss": "24_vision.gan.ipynb",
653
+ "AdaptiveLoss": "24_vision.gan.ipynb",
654
+ "accuracy_thresh_expand": "24_vision.gan.ipynb",
655
+ "set_freeze_model": "24_vision.gan.ipynb",
656
+ "GANTrainer": "24_vision.gan.ipynb",
657
+ "FixedGANSwitcher": "24_vision.gan.ipynb",
658
+ "AdaptiveGANSwitcher": "24_vision.gan.ipynb",
659
+ "GANDiscriminativeLR": "24_vision.gan.ipynb",
660
+ "InvisibleTensor": "24_vision.gan.ipynb",
661
+ "generate_noise": "24_vision.gan.ipynb",
662
+ "gan_loss_from_func": "24_vision.gan.ipynb",
663
+ "GANLearner": "24_vision.gan.ipynb",
664
+ "GANLearner.from_learners": "24_vision.gan.ipynb",
665
+ "GANLearner.wgan": "24_vision.gan.ipynb",
666
+ "spec_add_spaces": "30_text.core.ipynb",
667
+ "rm_useless_spaces": "30_text.core.ipynb",
668
+ "replace_rep": "30_text.core.ipynb",
669
+ "replace_wrep": "30_text.core.ipynb",
670
+ "fix_html": "30_text.core.ipynb",
671
+ "replace_all_caps": "30_text.core.ipynb",
672
+ "replace_maj": "30_text.core.ipynb",
673
+ "lowercase": "30_text.core.ipynb",
674
+ "replace_space": "30_text.core.ipynb",
675
+ "defaults.text_spec_tok": "30_text.core.ipynb",
676
+ "defaults.text_proc_rules": "30_text.core.ipynb",
677
+ "defaults.text_postproc_rules": "30_text.core.ipynb",
678
+ "BaseTokenizer": "30_text.core.ipynb",
679
+ "SpacyTokenizer": "30_text.core.ipynb",
680
+ "WordTokenizer": "30_text.core.ipynb",
681
+ "TokenizeWithRules": "30_text.core.ipynb",
682
+ "tokenize1": "30_text.core.ipynb",
683
+ "parallel_tokenize": "30_text.core.ipynb",
684
+ "fn_counter_pkl": "30_text.core.ipynb",
685
+ "fn_lengths_pkl": "30_text.core.ipynb",
686
+ "tokenize_folder": "30_text.core.ipynb",
687
+ "tokenize_files": "30_text.core.ipynb",
688
+ "tokenize_texts": "30_text.core.ipynb",
689
+ "tokenize_df": "30_text.core.ipynb",
690
+ "tokenize_csv": "30_text.core.ipynb",
691
+ "load_tokenized_csv": "30_text.core.ipynb",
692
+ "Tokenizer": "30_text.core.ipynb",
693
+ "eu_langs": "30_text.core.ipynb",
694
+ "SentencePieceTokenizer": "30_text.core.ipynb",
695
+ "SubwordTokenizer": "30_text.core.ipynb",
696
+ "reverse_text": "31_text.data.ipynb",
697
+ "make_vocab": "31_text.data.ipynb",
698
+ "TensorText": "31_text.data.ipynb",
699
+ "LMTensorText": "31_text.data.ipynb",
700
+ "TensorText.__doc__": "31_text.data.ipynb",
701
+ "LMTensorText.__doc__": "31_text.data.ipynb",
702
+ "Numericalize": "31_text.data.ipynb",
703
+ "LMDataLoader": "31_text.data.ipynb",
704
+ "Pad_Input": "31_text.data.ipynb",
705
+ "pad_input": "31_text.data.ipynb",
706
+ "pad_chunk": "31_text.data.ipynb",
707
+ "pad_input_chunk": "31_text.data.ipynb",
708
+ "Pad_Chunk": "31_text.data.ipynb",
709
+ "SortedDL": "31_text.data.ipynb",
710
+ "TextBlock": "31_text.data.ipynb",
711
+ "TextDataLoaders": "31_text.data.ipynb",
712
+ "TextDataLoaders.from_csv": "31_text.data.ipynb",
713
+ "dropout_mask": "32_text.models.awdlstm.ipynb",
714
+ "RNNDropout": "32_text.models.awdlstm.ipynb",
715
+ "WeightDropout": "32_text.models.awdlstm.ipynb",
716
+ "EmbeddingDropout": "32_text.models.awdlstm.ipynb",
717
+ "AWD_LSTM": "32_text.models.awdlstm.ipynb",
718
+ "awd_lstm_lm_split": "32_text.models.awdlstm.ipynb",
719
+ "awd_lstm_lm_config": "32_text.models.awdlstm.ipynb",
720
+ "awd_lstm_clas_split": "32_text.models.awdlstm.ipynb",
721
+ "awd_lstm_clas_config": "32_text.models.awdlstm.ipynb",
722
+ "LinearDecoder": "33_text.models.core.ipynb",
723
+ "SequentialRNN": "33_text.models.core.ipynb",
724
+ "get_language_model": "33_text.models.core.ipynb",
725
+ "SentenceEncoder": "33_text.models.core.ipynb",
726
+ "masked_concat_pool": "33_text.models.core.ipynb",
727
+ "PoolingLinearClassifier": "33_text.models.core.ipynb",
728
+ "get_text_classifier": "33_text.models.core.ipynb",
729
+ "ModelResetter": "34_callback.rnn.ipynb",
730
+ "RNNCallback": "34_callback.rnn.ipynb",
731
+ "RNNRegularizer": "34_callback.rnn.ipynb",
732
+ "rnn_cbs": "34_callback.rnn.ipynb",
733
+ "match_embeds": "37_text.learner.ipynb",
734
+ "load_ignore_keys": "37_text.learner.ipynb",
735
+ "clean_raw_keys": "37_text.learner.ipynb",
736
+ "load_model_text": "37_text.learner.ipynb",
737
+ "TextLearner": "37_text.learner.ipynb",
738
+ "decode_spec_tokens": "37_text.learner.ipynb",
739
+ "LMLearner": "37_text.learner.ipynb",
740
+ "language_model_learner": "37_text.learner.ipynb",
741
+ "text_classifier_learner": "37_text.learner.ipynb",
742
+ "make_date": "40_tabular.core.ipynb",
743
+ "add_datepart": "40_tabular.core.ipynb",
744
+ "add_elapsed_times": "40_tabular.core.ipynb",
745
+ "cont_cat_split": "40_tabular.core.ipynb",
746
+ "df_shrink_dtypes": "40_tabular.core.ipynb",
747
+ "df_shrink": "40_tabular.core.ipynb",
748
+ "Tabular": "40_tabular.core.ipynb",
749
+ "TabularPandas": "40_tabular.core.ipynb",
750
+ "TabularProc": "40_tabular.core.ipynb",
751
+ "Categorify": "40_tabular.core.ipynb",
752
+ "setups": "40_tabular.core.ipynb",
753
+ "FillStrategy": "40_tabular.core.ipynb",
754
+ "FillMissing": "40_tabular.core.ipynb",
755
+ "ReadTabBatch": "40_tabular.core.ipynb",
756
+ "TabDataLoader": "40_tabular.core.ipynb",
757
+ "TabularDataLoaders": "41_tabular.data.ipynb",
758
+ "TabularDataLoaders.from_csv": "41_tabular.data.ipynb",
759
+ "emb_sz_rule": "42_tabular.model.ipynb",
760
+ "get_emb_sz": "42_tabular.model.ipynb",
761
+ "TabularModel": "42_tabular.model.ipynb",
762
+ "tabular_config": "42_tabular.model.ipynb",
763
+ "TabularLearner": "43_tabular.learner.ipynb",
764
+ "tabular_learner": "43_tabular.learner.ipynb",
765
+ "TabularCollab": "45_collab.ipynb",
766
+ "CollabDataLoaders": "45_collab.ipynb",
767
+ "CollabDataLoaders.from_csv": "45_collab.ipynb",
768
+ "EmbeddingDotBias": "45_collab.ipynb",
769
+ "EmbeddingNN": "45_collab.ipynb",
770
+ "collab_learner": "45_collab.ipynb",
771
+ "get_dicom_files": "60_medical.imaging.ipynb",
772
+ "Path.dcmread": "60_medical.imaging.ipynb",
773
+ "TensorDicom": "60_medical.imaging.ipynb",
774
+ "PILDicom": "60_medical.imaging.ipynb",
775
+ "Path.png16read": "60_medical.imaging.ipynb",
776
+ "pixels": "60_medical.imaging.ipynb",
777
+ "scaled_px": "60_medical.imaging.ipynb",
778
+ "array_freqhist_bins": "60_medical.imaging.ipynb",
779
+ "Tensor.freqhist_bins": "60_medical.imaging.ipynb",
780
+ "Tensor.hist_scaled_pt": "60_medical.imaging.ipynb",
781
+ "Tensor.hist_scaled": "60_medical.imaging.ipynb",
782
+ "DcmDataset.hist_scaled": "60_medical.imaging.ipynb",
783
+ "Tensor.windowed": "60_medical.imaging.ipynb",
784
+ "DcmDataset.windowed": "60_medical.imaging.ipynb",
785
+ "dicom_windows": "60_medical.imaging.ipynb",
786
+ "TensorCTScan": "60_medical.imaging.ipynb",
787
+ "PILCTScan": "60_medical.imaging.ipynb",
788
+ "DcmDataset.show": "60_medical.imaging.ipynb",
789
+ "DcmDataset.pct_in_window": "60_medical.imaging.ipynb",
790
+ "uniform_blur2d": "60_medical.imaging.ipynb",
791
+ "gauss_blur2d": "60_medical.imaging.ipynb",
792
+ "Tensor.mask_from_blur": "60_medical.imaging.ipynb",
793
+ "DcmDataset.mask_from_blur": "60_medical.imaging.ipynb",
794
+ "mask2bbox": "60_medical.imaging.ipynb",
795
+ "crop_resize": "60_medical.imaging.ipynb",
796
+ "Tensor.to_nchan": "60_medical.imaging.ipynb",
797
+ "DcmDataset.to_nchan": "60_medical.imaging.ipynb",
798
+ "Tensor.to_3chan": "60_medical.imaging.ipynb",
799
+ "DcmDataset.to_3chan": "60_medical.imaging.ipynb",
800
+ "Tensor.save_jpg": "60_medical.imaging.ipynb",
801
+ "DcmDataset.save_jpg": "60_medical.imaging.ipynb",
802
+ "Tensor.to_uint16": "60_medical.imaging.ipynb",
803
+ "DcmDataset.to_uint16": "60_medical.imaging.ipynb",
804
+ "Tensor.save_tif16": "60_medical.imaging.ipynb",
805
+ "DcmDataset.save_tif16": "60_medical.imaging.ipynb",
806
+ "DcmDataset.set_pixels": "60_medical.imaging.ipynb",
807
+ "DcmDataset.pixel_array": "60_medical.imaging.ipynb",
808
+ "DcmDataset.zoom": "60_medical.imaging.ipynb",
809
+ "DcmDataset.zoom_to": "60_medical.imaging.ipynb",
810
+ "DcmDataset.as_dict": "60_medical.imaging.ipynb",
811
+ "pd.DataFrame.from_dicoms": "60_medical.imaging.ipynb",
812
+ "DicomSegmentationDataLoaders": "60_medical.imaging.ipynb",
813
+ "WandbCallback": "70_callback.wandb.ipynb",
814
+ "Learner.gather_args": "70_callback.wandb.ipynb",
815
+ "log_dataset": "70_callback.wandb.ipynb",
816
+ "log_model": "70_callback.wandb.ipynb",
817
+ "TensorBoardBaseCallback": "70a_callback.tensorboard.ipynb",
818
+ "TensorBoardCallback": "70a_callback.tensorboard.ipynb",
819
+ "TensorBoardProjectorCallback": "70a_callback.tensorboard.ipynb",
820
+ "projector_word_embeddings": "70a_callback.tensorboard.ipynb",
821
+ "NeptuneCallback": "70b_callback.neptune.ipynb",
822
+ "json_clean": "70c_callback.captum.ipynb",
823
+ "jsonutil.json_clean": "70c_callback.captum.ipynb",
824
+ "CaptumInterpretation": "70c_callback.captum.ipynb",
825
+ "CaptumInterpretation.insights": "70c_callback.captum.ipynb",
826
+ "CometCallback": "70d_callback.comet.ipynb",
827
+ "synth_dbunch": "97_test_utils.ipynb",
828
+ "RegModel": "97_test_utils.ipynb",
829
+ "synth_learner": "97_test_utils.ipynb",
830
+ "VerboseCallback": "97_test_utils.ipynb",
831
+ "get_env": "97_test_utils.ipynb",
832
+ "try_import": "97_test_utils.ipynb",
833
+ "nvidia_smi": "97_test_utils.ipynb",
834
+ "nvidia_mem": "97_test_utils.ipynb",
835
+ "show_install": "97_test_utils.ipynb",
836
+ "PYTORCH_URL": "99_pytorch_doc.ipynb",
837
+ "pytorch_doc_link": "99_pytorch_doc.ipynb"}
838
+
839
+ modules = ["torch_core.py",
840
+ "layers.py",
841
+ "losses.py",
842
+ "data/load.py",
843
+ "data/core.py",
844
+ "data/external.py",
845
+ "data/transforms.py",
846
+ "data/block.py",
847
+ "vision/core.py",
848
+ "vision/data.py",
849
+ "vision/augment.py",
850
+ "vision/utils.py",
851
+ "vision/widgets.py",
852
+ "vision/models/xresnet.py",
853
+ "optimizer.py",
854
+ "callback/core.py",
855
+ "learner.py",
856
+ "metrics.py",
857
+ "callback/schedule.py",
858
+ "callback/data.py",
859
+ "callback/hook.py",
860
+ "vision/models/unet.py",
861
+ "callback/progress.py",
862
+ "callback/tracker.py",
863
+ "callback/fp16.py",
864
+ "callback/training.py",
865
+ "callback/preds.py",
866
+ "callback/mixup.py",
867
+ "interpret.py",
868
+ "distributed.py",
869
+ "vision/learner.py",
870
+ "vision/gan.py",
871
+ "text/core.py",
872
+ "text/data.py",
873
+ "text/models/awdlstm.py",
874
+ "text/models/core.py",
875
+ "callback/rnn.py",
876
+ "text/learner.py",
877
+ "tabular/core.py",
878
+ "tabular/data.py",
879
+ "tabular/model.py",
880
+ "tabular/learner.py",
881
+ "collab.py",
882
+ "medical/imaging.py",
883
+ "medical/text.py",
884
+ "callback/wandb.py",
885
+ "callback/tensorboard.py",
886
+ "callback/neptune.py",
887
+ "callback/captum.py",
888
+ "callback/comet.py",
889
+ "test_utils.py",
890
+ "_pytorch_doc.py"]
891
+
892
+ doc_url = "https://docs.fast.ai/"
893
+
894
+ git_url = "https://github.com/fastai/fastai/tree/master/"
895
+
896
+ def custom_doc_links(name):
897
+ from nbdev.showdoc import try_external_doc_link
898
+ return try_external_doc_link(name, ['fastcore', 'nbdev'])
899
+
fastai/_pytorch_doc.py ADDED
@@ -0,0 +1,47 @@
+ # AUTOGENERATED! DO NOT EDIT! File to edit: ../nbs/99_pytorch_doc.ipynb.
+
+ # %% ../nbs/99_pytorch_doc.ipynb 5
+ from __future__ import annotations
+ from types import ModuleType
+ from torch import Tensor  # needed by `_mod2page` below
+
+ # %% auto 0
+ __all__ = ['PYTORCH_URL', 'pytorch_doc_link']
+
+ # %% ../nbs/99_pytorch_doc.ipynb 6
+ PYTORCH_URL = 'https://pytorch.org/docs/stable/'
+
+ # %% ../nbs/99_pytorch_doc.ipynb 7
+ def _mod2page(
+     mod:ModuleType, # A PyTorch module
+ ) -> str:
+     "Get the webpage name for a PyTorch module"
+     if mod == Tensor: return 'tensors.html'
+     name = mod.__name__
+     name = name.replace('torch.', '').replace('utils.', '')
+     if name.startswith('nn.modules'): return 'nn.html'
+     return f'{name}.html'
+
+ # %% ../nbs/99_pytorch_doc.ipynb 9
+ import importlib
+
+ # %% ../nbs/99_pytorch_doc.ipynb 10
+ def pytorch_doc_link(
+     name:str # Name of a PyTorch module, class or function
+ ) -> (str, None):
+     "Get the URL to the documentation of a PyTorch module, class or function"
+     if name.startswith('F'): name = 'torch.nn.functional' + name[1:]
+     if not name.startswith('torch.'): name = 'torch.' + name
+     if name == 'torch.Tensor': return f'{PYTORCH_URL}tensors.html'
+     try:
+         mod = importlib.import_module(name)
+         return f'{PYTORCH_URL}{_mod2page(mod)}'
+     except: pass
+     splits = name.split('.')
+     mod_name,fname = '.'.join(splits[:-1]),splits[-1]
+     if mod_name == 'torch.Tensor': return f'{PYTORCH_URL}tensors.html#{name}'
+     try:
+         mod = importlib.import_module(mod_name)
+         page = _mod2page(mod)
+         return f'{PYTORCH_URL}{page}#{name}'
+     except: return None
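The link construction in `pytorch_doc_link` above is mostly string manipulation; the two `importlib` lookups only decide which documentation page a name lives on. A minimal standalone sketch of that name handling (helper names `_name2page`/`doc_link` are illustrative, and the module-import fallback is omitted so it runs without torch installed):

```python
PYTORCH_URL = 'https://pytorch.org/docs/stable/'

def _name2page(name: str) -> str:
    # Mirrors `_mod2page` above: strip the 'torch.'/'utils.' prefixes and
    # collapse anything under nn.modules onto the single nn.html page.
    name = name.replace('torch.', '').replace('utils.', '')
    if name.startswith('nn.modules'): return 'nn.html'
    return f'{name}.html'

def doc_link(name: str) -> str:
    # Mirrors the prefix normalization in `pytorch_doc_link` above.
    if name.startswith('F.'): name = 'torch.nn.functional' + name[1:]
    if not name.startswith('torch.'): name = 'torch.' + name
    if name == 'torch.Tensor': return f'{PYTORCH_URL}tensors.html'
    mod_name, _ = name.rsplit('.', 1)
    if mod_name == 'torch.Tensor': return f'{PYTORCH_URL}tensors.html#{name}'
    return f'{PYTORCH_URL}{_name2page(mod_name)}#{name}'
```

For example, `doc_link('F.normalize')` resolves to the `nn.functional.html` page with a `#torch.nn.functional.normalize` anchor, the same shape of URL the real function returns when the import succeeds.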
fastai/basics.py ADDED
@@ -0,0 +1,6 @@
+ from .data.all import *
+ from .optimizer import *
+ from .callback.core import *
+ from .learner import *
+ from .metrics import *
+ from .interpret import *
fastai/callback/__init__.py ADDED
@@ -0,0 +1 @@
+
fastai/callback/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (194 Bytes).
fastai/callback/__pycache__/all.cpython-310.pyc ADDED
Binary file (410 Bytes).
fastai/callback/__pycache__/channelslast.cpython-310.pyc ADDED
Binary file (1.68 kB).
fastai/callback/__pycache__/core.cpython-310.pyc ADDED
Binary file (9.38 kB).
fastai/callback/__pycache__/data.cpython-310.pyc ADDED
Binary file (3.8 kB).
fastai/callback/__pycache__/fp16.cpython-310.pyc ADDED
Binary file (11 kB).
fastai/callback/__pycache__/hook.cpython-310.pyc ADDED
Binary file (15.2 kB).
fastai/callback/__pycache__/mixup.cpython-310.pyc ADDED
Binary file (5.22 kB).
fastai/callback/__pycache__/preds.cpython-310.pyc ADDED
Binary file (1.14 kB).
fastai/callback/__pycache__/progress.cpython-310.pyc ADDED
Binary file (6.69 kB).
fastai/callback/__pycache__/rnn.cpython-310.pyc ADDED
Binary file (2.77 kB).
fastai/callback/__pycache__/schedule.cpython-310.pyc ADDED
Binary file (15.3 kB).
fastai/callback/__pycache__/tracker.cpython-310.pyc ADDED
Binary file (6.35 kB).
fastai/callback/__pycache__/training.cpython-310.pyc ADDED
Binary file (3.52 kB).
fastai/callback/all.py ADDED
@@ -0,0 +1,12 @@
+ from .core import *
+ from .data import *
+ from .fp16 import *
+ from .hook import *
+ from .mixup import *
+ from .progress import *
+ from .schedule import *
+ from .tracker import *
+ from .rnn import *
+ from .training import *
+ from .preds import *
+ from .channelslast import *
fastai/callback/azureml.py ADDED
@@ -0,0 +1,72 @@
+ # AUTOGENERATED! DO NOT EDIT! File to edit: nbs/74_callback.azureml.ipynb (unless otherwise specified).
+
+ __all__ = ['AzureMLCallback']
+
+ # Cell
+ from ..basics import *
+ from ..learner import Callback
+
+ # Cell
+ from azureml.core.run import Run
+ from azureml.exceptions import RunEnvironmentException
+ import warnings
+
+ # Cell
+ class AzureMLCallback(Callback):
+     """
+     Log losses, metrics, model architecture summary to AzureML.
+
+     If no `azurerun` is passed and the code is not actually running on AzureML, logging is disabled.
+     A custom AzureML `Run` class can be passed as `azurerun`.
+     If `log_to_parent` is True, will also log to the parent run, if one exists (e.g. in AzureML pipelines).
+     """
+     order = Recorder.order+1
+
+     def __init__(self, azurerun=None, log_to_parent=True):
+         if azurerun:
+             self.azurerun = azurerun
+         else:
+             try:
+                 self.azurerun = Run.get_context(allow_offline=False)
+             except RunEnvironmentException:
+                 # running locally
+                 self.azurerun = None
+                 warnings.warn("Not running on AzureML and no azurerun passed, AzureMLCallback will be disabled.")
+         self.log_to_parent = log_to_parent
+
+     def before_fit(self):
+         self._log("n_epoch", self.learn.n_epoch)
+         self._log("model_class", str(type(self.learn.model)))
+
+         try:
+             summary_file = Path("outputs") / 'model_summary.txt'
+             with summary_file.open("w") as f:
+                 f.write(repr(self.learn.model))
+         except:
+             print('Did not log model summary. Check if your model is a PyTorch model.')
+
+     def after_batch(self):
+         # log loss and opt.hypers
+         if self.learn.training:
+             self._log('batch__loss', self.learn.loss.item())
+             self._log('batch__train_iter', self.learn.train_iter)
+             for i, h in enumerate(self.learn.opt.hypers):
+                 for k, v in h.items():
+                     self._log(f'batch__opt.hypers.{k}', v)
+
+     def after_epoch(self):
+         # log metrics
+         for n, v in zip(self.learn.recorder.metric_names, self.learn.recorder.log):
+             if n not in ['epoch', 'time']:
+                 self._log(f'epoch__{n}', v)
+             if n == 'time':
+                 # split elapsed time string, then convert into 'seconds' to log
+                 m, s = str(v).split(':')
+                 elapsed = int(m)*60 + int(s)
+                 self._log(f'epoch__{n}', elapsed)
+
+     def _log(self, metric, value):
+         if self.azurerun is not None:
+             self.azurerun.log(metric, value)
+             if self.log_to_parent and self.azurerun.parent is not None:
+                 self.azurerun.parent.log(metric, value)
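The elapsed-time handling in `after_epoch` above assumes fastai's `Recorder` formats the time column as `mm:ss`. A minimal standalone sketch of that conversion (the helper name `elapsed_seconds` is illustrative):

```python
def elapsed_seconds(t: str) -> int:
    # "mm:ss" -> total seconds, as in AzureMLCallback.after_epoch above
    m, s = t.split(':')
    return int(m) * 60 + int(s)
```

Note that an epoch longer than an hour produces an `h:mm:ss` string, which this two-way split does not handle; the callback above shares that limitation.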
fastai/callback/captum.py ADDED
@@ -0,0 +1,113 @@
+ # AUTOGENERATED! DO NOT EDIT! File to edit: ../../nbs/70c_callback.captum.ipynb.
+
+ # %% ../../nbs/70c_callback.captum.ipynb 3
+ from __future__ import annotations
+ import tempfile
+ from ..basics import *
+
+ # %% auto 0
+ __all__ = ['CaptumInterpretation']
+
+ # %% ../../nbs/70c_callback.captum.ipynb 6
+ from ipykernel import jsonutil
+
+ # %% ../../nbs/70c_callback.captum.ipynb 7
+ # Dirty hack as json_clean doesn't support CategoryMap type
+ _json_clean=jsonutil.json_clean
+ def json_clean(o):
+     o = list(o.items) if isinstance(o,CategoryMap) else o
+     return _json_clean(o)
+
+ jsonutil.json_clean = json_clean
+
+ # %% ../../nbs/70c_callback.captum.ipynb 8
+ from captum.attr import IntegratedGradients,NoiseTunnel,GradientShap,Occlusion
+ from captum.attr import visualization as viz
+
+ from matplotlib.colors import LinearSegmentedColormap
+
+ from captum.insights import AttributionVisualizer, Batch
+ from captum.insights.attr_vis.features import ImageFeature
+
+ # %% ../../nbs/70c_callback.captum.ipynb 16
+ class CaptumInterpretation():
+     "Captum Interpretation for Resnet"
+     def __init__(self,learn,cmap_name='custom blue',colors=None,N=256,methods=('original_image','heat_map'),
+                  signs=("all", "positive"),outlier_perc=1):
+         if colors is None: colors = [(0, '#ffffff'),(0.25, '#000000'),(1, '#000000')]
+         store_attr()
+         self.dls,self.model = learn.dls,learn.model
+         self.supported_metrics=['IG','NT','Occl']
+
+     def get_baseline_img(self, img_tensor,baseline_type):
+         baseline_img=None
+         if baseline_type=='zeros': baseline_img= img_tensor*0
+         if baseline_type=='uniform': baseline_img= torch.rand(img_tensor.shape)
+         if baseline_type=='gauss':
+             baseline_img= (torch.rand(img_tensor.shape).to(self.dls.device)+img_tensor)/2
+         return baseline_img.to(self.dls.device)
+
+     def visualize(self,inp,metric='IG',n_steps=1000,baseline_type='zeros',nt_type='smoothgrad', strides=(3,4,4), sliding_window_shapes=(3,15,15)):
+         if metric not in self.supported_metrics:
+             raise Exception(f"Metric {metric} is not supported. Currently only {self.supported_metrics} are supported")
+         tls = L([TfmdLists(inp, t) for t in L(ifnone(self.dls.tfms,[None]))])
+         inp_data=list(zip(*(tls[0],tls[1])))[0]
+         enc_data,dec_data=self._get_enc_dec_data(inp_data)
+         attributions=self._get_attributions(enc_data,metric,n_steps,nt_type,baseline_type,strides,sliding_window_shapes)
+         self._viz(attributions,dec_data,metric)
+
+     def _viz(self,attributions,dec_data,metric):
+         default_cmap = LinearSegmentedColormap.from_list(self.cmap_name,self.colors, N=self.N)
+         _ = viz.visualize_image_attr_multiple(np.transpose(attributions.squeeze().cpu().detach().numpy(), (1,2,0)),
+                                               np.transpose(dec_data[0].numpy(), (1,2,0)),
+                                               methods=self.methods,
+                                               cmap=default_cmap,
+                                               show_colorbar=True,
+                                               signs=self.signs,
+                                               outlier_perc=self.outlier_perc, titles=[f'Original Image - ({dec_data[1]})', metric])
+
+     def _get_enc_dec_data(self,inp_data):
+         dec_data=self.dls.after_item(inp_data)
+         enc_data=self.dls.after_batch(to_device(self.dls.before_batch(dec_data),self.dls.device))
+         return(enc_data,dec_data)
+
+     def _get_attributions(self,enc_data,metric,n_steps,nt_type,baseline_type,strides,sliding_window_shapes):
+         # Get Baseline
+         baseline=self.get_baseline_img(enc_data[0],baseline_type)
+         if metric == 'IG':
+             self._int_grads = self._int_grads if hasattr(self,'_int_grads') else IntegratedGradients(self.model)
+             return self._int_grads.attribute(enc_data[0],baseline, target=enc_data[1], n_steps=n_steps)
+         elif metric == 'NT':
+             self._int_grads = self._int_grads if hasattr(self,'_int_grads') else IntegratedGradients(self.model)
+             self._noise_tunnel= self._noise_tunnel if hasattr(self,'_noise_tunnel') else NoiseTunnel(self._int_grads)
+             return self._noise_tunnel.attribute(enc_data[0].to(self.dls.device), n_samples=1, nt_type=nt_type, target=enc_data[1])
+         elif metric == 'Occl':
+             self._occlusion = self._occlusion if hasattr(self,'_occlusion') else Occlusion(self.model)
+             return self._occlusion.attribute(enc_data[0].to(self.dls.device),
+                                              strides = strides,
+                                              target=enc_data[1],
+                                              sliding_window_shapes=sliding_window_shapes,
+                                              baselines=baseline)
+
+ # %% ../../nbs/70c_callback.captum.ipynb 26
+ @patch
+ def insights(x: CaptumInterpretation,inp_data,debug=True):
+     _baseline_func= lambda o: o*0
+     _get_vocab = lambda vocab: list(map(str,vocab)) if isinstance(vocab[0],bool) else vocab
+     dl = x.dls.test_dl(L(inp_data),with_labels=True, bs=4)
+     normalize_func= next((func for func in dl.after_batch if type(func)==Normalize),noop)
+
+     # captum v0.3 expects tensors without the batch dimension.
+     if nested_attr(normalize_func, 'mean.ndim', 4)==4: normalize_func.mean.squeeze_(0)
+     if nested_attr(normalize_func, 'std.ndim', 4)==4: normalize_func.std.squeeze_(0)
+
+     visualizer = AttributionVisualizer(
+         models=[x.model],
+         score_func=lambda o: torch.nn.functional.softmax(o, 1),
+         classes=_get_vocab(dl.vocab),
+         features=[ImageFeature("Image", baseline_transforms=[_baseline_func], input_transforms=[normalize_func])],
+         dataset=x._formatted_data_iter(dl,normalize_func))
+     visualizer.render(debug=debug)
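The `json_clean` shim near the top of captum.py follows a common wrap-and-delegate monkey-patching pattern: keep a reference to the original function, pre-process the one type it cannot handle, then rebind the module attribute. A self-contained sketch of the same pattern applied to `json.dumps` as a stand-in (the target and the `set` pre-processing here are illustrative, not from fastai or ipykernel):

```python
import json

# Save the original, wrap it to handle one extra type, then rebind it --
# the same shape as the `jsonutil.json_clean = json_clean` patch above.
_orig_dumps = json.dumps

def dumps(o, **kwargs):
    # pre-process sets (which json does not support) before delegating
    o = sorted(o) if isinstance(o, set) else o
    return _orig_dumps(o, **kwargs)

json.dumps = dumps
```

Keeping the original in a module-level name (`_json_clean` in the fastai code, `_orig_dumps` here) is what makes the patch safe to apply once: the wrapper always delegates to the unpatched function rather than to itself.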
fastai/callback/channelslast.py ADDED
@@ -0,0 +1,41 @@
+ # AUTOGENERATED! DO NOT EDIT! File to edit: ../../nbs/18c_callback.channelslast.ipynb.
+
+ # %% ../../nbs/18c_callback.channelslast.ipynb 1
+ from __future__ import annotations
+ from ..basics import *
+ from .fp16 import MixedPrecision
+
+ from torch.cuda.amp import GradScaler
+
+ # %% auto 0
+ __all__ = ['ChannelsLast']
+
+ # %% ../../nbs/18c_callback.channelslast.ipynb 7
+ class ChannelsLast(Callback):
+     "Channels last training using PyTorch's Channels Last Memory Format (beta)"
+     order = -1 # Needs to run before any model modification callbacks occur
+     def before_fit(self):
+         self.learn.model.to(memory_format=torch.channels_last)
+
+ # %% ../../nbs/18c_callback.channelslast.ipynb 9
+ @patch
+ @delegates(GradScaler)
+ def to_channelslast(self:Learner,
+     to_fp16:bool=True, # Add `MixedPrecision` callback. Recommended for full channels last performance
+     **kwargs
+ ):
+     "Set `Learner` and inputs to `channels_last` format and `MixedPrecision` by default"
+     if to_fp16 and not hasattr(self, 'mixed_precision') and not hasattr(self, 'channels_last'):
+         return self.add_cbs([ChannelsLast(), MixedPrecision(**kwargs)])
+     elif not hasattr(self, 'channels_last'):
+         return self.add_cb(ChannelsLast())
+
+ # %% ../../nbs/18c_callback.channelslast.ipynb 10
+ @patch
+ def to_contiguous(self:Learner, to_fp32:bool=False):
+     "Set `Learner` and inputs to `contiguous_format` (default format), optionally to single precision"
+     self.model.to(memory_format=torch.contiguous_format)
+     if to_fp32:
+         return self.remove_cbs([ChannelsLast, MixedPrecision])
+     else:
+         return self.remove_cb(ChannelsLast)
fastai/callback/comet.py ADDED
@@ -0,0 +1,91 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # AUTOGENERATED! DO NOT EDIT! File to edit: ../../nbs/70d_callback.comet.ipynb.
2
+
3
+ # %% ../../nbs/70d_callback.comet.ipynb 3
4
+ from __future__ import annotations
5
+
6
+ import tempfile
7
+
8
+ from ..basics import *
9
+ from ..learner import Callback
10
+
11
+ # %% auto 0
12
+ __all__ = ['CometCallback']
13
+
14
+ # %% ../../nbs/70d_callback.comet.ipynb 12
15
+ import comet_ml
16
+
17
+ # %% ../../nbs/70d_callback.comet.ipynb 13
18
+ class CometCallback(Callback):
19
+ "Log losses, metrics, model weights, model architecture summary to neptune"
20
+ order = Recorder.order + 1
21
+
22
+ def __init__(self, project_name, log_model_weights=True):
23
+ self.log_model_weights = log_model_weights
24
+ self.keep_experiment_running = keep_experiment_running
25
+ self.project_name = project_name
26
+ self.experiment = None
27
+
28
+ def before_fit(self):
29
+ try:
30
+ self.experiment = comet_ml.Experiment(project_name=self.project_name)
31
+ except ValueError:
32
+ print("No active experiment")
33
+
34
+ try:
35
+ self.experiment.log_parameter("n_epoch", str(self.learn.n_epoch))
36
+ self.experiment.log_parameter("model_class", str(type(self.learn.model)))
37
+ except:
38
+ print(f"Did not log all properties.")
39
+
40
+ try:
41
+ with tempfile.NamedTemporaryFile(mode="w") as f:
42
+ with open(f.name, "w") as g:
43
+ g.write(repr(self.learn.model))
44
+ self.experiment.log_asset(f.name, "model_summary.txt")
45
+ except:
46
+ print("Did not log model summary. Check if your model is PyTorch model.")
47
+
48
+ if self.log_model_weights and not hasattr(self.learn, "save_model"):
49
+ print(
50
+ "Unable to log model to Comet.\n",
51
+ )
52
+
53
+ def after_batch(self):
54
+ # log loss and opt.hypers
55
+ if self.learn.training:
56
+ self.experiment.log_metric("batch__smooth_loss", self.learn.smooth_loss)
57
+ self.experiment.log_metric("batch__loss", self.learn.loss)
58
+ self.experiment.log_metric("batch__train_iter", self.learn.train_iter)
59
+ for i, h in enumerate(self.learn.opt.hypers):
60
+ for k, v in h.items():
61
+ self.experiment.log_metric(f"batch__opt.hypers.{k}", v)
62
+
63
+ def after_epoch(self):
64
+ # log metrics
65
+ for n, v in zip(self.learn.recorder.metric_names, self.learn.recorder.log):
66
+ if n not in ["epoch", "time"]:
67
+ self.experiment.log_metric(f"epoch__{n}", v)
68
+ if n == "time":
69
+ self.experiment.log_text(f"epoch__{n}", str(v))
70
+
71
+ # log model weights
72
+ if self.log_model_weights and hasattr(self.learn, "save_model"):
73
+ if self.learn.save_model.every_epoch:
74
+ _file = join_path_file(
75
+ f"{self.learn.save_model.fname}_{self.learn.save_model.epoch}",
76
+ self.learn.path / self.learn.model_dir,
77
+ ext=".pth",
78
+ )
79
+ else:
80
+ _file = join_path_file(
81
+ self.learn.save_model.fname,
82
+ self.learn.path / self.learn.model_dir,
83
+ ext=".pth",
84
+ )
85
+ self.experiment.log_asset(_file)
86
+
87
+ def after_fit(self):
88
+ try:
89
+ self.experiment.end()
90
+ except Exception:
91
+ print("No neptune experiment to stop.")
fastai/callback/core.py ADDED
@@ -0,0 +1,187 @@
1
+ # AUTOGENERATED! DO NOT EDIT! File to edit: ../../nbs/13_callback.core.ipynb.
2
+
3
+ # %% ../../nbs/13_callback.core.ipynb 2
4
+ from __future__ import annotations
5
+ from ..data.all import *
6
+ from ..optimizer import *
7
+ from ..losses import BaseLoss
8
+
9
+ # %% auto 0
10
+ __all__ = ['Callback', 'TrainEvalCallback', 'GatherPredsCallback', 'FetchPredsCallback', 'CancelStepException',
11
+ 'CancelBackwardException', 'CancelFitException', 'CancelEpochException', 'CancelTrainException',
12
+ 'CancelValidException', 'CancelBatchException', 'event']
13
+
14
+ # %% ../../nbs/13_callback.core.ipynb 4
15
+ _all_ = ['CancelStepException','CancelBackwardException','CancelFitException','CancelEpochException','CancelTrainException','CancelValidException','CancelBatchException']
16
+
17
+ # %% ../../nbs/13_callback.core.ipynb 8
18
+ _events = L.split('after_create before_fit before_epoch before_train before_batch after_pred after_loss \
19
+ before_backward after_cancel_backward after_backward before_step after_cancel_step after_step \
20
+ after_cancel_batch after_batch after_cancel_train after_train before_validate after_cancel_validate \
21
+ after_validate after_cancel_epoch after_epoch after_cancel_fit after_fit')
22
+
23
+ mk_class('event', **_events.map_dict(),
24
+ doc="All possible events as attributes to get tab-completion and typo-proofing")
25
+
26
+ # %% ../../nbs/13_callback.core.ipynb 9
27
+ _all_ = ['event']
28
+
29
+ # %% ../../nbs/13_callback.core.ipynb 14
30
+ _inner_loop = "before_batch after_pred after_loss before_backward after_cancel_backward after_backward before_step after_step after_cancel_batch after_batch".split()
31
+
32
+ # %% ../../nbs/13_callback.core.ipynb 15
33
+ _ex_docs = dict(
34
+ CancelBatchException="Skip the rest of this batch and go to `after_batch`",
35
+ CancelTrainException="Skip the rest of the training part of the epoch and go to `after_train`",
36
+ CancelValidException="Skip the rest of the validation part of the epoch and go to `after_validate`",
37
+ CancelEpochException="Skip the rest of this epoch and go to `after_epoch`",
38
+ CancelStepException ="Skip stepping the optimizer",
39
+ CancelBackwardException="Skip the backward pass and go to `after_backward`",
40
+ CancelFitException ="Interrupts training and go to `after_fit`")
41
+
42
+ for c,d in _ex_docs.items(): mk_class(c,sup=Exception,doc=d)
43
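`mk_class(c, sup=Exception, doc=d)` builds each cancellation exception class dynamically from the `_ex_docs` dict. The same effect can be sketched with the built-in `type()` directly, without fastcore:

```python
_ex_docs = dict(
    CancelBatchException="Skip the rest of this batch and go to `after_batch`",
    CancelFitException="Interrupt training and go to `after_fit`",
)

# type(name, bases, namespace) creates a class at runtime
for name, doc in _ex_docs.items():
    globals()[name] = type(name, (Exception,), {"__doc__": doc})

try:
    raise CancelFitException()
except CancelFitException as e:
    caught = type(e).__name__

print(caught)  # CancelFitException
```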
+
44
+ # %% ../../nbs/13_callback.core.ipynb 16
45
+ @funcs_kwargs(as_method=True)
46
+ class Callback(Stateful,GetAttr):
47
+ "Basic class handling tweaks of the training loop by changing a `Learner` in various events"
48
+ order,_default,learn,run,run_train,run_valid = 0,'learn',None,True,True,True
49
+ _methods = _events
50
+
51
+ def __init__(self, **kwargs): assert not kwargs, f'Passed unknown events: {kwargs}'
52
+ def __repr__(self): return type(self).__name__
53
+
54
+ def __call__(self, event_name):
55
+ "Call `self.{event_name}` if it's defined"
56
+ _run = (event_name not in _inner_loop or (self.run_train and getattr(self, 'training', True)) or
57
+ (self.run_valid and not getattr(self, 'training', False)))
58
+ res = None
59
+ if self.run and _run:
60
+ try: res = getcallable(self, event_name)()
61
+ except (CancelBatchException, CancelBackwardException, CancelEpochException, CancelFitException, CancelStepException, CancelTrainException, CancelValidException): raise
62
+ except Exception as e: raise modify_exception(e, f'Exception occurred in `{self.__class__.__name__}` when calling event `{event_name}`:\n\t{e.args[0]}', replace=True)
63
+ if event_name=='after_fit': self.run=True #Reset self.run to True at each end of fit
64
+ return res
65
+
66
+ def __setattr__(self, name, value):
67
+ "Set an attribute for a `Callback`"
68
+ if hasattr(self.learn,name):
69
+ warn(f"You are shadowing an attribute ({name}) that exists in the learner. Use `self.learn.{name}` to avoid this")
70
+ super().__setattr__(name, value)
71
+
72
+ @property
73
+ def name(self):
74
+ "Name of the `Callback`, camel-cased and with '*Callback*' removed"
75
+ return class2attr(self, 'Callback')
76
+
77
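Under the hood, `Callback.__call__` looks the event name up on the instance and only runs it if defined. A stripped-down sketch of that dispatch, without fastcore and without the `run`/`training` gating (`MiniCallback` is a hypothetical name):

```python
class MiniCallback:
    def __call__(self, event_name):
        # run `self.<event_name>` if the subclass defines it, else do nothing
        fn = getattr(self, event_name, None)
        return fn() if callable(fn) else None

class LoggingCallback(MiniCallback):
    def __init__(self): self.log = []
    def before_fit(self): self.log.append("before_fit")
    def after_batch(self): self.log.append("after_batch")

cb = LoggingCallback()
for ev in ("before_fit", "after_pred", "after_batch"):
    cb(ev)  # "after_pred" is not defined on LoggingCallback, so it is skipped

print(cb.log)  # ['before_fit', 'after_batch']
```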
+ # %% ../../nbs/13_callback.core.ipynb 34
78
+ class TrainEvalCallback(Callback):
79
+ "`Callback` that tracks the number of iterations done and properly sets training/eval mode"
80
+ order,run_valid = -10,False
81
+ def after_create(self): self.learn.n_epoch = 1
82
+
83
+ def before_fit(self):
84
+ "Set the iter and epoch counters to 0, put the model and the right device"
85
+ self.learn.epoch,self.learn.loss = 0,tensor(0.)
86
+ self.learn.train_iter,self.learn.pct_train = 0,0.
87
+ device = getattr(self.dls, 'device', default_device())
88
+ self.model.to(device)
89
+ if isinstance(self.loss_func, (nn.Module, BaseLoss)): self.loss_func.to(device)
90
+ if hasattr(self.model, 'reset'): self.model.reset()
91
+
92
+ def after_batch(self):
93
+ "Update the iter counter (in training mode)"
94
+ self.learn.pct_train += 1./(self.n_iter*self.n_epoch)
95
+ self.learn.train_iter += 1
96
+
97
+ def before_train(self):
98
+ "Set the model to training mode"
99
+ self.learn.pct_train=self.epoch/self.n_epoch
100
+ self.model.train()
101
+ self.learn.training=True
102
+
103
+ def before_validate(self):
104
+ "Set the model to validation mode"
105
+ self.model.eval()
106
+ self.learn.training=False
107
+
108
+ # %% ../../nbs/13_callback.core.ipynb 38
109
+ if not hasattr(defaults, 'callbacks'): defaults.callbacks = [TrainEvalCallback]
110
+
111
+ # %% ../../nbs/13_callback.core.ipynb 52
112
+ class GatherPredsCallback(Callback):
113
+ "`Callback` that returns all predictions and targets, optionally `with_input` or `with_loss`"
114
+ _stateattrs=('preds','targets','inputs','losses')
115
+ def __init__(self,
116
+ with_input:bool=False, # Whether to return inputs
117
+ with_loss:bool=False, # Whether to return losses
118
+ save_preds:Path=None, # Path to save predictions
119
+ save_targs:Path=None, # Path to save targets
120
+ with_preds:bool=True, # Whether to return predictions
121
+ with_targs:bool=True, # Whether to return targets
122
+ concat_dim:int=0, # Dimension to concatenate returned tensors
123
+ pickle_protocol:int=2 # Pickle protocol used to save predictions and targets
124
+ ):
125
+ store_attr()
126
+
127
+ def before_batch(self):
128
+ "If `with_input`, detach batch inputs"
129
+ if self.with_input: self.inputs.append((self.learn.to_detach(self.xb)))
130
+
131
+ def before_validate(self):
132
+ "Initialize containers"
133
+ self.preds,self.targets = [],[]
134
+ if self.with_input: self.inputs = []
135
+ if self.with_loss: self.losses = []
136
+
137
+ def after_batch(self):
138
+ "Save predictions, targets and potentially losses"
139
+ if not hasattr(self, 'pred'): return
140
+ preds,targs = self.learn.to_detach(self.pred),self.learn.to_detach(self.yb)
141
+ if self.with_preds: self.preds.append(preds)
142
+ if self.with_targs: self.targets.append(targs)
143
+ if self.save_preds is not None:
144
+ torch.save(preds, self.save_preds/str(self.iter), pickle_protocol=self.pickle_protocol)
145
+ if self.save_targs is not None:
146
+ torch.save(targs[0], self.save_targs/str(self.iter), pickle_protocol=self.pickle_protocol)
147
+ if self.with_loss:
148
+ bs = find_bs(self.yb)
149
+ loss = self.loss if self.loss.numel() == bs else self.loss.view(bs,-1).mean(1)
150
+ self.losses.append(self.learn.to_detach(loss))
151
+
152
+ def after_validate(self):
153
+ "Concatenate all recorded tensors"
154
+ if not hasattr(self, 'preds'): return
155
+ if self.with_input: self.inputs = detuplify(to_concat(self.inputs, dim=self.concat_dim))
156
+ if self.with_preds: self.preds = detuplify(to_concat(self.preds, dim=self.concat_dim))
157
+ if self.with_targs: self.targets = detuplify(to_concat(self.targets, dim=self.concat_dim))
158
+ if self.with_loss: self.losses = to_concat(self.losses)
159
+
160
+ def all_tensors(self) -> (Tensor, list):
161
+ "Returns all recorded tensors in the order [inputs, preds, targets, losses]"
162
+ res = [self.preds if self.with_preds else None, self.targets if self.with_targs else None]
163
+ if self.with_input: res = [self.inputs] + res
164
+ if self.with_loss: res.append(self.losses)
165
+ return res
166
+
167
+ # %% ../../nbs/13_callback.core.ipynb 54
168
+ class FetchPredsCallback(Callback):
169
+ "A callback to fetch predictions during the training loop"
170
+ remove_on_fetch = True
171
+ def __init__(self,
172
+ ds_idx:int=1, # Index of dataset, 0 for train, 1 for valid, used if `dl` is not present
173
+ dl:DataLoader=None, # `DataLoader` used for fetching `Learner` predictions
174
+ with_input:bool=False, # Whether to return inputs in `GatherPredsCallback`
175
+ with_decoded:bool=False, # Whether to return decoded predictions
176
+ cbs:Callback|MutableSequence=None, # `Callback` to temporarily remove from `Learner`
177
+ reorder:bool=True # Whether to sort prediction results
178
+ ):
179
+ self.cbs = L(cbs)
180
+ store_attr('ds_idx,dl,with_input,with_decoded,reorder')
181
+
182
+ def after_validate(self):
183
+ "Fetch predictions from `Learner` without `self.cbs` and `remove_on_fetch` callbacks"
184
+ to_rm = L(cb for cb in self.learn.cbs if getattr(cb, 'remove_on_fetch', False))
185
+ with self.learn.removed_cbs(to_rm + self.cbs) as learn:
186
+ self.preds = learn.get_preds(ds_idx=self.ds_idx, dl=self.dl,
187
+ with_input=self.with_input, with_decoded=self.with_decoded, inner=True, reorder=self.reorder)
fastai/callback/data.py ADDED
@@ -0,0 +1,71 @@
1
+ # AUTOGENERATED! DO NOT EDIT! File to edit: ../../nbs/14a_callback.data.ipynb.
2
+
3
+ # %% ../../nbs/14a_callback.data.ipynb 3
4
+ from __future__ import annotations
5
+ from ..basics import *
6
+
7
+ # %% auto 0
8
+ __all__ = ['CollectDataCallback', 'WeightedDL', 'PartialDL']
9
+
10
+ # %% ../../nbs/14a_callback.data.ipynb 5
11
+ class CollectDataCallback(Callback):
12
+ "Collect all batches, along with `pred` and `loss`, into `self.data`. Mainly for testing"
13
+ def before_fit(self): self.data = L()
14
+ def after_batch(self):
15
+ self.data.append(self.learn.to_detach((self.xb,self.yb,self.pred,self.loss)))
16
+
17
+ # %% ../../nbs/14a_callback.data.ipynb 6
18
+ @delegates()
19
+ class WeightedDL(TfmdDL):
20
+ "Weighted dataloader where `wgts` is used for the training set only"
21
+ def __init__(self, dataset=None, bs=None, wgts=None, **kwargs):
22
+ wgts = array([1.]*len(dataset) if wgts is None else wgts)
23
+ self.wgts = wgts/wgts.sum()
24
+ super().__init__(dataset=dataset, bs=bs, **kwargs)
25
+
26
+ def get_idxs(self):
27
+ if self.n==0: return []
28
+ if not self.shuffle: return super().get_idxs()
29
+ return list(np.random.choice(self.n, self.n, p=self.wgts))
30
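`WeightedDL.get_idxs` draws `n` indices with probability proportional to the normalized `wgts` (via `np.random.choice(..., p=...)`, with replacement). The idea can be sketched with the stdlib alone (`weighted_idxs` is a hypothetical helper):

```python
import random

def weighted_idxs(n, wgts, seed=42):
    total = sum(wgts)
    probs = [w / total for w in wgts]  # normalize, as WeightedDL does
    rng = random.Random(seed)
    # sample n indices with replacement, weighted by probs
    return rng.choices(range(n), weights=probs, k=n)

idxs = weighted_idxs(4, [0.0, 0.0, 0.0, 1.0])
print(idxs)  # [3, 3, 3, 3] -- all the weight is on index 3
```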
+
31
+ # %% ../../nbs/14a_callback.data.ipynb 7
32
+ @patch
33
+ @delegates(Datasets.dataloaders)
34
+ def weighted_dataloaders(self:Datasets, wgts, bs=64, **kwargs):
35
+ "Create a weighted dataloader `WeightedDL` with `wgts` for the training set"
36
+ xtra_kwargs = [{}] * (self.n_subsets-1)
37
+ return self.dataloaders(bs=bs, dl_type=WeightedDL, dl_kwargs=({'wgts':wgts}, *xtra_kwargs), **kwargs)
38
+
39
+ # %% ../../nbs/14a_callback.data.ipynb 12
40
+ @patch
41
+ @delegates(Datasets.weighted_dataloaders)
42
+ def weighted_dataloaders(self:DataBlock, source, wgts, bs=64, verbose:bool=False, **kwargs):
43
+ "Create a weighted dataloader `WeightedDL` with `wgts` for the dataset"
44
+ dss = self.datasets(source, verbose=verbose)
45
+ if not hasattr(wgts, '__array__'): wgts = np.array(wgts)
46
+ trn_wgts = wgts[dss.splits[0]]
47
+ return dss.weighted_dataloaders(trn_wgts, bs=bs, after_batch=self.batch_tfms, after_item=self.item_tfms, **kwargs)
48
+
49
+ # %% ../../nbs/14a_callback.data.ipynb 14
50
+ @delegates()
51
+ class PartialDL(TfmdDL):
52
+ "Select randomly partial quantity of data at each epoch"
53
+ def __init__(self, dataset=None, bs=None, partial_n=None, **kwargs):
54
+ super().__init__(dataset=dataset, bs=bs, **kwargs)
55
+ self.partial_n = min(partial_n, self.n) if partial_n else None
56
+
57
+ def get_idxs(self):
58
+ if self.partial_n is None: return super().get_idxs()
59
+ return list(np.random.choice(self.n, self.partial_n, replace=False))
60
+
61
+ def __len__(self):
62
+ if self.partial_n is None: return super().__len__()
63
+ return self.partial_n//self.bs + (0 if self.drop_last or self.partial_n%self.bs==0 else 1)
64
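`PartialDL.__len__` computes the number of batches as floor division plus one trailing partial batch, unless `drop_last` is set or the items divide evenly. The same arithmetic as a standalone helper (`n_batches` is a hypothetical name):

```python
def n_batches(n_items, bs, drop_last=False):
    # full batches, plus a trailing partial batch unless drop_last
    return n_items // bs + (0 if drop_last or n_items % bs == 0 else 1)

print(n_batches(10, 4))                  # 3  (4 + 4 + 2)
print(n_batches(10, 4, drop_last=True))  # 2  (the partial batch of 2 is dropped)
print(n_batches(8, 4))                   # 2  (no partial batch)
```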
+
65
+ # %% ../../nbs/14a_callback.data.ipynb 15
66
+ @patch
67
+ @delegates(Datasets.dataloaders)
68
+ def partial_dataloaders(self:FilteredBase, partial_n, bs=64, **kwargs):
69
+ "Create a partial dataloader `PartialDL` for the training set"
70
+ xtra_kwargs = [{}] * (self.n_subsets-1)
71
+ return self.dataloaders(bs=bs, dl_type=PartialDL, dl_kwargs=({'partial_n':partial_n}, *xtra_kwargs), **kwargs)
fastai/callback/fp16.py ADDED
@@ -0,0 +1,217 @@
1
+ # AUTOGENERATED! DO NOT EDIT! File to edit: ../../nbs/18_callback.fp16.ipynb.
2
+
3
+ # %% ../../nbs/18_callback.fp16.ipynb 2
4
+ from __future__ import annotations
5
+ from ..basics import *
6
+ from .progress import *
7
+
8
+ from torch.cuda.amp import GradScaler,autocast
9
+ from torch.cuda.amp.grad_scaler import OptState
10
+
11
+ # %% auto 0
12
+ __all__ = ['MixedPrecision', 'FP16TestCallback', 'get_master', 'to_master_grads', 'to_model_params', 'test_overflow',
13
+ 'grad_overflow', 'copy_clone', 'ModelToHalf', 'NonNativeMixedPrecision']
14
+
15
+ # %% ../../nbs/18_callback.fp16.ipynb 17
16
+ @delegates(GradScaler)
17
+ class MixedPrecision(Callback):
18
+ "Mixed precision training using Pytorch's `autocast` and `GradScaler`"
19
+ order = 10
20
+ def __init__(self, **kwargs): self.kwargs = kwargs
21
+ def before_fit(self):
22
+ self.autocast,self.learn.scaler,self.scales = autocast(),GradScaler(**self.kwargs),L()
23
+ def before_batch(self): self.autocast.__enter__()
24
+ def after_pred(self):
25
+ if next(flatten(self.pred)).dtype==torch.float16: self.learn.pred = to_float(self.pred)
26
+ def after_loss(self): self.autocast.__exit__(None, None, None)
27
+ def before_backward(self): self.learn.loss_grad = self.scaler.scale(self.loss_grad)
28
+ def before_step(self):
29
+ "Use `self` as a fake optimizer. `self.skipped` will be set to True `after_step` if gradients overflow. "
30
+ self.skipped=True
31
+ self.scaler.step(self)
32
+ if self.skipped: raise CancelStepException()
33
+ self.scales.append(self.scaler.get_scale())
34
+ def after_step(self): self.learn.scaler.update()
35
+
36
+ @property
37
+ def param_groups(self):
38
+ "Pretend to be an optimizer for `GradScaler`"
39
+ return self.opt.param_groups
40
+ def step(self, *args, **kwargs):
41
+ "Fake optimizer step to detect whether this batch was skipped from `GradScaler`"
42
+ self.skipped=False
43
+ def after_fit(self): self.autocast,self.learn.scaler,self.scales = None,None,None
44
+
45
+ # %% ../../nbs/18_callback.fp16.ipynb 19
46
+ class FP16TestCallback(Callback):
47
+ "Asserts that predictions are `float16` values"
48
+ order = 9
49
+ def after_pred(self): assert listify(flatten(self.pred))[0].dtype==torch.float16
50
+
51
+ # %% ../../nbs/18_callback.fp16.ipynb 22
52
+ @patch
53
+ @delegates(GradScaler)
54
+ def to_fp16(self:Learner, **kwargs): return self.add_cb(MixedPrecision(**kwargs))
55
+
56
+ # %% ../../nbs/18_callback.fp16.ipynb 23
57
+ @patch
58
+ def to_fp32(self:Learner): return self.remove_cb(MixedPrecision)
59
+
60
+ # %% ../../nbs/18_callback.fp16.ipynb 26
61
+ from ..fp16_utils import convert_network, model_grads_to_master_grads, master_params_to_model_params
62
+
63
+ # %% ../../nbs/18_callback.fp16.ipynb 32
64
+ from torch.nn.utils import parameters_to_vector
65
+
66
+ # %% ../../nbs/18_callback.fp16.ipynb 33
67
+ def get_master(
68
+ opt:Optimizer, # Optimizer from which to retrieve model params
69
+ flat_master:bool=False, # Flatten fp32 params into a vector for better performance
70
+ ) -> list: # List of fp16 params, and list of fp32 params
71
+ "Creates fp16 model params given an initialized `Optimizer`, also returning fp32 model params. "
72
+ model_params = [[param for param in pg if getattr(param, 'requires_grad', False) and hasattr(param, 'data')] for pg in opt.param_lists]
73
+ if flat_master:
74
+ master_params = []
75
+ for pg in model_params:
76
+ mp = parameters_to_vector([param.data.float() for param in pg])
77
+ mp = nn.Parameter(mp, requires_grad=True)
78
+ if mp.grad is None: mp.grad = mp.new(*mp.size())
79
+ master_params.append([mp])
80
+ else:
81
+ master_params = [[nn.Parameter(param.data.clone().float().detach(), requires_grad=True) for param in pg] for pg in model_params]
82
+ return model_params, master_params
83
+
84
+ # %% ../../nbs/18_callback.fp16.ipynb 38
85
+ def to_master_grads(
86
+ model_pgs:list, # Fp16 model parameters to copy gradients from
87
+ master_pgs:list, # Fp32 model parameters to copy gradients to
88
+ flat_master:bool=False, # Whether or not fp32 parameters were previously flattened
89
+ ):
90
+ "Move fp16 model gradients to fp32 master gradients"
91
+ for (model_params,master_params) in zip(model_pgs,master_pgs):
92
+ model_grads_to_master_grads(model_params, master_params, flat_master=flat_master)
93
+
94
+ # %% ../../nbs/18_callback.fp16.ipynb 42
95
+ def to_model_params(
96
+ model_pgs:list, # Fp16 model params to copy to
97
+ master_pgs:list, # Fp32 master params to copy from
98
+ flat_master:bool=False # Whether master_pgs was previously flattened
99
+ )->None:
100
+ "Copy updated fp32 master params to fp16 model params after gradient step. "
101
+ for (model_params,master_params) in zip(model_pgs,master_pgs):
102
+ master_params_to_model_params(model_params, master_params, flat_master=flat_master)
103
+
104
+ # %% ../../nbs/18_callback.fp16.ipynb 47
105
+ def test_overflow(x:torch.Tensor):
106
+ "Tests whether fp16 gradients have overflown."
107
+ s = float(x.float().sum())
108
+ return (s == float('inf') or s == float('-inf') or s != s)
109
+
110
+ # %% ../../nbs/18_callback.fp16.ipynb 50
111
+ def grad_overflow(pgs:list)->bool:
112
+ "Tests all fp16 parameters in pgs for gradient overflow"
113
+ for pg in pgs:
114
+ for p in pg:
115
+ if p.grad is not None and test_overflow(p.grad.data): return True
116
+ return False
117
+
118
+ # %% ../../nbs/18_callback.fp16.ipynb 53
119
+ def copy_clone(d):
120
+ return {k:(v.detach().clone().float() if isinstance(v,Tensor) else v) for k,v in d.items()}
121
+
122
+ # %% ../../nbs/18_callback.fp16.ipynb 54
123
+ def _copy_state(opt, pgs1, pgs2):
124
+ opt.param_lists = pgs2
125
+ for pg1,pg2 in zip(pgs1, pgs2):
126
+ for p1,p2 in zip(pg1, pg2): opt.state[p2] = copy_clone(opt.state.pop(p1, {}))
127
+
128
+ # %% ../../nbs/18_callback.fp16.ipynb 55
129
+ class ModelToHalf(Callback):
130
+ "Use with NonNativeMixedPrecision callback (but it needs to run at the very beginning)"
131
+ order=-50
132
+ def before_fit(self): self.learn.model = convert_network(self.model, dtype=torch.float16)
133
+ def after_fit (self): self.learn.model = convert_network(self.model, dtype=torch.float32)
134
+
135
+ # %% ../../nbs/18_callback.fp16.ipynb 56
136
+ @docs
137
+ class NonNativeMixedPrecision(Callback):
138
+ "Run training in mixed precision"
139
+ order=10
140
+ def __init__(self,
141
+ loss_scale:int=512, # Non-dynamic loss scale, used to avoid underflow of gradients.
142
+ flat_master:bool=False, # Whether to flatten fp32 parameters for performance
143
+ dynamic:bool=True, # Whether to automatically determine loss scaling
144
+ max_loss_scale:float=2.**24, # Starting value for dynamic loss scaling
145
+ div_factor:float=2., # Divide by this on overflow, multiply by this after scale_wait batches
146
+ scale_wait:int=500, # Number of batches to wait for increasing loss scale
147
+ clip:float=None, # Value to clip gradients at, max_norm, as in `nn.utils.clip_grad_norm_`
148
+ ):
149
+ assert torch.backends.cudnn.enabled, "Mixed precision training requires cudnn."
150
+ self.flat_master,self.dynamic,self.max_loss_scale = flat_master,dynamic,max_loss_scale
151
+ self.div_factor,self.scale_wait,self.clip = div_factor,scale_wait,clip
152
+ self.loss_scale = max_loss_scale if dynamic else loss_scale
153
+
154
+ def before_fit(self):
155
+ assert self.dls.device.type == 'cuda', "Mixed-precision training requires a GPU, remove the call `to_fp16`"
156
+ if self.learn.opt is None: self.learn.create_opt()
157
+ self.model_pgs,self.master_pgs = get_master(self.opt, self.flat_master)
158
+ self.old_pgs = self.opt.param_lists
159
+ #Changes the optimizer so that the optimization step is done in FP32.
160
+ _copy_state(self.learn.opt, self.model_pgs, self.master_pgs)
161
+ if self.dynamic: self.count = 0
162
+
163
+ def before_batch(self): self.learn.xb = to_half(self.xb)
164
+ def after_pred(self): self.learn.pred = to_float(self.pred)
165
+ def before_backward(self): self.learn.loss_grad *= self.loss_scale
166
+
167
+ def before_step(self):
168
+ #First, check for an overflow
169
+ if self.dynamic and grad_overflow(self.model_pgs):
170
+ self.loss_scale /= self.div_factor
171
+ self.learn.loss_grad /= self.div_factor #to record correct loss
172
+ self.model.zero_grad()
173
+ raise CancelBatchException() #skip step and zero_grad
174
+ to_master_grads(self.model_pgs, self.master_pgs, self.flat_master)
175
+ for master_params in self.master_pgs:
176
+ for param in master_params:
177
+ if param.grad is not None: param.grad.div_(self.loss_scale)
178
+ if self.clip is not None:
179
+ for group in self.master_pgs: nn.utils.clip_grad_norm_(group, self.clip)
180
+ # Check if it's been long enough without overflow
181
+ if self.dynamic:
182
+ self.count += 1
183
+ if self.count == self.scale_wait:
184
+ self.count = 0
185
+ self.loss_scale *= self.div_factor
186
+
187
+ def after_step(self):
188
+ self.model.zero_grad() #Zero the gradients of the model manually (optimizer disconnected)
189
+ to_model_params(self.model_pgs, self.master_pgs, self.flat_master)
190
+
191
+ def after_batch(self):
192
+ if self.training: self.learn.loss_grad /= self.loss_scale #Log correct loss
193
+ def after_fit(self):
194
+ if not hasattr(self,'master_pgs'): return
195
+ _copy_state(self.learn.opt, self.master_pgs, self.model_pgs)
196
+ self.learn.opt.param_lists = self.old_pgs
197
+ delattr(self, "master_pgs")
198
+ delattr(self, "model_pgs")
199
+ delattr(self, "old_pgs")
200
+
201
+ _docs = dict(before_fit="Put the model in FP16 and prepare the two copies of the parameters",
202
+ before_batch="Put the input in FP16",
203
+ after_pred="Put the output back to FP32 so that the loss is computed in FP32",
204
+ before_backward="Apply loss scaling to avoid gradient underflow",
205
+ before_step="Update and apply dynamic loss scaling, move gradients to fp32, apply gradient clipping",
206
+ after_step="Zero fp16 grads and update fp16 params with fp32 params. ",
207
+ after_batch="Ensure loss is logged correctly",
208
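`NonNativeMixedPrecision` keeps a loss scale that is divided by `div_factor` whenever gradients overflow and multiplied by `div_factor` after `scale_wait` consecutive clean batches. That bookkeeping in isolation, as a sketch (`DynamicLossScale` is a hypothetical name; the real callback also skips the optimizer step and zeroes gradients on overflow):

```python
class DynamicLossScale:
    def __init__(self, init_scale=2.**24, div_factor=2., scale_wait=500):
        self.scale, self.div_factor, self.scale_wait = init_scale, div_factor, scale_wait
        self.count = 0

    def update(self, overflow: bool):
        if overflow:
            self.scale /= self.div_factor  # back off on overflow (count left untouched, as in the callback)
        else:
            self.count += 1
            if self.count == self.scale_wait:  # long enough without overflow
                self.count = 0
                self.scale *= self.div_factor
        return self.scale

s = DynamicLossScale(init_scale=512., scale_wait=3)
print(s.update(True))                      # 256.0 -- halved on overflow
for _ in range(3): last = s.update(False)  # three clean batches
print(last)                                # 512.0 -- doubled back
```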
+ after_fit="Put the model back in FP32")
209
+
210
+ # %% ../../nbs/18_callback.fp16.ipynb 60
211
+ @patch
212
+ @delegates(NonNativeMixedPrecision.__init__)
213
+ def to_non_native_fp16(self:Learner, **kwargs): return self.add_cbs([ModelToHalf(), NonNativeMixedPrecision(**kwargs)])
214
+
215
+ # %% ../../nbs/18_callback.fp16.ipynb 63
216
+ @patch
217
+ def to_non_native_fp32(self: Learner): return self.remove_cbs([ModelToHalf, NonNativeMixedPrecision])
fastai/callback/hook.py ADDED
@@ -0,0 +1,281 @@
1
+ # AUTOGENERATED! DO NOT EDIT! File to edit: ../../nbs/15_callback.hook.ipynb.
2
+
3
+ # %% ../../nbs/15_callback.hook.ipynb 1
4
+ from __future__ import annotations
5
+ from ..basics import *
6
+
7
+ # %% auto 0
8
+ __all__ = ['Hook', 'hook_output', 'Hooks', 'hook_outputs', 'dummy_eval', 'model_sizes', 'num_features_model', 'has_params',
9
+ 'HookCallback', 'total_params', 'layer_info', 'module_summary', 'ActivationStats']
10
+
11
+ # %% ../../nbs/15_callback.hook.ipynb 13
12
+ @docs
13
+ class Hook():
14
+ "Create a hook on `m` with `hook_func`."
15
+ def __init__(self, m, hook_func, is_forward=True, detach=True, cpu=False, gather=False):
16
+ store_attr('hook_func,detach,cpu,gather')
17
+ f = m.register_forward_hook if is_forward else m.register_backward_hook
18
+ self.hook = f(self.hook_fn)
19
+ self.stored,self.removed = None,False
20
+
21
+ def hook_fn(self, module, input, output):
22
+ "Applies `hook_func` to `module`, `input`, `output`."
23
+ if self.detach:
24
+ input,output = to_detach(input, cpu=self.cpu, gather=self.gather),to_detach(output, cpu=self.cpu, gather=self.gather)
25
+ self.stored = self.hook_func(module, input, output)
26
+
27
+ def remove(self):
28
+ "Remove the hook from the model."
29
+ if not self.removed:
30
+ self.hook.remove()
31
+ self.removed=True
32
+
33
+ def __enter__(self, *args): return self
34
+ def __exit__(self, *args): self.remove()
35
+
36
+ _docs = dict(__enter__="Register the hook",
37
+ __exit__="Remove the hook")
38
+
39
+ # %% ../../nbs/15_callback.hook.ipynb 25
40
+ def _hook_inner(m,i,o): return o if isinstance(o,Tensor) or is_listy(o) else list(o)
41
+
42
+ def hook_output(module, detach=True, cpu=False, grad=False):
43
+ "Return a `Hook` that stores activations of `module` in `self.stored`"
44
+ return Hook(module, _hook_inner, detach=detach, cpu=cpu, is_forward=not grad)
45
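`hook_output` returns a `Hook` whose `stored` attribute captures a module's activations on each forward pass, and the `Hook` doubles as a context manager that removes itself on exit. Without PyTorch, the pattern is an observer wrapped around a call (`FnHook` is a hypothetical name):

```python
class FnHook:
    "Store the last output of `fn` each time the wrapped callable runs"
    def __init__(self, fn):
        self.fn, self.stored, self.removed = fn, None, False
    def __call__(self, *args, **kwargs):
        out = self.fn(*args, **kwargs)
        if not self.removed: self.stored = out  # capture, like Hook.hook_fn
        return out
    def remove(self): self.removed = True
    def __enter__(self): return self
    def __exit__(self, *a): self.remove()

with FnHook(lambda x: x * 2) as h:
    h(3); h(5)
print(h.stored)  # 10 -- the last captured output
```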
+
46
+ # %% ../../nbs/15_callback.hook.ipynb 30
47
+ @docs
48
+ class Hooks():
49
+ "Create several hooks on the modules in `ms` with `hook_func`."
50
+ def __init__(self, ms, hook_func, is_forward=True, detach=True, cpu=False):
51
+ self.hooks = [Hook(m, hook_func, is_forward, detach, cpu) for m in ms]
52
+
53
+ def __getitem__(self,i): return self.hooks[i]
54
+ def __len__(self): return len(self.hooks)
55
+ def __iter__(self): return iter(self.hooks)
56
+ @property
57
+ def stored(self): return L(o.stored for o in self)
58
+
59
+ def remove(self):
60
+ "Remove the hooks from the model."
61
+ for h in self.hooks: h.remove()
62
+
63
+ def __enter__(self, *args): return self
64
+ def __exit__ (self, *args): self.remove()
65
+
66
+ _docs = dict(stored = "The states saved in each hook.",
67
+ __enter__="Register the hooks",
68
+ __exit__="Remove the hooks")
69
+
70
+ # %% ../../nbs/15_callback.hook.ipynb 39
71
+ def hook_outputs(modules, detach=True, cpu=False, grad=False):
72
+ "Return `Hooks` that store activations of all `modules` in `self.stored`"
73
+ return Hooks(modules, _hook_inner, detach=detach, cpu=cpu, is_forward=not grad)
74
+
75
+ # %% ../../nbs/15_callback.hook.ipynb 43
76
+ def dummy_eval(m, size=(64,64)):
77
+ "Evaluate `m` on a dummy input of a certain `size`"
78
+ ch_in = in_channels(m)
79
+ x = one_param(m).new(1, ch_in, *size).requires_grad_(False).uniform_(-1.,1.)
80
+ with torch.no_grad(): return m.eval()(x)
81
+
82
+ # %% ../../nbs/15_callback.hook.ipynb 44
83
+ def model_sizes(m, size=(64,64)):
84
+ "Pass a dummy input through the model `m` to get the various sizes of activations."
85
+ with hook_outputs(m) as hooks:
86
+ _ = dummy_eval(m, size=size)
87
+ return [o.stored.shape for o in hooks]
88
+
89
+ # %% ../../nbs/15_callback.hook.ipynb 46
90
+ def num_features_model(m):
91
+ "Return the number of output features for `m`."
92
+ sz,ch_in = 32,in_channels(m)
93
+ while True:
94
+ #Trying for a few sizes in case the model requires a big input size.
95
+ try:
96
+ return model_sizes(m, (sz,sz))[-1][1]
97
+ except Exception as e:
98
+ sz *= 2
99
+ if sz > 2048: raise e
100
+
101
+ # %% ../../nbs/15_callback.hook.ipynb 50
102
+ def has_params(m):
103
+ "Check if `m` has at least one parameter"
104
+ return len(list(m.parameters())) > 0
105
+
106
+ # %% ../../nbs/15_callback.hook.ipynb 52
107
+ @funcs_kwargs
108
+ class HookCallback(Callback):
109
+ "`Callback` that can be used to register hooks on `modules`"
110
+    _methods = ["hook"]
+    hook = noops
+    def __init__(self, modules=None, every=None, remove_end=True, is_forward=True, detach=True, cpu=True, include_paramless=False, **kwargs):
+        store_attr('modules,every,remove_end,is_forward,detach,cpu,include_paramless')
+        assert not kwargs
+
+    def before_fit(self):
+        "Register the `Hooks` on `self.modules`."
+        if self.modules is None: self.modules = [m for m in flatten_model(self.model) if self.include_paramless or has_params(m)]
+        if self.every is None: self._register()
+
+    def before_batch(self):
+        if self.every is None: return
+        if self.training and self.train_iter%self.every==0: self._register()
+
+    def after_batch(self):
+        if self.every is None: return
+        if self.training and self.train_iter%self.every==0: self._remove()
+
+    def after_fit(self):
+        "Remove the `Hooks`."
+        if self.remove_end: self._remove()
+
+    def _register(self): self.hooks = Hooks(self.modules, self.hook, self.is_forward, self.detach, self.cpu)
+    def _remove(self):
+        if getattr(self, 'hooks', None): self.hooks.remove()
+
+    def __del__(self): self._remove()
+
+# %% ../../nbs/15_callback.hook.ipynb 59
+def total_params(m):
+    "Give the number of parameters of a module and whether it's trainable"
+    params = sum([p.numel() for p in m.parameters()])
+    trains = [p.requires_grad for p in m.parameters()]
+    return params, (False if len(trains)==0 else trains[0])
+
+# %% ../../nbs/15_callback.hook.ipynb 61
+def layer_info(learn, *xb):
+    "Return layer infos of `model` on `xb` (only supports batch-first inputs)"
+    def _track(m, i, o):
+        params, trainable, shape = '', '', ''
+        same = any((isinstance(x[0], torch.Tensor) and x[0].shape[1:] == x[1].shape for x in zip(i, o)))
+        shape = apply(lambda x: x.shape, o)
+        if hasattr(m, 'weight'): # non activation layer
+            params, trainable = total_params(m)
+        return (type(m).__name__, params, trainable, shape, same)
+
+    with Hooks(flatten_model(learn.model), _track) as h:
+        batch = apply(lambda o:o[:1], xb)
+        train_only_cbs = [cb for cb in learn.cbs if hasattr(cb, '_only_train_loop')]
+        with learn.removed_cbs(train_only_cbs), learn.no_logging(), learn as l:
+            r = l.get_preds(dl=[batch], inner=True, reorder=False)
+        return h.stored
+
+# %% ../../nbs/15_callback.hook.ipynb 66
+def _get_shapes(o, bs):
+    inp = o[first(o)] if (isinstance(o, dict)) else o
+    return ' x '.join([str(bs)] + [str(t) for t in inp[1:]])
+
+def _print_shapes(o, bs):
+    if isinstance(o, torch.Size): return _get_shapes(o, bs)
+    elif isinstance(o, tuple): return _get_shapes(o[0], bs)
+    else: return str([_print_shapes(x, bs) for x in o])
+
+# %% ../../nbs/15_callback.hook.ipynb 67
+def module_summary(learn, *xb):
+    "Print a summary of `model` using `xb`"
+    #Individual parameters wrapped in ParameterModule aren't called through the hooks in `layer_info`,
+    # thus are not counted inside the summary
+    #TODO: find a way to have them counted in param number somehow
+    infos = layer_info(learn, *xb)
+    n,bs = 76,find_bs(xb)
+    inp_sz = _print_shapes(apply(lambda x:x.shape, xb), bs)
+    res = f"{type(learn.model).__name__} (Input shape: {inp_sz})\n"
+    res += "=" * n + "\n"
+    res += f"{'Layer (type)':<20} {'Output Shape':<20} {'Param #':<10} {'Trainable':<10}\n"
+    res += "=" * n
+    ps,trn_ps,j = 0,0,0
+    infos = [o for o in infos if o is not None] #see comment in previous cell
+    prev_sz = None
+    for typ,np,trn,sz,chnged in infos:
+        if sz is None: continue
+        if j == 0:
+            res += f'\n{"":<20} {_print_shapes(sz, bs)[:19]:<20}' # to avoid a double line at the top
+        if not chnged and not prev_sz == sz and j > 0: res += "\n" + "_" * n + "\n" + f'{"":<20} {_print_shapes(sz, bs)[:19]:<20}'
+        j = 1
+        res += f"\n{typ:<20} {'':<20} {np:<10} {str(trn):<10}"
+        if np != '':
+            ps += np
+            if trn: trn_ps += np
+        prev_sz = sz
+    res += "\n" + "_" * n + "\n"
+    res += f"\nTotal params: {ps:,}\n"
+    res += f"Total trainable params: {trn_ps:,}\n"
+    res += f"Total non-trainable params: {ps - trn_ps:,}\n\n"
+    return PrettyString(res)
+
+# %% ../../nbs/15_callback.hook.ipynb 68
+@patch
+def summary(self:Learner):
+    "Print a summary of the model, optimizer and loss function."
+    xb = self.dls.train.one_batch()[:getattr(self.dls.train, "n_inp", 1)]
+    res = module_summary(self, *xb)
+    res += f"Optimizer used: {self.opt_func}\nLoss function: {self.loss_func}\n\n"
+    if self.opt is not None:
+        res += f"Model " + ("unfrozen\n\n" if self.opt.frozen_idx==0 else f"frozen up to parameter group #{self.opt.frozen_idx}\n\n")
+    res += "Callbacks:\n" + '\n'.join(f"  - {cb}" for cb in self.cbs.sorted('order'))
+    return PrettyString(res)
+
+# %% ../../nbs/15_callback.hook.ipynb 74
+@delegates()
+class ActivationStats(HookCallback):
+    "Callback that records the mean and std of activations."
+    order=-20
+    def __init__(self, with_hist=False, **kwargs):
+        super().__init__(**kwargs)
+        self.with_hist = with_hist
+
+    def before_fit(self):
+        "Initialize stats."
+        super().before_fit()
+        self.stats = L()
+
+    def hook(self, m, i, o):
+        if isinstance(o, tuple): return self.hook_multi_output(o)
+        o = o.float()
+        res = {'mean': o.mean().item(), 'std': o.std().item(),
+               'near_zero': (o<=0.05).long().sum().item()/o.numel()}
+        if self.with_hist: res['hist'] = o.histc(40,0,10)
+        return res
+
+    def hook_multi_output(self, o_tuple):
+        "For outputs of RNNs which are [nested] tuples of tensors"
+        res = []
+        for o in self._flatten_tuple(o_tuple):
+            if not(isinstance(o, Tensor)): continue
+            res.append(self.hook(None, None, o))
+        return res
+
+    def _flatten_tuple(self, o_tuple):
+        "Recursively flatten a [nested] tuple"
+        res = []
+        for it in o_tuple:
+            if isinstance(it, tuple): res += self._flatten_tuple(it)
+            else: res += [it]
+        return tuple(res)
+
+    def after_batch(self):
+        "Take the stored results and put them in `self.stats`"
+        if self.training and (self.every is None or self.train_iter%self.every == 0): self.stats.append(self.hooks.stored)
+        super().after_batch()
+
+    def layer_stats(self, idx):
+        lstats = self.stats.itemgot(idx)
+        return L(lstats.itemgot(o) for o in ('mean','std','near_zero'))
+
+    def hist(self, idx):
+        res = self.stats.itemgot(idx).itemgot('hist')
+        return torch.stack(tuple(res)).t().float().log1p()
+
+    def color_dim(self, idx, figsize=(10,5), ax=None):
+        "The 'colorful dimension' plot"
+        res = self.hist(idx)
+        if ax is None: ax = subplots(figsize=figsize)[1][0]
+        ax.imshow(res, origin='lower')
+        ax.axis('off')
+
+    def plot_layer_stats(self, idx):
+        _,axs = subplots(1, 3, figsize=(12,3))
+        for o,ax,title in zip(self.layer_stats(idx),axs,('mean','std','% near zero')):
+            ax.plot(o)
+            ax.set_title(title)
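`ActivationStats._flatten_tuple` above is the piece that lets the hook handle RNN outputs, which arrive as arbitrarily nested tuples of tensors. A minimal standalone sketch of that traversal (plain Python, no fastai or torch dependency; `flatten_tuple` is an illustrative name, not part of the library):

```python
def flatten_tuple(o_tuple):
    "Recursively flatten a [nested] tuple, mirroring ActivationStats._flatten_tuple"
    res = []
    for it in o_tuple:
        # recurse into sub-tuples, keep everything else as a leaf
        if isinstance(it, tuple): res += flatten_tuple(it)
        else: res += [it]
    return tuple(res)

print(flatten_tuple((1, (2, (3, 4)), 5)))  # -> (1, 2, 3, 4, 5)
```

In the callback, every leaf that is a `Tensor` is then fed back through `hook`, so stats are recorded per tensor regardless of nesting depth.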
fastai/callback/mixup.py ADDED
@@ -0,0 +1,111 @@
+# AUTOGENERATED! DO NOT EDIT! File to edit: ../../nbs/19_callback.mixup.ipynb.
+
+# %% ../../nbs/19_callback.mixup.ipynb 2
+from __future__ import annotations
+from ..basics import *
+from torch.distributions.beta import Beta
+
+# %% auto 0
+__all__ = ['reduce_loss', 'MixHandler', 'MixUp', 'CutMix']
+
+# %% ../../nbs/19_callback.mixup.ipynb 6
+def reduce_loss(
+    loss:Tensor,
+    reduction:str='mean' # PyTorch loss reduction
+)->Tensor:
+    "Reduce the loss based on `reduction`"
+    return loss.mean() if reduction == 'mean' else loss.sum() if reduction == 'sum' else loss
+
+# %% ../../nbs/19_callback.mixup.ipynb 7
+class MixHandler(Callback):
+    "A handler class for implementing `MixUp` style scheduling"
+    run_valid = False
+    def __init__(self,
+        alpha:float=0.5 # Determine `Beta` distribution in range (0.,inf]
+    ):
+        self.distrib = Beta(tensor(alpha), tensor(alpha))
+
+    def before_train(self):
+        "Determine whether to stack y"
+        self.stack_y = getattr(self.learn.loss_func, 'y_int', False)
+        if self.stack_y: self.old_lf,self.learn.loss_func = self.learn.loss_func,self.lf
+
+    def after_train(self):
+        "Set the loss function back to the previous loss"
+        if self.stack_y: self.learn.loss_func = self.old_lf
+
+    def after_cancel_train(self):
+        "If training is canceled, still set the loss function back"
+        self.after_train()
+
+    def after_cancel_fit(self):
+        "If fit is canceled, still set the loss function back"
+        self.after_train()
+
+    def lf(self, pred, *yb):
+        "lf is a loss function that applies the original loss function on both outputs based on `self.lam`"
+        if not self.training: return self.old_lf(pred, *yb)
+        with NoneReduce(self.old_lf) as lf:
+            loss = torch.lerp(lf(pred,*self.yb1), lf(pred,*yb), self.lam)
+        return reduce_loss(loss, getattr(self.old_lf, 'reduction', 'mean'))
+
+# %% ../../nbs/19_callback.mixup.ipynb 10
+class MixUp(MixHandler):
+    "Implementation of https://arxiv.org/abs/1710.09412"
+    def __init__(self,
+        alpha:float=.4 # Determine `Beta` distribution in range (0.,inf]
+    ):
+        super().__init__(alpha)
+
+    def before_batch(self):
+        "Blend xb and yb with another random item in a second batch (xb1,yb1) with `lam` weights"
+        lam = self.distrib.sample((self.y.size(0),)).squeeze().to(self.x.device)
+        lam = torch.stack([lam, 1-lam], 1)
+        self.lam = lam.max(1)[0]
+        shuffle = torch.randperm(self.y.size(0)).to(self.x.device)
+        xb1,self.yb1 = tuple(L(self.xb).itemgot(shuffle)),tuple(L(self.yb).itemgot(shuffle))
+        nx_dims = len(self.x.size())
+        self.learn.xb = tuple(L(xb1,self.xb).map_zip(torch.lerp,weight=unsqueeze(self.lam, n=nx_dims-1)))
+
+        if not self.stack_y:
+            ny_dims = len(self.y.size())
+            self.learn.yb = tuple(L(self.yb1,self.yb).map_zip(torch.lerp,weight=unsqueeze(self.lam, n=ny_dims-1)))
+
+# %% ../../nbs/19_callback.mixup.ipynb 21
+class CutMix(MixHandler):
+    "Implementation of https://arxiv.org/abs/1905.04899"
+    def __init__(self,
+        alpha:float=1. # Determine `Beta` distribution in range (0.,inf]
+    ):
+        super().__init__(alpha)
+
+    def before_batch(self):
+        "Add `rand_bbox` patches with size based on `lam` and location chosen randomly."
+        bs, _, H, W = self.x.size()
+        self.lam = self.distrib.sample((1,)).to(self.x.device)
+        shuffle = torch.randperm(bs).to(self.x.device)
+        xb1,self.yb1 = self.x[shuffle], tuple((self.y[shuffle],))
+        x1, y1, x2, y2 = self.rand_bbox(W, H, self.lam)
+        self.learn.xb[0][..., y1:y2, x1:x2] = xb1[..., y1:y2, x1:x2]
+        self.lam = (1 - ((x2-x1)*(y2-y1))/float(W*H))
+        if not self.stack_y:
+            ny_dims = len(self.y.size())
+            self.learn.yb = tuple(L(self.yb1,self.yb).map_zip(torch.lerp,weight=unsqueeze(self.lam, n=ny_dims-1)))
+
+    def rand_bbox(self,
+        W:int, # Width bbox will be
+        H:int, # Height bbox will be
+        lam:Tensor # lambda sample from Beta distribution, e.g. tensor([0.3647])
+    )->tuple: # Represents the top-left pixel location and the bottom-right pixel location
+        "Give a bounding box location based on the size of the image and a weight"
+        cut_rat = torch.sqrt(1. - lam).to(self.x.device)
+        cut_w = torch.round(W * cut_rat).type(torch.long).to(self.x.device)
+        cut_h = torch.round(H * cut_rat).type(torch.long).to(self.x.device)
+        # uniform
+        cx = torch.randint(0, W, (1,)).to(self.x.device)
+        cy = torch.randint(0, H, (1,)).to(self.x.device)
+        x1 = torch.clamp(cx - cut_w // 2, 0, W)
+        y1 = torch.clamp(cy - cut_h // 2, 0, H)
+        x2 = torch.clamp(cx + cut_w // 2, 0, W)
+        y2 = torch.clamp(cy + cut_h // 2, 0, H)
+        return x1, y1, x2, y2
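The core of `MixUp.before_batch` is scalar math: draw `lam ~ Beta(alpha, alpha)`, take `max(lam, 1-lam)` so the un-shuffled sample dominates, then `torch.lerp(x_shuffled, x, lam)`. A torch-free sketch of that blending on single values (`lerp` and `mixup_pair` are illustrative helpers, not library API):

```python
import random

def lerp(a, b, w):
    "Same convention as torch.lerp: a + w*(b - a)"
    return a + w * (b - a)

def mixup_pair(x, x_shuffled, alpha=0.4, rng=random):
    "Blend two samples the way MixUp.before_batch does, with lam >= 0.5"
    lam = rng.betavariate(alpha, alpha)
    lam = max(lam, 1 - lam)              # same as torch.stack([lam, 1-lam], 1).max(1)[0]
    return lerp(x_shuffled, x, lam), lam

blended, lam = mixup_pair(1.0, 0.0, alpha=0.4)
```

With `x=1.0` and `x_shuffled=0.0` the blended value equals `lam` itself, which makes the weighting convention easy to check. When the loss is integer-label based (`stack_y`), the same interpolation happens on the losses in `MixHandler.lf` instead of on the targets.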
fastai/callback/neptune.py ADDED
@@ -0,0 +1,80 @@
+# AUTOGENERATED! DO NOT EDIT! File to edit: ../../nbs/70b_callback.neptune.ipynb.
+
+# %% ../../nbs/70b_callback.neptune.ipynb 2
+from __future__ import annotations
+import tempfile
+from ..basics import *
+from ..learner import Callback
+
+# %% auto 0
+__all__ = ['NeptuneCallback']
+
+# %% ../../nbs/70b_callback.neptune.ipynb 12
+import neptune
+
+# %% ../../nbs/70b_callback.neptune.ipynb 13
+class NeptuneCallback(Callback):
+    "Log losses, metrics, model weights, model architecture summary to neptune"
+    order = Recorder.order+1
+    def __init__(self, log_model_weights=True, keep_experiment_running=False):
+        self.log_model_weights = log_model_weights
+        self.keep_experiment_running = keep_experiment_running
+        self.experiment = None
+
+        if neptune.project is None:
+            raise ValueError('You did not initialize project in neptune.\n'
+                             'Please invoke `neptune.init("USERNAME/PROJECT_NAME")` before this callback.')
+
+    def before_fit(self):
+        try:
+            self.experiment = neptune.get_experiment()
+        except ValueError:
+            print('No active experiment. Please invoke `neptune.create_experiment()` before this callback.')
+
+        try:
+            self.experiment.set_property('n_epoch', str(self.learn.n_epoch))
+            self.experiment.set_property('model_class', str(type(self.learn.model)))
+        except: print(f'Did not log all properties. Check properties in the {neptune.get_experiment()}.')
+
+        try:
+            with tempfile.NamedTemporaryFile(mode='w') as f:
+                with open(f.name, 'w') as g: g.write(repr(self.learn.model))
+                self.experiment.log_artifact(f.name, 'model_summary.txt')
+        except: print('Did not log model summary. Check if your model is a PyTorch model.')
+
+        if self.log_model_weights and not hasattr(self.learn, 'save_model'):
+            print('Unable to log model to Neptune.\n',
+                  'Use "SaveModelCallback" to save model checkpoints that will be logged to Neptune.')
+
+    def after_batch(self):
+        # log loss and opt.hypers
+        if self.learn.training:
+            self.experiment.log_metric('batch__smooth_loss', self.learn.smooth_loss)
+            self.experiment.log_metric('batch__loss', self.learn.loss)
+            self.experiment.log_metric('batch__train_iter', self.learn.train_iter)
+            for i, h in enumerate(self.learn.opt.hypers):
+                for k, v in h.items(): self.experiment.log_metric(f'batch__opt.hypers.{k}', v)
+
+    def after_epoch(self):
+        # log metrics
+        for n, v in zip(self.learn.recorder.metric_names, self.learn.recorder.log):
+            if n not in ['epoch', 'time']: self.experiment.log_metric(f'epoch__{n}', v)
+            if n == 'time': self.experiment.log_text(f'epoch__{n}', str(v))
+
+        # log model weights
+        if self.log_model_weights and hasattr(self.learn, 'save_model'):
+            if self.learn.save_model.every_epoch:
+                _file = join_path_file(f'{self.learn.save_model.fname}_{self.learn.save_model.epoch}',
+                                       self.learn.path / self.learn.model_dir, ext='.pth')
+            else:
+                _file = join_path_file(self.learn.save_model.fname,
+                                       self.learn.path / self.learn.model_dir, ext='.pth')
+            self.experiment.log_artifact(_file)
+
+    def after_fit(self):
+        if not self.keep_experiment_running:
+            try: self.experiment.stop()
+            except: print('No neptune experiment to stop.')
+        else:
+            print(f'Your experiment (id: {self.experiment.id}, name: {self.experiment.name}) is left in the running state.\n',
+                  'You can log more data to it, like this: `neptune.log_metric()`')
fastai/callback/preds.py ADDED
@@ -0,0 +1,18 @@
+# AUTOGENERATED! DO NOT EDIT! File to edit: ../../nbs/18b_callback.preds.ipynb.
+
+# %% ../../nbs/18b_callback.preds.ipynb 2
+from __future__ import annotations
+from ..basics import *
+
+# %% auto 0
+__all__ = ['MCDropoutCallback']
+
+# %% ../../nbs/18b_callback.preds.ipynb 6
+class MCDropoutCallback(Callback):
+    def before_validate(self):
+        for m in [m for m in flatten_model(self.model) if 'dropout' in m.__class__.__name__.lower()]:
+            m.train()
+
+    def after_validate(self):
+        for m in [m for m in flatten_model(self.model) if 'dropout' in m.__class__.__name__.lower()]:
+            m.eval()
fastai/callback/progress.py ADDED
@@ -0,0 +1,124 @@
+# AUTOGENERATED! DO NOT EDIT! File to edit: ../../nbs/16_callback.progress.ipynb.
+
+# %% ../../nbs/16_callback.progress.ipynb 1
+from __future__ import annotations
+from ..basics import *
+
+# %% auto 0
+__all__ = ['ProgressCallback', 'ShowGraphCallback', 'CSVLogger']
+
+# %% ../../nbs/16_callback.progress.ipynb 7
+@docs
+class ProgressCallback(Callback):
+    "A `Callback` to handle the display of progress bars"
+    order,_stateattrs = 60,('mbar','pbar')
+
+    def before_fit(self):
+        assert hasattr(self.learn, 'recorder')
+        if self.create_mbar: self.mbar = master_bar(list(range(self.n_epoch)))
+        if self.learn.logger != noop:
+            self.old_logger,self.learn.logger = self.logger,self._write_stats
+            self._write_stats(self.recorder.metric_names)
+        else: self.old_logger = noop
+
+    def before_epoch(self):
+        if getattr(self, 'mbar', False): self.mbar.update(self.epoch)
+
+    def before_train(self):    self._launch_pbar()
+    def before_validate(self): self._launch_pbar()
+    def after_train(self):     self.pbar.on_iter_end()
+    def after_validate(self):  self.pbar.on_iter_end()
+    def after_batch(self):
+        self.pbar.update(self.iter+1)
+        if hasattr(self, 'smooth_loss'): self.pbar.comment = f'{self.smooth_loss.item():.4f}'
+
+    def _launch_pbar(self):
+        self.pbar = progress_bar(self.dl, parent=getattr(self, 'mbar', None), leave=False)
+        self.pbar.update(0)
+
+    def after_fit(self):
+        if getattr(self, 'mbar', False):
+            self.mbar.on_iter_end()
+            delattr(self, 'mbar')
+        if hasattr(self, 'old_logger'): self.learn.logger = self.old_logger
+
+    def _write_stats(self, log):
+        if getattr(self, 'mbar', False): self.mbar.write([f'{l:.6f}' if isinstance(l, float) else str(l) for l in log], table=True)
+
+    _docs = dict(before_fit="Setup the master bar over the epochs",
+                 before_epoch="Update the master bar",
+                 before_train="Launch a progress bar over the training dataloader",
+                 before_validate="Launch a progress bar over the validation dataloader",
+                 after_train="Close the progress bar over the training dataloader",
+                 after_validate="Close the progress bar over the validation dataloader",
+                 after_batch="Update the current progress bar",
+                 after_fit="Close the master bar")
+
+if not hasattr(defaults, 'callbacks'): defaults.callbacks = [TrainEvalCallback, Recorder, ProgressCallback]
+elif ProgressCallback not in defaults.callbacks: defaults.callbacks.append(ProgressCallback)
+
+# %% ../../nbs/16_callback.progress.ipynb 9
+@patch
+@contextmanager
+def no_bar(self:Learner):
+    "Context manager that deactivates the use of progress bars"
+    has_progress = hasattr(self, 'progress')
+    if has_progress: self.remove_cb(self.progress)
+    try: yield self
+    finally:
+        if has_progress: self.add_cb(ProgressCallback())
+
+# %% ../../nbs/16_callback.progress.ipynb 22
+class ShowGraphCallback(Callback):
+    "Update a graph of training and validation loss"
+    order,run_valid=65,False
+
+    def before_fit(self):
+        self.run = not hasattr(self.learn, 'lr_finder') and not hasattr(self, "gather_preds")
+        if not(self.run): return
+        self.nb_batches = []
+        assert hasattr(self.learn, 'progress')
+
+    def after_train(self): self.nb_batches.append(self.train_iter)
+
+    def after_epoch(self):
+        "Plot validation loss in the pbar graph"
+        if not self.nb_batches: return
+        rec = self.learn.recorder
+        iters = range_of(rec.losses)
+        val_losses = [v[1] for v in rec.values]
+        x_bounds = (0, (self.n_epoch - len(self.nb_batches)) * self.nb_batches[0] + len(rec.losses))
+        y_bounds = (0, max((max(Tensor(rec.losses)), max(Tensor(val_losses)))))
+        self.progress.mbar.update_graph([(iters, rec.losses), (self.nb_batches, val_losses)], x_bounds, y_bounds)
+
+# %% ../../nbs/16_callback.progress.ipynb 26
+class CSVLogger(Callback):
+    "Log the results displayed in `learn.path/fname`"
+    order=60
+    def __init__(self, fname='history.csv', append=False):
+        self.fname,self.append = Path(fname),append
+
+    def read_log(self):
+        "Convenience method to quickly access the log."
+        return pd.read_csv(self.path/self.fname)
+
+    def before_fit(self):
+        "Prepare file with metric names."
+        if hasattr(self, "gather_preds"): return
+        self.path.parent.mkdir(parents=True, exist_ok=True)
+        self.file = (self.path/self.fname).open('a' if self.append else 'w')
+        self.file.write(','.join(self.recorder.metric_names) + '\n')
+        self.old_logger,self.learn.logger = self.logger,self._write_line
+
+    def _write_line(self, log):
+        "Write a line with `log` and call the old logger."
+        self.file.write(','.join([str(t) for t in log]) + '\n')
+        self.file.flush()
+        os.fsync(self.file.fileno())
+        self.old_logger(log)
+
+    def after_fit(self):
+        "Close the file and clean up."
+        if hasattr(self, "gather_preds"): return
+        self.file.close()
+        self.learn.logger = self.old_logger
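`CSVLogger` works by swapping itself in as `learn.logger`: a header row of metric names in `before_fit`, then one comma-joined row per call to `_write_line`, flushed and fsynced so the file survives a crash mid-training. A minimal stdlib sketch of that write pattern, detached from the callback machinery (`log_history` and the file path are hypothetical, not library API):

```python
import os, tempfile

def log_history(path, metric_names, rows):
    "Write a header then one comma-joined row per log call, flushing each row like CSVLogger._write_line"
    with open(path, 'w') as f:
        f.write(','.join(metric_names) + '\n')
        for log in rows:
            f.write(','.join(str(t) for t in log) + '\n')
            f.flush()               # push Python's buffer to the OS...
            os.fsync(f.fileno())    # ...and the OS buffer to disk

with tempfile.TemporaryDirectory() as d:
    p = os.path.join(d, 'history.csv')
    log_history(p, ['epoch', 'train_loss', 'valid_loss'],
                [[0, 0.52, 0.41], [1, 0.33, 0.3]])
    with open(p) as f: content = f.read()
print(content)
```

The flush+fsync pair is the point: without it, a killed process can leave a partially buffered (or empty) history file.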
fastai/callback/rnn.py ADDED
@@ -0,0 +1,42 @@
+# AUTOGENERATED! DO NOT EDIT! File to edit: ../../nbs/34_callback.rnn.ipynb.
+
+# %% ../../nbs/34_callback.rnn.ipynb 1
+from __future__ import annotations
+from ..basics import *
+
+# %% auto 0
+__all__ = ['ModelResetter', 'RNNCallback', 'RNNRegularizer', 'rnn_cbs']
+
+# %% ../../nbs/34_callback.rnn.ipynb 5
+@docs
+class ModelResetter(Callback):
+    "`Callback` that resets the model at each validation/training step"
+    def before_train(self):    self.model.reset()
+    def before_validate(self): self.model.reset()
+    def after_fit(self):       self.model.reset()
+    _docs = dict(before_train="Reset the model before training",
+                 before_validate="Reset the model before validation",
+                 after_fit="Reset the model after fitting")
+
+# %% ../../nbs/34_callback.rnn.ipynb 6
+class RNNCallback(Callback):
+    "Save the raw and dropped-out outputs and only keep the true output for loss computation"
+    def after_pred(self): self.learn.pred,self.raw_out,self.out = [o[-1] if is_listy(o) else o for o in self.pred]
+
+# %% ../../nbs/34_callback.rnn.ipynb 7
+class RNNRegularizer(Callback):
+    "Add AR and TAR regularization"
+    order,run_valid = RNNCallback.order+1,False
+    def __init__(self, alpha=0., beta=0.): store_attr()
+    def after_loss(self):
+        if not self.training: return
+        if self.alpha: self.learn.loss_grad += self.alpha * self.rnn.out.float().pow(2).mean()
+        if self.beta:
+            h = self.rnn.raw_out
+            if len(h)>1: self.learn.loss_grad += self.beta * (h[:,1:] - h[:,:-1]).float().pow(2).mean()
+
+# %% ../../nbs/34_callback.rnn.ipynb 8
+def rnn_cbs(alpha=0., beta=0.):
+    "All callbacks needed for (optionally regularized) RNN training"
+    reg = [RNNRegularizer(alpha=alpha, beta=beta)] if alpha or beta else []
+    return [ModelResetter(), RNNCallback()] + reg
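`RNNRegularizer.after_loss` adds two penalties to the loss: AR (activation regularization), `alpha` times the mean squared dropped-out activation, and TAR (temporal activation regularization), `beta` times the mean squared difference between consecutive raw activations. A scalar sketch of both formulas on a single activation sequence (the real callback does the same math batch-wise on tensors; `ar_penalty`/`tar_penalty` are illustrative names):

```python
def ar_penalty(out, alpha):
    "AR: alpha * mean of squared (dropped-out) activations"
    return alpha * sum(o * o for o in out) / len(out)

def tar_penalty(raw_out, beta):
    "TAR: beta * mean squared difference of consecutive raw activations"
    diffs = [(b - a) ** 2 for a, b in zip(raw_out, raw_out[1:])]
    return beta * sum(diffs) / len(diffs)

h = [1.0, 2.0, 4.0]
print(ar_penalty(h, 2.0))   # 2 * (1 + 4 + 16) / 3 = 14.0
print(tar_penalty(h, 3.0))  # 3 * (1 + 4) / 2 = 7.5
```

AR discourages large activations overall; TAR discourages the hidden state from changing too abruptly between timesteps, which is why it reads `h[:,1:] - h[:,:-1]` in the tensor version.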
fastai/callback/schedule.py ADDED
@@ -0,0 +1,314 @@
+# AUTOGENERATED! DO NOT EDIT! File to edit: ../../nbs/14_callback.schedule.ipynb.
+
+# %% ../../nbs/14_callback.schedule.ipynb 2
+from __future__ import annotations
+from ..basics import *
+from .tracker import SaveModelCallback
+
+# %% auto 0
+__all__ = ['annealer', 'sched_lin', 'sched_cos', 'sched_no', 'sched_exp', 'SchedLin', 'SchedCos', 'SchedNo', 'SchedExp',
+           'SchedPoly', 'combine_scheds', 'combined_cos', 'ParamScheduler', 'LRFinder', 'valley', 'slide', 'minimum',
+           'steep', 'SuggestionMethod']
+
+# %% ../../nbs/14_callback.schedule.ipynb 3
+_all_ = ['SuggestionMethod']
+
+# %% ../../nbs/14_callback.schedule.ipynb 8
+class _Annealer:
+    def __init__(self, f, start, end): store_attr('f,start,end')
+    def __call__(self, pos): return self.f(self.start, self.end, pos)
+
+# %% ../../nbs/14_callback.schedule.ipynb 9
+def annealer(f):
+    "Decorator to make `f` return itself partially applied."
+    @functools.wraps(f)
+    def _inner(start, end): return _Annealer(f, start, end)
+    return _inner
+
+# %% ../../nbs/14_callback.schedule.ipynb 11
+#TODO Jeremy, make this pickle
+#@annealer
+#def SchedLin(start, end, pos): return start + pos*(end-start)
+#@annealer
+#def SchedCos(start, end, pos): return start + (1 + math.cos(math.pi*(1-pos))) * (end-start) / 2
+#@annealer
+#def SchedNo (start, end, pos): return start
+#@annealer
+#def SchedExp(start, end, pos): return start * (end/start) ** pos
+#
+#SchedLin.__doc__ = "Linear schedule function from `start` to `end`"
+#SchedCos.__doc__ = "Cosine schedule function from `start` to `end`"
+#SchedNo .__doc__ = "Constant schedule function with `start` value"
+#SchedExp.__doc__ = "Exponential schedule function from `start` to `end`"
+
+# %% ../../nbs/14_callback.schedule.ipynb 12
+def sched_lin(start, end, pos): return start + pos*(end-start)
+def sched_cos(start, end, pos): return start + (1 + math.cos(math.pi*(1-pos))) * (end-start) / 2
+def sched_no (start, end, pos): return start
+def sched_exp(start, end, pos): return start * (end/start) ** pos
+
+def SchedLin(start, end): return _Annealer(sched_lin, start, end)
+def SchedCos(start, end): return _Annealer(sched_cos, start, end)
+def SchedNo (start, end): return _Annealer(sched_no,  start, end)
+def SchedExp(start, end): return _Annealer(sched_exp, start, end)
+
+SchedLin.__doc__ = "Linear schedule function from `start` to `end`"
+SchedCos.__doc__ = "Cosine schedule function from `start` to `end`"
+SchedNo .__doc__ = "Constant schedule function with `start` value"
+SchedExp.__doc__ = "Exponential schedule function from `start` to `end`"
+
+# %% ../../nbs/14_callback.schedule.ipynb 15
+def SchedPoly(start, end, power):
+    "Polynomial schedule (of `power`) function from `start` to `end`"
+    def _inner(pos): return start + (end - start) * pos ** power
+    return _inner
+
+# %% ../../nbs/14_callback.schedule.ipynb 28
+def combine_scheds(pcts, scheds):
+    "Combine `scheds` according to `pcts` in one function"
+    assert sum(pcts) == 1.
+    pcts = tensor([0] + L(pcts))
+    assert torch.all(pcts >= 0)
+    pcts = torch.cumsum(pcts, 0)
+    pct_lim = len(pcts) - 2
+    def _inner(pos):
+        idx = min((pos >= pcts).nonzero().max(), pct_lim)
+        actual_pos = (pos-pcts[idx]) / (pcts[idx+1]-pcts[idx])
+        return scheds[idx](actual_pos.item())
+    return _inner
+
+# %% ../../nbs/14_callback.schedule.ipynb 33
+def combined_cos(pct, start, middle, end):
+    "Return a scheduler with cosine annealing from `start`→`middle` & `middle`→`end`"
+    return combine_scheds([pct,1-pct], [SchedCos(start, middle), SchedCos(middle, end)])
+
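`combine_scheds` maps a global position `pos ∈ [0,1]` onto one of several phase schedules: it finds the phase whose cumulative-percentage interval contains `pos`, rescales `pos` to a local `[0,1]` within that interval, and calls that phase's annealer. A pure-Python sketch of the same dispatch (no torch; `combine_scheds_simple` is an illustrative name), building the `combined_cos`-style warm-up/anneal shape used by `fit_one_cycle`:

```python
import math

def sched_cos(start, end, pos):
    "Cosine anneal from start (pos=0) to end (pos=1), as in the fastai sched_cos above"
    return start + (1 + math.cos(math.pi * (1 - pos))) * (end - start) / 2

def combine_scheds_simple(pcts, scheds):
    "Dispatch pos to the phase whose cumulative pcts interval contains it"
    bounds = [0.0]
    for p in pcts: bounds.append(bounds[-1] + p)
    def _inner(pos):
        idx = min(max(i for i in range(len(pcts)) if pos >= bounds[i]), len(pcts) - 1)
        actual = (pos - bounds[idx]) / (bounds[idx + 1] - bounds[idx])  # rescale to local [0,1]
        return scheds[idx](actual)
    return _inner

# 25% cosine warm-up 0.01 -> 0.1, then 75% cosine anneal 0.1 -> 0.001
one_cycle = combine_scheds_simple([0.25, 0.75],
                                  [lambda p: sched_cos(0.01, 0.1, p),
                                   lambda p: sched_cos(0.1, 0.001, p)])
```

`one_cycle(0.0)` gives the start LR, `one_cycle(0.25)` the peak, and `one_cycle(1.0)` the final LR, matching what `fit_one_cycle` feeds to `ParamScheduler` below.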
85
+ # %% ../../nbs/14_callback.schedule.ipynb 38
86
+ @docs
87
+ class ParamScheduler(Callback):
88
+ "Schedule hyper-parameters according to `scheds`"
89
+ order,run_valid = 60,False
90
+
91
+ def __init__(self, scheds): self.scheds = scheds
92
+ def before_fit(self): self.hps = {p:[] for p in self.scheds.keys()}
93
+ def before_batch(self): self._update_val(self.pct_train)
94
+
95
+ def _update_val(self, pct):
96
+ for n,f in self.scheds.items(): self.opt.set_hyper(n, f(pct))
97
+
98
+ def after_batch(self):
99
+ for p in self.scheds.keys(): self.hps[p].append(self.opt.hypers[-1][p])
100
+
101
+ def after_fit(self):
102
+ if hasattr(self.learn, 'recorder') and hasattr(self, 'hps'): self.recorder.hps = self.hps
103
+
104
+ _docs = {"before_fit": "Initialize container for hyper-parameters",
105
+ "before_batch": "Set the proper hyper-parameters in the optimizer",
106
+ "after_batch": "Record hyper-parameters of this batch",
107
+ "after_fit": "Save the hyper-parameters in the recorder if there is one"}
108
+
109
+ # %% ../../nbs/14_callback.schedule.ipynb 46
110
+ @patch
111
+ def fit_one_cycle(self:Learner, n_epoch, lr_max=None, div=25., div_final=1e5, pct_start=0.25, wd=None,
112
+ moms=None, cbs=None, reset_opt=False, start_epoch=0):
113
+ "Fit `self.model` for `n_epoch` using the 1cycle policy."
114
+ if self.opt is None: self.create_opt()
115
+ self.opt.set_hyper('lr', self.lr if lr_max is None else lr_max)
116
+ lr_max = np.array([h['lr'] for h in self.opt.hypers])
117
+ scheds = {'lr': combined_cos(pct_start, lr_max/div, lr_max, lr_max/div_final),
118
+ 'mom': combined_cos(pct_start, *(self.moms if moms is None else moms))}
119
+ self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd, start_epoch=start_epoch)
120
+
121
+ # %% ../../nbs/14_callback.schedule.ipynb 50
122
+ @patch
123
+ def plot_sched(self:Recorder, keys=None, figsize=None):
124
+ keys = self.hps.keys() if keys is None else L(keys)
125
+ rows,cols = (len(keys)+1)//2, min(2, len(keys))
126
+ figsize = figsize or (6*cols,4*rows)
127
+ _, axs = plt.subplots(rows, cols, figsize=figsize)
128
+ axs = axs.flatten() if len(keys) > 1 else L(axs)
129
+ for p,ax in zip(keys, axs):
130
+ ax.plot(self.hps[p])
131
+ ax.set_ylabel(p)
132
+
133
+ # %% ../../nbs/14_callback.schedule.ipynb 54
134
+ @patch
135
+ def fit_flat_cos(self:Learner, n_epoch, lr=None, div_final=1e5, pct_start=0.75, wd=None,
136
+ cbs=None, reset_opt=False, start_epoch=0):
137
+ "Fit `self.model` for `n_epoch` at flat `lr` before a cosine annealing."
138
+ if self.opt is None: self.create_opt()
139
+ self.opt.set_hyper('lr', self.lr if lr is None else lr)
140
+ lr = np.array([h['lr'] for h in self.opt.hypers])
141
+ scheds = {'lr': combined_cos(pct_start, lr, lr, lr/div_final)}
142
+ self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd, start_epoch=0)
143
+
144
+ # %% ../../nbs/14_callback.schedule.ipynb 57
145
+ @patch
146
+ def fit_sgdr(self:Learner, n_cycles, cycle_len, lr_max=None, cycle_mult=2, cbs=None, reset_opt=False, wd=None,
147
+ start_epoch=0):
148
+ "Fit `self.model` for `n_cycles` of `cycle_len` using SGDR."
149
+ if self.opt is None: self.create_opt()
150
+ self.opt.set_hyper('lr', self.lr if lr_max is None else lr_max)
151
+ lr_max = np.array([h['lr'] for h in self.opt.hypers])
152
+ n_epoch = cycle_len * (cycle_mult**n_cycles-1)//(cycle_mult-1)
153
+ pcts = [cycle_len * cycle_mult**i / n_epoch for i in range(n_cycles)]
154
+ scheds = [SchedCos(lr_max, 0) for _ in range(n_cycles)]
155
+ scheds = {'lr': combine_scheds(pcts, scheds)}
156
+ self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd, start_epoch=start_epoch)
157
+
158
+ # %% ../../nbs/14_callback.schedule.ipynb 60
159
+ @patch
160
+ @delegates(Learner.fit_one_cycle)
161
+ def fine_tune(self:Learner, epochs, base_lr=2e-3, freeze_epochs=1, lr_mult=100,
162
+ pct_start=0.3, div=5.0, **kwargs):
163
+ "Fine tune with `Learner.freeze` for `freeze_epochs`, then with `Learner.unfreeze` for `epochs`, using discriminative LR."
164
+ self.freeze()
165
+ self.fit_one_cycle(freeze_epochs, slice(base_lr), pct_start=0.99, **kwargs)
166
+ base_lr /= 2
167
+ self.unfreeze()
168
+ self.fit_one_cycle(epochs, slice(base_lr/lr_mult, base_lr), pct_start=pct_start, div=div, **kwargs)
169
+
+ # %% ../../nbs/14_callback.schedule.ipynb 67
+ @docs
+ class LRFinder(ParamScheduler):
+     "Training with exponentially growing learning rate"
+     def __init__(self, start_lr=1e-7, end_lr=10, num_it=100, stop_div=True):
+         if num_it < 6: num_it = 6
+         self.scheds = {'lr': [SchedExp(s, e) for (s,e) in zip(start_lr,end_lr)
+                               ] if is_listy(start_lr) else SchedExp(start_lr, end_lr)}
+         self.num_it,self.stop_div = num_it,stop_div
+
+     def before_fit(self):
+         super().before_fit()
+         path = self.path/self.model_dir
+         path.mkdir(parents=True, exist_ok=True)
+         self.tmp_d = tempfile.TemporaryDirectory(dir=path)
+         self.tmp_p = Path(self.tmp_d.name).stem
+         self.learn.save(f'{self.tmp_p}/_tmp')
+         self.best_loss = float('inf')
+
+     def before_batch(self): self._update_val(self.train_iter/self.num_it)
+
+     def after_batch(self):
+         super().after_batch()
+         if self.smooth_loss < self.best_loss: self.best_loss = self.smooth_loss
+         if self.smooth_loss > 4*self.best_loss and self.stop_div: raise CancelFitException()
+         if self.train_iter >= self.num_it: raise CancelFitException()
+
+     def before_validate(self): raise CancelValidException()
+
+     def after_fit(self):
+         self.learn.opt.zero_grad() # Needed before detaching the optimizer for future fits
+         tmp_f = self.path/self.model_dir/self.tmp_p/'_tmp.pth'
+         if tmp_f.exists():
+             self.learn.load(f'{self.tmp_p}/_tmp', with_opt=True)
+             self.tmp_d.cleanup()
+
+     _docs = {"before_fit": "Initialize container for hyper-parameters and save the model",
+              "before_batch": "Set the proper hyper-parameters in the optimizer",
+              "after_batch": "Record hyper-parameters of this batch and potentially stop training",
+              "after_fit": "Save the hyper-parameters in the recorder if there is one and load the original model",
+              "before_validate": "Skip the validation part of training"}
+
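`LRFinder` drives the learning rate through `SchedExp`, an exponential interpolation between `start_lr` and `end_lr`. A standalone sketch of that schedule (assuming the standard formula `lr(p) = start * (end/start)**p`, where `p` is the fraction of iterations completed):

```python
import math

# Exponential LR schedule the finder sweeps through, in plain Python.
def sched_exp(start, end, p):
    # p in [0, 1]: geometric interpolation from start to end
    return start * (end / start) ** p

start_lr, end_lr, num_it = 1e-7, 10.0, 100
lrs = [sched_exp(start_lr, end_lr, i / num_it) for i in range(num_it)]

assert math.isclose(lrs[0], 1e-7)
# halfway through, the LR is the geometric mean of start and end
assert math.isclose(sched_exp(start_lr, end_lr, 0.5), math.sqrt(start_lr * end_lr))
```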
+ # %% ../../nbs/14_callback.schedule.ipynb 78
+ def valley(lrs:list, losses:list, num_it:int):
+     "Suggests a learning rate from the longest valley and returns its index"
+     n = len(losses)
+     max_start, max_end = 0,0
+
+     # find the longest valley
+     lds = [1]*n
+     for i in range(1,n):
+         for j in range(0,i):
+             if (losses[i] < losses[j]) and (lds[i] < lds[j] + 1):
+                 lds[i] = lds[j] + 1
+             if lds[max_end] < lds[i]:
+                 max_end = i
+                 max_start = max_end - lds[max_end]
+
+     sections = (max_end - max_start) / 3
+     idx = max_start + int(sections) + int(sections/2)
+
+     return float(lrs[idx]), (float(lrs[idx]), losses[idx])
+
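Since `valley` only uses plain indexing, its longest-decreasing-subsequence scan can be exercised on ordinary lists. This standalone sketch repeats the same logic on a synthetic, monotonically improving loss curve (all names local to the sketch); the whole curve forms one valley, so the pick lands about halfway along it:

```python
# Rerun of the longest-valley scan on synthetic data.
lrs = [10 ** (-7 + 5 * i / 29) for i in range(30)]   # log-spaced LRs
losses = [1.0 - 0.01 * i for i in range(30)]         # monotonically improving

n = len(losses)
max_start, max_end = 0, 0
lds = [1] * n  # lds[i]: longest decreasing run of losses ending at i
for i in range(1, n):
    for j in range(i):
        if losses[i] < losses[j] and lds[i] < lds[j] + 1:
            lds[i] = lds[j] + 1
        if lds[max_end] < lds[i]:
            max_end = i
            max_start = max_end - lds[max_end]

sections = (max_end - max_start) / 3
idx = max_start + int(sections) + int(sections / 2)  # ~middle third of the valley
suggestion = float(lrs[idx])

print(idx)  # 14
```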
+ # %% ../../nbs/14_callback.schedule.ipynb 81
+ def slide(lrs:list, losses:list, num_it:int, lr_diff:int=15, thresh:float=.005, adjust_value:float=1.):
+     "Suggests a learning rate following an interval slide rule and returns its index"
+     losses = to_np(losses)
+     loss_grad = np.gradient(losses)
+
+     r_idx = -1
+     l_idx = r_idx - lr_diff
+     local_min_lr = lrs[l_idx]
+     while (l_idx >= -len(losses)) and (abs(loss_grad[r_idx] - loss_grad[l_idx]) > thresh):
+         local_min_lr = lrs[l_idx]
+         r_idx -= 1
+         l_idx -= 1
+
+     suggestion = float(local_min_lr) * adjust_value
+     idx = np.interp(np.log10(suggestion), np.log10(lrs), losses)
+     return suggestion, (suggestion, idx)
+
+ # %% ../../nbs/14_callback.schedule.ipynb 84
+ def minimum(lrs:list, losses:list, num_it:int):
+     "Suggests a learning rate one-tenth the minimum before divergence and returns its index"
+     lr_min = lrs[losses.argmin()].item()
+     loss_idx = losses[min(range(len(lrs)), key=lambda i: abs(lrs[i]-lr_min))]
+     return lr_min/10, (lr_min, loss_idx)
+
+ # %% ../../nbs/14_callback.schedule.ipynb 86
+ def steep(lrs:list, losses:list, num_it:int) -> (float, tuple):
+     "Suggests a learning rate when the slope is the steepest and returns its index"
+     grads = (losses[1:]-losses[:-1]) / (lrs[1:].log()-lrs[:-1].log())
+     lr_steep = lrs[grads.argmin()].item()
+     loss_idx = losses[min(range(len(lrs)), key=lambda i: abs(lrs[i]-lr_steep))]
+     return lr_steep, (lr_steep, loss_idx)
+
+ # %% ../../nbs/14_callback.schedule.ipynb 88
+ @patch
+ def plot_lr_find(self:Recorder, skip_end=5, return_fig=True, suggestions=None, nms=None, **kwargs):
+     "Plot the result of an LR Finder test (won't work if you didn't do `learn.lr_find()` before)"
+     lrs    = self.lrs    if skip_end==0 else self.lrs   [:-skip_end]
+     losses = self.losses if skip_end==0 else self.losses[:-skip_end]
+     fig, ax = plt.subplots(1,1)
+     ax.plot(lrs, losses)
+     ax.set_ylabel("Loss")
+     ax.set_xlabel("Learning Rate")
+     ax.set_xscale('log')
+     if suggestions:
+         colors = plt.rcParams['axes.prop_cycle'].by_key()['color'][1:]
+         for (val, idx), nm, color in zip(suggestions, nms, colors):
+             ax.plot(val, idx, 'o', label=nm, c=color)
+         ax.legend(loc='best')
+
+ # %% ../../nbs/14_callback.schedule.ipynb 89
+ mk_class("SuggestionMethod", **{o.__name__.capitalize():o for o in [valley,slide,minimum,steep]},
+          doc="All possible suggestion methods as convenience attributes to get tab-completion and typo-proofing")
+
+ # %% ../../nbs/14_callback.schedule.ipynb 90
+ @patch
+ def lr_find(self:Learner, start_lr=1e-7, end_lr=10, num_it=100, stop_div=True, show_plot=True, suggest_funcs=(SuggestionMethod.Valley)):
+     "Launch a mock training to find a good learning rate and return suggestions based on `suggest_funcs` as a named tuple"
+     n_epoch = num_it//len(self.dls.train) + 1
+     cb=LRFinder(start_lr=start_lr, end_lr=end_lr, num_it=num_it, stop_div=stop_div)
+     with self.no_logging(): self.fit(n_epoch, cbs=cb)
+     if suggest_funcs is not None:
+         lrs, losses = tensor(self.recorder.lrs[num_it//10:-5]), tensor(self.recorder.losses[num_it//10:-5])
+         nan_idxs = torch.nonzero(torch.isnan(losses.view(-1)))
+         if len(nan_idxs) > 0:
+             drop_idx = min(nan_idxs)
+             lrs = lrs[:drop_idx]
+             losses = losses[:drop_idx]
+         _suggestions, nms = [], []
+         for func in tuplify(suggest_funcs):
+             nms.append(func.__name__ if not isinstance(func, partial) else func.func.__name__) # deal with partials
+             _suggestions.append(func(lrs, losses, num_it))
+
+         SuggestedLRs = collections.namedtuple('SuggestedLRs', nms)
+         lrs, pnts = [], []
+         for lr, pnt in _suggestions:
+             lrs.append(lr)
+             pnts.append(pnt)
+         if show_plot: self.recorder.plot_lr_find(suggestions=pnts, nms=nms)
+         return SuggestedLRs(*lrs)
+
+     elif show_plot: self.recorder.plot_lr_find()
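Before calling the suggestion functions, `lr_find` truncates the recorded curve at the first NaN loss so divergence does not pollute the suggestions. The same idea in plain Python, without torch (list names local to the sketch):

```python
import math

# Keep only the points recorded before the first NaN loss, as lr_find does.
losses = [0.9, 0.7, 0.5, float('nan'), 0.4]
lrs    = [1e-5, 1e-4, 1e-3, 1e-2, 1e-1]

nan_idxs = [i for i, l in enumerate(losses) if math.isnan(l)]
if nan_idxs:
    drop_idx = min(nan_idxs)
    lrs, losses = lrs[:drop_idx], losses[:drop_idx]

print(losses)  # [0.9, 0.7, 0.5]
```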
fastai/callback/tensorboard.py ADDED
@@ -0,0 +1,172 @@
+ # AUTOGENERATED! DO NOT EDIT! File to edit: ../../nbs/70a_callback.tensorboard.ipynb.
+
+ # %% ../../nbs/70a_callback.tensorboard.ipynb 3
+ from __future__ import annotations
+ from ..basics import *
+
+ # %% auto 0
+ __all__ = ['TensorBoardBaseCallback', 'TensorBoardCallback', 'TensorBoardProjectorCallback', 'projector_word_embeddings',
+            'tensorboard_log']
+
+ # %% ../../nbs/70a_callback.tensorboard.ipynb 18
+ import tensorboard
+ from torch.utils.tensorboard import SummaryWriter
+ from .fp16 import ModelToHalf
+ from .hook import hook_output
+
+ # %% ../../nbs/70a_callback.tensorboard.ipynb 19
+ class TensorBoardBaseCallback(Callback):
+     order = Recorder.order+1
+     "Base class for tensorboard callbacks"
+     def __init__(self): self.run_projector = False
+
+     def after_pred(self):
+         if self.run_projector: self.feat = _add_projector_features(self.learn, self.h, self.feat)
+
+     def after_validate(self):
+         if not self.run_projector: return
+         self.run_projector = False
+         self._remove()
+         _write_projector_embedding(self.learn, self.writer, self.feat)
+
+     def after_fit(self):
+         if self.run: self.writer.close()
+
+     def _setup_projector(self):
+         self.run_projector = True
+         self.h = hook_output(self.learn.model[1][1] if not self.layer else self.layer)
+         self.feat = {}
+
+     def _setup_writer(self): self.writer = SummaryWriter(log_dir=self.log_dir)
+     def __del__(self): self._remove()
+     def _remove(self):
+         if getattr(self, 'h', None): self.h.remove()
+
+ # %% ../../nbs/70a_callback.tensorboard.ipynb 21
+ class TensorBoardCallback(TensorBoardBaseCallback):
+     "Saves model topology, losses & metrics for tensorboard and tensorboard projector during training"
+     def __init__(self, log_dir=None, trace_model=True, log_preds=True, n_preds=9, projector=False, layer=None):
+         super().__init__()
+         store_attr()
+
+     def before_fit(self):
+         self.run = not hasattr(self.learn, 'lr_finder') and not hasattr(self, "gather_preds") and rank_distrib()==0
+         if not self.run: return
+         self._setup_writer()
+         if self.trace_model:
+             if hasattr(self.learn, 'mixed_precision'):
+                 raise Exception("Can't trace model in mixed precision, pass `trace_model=False` or don't use FP16.")
+             b = self.dls.one_batch()
+             self.learn._split(b)
+             self.writer.add_graph(self.model, *self.xb)
+
+     def after_batch(self):
+         self.writer.add_scalar('train_loss', self.smooth_loss, self.train_iter)
+         for i,h in enumerate(self.opt.hypers):
+             for k,v in h.items(): self.writer.add_scalar(f'{k}_{i}', v, self.train_iter)
+
+     def after_epoch(self):
+         for n,v in zip(self.recorder.metric_names[2:-1], self.recorder.log[2:-1]):
+             self.writer.add_scalar(n, v, self.train_iter)
+         if self.log_preds:
+             b = self.dls.valid.one_batch()
+             self.learn.one_batch(0, b)
+             preds = getcallable(self.loss_func, 'activation')(self.pred)
+             out = getcallable(self.loss_func, 'decodes')(preds)
+             x,y,its,outs = self.dls.valid.show_results(b, out, show=False, max_n=self.n_preds)
+             tensorboard_log(x, y, its, outs, self.writer, self.train_iter)
+
+     def before_validate(self):
+         if self.projector: self._setup_projector()
+
+ # %% ../../nbs/70a_callback.tensorboard.ipynb 23
+ class TensorBoardProjectorCallback(TensorBoardBaseCallback):
+     "Extracts and exports image features for tensorboard projector during inference"
+     def __init__(self, log_dir=None, layer=None):
+         super().__init__()
+         store_attr()
+
+     def before_fit(self):
+         self.run = not hasattr(self.learn, 'lr_finder') and hasattr(self, "gather_preds") and rank_distrib()==0
+         if not self.run: return
+         self._setup_writer()
+
+     def before_validate(self):
+         self._setup_projector()
+
+ # %% ../../nbs/70a_callback.tensorboard.ipynb 25
+ def _write_projector_embedding(learn, writer, feat):
+     lbls = [learn.dl.vocab[l] for l in feat['lbl']] if getattr(learn.dl, 'vocab', None) else None
+     vecs = feat['vec'].squeeze()
+     writer.add_embedding(vecs, metadata=lbls, label_img=feat['img'], global_step=learn.train_iter)
+
+ # %% ../../nbs/70a_callback.tensorboard.ipynb 26
+ def _add_projector_features(learn, hook, feat):
+     img = _normalize_for_projector(learn.x)
+     first_epoch = True if learn.iter == 0 else False
+     feat['vec'] = hook.stored if first_epoch else torch.cat((feat['vec'], hook.stored),0)
+     feat['img'] = img if first_epoch else torch.cat((feat['img'], img),0)
+     if getattr(learn.dl, 'vocab', None):
+         feat['lbl'] = learn.y if first_epoch else torch.cat((feat['lbl'], learn.y),0)
+     return feat
+
+ # %% ../../nbs/70a_callback.tensorboard.ipynb 27
+ def _get_embeddings(model, layer):
+     layer = model[0].encoder if layer == None else layer
+     return layer.weight
+
+ # %% ../../nbs/70a_callback.tensorboard.ipynb 28
+ @typedispatch
+ def _normalize_for_projector(x:TensorImage):
+     # normalize tensor to be between 0-1
+     img = x.clone()
+     sz = img.shape
+     img = img.view(x.size(0), -1)
+     img -= img.min(1, keepdim=True)[0]
+     img /= img.max(1, keepdim=True)[0]
+     img = img.view(*sz)
+     return img
+
+ # %% ../../nbs/70a_callback.tensorboard.ipynb 29
131
+ from ..text.all import LMLearner, TextLearner
132
+
133
+ # %% ../../nbs/70a_callback.tensorboard.ipynb 30
134
+ def projector_word_embeddings(learn=None, layer=None, vocab=None, limit=-1, start=0, log_dir=None):
135
+ "Extracts and exports word embeddings from language models embedding layers"
136
+ if not layer:
137
+ if isinstance(learn, LMLearner): layer = learn.model[0].encoder
138
+ elif isinstance(learn, TextLearner): layer = learn.model[0].module.encoder
139
+ emb = layer.weight
140
+ img = torch.full((len(emb),3,8,8), 0.7)
141
+ vocab = learn.dls.vocab[0] if vocab == None else vocab
142
+ vocab = list(map(lambda x: f'{x}_', vocab))
143
+ writer = SummaryWriter(log_dir=log_dir)
144
+ end = start + limit if limit >= 0 else -1
145
+ writer.add_embedding(emb[start:end], metadata=vocab[start:end], label_img=img[start:end])
146
+ writer.close()
147
+
148
+ # %% ../../nbs/70a_callback.tensorboard.ipynb 32
149
+ from ..vision.data import *
150
+
151
+ # %% ../../nbs/70a_callback.tensorboard.ipynb 33
152
+ @typedispatch
153
+ def tensorboard_log(x:TensorImage, y: TensorCategory, samples, outs, writer, step):
154
+ fig,axs = get_grid(len(samples), return_fig=True)
155
+ for i in range(2):
156
+ axs = [b.show(ctx=c) for b,c in zip(samples.itemgot(i),axs)]
157
+ axs = [r.show(ctx=c, color='green' if b==r else 'red')
158
+ for b,r,c in zip(samples.itemgot(1),outs.itemgot(0),axs)]
159
+ writer.add_figure('Sample results', fig, step)
160
+
161
+ # %% ../../nbs/70a_callback.tensorboard.ipynb 34
162
+ from ..vision.core import TensorPoint,TensorBBox
163
+
164
+ # %% ../../nbs/70a_callback.tensorboard.ipynb 35
165
+ @typedispatch
166
+ def tensorboard_log(x:TensorImage, y: TensorImageBase|TensorPoint|TensorBBox, samples, outs, writer, step):
167
+ fig,axs = get_grid(len(samples), return_fig=True, double=True)
168
+ for i in range(2):
169
+ axs[::2] = [b.show(ctx=c) for b,c in zip(samples.itemgot(i),axs[::2])]
170
+ for x in [samples,outs]:
171
+ axs[1::2] = [b.show(ctx=c) for b,c in zip(x.itemgot(0),axs[1::2])]
172
+ writer.add_figure('Sample results', fig, step)
fastai/callback/tracker.py ADDED
@@ -0,0 +1,139 @@
+ # AUTOGENERATED! DO NOT EDIT! File to edit: ../../nbs/17_callback.tracker.ipynb.
+
+ # %% ../../nbs/17_callback.tracker.ipynb 2
+ from __future__ import annotations
+ from ..basics import *
+ from .progress import *
+ from .fp16 import MixedPrecision
+
+ # %% auto 0
+ __all__ = ['TerminateOnNaNCallback', 'TrackerCallback', 'EarlyStoppingCallback', 'SaveModelCallback', 'ReduceLROnPlateau']
+
+ # %% ../../nbs/17_callback.tracker.ipynb 6
+ class TerminateOnNaNCallback(Callback):
+     "A `Callback` that terminates training if loss is NaN."
+     order=-9
+     def after_batch(self):
+         "Test if `last_loss` is NaN and interrupts training."
+         if torch.isinf(self.loss) or torch.isnan(self.loss): raise CancelFitException
+
+ # %% ../../nbs/17_callback.tracker.ipynb 10
+ class TrackerCallback(Callback):
+     "A `Callback` that keeps track of the best value in `monitor`."
+     order,remove_on_fetch,_only_train_loop = 60,True,True
+     def __init__(self,
+         monitor='valid_loss', # value (usually loss or metric) being monitored.
+         comp=None, # numpy comparison operator; np.less if monitor is loss, np.greater if monitor is metric.
+         min_delta=0., # minimum delta between the last monitor value and the best monitor value.
+         reset_on_fit=True # before model fitting, reset value being monitored to -infinity (if monitor is metric) or +infinity (if monitor is loss).
+     ):
+         if comp is None: comp = np.less if 'loss' in monitor or 'error' in monitor else np.greater
+         if comp == np.less: min_delta *= -1
+         self.monitor,self.comp,self.min_delta,self.reset_on_fit,self.best = monitor,comp,min_delta,reset_on_fit,None
+
+     def before_fit(self):
+         "Prepare the monitored value"
+         self.run = not hasattr(self, "lr_finder") and not hasattr(self, "gather_preds")
+         if self.reset_on_fit or self.best is None: self.best = float('inf') if self.comp == np.less else -float('inf')
+         assert self.monitor in self.recorder.metric_names[1:]
+         self.idx = list(self.recorder.metric_names[1:]).index(self.monitor)
+
+     def after_epoch(self):
+         "Compare the last value to the best up to now"
+         val = self.recorder.values[-1][self.idx]
+         if self.comp(val - self.min_delta, self.best): self.best,self.new_best = val,True
+         else: self.new_best = False
+
+     def after_fit(self): self.run=True
+
+ # %% ../../nbs/17_callback.tracker.ipynb 19
50
+ class EarlyStoppingCallback(TrackerCallback):
51
+ "A `TrackerCallback` that terminates training when monitored quantity stops improving."
52
+ order=TrackerCallback.order+3
53
+ def __init__(self,
54
+ monitor='valid_loss', # value (usually loss or metric) being monitored.
55
+ comp=None, # numpy comparison operator; np.less if monitor is loss, np.greater if monitor is metric.
56
+ min_delta=0., # minimum delta between the last monitor value and the best monitor value.
57
+ patience=1, # number of epochs to wait when training has not improved model.
58
+ reset_on_fit=True # before model fitting, reset value being monitored to -infinity (if monitor is metric) or +infinity (if monitor is loss).
59
+ ):
60
+ super().__init__(monitor=monitor, comp=comp, min_delta=min_delta, reset_on_fit=reset_on_fit)
61
+ self.patience = patience
62
+
63
+ def before_fit(self): self.wait = 0; super().before_fit()
64
+ def after_epoch(self):
65
+ "Compare the value monitored to its best score and maybe stop training."
66
+ super().after_epoch()
67
+ if self.new_best: self.wait = 0
68
+ else:
69
+ self.wait += 1
70
+ if self.wait >= self.patience:
71
+ print(f'No improvement since epoch {self.epoch-self.wait}: early stopping')
72
+ raise CancelFitException()
73
+
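The stopping rule above is a simple patience counter: it resets on every new best and cancels the fit once `patience` non-improving epochs accumulate. A standalone sketch of that counter (function name local to the sketch):

```python
# Patience counter behind EarlyStoppingCallback, as a pure function.
def epochs_until_stop(new_bests, patience):
    """Return the epoch index at which training would stop, or None."""
    wait = 0
    for epoch, improved in enumerate(new_bests):
        if improved:
            wait = 0
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return None

# improves twice, then stalls; with patience=2 training stops at epoch 3
stop_epoch = epochs_until_stop([True, True, False, False, False], patience=2)
print(stop_epoch)  # 3
```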
+ # %% ../../nbs/17_callback.tracker.ipynb 26
+ class SaveModelCallback(TrackerCallback):
+     "A `TrackerCallback` that saves the model's best during training and loads it at the end."
+     order = TrackerCallback.order+1
+     def __init__(self,
+         monitor='valid_loss', # value (usually loss or metric) being monitored.
+         comp=None, # numpy comparison operator; np.less if monitor is loss, np.greater if monitor is metric.
+         min_delta=0., # minimum delta between the last monitor value and the best monitor value.
+         fname='model', # model name to be used when saving model.
+         every_epoch=False, # if true, save model after every epoch; else save only when model is better than existing best.
+         at_end=False, # if true, save model when training ends; else load best model if there is only one saved model.
+         with_opt=False, # if true, save optimizer state (if any available) when saving model.
+         reset_on_fit=True # before model fitting, reset value being monitored to -infinity (if monitor is metric) or +infinity (if monitor is loss).
+     ):
+         super().__init__(monitor=monitor, comp=comp, min_delta=min_delta, reset_on_fit=reset_on_fit)
+         assert not (every_epoch and at_end), "every_epoch and at_end cannot both be set to True"
+         # keep track of file path for loggers
+         self.last_saved_path = None
+         store_attr('fname,every_epoch,at_end,with_opt')
+
+     def _save(self, name): self.last_saved_path = self.learn.save(name, with_opt=self.with_opt)
+
+     def after_epoch(self):
+         "Compare the value monitored to its best score and save if best."
+         if self.every_epoch:
+             if (self.epoch%self.every_epoch) == 0: self._save(f'{self.fname}_{self.epoch}')
+         else: #every improvement
+             super().after_epoch()
+             if self.new_best:
+                 print(f'Better model found at epoch {self.epoch} with {self.monitor} value: {self.best}.')
+                 self._save(f'{self.fname}')
+
+     def after_fit(self, **kwargs):
+         "Load the best model."
+         if self.at_end: self._save(f'{self.fname}')
+         elif not self.every_epoch: self.learn.load(f'{self.fname}', with_opt=self.with_opt)
+
+ # %% ../../nbs/17_callback.tracker.ipynb 30
+ class ReduceLROnPlateau(TrackerCallback):
+     "A `TrackerCallback` that reduces learning rate when a metric has stopped improving."
+     order=TrackerCallback.order+2
+     def __init__(self,
+         monitor='valid_loss', # value (usually loss or metric) being monitored.
+         comp=None, # numpy comparison operator; np.less if monitor is loss, np.greater if monitor is metric.
+         min_delta=0., # minimum delta between the last monitor value and the best monitor value.
+         patience=1, # number of epochs to wait when training has not improved model.
+         factor=10., # the denominator to divide the learning rate by, when reducing the learning rate.
+         min_lr=0, # the minimum learning rate allowed; learning rate cannot be reduced below this minimum.
+         reset_on_fit=True # before model fitting, reset value being monitored to -infinity (if monitor is metric) or +infinity (if monitor is loss).
+     ):
+         super().__init__(monitor=monitor, comp=comp, min_delta=min_delta, reset_on_fit=reset_on_fit)
+         self.patience,self.factor,self.min_lr = patience,factor,min_lr
+
+     def before_fit(self): self.wait = 0; super().before_fit()
+     def after_epoch(self):
+         "Compare the value monitored to its best score and reduce LR by `factor` if no improvement."
+         super().after_epoch()
+         if self.new_best: self.wait = 0
+         else:
+             self.wait += 1
+             if self.wait >= self.patience:
+                 old_lr = self.opt.hypers[-1]['lr']
+                 for h in self.opt.hypers: h['lr'] = max(h['lr'] / self.factor, self.min_lr)
+                 self.wait = 0
+                 if self.opt.hypers[-1]["lr"] < old_lr:
+                     print(f'Epoch {self.epoch}: reducing lr to {self.opt.hypers[-1]["lr"]}')
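The LR update applied once patience runs out is the single expression `max(lr / factor, min_lr)`: divide by `factor`, but never drop below the floor. A minimal sketch of that step (function name local to the sketch):

```python
# LR reduction rule used by ReduceLROnPlateau above.
def reduce_lr(lr, factor=10., min_lr=0.):
    return max(lr / factor, min_lr)

lr = reduce_lr(1e-3)              # ~1e-4: divided by the default factor
lr = reduce_lr(lr, min_lr=5e-5)   # 5e-05: 1e-5 would undershoot, so clipped
```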