Hannes Kuchelmeister committed on
Commit
c7be723
•
1 Parent(s): ae90a62

cleanup to make ready as submodule

Browse files
Files changed (50)
  1. models/Dockerfile → Dockerfile +0 -0
  2. models/DockerfileCUDA → DockerfileCUDA +0 -0
  3. README.md +1445 -1
  4. annotation-preprocessing/.dockerignore +0 -3
  5. annotation-preprocessing/.env.example +0 -8
  6. annotation-preprocessing/.gitignore +0 -6
  7. annotation-preprocessing/0_fetch_from_database.py +0 -88
  8. annotation-preprocessing/1_splitting_into_patches.py +0 -165
  9. annotation-preprocessing/Dockerfile +0 -13
  10. annotation-preprocessing/README.md +0 -48
  11. annotation-preprocessing/docker-compose.yml +0 -10
  12. annotation-preprocessing/out/.gitignore +0 -0
  13. annotation-preprocessing/requirements.txt +0 -6
  14. {models/configs → configs}/callbacks/default.yaml +0 -0
  15. {models/configs → configs}/callbacks/none.yaml +0 -0
  16. {models/configs → configs}/datamodule/focus.yaml +0 -0
  17. {models/configs → configs}/datamodule/mnist.yaml +0 -0
  18. {models/configs → configs}/debug/default.yaml +0 -0
  19. {models/configs → configs}/debug/limit_batches.yaml +0 -0
  20. {models/configs → configs}/debug/overfit.yaml +0 -0
  21. {models/configs → configs}/debug/profiler.yaml +0 -0
  22. {models/configs → configs}/debug/step.yaml +0 -0
  23. {models/configs → configs}/debug/test_only.yaml +0 -0
  24. {models/configs → configs}/experiment/example.yaml +0 -0
  25. {models/configs → configs}/hparams_search/mnist_optuna.yaml +0 -0
  26. {models/configs → configs}/local/.gitkeep +0 -0
  27. {models/configs → configs}/log_dir/debug.yaml +0 -0
  28. {models/configs → configs}/log_dir/default.yaml +0 -0
  29. {models/configs → configs}/log_dir/evaluation.yaml +0 -0
  30. {models/configs → configs}/logger/comet.yaml +0 -0
  31. {models/configs → configs}/logger/csv.yaml +0 -0
  32. {models/configs → configs}/logger/many_loggers.yaml +0 -0
  33. {models/configs → configs}/logger/mlflow.yaml +0 -0
  34. {models/configs → configs}/logger/neptune.yaml +0 -0
  35. {models/configs → configs}/logger/tensorboard.yaml +0 -0
  36. {models/configs → configs}/logger/wandb.yaml +0 -0
  37. {models/configs → configs}/model/focus.yaml +0 -0
  38. {models/configs → configs}/model/mnist.yaml +0 -0
  39. {models/configs → configs}/test.yaml +0 -0
  40. {models/configs → configs}/train.yaml +0 -0
  41. {models/configs → configs}/trainer/ddp.yaml +0 -0
  42. {models/configs → configs}/trainer/default.yaml +0 -0
  43. {models/configs → configs}/trainer/long.yaml +0 -0
  44. data-preprocessing/.env.example +0 -3
  45. data-preprocessing/extract_annotations.py +0 -45
  46. data-preprocessing/requirements.txt +0 -1
  47. models/docker-compose.cuda.yml → docker-compose.cuda.yml +0 -0
  48. models/docker-compose.yml → docker-compose.yml +0 -0
  49. focus_annotator +0 -1
  50. models/.dockerignore +0 -4
models/Dockerfile → Dockerfile RENAMED
File without changes
models/DockerfileCUDA → DockerfileCUDA RENAMED
File without changes
README.md CHANGED
@@ -1 +1,1445 @@
1
- # master_thesis_code
1
+ <div align="center">
2
+
3
+ # Lightning-Hydra-Template
4
+
5
+ <a href="https://www.python.org/"><img alt="Python" src="https://img.shields.io/badge/-Python 3.7+-blue?style=for-the-badge&logo=python&logoColor=white"></a>
6
+ <a href="https://pytorch.org/get-started/locally/"><img alt="PyTorch" src="https://img.shields.io/badge/-PyTorch 1.8+-ee4c2c?style=for-the-badge&logo=pytorch&logoColor=white"></a>
7
+ <a href="https://pytorchlightning.ai/"><img alt="Lightning" src="https://img.shields.io/badge/-Lightning 1.5+-792ee5?style=for-the-badge&logo=pytorchlightning&logoColor=white"></a>
8
+ <a href="https://hydra.cc/"><img alt="Config: hydra" src="https://img.shields.io/badge/config-hydra 1.1-89b8cd?style=for-the-badge&labelColor=gray"></a>
9
+ <a href="https://black.readthedocs.io/en/stable/"><img alt="Code style: black" src="https://img.shields.io/badge/code%20style-black-black.svg?style=for-the-badge&labelColor=gray"></a>
10
+
11
+ A clean and scalable template to kickstart your deep learning project 🚀⚡🔥<br>
12
+ Click on [<kbd>Use this template</kbd>](https://github.com/ashleve/lightning-hydra-template/generate) to initialize a new repository.
13
+
14
+ _Suggestions are always welcome!_
15
+
16
+ </div>
17
+
18
+ <br><br>
19
+
20
+ ## 📌&nbsp;&nbsp;Introduction
21
+
22
+ This template tries to be as general as possible. It integrates many different MLOps tools.
23
+
24
+ > Effective usage of this template requires learning a couple of technologies: [PyTorch](https://pytorch.org), [PyTorch Lightning](https://www.pytorchlightning.ai) and [Hydra](https://hydra.cc). Knowledge of some experiment logging framework like [Weights&Biases](https://wandb.com), [Neptune](https://neptune.ai) or [MLFlow](https://mlflow.org) is also recommended.
25
+
26
+ **Why you should use it:** it allows you to rapidly iterate over new models/datasets and scale your projects from small single experiments to hyperparameter searches on computing clusters, without writing any boilerplate code. To my knowledge, it's one of the most convenient all-in-one technology stacks for Deep Learning research. It's a good starting point for reproducing papers, Kaggle competitions or small-team research projects. It's also a collection of best practices for efficient workflow and reproducibility.
27
+
28
+ **Why you shouldn't use it:** this template is not designed to be a production environment; it should be used as a fast experimentation tool. Apart from that, Lightning and Hydra are still evolving and integrate many libraries, which means things sometimes break - for the list of currently known bugs, visit [this page](https://github.com/ashleve/lightning-hydra-template/labels/bug). Also, even though Lightning is very flexible, it's not well suited for every possible deep learning task. See [#Limitations](#limitations) for more.
29
+
30
+ ### Why PyTorch Lightning?
31
+
32
+ [PyTorch Lightning](https://github.com/PyTorchLightning/pytorch-lightning) is a lightweight PyTorch wrapper for high-performance AI research.
33
+ It makes your code neatly organized and provides lots of useful features, like the ability to run your model on CPU, GPU, multi-GPU clusters and TPUs.
34
+
35
+ ### Why Hydra?
36
+
37
+ [Hydra](https://github.com/facebookresearch/hydra) is an open-source Python framework that simplifies the development of research and other complex applications. The key feature is the ability to dynamically create a hierarchical configuration by composition and override it through config files and the command line. It allows you to conveniently manage experiments and provides many useful plugins, like [Optuna Sweeper](https://hydra.cc/docs/next/plugins/optuna_sweeper) for hyperparameter search, or [Ray Launcher](https://hydra.cc/docs/next/plugins/ray_launcher) for running jobs on a cluster.
38
+
39
+ <br>
40
+
41
+ ## Main Ideas Of This Template
42
+
43
+ - **Predefined Structure**: clean and scalable so that work can easily be extended and replicated | [#Project Structure](#project-structure)
44
+ - **Rapid Experimentation**: thanks to automating the pipeline with config files and Hydra command-line superpowers | [#Your Superpowers](#your-superpowers)
45
+ - **Reproducibility**: obtaining similar results is supported in multiple ways | [#Reproducibility](#reproducibility)
46
+ - **Little Boilerplate**: so the pipeline can be easily modified | [#How It Works](#how-it-works)
47
+ - **Main Configuration**: main config file specifies default training configuration | [#Main Project Configuration](#main-project-configuration)
48
+ - **Experiment Configurations**: can be composed out of smaller configs and override chosen hyperparameters | [#Experiment Configuration](#experiment-configuration)
49
+ - **Workflow**: comes down to 4 simple steps | [#Workflow](#workflow)
50
+ - **Experiment Tracking**: many logging frameworks can be easily integrated, like Tensorboard, MLFlow or W&B | [#Experiment Tracking](#experiment-tracking)
51
+ - **Logs**: all logs (checkpoints, data from loggers, hparams, etc.) are stored in a convenient folder structure imposed by Hydra | [#Logs](#logs)
52
+ - **Hyperparameter Search**: made easier with Hydra built-in plugins like [Optuna Sweeper](https://hydra.cc/docs/next/plugins/optuna_sweeper) | [#Hyperparameter Search](#hyperparameter-search)
53
+ - **Tests**: unit tests and shell/command based tests for speeding up the development | [#Tests](#tests)
54
+ - **Best Practices**: a couple of recommended tools, practices and standards for efficient workflow and reproducibility | [#Best Practices](#best-practices)
55
+
56
+ <br>
57
+
58
+ ## Project Structure
59
+
60
+ The directory structure of a new project looks like this:
61
+
62
+ ```
63
+ ├── configs                  <- Hydra configuration files
64
+ │   ├── callbacks            <- Callbacks configs
65
+ │   ├── datamodule           <- Datamodule configs
66
+ │   ├── debug                <- Debugging configs
67
+ │   ├── experiment           <- Experiment configs
68
+ │   ├── hparams_search       <- Hyperparameter search configs
69
+ │   ├── local                <- Local configs
70
+ │   ├── log_dir              <- Logging directory configs
71
+ │   ├── logger               <- Logger configs
72
+ │   ├── model                <- Model configs
73
+ │   ├── trainer              <- Trainer configs
74
+ │   │
75
+ │   ├── test.yaml            <- Main config for testing
76
+ │   └── train.yaml           <- Main config for training
77
+ │
78
+ ├── data                     <- Project data
79
+ │
80
+ ├── logs                     <- Logs generated by Hydra and PyTorch Lightning loggers
81
+ │
82
+ ├── notebooks                <- Jupyter notebooks. Naming convention is a number (for ordering),
83
+ │                               the creator's initials, and a short `-` delimited description,
84
+ │                               e.g. `1.0-jqp-initial-data-exploration.ipynb`.
85
+ │
86
+ ├── scripts                  <- Shell scripts
87
+ │
88
+ ├── src                      <- Source code
89
+ │   ├── datamodules          <- Lightning datamodules
90
+ │   ├── models               <- Lightning models
91
+ │   ├── utils                <- Utility scripts
92
+ │   ├── vendor               <- Third party code that cannot be installed using PIP/Conda
93
+ │   │
94
+ │   ├── testing_pipeline.py
95
+ │   └── training_pipeline.py
96
+ │
97
+ ├── tests                    <- Tests of any kind
98
+ │   ├── helpers              <- A couple of testing utilities
99
+ │   ├── shell                <- Shell/command based tests
100
+ │   └── unit                 <- Unit tests
101
+ │
102
+ ├── test.py                  <- Run testing
103
+ ├── train.py                 <- Run training
104
+ │
105
+ ├── .env.example             <- Template of the file for storing private environment variables
106
+ ├── .gitignore               <- List of files/folders ignored by git
107
+ ├── .pre-commit-config.yaml  <- Configuration of pre-commit hooks for code formatting
108
+ ├── requirements.txt         <- File for installing python dependencies
109
+ ├── setup.cfg                <- Configuration of linters and pytest
110
+ └── README.md
111
+ ```
112
+
113
+ <br>
114
+
115
+ ## 🚀&nbsp;&nbsp;Quickstart
116
+
117
+ ```bash
118
+ # clone project
119
+ git clone https://github.com/ashleve/lightning-hydra-template
120
+ cd lightning-hydra-template
121
+
122
+ # [OPTIONAL] create conda environment
123
+ conda create -n myenv python=3.8
124
+ conda activate myenv
125
+
126
+ # install pytorch according to instructions
127
+ # https://pytorch.org/get-started/
128
+
129
+ # install requirements
130
+ pip install -r requirements.txt
131
+ ```
132
+
133
+ The template contains an example of MNIST classification.<br>
134
+ When running `python train.py` you should see something like this:
135
+
136
+ <div align="center">
137
+
138
+ ![](https://github.com/ashleve/lightning-hydra-template/blob/resources/terminal.png)
139
+
140
+ </div>
141
+
142
+ ### ⚡&nbsp;&nbsp;Your Superpowers
143
+
144
+ <details>
145
+ <summary><b>Override any config parameter from command line</b></summary>
146
+
147
+ > Hydra allows you to easily overwrite any parameter defined in your config.
148
+
149
+ ```bash
150
+ python train.py trainer.max_epochs=20 model.lr=1e-4
151
+ ```
152
+
153
+ > You can also add new parameters with `+` sign.
154
+
155
+ ```bash
156
+ python train.py +model.new_param="uwu"
157
+ ```
158
+
159
+ </details>
160
+
161
+ <details>
162
+ <summary><b>Train on CPU, GPU, multi-GPU and TPU</b></summary>
163
+
164
+ > PyTorch Lightning makes it easy to train your models on different hardware.
165
+
166
+ ```bash
167
+ # train on CPU
168
+ python train.py trainer.gpus=0
169
+
170
+ # train on 1 GPU
171
+ python train.py trainer.gpus=1
172
+
173
+ # train on TPU
174
+ python train.py +trainer.tpu_cores=8
175
+
176
+ # train with DDP (Distributed Data Parallel) (4 GPUs)
177
+ python train.py trainer.gpus=4 +trainer.strategy=ddp
178
+
179
+ # train with DDP (Distributed Data Parallel) (8 GPUs, 2 nodes)
180
+ python train.py trainer.gpus=4 +trainer.num_nodes=2 +trainer.strategy=ddp
181
+ ```
182
+
183
+ </details>
184
+
185
+ <details>
186
+ <summary><b>Train with mixed precision</b></summary>
187
+
188
+ ```bash
189
+ # train with pytorch native automatic mixed precision (AMP)
190
+ python train.py trainer.gpus=1 +trainer.precision=16
191
+ ```
192
+
193
+ </details>
194
+
195
+ <!-- deepspeed support still in beta
196
+ <details>
197
+ <summary><b>Optimize large scale models on multiple GPUs with Deepspeed</b></summary>
198
+
199
+ ```bash
200
+ python train.py +trainer.
201
+ ```
202
+
203
+ </details>
204
+ -->
205
+
206
+ <details>
207
+ <summary><b>Train model with any logger available in PyTorch Lightning, like Weights&Biases or Tensorboard</b></summary>
208
+
209
+ > PyTorch Lightning provides convenient integrations with most popular logging frameworks, like Tensorboard, Neptune or simple csv files. Read more [here](#experiment-tracking). Using wandb requires you to [set up an account](https://www.wandb.com/) first. After that, just complete the config as below.<br>
+ > **Click [here](https://wandb.ai/hobglob/template-dashboard/) to see an example wandb dashboard generated with this template.**
210
+
211
+ ```yaml
212
+ # set project and entity names in `configs/logger/wandb`
213
+ wandb:
214
+ project: "your_project_name"
215
+ entity: "your_wandb_team_name"
216
+ ```
217
+
218
+ ```bash
219
+ # train model with Weights&Biases (link to wandb dashboard should appear in the terminal)
220
+ python train.py logger=wandb
221
+ ```
222
+
223
+ </details>
224
+
225
+ <details>
226
+ <summary><b>Train model with chosen experiment config</b></summary>
227
+
228
+ > Experiment configurations are placed in [configs/experiment/](configs/experiment/).
229
+
230
+ ```bash
231
+ python train.py experiment=example
232
+ ```
233
+
234
+ </details>
235
+
236
+ <details>
237
+ <summary><b>Attach some callbacks to run</b></summary>
238
+
239
+ > Callbacks can be used for things such as model checkpointing, early stopping and [many more](https://pytorch-lightning.readthedocs.io/en/latest/extensions/callbacks.html#built-in-callbacks).<br>
240
+ > Callbacks configurations are placed in [configs/callbacks/](configs/callbacks/).
241
+
242
+ ```bash
243
+ python train.py callbacks=default
244
+ ```
245
+
246
+ </details>
247
+
248
+ <details>
249
+ <summary><b>Use different tricks available in Pytorch Lightning</b></summary>
250
+
251
+ > PyTorch Lightning provides [40+ useful trainer flags](https://pytorch-lightning.readthedocs.io/en/latest/common/trainer.html#trainer-flags).
252
+
253
+ ```bash
254
+ # gradient clipping may be enabled to avoid exploding gradients
255
+ python train.py +trainer.gradient_clip_val=0.5
256
+
257
+ # stochastic weight averaging can make your models generalize better
258
+ python train.py +trainer.stochastic_weight_avg=true
259
+
260
+ # run validation loop 4 times during a training epoch
261
+ python train.py +trainer.val_check_interval=0.25
262
+
263
+ # accumulate gradients
264
+ python train.py +trainer.accumulate_grad_batches=10
265
+
266
+ # terminate training after 12 hours
267
+ python train.py +trainer.max_time="00:12:00:00"
268
+ ```
269
+
270
+ </details>
271
+
272
+ <details>
273
+ <summary><b>Easily debug</b></summary>
274
+
275
+ > Visit [configs/debug/](configs/debug/) for different debugging configs.
276
+
277
+ ```bash
278
+ # runs 1 epoch in default debugging mode
279
+ # changes logging directory to `logs/debugs/...`
280
+ # sets level of all command line loggers to 'DEBUG'
281
+ # enables extra trainer flags like tracking gradient norm
282
+ # enforces debug-friendly configuration
283
+ python train.py debug=default
284
+
285
+ # runs test epoch without training
286
+ python train.py debug=test_only
287
+
288
+ # run 1 train, val and test loop, using only 1 batch
289
+ python train.py +trainer.fast_dev_run=true
290
+
291
+ # raise exception if there are any numerical anomalies in tensors, like NaN or +/-inf
292
+ python train.py +trainer.detect_anomaly=true
293
+
294
+ # print execution time profiling after training ends
295
+ python train.py +trainer.profiler="simple"
296
+
297
+ # try overfitting to 1 batch
298
+ python train.py +trainer.overfit_batches=1 trainer.max_epochs=20
299
+
300
+ # use only 20% of the data
301
+ python train.py +trainer.limit_train_batches=0.2 \
302
+ +trainer.limit_val_batches=0.2 +trainer.limit_test_batches=0.2
303
+
304
+ # track the 2-norm of the model's gradients
305
+ python train.py +trainer.track_grad_norm=2
306
+ ```
307
+
308
+ </details>
309
+
310
+ <details>
311
+ <summary><b>Resume training from checkpoint</b></summary>
312
+
313
+ > The checkpoint can be either a path or a URL.
314
+
315
+ ```bash
316
+ python train.py trainer.resume_from_checkpoint="/path/to/ckpt/name.ckpt"
317
+ ```
318
+
319
+ > ⚠️ Currently, loading a ckpt in Lightning doesn't resume the logger experiment, but it will be supported in a future Lightning release.
320
+
321
+ </details>
322
+
323
+ <details>
324
+ <summary><b>Execute evaluation for a given checkpoint</b></summary>
325
+
326
+ > The checkpoint can be either a path or a URL.
327
+
328
+ ```bash
329
+ python test.py ckpt_path="/path/to/ckpt/name.ckpt"
330
+ ```
331
+
332
+ </details>
333
+
334
+ <details>
335
+ <summary><b>Create a sweep over hyperparameters</b></summary>
336
+
337
+ ```bash
338
+ # this will run 6 experiments one after the other,
339
+ # each with a different combination of batch_size and learning rate
340
+ python train.py -m datamodule.batch_size=32,64,128 model.lr=0.001,0.0005
341
+ ```
342
+
343
+ > ⚠️ This sweep is not failure-resistant (if one job crashes, the whole sweep crashes).
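
The multirun above expands the comma-separated overrides into a cartesian product of the listed values. A quick pure-Python sanity check of the resulting run count (no Hydra needed; the values are copied from the command above):

```python
from itertools import product

# values from the multirun overrides above
batch_sizes = [32, 64, 128]
lrs = [0.001, 0.0005]

# the basic Hydra sweeper launches one job per combination
runs = list(product(batch_sizes, lrs))
print(len(runs))  # 6
```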
344
+
345
+ </details>
346
+
347
+ <details>
348
+ <summary><b>Create a sweep over hyperparameters with Optuna</b></summary>
349
+
350
+ > Using the [Optuna Sweeper](https://hydra.cc/docs/next/plugins/optuna_sweeper) plugin doesn't require you to add any boilerplate to your pipeline; everything is defined in a [single config file](configs/hparams_search/mnist_optuna.yaml)!
351
+
352
+ ```bash
353
+ # this will run hyperparameter search defined in `configs/hparams_search/mnist_optuna.yaml`
354
+ # over chosen experiment config
355
+ python train.py -m hparams_search=mnist_optuna experiment=example_simple
356
+ ```
357
+
358
+ > ⚠️ Currently this sweep is not failure-resistant (if one job crashes, the whole sweep crashes). This might be supported in a future Hydra release.
359
+
360
+ </details>
361
+
362
+ <details>
363
+ <summary><b>Execute all experiments from folder</b></summary>
364
+
365
+ > Hydra provides special syntax for controlling the behavior of multiruns. Learn more [here](https://hydra.cc/docs/next/tutorials/basic/running_your_app/multi-run). The command below executes all experiments from the folder [configs/experiment/](configs/experiment/).
366
+
367
+ ```bash
368
+ python train.py -m 'experiment=glob(*)'
369
+ ```
370
+
371
+ </details>
372
+
373
+ <details>
374
+ <summary><b>Execute sweep on a remote AWS cluster</b></summary>
375
+
376
+ > This should be achievable with a simple config using the [Ray AWS launcher for Hydra](https://hydra.cc/docs/next/plugins/ray_launcher). An example is not yet implemented in this template.
377
+
378
+ </details>
379
+
380
+ <!-- <details>
381
+ <summary><b>Execute sweep on a SLURM cluster</b></summary>
382
+
383
+ > This should be achievable with either [the right lightning trainer flags](https://pytorch-lightning.readthedocs.io/en/latest/clouds/cluster.html?highlight=SLURM#slurm-managed-cluster) or simple config using [Submitit launcher for Hydra](https://hydra.cc/docs/plugins/submitit_launcher). Example is not yet implemented in this template.
384
+
385
+ </details> -->
386
+
387
+ <details>
388
+ <summary><b>Use Hydra tab completion</b></summary>
389
+
390
+ > Hydra allows you to autocomplete config argument overrides in the shell as you write them, by pressing the `tab` key. Learn more [here](https://hydra.cc/docs/tutorials/basic/running_your_app/tab_completion).
391
+
392
+ </details>
393
+
394
+ <details>
395
+ <summary><b>Apply pre-commit hooks</b></summary>
396
+
397
+ > Apply pre-commit hooks to automatically format your code and configs, perform code analysis and remove output from Jupyter notebooks. See [#Best Practices](#best-practices) for more.
398
+
399
+ ```bash
400
+ pre-commit run -a
401
+ ```
402
+
403
+ </details>
404
+
405
+ <br>
406
+
407
+ ## ❤️&nbsp;&nbsp;Contributions
408
+
409
+ Have a question? Found a bug? Missing a specific feature? Have an idea for improving documentation? Feel free to open a new issue, discussion or PR with an appropriate title and description. If you already found a solution to your problem, don't hesitate to share it. Suggestions for new best practices are always welcome!
410
+
411
+ <br>
412
+
413
+ ## ℹ️&nbsp;&nbsp;Guide
414
+
415
+ ### How To Get Started
416
+
417
+ - First, you should probably get familiar with [PyTorch Lightning](https://www.pytorchlightning.ai)
418
+ - Next, go through [Hydra quick start guide](https://hydra.cc/docs/intro/) and [basic Hydra tutorial](https://hydra.cc/docs/tutorials/basic/your_first_app/simple_cli/)
419
+
420
+ <br>
421
+
422
+ ### How It Works
423
+
424
+ All PyTorch Lightning modules are dynamically instantiated from module paths specified in config. Example model config:
425
+
426
+ ```yaml
427
+ _target_: src.models.mnist_model.MNISTLitModule
428
+ input_size: 784
429
+ lin1_size: 256
430
+ lin2_size: 256
431
+ lin3_size: 256
432
+ output_size: 10
433
+ lr: 0.001
434
+ ```
435
+
436
+ Using this config we can instantiate the object with the following line:
437
+
438
+ ```python
439
+ model = hydra.utils.instantiate(config.model)
440
+ ```
441
+
442
+ This allows you to easily iterate over new models! Every time you create a new one, just specify its module path and parameters in the appropriate config file.<br>
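
To make the mechanism concrete, here is a minimal pure-Python sketch of what `hydra.utils.instantiate` does under the hood (a simplification; the real implementation also handles recursion, partials and more). The `fractions.Fraction` target is just a hypothetical stand-in for the model class:

```python
import importlib

def instantiate(cfg: dict):
    # Resolve the dotted path in `_target_` to a class,
    # then call it with the remaining keys as keyword arguments.
    module_path, _, class_name = cfg["_target_"].rpartition(".")
    cls = getattr(importlib.import_module(module_path), class_name)
    kwargs = {k: v for k, v in cfg.items() if k != "_target_"}
    return cls(**kwargs)

# hypothetical config targeting a stdlib class instead of a Lightning model
config = {"_target_": "fractions.Fraction", "numerator": 3, "denominator": 4}
obj = instantiate(config)
print(obj)  # 3/4
```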
443
+
444
+ Switch between models and datamodules with command line arguments:
445
+
446
+ ```bash
447
+ python train.py model=mnist
448
+ ```
449
+
450
+ The whole pipeline managing the instantiation logic is placed in [src/training_pipeline.py](src/training_pipeline.py).
451
+
452
+ <br>
453
+
454
+ ### Main Project Configuration
455
+
456
+ Location: [configs/train.yaml](configs/train.yaml) <br>
457
+ The main project config contains the default training configuration.<br>
458
+ It determines how the config is composed when simply executing the command `python train.py`.<br>
459
+
460
+ <details>
461
+ <summary><b>Show main project config</b></summary>
462
+
463
+ ```yaml
464
+ # specify here default training configuration
465
+ defaults:
466
+ - _self_
467
+ - datamodule: mnist.yaml
468
+ - model: mnist.yaml
469
+ - callbacks: default.yaml
470
+ - logger: null # set logger here or use command line (e.g. `python train.py logger=tensorboard`)
471
+ - trainer: default.yaml
472
+ - log_dir: default.yaml
473
+
474
+ # experiment configs allow for version control of specific configurations
475
+ # e.g. best hyperparameters for each combination of model and datamodule
476
+ - experiment: null
477
+
478
+ # debugging config (enable through command line, e.g. `python train.py debug=default`)
479
+ - debug: null
480
+
481
+ # config for hyperparameter optimization
482
+ - hparams_search: null
483
+
484
+ # optional local config for machine/user specific settings
485
+ # it's optional since it doesn't need to exist and is excluded from version control
486
+ - optional local: default.yaml
487
+
488
+ # enable color logging
489
+ - override hydra/hydra_logging: colorlog
490
+ - override hydra/job_logging: colorlog
491
+
492
+ # path to original working directory
493
+ # hydra hijacks working directory by changing it to the new log directory
494
+ # https://hydra.cc/docs/next/tutorials/basic/running_your_app/working_directory
495
+ original_work_dir: ${hydra:runtime.cwd}
496
+
497
+ # path to folder with data
498
+ data_dir: ${original_work_dir}/data/
499
+
500
+ # pretty print config at the start of the run using Rich library
501
+ print_config: True
502
+
503
+ # disable python warnings if they annoy you
504
+ ignore_warnings: True
505
+
506
+ # set False to skip model training
507
+ train: True
508
+
509
+ # evaluate on test set, using best model weights achieved during training
510
+ # lightning chooses best weights based on the metric specified in checkpoint callback
511
+ test: True
512
+
513
+ # seed for random number generators in pytorch, numpy and python.random
514
+ seed: null
515
+
516
+ # default name for the experiment, determines logging folder path
517
+ # (you can overwrite this name in experiment configs)
518
+ name: "default"
519
+ ```
520
+
521
+ </details>
522
+
523
+ <br>
524
+
525
+ ### Experiment Configuration
526
+
527
+ Location: [configs/experiment](configs/experiment)<br>
528
+ Experiment configs allow you to overwrite parameters from main project configuration.<br>
529
+ For example, you can use them to version control best hyperparameters for each combination of model and dataset.
530
+
531
+ <details>
532
+ <summary><b>Show example experiment config</b></summary>
533
+
534
+ ```yaml
535
+ # to execute this experiment run:
536
+ # python train.py experiment=example
537
+
538
+ defaults:
539
+ - override /datamodule: mnist.yaml
540
+ - override /model: mnist.yaml
541
+ - override /callbacks: default.yaml
542
+ - override /logger: null
543
+ - override /trainer: default.yaml
544
+
545
+ # all parameters below will be merged with parameters from default configurations set above
546
+ # this allows you to overwrite only specified parameters
547
+
548
+ # name of the run determines folder name in logs
549
+ name: "simple_dense_net"
550
+
551
+ seed: 12345
552
+
553
+ trainer:
554
+ min_epochs: 10
555
+ max_epochs: 10
556
+ gradient_clip_val: 0.5
557
+
558
+ model:
559
+ lin1_size: 128
560
+ lin2_size: 256
561
+ lin3_size: 64
562
+ lr: 0.002
563
+
564
+ datamodule:
565
+ batch_size: 64
566
+
567
+ logger:
568
+ wandb:
569
+ tags: ["mnist", "${name}"]
570
+ ```
571
+
572
+ </details>
573
+
574
+ <br>
575
+
576
+ ### Local Configuration
577
+
578
+ Location: [configs/local](configs/local) <br>
579
+ Some configurations are user/machine/installation specific (e.g. configuration of a local cluster, or hard-drive paths on a specific machine). For such scenarios, a file `configs/local/default.yaml` can be created; it is automatically loaded but not tracked by Git.
580
+
581
+ <details>
582
+ <summary><b>Show example local Slurm cluster config</b></summary>
583
+
584
+ ```yaml
585
+ # @package _global_
586
+
587
+ defaults:
588
+ - override /hydra/launcher@_here_: submitit_slurm
589
+
590
+ data_dir: /mnt/scratch/data/
591
+
592
+ hydra:
593
+ launcher:
594
+ timeout_min: 1440
595
+ gpus_per_task: 1
596
+ gres: gpu:1
597
+ job:
598
+ env_set:
599
+ MY_VAR: /home/user/my/system/path
600
+ MY_KEY: asdgjhawi8y23ihsghsueity23ihwd
601
+ ```
602
+
603
+ </details>
604
+
605
+ <br>
606
+
607
+ ### Workflow
608
+
609
+ 1. Write your PyTorch Lightning module (see [src/models/mnist_module.py](src/models/mnist_module.py) for an example)
610
+ 2. Write your PyTorch Lightning datamodule (see [src/datamodules/mnist_datamodule.py](src/datamodules/mnist_datamodule.py) for an example)
611
+ 3. Write your experiment config, containing paths to your model and datamodule
612
+ 4. Run training with chosen experiment config: `python train.py experiment=experiment_name`
613
+
614
+ <br>
615
+
616
+ ### Logs
617
+
618
+ **Hydra creates a new working directory for every executed run.** By default, logs have the following structure:
619
+
620
+ ```
621
+ ├── logs
622
+ │   ├── experiments                       # Folder for the logs generated by experiments
623
+ │   │   ├── runs                          # Folder for single runs
624
+ │   │   │   ├── experiment_name           # Experiment name
625
+ │   │   │   │   ├── YYYY-MM-DD_HH-MM-SS   # Datetime of the run
626
+ │   │   │   │   │   ├── .hydra            # Hydra logs
627
+ │   │   │   │   │   ├── csv               # Csv logs
628
+ │   │   │   │   │   ├── wandb             # Weights&Biases logs
629
+ │   │   │   │   │   ├── checkpoints       # Training checkpoints
630
+ │   │   │   │   │   └── ...               # Any other thing saved during training
631
+ │   │   │   │   └── ...
632
+ │   │   │   └── ...
633
+ │   │   │
634
+ │   │   └── multiruns                     # Folder for multiruns
635
+ │   │       ├── experiment_name           # Experiment name
636
+ │   │       │   ├── YYYY-MM-DD_HH-MM-SS   # Datetime of the multirun
637
+ │   │       │   │   ├── 1                 # Multirun job number
638
+ │   │       │   │   ├── 2
639
+ │   │       │   │   └── ...
640
+ │   │       │   └── ...
641
+ │   │       └── ...
642
+ │   │
643
+ │   ├── evaluations                       # Folder for the logs generated during testing
644
+ │   │   └── ...
645
+ │   │
646
+ │   └── debugs                            # Folder for the logs generated during debugging
647
+ │       └── ...
648
+ ```
649
+
650
+ You can change this structure by modifying paths in [hydra configuration](configs/log_dir).
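As a rough illustration of how such datetime-named run directories can be built, here is a plain-Python sketch (not Hydra's actual implementation; the folder names simply mirror the layout above):

```python
from datetime import datetime
from pathlib import PurePosixPath
from typing import Optional

def make_run_dir(experiment_name: str, multirun_job: Optional[int] = None) -> PurePosixPath:
    """Build a log path mirroring the tree above (illustration only)."""
    stamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
    base = PurePosixPath("logs") / "experiments"
    if multirun_job is None:
        return base / "runs" / experiment_name / stamp
    # multirun jobs get an extra numbered subfolder
    return base / "multiruns" / experiment_name / stamp / str(multirun_job)

print(make_run_dir("mnist_example"))  # e.g. logs/experiments/runs/mnist_example/2022-01-01_12-00-00
```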
651
+
652
+ <br>
653
+
654
+ ### Experiment Tracking
655
+
656
+ PyTorch Lightning supports many popular logging frameworks:<br>
657
+ **[Weights&Biases](https://www.wandb.com/) · [Neptune](https://neptune.ai/) · [Comet](https://www.comet.ml/) · [MLFlow](https://mlflow.org) · [Tensorboard](https://www.tensorflow.org/tensorboard/)**
658
+
659
+ These tools help you keep track of hyperparameters and output metrics, and allow you to compare and visualize results. To use one of them, simply complete its configuration in [configs/logger](configs/logger) and run:
660
+
661
+ ```bash
662
+ python train.py logger=logger_name
663
+ ```
664
+
665
+ You can use many of them at once (see [configs/logger/many_loggers.yaml](configs/logger/many_loggers.yaml) for example).
666
+
667
+ You can also write your own logger.
668
+
669
+ Lightning provides a convenient method for logging custom metrics from inside the LightningModule. Read the docs [here](https://pytorch-lightning.readthedocs.io/en/latest/extensions/logging.html#automatic-logging) or take a look at the [MNIST example](src/models/mnist_module.py).
670
+
671
+ <br>
672
+
673
+ ### Hyperparameter Search
674
+
675
+ Defining hyperparameter optimization is as easy as adding a new config file to [configs/hparams_search](configs/hparams_search).
676
+
677
+ <details>
678
+ <summary><b>Show example</b></summary>
679
+
680
+ ```yaml
681
+ defaults:
682
+ - override /hydra/sweeper: optuna
683
+
684
+ # choose metric which will be optimized by Optuna
685
+ optimized_metric: "val/acc_best"
686
+
687
+ hydra:
688
+ # here we define Optuna hyperparameter search
689
+ # it optimizes for value returned from function with @hydra.main decorator
690
+ # learn more here: https://hydra.cc/docs/next/plugins/optuna_sweeper
691
+ sweeper:
692
+ _target_: hydra_plugins.hydra_optuna_sweeper.optuna_sweeper.OptunaSweeper
693
+ storage: null
694
+ study_name: null
695
+ n_jobs: 1
696
+
697
+ # 'minimize' or 'maximize' the objective
698
+ direction: maximize
699
+
700
+ # number of experiments that will be executed
701
+ n_trials: 20
702
+
703
+ # choose Optuna hyperparameter sampler
704
+ # learn more here: https://optuna.readthedocs.io/en/stable/reference/samplers.html
705
+ sampler:
706
+ _target_: optuna.samplers.TPESampler
707
+ seed: 12345
708
+ consider_prior: true
709
+ prior_weight: 1.0
710
+ consider_magic_clip: true
711
+ consider_endpoints: false
712
+ n_startup_trials: 10
713
+ n_ei_candidates: 24
714
+ multivariate: false
715
+ warn_independent_sampling: true
716
+
717
+ # define range of hyperparameters
718
+ search_space:
719
+ datamodule.batch_size:
720
+ type: categorical
721
+ choices: [32, 64, 128]
722
+ model.lr:
723
+ type: float
724
+ low: 0.0001
725
+ high: 0.2
726
+ model.lin1_size:
727
+ type: categorical
728
+ choices: [32, 64, 128, 256, 512]
729
+ model.lin2_size:
730
+ type: categorical
731
+ choices: [32, 64, 128, 256, 512]
732
+ model.lin3_size:
733
+ type: categorical
734
+ choices: [32, 64, 128, 256, 512]
735
+ ```
736
+
737
+ </details>
738
+
739
+ Next, you can execute it with: `python train.py -m hparams_search=mnist_optuna`
740
+
741
+ Using this approach doesn't require you to add any boilerplate to your pipeline; everything is defined in a single config file.
742
+
743
+ You can use different optimization frameworks integrated with Hydra, like Optuna, Ax or Nevergrad.
744
+
745
+ The `optimization_results.yaml` file will be available under the `logs/multirun` folder.
746
+
747
+ This approach doesn't support advanced techniques like pruning - for more sophisticated search, you probably shouldn't use the Hydra multirun feature and should instead write your own optimization pipeline.
748
+
749
+ <br>
750
+
751
+ ### Inference
752
+
753
+ The following code is an example of loading a model from a checkpoint and running predictions.<br>
754
+
755
+ <details>
756
+ <summary><b>Show example</b></summary>
757
+
758
+ ```python
759
+ from PIL import Image
760
+ from torchvision import transforms
761
+
762
+ from src.models.mnist_module import MNISTLitModule
763
+
764
+
765
+ def predict():
766
+ """Example of inference with trained model.
767
+ It loads trained image classification model from checkpoint.
768
+ Then it loads example image and predicts its label.
769
+ """
770
+
771
+ # ckpt can be also a URL!
772
+ CKPT_PATH = "last.ckpt"
773
+
774
+ # load model from checkpoint
775
+ # model __init__ parameters will be loaded from ckpt automatically
776
+ # you can also pass some parameter explicitly to override it
777
+ trained_model = MNISTLitModule.load_from_checkpoint(checkpoint_path=CKPT_PATH)
778
+
779
+ # print model hyperparameters
780
+ print(trained_model.hparams)
781
+
782
+ # switch to evaluation mode
783
+ trained_model.eval()
784
+ trained_model.freeze()
785
+
786
+ # load data
787
+ img = Image.open("data/example_img.png").convert("L") # convert to black and white
788
+ # img = Image.open("data/example_img.png").convert("RGB") # convert to RGB
789
+
790
+ # preprocess
791
+ mnist_transforms = transforms.Compose(
792
+ [
793
+ transforms.ToTensor(),
794
+ transforms.Resize((28, 28)),
795
+ transforms.Normalize((0.1307,), (0.3081,)),
796
+ ]
797
+ )
798
+ img = mnist_transforms(img)
799
+ img = img.reshape((1, *img.size())) # reshape to form batch of size 1
800
+
801
+ # inference
802
+ output = trained_model(img)
803
+ print(output)
804
+
805
+
806
+ if __name__ == "__main__":
807
+ predict()
808
+
809
+ ```
810
+
811
+ </details>
812
+
813
+ <br>
814
+
815
+ ### Tests
816
+
817
+ The template comes with example tests implemented with the pytest library. To execute them, simply run:
818
+
819
+ ```bash
820
+ # run all tests
821
+ pytest
822
+
823
+ # run tests from specific file
824
+ pytest tests/shell/test_basic_commands.py
825
+
826
+ # run all tests except the ones marked as slow
827
+ pytest -k "not slow"
828
+ ```
829
+
830
+ To speed up development, you can once in a while execute tests that run a couple of quick experiments, like training for 1 epoch on 25% of the data or executing a single train/val/test step. Those kinds of tests don't check for any specific output - they exist simply to verify that executing some bash commands doesn't end up throwing exceptions. You can find them implemented in the [tests/shell](tests/shell) folder.
831
+
832
+ You can easily modify the commands in the scripts for your use case. If 1 epoch is too much for your model, then make it run for a couple of batches instead (by using the right trainer flags).
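The pattern behind those shell tests - run a command, ignore its output, and only assert a clean exit - can be sketched in a few lines of Python (the stand-in command below is an assumption; the real scripts call `python train.py` with various flags):

```python
import subprocess
import sys

def run_smoke(command: list) -> bool:
    """Smoke-test helper: True when the command exits cleanly; output is ignored."""
    return subprocess.run(command, capture_output=True).returncode == 0

# stand-in for something like ["python", "train.py", "trainer.max_epochs=1"]
assert run_smoke([sys.executable, "-c", "print('training ok')"])
```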
833
+
834
+ <br>
835
+
836
+ ### Callbacks
837
+
838
+ The branch [`wandb-callbacks`](https://github.com/ashleve/lightning-hydra-template/tree/wandb-callbacks) contains example callbacks enabling better Weights&Biases integration, which you can use as a reference for writing your own callbacks (see [wandb_callbacks.py](https://github.com/ashleve/lightning-hydra-template/tree/wandb-callbacks/src/callbacks/wandb_callbacks.py)).
839
+
840
+ Callbacks which support reproducibility:
841
+
842
+ - **WatchModel**
843
+ - **UploadCodeAsArtifact**
844
+ - **UploadCheckpointsAsArtifact**
845
+
846
+ Callbacks which provide examples of logging custom visualisations:
847
+
848
+ - **LogConfusionMatrix**
849
+ - **LogF1PrecRecHeatmap**
850
+ - **LogImagePredictions**
851
+
852
+ To try all of the callbacks at once, switch to the right branch:
853
+
854
+ ```bash
855
+ git checkout wandb-callbacks
856
+ ```
857
+
858
+ And then run the following command:
859
+
860
+ ```bash
861
+ python train.py logger=wandb callbacks=wandb
862
+ ```
863
+
864
+ To see the result of all the callbacks attached, take a look at [this experiment dashboard](https://wandb.ai/hobglob/template-tests/runs/3rw7q70h).
865
+
866
+ <br>
867
+
868
+ ### Multi-GPU Training
869
+
870
+ Lightning supports multiple ways of doing distributed training. The most common one is DDP, which spawns a separate process for each GPU and averages gradients between them. To learn about other approaches, read the [lightning docs](https://pytorch-lightning.readthedocs.io/en/latest/advanced/multi_gpu.html).
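What "averages gradients between them" means can be illustrated with a toy all-reduce in plain Python (a conceptual stand-in for the NCCL all-reduce that DDP performs, not Lightning code):

```python
def allreduce_mean(per_worker_grads):
    """Average each parameter's gradient across workers, as DDP's all-reduce does."""
    n_workers = len(per_worker_grads)
    return [sum(g) / n_workers for g in zip(*per_worker_grads)]

# two workers computed gradients for the same two parameters on different batches
grads = allreduce_mean([[1.0, 4.0], [3.0, 0.0]])
assert grads == [2.0, 2.0]  # every worker then applies the same averaged update
```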
871
+
872
+ You can run DDP on the MNIST example with 4 GPUs like this:
873
+
874
+ ```bash
875
+ python train.py trainer.gpus=4 +trainer.strategy=ddp
876
+ ```
877
+
878
+ ⚠️ When using DDP you have to be careful how you write your models - learn more [here](https://pytorch-lightning.readthedocs.io/en/latest/advanced/multi_gpu.html).
879
+
880
+ <br>
881
+
882
+ ### Docker
883
+
884
+ First, you will need to [install the NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html) to enable GPU support.
885
+
886
+ The template Dockerfile is provided on branch [`dockerfiles`](https://github.com/ashleve/lightning-hydra-template/tree/dockerfiles). Copy it to the template root folder.
887
+
888
+ To build the container use:
889
+
890
+ ```bash
891
+ docker build -t <project_name> .
892
+ ```
893
+
894
+ To mount the project to the container use:
895
+
896
+ ```bash
897
+ docker run -v $(pwd):/workspace/project --gpus all -it --rm <project_name>
898
+ ```
899
+
900
+ <br>
901
+
902
+ ### Reproducibility
903
+
904
+ What provides reproducibility:
905
+
906
+ - Hydra manages your configs
907
+ - Hydra manages your logging paths and makes every executed run store its hyperparameters and config overrides in a separate file in logs
908
+ - Single seed for random number generators in pytorch, numpy and python.random
909
+ - LightningDataModule allows you to encapsulate data split, transformations and default parameters in a single, clean abstraction
910
+ - LightningModule separates your research code from engineering code in a clean way
911
+ - Experiment tracking frameworks take care of logging metrics and hparams; some can also store results and artifacts in the cloud
912
+ - Pytorch Lightning takes care of creating training checkpoints
913
+ - Example callbacks for wandb show how you can save and upload a snapshot of codebase every time the run is executed, as well as upload ckpts and track model gradients
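The "single seed" point can be sketched as a minimal stand-in for Lightning's `seed_everything` (stdlib-only; the numpy/torch calls are guarded so the sketch runs even without them installed):

```python
import os
import random

def seed_everything(seed: int = 42) -> int:
    """Seed all RNGs we can find and return the seed so it can be logged."""
    random.seed(seed)
    os.environ["PL_GLOBAL_SEED"] = str(seed)
    try:  # optional dependencies, seeded only when available
        import numpy as np
        np.random.seed(seed)
    except ImportError:
        pass
    try:
        import torch
        torch.manual_seed(seed)
    except ImportError:
        pass
    return seed

seed_everything(12345)
first = [random.random() for _ in range(3)]
seed_everything(12345)
assert first == [random.random() for _ in range(3)]  # runs are repeatable
```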
914
+
915
+ <!--
916
+ You can load the config of previous run using:
917
+
918
+ ```bash
919
+ python train.py --config-path /logs/runs/.../.hydra/ --config-name config.yaml
920
+ ```
921
+
922
+ The `config.yaml` from the `.hydra` folder contains all overridden parameters and sections. This approach, however, is not officially supported by Hydra and doesn't override the `hydra/` part of the config, meaning logging paths will revert to default!
923
+ -->
924
+ <br>
925
+
926
+ ### Limitations
927
+
928
+ - Currently, the template doesn't support k-fold cross-validation, but it's possible to achieve it with the Lightning Loop interface. See the [official example](https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pl_examples/loop_examples/kfold.py). Implementing it requires rewriting the training pipeline.
929
+ - Pytorch Lightning might not be the best choice for scalable reinforcement learning, it's probably better to use something like [Ray](https://github.com/ray-project/ray).
930
+ - Currently, hyperparameter search with the Hydra Optuna Plugin doesn't support pruning.
931
+ - Hydra changes the working directory to a new logging folder for every executed run, which might not be compatible with the way some libraries work.
932
+
933
+ <br>
934
+
935
+ ## Useful Tricks
936
+
937
+ <details>
938
+ <summary><b>Accessing datamodule attributes in model</b></summary>
939
+
940
+ 1. The simplest way is to pass the datamodule attribute directly to the model on initialization:
941
+
942
+ ```python
943
+ # ./src/training_pipeline.py
944
+ datamodule = hydra.utils.instantiate(config.datamodule)
945
+ model = hydra.utils.instantiate(config.model, some_param=datamodule.some_param)
946
+ ```
947
+
948
+ This is not a very robust solution, since it assumes all your datamodules have a `some_param` attribute available (otherwise the run will crash).
949
+
950
+ 2. If you only want to access the datamodule config, you can simply pass it as an init parameter:
951
+
952
+ ```python
953
+ # ./src/training_pipeline.py
954
+ model = hydra.utils.instantiate(config.model, dm_conf=config.datamodule, _recursive_=False)
955
+ ```
956
+
957
+ Now you can access any datamodule config part like this:
958
+
959
+ ```python
960
+ # ./src/models/my_model.py
961
+ class MyLitModel(LightningModule):
962
+ def __init__(self, dm_conf, param1, param2):
963
+ super().__init__()
964
+
965
+ batch_size = dm_conf.batch_size
966
+ ```
967
+
968
+ 3. If you need to access the datamodule object's attributes, a slightly hacky solution is to add an OmegaConf resolver to your datamodule:
969
+
970
+ ```python
971
+ # ./src/datamodules/my_datamodule.py
972
+ from omegaconf import OmegaConf
973
+
974
+ class MyDataModule(LightningDataModule):
975
+ def __init__(self, param1, param2):
976
+ super().__init__()
977
+
978
+ self.param1 = param1
979
+
980
+ resolver_name = "datamodule"
981
+ OmegaConf.register_new_resolver(
982
+ resolver_name,
983
+ lambda name: getattr(self, name),
984
+ use_cache=False
985
+ )
986
+ ```
987
+
988
+ This way you can reference any datamodule attribute from your config like this:
989
+
990
+ ```yaml
991
+ # this will return attribute 'param1' from datamodule object
992
+ param1: ${datamodule: param1}
993
+ ```
994
+
995
+ When later accessing this field, say in your Lightning model, it will get automatically resolved based on all registered resolvers. Remember not to access this field before the datamodule is initialized, or it will crash. **You also need to set `resolve=False` in `print_config()` in [train.py](train.py) or it will throw errors:**
996
+
997
+ ```python
998
+ # ./src/train.py
999
+ utils.print_config(config, resolve=False)
1000
+ ```
1001
+
1002
+ </details>
1003
+
1004
+ <details>
1005
+ <summary><b>Automatic activation of virtual environment and tab completion when entering folder</b></summary>
1006
+
1007
+ 1. Create a new file called `.autoenv` (this name is excluded from version control in `.gitignore`). <br>
1008
+ You can use it to automatically execute shell commands when entering the folder. Add some commands to your `.autoenv` file, like in the example below:
1009
+
1010
+ ```bash
1011
+ # activate conda environment
1012
+ conda activate myenv
1013
+
1014
+ # activate hydra tab completion for bash
1015
+ eval "$(python train.py -sc install=bash)"
1016
+ ```
1017
+
1018
+ (these commands will be executed whenever you open a terminal in, or switch it to, a folder containing the `.autoenv` file)
1019
+
1020
+ 2. To set up this automation for bash, execute the following line (it will append to your `.bashrc` file):
1021
+
1022
+ ```bash
1023
+ echo "autoenv() { if [ -x .autoenv ]; then source .autoenv ; echo '.autoenv executed' ; fi } ; cd() { builtin cd \"\$@\" ; autoenv ; } ; autoenv" >> ~/.bashrc
1024
+ ```
1025
+
1026
+ 3. Lastly, add execution privileges to your `.autoenv` file:
1027
+
1028
+ ```bash
1029
+ chmod +x .autoenv
1030
+ ```
1031
+
1032
+ (for safety, only an `.autoenv` file with execution privileges will be executed)
1033
+
1034
+ **Explanation**
1035
+
1036
+ The mentioned line appends your `.bashrc` file with 2 commands:
1037
+
1038
+ 1. `autoenv() { if [ -x .autoenv ]; then source .autoenv ; echo '.autoenv executed' ; fi }` - this declares the `autoenv()` function, which executes the `.autoenv` file if it exists in the current working directory and has execution privileges
1039
+ 2. `cd() { builtin cd \"\$@\" ; autoenv ; } ; autoenv` - this extends the behaviour of the `cd` command to make it execute the `autoenv()` function each time you change folders in the terminal or open a new terminal
1040
+
1041
+ </details>
1042
+
1043
+ <!--
1044
+ <details>
1045
+ <summary><b>Making sweeps failure resistant</b></summary>
1046
+
1047
+ TODO
1048
+
1049
+ </details>
1050
+ -->
1051
+
1052
+ <br>
1053
+
1054
+ ## Best Practices
1055
+
1056
+ <details>
1057
+ <summary><b>Use Miniconda for GPU environments</b></summary>
1058
+
1059
+ Use Miniconda for your Python environments (it's usually unnecessary to install the full Anaconda distribution; Miniconda should be enough).
1060
+ It makes it easier to install some dependencies, like cudatoolkit for GPU support. It also allows you to access your environments globally.
1061
+
1062
+ Example installation:
1063
+
1064
+ ```bash
1065
+ wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
1066
+ bash Miniconda3-latest-Linux-x86_64.sh
1067
+ ```
1068
+
1069
+ Create new conda environment:
1070
+
1071
+ ```bash
1072
+ conda create -n myenv python=3.8
1073
+ conda activate myenv
1074
+ ```
1075
+
1076
+ </details>
1077
+
1078
+ <details>
1079
+ <summary><b>Use automatic code formatting</b></summary>
1080
+
1081
+ Use pre-commit hooks to standardize code formatting of your project and save mental energy.<br>
1082
+ Simply install the pre-commit package with:
1083
+
1084
+ ```bash
1085
+ pip install pre-commit
1086
+ ```
1087
+
1088
+ Next, install hooks from [.pre-commit-config.yaml](.pre-commit-config.yaml):
1089
+
1090
+ ```bash
1091
+ pre-commit install
1092
+ ```
1093
+
1094
+ After that your code will be automatically reformatted on every new commit.<br>
1095
+ Currently the template contains configurations for **black** (Python code formatting), **isort** (Python import sorting), **flake8** (Python code analysis), **prettier** (YAML formatting) and **nbstripout** (clearing output from Jupyter notebooks). <br>
1096
+
1097
+ To reformat all files in the project, use:
1098
+
1099
+ ```bash
1100
+ pre-commit run -a
1101
+ ```
1102
+
1103
+ </details>
1104
+
1105
+ <details>
1106
+ <summary><b>Set private environment variables in .env file</b></summary>
1107
+
1108
+ System-specific variables (e.g. absolute paths to datasets) should not be under version control, or they will cause conflicts between different users. Your private keys also shouldn't be versioned, since you don't want them to be leaked.<br>
1109
+
1110
+ The template contains a `.env.example` file, which serves as an example. Create a new file called `.env` (this name is excluded from version control in `.gitignore`).
1111
+ You should use it for storing environment variables like this:
1112
+
1113
+ ```
1114
+ MY_VAR=/home/user/my_system_path
1115
+ ```
1116
+
1117
+ All variables from `.env` are loaded in `train.py` automatically.
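The loading step is typically done with the `python-dotenv` package; a stdlib-only sketch of the idea is below (parsing is deliberately simplified - quoting and variable expansion are not handled, and unlike `python-dotenv` it overwrites existing variables):

```python
import os

def load_dotenv_text(text: str) -> None:
    """Put KEY=VALUE lines into os.environ, skipping blanks and # comments."""
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        os.environ[key.strip()] = value.strip()

load_dotenv_text("# paths\nMY_VAR=/home/user/my_system_path\n")
assert os.environ["MY_VAR"] == "/home/user/my_system_path"
```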
1118
+
1119
+ Hydra allows you to reference any env variable in `.yaml` configs like this:
1120
+
1121
+ ```yaml
1122
+ path_to_data: ${oc.env:MY_VAR}
1123
+ ```
1124
+
1125
+ </details>
1126
+
1127
+ <details>
1128
+ <summary><b>Name metrics using '/' character</b></summary>
1129
+
1130
+ Depending on which logger you're using, it's often useful to define metric names with the `/` character:
1131
+
1132
+ ```python
1133
+ self.log("train/loss", loss)
1134
+ ```
1135
+
1136
+ This way loggers will treat your metrics as belonging to different sections, which helps keep them organised in the UI.
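The grouping such loggers perform can be imitated in a few lines (a toy illustration of why the `/` convention is useful, not any logger's actual code):

```python
from collections import defaultdict

def group_by_section(metrics: dict) -> dict:
    """Split 'section/name' keys into per-section dicts, the way logger UIs do."""
    sections = defaultdict(dict)
    for key, value in metrics.items():
        section, _, name = key.partition("/")
        sections[section][name] = value
    return dict(sections)

grouped = group_by_section({"train/loss": 0.3, "train/acc": 0.9, "val/loss": 0.4})
assert grouped["train"] == {"loss": 0.3, "acc": 0.9}
assert grouped["val"] == {"loss": 0.4}
```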
1137
+
1138
+ </details>
1139
+
1140
+ <details>
1141
+ <summary><b>Use torchmetrics</b></summary>
1142
+
1143
+ Use official [torchmetrics](https://github.com/PytorchLightning/metrics) library to ensure proper calculation of metrics. This is especially important for multi-GPU training!
1144
+
1145
+ For example, instead of calculating accuracy by yourself, you should use the provided `Accuracy` class like this:
1146
+
1147
+ ```python
1148
+ from torchmetrics.classification.accuracy import Accuracy
1149
+
1150
+
1151
+ class LitModel(LightningModule):
1152
+ def __init__(self):
+ super().__init__()
1153
+ self.train_acc = Accuracy()
1154
+ self.val_acc = Accuracy()
1155
+
1156
+ def training_step(self, batch, batch_idx):
1157
+ ...
1158
+ acc = self.train_acc(predictions, targets)
1159
+ self.log("train/acc", acc)
1160
+ ...
1161
+
1162
+ def validation_step(self, batch, batch_idx):
1163
+ ...
1164
+ acc = self.val_acc(predictions, targets)
1165
+ self.log("val/acc", acc)
1166
+ ...
1167
+ ```
1168
+
1169
+ Make sure to use a different metric instance for each stage (train/val/test) to ensure proper value reduction over all GPU processes.
1170
+
1171
+ Torchmetrics provides metrics for most use cases, like F1 score or confusion matrix. Read [documentation](https://torchmetrics.readthedocs.io/en/latest/#more-reading) for more.
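Why separate instances matter can be seen with a toy stateful metric (pure Python, not the torchmetrics API): metrics accumulate internal state across updates, so a single shared instance would mix training and validation statistics.

```python
class ToyAccuracy:
    """Toy running accuracy; like torchmetrics, it accumulates state across updates."""

    def __init__(self):
        self.correct = 0
        self.total = 0

    def update(self, preds, targets):
        self.correct += sum(int(p == t) for p, t in zip(preds, targets))
        self.total += len(targets)

    def compute(self):
        return self.correct / self.total

train_acc, val_acc = ToyAccuracy(), ToyAccuracy()  # one instance per stage
train_acc.update([0, 1, 1], [0, 1, 0])  # 2 of 3 correct
val_acc.update([1, 1], [1, 1])          # 2 of 2 correct
assert val_acc.compute() == 1.0  # a shared instance would have reported 4/5 here
```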
1172
+
1173
+ </details>
1174
+
1175
+ <details>
1176
+ <summary><b>Follow PyTorch Lightning style guide</b></summary>
1177
+
1178
+ The style guide is available [here](https://pytorch-lightning.readthedocs.io/en/latest/starter/style_guide.html).<br>
1179
+
1180
+ 1. Be explicit in your init. Try to define all the relevant defaults so that the user doesn’t have to guess. Provide type hints. This way your module is reusable across projects!
1181
+
1182
+ ```python
1183
+ class LitModel(LightningModule):
1184
+ def __init__(self, layer_size: int = 256, lr: float = 0.001):
1185
+ ```
1186
+
1187
+ 2. Preserve the recommended method order.
1188
+
1189
+ ```python
1190
+ class LitModel(LightningModule):
1191
+
1192
+ def __init__():
1193
+ ...
1194
+
1195
+ def forward():
1196
+ ...
1197
+
1198
+ def training_step():
1199
+ ...
1200
+
1201
+ def training_step_end():
1202
+ ...
1203
+
1204
+ def training_epoch_end():
1205
+ ...
1206
+
1207
+ def validation_step():
1208
+ ...
1209
+
1210
+ def validation_step_end():
1211
+ ...
1212
+
1213
+ def validation_epoch_end():
1214
+ ...
1215
+
1216
+ def test_step():
1217
+ ...
1218
+
1219
+ def test_step_end():
1220
+ ...
1221
+
1222
+ def test_epoch_end():
1223
+ ...
1224
+
1225
+ def configure_optimizers():
1226
+ ...
1227
+
1228
+ def any_extra_hook():
1229
+ ...
1230
+ ```
1231
+
1232
+ </details>
1233
+
1234
+ <details>
1235
+ <summary><b>Version control your data and models with DVC</b></summary>
1236
+
1237
+ Use [DVC](https://dvc.org) to version control big files, like your data or trained ML models.<br>
1238
+ To initialize the DVC repository:
1239
+
1240
+ ```bash
1241
+ dvc init
1242
+ ```
1243
+
1244
+ To start tracking a file or directory, use `dvc add`:
1245
+
1246
+ ```bash
1247
+ dvc add data/MNIST
1248
+ ```
1249
+
1250
+ DVC stores information about the added file (or directory) in a special `.dvc` file named `data/MNIST.dvc` - a small, human-readable text file. This file can be easily versioned like source code with Git, as a placeholder for the original data:
1251
+
1252
+ ```bash
1253
+ git add data/MNIST.dvc data/.gitignore
1254
+ git commit -m "Add raw data"
1255
+ ```
1256
+
1257
+ </details>
1258
+
1259
+ <details>
1260
+ <summary><b>Support installing project as a package</b></summary>
1261
+
1262
+ This allows other people to easily use your modules in their own projects.
1263
+ Change the name of the `src` folder to your project name and add a `setup.py` file:
1264
+
1265
+ ```python
1266
+ from setuptools import find_packages, setup
1267
+
1268
+
1269
+ setup(
1270
+ name="src", # change "src" folder name to your project name
1271
+ version="0.0.0",
1272
+ description="Describe Your Cool Project",
1273
+ author="...",
1274
+ author_email="...",
1275
+ url="https://github.com/ashleve/lightning-hydra-template", # replace with your own github project link
1276
+ install_requires=[
1277
+ "pytorch>=1.10.0",
1278
+ "pytorch-lightning>=1.4.0",
1279
+ "hydra-core>=1.1.0",
1280
+ ],
1281
+ packages=find_packages(),
1282
+ )
1283
+ ```
1284
+
1285
+ Now your project can be installed from local files:
1286
+
1287
+ ```bash
1288
+ pip install -e .
1289
+ ```
1290
+
1291
+ Or directly from git repository:
1292
+
1293
+ ```bash
1294
+ pip install git+https://github.com/YourGithubName/your-repo-name.git --upgrade
1295
+ ```
1296
+
1297
+ So any file can be easily imported into any other file like so:
1298
+
1299
+ ```python
1300
+ from project_name.models.mnist_module import MNISTLitModule
1301
+ from project_name.datamodules.mnist_datamodule import MNISTDataModule
1302
+ ```
1303
+
1304
+ </details>
1305
+
1306
+ <!-- <details>
1307
+ <summary><b>Make notebooks independent from other files</b></summary>
1308
+
1309
+ It's a good practice for jupyter notebooks to be portable. Try to make them independent from src files. If you need to access external code, try to embed it inside the notebook.
1310
+
1311
+ </details> -->
1312
+
1313
+ <!--<details>
1314
+ <summary><b>Use Docker</b></summary>
1315
+
1316
+ Docker makes it easy to initialize the whole training environment, e.g. when you want to execute experiments in cloud or on some private computing cluster. You can extend [dockerfiles](https://github.com/ashleve/lightning-hydra-template/tree/dockerfiles) provided in the template with your own instructions for building the image.<br>
1317
+
1318
+ </details> -->
1319
+
1320
+ <br>
1321
+
1322
+ ## Other Repositories
1323
+
1324
+ <details>
1325
+ <summary><b>Inspirations</b></summary>
1326
+
1327
+ This template was inspired by:
1328
+ [PyTorchLightning/deep-learning-project-template](https://github.com/PyTorchLightning/deep-learning-project-template),
1329
+ [drivendata/cookiecutter-data-science](https://github.com/drivendata/cookiecutter-data-science),
1330
+ [tchaton/lightning-hydra-seed](https://github.com/tchaton/lightning-hydra-seed),
1331
+ [Erlemar/pytorch_tempest](https://github.com/Erlemar/pytorch_tempest),
1332
+ [lucmos/nn-template](https://github.com/lucmos/nn-template).
1333
+
1334
+ </details>
1335
+
1336
+ <details>
1337
+ <summary><b>Useful repositories</b></summary>
1338
+
1339
+ - [pytorch/hydra-torch](https://github.com/pytorch/hydra-torch) - resources for configuring PyTorch classes with Hydra,
1340
+ - [romesco/hydra-lightning](https://github.com/romesco/hydra-lightning) - resources for configuring PyTorch Lightning classes with Hydra
1341
+ - [lucmos/nn-template](https://github.com/lucmos/nn-template) - similar template
1342
+ - [PyTorchLightning/lightning-transformers](https://github.com/PyTorchLightning/lightning-transformers) - official Lightning Transformers repo built with Hydra
1343
+
1344
+ </details>
1345
+
1346
+ <!-- ## :star:&nbsp; Stargazers Over Time
1347
+ [![Stargazers over time](https://starchart.cc/ashleve/lightning-hydra-template.svg)](https://starchart.cc/ashleve/lightning-hydra-template) -->
1348
+
1349
+ <br>
1350
+
1351
+ ## License
1352
+
1353
+ This project is licensed under the MIT License.
1354
+
1355
+ ```
1356
+ MIT License
1357
+
1358
+ Copyright (c) 2021 ashleve
1359
+
1360
+ Permission is hereby granted, free of charge, to any person obtaining a copy
1361
+ of this software and associated documentation files (the "Software"), to deal
1362
+ in the Software without restriction, including without limitation the rights
1363
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
1364
+ copies of the Software, and to permit persons to whom the Software is
1365
+ furnished to do so, subject to the following conditions:
1366
+
1367
+ The above copyright notice and this permission notice shall be included in all
1368
+ copies or substantial portions of the Software.
1369
+
1370
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
1371
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
1372
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
1373
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
1374
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
1375
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
1376
+ SOFTWARE.
1377
+ ```
1378
+
1379
+ <br>
1380
+ <br>
1381
+ <br>
1382
+ <br>
1383
+
1384
+ **DELETE EVERYTHING ABOVE FOR YOUR PROJECT**
1385
+
1386
+ ---
1387
+
1388
+ <div align="center">
1389
+
1390
+ # Your Project Name
1391
+
1392
+ <a href="https://pytorch.org/get-started/locally/"><img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-ee4c2c?logo=pytorch&logoColor=white"></a>
1393
+ <a href="https://pytorchlightning.ai/"><img alt="Lightning" src="https://img.shields.io/badge/-Lightning-792ee5?logo=pytorchlightning&logoColor=white"></a>
1394
+ <a href="https://hydra.cc/"><img alt="Config: Hydra" src="https://img.shields.io/badge/Config-Hydra-89b8cd"></a>
1395
+ <a href="https://github.com/ashleve/lightning-hydra-template"><img alt="Template" src="https://img.shields.io/badge/-Lightning--Hydra--Template-017F2F?style=flat&logo=github&labelColor=gray"></a><br>
1396
+ [![Paper](http://img.shields.io/badge/paper-arxiv.1001.2234-B31B1B.svg)](https://www.nature.com/articles/nature14539)
1397
+ [![Conference](http://img.shields.io/badge/AnyConference-year-4b44ce.svg)](https://papers.nips.cc/paper/2020)
1398
+
1399
+ </div>
1400
+
1401
+ ## Description
1402
+
1403
+ What it does
1404
+
1405
+ ## How to run
1406
+
1407
+ Install dependencies
1408
+
1409
+ ```bash
1410
+ # clone project
1411
+ git clone https://github.com/YourGithubName/your-repo-name
1412
+ cd your-repo-name
1413
+
1414
+ # [OPTIONAL] create conda environment
1415
+ conda create -n myenv python=3.8
1416
+ conda activate myenv
1417
+
1418
+ # install pytorch according to instructions
1419
+ # https://pytorch.org/get-started/
1420
+
1421
+ # install requirements
1422
+ pip install -r requirements.txt
1423
+ ```
1424
+
1425
+ Train model with default configuration
1426
+
1427
+ ```bash
1428
+ # train on CPU
1429
+ python train.py trainer.gpus=0
1430
+
1431
+ # train on GPU
1432
+ python train.py trainer.gpus=1
1433
+ ```
1434
+
1435
+ Train model with chosen experiment configuration from [configs/experiment/](configs/experiment/)
1436
+
1437
+ ```bash
1438
+ python train.py experiment=experiment_name.yaml
1439
+ ```
1440
+
1441
+ You can override any parameter from the command line like this:
1442
+
1443
+ ```bash
1444
+ python train.py trainer.max_epochs=20 datamodule.batch_size=64
1445
+ ```
annotation-preprocessing/.dockerignore DELETED
@@ -1,3 +0,0 @@
1
- in/
2
- out/
3
-
 
 
 
 
annotation-preprocessing/.env.example DELETED
@@ -1,8 +0,0 @@
1
- DB_HOST=localhost
2
- DB_USER=someuser
3
- DB_PASSWORD=somepassword
4
- DB_NAME=somedatabase
5
-
6
- IMG_SIZE=75
7
- ROOT_IN=in
8
-
 
 
 
 
 
 
 
 
 
annotation-preprocessing/.gitignore DELETED
@@ -1,6 +0,0 @@
1
- *.env
2
- out
3
- in
4
- *.csv
5
- *.jpg
6
- *.sql
 
 
 
 
 
 
 
annotation-preprocessing/0_fetch_from_database.py DELETED
@@ -1,88 +0,0 @@
- import mysql.connector
- import pandas as pd
- import os
- from dotenv import load_dotenv
-
- BASE_OBJECT_SQL = """
- FROM UniqueGroundTruth
- JOIN DetectedObject on DetectedObject.id = UniqueGroundTruth.object_id
- JOIN Image on Image.id = DetectedObject.image_id
- JOIN FocusStack on FocusStack.id = Image.focus_stack_id
- JOIN Scan on Scan.id = FocusStack.scan_id
- JOIN Slide on Slide.id = Scan.slide_id
- JOIN ObjectType on ObjectType.id = UniqueGroundTruth.object_type_id
- WHERE metaclass_id = 1 -- only select eggs
- AND study_id = 31
- ORDER BY UniqueGroundTruth.focus_stack_id
- """
-
-
- def fetch_objects_from_database(db):
-     cursor = db.cursor()
-
-     cursor.execute("""SELECT
-         UniqueGroundTruth.focus_stack_id,
-         UniqueGroundTruth.x_min,
-         UniqueGroundTruth.y_min,
-         UniqueGroundTruth.x_max,
-         UniqueGroundTruth.y_max,
-         UniqueGroundTruth.object_type_id,
-         ObjectType.name,
-         Image.add_date""" + BASE_OBJECT_SQL)
-
-     result = cursor.fetchall()
-     return result
-
-
- def fetch_focus_stacks_from_database(db):
-     cursor = db.cursor()
-
-     cursor.execute("""SELECT
-         FocusStack.id as focus_stack_id,
-         CONCAT(study_id, "/", uuid, "/", file_name) as file_path,
-         file_name,
-         uuid,
-         study_id,
-         Image.pos_z,
-         Image.focus_value,
-         Image.add_date
-     FROM FocusStack
-     JOIN Scan on Scan.id = FocusStack.scan_id
-     JOIN Slide on Slide.id = Scan.slide_id
-     JOIN Study on Study.id = Slide.study_id
-     JOIN Image on Image.focus_stack_id = FocusStack.id
-     WHERE
-         FocusStack.id IN ( -- get all focus stacks that have objects in them
-             SELECT DISTINCT
-                 UniqueGroundTruth.focus_stack_id
-             """ + BASE_OBJECT_SQL + """
-         )
-     ORDER BY FocusStack.id DESC, focus_value, focus_level
-     """)
-     result = cursor.fetchall()
-     return result
-
-
- if __name__ == "__main__":
-     load_dotenv()
-
-     db = mysql.connector.connect(
-         host=os.getenv('DB_HOST'),
-         user=os.getenv('DB_USER'),
-         password=os.getenv('DB_PASSWORD'),
-         database=os.getenv('DB_NAME')
-     )
-
-     print("Querying objects...")
-     df_objects = pd.DataFrame(fetch_objects_from_database(db))
-     print("Querying stacks...")
-     df_stacks = pd.DataFrame(fetch_focus_stacks_from_database(db))
-
-     df_objects.columns = ['stack_id', 'x_min', 'y_min', 'x_max', 'y_max',
-                           'object_type_id', 'name', 'add_date']
-     df_stacks.columns = ['stack_id', 'file_path', 'file_name',
-                          'uuid', 'study_id', 'pos_z', 'focus_value', 'add_date']
-
-     print("Writing objects to file...")
-     df_objects.to_csv("out/objects.csv")
-     print("Writing stacks to file...")
-     df_stacks.to_csv("out/stacks.csv")
annotation-preprocessing/1_splitting_into_patches.py DELETED
@@ -1,165 +0,0 @@
- import pandas as pd
- from collections import defaultdict
- from dotenv import load_dotenv
- import os
- from PIL import Image
- import math
- import json
- import random
-
-
- class StackEntry:
-     def __init__(self):
-         self.images = []
-         self.objects = []
-
-     def add_image(self, image):
-         self.images.append(image)
-
-     def add_object(self, object):
-         self.objects.append(object)
-
-     def sort(self):
-         self.images.sort(key=lambda x: x.focus_value)
-
-
- def get_neighbours(img, x, y, dimensions):
-     neighbour_candidates = [(-1, -1), (0, -1), (1, -1), (-1, 0),
-                             (1, 0), (-1, 1), (0, 1), (1, 1)]
-
-     width, height = img.size
-
-     neighbours = []
-     for x_offset, y_offset in neighbour_candidates:
-         neighbour_x = x + x_offset * dimensions
-         neighbour_y = y + y_offset * dimensions
-
-         if (neighbour_x >= 0 and neighbour_x + dimensions <= width
-                 and neighbour_y >= 0 and neighbour_y + dimensions <= height):
-             box = [neighbour_x, neighbour_y,
-                    neighbour_x + dimensions, neighbour_y + dimensions]
-             neighbours.append((neighbour_x, neighbour_y, img.crop(box)))
-         else:
-             neighbours.append(None)
-     return neighbours
-
-
- def extract_object_tiles(obj, stack_images, in_folder, threshold=0.25):
-     x_start = int(obj.x_min / size) * size
-     x_end = int(math.ceil(obj.x_max / size)) * size
-     y_start = int(obj.y_min / size) * size
-     y_end = int(math.ceil(obj.y_max / size)) * size
-
-     tiles = []
-
-     focus_stack_images = list(map(
-         lambda x: (x, Image.open(os.path.join(in_folder, x.file_path))),
-         stack_images))
-
-     # Get tiles of the image that contain the bounding box of the object
-     for y in range(y_start, y_end, size):
-         for x in range(x_start, x_end, size):
-             if compute_overlap(
-                     [x, y, x + size, y + size],
-                     [obj.x_min, obj.y_min, obj.x_max, obj.y_max]
-             ) > size * size * threshold:
-                 stack = []
-                 for row, img in focus_stack_images:
-                     box = [x, y, x + size, y + size]
-                     crop = img.crop(box)
-
-                     neighbours = get_neighbours(img, x, y, size)
-                     stack.append((row, box[:2], crop, neighbours))
-                 tiles.append(stack)
-     return tiles
-
-
- def save_tile(original_file_path, out_dir, x: int, y: int, img, overwrite=False):
-     path, file_name = os.path.split(original_file_path)
-     name, ext = os.path.splitext(file_name)
-
-     out_path = os.path.join(out_dir, path)
-     save_to = os.path.join(out_path, f'{name}_{x}_{y}{ext}')
-
-     if not os.path.exists(out_path):
-         os.makedirs(out_path)
-     if overwrite or not os.path.exists(save_to):
-         img.save(save_to)
-     return os.path.join(path, f'{name}_{x}_{y}{ext}')
-
-
- def compute_overlap(rect1, rect2):
-     # clamp to zero so that disjoint rectangles do not yield a positive area
-     dx = min(rect1[2], rect2[2]) - max(rect1[0], rect2[0])
-     dy = min(rect1[3], rect2[3]) - max(rect1[1], rect2[1])
-     return max(dx, 0) * max(dy, 0)
-
-
- def save_obj_tiles(obj, out_folder, in_folder, stack_images):
-     extracted = extract_object_tiles(obj, stack_images, in_folder)
-     z_stacks = []
-     for z_stack in extracted:
-         z_stack_images = []
-         for row, box, img, tile_neighbours in z_stack:
-             neighbours = []
-
-             image_path = save_tile(row.file_path, out_folder, box[0], box[1], img)
-             for neighbour in tile_neighbours:
-                 n_path = None
-                 if neighbour:
-                     x, y, n_img = neighbour
-                     n_path = save_tile(row.file_path, out_folder, x, y, n_img)
-                 neighbours.append(n_path)
-
-             z_stack_images.append({
-                 "focus_value": row["focus_value"],
-                 "image_path": image_path,
-                 "neighbours": neighbours,
-                 "original_filename": row["file_name"],
-                 "scan_uuid": row["uuid"],
-                 "study_id": row["study_id"],
-             })
-         z_stacks.append({
-             "best_index": None,
-             "images": z_stack_images,
-             "obj_name": obj["name"],
-             "stack_id": obj["stack_id"],
-         })
-
-     return z_stacks
-
-
- def save_stack(stack, out_folder, in_folder):
-     z_stacks = []
-     for obj in stack.objects:
-         z_stacks.extend(save_obj_tiles(obj, out_folder, in_folder, stack.images))
-     return z_stacks
-
-
- if __name__ == "__main__":
-     load_dotenv()
-     print("Getting environment variables...")
-     size = int(os.getenv('IMG_SIZE'))
-     root_in = os.getenv('ROOT_IN')
-
-     print(f'img_size: {size}')
-     print(f'in_folder: {root_in}')
-
-     print("Loading data from csv files...")
-     objects = pd.read_csv("out/objects.csv", index_col=0)
-     stacks = pd.read_csv("out/stacks.csv", index_col=0)
-
-     stacks_dict = defaultdict(StackEntry)
-
-     print("Building internal datastructure...")
-     # adding images to dict
-     for index, row in stacks.iterrows():
-         stacks_dict[row.stack_id].add_image(row)
-
-     for values in stacks_dict.values():
-         values.sort()
-
-     # adding objects
-     for index, row in objects.iterrows():
-         stacks_dict[row.stack_id].add_object(row)
-
-     out_folder = "out"
-     z_stacks = []
-
-     print("Generating image tiles and writing them to file...")
-     for stack in stacks_dict.values():
-         z_stacks.extend(save_stack(stack, out_folder, root_in))
-
-     # randomize z_stacks
-     print("Shuffling data...")
-     random.shuffle(z_stacks)
-
-     print("Writing meta-data for annotation to file...")
-     with open(os.path.join(out_folder, "data.json"), 'w') as file:
-         file.write(json.dumps(z_stacks))
annotation-preprocessing/Dockerfile DELETED
@@ -1,13 +0,0 @@
- FROM python:3.7
-
- WORKDIR /usr/src/app
-
- RUN apt-get update
- RUN apt-get install libgl1 -y
-
- COPY requirements.txt ./
- RUN pip install --no-cache-dir -r requirements.txt
-
- COPY *.py ./
-
- CMD sh -c "python 0_fetch_from_database.py && python 1_splitting_into_patches.py"
annotation-preprocessing/README.md DELETED
@@ -1,48 +0,0 @@
- # Annotation Preprocessing
-
- This directory contains code to extract image metadata from the database. In the first step the metadata is converted to csv files. The second step then loads the metadata and creates small tiles out of all images that contain eggs. The corresponding information about these patches is stored in a json file which can be read by [Focus Annotator](https://github.com/13hannes11/focus_annotator).
-
- ## Environment Variables
-
- To run the preprocessing you need to create a `.env` file or set the corresponding environment variables directly. The file should contain all of the following:
-
- For step 0, fetching data from the database, you need:
-
- ```
- DB_HOST=
- DB_USER=
- DB_PASSWORD=
- DB_NAME=
- ```
-
- For step 1, cropping and extracting images with eggs, you need:
-
- ```
- IMG_SIZE=75
- ROOT_IN="in"
- ```
-
- The code can be run in a Docker container by running `docker-compose up` inside this directory. Make sure you edit the mounts in the docker-compose file to point to your directories:
-
- ```yaml
- volumes:
-   - <path to your output directory>:/usr/src/app/out:z
-   - <path to your input directory>:/usr/src/app/in:z
- ```
-
- Alternatively, you can run the two steps manually:
-
- ```
- python 0_fetch_from_database.py
- ```
-
- and
-
- ```
- python 1_splitting_into_patches.py
- ```
annotation-preprocessing/docker-compose.yml DELETED
@@ -1,10 +0,0 @@
- version: "3" # optional since v1.27.0
- services:
-   preprocess:
-     build: .
-     volumes:
-       - ./out/:/usr/src/app/out:z
-       - ./in/:/usr/src/app/in:z
-     env_file:
-       - .env
-     network_mode: host # use host networking; you can also just link container networks directly
annotation-preprocessing/out/.gitignore DELETED
File without changes
annotation-preprocessing/requirements.txt DELETED
@@ -1,6 +0,0 @@
- opencv-python
- numpy
- pillow
- mysql-connector-python
- pandas
- python-dotenv
{models/configs β†’ configs}/callbacks/default.yaml RENAMED
File without changes
{models/configs β†’ configs}/callbacks/none.yaml RENAMED
File without changes
{models/configs β†’ configs}/datamodule/focus.yaml RENAMED
File without changes
{models/configs β†’ configs}/datamodule/mnist.yaml RENAMED
File without changes
{models/configs β†’ configs}/debug/default.yaml RENAMED
File without changes
{models/configs β†’ configs}/debug/limit_batches.yaml RENAMED
File without changes
{models/configs β†’ configs}/debug/overfit.yaml RENAMED
File without changes
{models/configs β†’ configs}/debug/profiler.yaml RENAMED
File without changes
{models/configs β†’ configs}/debug/step.yaml RENAMED
File without changes
{models/configs β†’ configs}/debug/test_only.yaml RENAMED
File without changes
{models/configs β†’ configs}/experiment/example.yaml RENAMED
File without changes
{models/configs β†’ configs}/hparams_search/mnist_optuna.yaml RENAMED
File without changes
{models/configs β†’ configs}/local/.gitkeep RENAMED
File without changes
{models/configs β†’ configs}/log_dir/debug.yaml RENAMED
File without changes
{models/configs β†’ configs}/log_dir/default.yaml RENAMED
File without changes
{models/configs β†’ configs}/log_dir/evaluation.yaml RENAMED
File without changes
{models/configs β†’ configs}/logger/comet.yaml RENAMED
File without changes
{models/configs β†’ configs}/logger/csv.yaml RENAMED
File without changes
{models/configs β†’ configs}/logger/many_loggers.yaml RENAMED
File without changes
{models/configs β†’ configs}/logger/mlflow.yaml RENAMED
File without changes
{models/configs β†’ configs}/logger/neptune.yaml RENAMED
File without changes
{models/configs β†’ configs}/logger/tensorboard.yaml RENAMED
File without changes
{models/configs β†’ configs}/logger/wandb.yaml RENAMED
File without changes
{models/configs β†’ configs}/model/focus.yaml RENAMED
File without changes
{models/configs β†’ configs}/model/mnist.yaml RENAMED
File without changes
{models/configs β†’ configs}/test.yaml RENAMED
File without changes
{models/configs β†’ configs}/train.yaml RENAMED
File without changes
{models/configs β†’ configs}/trainer/ddp.yaml RENAMED
File without changes
{models/configs β†’ configs}/trainer/default.yaml RENAMED
File without changes
{models/configs β†’ configs}/trainer/long.yaml RENAMED
File without changes
data-preprocessing/.env.example DELETED
@@ -1,3 +0,0 @@
- DATA_FILE=data.json
- OUT_FILE=metadata.csv
-
data-preprocessing/extract_annotations.py DELETED
@@ -1,45 +0,0 @@
- import json
- import os
- from itertools import chain
- from dotenv import load_dotenv
- import pandas as pd
-
-
- def to_relative_focus(stack):
-     best_index = stack["best_index"]
-     images = stack["images"]
-
-     best_value = images[best_index]["focus_value"]
-     for i in range(len(images)):
-         images[i]["focus_value"] = images[i]["focus_value"] - best_value
-     return stack
-
-
- def flatten_stack(stack):
-     images = stack["images"]
-
-     def f(image):
-         del image["neighbours"]
-         image["stack_id"] = stack["stack_id"]
-         image["obj_name"] = stack["obj_name"]
-         return image
-
-     images = list(map(f, images))
-     return images
-
-
- if __name__ == "__main__":
-     load_dotenv()
-     data_file = os.getenv('DATA_FILE')
-     out_file = os.getenv('OUT_FILE')
-
-     with open(data_file) as f:
-         content = json.load(f)
-
-     # compare against None explicitly so a best_index of 0 is not dropped
-     annotated = filter(lambda x: x["best_index"] is not None, content)
-     relative_focus = map(to_relative_focus, annotated)
-     flattened = chain(*map(flatten_stack, relative_focus))
-
-     dataframe = pd.DataFrame(flattened)
-     dataframe.to_csv(out_file)
data-preprocessing/requirements.txt DELETED
@@ -1 +0,0 @@
- python-dotenv
models/docker-compose.cuda.yml β†’ docker-compose.cuda.yml RENAMED
File without changes
models/docker-compose.yml β†’ docker-compose.yml RENAMED
File without changes
focus_annotator DELETED
@@ -1 +0,0 @@
- Subproject commit fe8ee5b5cbaf9271668fbf003c0a3ccac3fdb65b
models/.dockerignore DELETED
@@ -1,4 +0,0 @@
- *.env
- data
- logs
- configs