Designing Network Design Spaces
Introduction
[BACKBONE]
We implement RegNetX and RegNetY models in detection systems and provide their first results on Mask R-CNN, Faster R-CNN and RetinaNet.
The pre-trained models are converted from the model zoo of pycls.
```
@article{radosavovic2020designing,
  title={Designing Network Design Spaces},
  author={Ilija Radosavovic and Raj Prateek Kosaraju and Ross Girshick and Kaiming He and Piotr Dollár},
  year={2020},
  eprint={2003.13678},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```
Usage
To use a RegNet model, two steps are required:

- Convert the model to the ResNet-style format supported by MMDetection
- Modify the backbone and neck in the config accordingly
Convert model
We already provide models ranging from 400MF to 12GF FLOPs in our model zoo.
For more general usage, we also provide the script regnet2mmdet.py
in the tools directory, which converts the keys of models pre-trained by pycls to
ResNet-style checkpoints used in MMDetection.
```shell
python -u tools/model_converters/regnet2mmdet.py ${PRETRAIN_PATH} ${STORE_PATH}
```
This script converts the model from `PRETRAIN_PATH` and stores the converted model in `STORE_PATH`.
Modify config
Users can modify the `depth` of the backbone and the corresponding keys in `arch`
according to the configs in the pycls model zoo.
The parameter `in_channels` in FPN can be found in Figures 15 & 16 of the paper (the `wi` values
in the legend).
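For illustration, a backbone/neck override for RegNetX-3.2GF might look like the following. This is a hedged sketch: the `arch` string, checkpoint path, and the `in_channels` values `[96, 192, 432, 1008]` are read off the pycls model zoo and should be double-checked against your pre-trained checkpoint.

```python
# Hypothetical config override for a RegNetX-3.2GF backbone in MMDetection.
# The arch name and per-stage channel widths are assumptions taken from the
# pycls model zoo; verify them before training.
model = dict(
    backbone=dict(
        type='RegNet',
        arch='regnetx_3.2gf',          # selects w0/wa/wm/depth/group_w internally
        out_indices=(0, 1, 2, 3),      # feed all four stages to the FPN
        frozen_stages=1,
        norm_cfg=dict(type='BN', requires_grad=True),
        norm_eval=True,
        style='pytorch',
        init_cfg=dict(
            type='Pretrained', checkpoint='open-mmlab://regnetx_3.2gf')),
    neck=dict(
        type='FPN',
        in_channels=[96, 192, 432, 1008],  # per-stage widths w_i of RegNetX-3.2GF
        out_channels=256,
        num_outs=5))
```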
This directory already provides some configs with their performance, using RegNetX models from the 800MF to the 12GF level.
For other pre-trained models or self-implemented RegNet models, users are responsible for checking these parameters themselves.
Note: although Figures 15 & 16 also provide `w0`, `wa`, `wm`, `group_w`, and `bot_mul` for `arch`, they are quantized and thus inaccurate; using them sometimes produces a different backbone whose keys do not match those in the pre-trained model.
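To see why quantized parameter values can be off, here is a sketch of the width-generation rule from the RegNet paper (following the spirit of `generate_regnet` in pycls, but omitting the group-width adjustment; the parameter values below are illustrative, not an exact pycls model-zoo entry):

```python
import math

def regnet_widths(w0, wa, wm, depth, q=8):
    """Generate per-block widths from (w0, wa, wm) as in the RegNet paper.

    u_j = w0 + wa * j gives a continuous width per block; each u_j is then
    snapped to the nearest power of wm times w0 and rounded to a multiple
    of q. Small errors in w0/wa/wm can flip a rounding and change widths,
    which is why reading quantized values off a figure is unreliable.
    """
    widths = []
    for j in range(depth):
        u = w0 + wa * j                             # continuous width
        s = round(math.log(u / w0) / math.log(wm))  # quantize the exponent
        w = w0 * wm ** s
        widths.append(int(round(w / q) * q))        # snap to a multiple of q
    return widths

# Illustrative parameters (not an exact pycls entry):
ws = regnet_widths(w0=24, wa=24.5, wm=2.5, depth=13)
stages = sorted(set(ws))  # distinct stage widths
```

Blocks sharing the same quantized width form one stage, so the distinct values in `ws` give the stage widths that must match the `in_channels` of the FPN.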
Results
Mask R-CNN
Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | box AP | mask AP | Config | Download |
---|---|---|---|---|---|---|---|---|
R-50-FPN | pytorch | 1x | 4.4 | 12.0 | 38.2 | 34.7 | config | model | log |
RegNetX-3.2GF-FPN | pytorch | 1x | 5.0 | - | 40.3 | 36.6 | config | model | log |
RegNetX-4.0GF-FPN | pytorch | 1x | 5.5 | - | 41.5 | 37.4 | config | model | log |
R-101-FPN | pytorch | 1x | 6.4 | 10.3 | 40.0 | 36.1 | config | model | log |
RegNetX-6.4GF-FPN | pytorch | 1x | 6.1 | - | 41.0 | 37.1 | config | model | log |
X-101-32x4d-FPN | pytorch | 1x | 7.6 | 9.4 | 41.9 | 37.5 | config | model | log |
RegNetX-8.0GF-FPN | pytorch | 1x | 6.4 | - | 41.7 | 37.5 | config | model | log |
RegNetX-12GF-FPN | pytorch | 1x | 7.4 | - | 42.2 | 38.0 | config | model | log |
RegNetX-3.2GF-FPN-DCN-C3-C5 | pytorch | 1x | 5.0 | - | 40.3 | 36.6 | config | model | log |
Faster R-CNN
Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download |
---|---|---|---|---|---|---|---|
R-50-FPN | pytorch | 1x | 4.0 | 18.2 | 37.4 | config | model | log |
RegNetX-3.2GF-FPN | pytorch | 1x | 4.5 | - | 39.9 | config | model | log |
RegNetX-3.2GF-FPN | pytorch | 2x | 4.5 | - | 41.1 | config | model | log |
RetinaNet
Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download |
---|---|---|---|---|---|---|---|
R-50-FPN | pytorch | 1x | 3.8 | 16.6 | 36.5 | config | model | log |
RegNetX-800MF-FPN | pytorch | 1x | 2.5 | - | 35.6 | config | model | log |
RegNetX-1.6GF-FPN | pytorch | 1x | 3.3 | - | 37.3 | config | model | log |
RegNetX-3.2GF-FPN | pytorch | 1x | 4.2 | - | 39.1 | config | model | log |
Pre-trained models
We also train some models with longer schedules and multi-scale training. Users could fine-tune them for downstream tasks.
Method | Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | box AP | mask AP | Config | Download |
---|---|---|---|---|---|---|---|---|---|
Faster RCNN | RegNetX-3.2GF-FPN | pytorch | 3x | 5.0 | - | 42.2 | - | config | model | log |
Mask RCNN | RegNetX-3.2GF-FPN | pytorch | 3x | 5.0 | - | 43.1 | 38.7 | config | model | log |
Notice
- The models are trained with a different weight decay, i.e., `weight_decay=5e-5`, following the ImageNet training setting. This brings an improvement of at least 0.7 AP absolute, but does not improve models using ResNet-50.
- RetinaNets using RegNets are trained with learning rate 0.02 and gradient clipping. We find that using learning rate 0.02 improves the results by at least 0.7 AP absolute, and gradient clipping is necessary to stabilize training. However, this does not improve the performance of ResNet-50-FPN RetinaNet.
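The settings above correspond to config fragments like the following. This is a sketch: `weight_decay=5e-5` is stated above, but the exact `max_norm`/`norm_type` clipping values are assumptions and may differ in the shipped configs.

```python
# Hypothetical optimizer settings matching the notice above.
# weight_decay=5e-5 follows the ImageNet setting for RegNets;
# grad_clip (values assumed here) stabilizes RetinaNet training at lr=0.02.
optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=5e-5)
optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))
```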