# Bootstrapping Pipeline
The bootstrapping pipeline for DensePose was proposed in
[Sanakoyeu et al., 2020](https://arxiv.org/pdf/2003.00080.pdf)
to extend DensePose from humans to proximal animal classes
(chimpanzees). Currently, the pipeline is only implemented for
[chart-based models](DENSEPOSE_IUV.md).
Bootstrapping proceeds in two steps.
## Master Model Training
The master model is trained on data from the source domain (humans)
and the supporting domain (animals). Instances from the source domain
contain full DensePose annotations (`S`, `I`, `U` and `V`), while
instances from the supporting domain have segmentation annotations only.
To ensure segmentation quality in the target domain, only a subset of
supporting domain classes is included in the training. This is achieved
through category filters, e.g.
(see [configs/evolution/Base-RCNN-FPN-Atop10P_CA.yaml](../configs/evolution/Base-RCNN-FPN-Atop10P_CA.yaml)):
```
WHITELISTED_CATEGORIES:
  "base_coco_2017_train":
   - 1  # person
   - 16 # bird
   - 17 # cat
   - 18 # dog
   - 19 # horse
   - 20 # sheep
   - 21 # cow
   - 22 # elephant
   - 23 # bear
   - 24 # zebra
   - 25 # giraffe
```
The acronym `Atop10P` in config file names indicates that the categories are filtered to
contain only the top 10 animal classes and person.
The training is performed in a *class-agnostic* manner: all instances
are mapped to the same class (person), e.g.
(see [configs/evolution/Base-RCNN-FPN-Atop10P_CA.yaml](../configs/evolution/Base-RCNN-FPN-Atop10P_CA.yaml)):
```
CATEGORY_MAPS:
  "base_coco_2017_train":
    "16": 1 # bird -> person
    "17": 1 # cat -> person
    "18": 1 # dog -> person
    "19": 1 # horse -> person
    "20": 1 # sheep -> person
    "21": 1 # cow -> person
    "22": 1 # elephant -> person
    "23": 1 # bear -> person
    "24": 1 # zebra -> person
    "25": 1 # giraffe -> person
```
The acronym `CA` in config file names indicates that the training is class-agnostic.
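For illustration only (the helper and constant names below are hypothetical, not the actual DensePose dataset code), the combined effect of the whitelist and the category map on COCO-style annotations amounts to:
```
# Constants taken from the config excerpts above (illustrative sketch).
WHITELIST = {1, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25}
CATEGORY_MAP = {c: 1 for c in WHITELIST}  # every kept category -> person

def filter_and_remap(annotations):
    # Drop instances whose category is not whitelisted and remap the rest
    # to the person class so that training is class-agnostic.
    kept = []
    for ann in annotations:
        cat = ann["category_id"]
        if cat not in WHITELIST:
            continue
        kept.append({**ann, "category_id": CATEGORY_MAP[cat]})
    return kept
```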
## Student Model Training
The student model is trained on data from the source domain (humans),
the supporting domain (animals) and the target domain (chimpanzees).
Annotations in the source and supporting domains are the same as those
used for master model training.
Annotations in the target domain are obtained by applying the master model
to images that contain instances from the target category and by sampling
sparse annotations from the dense results. This process is called *bootstrapping*.
Below we give details on how the bootstrapping pipeline is implemented.
### Data Loaders
The central components that enable bootstrapping are
[`InferenceBasedLoader`](../densepose/data/inference_based_loader.py) and
[`CombinedDataLoader`](../densepose/data/combined_loader.py).
`InferenceBasedLoader` takes images from a data loader, applies a model
to those images, filters the model outputs based on the selected criteria and
samples the filtered outputs to produce annotations.
`CombinedDataLoader` combines data obtained from the loaders based on specified
ratios. The standard data loader has a default ratio of 1.0;
ratios for bootstrap datasets are specified in the configuration file.
The higher the ratio, the higher the probability that samples from the
corresponding data loader are included in a batch.
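As a minimal sketch of the idea (not the actual `CombinedDataLoader` code), ratio-based combination can be thought of as filling each batch slot from a source loader picked with probability proportional to its ratio:
```
import random
from itertools import cycle

def combined_batches(loaders, ratios, batch_size=8):
    # Cycle over every underlying iterable so we never run out of samples.
    iterators = [cycle(loader) for loader in loaders]
    total = sum(ratios)
    weights = [r / total for r in ratios]
    while True:
        # Pick a source loader for each slot with probability proportional
        # to its ratio, then take the next sample from that loader.
        sources = random.choices(range(len(iterators)), weights=weights, k=batch_size)
        yield [next(iterators[i]) for i in sources]
```
With ratios `[1.0, 1.0]`, for example, both loaders contribute on average half of each batch.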
Here is an example of the bootstrapping configuration taken from
[`configs/evolution/densepose_R_50_FPN_DL_WC1M_3x_Atop10P_CA_B_uniform.yaml`](../configs/evolution/densepose_R_50_FPN_DL_WC1M_3x_Atop10P_CA_B_uniform.yaml):
```
BOOTSTRAP_DATASETS:
  - DATASET: "chimpnsee"
    RATIO: 1.0
    IMAGE_LOADER:
      TYPE: "video_keyframe"
      SELECT:
        STRATEGY: "random_k"
        NUM_IMAGES: 4
      TRANSFORM:
        TYPE: "resize"
        MIN_SIZE: 800
        MAX_SIZE: 1333
      BATCH_SIZE: 8
      NUM_WORKERS: 1
    INFERENCE:
      INPUT_BATCH_SIZE: 1
      OUTPUT_BATCH_SIZE: 1
    DATA_SAMPLER:
      # supported types:
      #   densepose_uniform
      #   densepose_UV_confidence
      #   densepose_fine_segm_confidence
      #   densepose_coarse_segm_confidence
      TYPE: "densepose_uniform"
      COUNT_PER_CLASS: 8
    FILTER:
      TYPE: "detection_score"
      MIN_VALUE: 0.8
BOOTSTRAP_MODEL:
  WEIGHTS: https://dl.fbaipublicfiles.com/densepose/evolution/densepose_R_50_FPN_DL_WC1M_3x_Atop10P_CA/217578784/model_final_9fe1cc.pkl
```
The above example has one bootstrap dataset (`chimpnsee`). This dataset is registered as
a [VIDEO_LIST](../densepose/data/datasets/chimpnsee.py) dataset, which means that
it consists of a number of videos specified in a text file. Different strategies can be
used to sample individual images from videos. Here we use the `video_keyframe` strategy,
which considers only keyframes; this ensures a temporal offset between sampled images and
faster seek operations. We select at most 4 random keyframes from each video:
```
SELECT:
  STRATEGY: "random_k"
  NUM_IMAGES: 4
```
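The `random_k` selection boils down to the following (a sketch with a hypothetical helper, not the project's selector): given the keyframe indices of a video, keep at most `NUM_IMAGES` of them, chosen uniformly at random:
```
import random

def select_random_k(keyframe_indices, k=4):
    # Pick at most k keyframes uniformly at random; if the video has
    # fewer keyframes than k, keep them all.
    if len(keyframe_indices) <= k:
        return list(keyframe_indices)
    return random.sample(keyframe_indices, k)
```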
The frames are then resized
```
TRANSFORM:
  TYPE: "resize"
  MIN_SIZE: 800
  MAX_SIZE: 1333
```
and batched using the standard
[PyTorch DataLoader](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader):
```
BATCH_SIZE: 8
NUM_WORKERS: 1
```
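The resize transform above presumably follows the common shortest-edge convention (an assumption for this sketch, not a statement about the exact implementation): the shorter image side is scaled to `MIN_SIZE` unless that would make the longer side exceed `MAX_SIZE`, in which case the longer side is clamped to `MAX_SIZE`:
```
def resize_scale(height, width, min_size=800, max_size=1333):
    # Scale the shorter side to min_size, but never let the longer side
    # exceed max_size.
    scale = min_size / min(height, width)
    if max(height, width) * scale > max_size:
        scale = max_size / max(height, width)
    return scale
```
For a 1080x1920 frame this gives a scale of about 0.69 (1333/1920), since scaling the shorter side to 800 would push the longer side above 1333.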
`InferenceBasedLoader` decomposes these batches into smaller batches of size `INPUT_BATCH_SIZE`
and applies the master model specified by `BOOTSTRAP_MODEL`. Model outputs are filtered
by detection score:
```
FILTER:
  TYPE: "detection_score"
  MIN_VALUE: 0.8
```
and sampled using the specified sampling strategy:
```
DATA_SAMPLER:
  # supported types:
  #   densepose_uniform
  #   densepose_UV_confidence
  #   densepose_fine_segm_confidence
  #   densepose_coarse_segm_confidence
  TYPE: "densepose_uniform"
  COUNT_PER_CLASS: 8
```
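Conceptually, the inference and filtering steps above can be sketched as follows (an illustration only, not the actual `InferenceBasedLoader` code; the model is assumed to return one detection structure per image, exposing per-instance confidence scores and boolean indexing):
```
import torch

def infer_and_filter(model, images, input_batch_size=1, min_score=0.8):
    # Slice the incoming image batch into chunks of input_batch_size,
    # run the master model, and keep only high-confidence detections.
    kept = []
    with torch.no_grad():
        for start in range(0, len(images), input_batch_size):
            chunk = images[start : start + input_batch_size]
            for detections in model(chunk):
                kept.append(detections[detections.scores > min_score])
    return kept
```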
The current implementation supports
[uniform sampling](../densepose/data/samplers/densepose_uniform.py) and
[confidence-based sampling](../densepose/data/samplers/densepose_confidence_based.py)
to obtain sparse annotations from dense results. For confidence-based
sampling one needs to use a master model that produces confidence estimates.
The `WC1M` master model used in the example above produces all three types of confidence
estimates.
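As a rough illustration of the uniform strategy (a sketch with a hypothetical helper, not the actual sampler in `densepose/data/samplers`), one can think of it as picking `COUNT_PER_CLASS` points uniformly at random from the pixels covered by a predicted foreground mask:
```
import numpy as np

def sample_uniform_points(foreground_mask, count_per_class=8, rng=None):
    # foreground_mask: boolean HxW array marking pixels covered by a detection.
    rng = rng or np.random.default_rng()
    ys, xs = np.nonzero(foreground_mask)
    if len(ys) == 0:
        return []
    k = min(count_per_class, len(ys))
    chosen = rng.choice(len(ys), size=k, replace=False)
    # Return sparse (x, y) points; a real sampler would also record the
    # predicted values at these locations to form the sparse annotation.
    return [(int(xs[i]), int(ys[i])) for i in chosen]
```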
Finally, sampled data is grouped into batches of size `OUTPUT_BATCH_SIZE`:
```
INFERENCE:
  INPUT_BATCH_SIZE: 1
  OUTPUT_BATCH_SIZE: 1
```
The proportion of data coming from the annotated datasets and from the bootstrapped dataset
can be tracked in the logs, e.g.:
```
[... densepose.engine.trainer]: batch/ 1.8, batch/base_coco_2017_train 6.4, batch/densepose_coco_2014_train 3.85
```
which means that over the last 20 iterations there were, on average, 1.8 bootstrapped data samples for every 6.4 samples from `base_coco_2017_train` and 3.85 samples from `densepose_coco_2014_train`.