- SlotVLM<sup>4</sup>
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m train.adversarial_training_clip_with_object_token --clip_model_name ViT-L-14 --slots_ckp ./ckps/model_slots_step_300000.pt --pretrained openai --dataset imagenet --imagenet_root /path/to/imagenet --template std --output_normalize False --steps 20000 --warmup 1400 --batch_size 128 --loss l2 --opt adamw --lr 1e-5 --wd 1e-4 --attack pgd --inner_loss l2 --norm linf --eps 4 --iterations_adv 10 --stepsize_adv 1 --wandb False --output_dir ./output --experiment_name with_OT --log_freq 10 --eval_freq 10
```

Set `--eps 2` to obtain SlotVLM<sup>2</sup> models.
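
For reference, a sketch of the SlotVLM<sup>2</sup> run, assuming only the perturbation budget changes; the experiment name `with_OT_eps2` is not part of the original instructions, just a suggestion to keep the outputs of the two runs separate:

```shell
# Identical to the SlotVLM^4 command above except for --eps 2; the experiment name is a
# hypothetical choice to avoid mixing up the eps=4 and eps=2 output directories.
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m train.adversarial_training_clip_with_object_token --clip_model_name ViT-L-14 --slots_ckp ./ckps/model_slots_step_300000.pt --pretrained openai --dataset imagenet --imagenet_root /path/to/imagenet --template std --output_normalize False --steps 20000 --warmup 1400 --batch_size 128 --loss l2 --opt adamw --lr 1e-5 --wd 1e-4 --attack pgd --inner_loss l2 --norm linf --eps 2 --iterations_adv 10 --stepsize_adv 1 --wandb False --output_dir ./output --experiment_name with_OT_eps2 --log_freq 10 --eval_freq 10
```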

If you want to resume training, add parameters such as
`--optimizer_state /xxx/checkpoints/fallback_80000_opt.pt --start_step 80000 --pretrained none`, for example:
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m train.adversarial_training_clip_with_object_token --clip_model_name ViT-L-14 --slots_ckp ./ckps/model_slots_step_300000.pt --dataset imagenet --imagenet_root /path/to/imagenet --template std --output_normalize False --steps 20000 --warmup 1400 --batch_size 128 --loss l2 --opt adamw --lr 1e-5 --wd 1e-4 --attack pgd --inner_loss l2 --norm linf --eps 4 --iterations_adv 10 --stepsize_adv 1 --wandb False --output_dir ./output --experiment_name with_OT --log_freq 10 --eval_freq 10 --optimizer_state /home/xxx/RobustVLM/output/ViT-L-14_openai_imagenet_l2_imagenet_with_Object_Token_xxxxx/checkpoints/fallback_80000_opt.pt --start_step 80000 --pretrained none
```
## Evaluation
Make sure the files in the `bash` directory are executable: `chmod +x bash/*`.
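
As a sketch of the evaluation workflow, assuming the evaluation scripts live directly under `bash/` (the script name below is a placeholder, not the repository's actual filename):

```shell
# Make every script in bash/ executable, list what is available, then run the one you need.
chmod +x bash/*
ls bash/
# ./bash/<eval_script>.sh    # placeholder: substitute the actual evaluation script name
```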