Training details
#1 opened by CaptainZZZ
Hi author, thanks for the amazing work! I have some questions about the training details.
For the 33-channel transformer, do we need to train a transformer from scratch, or fully fine-tune (all parameters of) SD3's transformer?
Also, I am curious about the size of the training dataset, the training batch size, and the learning rate.
Thanks!
I just directly fine-tuned SD3 Medium. The hyper-parameter settings: image size: 768; learning rate: 1e-4; batch size: 4 per GPU; 4 GPUs. I only trained for 18,000 steps due to a GPU shortage.
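In case it helps, the setup could look roughly like the sketch below. This is a minimal sketch, not the actual training script: it assumes diffusers' `SD3Transformer2DModel`, that the patch embedding lives at `pos_embed.proj` (attribute names may differ across versions), and that the 33 channels are 16 noisy latents + 16 masked-image latents + 1 mask channel.

```python
# Minimal sketch (assumptions: diffusers' SD3Transformer2DModel, patch embedding
# at pos_embed.proj, and a 16 + 16 + 1 = 33 channel input layout for inpainting).
import torch
from diffusers import SD3Transformer2DModel

model_id = "stabilityai/stable-diffusion-3-medium-diffusers"
transformer = SD3Transformer2DModel.from_pretrained(model_id, subfolder="transformer")

# Expand the patch-embedding conv from 16 latent channels to 33
# (noisy latents + masked-image latents + mask), keeping the pretrained weights
# for the first 16 channels and zero-initializing the new ones.
old_proj = transformer.pos_embed.proj  # nn.Conv2d(16, hidden_dim, ...)
new_proj = torch.nn.Conv2d(
    33,
    old_proj.out_channels,
    kernel_size=old_proj.kernel_size,
    stride=old_proj.stride,
)
with torch.no_grad():
    new_proj.weight.zero_()
    new_proj.weight[:, :16] = old_proj.weight
    if old_proj.bias is not None:
        new_proj.bias.copy_(old_proj.bias)
transformer.pos_embed.proj = new_proj

# Full fine-tune of all transformer parameters with the reported settings:
# 768px images, lr 1e-4, batch size 4 per GPU on 4 GPUs, 18k steps.
optimizer = torch.optim.AdamW(transformer.parameters(), lr=1e-4)
```

The training loop would then follow the usual SD3 flow-matching objective, just with the extra conditioning channels concatenated to the noisy latents before each forward pass.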
Thanks so much for the reply!
I have another question: does inpainting need a large-scale dataset? May I ask your dataset size?