ddpm-ema-pokemon-64

Model description

This diffusion model was trained with the 🤗 Diffusers library on the huggan/pokemon dataset.

Intended uses & limitations

How to use

A minimal sampling sketch is shown below. It assumes the checkpoint is published on the Hub as jirtan/ddpm-ema-pokemon-64 and loads as an unconditional DDPMPipeline from 🤗 Diffusers.
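```python
from diffusers import DDPMPipeline

# Load the trained pipeline from the Hub (repo id taken from this card)
pipeline = DDPMPipeline.from_pretrained("jirtan/ddpm-ema-pokemon-64")

# Sample one 64x64 image; DDPM denoises over 1000 steps by default
image = pipeline(batch_size=1, num_inference_steps=1000).images[0]
image.save("pokemon_sample.png")
```

Sampling runs on CPU but is much faster on a GPU (move the pipeline with pipeline.to("cuda") first).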

Limitations and bias

The model only generates 64×64 images, and its outputs reflect the style and content of the huggan/pokemon training set; it should not be expected to produce imagery outside that domain. A systematic analysis of limitations and biases has not been provided.

Training data

The model was trained on images from the huggan/pokemon dataset, resized to 64×64 resolution.
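The snippet below sketches how such a training set could be loaded and preprocessed with 🤗 Datasets and torchvision. The exact preprocessing for this run is not documented, so the "image" column name, the random horizontal flip, and the [-1, 1] normalization are assumptions based on the standard unconditional training example.

```python
from datasets import load_dataset
from torchvision import transforms

# Load the dataset named in this card (column name "image" is an assumption)
dataset = load_dataset("huggan/pokemon", split="train")

# Typical preprocessing for a 64x64 DDPM: resize, flip, scale to [-1, 1]
preprocess = transforms.Compose([
    transforms.Resize((64, 64)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize([0.5], [0.5]),
])

def transform(examples):
    examples["pixel_values"] = [preprocess(img.convert("RGB")) for img in examples["image"]]
    return examples

dataset.set_transform(transform)
```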

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 16
  • eval_batch_size: 16
  • gradient_accumulation_steps: 1
  • optimizer: AdamW with betas=(0.95, 0.999), weight_decay=1e-06 and epsilon=1e-08
  • lr_scheduler: cosine
  • lr_warmup_steps: 500
  • ema_inv_gamma: 1.0
  • ema_power: 0.75
  • ema_max_decay: 0.9999 (together these set the EMA decay schedule; see the sketch after this list)
  • mixed_precision: no
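The three EMA values determine how quickly the exponential moving average of the model weights ramps up during training. The sketch below mirrors the warmup formula used by the Diffusers EMA helper; it is illustrative only, not the exact training code.

```python
def ema_decay(step: int, inv_gamma: float = 1.0, power: float = 0.75, max_decay: float = 0.9999) -> float:
    """Decay grows from 0 toward max_decay as training progresses."""
    decay = 1.0 - (1.0 + step / inv_gamma) ** (-power)
    return max(0.0, min(decay, max_decay))

# Early steps average aggressively; late steps barely move the EMA weights.
for step in (10, 100, 1_000, 10_000, 100_000):
    print(f"step {step:>6}: decay = {ema_decay(step):.6f}")
```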

Training results

📈 TensorBoard logs
