---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: cc0-1.0
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
inference: true
---

# LoRA fine-tuning - jonathandinu/sdxl-metamorphosis

These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were fine-tuned on illustrations from [Maria Sibylla Merian’s Metamorphosis Insectorum Surinamensium (1705)](https://huggingface.co/datasets/jonathandinu/merian-metamorphosis).

![image grid](samples.png)

LoRA for the text encoder was not enabled during training.

The madebyollin/sdxl-vae-fp16-fix VAE was used during training and is also loaded in the inference examples below.

## Intended uses & limitations

### How to use

#### text2img

```python
import torch
from diffusers import DiffusionPipeline, AutoencoderKL

# fp16-friendly VAE used during training
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
    variant="fp16",
)
pipeline.to("cuda")

# load the LoRA adapter weights
pipeline.load_lora_weights("jonathandinu/sdxl-metamorphosis-lora", weight_name="pytorch_lora_weights.safetensors")

image = pipeline(
    prompt="an astronaut in the jungle",
    num_inference_steps=30,
    generator=torch.manual_seed(1),
).images[0]
```

#### img2img

```python
import torch
from diffusers import AutoPipelineForImage2Image, AutoencoderKL
from diffusers.utils import load_image

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-sdxl-init.png"
init_image = load_image(url)

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
    variant="fp16",
)
pipeline.to("cuda")

pipeline.load_lora_weights("jonathandinu/sdxl-metamorphosis-lora", weight_name="pytorch_lora_weights.safetensors")

image = pipeline(
    prompt="an astronaut in the jungle",
    image=init_image,
    num_inference_steps=30,
    generator=torch.manual_seed(1),
    strength=0.7,
).images[0]
```

To dial the stylistic influence of the LoRA up or down at inference time, see the note on adjusting the LoRA scale at the end of this card.

### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
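
## Adjusting the LoRA scale

A minimal sketch (not part of the original training recipe): with diffusers, the influence of LoRA weights loaded via `load_lora_weights` can be attenuated by passing a `scale` value through `cross_attention_kwargs` at call time. Lower values keep the output closer to the base SDXL model; `1.0` applies the adapter at full strength. The `0.7` below is an arbitrary example value.

```python
import torch
from diffusers import DiffusionPipeline, AutoencoderKL

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")
pipeline.load_lora_weights("jonathandinu/sdxl-metamorphosis-lora", weight_name="pytorch_lora_weights.safetensors")

# scale < 1.0 attenuates the LoRA; 1.0 applies it at full strength (0.7 is an example value)
image = pipeline(
    prompt="an astronaut in the jungle",
    num_inference_steps=30,
    generator=torch.manual_seed(1),
    cross_attention_kwargs={"scale": 0.7},
).images[0]

# the adapter can also be removed entirely to recover the base model
pipeline.unload_lora_weights()
```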