---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
pipeline_tag: text-to-image
base_model:
- RedRayz/illumina-xl-1.1
tags:
- stable-diffusion
- stable-diffusion-xl
---

# Abydos-XL-1.1

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/630e2d981ef92d4e37a1694e/gmHyCWfexAf9FwBb8CzKb.jpeg)

A modified Illustrious-XL-v0.1 with a Blue Archive style.

This is the successor to Abydos-XL-1.0, with slightly improved backgrounds (scenery), stability, and detail rendering.

You can find example images on the [Civitai model page](https://civitai.com/models/832248).

## Prompt Guidelines

Almost the same as the base model.

## Recommended Prompt

None (works well without `masterpiece, best quality`).

## Recommended Negative Prompt

`worst quality, low quality, bad quality, lowres, jpeg artifacts, unfinished, abstract, oldest, photoshop \(medium\)`

To improve background quality, add `simple background, transparent background` to the negative prompt.

## Recommended Settings

- Steps: 14-28
- Sampler: DPM++ 2M (`dpmpp_2m`)
- Scheduler: Simple
- Guidance Scale: 4-9

A sketch of these settings in Diffusers appears at the end of this card.

### Hires.fix

- Upscaler: 4x-UltraSharp or Latent
- Denoising strength: 0.5 (0.6 for Latent)

## Training information

Fine-tuned from Illumina-XL-1.1 by repeating a train-and-merge cycle with a DoRA six times using sd-scripts; an illustrative invocation is sketched at the end of this card.

- Network module: lycoris_kohya (algo=lora, dora_wd=True)
- Resolution: 1024 (bucketing enabled, min 512, max 2048)
- Optimizer: Lion
- Train U-Net only: Yes
- LR Scheduler: cosine with restarts (warmup ratio=0.1, repeats=4-6)
- Learning Rate: varied per cycle (min=1e-05, max=6e-05)
- Noise Offset: 0.04
- Immiscible Noise: 2048
- Batch size: 1
- Gradient Accumulation steps: 1
- Dim/Alpha: 16/4
- Conv Dim/Alpha: 1/0.25

## Dataset information

Dataset size: 289 images

## Training scripts

[sd-scripts](https://github.com/kohya-ss/sd-scripts)

## Notice

This model is licensed under the [Fair AI Public License 1.0-SD](https://freedevproject.org/faipl-1.0-sd/).

If you modify this model, you must share your changes and keep them under the original license. You are prohibited from monetizing any closed-source fine-tuned or merged model, i.e. one that keeps the public from accessing the model's source code/weights and its usage.
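
## Example: recommended settings in Diffusers (sketch)

A minimal sketch of the recommended settings mapped onto 🤗 Diffusers. The checkpoint filename is hypothetical, and Diffusers has no exact equivalent of the "Simple" scheduler, so the default DPM++ 2M sigma schedule is used as an approximation.

```python
import torch
from diffusers import StableDiffusionXLPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionXLPipeline.from_single_file(
    "abydos-xl-1.1.safetensors",  # hypothetical local path to the checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# DPM++ 2M; the "Simple" sigma schedule has no direct Diffusers toggle,
# so the scheduler defaults are used here as an approximation.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, algorithm_type="dpmsolver++", solver_order=2
)

image = pipe(
    prompt="1girl, ...",  # no masterpiece/best quality tags needed
    # Parentheses need no backslash-escaping outside A1111-style UIs.
    negative_prompt=(
        "worst quality, low quality, bad quality, lowres, jpeg artifacts, "
        "unfinished, abstract, oldest, photoshop (medium), "
        "simple background, transparent background"
    ),
    num_inference_steps=24,  # recommended range: 14-28
    guidance_scale=6.0,      # recommended range: 4-9
    width=1024,
    height=1024,
).images[0]
image.save("sample.png")
```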
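
## Example: reproducing one training cycle (sketch)

An illustrative sd-scripts invocation for a single DoRA training cycle, assuming sd-scripts with LyCORIS installed. Paths, the dataset directory, and the single learning rate shown are hypothetical; the actual learning rates varied between 1e-05 and 6e-05 across the six cycles, and the warmup-ratio and immiscible-noise settings are omitted because their flags depend on the sd-scripts version.

```bash
accelerate launch sdxl_train_network.py \
  --pretrained_model_name_or_path="illumina-xl-1.1.safetensors" \
  --train_data_dir="dataset" \
  --resolution=1024 \
  --enable_bucket --min_bucket_reso=512 --max_bucket_reso=2048 \
  --network_module=lycoris.kohya \
  --network_args "algo=lora" "dora_wd=True" "conv_dim=1" "conv_alpha=0.25" \
  --network_dim=16 --network_alpha=4 \
  --network_train_unet_only \
  --optimizer_type=Lion \
  --lr_scheduler=cosine_with_restarts --lr_scheduler_num_cycles=4 \
  --learning_rate=6e-5 \
  --noise_offset=0.04 \
  --train_batch_size=1 --gradient_accumulation_steps=1 \
  --output_dir="output"
```

After each cycle, the resulting DoRA was merged back into the checkpoint (e.g. with sd-scripts' `networks/sdxl_merge_lora.py`) before the next cycle was trained.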