---
license: openrail
---
|
|
|
# ControlNet-XS model for StableDiffusionXL and canny edges input |
|
|
|
🎬 Original paper and models by https://github.com/vislearn/ControlNet-XS
|
|
|
👷🏽‍♂️ Translated into diffusers architecture by https://twitter.com/UmerHAdil
|
|
|
This model is trained for use with [StableDiffusionXL](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) and is conditioned on canny edge maps.
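
Below is a minimal loading sketch. It assumes the ControlNet-XS integration available in recent diffusers releases (the `ControlNetXSAdapter` and `StableDiffusionXLControlNetXSPipeline` classes) and that this repository's id follows the naming pattern of the sibling models listed below (`UmerHA/ConrolNetXS-SDXL-canny`); check the diffusers documentation for the exact supported versions.

```python
import torch
from diffusers import ControlNetXSAdapter, StableDiffusionXLControlNetXSPipeline

# Load the ControlNet-XS control module from this repository
controlnet = ControlNetXSAdapter.from_pretrained(
    "UmerHA/ConrolNetXS-SDXL-canny", torch_dtype=torch.float16
)

# Attach it to the StableDiffusionXL base model
pipe = StableDiffusionXLControlNetXSPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
)
pipe.enable_model_cpu_offload()  # optional: reduces VRAM usage
```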
|
|
|
--- |
|
|
|
ControlNet-XS was introduced in [ControlNet-XS](https://vislearn.github.io/ControlNet-XS/) by Denis Zavadski and Carsten Rother. It is based on the observation that the control model in the [original ControlNet](https://huggingface.co/papers/2302.05543) can be made much smaller and still produce good results.
|
|
|
As with the original ControlNet model, you can provide an additional control image to condition and control Stable Diffusion generation. For example, if you provide a depth map, the ControlNet model generates an image that preserves the spatial information from the depth map. This is a more flexible and accurate way to control the image generation process.
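
Continuing the loading sketch above, here is how conditioning on canny edges (this model's input type) might look: extract an edge map with OpenCV and pass it to the pipeline as the control image. The prompt, canny thresholds, and conditioning scale are illustrative values, not settings from the paper.

```python
import cv2
import numpy as np
from PIL import Image
from diffusers.utils import load_image

# Any source image works; this public test image is just an example
source = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png"
)

# Extract canny edges and replicate them into a 3-channel control image
edges = cv2.Canny(np.array(source), 100, 200)
canny_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

# The edge map constrains the layout; the prompt sets content and style
image = pipe(
    prompt="aerial view, a futuristic research complex in a bright foggy jungle, hard lighting",
    negative_prompt="low quality, bad quality, sketches",
    image=canny_image,
    controlnet_conditioning_scale=0.5,
).images[0]
image.save("output.png")
```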
|
|
|
Using ControlNet-XS instead of the regular ControlNet produces images of roughly the same quality while running 20-25% faster ([see benchmark](https://github.com/UmerHA/controlnet-xs-benchmark/blob/main/Speed%20Benchmark.ipynb)) and using about 45% less memory.
|
|
|
--- |
|
|
|
Other ControlNet-XS models: |
|
|
|
- [StableDiffusionXL and depth input](https://huggingface.co/UmerHA/ConrolNetXS-SDXL-depth)
|
- [StableDiffusion 2.1 and canny edges input](https://huggingface.co/UmerHA/ConrolNetXS-SD2.1-canny) |
|
- [StableDiffusion 2.1 and depth input](https://huggingface.co/UmerHA/ConrolNetXS-SD2.1-depth) |