---
title: Real-Time SD Turbo
emoji: ⚡️
colorFrom: gray
colorTo: indigo
sdk: docker
pinned: false
suggested_hardware: a10g-small
disable_embedding: true
short_description: Real-Time Image-to-Image with SD-Turbo and ControlNet
---
# Real-Time Latent Consistency Model
This demo showcases the Latent Consistency Model (LCM) using Diffusers with an MJPEG stream server. You can read more about LCM + LoRAs with Diffusers here.
You need a webcam to run this demo. 🤗
See a collection of live demos here.
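
To give a sense of how the MJPEG streaming part fits together, below is a minimal sketch of an MJPEG endpoint built with FastAPI. It is illustrative only and not the demo's actual server code; the route name and the stand-in frame source are assumptions.

```python
# Minimal MJPEG streaming sketch (illustrative, not the demo's server code).
# Each frame is JPEG-encoded and pushed over one long-lived multipart response.
import io
import time

from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from PIL import Image

app = FastAPI()


def generate_frames():
    while True:
        # In the real demo each frame comes from the diffusion pipeline;
        # a blank image is used here as a stand-in.
        image = Image.new("RGB", (512, 512), color=(0, 0, 0))
        buffer = io.BytesIO()
        image.save(buffer, format="JPEG")
        yield (
            b"--frame\r\n"
            b"Content-Type: image/jpeg\r\n\r\n" + buffer.getvalue() + b"\r\n"
        )
        time.sleep(1 / 30)  # cap at roughly 30 fps


@app.get("/stream")
def stream():
    return StreamingResponse(
        generate_frames(),
        media_type="multipart/x-mixed-replace; boundary=frame",
    )
```

The browser keeps the single `/stream` request open and swaps in each new JPEG part as it arrives, which is what makes the output feel like live video.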
## Running Locally
You need CUDA and Python 3.10, Node > 19, a Mac with an M1/M2/M3 chip, or an Intel Arc GPU.
### Install
```bash
python -m venv venv
source venv/bin/activate
pip3 install -r server/requirements.txt
cd frontend && npm install && npm run build && cd ..
python server/main.py --reload --pipeline img2imgSDTurbo
```
Don't forget to build the frontend!
```bash
cd frontend && npm install && npm run build && cd ..
```
### Pipelines
You can build your own pipeline following the examples here.
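
As a rough sketch of the shape such a pipeline can take, the snippet below wraps a diffusers LCM image-to-image pipeline behind a small class with a `predict` step. The class layout, constructor arguments, and method names are illustrative assumptions; the files in the repo's pipelines folder are the authoritative reference for the real interface.

```python
# Hypothetical pipeline skeleton (illustrative only; follow the real examples
# in the pipelines folder for the actual interface and field names).
import torch
from diffusers import AutoPipelineForImage2Image
from PIL import Image


class Pipeline:
    def __init__(self, device: str, torch_dtype: torch.dtype):
        # "SimianLuo/LCM_Dreamshaper_v7" is the LCM checkpoint used by the LCM demos.
        self.pipe = AutoPipelineForImage2Image.from_pretrained(
            "SimianLuo/LCM_Dreamshaper_v7", torch_dtype=torch_dtype
        ).to(device)

    def predict(self, image: Image.Image, prompt: str) -> Image.Image:
        # LCM only needs a handful of denoising steps per frame.
        result = self.pipe(
            prompt=prompt,
            image=image,
            num_inference_steps=4,
            strength=0.5,
            guidance_scale=8.0,
        )
        return result.images[0]
```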
### LCM
#### Image to Image
```bash
python server/main.py --reload --pipeline img2img
```
#### Text to Image
```bash
python server/main.py --reload --pipeline txt2img
```
#### Image to Image ControlNet Canny
```bash
python server/main.py --reload --pipeline controlnet
```
### LCM + LoRA

Using LCM-LoRA gives the model the superpower of doing inference in as little as 4 steps. Learn more here, or read the technical report.
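
For reference, this is roughly how LCM-LoRA is attached to a Stable Diffusion 1.5 base model with diffusers. It is a standalone sketch rather than this repo's pipeline code, and the base checkpoint and prompt are placeholders.

```python
import torch
from diffusers import AutoPipelineForText2Image, LCMScheduler

# Any SD1.5-based checkpoint works here; "Lykon/dreamshaper-7" is a placeholder
# taken from the diffusers LCM-LoRA documentation.
pipe = AutoPipelineForText2Image.from_pretrained(
    "Lykon/dreamshaper-7", torch_dtype=torch.float16
).to("cuda")

# Swap in the LCM scheduler and attach the LCM-LoRA weights.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

# With LCM-LoRA, ~4 steps and a guidance_scale around 1.0 are enough.
image = pipe(
    prompt="a photo of a cat, studio lighting",
    num_inference_steps=4,
    guidance_scale=1.0,
).images[0]
image.save("cat.png")
```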
#### Image to Image ControlNet Canny LoRA
```bash
python server/main.py --reload --pipeline controlnetLoraSD15
```
Or with SDXL; note that SDXL is slower than SD15 since inference runs on 1024x1024 images:
```bash
python server/main.py --reload --pipeline controlnetLoraSDXL
```
#### Text to Image
```bash
python server/main.py --reload --pipeline txt2imgLora
```

```bash
python server/main.py --reload --pipeline txt2imgLoraSDXL
```
## Available Pipelines

### LCM

`img2img`
`txt2img`
`controlnet`
`txt2imgLora`
`controlnetLoraSD15`
### SDXL
`controlnetLoraSDXL`
`txt2imgLoraSDXL`

### SDXL Turbo

`img2imgSDXLTurbo`
`controlnetSDXLTurbo`

### SD Turbo

`img2imgSDTurbo`
`controlnetSDTurbo`

### Segmind-Vega

`controlnetSegmindVegaRT`
`img2imgSegmindVegaRT`
## Setting environment variables

- `--host`: Host address (default: 0.0.0.0)
- `--port`: Port number (default: 7860)
- `--reload`: Reload code on change
- `--max-queue-size`: Maximum queue size (optional)
- `--timeout`: Timeout period (optional)
- `--safety-checker`: Enable Safety Checker (optional)
- `--torch-compile`: Use Torch Compile
- `--use-taesd` / `--no-taesd`: Use Tiny Autoencoder
- `--pipeline`: Pipeline to use (default: "txt2img")
- `--ssl-certfile`: SSL Certificate File (optional)
- `--ssl-keyfile`: SSL Key File (optional)
- `--debug`: Print inference time
- `--compel`: Compel option
- `--sfast`: Enable Stable Fast
- `--onediff`: Enable OneDiff
If you run the server with `bash build-run.sh`, you can set the `PIPELINE` environment variable to choose which pipeline to run:
```bash
PIPELINE=txt2imgLoraSDXL bash build-run.sh
```
You can also configure the server through environment variables:
```bash
TIMEOUT=120 SAFETY_CHECKER=True MAX_QUEUE_SIZE=4 python server/main.py --reload --pipeline txt2imgLoraSDXL
```
If you're running locally and want to test it on Mobile Safari, the web server needs to be served over HTTPS, or follow the instructions in my comment:
```bash
openssl req -newkey rsa:4096 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem
python server/main.py --reload --ssl-certfile=certificate.pem --ssl-keyfile=key.pem
```
## Docker
You need the NVIDIA Container Toolkit for Docker. The pipeline defaults to `controlnet`.
```bash
docker build -t lcm-live .
docker run -ti -p 7860:7860 --gpus all lcm-live
```
To reuse model data from the host and avoid downloading it again, mount `~/.cache/huggingface` into the container (you can change this to any other directory); if you use `huggingface-cli` locally, you can share the same cache:
```bash
docker run -ti -p 7860:7860 -e HF_HOME=/data -v ~/.cache/huggingface:/data --gpus all lcm-live
```
Or with environment variables:
```bash
docker run -ti -e PIPELINE=txt2imgLoraSDXL -p 7860:7860 --gpus all lcm-live
```