---
pipeline_tag: text-to-image
---
# SDXS ONNX
Converted from [IDKiro/sdxs-512-0.9](https://huggingface.co/IDKiro/sdxs-512-0.9) (i.e. the original model, without the DreamShaper fine-tune) using this command:
```sh
optimum-cli export onnx -m <local absolute path to original model> --task stable-diffusion ./mysdxs
```
Notice that I replaced the `/vae` folder in my local copy of the repo with the `/vae_large` folder from that same repo, and updated the model config at the repo root accordingly. This is because the ONNX exporter does not currently seem mature enough to handle nonstandard pipelines, so we are effectively using the original, ordinary autoencoder.
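The folder swap described above can be sketched as a few shell commands. This is a hedged illustration, not the exact commands used: the clone path is hypothetical (point `MODEL_DIR` at your actual local copy of the original repo), and a mock layout is created here only so the snippet is self-contained.

```shell
# Hypothetical local clone path of IDKiro/sdxs-512-0.9 -- adjust to yours.
# (A mock directory layout is created here so the snippet runs standalone.)
MODEL_DIR="$(mktemp -d)/sdxs-512-0.9"
mkdir -p "$MODEL_DIR/vae" "$MODEL_DIR/vae_large"
echo '{}' > "$MODEL_DIR/vae_large/config.json"

# Replace the distilled VAE with the ordinary (large) autoencoder
# so the ONNX exporter sees a standard pipeline layout.
rm -rf "$MODEL_DIR/vae"
cp -r "$MODEL_DIR/vae_large" "$MODEL_DIR/vae"
```

Remember to also update the model config at the repo root before running the export.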
For actual inference, you can test with something like:
```py
from optimum.onnxruntime import ORTStableDiffusionPipeline

pipeline = ORTStableDiffusionPipeline.from_pretrained("/local/absolute/path/to/repo")
prompt = "Sailing ship in storm by Leonardo da Vinci"
# SDXS is distilled for one-step generation, so a single step without CFG suffices.
image = pipeline(prompt, num_inference_steps=1, guidance_scale=0).images[0]
image.save("hello.png", "PNG")
```
## Using with TAESD
(Not tested yet)
Consider using the ONNX-converted TAESD model at [deinferno/taesd-onnx](https://huggingface.co/deinferno/taesd-onnx) (original model: [madebyollin/taesd](https://huggingface.co/madebyollin/taesd)).
Combined inference code:
```py
from huggingface_hub import snapshot_download
from diffusers.pipelines import OnnxRuntimeModel
from optimum.onnxruntime import ORTStableDiffusionPipeline
taesd_dir = snapshot_download(repo_id="deinferno/taesd-onnx")
pipeline = ORTStableDiffusionPipeline.from_pretrained(
    "lemonteaa/sdxs-onnx",
    vae_decoder_session=OnnxRuntimeModel.from_pretrained(f"{taesd_dir}/vae_decoder"),
    vae_encoder_session=OnnxRuntimeModel.from_pretrained(f"{taesd_dir}/vae_encoder"),
    revision="onnx",
)
prompt = "Sailing ship in storm by Leonardo da Vinci"
image = pipeline(prompt, num_inference_steps=1, guidance_scale=0).images[0]
image.save("hello.png", "PNG")
```