from diffusers import StableDiffusionPipeline
import gradio as gr
import torch

# build the Hub URL of the .ckpt file to load
file_name = "/blob/main/rem_3k.ckpt"
model_url = "https://huggingface.co/waifu-research-department/Rem" + file_name
pipeline = StableDiffusionPipeline.from_single_file(
    model_url,
    torch_dtype=torch.float16,
)
description="""
# running stable diffusion from a ckpt file
## NOTICE ⚠️:
- this space does not work right now because it needs a GPU, feel free to **clone this space**, set up your own with a GPU and meet your waifu **ヽ（≧◡≦）ノ**
if you do not have money (just like me **(▬▬﹏▬▬)** ) you can always:
* **run the code on your PC** if you have a good GPU and a good internet connection (needed to download the AI model, a one-time thing)
* **run the model in the cloud** (Colab and Kaggle are good alternatives, and they have a pretty good internet connection)
### minimalistic code to run a ckpt model
* enable GPU (click runtime then change runtime type)
* install the following libraries
```
!pip install -q diffusers gradio omegaconf
```
* **restart your kernel** (click runtime then click restart session)
* run the following code
```python
from diffusers import StableDiffusionPipeline
import torch

pipeline = StableDiffusionPipeline.from_single_file(
    "https://huggingface.co/waifu-research-department/Rem/blob/main/rem_3k.ckpt",  # put your model url here
    torch_dtype=torch.float16,
).to("cuda")
positive_prompt = "anime girl prompt here"  # change this
negative_prompt = "3D"  # things you hate here
image = pipeline(positive_prompt, negative_prompt=negative_prompt).images[0]
image  # your image is saved in this PIL variable
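# optional (a sketch): save the PIL image to disk; "rem.png" is just an example filename
image.save("rem.png")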
```
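if you want more control over the output, the same pipeline call also accepts the usual diffusers sampling arguments (a small sketch, the values below are just examples):
```python
image = pipeline(
    positive_prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=30,  # example value: more steps is slower but usually cleaner
    guidance_scale=7.5,      # example value: how strongly the prompt is followed
).images[0]
```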
""" | |
# move the pipeline to the GPU if one is available
try:
    pipeline.to("cuda")
except Exception:
    log = "no GPU available"
def text2img(positive_prompt, negative_prompt):
    try:
        image = pipeline(positive_prompt, negative_prompt=negative_prompt).images[0]
        log = {"positive_prompt": positive_prompt, "negative_prompt": negative_prompt}
    except Exception as e:
        log = f"ERROR: {e}"
        image = None
    return log, image
gr.Interface(text2img, ["text", "text"], ["text", "image"], examples=[["rem", "3D"]], description=description).launch()