Commit 46be0e1
Parent(s): 2c274c3
Update README.md

README.md CHANGED
@@ -25,4 +25,59 @@ You easily can click on [this link](https://colab.research.google.com/github/prp

### Code
The following code is written for _CUDA_-supported devices. If you use UIs or inference tools on other devices, you may need to tweak the code to get it to work; otherwise, it will run fine.
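For non-CUDA setups, one option is to detect the device at runtime instead of hard-coding `"cuda"`; the snippet below is only a rough sketch (it assumes `torch` is already installed, and `device` is just an illustrative variable name):

```python
import torch

# Pick the best available device: a CUDA GPU if present, otherwise Apple's
# MPS backend, otherwise the CPU (CPU inference is much slower).
if torch.cuda.is_available():
    device = "cuda"
elif torch.backends.mps.is_available():
    device = "mps"
else:
    device = "cpu"

# You could then call pipe.to(device) later instead of pipe.to("cuda");
# on CPU you may also want to drop torch_dtype=torch.float16.
```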
First, you need to install the required libraries:
```
pip3 install diffusers transformers scipy ftfy accelerate
```
_NOTE: installing the `accelerate` library makes the inference process considerably faster, but it is completely optional._

Then, you need to import the required libraries:
```python
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler, DiffusionPipeline, DPMSolverMultistepScheduler
import torch
```
and then, create a pipeline (this pipeline uses the Euler scheduler):
```python
model_id = "mann-e/mann-e_rev-2"

scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")

pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
```
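The imports above also bring in `DPMSolverMultistepScheduler`; if you prefer that sampler instead of Euler, the pipeline can be built the same way. This is only a sketch of the alternative and has not been tuned for this model:

```python
# Alternative scheduler (sketch): DPM-Solver++ usually needs fewer inference steps.
scheduler = DPMSolverMultistepScheduler.from_pretrained(model_id, subfolder="scheduler")

pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
```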
And of course, since you may get NSFW filter warnings even on the simplest prompts, you may consider disabling the safety checker:
```python
def dummy(images, **kwargs):
    return images, False

pipe.safety_checker = dummy
```
_NOTE: Please also consider the consequences of disabling this filter. We do not want anyone to come to harm because of the image generation results._

After that, you can easily start inference:
```python
prompt = "Concept art of a hostile alien planet with unbreathable purple air and toxic clouds, sinister atmosphere, deep shadows, sharp details"
negative_prompt = "low quality, blurry"
width = 768
height = 512
```
then:
```python
image = pipe(prompt=prompt, negative_prompt=negative_prompt, num_inference_steps=100, width=width, height=height, guidance_scale=10).images[0]
image.save("My_image.png")
```
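If you want reproducible results, you can also pass a fixed `generator` to the pipeline call; in the sketch below the seed `42` and the output file name are just placeholders:

```python
# Fixing the seed makes the same prompt and settings reproduce the same image.
generator = torch.Generator(device="cuda").manual_seed(42)

image = pipe(prompt=prompt, negative_prompt=negative_prompt, num_inference_steps=100, width=width, height=height, guidance_scale=10, generator=generator).images[0]
image.save("My_image_seeded.png")
```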
## Important Notes