These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
![img_2](./image_2.png)
![img_3](./image_3.png)

You can use this code 👇

```python
from huggingface_hub.repocard import RepoCard
from diffusers import DiffusionPipeline
import torch

lora_model_id = "merve/lego-lora-trained-xl"
card = RepoCard.load(lora_model_id)
base_model_id = card.data.to_dict()["base_model"]

pipe = DiffusionPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
pipe.load_lora_weights(lora_model_id)

pipe("a picture of <s1><s2> minifigure as lana del rey, high quality", num_inference_steps=35).images[0]
```

LoRA for the text encoder was enabled: False.
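For reference, the `base_model` lookup in the snippet above simply reads a field from the model card's YAML metadata. A minimal stand-in sketch of that step, using a hypothetical hard-coded dict in place of the network call to `RepoCard.load` (no download or GPU needed):

```python
# Hypothetical stand-in for card.data.to_dict(): the model card's YAML
# front matter includes a `base_model` field naming the base checkpoint.
card_data = {
    "base_model": "stabilityai/stable-diffusion-xl-base-1.0",
    "tags": ["lora", "stable-diffusion-xl"],
}

# The same lookup the README snippet performs on the loaded card:
base_model_id = card_data["base_model"]
print(base_model_id)  # stabilityai/stable-diffusion-xl-base-1.0
```

This is why the snippet works for any LoRA repo whose card declares its base model: the adapter repo itself never needs to contain the base weights.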