Artiprocher committed
Commit f1f325f · 1 Parent(s): 649a29c
Files changed (3):
  1. README.md +56 -0
  2. config.json +42 -0
  3. diffusion_pytorch_model.bin +3 -0
README.md CHANGED
@@ -1,3 +1,59 @@
  ---
  license: apache-2.0
+ tags:
+ - pytorch
+ - diffusers
+ - text-to-image
  ---
+
+ # Chinese ControlNet Model (Canny)
+
+ ## 简介 Brief Introduction
+
+ 我们开源了一个中文 ControlNet 模型,该模型适配 diffusion 模型 `alibaba-pai/pai-diffusion-artist-large-zh`,您可以使用该模型控制 diffusion 模型生成的图像。
+
+ We release a Chinese ControlNet model that works with the diffusion model `alibaba-pai/pai-diffusion-artist-large-zh`. You can use this model to control the images generated by the diffusion model.
+
+ * GitHub: [EasyNLP](https://github.com/alibaba/EasyNLP)
+
+ ## 使用 Usage
+
+ ```python
+ from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
+ from PIL import Image
+ import numpy as np
+ import cv2
+
+
+ def to_canny(image):
+     # Extract Canny edges and replicate them into a 3-channel image.
+     low_threshold = 100
+     high_threshold = 200
+     image = np.array(image)
+     image = cv2.Canny(image, low_threshold, high_threshold)
+     image = image[:, :, None]
+     image = np.concatenate([image, image, image], axis=2)
+     image = Image.fromarray(image)
+     return image
+
+
+ controlnet_id = "alibaba-pai/pai-diffusion-artist-large-zh-controlnet-canny"
+ controlnet = ControlNetModel.from_pretrained(controlnet_id)
+ model_id = "alibaba-pai/pai-diffusion-artist-large-zh"
+ pipe = StableDiffusionControlNetPipeline.from_pretrained(model_id, controlnet=controlnet)
+ pipe = pipe.to("cuda")
+
+ image = Image.open("image.png")
+ controlnet_image = to_canny(image)
+ prompt = "白色羽毛的小鸟"
+ image = pipe(prompt, controlnet_image).images[0]
+
+ controlnet_image.save("image_canny.png")
+ image.save("image_canny_output.png")
+ ```
+
+ ## 使用须知 Notice for Use
+
+ 使用上述模型需遵守 [AIGC模型开源特别条款](https://terms.alicdn.com/legal-agreement/terms/common_platform_service/20230505180457947/20230505180457947.html)。
+
+ If you want to use this model, please read this [document](https://terms.alicdn.com/legal-agreement/terms/common_platform_service/20230505180457947/20230505180457947.html) carefully and abide by its terms.
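The `to_canny` helper in the README turns a single-channel edge map into a 3-channel image before handing it to the pipeline. The replication step can be illustrated in isolation with plain NumPy — a minimal sketch that uses a small hand-made mask as a hypothetical stand-in for the output of `cv2.Canny`:

```python
import numpy as np

# Hypothetical stand-in for a Canny edge map: a 4x4 uint8 mask
# (cv2.Canny would normally produce this from a photo).
edges = np.zeros((4, 4), dtype=np.uint8)
edges[1, 1:3] = 255  # two "edge" pixels

# Same replication step as to_canny: HxW -> HxWx3, so the
# conditioning image has the RGB layout the pipeline expects.
edges_rgb = np.concatenate([edges[:, :, None]] * 3, axis=2)

print(edges_rgb.shape)  # (4, 4, 3)
```

All three channels carry the same edge mask; the conversion only changes the layout, not the edge information.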
config.json ADDED
@@ -0,0 +1,42 @@
+ {
+   "_class_name": "ControlNetModel",
+   "_diffusers_version": "0.16.1",
+   "act_fn": "silu",
+   "attention_head_dim": 8,
+   "block_out_channels": [
+     320,
+     640,
+     1280,
+     1280
+   ],
+   "class_embed_type": null,
+   "conditioning_embedding_out_channels": [
+     16,
+     32,
+     96,
+     256
+   ],
+   "controlnet_conditioning_channel_order": "rgb",
+   "cross_attention_dim": 768,
+   "down_block_types": [
+     "CrossAttnDownBlock2D",
+     "CrossAttnDownBlock2D",
+     "CrossAttnDownBlock2D",
+     "DownBlock2D"
+   ],
+   "downsample_padding": 1,
+   "flip_sin_to_cos": true,
+   "freq_shift": 0,
+   "global_pool_conditions": false,
+   "in_channels": 4,
+   "layers_per_block": 2,
+   "mid_block_scale_factor": 1,
+   "norm_eps": 1e-05,
+   "norm_num_groups": 32,
+   "num_class_embeds": null,
+   "only_cross_attention": false,
+   "projection_class_embeddings_input_dim": null,
+   "resnet_time_scale_shift": "default",
+   "upcast_attention": false,
+   "use_linear_projection": false
+ }
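A quick way to sanity-check a config like the one above is to load it with the standard-library `json` module and verify its internal consistency — a minimal sketch using an excerpt of the fields shown (each down-block type pairs with one entry of `block_out_channels`, so the two lists must have equal length):

```python
import json

# Excerpt of the config.json fields shown above.
config = json.loads("""
{
  "block_out_channels": [320, 640, 1280, 1280],
  "down_block_types": [
    "CrossAttnDownBlock2D",
    "CrossAttnDownBlock2D",
    "CrossAttnDownBlock2D",
    "DownBlock2D"
  ],
  "conditioning_embedding_out_channels": [16, 32, 96, 256],
  "cross_attention_dim": 768
}
""")

# One output-channel count per down block.
assert len(config["block_out_channels"]) == len(config["down_block_types"])

# cross_attention_dim must match the hidden size of the text encoder
# that produces the prompt embeddings (768 here).
print(config["cross_attention_dim"])  # 768
```

Checks like these catch hand-edited configs that would otherwise fail deep inside model construction.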
diffusion_pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1237f33b79961e64da25b69352718568aba575ba9a80a0b38befb06d773632a9
+ size 1445253593
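The `.bin` file committed here is a Git LFS pointer, not the weights themselves: three `key value` lines giving the spec version, the SHA-256 of the real file, and its size in bytes. A minimal sketch of parsing that pointer format, using the contents shown above:

```python
# Git LFS pointer contents as committed above.
pointer_text = """\
version https://git-lfs.github.com/spec/v1
oid sha256:1237f33b79961e64da25b69352718568aba575ba9a80a0b38befb06d773632a9
size 1445253593
"""

# Each line is "key value"; split on the first space only.
fields = dict(line.split(" ", 1) for line in pointer_text.splitlines())

algo, digest = fields["oid"].split(":", 1)
size_bytes = int(fields["size"])

print(algo)                         # sha256
print(round(size_bytes / 1e9, 2))   # 1.45 (GB)
```

When cloning, `git lfs pull` replaces this pointer with the actual ~1.45 GB weights file whose SHA-256 matches the `oid`.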