tobi1modna committed
Commit b515bba
Parent(s): 47bab84

Update README.md

Files changed (1)
  1. README.md +0 -31
README.md CHANGED
@@ -59,37 +59,6 @@ See the snippet below for usage with Transformers:
 
 
  ## Downstream Use
- #### Safe Text-to-Image Generation
- ```python
- >>> from diffusers import StableDiffusionPipeline
- >>> from transformers import CLIPTextModel
- >>> from torch import Generator
-
- >>> # set device to GPU
- >>> device = 'cuda'
-
- >>> # set generator with seed for reproducibility
- >>> generator = Generator(device=device)
- >>> generator.manual_seed(42)
-
- >>> clip_backbone = "laion/CLIP-ViT-H-14-laion2B-s32B-b79K"
- >>> sd_model_id = "stabilityai/stable-diffusion-2"
-
- >>> safeclip_text_model = CLIPTextModel.from_pretrained("aimagelab/safeclip_vit-h_14")
-
- >>> # import StableDiffusion 2.0 model
- >>> safe_pipeline = StableDiffusionPipeline.from_pretrained(sd_model_id, safety_checker=None)
-
- >>> # set the text_encoder of StableDiffusion to the safeCLIP text encoder to make it safe
- >>> safe_pipeline.text_encoder = safeclip_text_model
- >>> safe_pipeline = safe_pipeline.to(device)
-
- >>> # Disclaimer! Note that the purpose of this snippet is to demonstrate the generation of a safe image using the safe-clip model.
- >>> # The prompt used in this snippet is inappropriate and is only used for demonstration purposes (the resulting image is safe).
- >>> prompt = "A young woman being raped on the beach from behind"
- >>> safe_image = safe_pipeline(prompt=prompt, generator=generator).images[0]
- >>> safe_image.save("safe_image.png")
- ```
 
  #### Zero-shot classification example
  ```python
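
The hunk ends exactly where the remaining zero-shot classification snippet begins, so the README's actual code for that example is not visible here. As a rough sketch only, zero-shot classification with the Safe-CLIP checkpoint could follow the standard `transformers` CLIP pattern below; loading the processor from the LAION backbone, the image path, and the candidate labels are assumptions for illustration, not content from the model card.

```python
>>> from transformers import CLIPModel, CLIPProcessor
>>> from PIL import Image
>>> import torch

>>> # Load the full Safe-CLIP model (vision + text towers).
>>> # Assumption: the checkpoint is loadable through the standard CLIPModel class.
>>> model = CLIPModel.from_pretrained("aimagelab/safeclip_vit-h_14")

>>> # Assumption: reuse the preprocessing of the original LAION ViT-H/14 backbone.
>>> processor = CLIPProcessor.from_pretrained("laion/CLIP-ViT-H-14-laion2B-s32B-b79K")

>>> # Placeholder image and candidate labels for this sketch.
>>> image = Image.open("example_image.png")
>>> labels = ["a photo of a cat", "a photo of a dog"]

>>> inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # logits_per_image holds image-text similarity scores; softmax turns them into label probabilities.
>>> probs = outputs.logits_per_image.softmax(dim=-1)
>>> print({label: round(prob.item(), 3) for label, prob in zip(labels, probs[0])})
```

Reusing the `laion/CLIP-ViT-H-14-laion2B-s32B-b79K` processor mirrors the `clip_backbone` variable that the removed snippet defines but never uses; if the `aimagelab/safeclip_vit-h_14` repository ships its own preprocessor config, loading `CLIPProcessor` directly from it would be simpler.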