GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models

Paper: GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models (Nichol et al., 2021, arXiv:2112.10741)

Abstract:

Diffusion models have recently been shown to generate high-quality synthetic images, especially when paired with a guidance technique to trade off diversity for fidelity. We explore diffusion models for the problem of text-conditional image synthesis and compare two different guidance strategies: CLIP guidance and classifier-free guidance. We find that the latter is preferred by human evaluators for both photorealism and caption similarity, and often produces photorealistic samples. Samples from a 3.5 billion parameter text-conditional diffusion model using classifier-free guidance are favored by human evaluators over those from DALL-E, even when the latter uses expensive CLIP reranking. Additionally, we find that our models can be fine-tuned to perform image inpainting, enabling powerful text-driven image editing.
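
Of the two strategies, classifier-free guidance is the one the evaluations favor, and it needs no separate classifier: the model is trained to denoise both with and without the caption, and at sampling time the two noise predictions are extrapolated. Below is a minimal sketch of that combination step, assuming a hypothetical noise-prediction network eps_model; it is not the GLIDE codebase or the diffusers API.

import torch

def classifier_free_guidance(eps_model, x_t, t, caption, guidance_scale=3.0):
    # Unconditional prediction: the caption is replaced by the empty
    # conditioning the model was also trained on (represented here by None).
    eps_uncond = eps_model(x_t, t, None)
    # Caption-conditioned prediction.
    eps_cond = eps_model(x_t, t, caption)
    # Extrapolate away from the unconditional prediction; larger scales
    # trade sample diversity for fidelity to the caption.
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)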

Usage

# !pip install diffusers
# Note: this snippet targets the early, experimental version of diffusers
# that shipped a GLIDE pipeline; the API of current releases differs.
import torch
from diffusers import DiffusionPipeline
import PIL.Image

model_id = "fusing/glide-base"

# load model and scheduler
pipeline = DiffusionPipeline.from_pretrained(model_id)

# run inference (text-conditioned denoising + upscaling);
# the pipeline returns a raw image tensor, which the conversion
# below assumes has shape (1, H, W, 3) with values in [-1, 1]
img = pipeline("a crayon drawing of a corgi")

# process image to PIL: drop the batch dimension, rescale [-1, 1]
# to [0, 255], and convert to uint8 on the CPU
img = img.squeeze(0)
img = ((img + 1) * 127.5).round().clamp(0, 255).to(torch.uint8).cpu().numpy()
image_pil = PIL.Image.fromarray(img)

# save image
image_pil.save("test.png")
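
The inpainting fine-tuning mentioned in the abstract works, per the GLIDE paper, by giving the model four extra input channels: the RGB channels of the masked image and a one-channel mask. The sketch below illustrates only that input construction; the function name and tensor layout are assumptions for illustration, not part of diffusers or the released GLIDE code.

import torch

def build_inpainting_input(x_t, image, mask):
    # Sketch of GLIDE-style inpainting conditioning (assumed NCHW layout).
    # x_t:   (N, 3, H, W) noisy sample at the current diffusion step
    # image: (N, 3, H, W) original image in [-1, 1]
    # mask:  (N, 1, H, W), 1 where pixels are known, 0 where to inpaint
    known = image * mask                          # hide the region to be filled
    return torch.cat([x_t, known, mask], dim=1)   # (N, 7, H, W) model input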

Samples

Sample images from the model: sample_1, sample_2, sample_3.