This repo contains bitsandbytes 8-bit model weights for OmniGen-v1. For information about OmniGen, see the original model card.

Related 4-bit weights:
- 4-bit (bf16, nf4): gryan/OmniGen-v1-bnb-4bit
- 4-bit (fp16, nf4): gryan/OmniGen-v1-fp16-bnb-4bit, for older GPUs (pre-Ampere, i.e. before the RTX 30xx series) and Colab users.
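The 4-bit variants should be drop-in replacements. A minimal loading sketch, assuming they go through the same `OmniGen.from_pretrained` call used in the Usage section below:

```python
from OmniGen import OmniGen

# Assumption: the 4-bit repos load exactly like the 8-bit weights shown below.
model = OmniGen.from_pretrained("gryan/OmniGen-v1-fp16-bnb-4bit")  # or "gryan/OmniGen-v1-bnb-4bit"
```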
# Usage
Before getting started, set up your environment by following the original Quick Start Guide.

NOTE: Quantized model loading is not officially supported yet, so you'll need to install OmniGen from this pull request.
```python
from OmniGen import OmniGenPipeline, OmniGen

# pass the quantized model into the pipeline
model = OmniGen.from_pretrained("gryan/OmniGen-v1-bnb-8bit")
pipe = OmniGenPipeline.from_pretrained("Shitao/OmniGen-v1", model=model)
# proceed as normal!

## Text to Image
images = pipe(
    prompt="A curly-haired man in a red shirt is drinking tea.",
    height=1024,
    width=1024,
    guidance_scale=2.5,
    seed=0,
)
images[0].save("example_t2i.png")  # save output PIL Image

## Multi-modal to Image
# In the prompt, use a placeholder to represent each input image, in the format <img><|image_*|></img>.
# You can pass multiple images in input_images; make sure each image has its own placeholder.
# For example, for input_images [img1_path, img2_path], the prompt needs two placeholders:
# <img><|image_1|></img> and <img><|image_2|></img>.
images = pipe(
    prompt="A man in a black shirt is reading a book. The man is the right man in <img><|image_1|></img>.",
    input_images=["./imgs/test_cases/two_man.jpg"],
    height=1024,
    width=1024,
    guidance_scale=2.5,
    img_guidance_scale=1.6,
    seed=0,
)
images[0].save("example_ti2i.png")  # save output PIL image
```
# Image Samples (8-bit)
Base model: Shitao/OmniGen-v1