AuraFlow

AuraFlow v0.1 is the largest fully open-sourced flow-based text-to-image generation model.

This model achieves state-of-the-art results on GenEval. Read our blog post for more technical details.

The model is currently in beta. We are working on improving it, and the community's feedback is important. Join fal's Discord to give us feedback and stay in touch with the model's development.

Credits: A huge thank you to @cloneofsimo and @isidentical for bringing this project to life. It's incredible what two cracked engineers can achieve in such a short period of time. We also extend our gratitude to the incredible researchers whose prior work laid the foundation for our efforts.

Usage

$ pip install transformers accelerate protobuf sentencepiece
$ pip install git+https://github.com/huggingface/diffusers.git
import torch
from diffusers import AuraFlowPipeline

pipeline = AuraFlowPipeline.from_pretrained(
    "cozy-creator/aura-flow-fp16-version",
    torch_dtype=torch.bfloat16,
    variant="fp16"
)

# Offload model components to CPU when idle to reduce GPU memory usage
pipeline.enable_model_cpu_offload()

image = pipeline(
    prompt="close-up portrait of a majestic iguana with vibrant blue-green scales, piercing amber eyes, and orange spiky crest. Intricate textures and details visible on scaly skin. Wrapped in dark hood, giving regal appearance. Dramatic lighting against black background. Hyper-realistic, high-resolution image showcasing the reptile's expressive features and coloration.",
    height=1024,
    width=1024,
    num_inference_steps=25, 
    generator=torch.Generator().manual_seed(234),
    guidance_scale=3.5
).images[0]

image.save("output.png")
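Passing `generator=torch.Generator().manual_seed(234)` fixes the sampling noise so that the same prompt and settings reproduce the same image. As a minimal sketch of why this works (independent of diffusers), two generators seeded identically yield identical noise tensors:

```python
import torch

# Two generators seeded with the same value...
g1 = torch.Generator().manual_seed(234)
g2 = torch.Generator().manual_seed(234)

# ...produce identical random tensors, which is what makes
# diffusion sampling reproducible for a fixed seed.
a = torch.randn(4, generator=g1)
b = torch.randn(4, generator=g2)
print(torch.equal(a, b))  # True
```

To get a different image from the same prompt, change the seed (or omit the generator entirely).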

Note

We are not the original creators of the AuraFlow model. This repository only hosts an FP16 variant of the AuraFlow model released by fal. Please refer to the main repository for the original model and further information: https://huggingface.co/fal/AuraFlow
