HiDream-I1 4Bit Quantized Model

This repository is a fork of HiDream-I1 quantized to 4 bits, allowing the full model to run in less than 16GB of VRAM.

The original repository can be found here.

HiDream-I1 is a new open-source image generative foundation model with 17B parameters that achieves state-of-the-art image generation quality within seconds.


Models

We offer both the full model and distilled variants. Their parameter counts are the same, so they require the same amount of GPU memory to run. However, the distilled models are faster because they need fewer inference steps.

| Name | Min VRAM | Steps | HuggingFace |
|------|----------|-------|-------------|
| HiDream-I1-Full | 16 GB | 50 | 🤗 Original / NF4 |
| HiDream-I1-Dev | 16 GB | 28 | 🤗 Original / NF4 |
| HiDream-I1-Fast | 16 GB | 16 | 🤗 Original / NF4 |
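As a rough sanity check on the VRAM figures above, the saving from 4-bit weights can be estimated directly from the parameter count. This is a back-of-the-envelope sketch that ignores activations, the text encoders, and the per-block scaling constants NF4 stores alongside the weights:

```python
def weight_memory_gib(num_params: float, bits_per_param: float) -> float:
    """Approximate memory needed just for the model weights, in GiB."""
    return num_params * bits_per_param / 8 / 1024**3

PARAMS = 17e9  # HiDream-I1 has 17B parameters

print(f"bf16: {weight_memory_gib(PARAMS, 16):.1f} GiB")  # ~31.7 GiB
print(f"NF4:  {weight_memory_gib(PARAMS, 4):.1f} GiB")   # ~7.9 GiB
```

Even with overhead for activations and the Llama text encoder, quantizing the weights to 4 bits leaves headroom on a 16 GB card, which is why the full model fits.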

Hardware Requirements

  • GPU Architecture: NVIDIA >= Ampere (e.g. A100, H100, A40, RTX 3090, RTX 4090)
  • GPU RAM: >= 16 GB
  • CPU RAM: >= 16 GB

Quick Start

Simply run:

pip install hdi1 --no-build-isolation

It's recommended that you create a new Python environment for this package to avoid dependency conflicts.
To do that, you can run conda create -n hdi1 python=3.12 followed by conda activate hdi1.
Alternatively, create a virtual environment with python3 -m venv venv and activate it with source venv/bin/activate on Linux/macOS or venv\Scripts\activate on Windows.
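The venv route above can be sketched end to end (Linux/macOS shown; the conda route is equivalent):

```shell
# Create and activate an isolated environment, then install the package
python3 -m venv venv
source venv/bin/activate      # on Windows: venv\Scripts\activate
pip install hdi1 --no-build-isolation
```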

Command Line Interface

Then you can run the module to generate images:

python -m hdi1 "A cat holding a sign that says 'hello world'"

# or you can specify the model
python -m hdi1 "A cat holding a sign that says 'hello world'" -m fast

The inference script will try to download the meta-llama/Llama-3.1-8B-Instruct model files automatically. You must accept the Llama model's license with your HuggingFace account and log in using huggingface-cli login before the automatic download will work.
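Concretely, the gated download succeeds once you are authenticated. For non-interactive setups, the CLI also accepts a token directly (the HF_TOKEN variable name here is just a convention, not something the tool requires):

```shell
# Interactive login (prompts for a token from huggingface.co/settings/tokens)
huggingface-cli login

# Non-interactive alternative, e.g. in scripts or CI
huggingface-cli login --token "$HF_TOKEN"
```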

Web Dashboard

We also provide a web dashboard for interactive image generation. You can start it by running:

python -m hdi1.web


License

The code in this repository and the HiDream-I1 models are licensed under the MIT License.
