Z-Image-Distilled DPO “Veris” 02/26/2026
ZImage DPO “Veris” Now Released
Special thanks to @Fok for providing the Flow-DPO technical adaptation. By skillfully integrating the training philosophy of Direct Preference Optimization (DPO) into the distillation weights, the Zimage distilled model achieves a major leap in lighting, color fidelity, and material authenticity — more natural light & shadow, more believable colors, and details that hold up under scrutiny.
More examples at: RedCraft | 红潮 | RedZDX⚡️Distilled [Civitai]
The following example compares ZIT and Flow DPO; it is intended to illustrate the effect of DPO, not as a direct demonstration of ZIB Distilled.
Speed of Truth, Fidelity of Flow
The all-new ZI DPO “Veris” is powered by the latest-generation ZIB acceleration engine. Building on the RedZDX training data, we further distilled a more efficient, more refined Zimage-based model.
Now — solid, highly realistic generations in just 8 steps (better LoRA alignment).
Key highlights:
Realism-first prototyping — near-zero latency for LoRAs, with lighting and color already very close to final training targets
High-entropy stochastic pre-sampling — delivers fast, high-quality realistic initial noise for ZImage pipelines
Hybrid realism workflows — seamless integration with Klein 9B for cascaded refinement or ensemble boosting, pushing visual fidelity and consistency even higher
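The "stochastic pre-sampling" and cascaded-refinement ideas above can be sketched with the flow-matching interpolation x_t = (1 - t) * x0 + t * noise: a fast first-stage latent is partially re-noised back to time t, and a refiner (e.g. Klein 9B) resumes sampling from there. A minimal pure-Python sketch; the function name and 1-D latent are illustrative, not the actual pipeline API.

```python
import random

def renoise_for_refinement(latent, t, rng):
    """Re-noise a first-stage latent to flow-matching time t.

    Flow matching defines intermediate states as
    x_t = (1 - t) * x0 + t * noise, so blending the draft latent
    with fresh Gaussian noise puts it back on the sampling path,
    and a second model can resume sampling from time t.
    """
    return [(1.0 - t) * x + t * rng.gauss(0.0, 1.0) for x in latent]

# Keep 60% of the draft's structure, re-sample the remaining 40%.
rng = random.Random(0)
draft = [rng.gauss(0.0, 1.0) for _ in range(16)]  # stand-in latent
mixed = renoise_for_refinement(draft, 0.4, rng)
```

Lower t preserves more of the fast draft; higher t gives the refiner more freedom.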
Every step toward truth deserves full commitment.
欢迎体验 ZI DPO “Veris” ——您的LoRa训练结果不再只是“相似”,而是真正得到“复现”。
Come and experience ZImage DPO “Veris” — where your LoRA generations are no longer merely “similar”, but truly “reproduced”.
You can also load the DPO LoRA Adapter directly on the ZImage or Turbo models:
Hugging Face: https://huggingface.co/F16/z-image-turbo-flow-dpo
ModelScope (mainland China): https://modelscope.cn/models/FFFFFFoo/z-image-turbo-flow-dpo
Z-Image-Distilled V3 🟥 Distilled LoRA Adapter 02/19/2026
Additionally, I've exported the Redcraft DX3 ZIB Distilled LoRA in Rank-256 format. Its weight can be adjusted to adapt it to various ZIB fine-tuned models, and it is fully compatible with the Z-Image (non-Turbo) base model.
(Distilled LoRA FP16 (1.06 GB)) <- download the LoRA version directly here
Redcraft DX3 ZIB Distilled on CivitAI
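Adjusting the LoRA weight amounts to scaling the low-rank update before it is added to a base matrix: W' = W + weight * (B @ A). A toy pure-Python sketch of that merge (rank 1 and tiny matrices instead of Rank-256; real loaders also fold in a model-specific alpha/rank scale):

```python
def matmul(A, B):
    # Plain list-of-lists matrix product, enough for a toy example.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def merge_lora(W, A, B, weight=1.0):
    """Merge a low-rank adapter into a base matrix:
    W' = W + weight * (B @ A), where `weight` is the user-facing
    strength slider (the release suggests roughly 0.6-1.0)."""
    delta = matmul(B, A)
    return [[w + weight * d for w, d in zip(wr, dr)] for wr, dr in zip(W, delta)]

# Rank-1 toy: B is (2 x 1), A is (1 x 2), W is (2 x 2).
W = [[1.0, 1.0], [1.0, 1.0]]
B = [[1.0], [0.0]]
A = [[2.0, 3.0]]
merged = merge_lora(W, A, B, weight=0.5)
```

At weight 0.0 the base model is returned untouched, which is why the slider degrades gracefully.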
Z-Image-Distilled V3 2026/2/15
A DF11 lossless-compression build of RedZDX V3 is out; learn more: Dynamic-length Float (DFloat11)
Thanks to mingyi456/Z-Image-Distilled-DF11-ComfyUI
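DFloat11's lossless compression rests on a simple observation: in trained weights, the 8 exponent bits of each bfloat16 value are highly redundant, so entropy-coding them shrinks the checkpoint while decompressing bit-exactly. A stdlib-only sketch that measures this redundancy on synthetic Gaussian "weights" (real DF11 uses Huffman coding, not shown here):

```python
import math
import random
import struct
from collections import Counter

def exponent_byte(x):
    # bfloat16 is the top 16 bits of a float32; its 8 exponent bits
    # sit at bits 23-30 of the float32 encoding.
    (u,) = struct.unpack("<I", struct.pack("<f", x))
    return (u >> 23) & 0xFF

def shannon_entropy(byte_seq):
    seq = list(byte_seq)
    counts = Counter(seq)
    n = len(seq)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

rng = random.Random(0)
weights = [rng.gauss(0.0, 0.02) for _ in range(50_000)]
bits = shannon_entropy(exponent_byte(w) for w in weights)
# `bits` comes out well under the 8 bits actually stored -- that gap
# is the headroom a lossless entropy coder like DF11 exploits.
```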
Z-Image-Distilled V3 2026/2/11
Thanks to Bubbliiiing, VideoX-Fun & Alibaba-PAI for providing a more efficient distillation solution:
https://huggingface.co/alibaba-pai/Z-Image-Fun-Lora-Distill
Speed of Light, Power of Flow: The new ZID v3 "Lucis" is powered by the latest ZIB acceleration. Building on the ZID v2 training sets, we've distilled a more efficient Zimage-based RedDX3. Now you get solid results in just 5 steps.
Rapid Prototyping: Test LoRA training hypotheses instantly with 'near-zero' latency.
Stochastic Pre-sampling: Serve as a high-speed, high-entropy source for ZiTurbo pipelines.
Hybrid Workflows: Pair seamlessly with Klein 9B for cascaded refinement or ensemble generation.
- inference cfg: 1.0-1.5 (1.0 recommended)
- inference steps: 5 (range: 5-15)
- sampler / scheduler: Euler / simple
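The "Euler / simple" pair above is a plain Euler integration of the model's velocity field over a uniform time grid from t = 1 (noise) down to t = 0 (image), one model call per step. A toy 1-D sketch with an exact velocity field standing in for the network; it shows why a distilled, nearly straight flow needs so few steps:

```python
def euler_flow_sample(x, velocity, steps):
    """Integrate dx/dt = v(x, t) from t = 1 down to t = 0 with a
    uniform step size -- the 'Euler / simple' sampler/scheduler pair."""
    dt = 1.0 / steps
    t = 1.0
    for _ in range(steps):
        x = x - dt * velocity(x, t)  # step against the noise direction
        t -= dt
    return x

# Toy 1-D "model": the exact velocity toward a target sample x0 = 3.0,
# i.e. a perfectly straight flow (the ideal case for distillation).
x0 = 3.0
result = euler_flow_sample(x=-1.25, velocity=lambda x, t: (x - x0) / t, steps=5)
```

On a straight path, Euler lands on the target regardless of step count; a real model's field is only approximately straight, which is why a small 5-15 step range is quoted rather than a single value.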
Preview images were generated with the Z-Image Distilled V3 + Moody MIX V7 (ZIT fine-tune) hybrid workflow, purely to show the style difference between ZID (RedZDX3) and ZIT (fine-tuned); no ranking intended =) (L = 'ZID v3', R = 'ZIT ft')
RedCraft | 红潮 | RedZDX⚡️Distilled [Civitai]
Welcome to the era of instant creativity. Welcome to 'Lucis'.
Z-Image-Distilled V2 2026/2/05
ZImage color deviation has been reduced to a certain extent, but adjusting colors to suit the art style is still recommended.
- inference cfg: 1.0 (1.0 recommended)
- inference steps: 10 (range: 10-15)
- sampler / scheduler: Euler / simple
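The cfg = 1.0 recommendation has a concrete payoff: classifier-free guidance combines two predictions as out = uncond + cfg * (cond - uncond), and at cfg = 1.0 this collapses to the conditional prediction alone, so a distilled model can skip the unconditional forward pass entirely. A minimal sketch, with plain lists standing in for prediction tensors:

```python
def cfg_combine(uncond, cond, cfg):
    """Classifier-free guidance: out = uncond + cfg * (cond - uncond).
    At cfg == 1.0 this reduces to `cond` exactly, so the unconditional
    forward pass can be skipped -- one model call per step."""
    return [u + cfg * (c - u) for u, c in zip(uncond, cond)]

uncond = [0.1, -0.2, 0.0]   # stand-in for the unconditional prediction
cond = [0.5, 0.3, -0.1]     # stand-in for the conditional prediction
```

Values above 1.0 push the output further toward the prompt, which matches the note elsewhere in these release notes that higher cfg improves prompt adherence.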
Thanks 🙏 to this author for completing the FP8-mixed quantization scheme for Z-Image:
https://huggingface.co/pachiiahri
The FP8 mixed-precision version has been uploaded; please give this author a like 👍
Also available in NVFP4 quantized format, optimized for acceleration on Blackwell-architecture GPUs (e.g., RTX 50XX, PRO 6000, B200, and others). Double the speed, half the resources.
Non-50-series GPUs are also supported (automatic fallback to 16-bit operation).
Above is the FP8 scale & mixed direct-output workflow (workflows for all example images are open on Civitai).
The mixed-precision scheme comes from https://civitai.com/models/2172944/z-image-fp8
The art style leans toward realism; it retains ZIB's creative ability and reduces human-anatomy collapse.
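The automatic 16-bit fallback on non-Blackwell cards amounts to a capability-based dispatch. A hypothetical sketch of such a selector (the thresholds reflect NVIDIA's published compute capabilities: Blackwell is SM 10.x/12.x, FP8 hardware support starts with Ada at SM 8.9); this is an illustration, not the loader's actual code:

```python
def pick_precision(compute_capability):
    """Map a CUDA compute capability (major, minor) to the fastest
    supported weight format; anything older falls back to 16-bit."""
    if compute_capability >= (10, 0):   # Blackwell: B200 (10.0), RTX 50xx (12.0)
        return "nvfp4"
    if compute_capability >= (8, 9):    # Ada (8.9) and Hopper (9.0): FP8 hardware
        return "fp8"
    return "bf16"                       # e.g. Ampere RTX 30xx (8.6)
```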
Thanks to @anyMODE(Civitai) for exporting ZID LoRAs
Z-Image-Distilled V1 2026/1/30
This model is a direct distillation-accelerated version based on the original Z-Image (non-Turbo) source. Its purpose is to test LoRA training effects on the Z-Image (non-turbo) version while significantly improving inference/test speed. The model does not incorporate any weights or style from Z-Image-Turbo at all — it is a pure-blood version based purely on Z-Image, effectively retaining the original Z-Image's adaptability, random diversity in outputs, and overall image style.
Compared to the official Z-Image, inference is much faster (good results achievable in just 10–20 steps); compared to the official Z-Image-Turbo, this model preserves stronger diversity, better LoRA compatibility, and greater fine-tuning potential, though it is slightly slower than Turbo (still far faster than the original Z-Image's 28–50 steps).
The model is mainly suitable for:
- Users who want to train/test LoRAs on the Z-Image non-Turbo base
- Scenarios needing faster generation than the original without sacrificing too much diversity and stylistic freedom
- Artistic, illustration, concept design, and other generation tasks that require a certain level of randomness and style variety
- Compatible with ComfyUI inference (layer prefix == model.diffusion_model)
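The `model.diffusion_model` note above means ComfyUI looks up diffusion weights under that key prefix, so a checkpoint exported with bare layer names can be re-keyed in a few lines. A sketch with a hypothetical helper name and dummy tensor values:

```python
def add_comfy_prefix(state_dict, prefix="model.diffusion_model."):
    """Re-key a bare state dict so every layer sits under the prefix
    ComfyUI expects for diffusion-model weights; keys that already
    carry the prefix are left untouched."""
    return {k if k.startswith(prefix) else prefix + k: v
            for k, v in state_dict.items()}

# Integers stand in for weight tensors.
sd = {"blocks.0.attn.qkv.weight": 0, "model.diffusion_model.out.weight": 1}
remapped = add_comfy_prefix(sd)
```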
Usage Instructions:
Basic workflow: please refer to the Z-Image-Turbo official workflow (fully compatible with the official Z-Image-Turbo workflow)
Recommended inference parameters:
- inference cfg: 1.0–2.5 (recommended range: 1.0~1.8; higher values enhance prompt adherence)
- inference steps: 10–20 (10 steps for quick previews, 15–20 steps for more stable quality)
- sampler / scheduler: Euler / simple, or res_m, or any other compatible sampler
LoRA compatibility is good; recommended weight: 0.6~1.0, adjust as needed.
Also on: Civitai | Modelscope AIGC
RedCraft | 红潮造相 ⚡️ REDZimage | Updated-JAN30 | Latest - RedZiB ⚡️ DX1 Distilled Acceleration
Current Limitations & Future Directions
Current main limitations:
- The distillation process causes some damage to text (especially very small-sized text), with rendering clarity and completeness inferior to the original Z-Image
- Overall color tone remains consistent with the original ZI, but certain samplers can produce color cast issues (particularly noticeable excessive blue tint)
Next optimization directions:
- Further stabilize generation quality under CFG=1 within 10 steps or fewer, striving to achieve more usable results that are closer to the original style even at very low step counts
- Optimize negative prompt adherence when CFG > 1, improving control over negative descriptions and reducing interference from unwanted elements
- Continue improving clarity and readability in small text areas while maintaining the speed advantages brought by distillation
We welcome feedback and generated examples from all users — let's collaborate to advance this pure-blood acceleration direction!
Model License:
Please follow the Apache-2.0 open-source license of the Z-Image model.


