-> Paired the EditPlusPipeline with the Diffusers-compatible transformer weights of Rapid AIO from Qwen-Image-Edit (experimental). -> This fusion delivers more accurate instruction following, higher image quality, and consistent visual coherence at 4-step fast inference. -> It better maintains text styles with high fidelity and offers high-quality old-photo restoration, enhancement, and best-in-class virtual try-on.
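A minimal sketch of what such a pairing can look like in diffusers: load the swapped-in transformer weights separately, then pass them to the edit pipeline. The base repo ID, the `subfolder` layout, and the dtype are assumptions for illustration; the Rapid-AIO repo is left as a parameter rather than guessed.

```python
# Sketch: pairing the Qwen-Image-Edit Plus pipeline with swapped-in
# Diffusers-format transformer weights, as described in the post.
NUM_INFERENCE_STEPS = 4  # the 4-step fast-inference setting mentioned above

def load_pipeline(rapid_aio_repo: str):
    """Load the edit pipeline with a replacement transformer.

    `rapid_aio_repo` is the (unspecified here) repo holding the
    Diffusers-compatible Rapid AIO transformer weights.
    """
    # Heavy imports are kept inside the function so the sketch can be
    # inspected (and the constant above used) without torch installed.
    import torch
    from diffusers import QwenImageEditPlusPipeline, QwenImageTransformer2DModel

    # Load the replacement transformer weights on their own ...
    transformer = QwenImageTransformer2DModel.from_pretrained(
        rapid_aio_repo, subfolder="transformer", torch_dtype=torch.bfloat16
    )
    # ... then drop them into the pipeline in place of the stock transformer.
    pipe = QwenImageEditPlusPipeline.from_pretrained(
        "Qwen/Qwen-Image-Edit-2509",  # assumed base checkpoint
        transformer=transformer,
        torch_dtype=torch.bfloat16,
    )
    return pipe
```

A call would then look roughly like `pipe(image=img, prompt="...", num_inference_steps=NUM_INFERENCE_STEPS)`, with the low step count doing the "fast" part.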
If you like it, give the demo a little star and send a shoutout to @MaxLSB, @jddqd, and @GAD-cell for absolutely obliterating the Pareto frontier of French language understanding.
Dropping the Qwen3 VL Series of Unredacted MAX-VL models. These models have undergone multi-stage training to minimize refusal rates through continuous abliteration-based optimization. You can find the models in BF16, FP8-Dynamic, and GGUF formats at the links below.🔥🚀
Introducing FLUX.2-Klein-LoRA-Studio, a demo for image editing using specialized LoRA adapters built for the FLUX.2-Klein-Distilled model. It features an edit-style gallery for multi-style image editing, including de-light, face swap, mannequin, and more. Try the demo below.
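A demo like this typically maps each gallery style to its own LoRA weights on a shared base pipeline. The sketch below shows that wiring in diffusers; the weight filenames and repo parameters are placeholders, not the Space's actual assets.

```python
# Hypothetical wiring for a per-style LoRA edit demo.
STYLE_ADAPTERS = {
    # style name -> assumed LoRA weight file inside an adapter repo
    "de-light": "delight_lora.safetensors",
    "face-swap": "faceswap_lora.safetensors",
    "mannequin": "mannequin_lora.safetensors",
}

def load_style_pipeline(style: str, base_repo: str, adapter_repo: str):
    """Load the distilled base model and attach the chosen style's LoRA."""
    # Heavy imports are kept local so the mapping above stays importable.
    from diffusers import DiffusionPipeline

    pipe = DiffusionPipeline.from_pretrained(base_repo)
    pipe.load_lora_weights(adapter_repo, weight_name=STYLE_ADAPTERS[style])
    return pipe
```

Switching styles in the gallery then amounts to unloading one LoRA and loading another on the same base pipeline, which keeps memory use flat.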
Introducing GLM OCR, a multimodal OCR model for complex document understanding, built on the GLM-V encoder–decoder architecture. It delivers high accuracy and strong generalization with a blazing-fast inference pipeline. The demo is live. Try it now. 🤗🚀
Introducing the Qwen-Image-Edit-3D-Lighting-Control app, featuring 8× horizontal and 3× elevational lighting positions for precise 3D lighting control. It enables studio-level lighting using fast Qwen Image Edit inference, paired with Multi-Angle-Lighting adapters. 🔦
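The 8 × 3 grid of lighting positions above can be enumerated as follows. The 45° azimuth step and the elevation names are illustrative assumptions, not the adapters' actual conditioning format:

```python
# Enumerate the 8 horizontal x 3 elevational lighting positions
# described in the post. Angle step and elevation labels are assumed.
from itertools import product

AZIMUTHS = [i * 45 for i in range(8)]       # 8 horizontal positions, degrees
ELEVATIONS = ["low", "eye-level", "high"]   # 3 elevational positions

def lighting_positions():
    """Return every (azimuth_degrees, elevation) combination."""
    return list(product(AZIMUTHS, ELEVATIONS))

positions = lighting_positions()
print(len(positions))  # 8 * 3 = 24 distinct lighting positions
```

Each of the 24 combinations would correspond to one selectable lighting preset in the app's UI.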
A Daggr UI version of the Qwen3-TTS demo 🔥 with custom-voice, voice-design, qwen3-asr, and voice-cloning nodes. No remote Spaces are used for API inference; all functions run as in-app functions. Powered by t4-m and built with [email protected] and gradio@6.
Qwen-Image-Edit-Object-Manipulator Space is now featured in Hugging Face Space of the Week. It enables object manipulation such as extracting objects, adding designs, and removing objects or designs from the red-highlighted area using specialized adapters.
Introducing QIE-2511-Zoom-Master for highlight-guided area zoom-in, enabling lossless zooming within a drawn square area, and QIE-2511-Object-Remover-v2 for precise object or highlight-guided area cleanup. These experimental adapters are trained on top of QIE-2511. Find the adapters below.
Now Live: The Reubencf/Nano_Banana_Editor now includes 10 free requests/day! 🍌 I'm personally sponsoring these credits to help make open AI accessible to all. (Note: Limits are subject to change based on funding).
TranslateGemma: Open Translation Models (Jan 2026)
Google introduces TranslateGemma, a new suite of open translation models based on Gemma 3, available in 4B, 12B, and 27B parameter sizes.
Key Highlights:
• Supports 55 languages with high-quality translation across high-, mid-, and low-resource languages
• Exceptional efficiency: 12B model outperforms 27B baseline on WMT24++ benchmark
• Built using a two-stage fine-tuning process distilling knowledge from Gemini models
• Retains strong multimodal capabilities (can translate text within images)
• Trained on nearly 500 additional language pairs for research adaptation
• Designed for diverse deployment environments from mobile to cloud
The models achieve state-of-the-art performance while maintaining exceptional efficiency, making high-quality translation accessible across different devices and use cases.
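Since the models are built on Gemma 3, a plausible way to use them is through the standard transformers chat pipeline. The repo ID and the prompt wording below are assumptions; check the actual model card for the officially supported prompt format.

```python
# Hypothetical usage sketch for a TranslateGemma checkpoint.
def build_translation_messages(src_lang: str, tgt_lang: str, text: str):
    """Build a chat-style request asking for a translation."""
    return [{
        "role": "user",
        "content": f"Translate the following {src_lang} text to {tgt_lang}:\n{text}",
    }]

def translate(text, src_lang="English", tgt_lang="French",
              model_id="google/translategemma-4b-it"):  # assumed repo ID
    # Heavy imports are kept local so the prompt helper above stays
    # dependency-free.
    from transformers import pipeline

    generator = pipeline("text-generation", model=model_id)
    out = generator(
        build_translation_messages(src_lang, tgt_lang, text),
        max_new_tokens=256,
    )
    # Chat pipelines return the full message list; the last turn is the reply.
    return out[0]["generated_text"][-1]["content"]
```

The 4B variant would be the natural choice for the mobile end of the deployment spectrum mentioned above, with 12B and 27B for server-side use.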