arxiv:2503.21694

Progressive Rendering Distillation: Adapting Stable Diffusion for Instant Text-to-Mesh Generation without 3D Data

Published on Mar 27
· Submitted by ZhiyuanthePony on Apr 1

Abstract

It is highly desirable to obtain a model that can generate high-quality 3D meshes from text prompts in just seconds. While recent attempts have adapted pre-trained text-to-image diffusion models, such as Stable Diffusion (SD), into generators of 3D representations (e.g., Triplane), they often suffer from poor quality due to the lack of sufficient high-quality 3D training data. To overcome this data shortage, we propose a novel training scheme, termed Progressive Rendering Distillation (PRD), which eliminates the need for 3D ground truths by distilling multi-view diffusion models and adapting SD into a native 3D generator. In each training iteration, PRD uses the U-Net to progressively denoise the latent from random noise over a few steps, and at each step it decodes the denoised latent into a 3D output. Multi-view diffusion models, including MVDream and RichDreamer, are used jointly with SD to distill text-consistent textures and geometries into the 3D outputs through score distillation. Since PRD supports training without 3D ground truths, we can easily scale up the training data and improve generation quality for challenging text prompts with creative concepts. Meanwhile, PRD also accelerates inference, reducing generation to just a few denoising steps. With PRD, we train a Triplane generator, namely TriplaneTurbo, which adds only 2.5% trainable parameters to adapt SD for Triplane generation. TriplaneTurbo outperforms previous text-to-3D generators in both efficiency and quality. Specifically, it can produce high-quality 3D meshes in 1.2 seconds and generalizes well to challenging text inputs. The code is available at https://github.com/theEricMa/TriplaneTurbo.
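
To make the training scheme above concrete, here is a minimal PyTorch-style sketch of one PRD iteration. All interfaces (`adapted_sd_unet`, `decode_triplane`, `render_views`, `mvdream_score`, `richdreamer_score`, `scheduler`) are hypothetical placeholders standing in for the components the abstract names; this is an illustration under those assumptions, not the authors' implementation.

```python
# Illustrative sketch of one Progressive Rendering Distillation (PRD) step.
# Every module passed in is a hypothetical placeholder, not the paper's API.
import torch

def prd_training_step(adapted_sd_unet, decode_triplane, render_views,
                      mvdream_score, richdreamer_score, scheduler,
                      text_emb, num_steps=4, latent_shape=(1, 4, 64, 64),
                      device="cuda"):
    """Progressively denoise from pure noise for a few steps; at every step
    decode the latent into a triplane, render it, and distill gradients from
    multi-view teachers (no 3D ground truth is used)."""
    latent = torch.randn(latent_shape, device=device)   # start from random noise
    total_loss = torch.zeros((), device=device)

    for t in scheduler.timesteps[:num_steps]:            # only a few denoising steps
        # The adapted SD U-Net predicts the denoised latent given the text prompt.
        latent = adapted_sd_unet(latent, t, text_emb)

        # Decode the intermediate latent into a 3D representation (triplane)
        # and render multi-view images/normals from it.
        triplane = decode_triplane(latent)
        rgb_views, normal_views = render_views(triplane)

        # Score distillation from multi-view teachers: MVDream supervises
        # text-consistent appearance, RichDreamer supervises geometry.
        # (The SD teacher term mentioned in the abstract is omitted for brevity.)
        total_loss = total_loss + mvdream_score(rgb_views, text_emb)
        total_loss = total_loss + richdreamer_score(normal_views, text_emb)

    return total_loss
```

In practice, the accumulated score-distillation loss would be backpropagated only into the small set of trainable adapter parameters, while the SD backbone stays frozen.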

Community

Paper author · Paper submitter

🎉 Thrilled to share our #CVPR2025 accepted work:
"Progressive Rendering Distillation: Adapting Stable Diffusion for Instant Text-to-Mesh Generation without 3D Data"
📄 arXiv | 🌐 Project Page | 💻 Code | 🎮 Demo

🔥 Key Innovations:
1️⃣ First to adapt SD for direct textured mesh generation (1-2 s inference)
2️⃣ Novel teacher-student framework leveraging multi-view diffusion models (MVDream & RichDreamer)
3️⃣ Parameter-efficient tuning: only +2.6% params over base SD (see the adapter sketch below)
4️⃣ 3D-data-free training liberates the model from dataset constraints

💡 Why it matters:
→ A novel 3D-data-free paradigm
→ Outperforms data-driven methods on creative concept generation
→ Unlocks web-scale text corpora for 3D content creation
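
Regarding point 3 above: the paper reports adding only a small fraction of trainable parameters on top of frozen SD. A LoRA-style low-rank adapter is one common way to achieve this; the sketch below uses it purely as an illustrative assumption, not as the authors' exact mechanism, and the layer sizes and rank are arbitrary.

```python
# Illustrative sketch of parameter-efficient adaptation via a low-rank adapter.
# This is an assumed LoRA-style scheme for illustration only; the paper's
# actual adapter design may differ, and the sizes below are arbitrary.
import torch
import torch.nn as nn

class LowRankAdapter(nn.Module):
    """Wraps a frozen linear layer and adds a trainable low-rank update."""
    def __init__(self, frozen_linear: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = frozen_linear
        for p in self.base.parameters():
            p.requires_grad_(False)                   # keep the SD weights frozen
        d_in, d_out = frozen_linear.in_features, frozen_linear.out_features
        self.down = nn.Linear(d_in, rank, bias=False)  # trainable
        self.up = nn.Linear(rank, d_out, bias=False)   # trainable
        nn.init.zeros_(self.up.weight)                 # start as an identity update

    def forward(self, x):
        return self.base(x) + self.up(self.down(x))

# Example: only the adapter parameters would be optimized.
layer = LowRankAdapter(nn.Linear(320, 320))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable fraction for this toy layer: {trainable / total:.1%}")
```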


