To Meta AI Research: I would like to fold ylacombe/expresso into the training mix of an Apache-licensed TTS model series. Could you relax the Expresso dataset license to CC-BY or something more permissive?
Barring that, could I have an individual exception to train on the materials and distribute the trained Apache-licensed models, without directly redistributing the original files? Thanks!
🔥 Key Innovations: 1️⃣ First to adapt SD for direct textured mesh generation (1-2s inference) 2️⃣ Novel teacher-student framework leveraging multi-view diffusion models ([MVDream](https://arxiv.org/abs/2308.16512) & [RichDreamer](https://arxiv.org/abs/2311.16918)) 3️⃣ Parameter-efficient tuning - only +2.6% params over base SD (see the sketch below) 4️⃣ 3D data-free training liberates the model from dataset constraints
💡 Why it matters: → A novel 3D-data-free paradigm → Outperforms data-driven methods on creative concept generation → Unlocks web-scale text corpora for 3D content creation
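For intuition on what parameter-efficient tuning over a frozen base model can look like, here is a minimal LoRA-style sketch in PyTorch. This is an illustration under assumptions, not the paper's code: the adapter design, rank, and toy backbone are all invented stand-ins, but the mechanism (freeze the base weights, train only small added modules) matches the idea of adding only a few percent of extra parameters.

```python
# Hedged illustration: LoRA-style parameter-efficient tuning of a frozen base
# model. The backbone below is a toy stand-in, not Stable Diffusion.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen nn.Linear with a small trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # base weights stay frozen
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)  # adapters start as a no-op

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.up(self.down(x))

# Toy "backbone": two linear layers standing in for a diffusion model.
backbone = [nn.Linear(512, 512), nn.Linear(512, 512)]
model = nn.Sequential(*(LoRALinear(layer, rank=8) for layer in backbone))

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable params: {trainable / total:.1%} of total")
```

Run as-is, the toy example reports a trainable fraction of about 3%, in the same ballpark as the +2.6% cited above.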
Open-source models are immutable, and that's a big pain.
When you open source a piece of software, users leave feedback via issues or PRs. You can merge that feedback in near real time, which creates a positive cycle. Then you have a community.
LLMs don't have these nice micro-steps. There are no hotfixes. Even a minor version bump is an endeavor. I'm quite confident my model is being used by teams somewhere. But until the next launch, it's awfully quiet.
I don't know the solution. Just a regular lament before the weekend. 🤗