StyleStudio: Text-Driven Style Transfer with Selective Control of Style Elements Paper • 2412.08503 • Published Dec 11, 2024
GaussianEditor: Swift and Controllable 3D Editing with Gaussian Splatting Paper • 2311.14521 • Published Nov 24, 2023
LL3DA: Visual Interactive Instruction Tuning for Omni-3D Understanding, Reasoning, and Planning Paper • 2311.18651 • Published Nov 30, 2023
IT3D: Improved Text-to-3D Generation with Explicit View Synthesis Paper • 2308.11473 • Published Aug 22, 2023
MovieLLM: Enhancing Long Video Understanding with AI-Generated Movies Paper • 2403.01422 • Published Mar 3, 2024
Metric3D v2: A Versatile Monocular Geometric Foundation Model for Zero-shot Metric Depth and Surface Normal Estimation Paper • 2404.15506 • Published Mar 22, 2024
MeshXL: Neural Coordinate Field for Generative 3D Foundation Models Paper • 2405.20853 • Published May 31, 2024
EMMA: Your Text-to-Image Diffusion Model Can Secretly Accept Multi-Modal Prompts Paper • 2406.09162 • Published Jun 13, 2024
MeshAnything: Artist-Created Mesh Generation with Autoregressive Transformers Paper • 2406.10163 • Published Jun 14, 2024
MeshAnything V2: Artist-Created Mesh Generation With Adjacent Mesh Tokenization Paper • 2408.02555 • Published Aug 5, 2024
Robust Geometry-Preserving Depth Estimation Using Differentiable Rendering Paper • 2309.09724 • Published Sep 18, 2023