- Alleviating Distortion in Image Generation via Multi-Resolution Diffusion Models
  Paper • 2406.09416 • Published • 27
- Wavelets Are All You Need for Autoregressive Image Generation
  Paper • 2406.19997 • Published • 29
- ViPer: Visual Personalization of Generative Models via Individual Preference Learning
  Paper • 2407.17365 • Published • 12
- MegaFusion: Extend Diffusion Models towards Higher-resolution Image Generation without Further Tuning
  Paper • 2408.11001 • Published • 11

Collections including paper arxiv:2412.09626
- DepthMaster: Taming Diffusion Models for Monocular Depth Estimation
  Paper • 2501.02576 • Published • 6
- FreeScale: Unleashing the Resolution of Diffusion Models via Tuning-Free Scale Fusion
  Paper • 2412.09626 • Published • 20
- Marigold-DC: Zero-Shot Monocular Depth Completion with Guided Diffusion
  Paper • 2412.13389 • Published • 6
- CLEAR: Conv-Like Linearization Revs Pre-Trained Diffusion Transformers Up
  Paper • 2412.16112 • Published • 21

- NexaAIDev/Qwen2-Audio-7B-GGUF
  Audio-Text-to-Text • Updated • 7.67k • 130
- strangerzonehf/Flux-Isometric-3D-LoRA
  Text-to-Image • Updated • 1.18k • 33
- strangerzonehf/Flux-Super-Realism-LoRA
  Text-to-Image • Updated • 30.1k • 116
- strangerzonehf/Flux-Isometric-3D-Cinematography
  Text-to-Image • Updated • 481 • 18

- Animate-X: Universal Character Image Animation with Enhanced Motion Representation
  Paper • 2410.10306 • Published • 54
- ReCapture: Generative Video Camera Controls for User-Provided Videos using Masked Video Fine-Tuning
  Paper • 2411.05003 • Published • 70
- TIP-I2V: A Million-Scale Real Text and Image Prompt Dataset for Image-to-Video Generation
  Paper • 2411.04709 • Published • 25
- IterComp: Iterative Composition-Aware Feedback Learning from Model Gallery for Text-to-Image Generation
  Paper • 2410.07171 • Published • 42

- LinFusion: 1 GPU, 1 Minute, 16K Image
  Paper • 2409.02097 • Published • 33
- Phidias: A Generative Model for Creating 3D Content from Text, Image, and 3D Conditions with Reference-Augmented Diffusion
  Paper • 2409.11406 • Published • 26
- Diffusion Models Are Real-Time Game Engines
  Paper • 2408.14837 • Published • 122
- Segment Anything with Multiple Modalities
  Paper • 2408.09085 • Published • 21

- MambaVision: A Hybrid Mamba-Transformer Vision Backbone
  Paper • 2407.08083 • Published • 28
- Transfusion: Predict the Next Token and Diffuse Images with One Multi-Modal Model
  Paper • 2408.11039 • Published • 58
- The Mamba in the Llama: Distilling and Accelerating Hybrid Models
  Paper • 2408.15237 • Published • 39
- Fine-Tuning Image-Conditional Diffusion Models is Easier than You Think
  Paper • 2409.11355 • Published • 29

- MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training
  Paper • 2311.17049 • Published • 1
- DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model
  Paper • 2405.04434 • Published • 14
- A Study of Autoregressive Decoders for Multi-Tasking in Computer Vision
  Paper • 2303.17376 • Published
- Sigmoid Loss for Language Image Pre-Training
  Paper • 2303.15343 • Published • 6

- Compose and Conquer: Diffusion-Based 3D Depth Aware Composable Image Synthesis
  Paper • 2401.09048 • Published • 9
- Improving fine-grained understanding in image-text pre-training
  Paper • 2401.09865 • Published • 16
- Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data
  Paper • 2401.10891 • Published • 60
- Scaling Up to Excellence: Practicing Model Scaling for Photo-Realistic Image Restoration In the Wild
  Paper • 2401.13627 • Published • 73