---
license: apache-2.0
pipeline_tag: text-to-image
library_name: diffusion-single-file
tags:
  - Image-to-Image
---

# MV-Adapter Model Card

Project Page | Paper (ArXiv) | Paper (HF) | Code | Gradio demo

Create High-fidelity Multi-view Images with Various Base T2I Models and Various Conditions.

## Introduction

MV-Adapter is a creative productivity tool that seamlessly turns text-to-image models into multi-view generators.

Highlights:

- 768x768 multi-view image generation
- Works well with personalized models (e.g. DreamShaper, Animagine), LCM, and ControlNet
- Supports text-to-multi-view and image-to-multi-view generation (with 3D reconstruction thereafter), as well as geometry-guided generation for 3D texture generation
- Arbitrary view generation

## Examples

## Model Details

| Model | Base Model | HF Weights | Demo Link |
| --- | --- | --- | --- |
| Text-to-Multiview | SDXL | mvadapter_t2mv_sdxl.safetensors | General / Anime |
| Image-to-Multiview | SDXL | mvadapter_i2mv_sdxl.safetensors | Demo |
| Text-Geometry-to-Multiview | SDXL | | |
| Image-Geometry-to-Multiview | SDXL | | |
| Image-to-Arbitrary-Views | SDXL | | |

## Usage

Refer to our GitHub repository for installation and inference pipelines.
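
The full pipelines and adapter-loading code live in the GitHub repository. As a minimal sketch, the adapter weights listed in the table above can be fetched directly from the Hub with `huggingface_hub`; the repo id `huanngzh/mv-adapter` is assumed here from this card's location, and no MV-Adapter pipeline code is shown:

```python
# Minimal sketch: download and inspect the text-to-multiview adapter weights.
# Assumes repo id "huanngzh/mv-adapter"; the actual adapter/pipeline classes
# are provided in the GitHub repository and are not reproduced here.
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

weight_path = hf_hub_download(
    repo_id="huanngzh/mv-adapter",
    filename="mvadapter_t2mv_sdxl.safetensors",
)

# Load the adapter state dict to verify the download.
state_dict = load_file(weight_path)
print(f"Loaded {len(state_dict)} adapter tensors from {weight_path}")
```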

## Citation

If you find this work helpful, please consider citing our paper:

```bibtex
@article{huang2024mvadapter,
  title={MV-Adapter: Multi-view Consistent Image Generation Made Easy},
  author={Huang, Zehuan and Guo, Yuanchen and Wang, Haoran and Yi, Ran and Ma, Lizhuang and Cao, Yan-Pei and Sheng, Lu},
  journal={arXiv preprint arXiv:2412.03632},
  year={2024}
}
```