---
license: openrail
pipeline_tag: image-to-3d
---

# Overview

This is a duplicate of [ashawkey/imagedream-ipmv-diffusers](https://huggingface.co/ashawkey/imagedream-ipmv-diffusers).

It is hosted here to ensure persistence and reproducibility for the ML for 3D course.

### Usage

This model can be used from other projects as follows:

```python
import torch
from diffusers import DiffusionPipeline

# Text to Multi-View Diffusion
text_pipeline = DiffusionPipeline.from_pretrained(
    "ashawkey/mvdream-sd2.1-diffusers",
    custom_pipeline="dylanebert/multi_view_diffusion",
    torch_dtype=torch.float16,
    trust_remote_code=True,
).to("cuda")

# Image to Multi-View Diffusion
image_pipeline = DiffusionPipeline.from_pretrained(
    "ashawkey/imagedream-ipmv-diffusers",
    custom_pipeline="dylanebert/multi_view_diffusion",
    torch_dtype=torch.float16,
    trust_remote_code=True,
).to("cuda")
```
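
The loaded pipelines can then be invoked to produce multi-view outputs. The sketch below is a rough illustration only: the exact call signature, keyword arguments, and return type are defined by the `dylanebert/multi_view_diffusion` custom pipeline code, so check its source before relying on it.

```python
import numpy as np
from PIL import Image

# Continues from the loading snippet above. The call pattern below
# (positional prompt, optional image) is an assumption -- verify it against
# the custom pipeline's __call__ implementation.

# Text to multi-view: generate views from a text prompt.
views = text_pipeline("a cute owl")

# Image to multi-view: generate views from an input image
# (normalized here to [0, 1] as a float32 array -- also an assumption).
input_image = np.asarray(
    Image.open("data/anya_rgba.png").convert("RGB"), dtype=np.float32
) / 255.0
views = image_pipeline("", input_image)
```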

Original model card below.

---

# MVDream-diffusers

A **unified** diffusers implementation of [MVDream](https://github.com/bytedance/MVDream) and [ImageDream](https://github.com/bytedance/ImageDream).

We provide converted `fp16` weights on Hugging Face:

-   [MVDream](https://huggingface.co/ashawkey/mvdream-sd2.1-diffusers)
-   [ImageDream](https://huggingface.co/ashawkey/imagedream-ipmv-diffusers)

### Install

```bash
# dependency
pip install -r requirements.txt

# xformers is required! please refer to https://github.com/facebookresearch/xformers
pip install ninja
pip install -v -U git+https://github.com/facebookresearch/xformers.git@main#egg=xformers
```
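
If the xformers build is in doubt, a quick sanity check (assuming a CUDA GPU is available) is to import it and run the memory-efficient attention op once:

```python
# Sanity check for the xformers install; assumes a CUDA GPU is available.
import torch
import xformers
import xformers.ops

print(xformers.__version__)

# (batch, seq_len, heads, head_dim) in fp16 on the GPU.
q = torch.randn(1, 64, 8, 40, dtype=torch.float16, device="cuda")
out = xformers.ops.memory_efficient_attention(q, q, q)
print(out.shape)  # should match q.shape
```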

### Usage

```bash
python run_mvdream.py "a cute owl"
python run_imagedream.py data/anya_rgba.png
```

### Convert weights

MVDream:

```bash
# download original ckpt (we only support the SD 2.1 version)
mkdir models
cd models
wget https://huggingface.co/MVDream/MVDream/resolve/main/sd-v2.1-base-4view.pt
wget https://raw.githubusercontent.com/bytedance/MVDream/main/mvdream/configs/sd-v2-base.yaml
cd ..

# convert
python convert_mvdream_to_diffusers.py --checkpoint_path models/sd-v2.1-base-4view.pt --dump_path ./weights_mvdream --original_config_file models/sd-v2-base.yaml --half --to_safetensors --test
```

ImageDream:

```bash
# download original ckpt (we only support the pixel-controller version)
cd models
wget https://huggingface.co/Peng-Wang/ImageDream/resolve/main/sd-v2.1-base-4view-ipmv.pt
wget https://raw.githubusercontent.com/bytedance/ImageDream/main/extern/ImageDream/imagedream/configs/sd_v2_base_ipmv.yaml
cd ..

# convert
python convert_mvdream_to_diffusers.py --checkpoint_path models/sd-v2.1-base-4view-ipmv.pt --dump_path ./weights_imagedream --original_config_file models/sd_v2_base_ipmv.yaml --half --to_safetensors --test
```
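
After conversion, the local dumps can be loaded much like the hosted weights. The snippet below is a minimal sketch, assuming the dumped components are compatible with the `dylanebert/multi_view_diffusion` custom pipeline; substitute `./weights_mvdream` for the MVDream dump.

```python
import torch
from diffusers import DiffusionPipeline

# Load the locally converted ImageDream dump (path matches --dump_path above).
# Compatibility with the remote custom pipeline is an assumption here.
pipeline = DiffusionPipeline.from_pretrained(
    "./weights_imagedream",
    custom_pipeline="dylanebert/multi_view_diffusion",
    torch_dtype=torch.float16,
    trust_remote_code=True,
).to("cuda")
```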

### Acknowledgement

-   The original papers:
    ```bibtex
    @article{shi2023MVDream,
        author = {Shi, Yichun and Wang, Peng and Ye, Jianglong and Mai, Long and Li, Kejie and Yang, Xiao},
        title = {MVDream: Multi-view Diffusion for 3D Generation},
        journal = {arXiv:2308.16512},
        year = {2023},
    }
    @article{wang2023imagedream,
        title={ImageDream: Image-Prompt Multi-view Diffusion for 3D Generation},
        author={Wang, Peng and Shi, Yichun},
        journal={arXiv preprint arXiv:2312.02201},
        year={2023}
    }
    ```
-   This codebase is modified from [mvdream-hf](https://github.com/KokeCacao/mvdream-hf).