Huiwenshi committed
Commit c45ea16
1 Parent(s): 4edbfb8

Upload README.md with huggingface_hub
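For readers unfamiliar with how such commits are produced: a sketch of the equivalent command, assuming the `huggingface_hub` CLI is installed and a write token has been configured.

```shell
# Sketch only: push a local README.md to the model repo with the huggingface_hub CLI.
# Assumes `pip install huggingface_hub` and a prior `huggingface-cli login` with write access.
huggingface-cli upload tencent/Hunyuan3D-1 README.md README.md \
    --commit-message "Upload README.md with huggingface_hub"
```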

Files changed (1):
  1. README.md +59 -33
README.md CHANGED
@@ -1,12 +1,3 @@
- ---
- library_name: hunyuan3d-1.0
- license: other
- license_name: tencent-hunyuan-community
- license_link: https://huggingface.co/tencent/Hunyuan3D-1/blob/main/LICENSE.txt
- language:
- - en
- - zh
- ---
  <!-- ## **Hunyuan3D-1.0** -->

  <p align="center">
@@ -15,9 +6,13 @@ language:

  # Tencent Hunyuan3D-1.0: A Unified Framework for Text-to-3D and Image-to-3D Generation

- [\[Code\]](https://github.com/tencent/Hunyuan3D-1)
- [\[Huggingface\]](https://huggingface.co/tencent/Hunyuan3D-1)
- [\[Report\]](https://arxiv.org/pdf/2411.02293)
+ <div align="center">
+   <a href="https://github.com/tencent/Hunyuan3D-1"><img src="https://img.shields.io/static/v1?label=Code&message=Github&color=blue&logo=github-pages"></a> &ensp;
+   <a href="https://3d.hunyuan.tencent.com"><img src="https://img.shields.io/static/v1?label=Homepage&message=Tencent Hunyuan3D&color=blue&logo=github-pages"></a> &ensp;
+   <a href="https://arxiv.org/pdf/2411.02293"><img src="https://img.shields.io/static/v1?label=Tech Report&message=Arxiv&color=red&logo=arxiv"></a> &ensp;
+   <a href="https://huggingface.co/Tencent/Hunyuan3D-1"><img src="https://img.shields.io/static/v1?label=Checkpoints&message=HuggingFace&color=yellow"></a> &ensp;
+   <a href="https://huggingface.co/spaces/Tencent/Hunyuan3D-1"><img src="https://img.shields.io/static/v1?label=Demo&message=HuggingFace&color=yellow"></a> &ensp;
+ </div>


  ## 🔥🔥🔥 News!!
@@ -81,20 +76,42 @@ cd Hunyuan3D-1

  We provide an env_install.sh script file for setting up the environment.

- python3.9 and CUDA11.7+ (recommended)
  ```
- conda create -n hunyuan3d-1-py39 python=3.9
- conda activate hunyuan3d-1-py39
- pip install torch==2.2.0 torchvision==0.17.0 --index-url https://download.pytorch.org/whl/cu118
+ # step 1: create a conda env (python 3.9 / 3.10 / 3.11 / 3.12 all work)
+ conda create -n hunyuan3d-1 python=3.9
+ conda activate hunyuan3d-1
+
+ # step 2: install torch-related packages
+ which pip  # check that pip corresponds to this python
+
+ # modify the cuda version according to your machine (recommended)
+ pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121
+
+ # step 3: install the other packages
  bash env_install.sh
  ```
- or python3.11 and CUDA12.1+ see [link](https://github.com/Tencent/Hunyuan3D-1/issues/9#issuecomment-2458695670)
+ <details>
+ <summary>💡 Other tips for environment installation</summary>
+
+ Optionally, you can install xformers or flash_attn to accelerate computation:
+
  ```
- conda create -n hunyuan3d-1-py311 python=3.11
- conda activate hunyuan3d-1-py311
- pip install torch torchvision xformers --index-url https://download.pytorch.org/whl/cu121
- bash env_install.sh
+ pip install xformers --index-url https://download.pytorch.org/whl/cu121
+ ```
+ ```
+ pip install flash_attn
+ ```
+
+ Most environment errors are caused by a mismatch between the machine and the packages. You can try pinning versions manually, as in the following known-good case:
+ ```
+ # python3.9
+ pip install torch==2.0.1 torchvision==0.15.2 --index-url https://download.pytorch.org/whl/cu118
  ```
+
+ When installing pytorch3d, the gcc version should preferably be greater than 9, and the GPU driver should not be too old.
+
+ </details>
+
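Once `env_install.sh` finishes, a quick sanity check can catch CUDA or package mismatches before the first run; a minimal sketch:

```shell
# Sanity-check the install: torch version, the CUDA build it ships with, and GPU visibility.
python3 -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
# pytorch3d is the package most sensitive to gcc/driver mismatches (see the tips above).
python3 -c "import pytorch3d; print(pytorch3d.__version__)"
nvidia-smi   # confirm the driver version and visible GPUs
```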
  #### Download Pretrained Models

  The models are available at [https://huggingface.co/tencent/Hunyuan3D-1](https://huggingface.co/tencent/Hunyuan3D-1):
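One way to fetch all checkpoints is the `huggingface_hub` CLI; a sketch, where the `./weights` target directory is an assumption (use whatever path the scripts below expect):

```shell
# Download every file in the model repo to a local directory (./weights is an assumed path).
pip install "huggingface_hub[cli]"
huggingface-cli download tencent/Hunyuan3D-1 --local-dir ./weights
```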
@@ -151,34 +168,43 @@ We list some more useful configurations for easy usage:
  |`--gen_seed` | 0 | The random seed for 3D generation |
  |`--gen_steps` | 50 | The number of sampling steps for 3D generation |
  |`--max_faces_numm` | 90000 | The maximum number of faces of the 3D mesh |
- |`--save_memory` | False | text2image will move to cpu automatically |
+ |`--save_memory` | False | modules will move to cpu automatically |
  |`--do_texture_mapping` | False | Change vertex shading to texture shading |
  |`--do_render` | False | Render a gif |


  We have also prepared scripts with different configurations for reference:
+ - Inference with the Std-pipeline requires 30GB VRAM (24GB with --save_memory).
+ - Inference with the Lite-pipeline requires 22GB VRAM (18GB with --save_memory).
+ - Note: --save_memory will increase inference time.
+
  ```bash
- bash scripts/text_to_3d_demo.sh
- bash scripts/text_to_3d_fast_demo.sh
- bash scripts/image_to_3d_demo.sh
- bash scripts/image_to_3d_fast_demo.sh
+ bash scripts/text_to_3d_std.sh
+ bash scripts/text_to_3d_lite.sh
+ bash scripts/image_to_3d_std.sh
+ bash scripts/image_to_3d_lite.sh
  ```

- This example requires ~40GB VRAM to run.
+ If your GPU has 16GB of memory, you can try running the pipeline's modules separately:
+ ```bash
+ bash scripts/text_to_3d_std_separately.sh 'a lovely rabbit' ./outputs/test # >= 16G
+ bash scripts/text_to_3d_lite_separately.sh 'a lovely rabbit' ./outputs/test # >= 14G
+ bash scripts/image_to_3d_std_separately.sh ./demos/example_000.png ./outputs/test # >= 16G
+ bash scripts/image_to_3d_lite_separately.sh ./demos/example_000.png ./outputs/test # >= 10G
+ ```

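For concreteness, a sketch of how the flags above combine on one command line; `main.py`, `--text_prompt`, and `--save_folder` are assumptions drawn from the repository's inference examples, so verify them against the scripts above:

```shell
# Hedged example of a std text-to-3D run with the table's flags spelled out.
# main.py, --text_prompt, and --save_folder are assumed names; check against the repo scripts.
python3 main.py \
    --text_prompt "a lovely rabbit" \
    --save_folder ./outputs/test/ \
    --gen_seed 0 \
    --gen_steps 50 \
    --max_faces_numm 90000 \
    --do_texture_mapping \
    --do_render
```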
  #### Using Gradio

  We have prepared two versions of multi-view generation, std and lite.

- For better results, the std version of the running script is as follows
  ```shell
+ # std
  python3 app.py
- ```
-
- For faster speed, you can use the lite version by adding the --use_lite parameter.
+ python3 app.py --save_memory

- ```shell
+ # lite
  python3 app.py --use_lite
+ python3 app.py --use_lite --save_memory
  ```

  Then the demo can be accessed through http://0.0.0.0:8080. Note that the 0.0.0.0 here needs to be replaced with X.X.X.X, your server's IP.
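If the demo runs on a remote machine whose port 8080 is not exposed, an SSH tunnel is a common way to reach it; USER and SERVER_IP are placeholders:

```shell
# Forward the remote Gradio port to this machine; USER and SERVER_IP are placeholders.
ssh -N -L 8080:localhost:8080 USER@SERVER_IP
# then open http://localhost:8080 in a local browser
```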
@@ -202,4 +228,4 @@ If you found this repository helpful, please cite our report:
  archivePrefix={arXiv},
  primaryClass={cs.CV}
  }
- ```
+ ```