How many GB of VRAM do I need to run it locally?
I tried to run it using Docker, but I always get an OOM message. I have 12 GB of VRAM. Btw, very cool app!
Hi! More than 16 GB of VRAM is currently required.
Thanks!
See if https://github.com/microsoft/TRELLIS/issues/31 helps
I disagree with the statement that it requires more than 16 GB. I ran it locally on an RTX 4070 with 12 GB of VRAM. I also added a torch.cuda.empty_cache() call after generation finishes, so memory is freed and it's ready to go again. The issue was probably that, after the first generation, the second one would take much longer because the 12 GB of VRAM had filled up; with the cache cleared, the issue is gone.
Great to hear that! I may edit the README once this is verified.
I've only succeeded once, on the first generation. After that, no matter how many times I tried, it always gave me an OOM error message. I restarted Docker multiple times and still had no luck. This may be a stupid question since I'm not familiar with Docker: how do I add torch.cuda.empty_cache() to app.py inside the Docker image? I want to try it again to see if this solves my problem.
Create a custom function in base.py, called clear_cache for example, then call it at the end of image_to_3d.
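For reference, here is a minimal sketch of that pattern. The names clear_cache and image_to_3d come from this thread; the pipeline argument and the generation call are placeholders, not the actual TRELLIS code:

```python
import gc
import torch

def clear_cache():
    # Drop unreferenced Python objects first, then ask PyTorch's caching
    # allocator to return unused blocks to the GPU driver.
    gc.collect()
    torch.cuda.empty_cache()

def image_to_3d(pipeline, image):
    # Placeholder for the actual TRELLIS generation call; the real
    # signature and body in app.py differ.
    with torch.no_grad():
        result = pipeline(image)
    # Clearing the cache here is what keeps a 12 GB card from hitting
    # OOM on the second and later generations.
    clear_cache()
    return result
```

Note that empty_cache() only releases cached blocks that are no longer referenced, which is why the call goes after the generation returns.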
@Dainzh I somehow managed to install it on Windows. After adding torch.cuda.empty_cache() here and there and setting some environment variables, I was able to run it without OOM. Thanks for your help!
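The post doesn't say which environment variables were set, so this is a guess: PYTORCH_CUDA_ALLOC_CONF is the usual PyTorch knob for fragmentation-related OOMs, and it has to be set before CUDA is initialized:

```python
import os

# Must run before torch initializes CUDA, so put it at the very top
# of app.py. expandable_segments is a real PyTorch allocator option
# that reduces fragmentation; whether it is the variable the poster
# above used is an assumption.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "expandable_segments:True")

import torch  # imported after the env var on purpose
```

TRELLIS also reads its own variables (ATTN_BACKEND and SPCONV_ALGO, per its README), which may matter here as well.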
Running it locally on Windows in WSL2, I can confirm this lets me generate as much as I want without it breaking after a few generations in a row.
It sounds like I really might be able to run it on my RTX 3060 with 12 GB. Yes?
What I mean is that you can install it directly on Windows; no WSL2 is needed. And with the NVIDIA CUDA sysmem fallback policy enabled, I haven't seen any OOM errors since then.
Yes, you can. I've been using it for almost a week now, and I haven't seen any OOM errors. It's only a bit slower than Zero GPU, roughly 80% of its speed on an RTX 3060 12 GB.
Oh sure, I should have been clearer: I meant that torch.cuda.empty_cache() is confirmed to take care of the OOM issue (regardless of where you install it).
That said, it worked out of the box for me on WSL; I didn't need to do anything special, so I kept it.
Can anyone upload/send me the files that work on 12 GB? I tried, but I still get the same errors. Can anyone help me?
For anyone who wants to install this on Windows (with 12 GB of VRAM), follow these steps on GitHub: https://github.com/microsoft/TRELLIS/issues/3#issuecomment-2524713914. That's how I got it working; hope this helps.