Why does this take so long to load?

#1
by Robo0890 - opened

Every time I try to run it, it spends so long loading the model that it times out.

BigScience Workshop org
edited Nov 18, 2022

> Every time I try to run it, it spends so long loading the model that it times out.

If you're referring to the Hosted Inference API, that's because no GPU is provisioned for this model and the model is huge, so loading takes a very long time.
If you want to run it, you'll need to download the model and run it on your own hardware, sorry :(

Here are some guidelines for running it: https://huggingface.co/bigscience/bloomz/discussions/18#636b6ad958a8f9348d0ab82c
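
For reference, here is a minimal sketch of loading the model locally with the transformers library. It assumes you have transformers and accelerate installed and enough RAM/VRAM for the checkpoint; the prompt and generation settings are illustrative, not taken from this thread:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigscience/bloomz-p3"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(
    checkpoint,
    device_map="auto",   # requires accelerate; spreads weights across GPU(s) and CPU RAM
    torch_dtype="auto",  # load in the checkpoint's native precision
)

# Illustrative prompt; BLOOMZ models are instruction-tuned, so plain-language tasks work.
inputs = tokenizer("Translate to English: Je t'aime.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Even with device_map="auto" offloading to CPU, the full checkpoint still has to fit somewhere, so expect long load times on modest hardware.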

BigScience Workshop org

We should probably disable the widget then, as it may be confusing.

Ah, got it.
Thanks. I was just trying it out. If it does what I think it does, it could be just as good as, if not better than, GPT-3.
Nice work, I love open source.
Even if I can’t run it :(

Robo0890 changed discussion status to closed
BigScience Workshop org

FYI removed the widget to prevent more confusion about this: https://huggingface.co/bigscience/bloomz-p3/commit/51f3d0d7079a37501554eb7ce2558012bb96d062
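
On Hugging Face, the hosted widget is controlled from the model card's YAML front matter, so the linked commit presumably makes a change along these lines (a sketch, not the exact diff):

```yaml
# README.md front matter (sketch; see the linked commit for the actual change)
inference: false  # disables the hosted inference widget on the model page
```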
