Some tips on running the model
Hi,
can you share some tips on running the model?
Under 'Use this model' I can see that with transformers it should work like the code below suggests, and some additional steps are surely needed to install the required dependencies. Could you perhaps share a notebook on running this model?
I tried on Lightning AI with an L40 GPU and failed to run the model; it seemed like the model wasn't being loaded into GPU memory.
I also tried on Lightning AI with just 32 CPUs and 128 GB of RAM without a GPU, and it used up all the memory but still didn't run.
I must be doing something incorrectly.
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe = pipeline("text-generation", model="LumiOpen/Poro-34B-chat")
pipe(messages)
```
Kind regards,
John
The only real dependency is `transformers`, as that should pull in everything else that is needed during install.
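If you want a quick sanity check that the environment is set up and that the GPU is actually visible before loading a model this large, something along these lines should do (just a generic check, not specific to this model):

```python
import torch
import transformers

# Quick environment check before loading a large model
print("transformers:", transformers.__version__)
print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
    print("VRAM (GB):", torch.cuda.get_device_properties(0).total_memory / 1e9)
```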
The following code should work (I can't test it at this moment as the model is quite large):
```python
import torch
from transformers import pipeline

messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe = pipeline(
    "text-generation",
    model="LumiOpen/Poro-34B-chat",
    torch_dtype=torch.bfloat16,
    device="cuda",
    max_length=512,
)
pipe(messages)
```
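If you just want to print the assistant's reply rather than the raw pipeline output, something like this should work (the exact output structure can vary a bit between transformers versions):

```python
result = pipe(messages)
# With chat-style input the pipeline returns the whole conversation,
# so the last message should be the assistant's answer.
print(result[0]["generated_text"][-1]["content"])
```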
You can change the `max_length` to whatever you want (the default is 20).
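You can also pass generation arguments per call instead of when creating the pipeline, e.g. `max_new_tokens` to cap only the newly generated tokens (standard transformers usage, not tested on this particular model; if both are set, `max_new_tokens` takes precedence):

```python
# Override generation settings for a single call
pipe(messages, max_new_tokens=256, do_sample=True, temperature=0.7)
```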
Did you get an error message when trying to run the model?
Br,
Antti
Actually, now that I look at the specs of the L40, I don't think the model will fit into its memory. If you replace `device="cuda"` with `device="cpu"` you should be able to run it on CPU. It will be very slow, though.
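For a rough back-of-the-envelope sense of why it won't fit: 34B parameters at 2 bytes each in bfloat16 is already about 68 GB just for the weights, while the L40 has 48 GB of VRAM, and that's before activations and the KV cache:

```python
# Rough estimate of the memory needed just for the model weights
params = 34e9            # ~34 billion parameters
bytes_per_param = 2      # bfloat16
weights_gb = params * bytes_per_param / 1e9
print(f"~{weights_gb:.0f} GB of weights")  # ~68 GB, vs. 48 GB on an L40
```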