Failed to load the model

#2
by amanpatkar - opened

Hi there,
I have tried to load the model with the code below, but it seems an additional file named "configuration_jamba.py" is required. Because it is missing, I am getting the following error:
OSError: Could not locate configuration_jamba.py inside FreedomIntelligence/LongLLaVA.

Code to load the model:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

model_id = "FreedomIntelligence/LongLLaVA"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True)
model.generation_config = GenerationConfig.from_pretrained(model_id)

FreedomAI org

Hi there, sorry for the confusion. The configuration file is at https://github.com/FreedomIntelligence/LongLLaVA/tree/main/llava/model/language_model/Jamba.
We are still preparing the code and will release it soon.
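
In the meantime, a minimal workaround sketch for anyone blocked: download the checkpoint to a local folder and copy the remote-code files from the GitHub repo into it before loading. The raw-file URLs, the modeling_jamba.py filename, and the assumption that the checkpoint's config.json already references these files via auto_map are all unconfirmed guesses based on the repo layout linked above.

import urllib.request
import torch
from huggingface_hub import snapshot_download
from transformers import AutoModelForCausalLM, AutoTokenizer

# Download the checkpoint into a writable local directory.
local_dir = snapshot_download("FreedomIntelligence/LongLLaVA", local_dir="LongLLaVA-local")

# Assumed raw-file locations in the GitHub code repo; adjust if the layout differs.
base = ("https://raw.githubusercontent.com/FreedomIntelligence/LongLLaVA/"
        "main/llava/model/language_model/Jamba/")
for fname in ("configuration_jamba.py", "modeling_jamba.py"):  # second filename assumed
    urllib.request.urlretrieve(base + fname, f"{local_dir}/{fname}")

# Load from the patched local snapshot instead of the hub id.
tokenizer = AutoTokenizer.from_pretrained(local_dir, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(local_dir, device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True)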

Hi,
Thanks for your quick reply. I want to load the model directly with the transformers library. Could you make it work that way? That would be very helpful.

FreedomAI org

We have updated the code. Please try again; we look forward to your feedback.

configuration_jamba.py has only been updated for the 9B model. When will it be released for the 53B model?

FreedomAI org

We have updated it; thank you for the reminder. We apologize for not including these two files directly in the model repo. The relevant files are in the code repo (https://github.com/FreedomIntelligence/LongLLaVA/tree/main/llava/model/language_model/Jamba).
