model evaluation failed

#36
by realPCH - opened

My model failed the evaluation; can I find out why?
The code below runs fine for me, and the evaluation using lm-evaluation-harness also worked fine.

from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

# Load config, tokenizer, and model from the Hub repo at the main revision
config = AutoConfig.from_pretrained("realPCH/240102_test_float16", revision="main")
tokenizer = AutoTokenizer.from_pretrained("realPCH/240102_test_float16", revision="main")
model = AutoModelForCausalLM.from_pretrained("realPCH/240102_test_float16", revision="main")
upstage org

Hello, there is no submission history for realPCH/240102_test_float16.
You can check the link.

@choco9966 Sorry, I must have been mistaken.
Could you tell me why the model "realPCH/240104_mistral_lora" failed?
Thanks.

upstage org
edited Jan 7

@realPCH
... site-packages/transformers/models/mistral/modeling_mistral.py", line 353, in forward
and kv_seq_len > self.config.sliding_window
TypeError: '>' not supported between instances of 'int' and 'NoneType'

I think the None value is causing the problem: self.config.sliding_window is None for this model, so the comparison with an int in modeling_mistral.py raises the TypeError.
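For anyone hitting the same error, a minimal sketch of a config-level workaround (the 4096 window value and the output path are assumptions on my part, not from this thread):

from transformers import AutoConfig

# Inspect the field that the Mistral attention code compares against
config = AutoConfig.from_pretrained("realPCH/240104_mistral_lora", revision="main")
print(config.sliding_window)  # None here reproduces the TypeError on older transformers

# Sketch: set an explicit window (4096 is the usual Mistral-7B default, assumed here)
# and save the patched config; the output path is hypothetical
config.sliding_window = 4096
config.save_pretrained("240104_mistral_lora_fixed")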

(It looks like this bug may have been fixed in a recent update of the transformers library, so I'll update the version and retry.)
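For reference, a trivial check of the installed version before resubmitting (nothing here is specific to this model):

import transformers

# Print the installed version; the comment above suggests the None
# sliding_window comparison was fixed in a later transformers release
print(transformers.__version__)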

realPCH changed discussion status to closed
