runtime error
Exit code: 1. Reason: , 'text_model.encoder.layers.1.self_attn.out_proj.weight', 'text_model.encoder.layers.7.self_attn.q_proj.weight']
- This IS expected if you are initializing CLIPVisionModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing CLIPVisionModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).

preprocessor_config.json: 100%|██████████| 316/316 [00:00<00:00, 1.72MB/s]
tokenizer_config.json: 100%|██████████| 1.46k/1.46k [00:00<00:00, 9.56MB/s]
tokenizer.model: 100%|██████████| 493k/493k [00:00<00:00, 55.3MB/s]
special_tokens_map.json: 100%|██████████| 438/438 [00:00<00:00, 2.99MB/s]
config.json: 100%|██████████| 1.41k/1.41k [00:00<00:00, 8.68MB/s]

Traceback (most recent call last):
  File "/home/user/app/app.py", line 32, in <module>
    model = AutoModelForCausalLM.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 456, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 957, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 671, in __getitem__
    raise KeyError(key)
KeyError: 'llava_mistral'
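The traceback ends in KeyError: 'llava_mistral', raised when AutoConfig looks up config_dict["model_type"] in CONFIG_MAPPING: the installed transformers build has no built-in config class registered for that model type. Below is a minimal sketch of one common workaround, assuming the checkpoint repository ships its own modeling code; the repository id "your-org/llava-mistral-checkpoint" is a placeholder, not the model actually loaded in app.py. If the checkpoint instead comes from the upstream LLaVA codebase, the usual alternative is to install that package and load the model through its own classes rather than the Auto* loaders.

# Hypothetical sketch: loading a checkpoint whose model_type ('llava_mistral')
# is missing from the installed transformers CONFIG_MAPPING.
# "your-org/llava-mistral-checkpoint" is a placeholder model id.
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "your-org/llava-mistral-checkpoint"  # placeholder, substitute the real repo id

# trust_remote_code=True lets transformers import the config/model classes
# published alongside the weights instead of relying on the built-in mapping.
config = AutoConfig.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)

Note that this only helps when the repository actually contains custom code (an auto_map entry in its config.json); otherwise the fix is to install a transformers version or companion package that registers the llava_mistral architecture before calling from_pretrained.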