KeyError: 'model.layers.60.mlp.down_proj.weight' in vLLM

#1
by pteromyini - opened

Thank you for your contribution. I ran into the following error when loading the model with vLLM. Do you have any idea what might cause it, or a possible fix?

```
(VllmWorkerProcess pid=208) ERROR 11-24 04:56:00 multiproc_worker_utils.py:233] Exception in worker VllmWorkerProcess while processing method load_model: 'model.layers.60.mlp.down_proj.weight', Traceback (most recent call last):
(VllmWorkerProcess pid=208) ERROR 11-24 04:56:00 multiproc_worker_utils.py:233]   File "/usr/local/lib/python3.12/dist-packages/vllm/executor/multiproc_worker_utils.py", line 226, in _run_worker_process
(VllmWorkerProcess pid=208) ERROR 11-24 04:56:00 multiproc_worker_utils.py:233]     output = executor(*args, **kwargs)
(VllmWorkerProcess pid=208) ERROR 11-24 04:56:00 multiproc_worker_utils.py:233]              ^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorkerProcess pid=208) ERROR 11-24 04:56:00 multiproc_worker_utils.py:233]   File "/usr/local/lib/python3.12/dist-packages/vllm/worker/worker.py", line 183, in load_model
(VllmWorkerProcess pid=208) ERROR 11-24 04:56:00 multiproc_worker_utils.py:233]     self.model_runner.load_model()
(VllmWorkerProcess pid=208) ERROR 11-24 04:56:00 multiproc_worker_utils.py:233]   File "/usr/local/lib/python3.12/dist-packages/vllm/worker/model_runner.py", line 1016, in load_model
(VllmWorkerProcess pid=208) ERROR 11-24 04:56:00 multiproc_worker_utils.py:233]     self.model = get_model(model_config=self.model_config,
(VllmWorkerProcess pid=208) ERROR 11-24 04:56:00 multiproc_worker_utils.py:233]                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorkerProcess pid=208) ERROR 11-24 04:56:00 multiproc_worker_utils.py:233]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/__init__.py", line 19, in get_model
(VllmWorkerProcess pid=208) ERROR 11-24 04:56:00 multiproc_worker_utils.py:233]     return loader.load_model(model_config=model_config,
(VllmWorkerProcess pid=208) ERROR 11-24 04:56:00 multiproc_worker_utils.py:233]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorkerProcess pid=208) ERROR 11-24 04:56:00 multiproc_worker_utils.py:233]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/loader.py", line 403, in load_model
(VllmWorkerProcess pid=208) ERROR 11-24 04:56:00 multiproc_worker_utils.py:233]     model.load_weights(self._get_all_weights(model_config, model))
(VllmWorkerProcess pid=208) ERROR 11-24 04:56:00 multiproc_worker_utils.py:233]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/qwen2.py", line 442, in load_weights
(VllmWorkerProcess pid=208) ERROR 11-24 04:56:00 multiproc_worker_utils.py:233]     param = params_dict[name]
(VllmWorkerProcess pid=208) ERROR 11-24 04:56:00 multiproc_worker_utils.py:233]             ~~~~~~~~~~~^^^^^^
(VllmWorkerProcess pid=208) ERROR 11-24 04:56:00 multiproc_worker_utils.py:233] KeyError: 'model.layers.60.mlp.down_proj.weight'
(VllmWorkerProcess pid=208) ERROR 11-24 04:56:00 multiproc_worker_utils.py:233] 
(VllmWorkerProcess pid=208) INFO 11-24 04:56:00 multiproc_worker_utils.py:244] Worker exiting
INFO 11-24 04:56:00 multiproc_worker_utils.py:124] Killing local vLLM worker processes
Process SpawnProcess-1:
Traceback (most recent call last):
  File "/usr/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.12/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 388, in run_mp_engine
    engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 138, in from_engine_args
    return cls(
           ^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 78, in __init__
    self.engine = LLMEngine(*args,
                  ^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/engine/llm_engine.py", line 325, in __init__
    self.model_executor = executor_class(
                          ^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/executor/distributed_gpu_executor.py", line 26, in __init__
    super().__init__(*args, **kwargs)
  File "/usr/local/lib/python3.12/dist-packages/vllm/executor/executor_base.py", line 47, in __init__
    self._init_executor()
  File "/usr/local/lib/python3.12/dist-packages/vllm/executor/multiproc_gpu_executor.py", line 111, in _init_executor
    self._run_workers("load_model",
  File "/usr/local/lib/python3.12/dist-packages/vllm/executor/multiproc_gpu_executor.py", line 185, in _run_workers
    driver_worker_output = driver_worker_method(*args, **kwargs)
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/worker/worker.py", line 183, in load_model
    self.model_runner.load_model()
  File "/usr/local/lib/python3.12/dist-packages/vllm/worker/model_runner.py", line 1016, in load_model
    self.model = get_model(model_config=self.model_config,
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/__init__.py", line 19, in get_model
    return loader.load_model(model_config=model_config,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/loader.py", line 403, in load_model
    model.load_weights(self._get_all_weights(model_config, model))
  File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/qwen2.py", line 442, in load_weights
    param = params_dict[name]
            ~~~~~~~~~~~^^^^^^
KeyError: 'model.layers.60.mlp.down_proj.weight'
Loading safetensors checkpoint shards:   0% Completed | 0/9 [00:00<?, ?it/s]

[rank0]:[W1124 04:56:01.842849235 CudaIPCTypes.cpp:16] Producer process has been terminated before all shared CUDA tensors released. See Note [Sharing CUDA tensors]
```
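For what it's worth, a `KeyError` like this during `load_weights` typically means the checkpoint shards contain a tensor name that the instantiated model has no parameter for, e.g. a `model.layers.60.*` tensor while `config.json` declares only 60 layers (indices 0–59). A minimal diagnostic sketch that checks for this mismatch offline — the helper `find_unexpected_layers` is hypothetical, not part of vLLM:

```python
import re

def find_unexpected_layers(weight_names, num_hidden_layers):
    """Return checkpoint tensor names whose layer index is >= the
    config's num_hidden_layers; such tensors have no matching
    parameter and would raise a KeyError in vLLM's load_weights."""
    layer_pattern = re.compile(r"model\.layers\.(\d+)\.")
    unexpected = []
    for name in weight_names:
        match = layer_pattern.match(name)
        if match and int(match.group(1)) >= num_hidden_layers:
            unexpected.append(name)
    return unexpected

# Example: a config with 60 layers (indices 0..59) cannot host a
# tensor for layer 60 -- exactly the failing key in the traceback.
names = [
    "model.layers.59.mlp.down_proj.weight",
    "model.layers.60.mlp.down_proj.weight",
]
print(find_unexpected_layers(names, 60))
# -> ['model.layers.60.mlp.down_proj.weight']
```

Against a real checkpoint directory, `weight_names` would come from the `"weight_map"` keys in `model.safetensors.index.json` and `num_hidden_layers` from `config.json`; if the check flags tensors, the shards and the config were likely produced by different model variants.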
