'CogVLMVideoForCausalLM' object has no attribute '_extract_past_from_model_output'
I ran:

if __name__ == '__main__':
    test()
And got error:
File ~/.cache/huggingface/modules/transformers_modules/THUDM/cogvlm2-llama3-caption/89899160b0c5dc7980ab9ef60c609710d3643e05/modeling_cogvlm.py:728, in CogVLMVideoForCausalLM._update_model_kwargs_for_generation(self, outputs, model_kwargs, is_encoder_decoder, standardize_cache_format)
719 def _update_model_kwargs_for_generation(
720 self,
721 outputs: "ModelOutput",
(...) 725 ) -> Dict[str, Any]:
726 # update past_key_values
    727     if transformers.__version__ >= "4.44.0":
--> 728 cache_name, cache = self._extract_past_from_model_output(
729 outputs
730 )
731 else:
732 cache_name, cache = self._extract_past_from_model_output(
733 outputs, standardize_cache_format=standardize_cache_format
734 )
File ~/.conda/envs/cogvlm/lib/python3.12/site-packages/torch/nn/modules/module.py:1928, in Module.__getattr__(self, name)
1926 if name in modules:
1927 return modules[name]
-> 1928 raise AttributeError(
   1929     f"'{type(self).__name__}' object has no attribute '{name}'"
1930 )
AttributeError: 'CogVLMVideoForCausalLM' object has no attribute '_extract_past_from_model_output'
I hit the same problem. It works with transformers 4.48.3; see https://github.com/THUDM/GLM-4/issues/716