Update modeling_cogvlm.py for newer Transformers versions
Compatible with Transformers > 4.41.2
Details:
The current implementation raises an error at the following lines when the Transformers version is newer than 4.41.2:

if past_key_values is not None:
    past_key_values_length = past_key_values[0][0].shape[2]
    seq_length_with_past = seq_length_with_past + past_key_values_length
The issue is caused by a change in the output of the _extract_past_from_model_output function, defined in Transformers src/transformers/generation/utils.py, since version v4.42.0. I tested that this fix works with transformers 4.44.0 as well.
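To illustrate the failure mode, here is a toy sketch (tensor shapes are made up, and the (cache_name, cache) tuple format is my reading of the v4.42.0 change, not a quote from the diff):

```python
import torch

# Before v4.42.0, the value stored as past_key_values is the legacy cache:
# a tuple of (key, value) tensor pairs, one pair per layer.
legacy_cache = ((torch.zeros(1, 8, 5, 64), torch.zeros(1, 8, 5, 64)),)
print(legacy_cache[0][0].shape[2])  # 5, i.e. past_key_values_length

# Since v4.42.0, _extract_past_from_model_output returns a
# (cache_name, cache) tuple, so the same indexing hits the name string
# and past_key_values[0][0].shape raises AttributeError.
new_style = ("past_key_values", legacy_cache)
print(type(new_style[0][0]))  # <class 'str'> -- has no .shape attribute
```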
Therefore, my PR checks the installed Transformers version and adjusts how the output of _extract_past_from_model_output is processed, so that CogVLM2 works with both newer versions of Transformers (e.g., 4.44.0) and versions below 4.42.0.
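A minimal sketch of that version branch (the helper name _get_past_key_values and the exact call sites are my assumptions for illustration, not a copy of the diff):

```python
import transformers
from packaging import version

# True when the installed Transformers returns the new (cache_name, cache)
# tuple from _extract_past_from_model_output (changed in v4.42.0).
TRANSFORMERS_4_42_PLUS = version.parse(transformers.__version__) >= version.parse("4.42.0")

def _get_past_key_values(model, outputs):
    """Extract the cache from generation outputs in a version-agnostic way.

    `model` is expected to be a GenerationMixin subclass (e.g. the CogVLM2
    model class in modeling_cogvlm.py); this helper name is hypothetical.
    """
    if TRANSFORMERS_4_42_PLUS:
        # Newer Transformers: unpack the (cache_name, cache) tuple.
        _cache_name, cache = model._extract_past_from_model_output(outputs)
        return cache
    # Transformers <= 4.41.2: the cache is returned directly.
    return model._extract_past_from_model_output(outputs)
```

Inside _update_model_kwargs_for_generation, the cache extracted this way would then be stored under model_kwargs["past_key_values"] as before.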
This solved my issue!
cogvlm-videos has been changed to support transformers 4.44.0; I will copy from that change as soon as possible.
This PR works for transformers 4.44.0 as well. I think it could be an option to just merge this PR.
Hi, can you test and merge this PR?