I got a TypeError

#3
by xxxqqw - opened

TypeError: PatchEmbeddingBlock.__init__() got an unexpected keyword argument 'pos_embed'

Traceback (most recent call last):
File "/home/wangxt/M3D/3.py", line 13, in <module>
model = AutoModel.from_pretrained(
File "/home/wangxt/anaconda3/envs/pytorch_envm/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 558, in from_pretrained
return model_class.from_pretrained(
File "/home/wangxt/anaconda3/envs/pytorch_envm/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3404, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
File "/home/wangxt/.cache/huggingface/modules/transformers_modules/CLIP/modeling_m3d_clip.py", line 159, in __init__
self.vision_encoder = ViT(
File "/home/wangxt/.cache/huggingface/modules/transformers_modules/CLIP/modeling_m3d_clip.py", line 115, in __init__
self.patch_embedding = PatchEmbeddingBlock(
TypeError: PatchEmbeddingBlock.__init__() got an unexpected keyword argument 'pos_embed'
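One way to narrow this down is to check whether the `PatchEmbeddingBlock` class installed on my machine still accepts a `pos_embed` keyword at all, since the error points to a name mismatch between the remote modeling code and the locally installed library that provides the block (assumption: it comes from MONAI, whose newer releases renamed this argument). A minimal diagnostic sketch, using a hypothetical stand-in class to show the idea:

```python
# Sketch: inspect which keyword arguments a class's __init__ accepts,
# to see whether 'pos_embed' is still a valid parameter name.
import inspect

def accepted_kwargs(cls):
    """Return the set of parameter names of a class's __init__ (minus self)."""
    return set(inspect.signature(cls.__init__).parameters) - {"self"}

# Stand-in class for illustration only; in practice, import the real
# PatchEmbeddingBlock from the installed library and inspect that instead.
class PatchEmbeddingBlock:
    def __init__(self, img_size, patch_size, proj_type="conv"):
        pass

print("pos_embed" in accepted_kwargs(PatchEmbeddingBlock))  # prints False
print(sorted(accepted_kwargs(PatchEmbeddingBlock)))
```

If `pos_embed` is missing from the printed set, the fix would be to align the library version with what the remote modeling code expects, rather than changing the transformers version.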
I used transformers 4.40.1. I tried different versions of transformers, but it still didn't work.
Can you give me some advice? Thanks so much!
Best regards.
