Can't load model with SentenceTransformers 3.0.1: AttributeError: 'LatentAttentionConfig' object has no attribute '_attn_implementation_internal'

#50
by jswarner85 - opened

Hello, I am trying to load the model using the following code:

from sentence_transformers import SentenceTransformer

nv_model = SentenceTransformer('nvidia/NV-Embed-v1', trust_remote_code=True)

When I do, it downloads the shards and then errors out with "AttributeError: 'LatentAttentionConfig' object has no attribute '_attn_implementation_internal'". Is there a specific version of sentence_transformers I should be using?

NVIDIA org
edited Aug 21

Hi, @jswarner85. Can you try installing the packages as below?

pip uninstall -y transformer-engine
pip install torch==2.2.0
pip install transformers==4.42.4
pip install flash-attn==2.2.0
pip install sentence-transformers==2.7.0
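For reference, a minimal sanity check after reinstalling with the pins above; this is just a sketch, and the expected version strings and the encode call are illustrative rather than part of the official instructions:

# Verify the pinned versions are actually the ones imported, then try loading the model.
import transformers, sentence_transformers
print(transformers.__version__)           # expect 4.42.4 with the pins above
print(sentence_transformers.__version__)  # expect 2.7.0

from sentence_transformers import SentenceTransformer
nv_model = SentenceTransformer('nvidia/NV-Embed-v1', trust_remote_code=True)
print(nv_model.encode(["hello world"]).shape)  # prints the embedding shape if the model loads end-to-end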

I am experiencing the same issue

I think the problem is with the latest transformers==4.44.1. I downgraded to transformers==4.42.4 and it works now. (I did not try versions between 4.42.4 and 4.44.1.)

I tried setting the transformers version to both 4.42.4 and 4.44.1, and both times I get a different error: ImportError: libcudart.so.11.0: cannot open shared object file: No such file or directory.

Is anyone who has gotten the model working able to share the Python, CUDA, and transformers versions that worked for you? I'm on SageMaker using the image "pytorch 2.2.0 python 3.10 gpu optimized".

NVIDIA org

Hi, @konkalita. Thanks for validating the transformers package version. The README troubleshooting section has been updated accordingly.

@jswarner85 Sure, although in my experience libcudart.so.11.0-style errors usually come from a mismatch between the CUDA and torch libraries rather than from this model; a quick way to check for that is sketched after the version list below. Here is the environment that works for me:

accelerate==0.33.0
einops==0.8.0
nvidia-cublas-cu12==12.1.3.1
nvidia-cuda-cupti-cu12==12.1.105
nvidia-cuda-nvrtc-cu12==12.1.105
nvidia-cuda-runtime-cu12==12.1.105
nvidia-cudnn-cu12==9.1.0.70
nvidia-cufft-cu12==11.0.2.54
nvidia-curand-cu12==10.3.2.106
nvidia-cusolver-cu12==11.4.5.107
nvidia-cusparse-cu12==12.1.0.106
nvidia-nccl-cu12==2.20.5
nvidia-nvjitlink-cu12==12.6.20
nvidia-nvtx-cu12==12.1.105
peft==0.12.0
safetensors==0.4.4
sentence-transformers==3.0.1
tokenizers==0.19.1
torch==2.4.0
transformers==4.42.4
triton==3.0.0
NVIDIA-SMI 550.90.07              Driver Version: 550.90.07      CUDA Version: 12.4
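For the libcudart mismatch mentioned above, a minimal check. This is only a sketch: the package names in the loop come from the list above, and the flash-attn remark at the end is a guess, not a confirmed diagnosis.

# Report the CUDA build torch was compiled against and the installed versions of the key packages.
import torch
from importlib.metadata import version, PackageNotFoundError

print("torch:", torch.__version__)              # 2.4.0 in the environment above
print("torch CUDA build:", torch.version.cuda)  # should be a 12.x toolkit when using nvidia-*-cu12 wheels
print("CUDA available:", torch.cuda.is_available())

for pkg in ("transformers", "sentence-transformers", "flash-attn"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")

# An ImportError for libcudart.so.11.0 usually means some compiled package was built against
# CUDA 11 while only CUDA 12 runtime libraries are installed; reinstalling that package
# (flash-attn is a common suspect, but that's an assumption) against the current torch build
# typically resolves it.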


transformers==4.43.4 works for me

I confirm version 4.43.4 works for me
