Model outputs 768-dim embeddings instead of the documented 1024

#1
by Bhanu3 - opened

Hello,

I'm trying out the JobBERT-v2 model via the sentence-transformers library. According to the documentation, the model should output 1024-dimensional embeddings, but at inference time I'm getting 768-dimensional embeddings instead.
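For reference, this is roughly what I'm running (a minimal sketch; the job title is just a placeholder):

```python
from sentence_transformers import SentenceTransformer

# Minimal reproduction of my inference setup
model = SentenceTransformer("jensjorisdecorte/JobBERT-v2", device="cuda")

embeddings = model.encode(["Senior Data Engineer"])
print(embeddings.shape)  # prints (1, 768), but I expected (1, 1024)
```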

I suspect that the Asym layer is primarily designed for training, where "anchor" and "positive" inputs are routed through separate projection heads and compared or contrasted. At inference time, if the input isn't routed through one of those heads, the projection may simply be skipped, which would explain why I'm only getting the base 768-dimensional output.
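If that's the case, I'd expect routing inputs through the Asym keys to look something like this (a sketch only; I'm assuming the keys are named "anchor" and "positive" based on the training setup and haven't confirmed them against the model config):

```python
# Continuing from the snippet above.
# Sketch: route inputs through the Asym heads by passing dicts keyed on the Asym key names.
# Assumption: the keys are "anchor" (job titles) and "positive" (skills) - not verified.
anchor_emb = model.encode([{"anchor": "Senior Data Engineer"}])
positive_emb = model.encode([{"positive": "SQL, Python, data pipelines"}])

# If the Asym projection is applied, I would expect (1, 1024) for each.
print(anchor_emb.shape, positive_emb.shape)
```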

Is the Asym layer intended only for training purposes in the JobBERT-v2 model, or am I doing something wrong on my end?

Model Name: jensjorisdecorte/JobBERT-v2
Library Versions:
sentence-transformers: 3.1.0
transformers: 4.44.2
torch: 2.4.1+cu118
Python Version: 3.8
Device: CUDA

Thanks,
