---
base_model:
  - ssmits/Falcon2-5.5B-multilingual
library_name: sentence-transformers
tags:
  - ssmits/Falcon2-5.5B-multilingual
license: apache-2.0
language:
  - es
  - fr
  - de
  - 'no'
  - sv
  - da
  - nl
  - pt
  - pl
  - ro
  - it
  - cs
pipeline_tag: text-classification
---

## Usage

Embeddings version of the base model ssmits/Falcon2-5.5B-multilingual. The 'lm_head' layer of this model has been removed, which means it can be used for embeddings. It will not perform well out of the box: because it is pruned, it needs further fine-tuning, as demonstrated by intfloat/e5-mistral-7b-instruct. Additionally, instead of a normalization layer, the hidden states are followed by a classical 1-dimensional weight and bias, each holding 4096 values. The basic Sentence-Transformers implementation works correctly, which implies that more sophisticated embedding techniques, such as adding a custom classification head, should work correctly as well (see the sketch below).
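
As a quick, hypothetical sketch of that claim, the snippet below stacks a small linear classification head on top of the frozen embeddings. The number of classes (3) and the example sentences are made up for illustration; only the 4096-dimensional embedding size comes from this model.

```python
import torch
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("ssmits/Falcon2-5.5B-multilingual-embed-base")

# Hypothetical 3-class head on top of the 4096-dimensional embeddings
classifier = torch.nn.Linear(4096, 3)

sentences = [
    "The weather is lovely today.",
    "It's so sunny outside!",
    "He drove to the stadium.",
]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Move embeddings to CPU/float32 so head and inputs share device and dtype
logits = classifier(embeddings.cpu().float())
print(logits.shape)  # torch.Size([3, 3])
```

Both the head and, realistically, the base model would still need to be fine-tuned on labelled data before these logits are meaningful.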

## Inference

```python
from sentence_transformers import SentenceTransformer
import torch

# 1. Load a pretrained Sentence Transformer model
model = SentenceTransformer("ssmits/Falcon2-5.5B-multilingual-embed-base")

# The sentences to encode
sentences = [
    "The weather is lovely today.",
    "It's so sunny outside!",
    "He drove to the stadium.",
]

# 2. Calculate embeddings by calling model.encode()
# convert_to_tensor=True returns a torch.Tensor, which the similarity step below expects
embeddings = model.encode(sentences, convert_to_tensor=True)
print(embeddings.shape)
# torch.Size([3, 4096])

# 3. Calculate the embedding similarities
# Using torch to compute the cosine similarity matrix
similarities = torch.nn.functional.cosine_similarity(
    embeddings.unsqueeze(0), embeddings.unsqueeze(1), dim=2
)
print(similarities)
# tensor([[1.0000, 0.7120, 0.5937],
#         [0.7120, 1.0000, 0.5925],
#         [0.5937, 0.5925, 1.0000]])
```

Note: in my tests, inference used more than 24 GB of VRAM (exceeding an RTX 4090), so an A100 or A6000 would be required.
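
If memory is the only blocker, an untested workaround is to load the underlying transformer in half precision through the `model_kwargs` pass-through available in recent sentence-transformers releases; this roughly halves the footprint at a small cost in numerical precision:

```python
import torch
from sentence_transformers import SentenceTransformer

# Untested sketch: load the weights in bfloat16 to roughly halve VRAM usage.
# model_kwargs is forwarded to transformers' from_pretrained
# (requires a recent sentence-transformers release).
model = SentenceTransformer(
    "ssmits/Falcon2-5.5B-multilingual-embed-base",
    model_kwargs={"torch_dtype": torch.bfloat16},
)

embeddings = model.encode(["A quick memory-footprint check."])
print(embeddings.shape)  # (1, 4096)
```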