---
library_name: transformers
license: mit
language:
- multilingual
pipeline_tag: sentence-similarity
tags:
- text-embedding
- embeddings
- information-retrieval
- beir
- text-classification
- language-model
- text-clustering
- text-semantic-similarity
- text-evaluation
- text-reranking
- feature-extraction
- sentence-similarity
- Sentence Similarity
- natural_questions
- ms_marco
- fever
- hotpot_qa
- mteb
---
# `NLLB-LLM2Vec`: Self-Distillation for Model Stacking Unlocks Cross-Lingual NLU in 200+ Languages
- **Repository:** https://github.com/fdschmidt93/trident-nllb-llm2vec
- **Paper:** https://arxiv.org/abs/2406.12739
`NLLB-LLM2Vec` multilingually extends [LLM2Vec](https://github.com/McGill-NLP/llm2vec) via efficient self-supervised distillation. We train the up-projection and the LoRA adapters of `NLLB-LLM2Vec` by forcing its mean-pooled token embeddings to match (via mean-squared error) the output of the original LLM2Vec.
![Self-supervised Distillation](./nllb-llm2vec-distill.png)
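The figure above summarizes the objective. As a minimal sketch (not the training code from the linked repository), the distillation loss can be written as follows, assuming hypothetical `student_hidden`/`teacher_hidden` activations and their attention masks:
```python
import torch
import torch.nn.functional as F

def mean_pool(hidden_states: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # Average token embeddings over the sequence, ignoring padding positions.
    mask = attention_mask.unsqueeze(-1).to(hidden_states.dtype)
    return (hidden_states * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

def distillation_loss(
    student_hidden: torch.Tensor,   # NLLB-LLM2Vec token embeddings (B, S, H)
    teacher_hidden: torch.Tensor,   # original LLM2Vec token embeddings (B, T, H)
    student_mask: torch.Tensor,
    teacher_mask: torch.Tensor,
) -> torch.Tensor:
    # Mean-pooled student embeddings are regressed onto the teacher's
    # mean-pooled embeddings via mean-squared error.
    student_emb = mean_pool(student_hidden, student_mask)
    teacher_emb = mean_pool(teacher_hidden, teacher_mask)
    return F.mse_loss(student_emb, teacher_emb)
```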
This model has only been trained on self-supervised data and has not yet been fine-tuned on any downstream task! This version is expected to perform better than the self-supervised adaptation reported in the original paper, as the LoRA adapters are merged into the model prior to task fine-tuning. The backbone of this model is [LLM2Vec-Meta-Llama-31-8B-Instruct-mntp-unsup-simcse](https://huggingface.co/McGill-NLP/LLM2Vec-Meta-Llama-31-8B-Instruct-mntp-unsup-simcse). We use the encoder of [NLLB-600M](https://huggingface.co/facebook/nllb-200-distilled-600M).
> ⚠️ Make sure to set `src_lang` correctly (i.e., `AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M", src_lang=LANG_CODE)`) for the language you are using NLLB-LLM2Vec with! You can find the list of supported languages [here](https://huggingface.co/facebook/nllb-200-distilled-600M/blob/main/special_tokens_map.json).
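For example, a minimal sketch for German input (`deu_Latn`); the sentence is just a placeholder:
```python
from transformers import AutoTokenizer

# Load the NLLB tokenizer with the language code matching your input text.
tokenizer = AutoTokenizer.from_pretrained(
    "facebook/nllb-200-distilled-600M", src_lang="deu_Latn"
)
batch = tokenizer(["Ein kurzer Beispielsatz."], return_tensors="pt")
```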
## Usage
```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("fdschmidt93/NLLB-LLM2Vec-Meta-Llama-31-8B-Instruct-mntp-unsup-simcse")
model = AutoModel.from_pretrained(
    "fdschmidt93/NLLB-LLM2Vec-Meta-Llama-31-8B-Instruct-mntp-unsup-simcse",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    device_map="cuda" if torch.cuda.is_available() else "cpu",
)
# English FLORES sentences (English is the default source language)
flores_en = [
    "Lead researchers say this may bring early detection of cancer, tuberculosis, HIV and malaria to patients in low-income countries, where the survival rates for illnesses such as breast cancer can be half those of richer countries.",
    "Enceladus is the most reflective object in the solar system, reflecting about 90 percent of the sunlight that hits it.",
]
en_embeds = model.encode(flores_en)
# German FLORES sentences; pass the matching NLLB language code
flores_de = [
    "Führende Forscher sagen, dass dies die Früherkennung von Krebs, Tuberkulose, HIV und Malaria für Patienten in einkommensschwachen Ländern fördern könnte, wo die Überlebensraten bei Krankheiten wie Brustkrebs teilweise nur halb so hoch sind wie in reicheren Ländern.",
    "Enceladus ist das Objekt im Sonnensystem, das am stärksten reflektiert. Er wirft etwa 90 Prozent des auf ihn treffenden Sonnenlichts zurück.",
]
de_embeds = model.encode(flores_de, src_lang="deu_Latn")
# Compute cosine similarity
en_embeds_norm = F.normalize(en_embeds, p=2, dim=1)
de_embeds_norm = F.normalize(de_embeds, p=2, dim=1)
cos_sim = en_embeds_norm @ de_embeds_norm.T
print(cos_sim)
"""
tensor([[0.9062, 0.0894],
        [0.1289, 0.8633]])
"""
```
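For unsupervised retrieval-style tasks, the normalized embeddings can be used as-is to rank candidates by cosine similarity. A minimal sketch, reusing `en_embeds_norm` and `de_embeds_norm` from the snippet above:
```python
# Treat the German sentences as queries and the English sentences as documents;
# rank documents per query by cosine similarity (highest score first).
scores = de_embeds_norm @ en_embeds_norm.T           # (num_queries, num_docs)
ranking = scores.argsort(dim=1, descending=True)
print(ranking)  # row i lists English indices ordered by similarity to German query i
```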
## Fine-tuning
You should fine-tune the model on labelled data unless you are using the model for unsupervised retrieval-style tasks.
`NLLB-LLM2Vec` supports both `AutoModelForSequenceClassification` and `AutoModelForTokenClassification`.
```python
import torch
from transformers import AutoModelForSequenceClassification
from peft import get_peft_model
from peft.tuners.lora.config import LoraConfig
# Only attach LoRAs to the linear layers of LLM2Vec inside NLLB-LLM2Vec
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=r".*llm2vec.*(self_attn\.(q|k|v|o)_proj|mlp\.(gate|up|down)_proj).*",
    bias="none",
    task_type="SEQ_CLS",
)
model = AutoModelForSequenceClassification.from_pretrained(
    "fdschmidt93/NLLB-LLM2Vec-Meta-Llama-31-8B-Instruct-mntp-unsup-simcse",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
)
model = get_peft_model(model, lora_config)
```
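From here, fine-tuning proceeds as for any other `transformers` model. A minimal sketch with the Hugging Face `Trainer`; `train_dataset` and the hyperparameters are illustrative placeholders, not settings from the paper:
```python
from transformers import Trainer, TrainingArguments

# `train_dataset` is assumed to be a tokenized dataset with a `labels` column,
# built with the NLLB tokenizer and the appropriate `src_lang`.
training_args = TrainingArguments(
    output_dir="nllb-llm2vec-seq-cls",
    per_device_train_batch_size=16,
    learning_rate=1e-4,
    num_train_epochs=3,
    bf16=True,
)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
)
trainer.train()
```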
## Questions
If you have any questions about the code, feel free to email Fabian David Schmidt (`[email protected]`).
## Citation
If you use `NLLB-LLM2Vec` in your work, please cite:
```
@misc{schmidt2024selfdistillationmodelstackingunlocks,
title={Self-Distillation for Model Stacking Unlocks Cross-Lingual NLU in 200+ Languages},
author={Fabian David Schmidt and Philipp Borchert and Ivan Vulić and Goran Glavaš},
year={2024},
eprint={2406.12739},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2406.12739},
}
```
The work has been accepted to the Findings of EMNLP. The BibTeX entry will therefore be updated once the paper is published in the ACL Anthology.