---
library_name: transformers
license: mit
language:
- multilingual
pipeline_tag: sentence-similarity
tags:
- text-embedding
- embeddings
- information-retrieval
- beir
- text-classification
- language-model
- text-clustering
- text-semantic-similarity
- text-evaluation
- text-reranking
- feature-extraction
- sentence-similarity
- natural_questions
- ms_marco
- fever
- hotpot_qa
- mteb
---

# `NLLB-LLM2Vec`: Self-Distillation for Model Stacking Unlocks Cross-Lingual NLU in 200+ Languages

- **Repository:** https://github.com/fdschmidt93/trident-nllb-llm2vec
- **Paper:** https://arxiv.org/abs/2406.12739

`NLLB-LLM2Vec` multilingually extends [LLM2Vec](https://github.com/McGill-NLP/llm2vec) via efficient self-supervised distillation. We train the up-projection and LoRA adapters of `NLLB-LLM2Vec` by forcing its mean-pooled token embeddings to match the output of the original LLM2Vec via mean-squared error.

![Self-supervised Distillation](./nllb-llm2vec-distill.png)

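The objective above can be sketched in a few lines of PyTorch. This is an illustrative stand-alone sketch with random tensors, not the actual training code: the shapes and variable names are placeholders.

```python
import torch
import torch.nn.functional as F

# Illustrative shapes: batch of 4 sentences, 16 student tokens, hidden size 1024
student_tokens = torch.randn(4, 16, 1024)  # token embeddings from the trainable NLLB-LLM2Vec path
attention_mask = torch.ones(4, 16)         # 1 = real token, 0 = padding
teacher_embeds = torch.randn(4, 1024)      # sentence embeddings from the frozen LLM2Vec teacher

# Mean-pool the student's token embeddings over non-padding positions
summed = (student_tokens * attention_mask.unsqueeze(-1)).sum(dim=1)
student_embeds = summed / attention_mask.sum(dim=1, keepdim=True)

# Self-distillation loss: mean-squared error against the teacher output
loss = F.mse_loss(student_embeds, teacher_embeds)
```

In training, only the up-projection and the LoRA parameters receive gradients from this loss; the teacher is frozen.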
This model has been trained only on self-supervised data and has not yet been fine-tuned on any downstream task. This version is expected to perform better than the self-supervised adaptation in the original paper, as the LoRAs are merged into the model prior to task fine-tuning. The backbone of this model is [LLM2Vec-Meta-Llama-31-8B-Instruct-mntp-unsup-simcse](https://huggingface.co/McGill-NLP/LLM2Vec-Meta-Llama-31-8B-Instruct-mntp-unsup-simcse). We use the encoder of [NLLB-600M](https://huggingface.co/facebook/nllb-200-distilled-600M).

## Usage
```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

# Load the NLLB tokenizer; the custom remote code stacks the NLLB encoder with LLM2Vec.
tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")

model = AutoModel.from_pretrained(
    "fdschmidt93/NLLB-LLM2Vec-Meta-Llama-31-8B-Instruct-mntp-unsup-simcse",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    device_map="cuda" if torch.cuda.is_available() else "cpu",
)

# Encoding queries using instructions
instruction = (
    "Given a web search query, retrieve relevant passages that answer the query:"
)
queries = [
    [instruction, "how much protein should a female eat"],
    [instruction, "summit define"],
]
q_reps = model.encode(queries)

# Encoding documents. Instructions are not required for documents
documents = [
    "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
    "Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments.",
]
d_reps = model.encode(documents)

# Compute cosine similarity
q_reps_norm = F.normalize(q_reps, p=2, dim=1)
d_reps_norm = F.normalize(d_reps, p=2, dim=1)
cos_sim = q_reps_norm @ d_reps_norm.T

print(cos_sim)
"""
tensor([[0.7740, 0.5580],
        [0.4845, 0.4993]])
"""
```
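
The normalize-then-matmul step above computes all pairwise cosine similarities at once. A quick self-contained check with random stand-in embeddings shows it matches `F.cosine_similarity` element by element:

```python
import torch
import torch.nn.functional as F

q = torch.randn(2, 16)  # stand-ins for query embeddings
d = torch.randn(3, 16)  # stand-ins for document embeddings

# Normalize to unit length, then dot products give cosine similarities
cos_matmul = F.normalize(q, p=2, dim=1) @ F.normalize(d, p=2, dim=1).T

# Same result computed pairwise via broadcasting
cos_pairwise = F.cosine_similarity(q.unsqueeze(1), d.unsqueeze(0), dim=-1)
assert torch.allclose(cos_matmul, cos_pairwise, atol=1e-5)
```

The matmul form is preferable when ranking many documents per query, since it is a single batched operation.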

## Fine-tuning

You should fine-tune the model on labelled data unless you are using the model for unsupervised retrieval-style tasks.
`NLLB-LLM2Vec` supports both `AutoModelForSequenceClassification` and `AutoModelForTokenClassification`.

```python
import torch
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model

# Only attach LoRAs to the linear layers of LLM2Vec inside NLLB-LLM2Vec
lora_config = LoraConfig(
    lora_alpha=32,
    target_modules=r".*llm2vec.*(self_attn\.(q|k|v|o)_proj|mlp\.(gate|up|down)_proj).*",
    bias="none",
    task_type="SEQ_CLS",
)
model = AutoModelForSequenceClassification.from_pretrained(
    "fdschmidt93/NLLB-LLM2Vec-Meta-Llama-31-8B-Instruct-mntp-unsup-simcse",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
)
model = get_peft_model(model, lora_config)
```
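
After fine-tuning, the adapters can be merged back into the base weights (e.g. with peft's `merge_and_unload`), which removes the adapter overhead at inference without changing outputs. Why merging is lossless follows directly from the LoRA update rule; here is a minimal sketch with toy tensors (all sizes and names are illustrative, not the real model's):

```python
import torch

# Toy linear layer and LoRA factors
d_in, d_out, r, alpha = 8, 8, 2, 32
W = torch.randn(d_out, d_in)       # frozen base weight
A = torch.randn(r, d_in) * 0.01    # LoRA down-projection
B = torch.randn(d_out, r) * 0.01   # LoRA up-projection

x = torch.randn(4, d_in)

# Adapter forward: base output plus scaled low-rank update
y_adapter = x @ W.T + (alpha / r) * (x @ A.T) @ B.T

# Merged forward: fold the low-rank update into the weight matrix
W_merged = W + (alpha / r) * (B @ A)
y_merged = x @ W_merged.T

assert torch.allclose(y_adapter, y_merged, atol=1e-5)
```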

## Questions
If you have any questions about the code, feel free to email Fabian David Schmidt (`[email protected]`).

## Citation

If you use `NLLB-LLM2Vec` in your work, please cite:

```
@misc{schmidt2024selfdistillationmodelstackingunlocks,
      title={Self-Distillation for Model Stacking Unlocks Cross-Lingual NLU in 200+ Languages},
      author={Fabian David Schmidt and Philipp Borchert and Ivan Vulić and Goran Glavaš},
      year={2024},
      eprint={2406.12739},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2406.12739},
}
```

The work has been accepted to Findings of EMNLP. The BibTeX will be updated once the paper is released in the ACL Anthology.