Muennighoff committed
Commit 1280bba · 1 Parent(s): 7febf7b

Add SGPT-125M-weightedmean-nli-bitfit-linearthenpool1-noact
Browse files
- 1_Dense/config.json +1 -0
- 1_Dense/pytorch_model.bin +3 -0
- 2_Pooling/config.json +9 -0
- README.md +90 -0
- config.json +54 -0
- config_sentence_transformers.json +7 -0
- eval/similarity_evaluation_sts-dev_results.csv +12 -0
- merges.txt +0 -0
- modules.json +20 -0
- pytorch_model.bin +3 -0
- sentence_bert_config.json +4 -0
- similarity_evaluation_sts-test_results.csv +2 -0
- special_tokens_map.json +1 -0
- tokenizer.json +0 -0
- tokenizer_config.json +1 -0
- vocab.json +0 -0
1_Dense/config.json
ADDED
@@ -0,0 +1 @@
+{"in_features": 768, "out_features": 768, "bias": false, "activation_function": "torch.nn.modules.linear.Identity", "key_name": "token_embeddings"}
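This Dense config declares a bias-free 768→768 linear map with an `Identity` activation, i.e. a pure linear projection applied to every token embedding before pooling (the "linearthenpool1-noact" in the model name). A minimal sketch of the operation in plain PyTorch, with random tensors standing in for real activations:

```python
import torch

# Sketch of what the 1_Dense module computes: a 768 -> 768 linear
# projection with no bias and no non-linearity, applied per token.
linear = torch.nn.Linear(768, 768, bias=False)
token_embeddings = torch.randn(2, 75, 768)   # (batch, seq_len, hidden)
projected = linear(token_embeddings)         # same shape, linearly mapped
```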
1_Dense/pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:41dd94d16e149d9d0cf42cbca2e788fdd958b303e7ee7c5b0e16d55b0bc1819f
+size 2360171
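The .bin files are committed as Git LFS pointers, not weights: three key/value lines giving the spec version, the SHA-256 of the real payload, and its byte size (2360171 ≈ one 768×768 float32 matrix plus serialization overhead). A small sketch of reading such a pointer, with a hypothetical `parse_lfs_pointer` helper:

```python
def parse_lfs_pointer(text: str) -> dict:
    # Hypothetical helper: each pointer line is "<key> <value>".
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:41dd94d16e149d9d0cf42cbca2e788fdd958b303e7ee7c5b0e16d55b0bc1819f\n"
    "size 2360171"
)
print(parse_lfs_pointer(pointer))  # {'version': ..., 'oid': ..., 'size': '2360171'}
```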
2_Pooling/config.json
ADDED
@@ -0,0 +1,9 @@
+{
+  "word_embedding_dimension": 768,
+  "pooling_mode_cls_token": false,
+  "pooling_mode_mean_tokens": false,
+  "pooling_mode_max_tokens": false,
+  "pooling_mode_mean_sqrt_len_tokens": false,
+  "pooling_mode_weightedmean_tokens": true,
+  "pooling_mode_lasttoken": false
+}
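The only pooling mode switched on is `pooling_mode_weightedmean_tokens`: instead of a plain mean, token embeddings are averaged with weights that grow linearly with position, so later tokens (which have attended to the whole sentence in a causal model like GPT-Neo) count more. A minimal sketch of the idea; the library's exact implementation may differ in detail:

```python
import torch

def weighted_mean_pool(token_embeddings: torch.Tensor,
                       attention_mask: torch.Tensor) -> torch.Tensor:
    # token_embeddings: (batch, seq_len, hidden); attention_mask: (batch, seq_len)
    seq_len = token_embeddings.size(1)
    # Weight token i by its 1-based position, zeroing out padding.
    weights = torch.arange(1, seq_len + 1, dtype=token_embeddings.dtype)
    weights = (weights.unsqueeze(0) * attention_mask).unsqueeze(-1)
    return (token_embeddings * weights).sum(dim=1) / weights.sum(dim=1)
```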
README.md
ADDED
@@ -0,0 +1,90 @@
+---
+pipeline_tag: sentence-similarity
+tags:
+- sentence-transformers
+- feature-extraction
+- sentence-similarity
+---
+
+# {MODEL_NAME}
+
+This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
+
+<!--- Describe your model here -->
+
+## Usage (Sentence-Transformers)
+
+Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
+
+```
+pip install -U sentence-transformers
+```
+
+Then you can use the model like this:
+
+```python
+from sentence_transformers import SentenceTransformer
+sentences = ["This is an example sentence", "Each sentence is converted"]
+
+model = SentenceTransformer('{MODEL_NAME}')
+embeddings = model.encode(sentences)
+print(embeddings)
+```
+
+
+
+## Evaluation Results
+
+<!--- Describe how your model was evaluated -->
+
+For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
+
+
+## Training
+The model was trained with the parameters:
+
+**DataLoader**:
+
+`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 8807 with parameters:
+```
+{'batch_size': 64}
+```
+
+**Loss**:
+
+`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
+```
+{'scale': 20.0, 'similarity_fct': 'cos_sim'}
+```
+
+Parameters of the fit()-Method:
+```
+{
+    "epochs": 1,
+    "evaluation_steps": 880,
+    "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
+    "max_grad_norm": 1,
+    "optimizer_class": "<class 'transformers.optimization.AdamW'>",
+    "optimizer_params": {
+        "lr": 0.0002
+    },
+    "scheduler": "WarmupLinear",
+    "steps_per_epoch": null,
+    "warmup_steps": 881,
+    "weight_decay": 0.01
+}
+```
+
+
+## Full Model Architecture
+```
+SentenceTransformer(
+  (0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: GPTNeoModel
+  (1): Dense({'in_features': 768, 'out_features': 768, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity', 'key_name': 'token_embeddings'})
+  (2): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
+)
+```
+
+## Citing & Authors
+
+<!--- Describe where people can find more information -->
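The README's training section pairs a duplicate-free dataloader with MultipleNegativesRankingLoss at scale 20 and cosine similarity: every other positive in the batch serves as a negative, and the loss is cross-entropy over the scaled similarity matrix with the true pair on the diagonal. A minimal sketch of that objective, not the library's exact code:

```python
import torch
import torch.nn.functional as F

def mnr_loss(anchors: torch.Tensor, positives: torch.Tensor,
             scale: float = 20.0) -> torch.Tensor:
    # Scaled cosine similarity of every anchor against every positive;
    # the matching pair (the diagonal) is the target class.
    a = F.normalize(anchors, dim=1)
    p = F.normalize(positives, dim=1)
    scores = a @ p.T * scale
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)
```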
config.json
ADDED
@@ -0,0 +1,54 @@
+{
+  "_name_or_path": "EleutherAI/gpt-neo-125M",
+  "activation_function": "gelu_new",
+  "architectures": [
+    "GPTNeoModel"
+  ],
+  "attention_dropout": 0,
+  "attention_layers": [
+    "global",
+    "local",
+    "global",
+    "local",
+    "global",
+    "local",
+    "global",
+    "local",
+    "global",
+    "local",
+    "global",
+    "local"
+  ],
+  "attention_types": [
+    [
+      [
+        "global",
+        "local"
+      ],
+      6
+    ]
+  ],
+  "bos_token_id": 50256,
+  "embed_dropout": 0,
+  "eos_token_id": 50256,
+  "gradient_checkpointing": false,
+  "hidden_size": 768,
+  "initializer_range": 0.02,
+  "intermediate_size": null,
+  "layer_norm_epsilon": 1e-05,
+  "max_position_embeddings": 2048,
+  "model_type": "gpt_neo",
+  "num_heads": 12,
+  "num_layers": 12,
+  "resid_dropout": 0,
+  "summary_activation": null,
+  "summary_first_dropout": 0.1,
+  "summary_proj_to_labels": true,
+  "summary_type": "cls_index",
+  "summary_use_proj": true,
+  "torch_dtype": "float32",
+  "transformers_version": "4.11.3",
+  "use_cache": true,
+  "vocab_size": 50257,
+  "window_size": 256
+}
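This is the unmodified EleutherAI/gpt-neo-125M backbone config: 12 layers alternating global and local (window 256) attention, hidden size 768, 2048 positions. It can be inspected without downloading the weights:

```python
from transformers import AutoConfig

# Values should match the JSON above.
config = AutoConfig.from_pretrained("EleutherAI/gpt-neo-125M")
print(config.num_layers, config.hidden_size, config.window_size)  # 12 768 256
```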
config_sentence_transformers.json
ADDED
@@ -0,0 +1,7 @@
+{
+  "__version__": {
+    "sentence_transformers": "2.1.0",
+    "transformers": "4.11.3",
+    "pytorch": "1.10.1"
+  }
+}
eval/similarity_evaluation_sts-dev_results.csv
ADDED
@@ -0,0 +1,12 @@
+epoch,steps,cosine_pearson,cosine_spearman,euclidean_pearson,euclidean_spearman,manhattan_pearson,manhattan_spearman,dot_pearson,dot_spearman
+0,880,0.7950654474666194,0.8007724298457999,0.8053906756375231,0.8097874874987419,0.8047586985807786,0.8089171899149843,0.6466071182992355,0.6641264512033144
+0,1760,0.8030398104842608,0.8160455008841297,0.8086338103815267,0.815550525453574,0.8079955327128969,0.8151088561096271,0.6693501137278115,0.6782979930117164
+0,2640,0.8075786833336444,0.8224187933799936,0.8149775620050075,0.8227937580519055,0.8144927041872945,0.822288643982625,0.6701705379431102,0.6778687026798715
+0,3520,0.8065297027403904,0.8221457453074965,0.8149968631108975,0.8226186513278043,0.8143420677382092,0.8221145566738735,0.6668185712885413,0.6781621156937467
+0,4400,0.8229752862752813,0.8346565368594768,0.8173067458283395,0.8247088573966124,0.8164936355056336,0.8239528853446589,0.69134471203173,0.7009815762815577
+0,5280,0.8194553973919252,0.8306975625970974,0.8137693590385502,0.8220428039553168,0.8133634213550716,0.8219195725535476,0.6768752025434883,0.689327875791452
+0,6160,0.8180249517730525,0.8301200604191976,0.817183845811426,0.8237163851814475,0.8169188076497365,0.8236161004147311,0.6899741455955114,0.6988393523832572
+0,7040,0.8224062724671777,0.8344848872748275,0.8196195964009184,0.8254658952856002,0.8194220367977899,0.825170944904116,0.697337270552,0.7065405818972463
+0,7920,0.822964185859869,0.8349577968538565,0.8191320481438031,0.8252181726947154,0.818974767334848,0.8252869307492836,0.694260169706717,0.705534588320437
+0,8800,0.8222761356146835,0.8342738533213615,0.818554098461939,0.8247641794988477,0.8183280287676225,0.8248310959151341,0.6974533094203264,0.7091388203451227
+0,-1,0.8222915401539983,0.8342784634967285,0.8185605463249078,0.8247645442677827,0.8183366680230943,0.8248284250947857,0.6974779126897581,0.7091391014841067
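These columns are the standard EmbeddingSimilarityEvaluator output on the STS dev set: Pearson and Spearman correlations between the gold scores and four embedding-similarity measures, logged every 880 steps (steps = -1 is the end-of-epoch checkpoint). A minimal sketch of how a value like cosine_spearman is computed, assuming paired embeddings and gold scores:

```python
import numpy as np
from scipy.stats import spearmanr

def cosine_spearman(emb1: np.ndarray, emb2: np.ndarray,
                    gold: np.ndarray) -> float:
    # Cosine similarity per sentence pair, then Spearman rank
    # correlation against the gold STS scores.
    sims = (emb1 * emb2).sum(axis=1) / (
        np.linalg.norm(emb1, axis=1) * np.linalg.norm(emb2, axis=1))
    return spearmanr(sims, gold).correlation
```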
merges.txt
ADDED
The diff for this file is too large to render.
See raw diff
modules.json
ADDED
@@ -0,0 +1,20 @@
+[
+  {
+    "idx": 0,
+    "name": "0",
+    "path": "",
+    "type": "sentence_transformers.models.Transformer"
+  },
+  {
+    "idx": 1,
+    "name": "1",
+    "path": "1_Dense",
+    "type": "sentence_transformers.models.Dense"
+  },
+  {
+    "idx": 2,
+    "name": "2",
+    "path": "2_Pooling",
+    "type": "sentence_transformers.models.Pooling"
+  }
+]
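modules.json fixes the order in which sentence-transformers chains the saved modules: the GPT-Neo Transformer at the repo root, then the 1_Dense projection, then the 2_Pooling layer. A hypothetical sketch of assembling the same pipeline by hand (keyword names as in sentence-transformers; weightedmean pooling requires a library version that supports it):

```python
import torch
from sentence_transformers import SentenceTransformer, models

transformer = models.Transformer("EleutherAI/gpt-neo-125M", max_seq_length=75)
dense = models.Dense(in_features=768, out_features=768, bias=False,
                     activation_function=torch.nn.Identity())
pooling = models.Pooling(word_embedding_dimension=768,
                         pooling_mode_mean_tokens=False,
                         pooling_mode_weightedmean_tokens=True)
model = SentenceTransformer(modules=[transformer, dense, pooling])
```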
pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ec2c7243367d43b8517fb4560415e64dc8b3128017624d42a35f9fa6e32da183
+size 551190545
sentence_bert_config.json
ADDED
@@ -0,0 +1,4 @@
+{
+  "max_seq_length": 75,
+  "do_lower_case": false
+}
similarity_evaluation_sts-test_results.csv
ADDED
@@ -0,0 +1,2 @@
+epoch,steps,cosine_pearson,cosine_spearman,euclidean_pearson,euclidean_spearman,manhattan_pearson,manhattan_spearman,dot_pearson,dot_spearman
+-1,-1,0.7918788718215923,0.7977578945132462,0.7650596217424315,0.7621632173553866,0.7645615412069551,0.7611209816320168,0.5683667608213765,0.5593629199027566
special_tokens_map.json
ADDED
@@ -0,0 +1 @@
+{"bos_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "eos_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "unk_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "pad_token": "<|endoftext|>"}
tokenizer.json
ADDED
The diff for this file is too large to render.
See raw diff
tokenizer_config.json
ADDED
@@ -0,0 +1 @@
+{"unk_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "bos_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "eos_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "add_prefix_space": false, "errors": "replace", "model_max_length": 2048, "special_tokens_map_file": null, "name_or_path": "EleutherAI/gpt-neo-125M", "tokenizer_class": "GPT2Tokenizer"}
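The tokenizer is the stock GPT-2 BPE tokenizer inherited from EleutherAI/gpt-neo-125M, with <|endoftext|> serving as bos, eos, and unk, and reused as the pad token. A short sketch of loading it and reproducing the padding setup:

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
tokenizer.pad_token = tokenizer.eos_token  # as in special_tokens_map.json
print(tokenizer("This is an example sentence"))
```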
vocab.json
ADDED
The diff for this file is too large to render.
See raw diff