Files changed (5)
  1. README.md +33 -110
  2. config.json +2 -2
  3. model_head.pkl +2 -2
  4. pytorch_model.bin +2 -2
  5. tokenizer_config.json +1 -2
README.md CHANGED
@@ -1,126 +1,49 @@
  ---
- pipeline_tag: sentence-similarity
  tags:
  - sentence-transformers
- - feature-extraction
- - sentence-similarity
- - transformers
-
  ---

- # {MODEL_NAME}
-
- This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
-
- <!--- Describe your model here -->

- ## Usage (Sentence-Transformers)

- Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
-
- ```
- pip install -U sentence-transformers
- ```

- Then you can use the model like this:

- ```python
- from sentence_transformers import SentenceTransformer
- sentences = ["This is an example sentence", "Each sentence is converted"]

- model = SentenceTransformer('{MODEL_NAME}')
- embeddings = model.encode(sentences)
- print(embeddings)
  ```

-
-
- ## Usage (HuggingFace Transformers)
- Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.

  ```python
- from transformers import AutoTokenizer, AutoModel
- import torch
-
-
- # Mean Pooling - Take attention mask into account for correct averaging
- def mean_pooling(model_output, attention_mask):
-     token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
-     input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
-     return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
-
-
- # Sentences we want sentence embeddings for
- sentences = ['This is an example sentence', 'Each sentence is converted']
-
- # Load model from HuggingFace Hub
- tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
- model = AutoModel.from_pretrained('{MODEL_NAME}')
-
- # Tokenize sentences
- encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
-
- # Compute token embeddings
- with torch.no_grad():
-     model_output = model(**encoded_input)
-
- # Perform pooling. In this case, mean pooling.
- sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
-
- print("Sentence embeddings:")
- print(sentence_embeddings)
- ```
-
-
-
- ## Evaluation Results
-
- <!--- Describe how your model was evaluated -->
-
- For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
-
-
- ## Training
- The model was trained with the parameters:
-
- **DataLoader**:
-
- `torch.utils.data.dataloader.DataLoader` of length 40 with parameters:
- ```
- {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
- ```
-
- **Loss**:
-
- `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
-
- Parameters of the fit()-Method:
- ```
- {
-     "epochs": 1,
-     "evaluation_steps": 0,
-     "evaluator": "NoneType",
-     "max_grad_norm": 1,
-     "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
-     "optimizer_params": {
-         "lr": 2e-05
-     },
-     "scheduler": "WarmupLinear",
-     "steps_per_epoch": 40,
-     "warmup_steps": 4,
-     "weight_decay": 0.01
  }
  ```
-
-
- ## Full Model Architecture
- ```
- SentenceTransformer(
-   (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
-   (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
- )
- ```
-
- ## Citing & Authors
-
- <!--- Describe where people can find more information -->
 
  ---
+ license: apache-2.0
  tags:
+ - setfit
  - sentence-transformers
+ - text-classification
+ pipeline_tag: text-classification
  ---

+ # lewtun/my-awesome-setfit-model

+ This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:

+ 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
+ 2. Training a classification head with features from the fine-tuned Sentence Transformer.

+ ## Usage

+ To use this model for inference, first install the SetFit library:

+ ```bash
+ python -m pip install setfit
  ```

+ You can then run inference as follows:

  ```python
+ from setfit import SetFitModel
+
+ # Download from Hub and run inference
+ model = SetFitModel.from_pretrained("lewtun/my-awesome-setfit-model")
+ # Run inference
+ preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
+ ```
+
+ ## BibTeX entry and citation info
+
+ ```bibtex
+ @article{https://doi.org/10.48550/arxiv.2209.11055,
+   doi = {10.48550/ARXIV.2209.11055},
+   url = {https://arxiv.org/abs/2209.11055},
+   author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
+   keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
+   title = {Efficient Few-Shot Learning Without Prompts},
+   publisher = {arXiv},
+   year = {2022},
+   copyright = {Creative Commons Attribution 4.0 International}
  }
  ```
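The new card documents the two-step SetFit recipe (contrastive fine-tuning of the Sentence Transformer, then fitting a classification head) but only shows inference. Below is a minimal, hedged training sketch, assuming the legacy `SetFitTrainer` API from setfit 0.x; the two-example dataset is hypothetical, and the base checkpoint, loss, and batch size simply mirror values that appear elsewhere in this PR.

```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Hypothetical few-shot dataset (not part of this repo): "text" and "label" columns.
train_dataset = Dataset.from_dict({
    "text": [
        "i loved the spiderman movie!",
        "pineapple on pizza is the worst 🤮",
    ],
    "label": [1, 0],
})

# Base checkpoint that config.json in this PR still points at.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

trainer = SetFitTrainer(
    model=model,
    train_dataset=train_dataset,
    loss_class=CosineSimilarityLoss,  # same loss family as the old card's training section
    batch_size=16,                    # matches the old card's DataLoader parameters
    num_epochs=1,
)
trainer.train()  # step 1: contrastive fine-tuning; step 2: fit the classification head

# The trained model can then be pushed and used exactly as in the new card's usage snippet.
preds = model(["i loved the spiderman movie!"])
print(preds)
```

Note that newer setfit releases replace `SetFitTrainer` with `Trainer`/`TrainingArguments`, so the argument names above are version-dependent.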
config.json CHANGED
@@ -1,5 +1,5 @@
  {
- "_name_or_path": "/home/lewis_huggingface_co/.cache/torch/sentence_transformers/sentence-transformers_paraphrase-mpnet-base-v2/",
+ "_name_or_path": "/home/zjs/.cache/torch/sentence_transformers/sentence-transformers_paraphrase-mpnet-base-v2/",
  "architectures": [
  "MPNetModel"
  ],
@@ -19,6 +19,6 @@
  "pad_token_id": 1,
  "relative_attention_num_buckets": 32,
  "torch_dtype": "float32",
- "transformers_version": "4.20.0",
+ "transformers_version": "4.30.2",
  "vocab_size": 30527
  }
model_head.pkl CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:6802cd87b7c943c0591009cb3bc4063470757fa53ed89b600f3f2a819e7b0d42
- size 6927
+ oid sha256:38f4c170e3f7734344bce3c15bdbe2142484e9253b18fdf723b937aa234b8d8e
+ size 7041
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:acb2bffff89914827e11045bce312c1f74eddaeb2bfdb4bfa9b78ab132cb854e
- size 438014769
+ oid sha256:4be98415bdf9d33cd6fa30ffc65e831b33fff0d5196a5c386f4f3ed84713ed0c
+ size 438016493
tokenizer_config.json CHANGED
@@ -7,6 +7,7 @@
  "rstrip": false,
  "single_word": false
  },
+ "clean_up_tokenization_spaces": true,
  "cls_token": {
  "__type": "AddedToken",
  "content": "<s>",
@@ -34,7 +35,6 @@
  "single_word": false
  },
  "model_max_length": 512,
- "name_or_path": "/home/lewis_huggingface_co/.cache/torch/sentence_transformers/sentence-transformers_paraphrase-mpnet-base-v2/",
  "never_split": null,
  "pad_token": {
  "__type": "AddedToken",
@@ -52,7 +52,6 @@
  "rstrip": false,
  "single_word": false
  },
- "special_tokens_map_file": null,
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "MPNetTokenizer",