NghiemAbe committed
Commit b0f0df6 · verified · 1 Parent(s): 35fcce8

Update README.md

Files changed (1):
  1. README.md +0 -34
README.md CHANGED
@@ -84,40 +84,6 @@ For an automated evaluation of this model, see the *Sentence Embeddings Benchmar
  ## Training
  The model was trained with the parameters:
 
- **DataLoader**:
-
- `sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 36 with parameters:
- ```
- {'batch_size': 512}
- ```
-
- **Loss**:
-
- `GIST.CachedGISTEmbedLoss.CachedGISTEmbedLoss` with parameters:
- ```
- {'guide': SentenceTransformer(
-   (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel
-   (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
- ), 'temperature': 0.05}
- ```
-
- Parameters of the fit()-Method:
- ```
- {
-     "epochs": 10,
-     "evaluation_steps": 10,
-     "evaluator": "sentence_transformers.evaluation.InformationRetrievalEvaluator.InformationRetrievalEvaluator",
-     "max_grad_norm": 1,
-     "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
-     "optimizer_params": {
-         "lr": 1e-05
-     },
-     "scheduler": "warmuplinear",
-     "steps_per_epoch": null,
-     "warmup_steps": 36,
-     "weight_decay": 0.01
- }
- ```
 
 
  ## Full Model Architecture
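For reference, the removed parameters map onto the legacy `SentenceTransformer.fit()` training loop roughly as in the sketch below. The checkpoint paths, training pairs, and evaluation data are placeholders, and `CachedGISTEmbedLoss` is imported from `sentence_transformers.losses` here, whereas the README referenced a local `GIST` module; treat this as an approximation of the documented setup rather than the exact training script.

```python
# Minimal sketch of the training setup documented in the removed README section.
# Checkpoint paths, training pairs, and evaluator data are placeholders; the
# original repo loaded CachedGISTEmbedLoss from a local `GIST` module, while
# this sketch assumes the equivalent class from sentence_transformers.losses.
import torch
from sentence_transformers import InputExample, SentenceTransformer
from sentence_transformers.datasets import NoDuplicatesDataLoader
from sentence_transformers.evaluation import InformationRetrievalEvaluator
from sentence_transformers.losses import CachedGISTEmbedLoss

model = SentenceTransformer("path/to/model-being-trained")  # placeholder
# Guide model: a RoBERTa SentenceTransformer with mean pooling and max_seq_length 256
guide = SentenceTransformer("path/to/roberta-guide-model")  # placeholder

# Placeholder pairs; 36 batches of 512 match the DataLoader "of length 36" above
train_examples = [
    InputExample(texts=[f"query {i}", f"relevant passage {i}"]) for i in range(36 * 512)
]
train_dataloader = NoDuplicatesDataLoader(train_examples, batch_size=512)

# CachedGISTEmbedLoss guided by the RoBERTa model, temperature 0.05
train_loss = CachedGISTEmbedLoss(model, guide=guide, temperature=0.05)

# Placeholder IR evaluation data (query id -> text, doc id -> text, query id -> relevant doc ids)
evaluator = InformationRetrievalEvaluator(
    queries={"q1": "example query"},
    corpus={"d1": "example passage"},
    relevant_docs={"q1": {"d1"}},
)

# fit() parameters from the removed section
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    evaluator=evaluator,
    epochs=10,
    evaluation_steps=10,
    scheduler="warmuplinear",
    warmup_steps=36,
    optimizer_class=torch.optim.AdamW,
    optimizer_params={"lr": 1e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```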
 