bobox/DeBERTaV3-small-GeneralSentenceTransformer-v2-AllSoft
Pipeline tag: Sentence Similarity
Tags: sentence-transformers, PyTorch, deberta-v2, feature-extraction, Generated from Trainer, Eval Results, Inference Endpoints
Datasets: 15 datasets (dataset_size: 78183)
Language: English
Losses: AdaptiveLayerLoss, CoSENTLoss, GISTEmbedLoss, OnlineContrastiveLoss, MultipleNegativesSymmetricRankingLoss
arXiv: 1908.10084, 2402.14776, 2402.16829
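The Sentence Similarity and feature-extraction tags mean the model maps sentences to dense vectors that are then compared with cosine similarity. Below is a minimal sketch assuming only that embeddings are plain float vectors; the commented-out lines show the usual sentence-transformers loading path for this repo, left inert here because it requires the library installed and downloads the 565 MB checkpoint.

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length float vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


# Real usage (needs `pip install sentence-transformers`; downloads the weights):
# from sentence_transformers import SentenceTransformer
# model = SentenceTransformer(
#     "bobox/DeBERTaV3-small-GeneralSentenceTransformer-v2-AllSoft"
# )
# emb = model.encode(["A cat sits on the mat.", "A kitten rests on the rug."])
# print(cosine_similarity(emb[0], emb[1]))

# Toy vectors standing in for embeddings: parallel vectors score 1.0.
print(round(cosine_similarity([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]), 6))  # 1.0
```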
Files and versions
1 contributor; history: 3 commits
Latest commit by bobox: "all layer trained for every step.AdaptiveLayerLoss(model=model," (ca0ee78, verified, 6 months ago)
All files were last updated 6 months ago; except .gitattributes (initial commit), every file's last commit is ca0ee78, "all layer trained for every step.AdaptiveLayerLoss(model=model,".

1_Pooling/ (folder)
.gitattributes (1.52 kB)
README.md (121 kB)
added_tokens.json (23 Bytes)
config.json (860 Bytes)
config_sentence_transformers.json (195 Bytes)
modules.json (229 Bytes)
pytorch_model.bin (565 MB, LFS, pickle; detected pickle imports: collections.OrderedDict, torch._utils._rebuild_tensor_v2, torch.FloatStorage)
sentence_bert_config.json (53 Bytes)
spm.model (2.46 MB, LFS)
tokenizer.json (8.66 MB)
tokenizer_config.json (1.28 kB)
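The Hub flags pytorch_model.bin as a pickle and lists the globals it imports, since unpickling can execute arbitrary callables. A minimal stdlib sketch of that kind of scan, using `pickletools` to read the opcode stream without unpickling anything; this is an illustration of the idea, not the Hub's actual scanner:

```python
import pickle
import pickletools
from collections import OrderedDict


def detect_pickle_imports(data: bytes) -> set:
    """Collect 'module.name' globals referenced by a pickle byte stream."""
    imports = set()
    strings = []  # recent string arguments, consumed by STACK_GLOBAL
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL":
            # Protocols <= 3: arg is "module name" as one space-separated string.
            module, name = arg.split(" ", 1)
            imports.add(f"{module}.{name}")
        elif opcode.name == "STACK_GLOBAL":
            # Protocols >= 4: module and name were pushed as the two most
            # recent string-valued opcodes.
            if len(strings) >= 2:
                imports.add(f"{strings[-2]}.{strings[-1]}")
        if isinstance(arg, str):
            strings.append(arg)
    return imports


# Scanning a pickled OrderedDict surfaces the class it would import,
# just like the 'collections.OrderedDict' entry shown for this repo.
data = pickle.dumps(OrderedDict(a=1), protocol=2)
print(detect_pickle_imports(data))
```

Real model checkpoints would surface entries such as torch._utils._rebuild_tensor_v2 and torch.FloatStorage the same way, since those are the callables torch registers for rebuilding tensors during unpickling.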