---
base_model: sentence-transformers/all-MiniLM-L6-v2
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1821475
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Estimating User Location in Social Media with Stacked Denoising
Auto-encoders
sentences:
- 'Domain Adaptation for Large-Scale Sentiment Classification: A Deep Learning Approach'
- Conventional sphygmomanometers are being replaced by automated devices; can they
be used to accurately calculate ABPI?Thirty-six volunteers (72 legs) attending
a vascular clinic had their ankle, brachial blood pressure and ABPIs calculated
using each of these 3 methods. (1) Conventional aneuroid BP cuff with hand held
doppler. (2) OMRON HEM 705CP portable automated BP monitor. (3) The hand held
doppler to determine systolic BP measured by the OMRON.Conventional doppler readings
for brachial and ankle pressures were generally higher than those obtained digitally
by less than 3 mmHg but this was not statistically significant. This did not translate
into a significant difference in ABPIs obtained using all 3 techniques; the correlation
coefficient of conventional ABPI with automated ABPI (method 2) was 0.746, this
was improved to 0.899 using method 3. The OMRON failed to detect a signal in 16
of the 72 legs, 11 of these legs had ABPIs<0.66.
- Deep neural networks based user interface detection for mobile applications using
symbol marker
- source_sentence: 'Central mesenteric lymph node BER-Ep4+ cells in colorectal cancer:
challenge to sentinel node concept?'
sentences:
- The Lovely Bones (film) the film had many positive messages about life." The Lovely
Bones (film) The Lovely Bones is a 2009 supernatural drama film directed by Peter
Jackson, and starring Mark Wahlberg, Rachel Weisz, Susan Sarandon, Stanley Tucci,
Michael Imperioli, and Saoirse Ronan. The screenplay by Fran Walsh, Philippa Boyens,
and Jackson was based on Alice Sebold’s award-winning and bestselling 2002 novel
of the same name. It follows a girl who is murdered and watches over her family
from the in-between, and is torn between seeking vengeance on her killer and allowing
her family to heal. An international co-production between the United States,
- Postoperative intracranial hematoma (POIH) is a frequent sequela secondary to
cranial surgery. The role of routine early postoperative computed tomography (CT)
scanning in the detection of POIH remains controversial. The study was aimed at
analyzing the effect of routine early CT scanning after craniotomy for the early
detection of POIH.Routine early postoperative CT scanning was performed at our
institute, and a retrospective study was conducted to analyze the data. POIH was
defined as an intracranial hematoma requiring surgical management.A total of 1,148
patients undergoing craniotomy were included in this study; 28 of these patients
developed POIH. The majority of POIH cases (15/28, 54 %) were detected during
the first 6 h following craniotomy. A routine CT scan was performed on all included
patients but two; however, CT scans detected only 16 POIH cases. During the first
6 h, the rate at which CT scans detected POIH was 1.9 % (15/786); subsequently,
the rate decreased to only 0.3 % (1/360; p < 0.05, compared with the rate during
the first 6 h). Among patients without clinical manifestations, the rate at which
the routine post-craniotomy CT scan detected POIH was only 0.7 % (5/721) (p < 0.05,
compared with the incidence of POIH). Finally, among high-risk POIH patients,
the POIH-positive rate of routine CT scanning was elevated.
- The role of sentinel lymph nodes in colorectal cancer remains unclear.Cryosections
from central para-aortic mesenterial lymph nodes were stained using mAb BER-Ep4.
Overall survival and distant recurrence were calculated using Kaplan-Meier plots.All
patients (n = 48) were free of distant metastases and curatively resected (R0).
23 pN0, 13 pN1 and 12 pN2 stages were found. 21/48 patients (44%) showed BER-Ep4+
cells in their central lymph nodes (7/23 pN0, 8/13 pN1, 6/12 pN2). In 6/23 pN0
patients, BER-Ep4+ cells were also found in locoregional nodes (p = 0.03, Fisher's
exact test). pN status predicted overall survival (p = 0.006, Kaplan-Meier curve,
log-rank test). An impact was exerted by central mesenteric BER-Ep4+ cells on
overall survival (p = 0.009 in pN0 patients, p = 0.07 for all pN) and distant
recurrence-free survival (p = 0.001 in pN0 patients, p = 0.007 for all pN). Multivariate
analysis showed an independent prognostic effect on overall survival in pN0 patients
(p = 0.022).
- source_sentence: when did the samsung galaxy s8 come out
sentences:
- Samsung Galaxy S8 support for Daydream. The Galaxy S8 was one of the first Android
phones to support ARCore, Google's augmented reality engine. In February 2018,
the official Android 8.0 Oreo update began rolling out to the Samsung Galaxy S8,
Samsung Galaxy S8+, and Samsung Galaxy S8 Active. Besides the phone's protective
case reportedly cracking and peeling away in under 2 months of use, Dan Seifert
of "The Verge" praised the design of the Galaxy S8, describing it as a "stunning
device to look at and hold" that was "refined and polished to a literal shine",
and adding that it "truly doesn't look
- British Raj British Raj The British Raj (; from "rāj", literally, "rule" in Hindustani)
was the rule by the British Crown in the Indian subcontinent between 1858 and
1947. The rule is also called Crown rule in India, or direct rule in India. The
region under British control was commonly called British India or simply India
in contemporaneous usage, and included areas directly administered by the United
Kingdom, which were collectively called British India, and those ruled by indigenous
rulers, but under British tutelage or paramountcy, and called the princely states.
The whole was also informally called the Indian Empire. As India,
- Samsung Galaxy S8 Samsung Galaxy S8 The Samsung Galaxy S8, Samsung Galaxy S8+
(shortened to S8 and S8+, respectively) and Samsung Galaxy S8 Active are Android
smartphones (with the S8+ being the phablet smartphone) produced by Samsung Electronics
as the eighth generation of the Samsung Galaxy S series. The S8 and S8+ were unveiled
on 29 March 2017 and directly succeeded the Samsung Galaxy S7 and S7 edge, with
a North American release on 21 April 2017 and international rollout throughout
April and May. The S8 Active was announced on 8 August 2017 and is exclusive to
certain U.S. cellular carriers. The S8
- source_sentence: Can Carrier-Mediated Delivery System Promote the Development of
Antisense Imaging?
sentences:
- 8-track tape month of the vinyl release. The eight-track format became by far
the most popular and offered the largest music library of all the tape systems.
Eight-track players were fitted as standard equipment in most Rolls-Royce and
Bentley cars of the period for sale in Great Britain and worldwide. Optional 8-track
players were available in many cars and trucks through the early 1980s. Ampex,
based in Elk Grove Village, Illinois, set up a European operation (Ampex Stereo
Tapes) in London, England, in 1970 under general manager Gerry Hall, with manufacturing
in Nivelles, Belgium, to promote 8-track product (as well as musicassettes)
- Heterotopic heart transplantation (HHTx) is a therapeutic option in heart failure
patients with fixed elevated pulmonary hypertension. However, survival is poorer
in HHTx recipients, and with improving results in continuous flow ventricular
assist devices (VADs), many patients can be bridged to allow normalization of
pulmonary artery pressures, making them orthotopic heart transplant (OHTx) candidates.
Thus, the aim of this study was to analyse the survival of our HHTx cohort and
compare them with our VAD bridge patients.A retrospective review of 342 heart
transplant patients (315 OHTx and 27 HHTx) performed at our institution over 15
years was compared with 124 bridge-to-transplant VAD patients over the same time
period, of whom 69 received an OHTx. Pulmonary artery pressures before and after
VAD implant were analysed. Survival was analysed using both univariate and multivariate
analyses.HHTx recipients were significantly older, and the donor allografts were
older, smaller and had longer ischaemic times than the OHTx cohort. Comparison
of the VAD types implanted (pulsatile vs continuous) showed significantly longer
time supported on the continuous devices with significantly fewer deaths than
the pulsatile devices. The continuous devices were successful in reducing pulmonary
artery pressures pretransplant. The HHTx cohort had a significantly poorer survival
than the OHTx cohort (P=0.002). Survival on a continuous device and then OHTx
was significantly better than either HHTx or pulsatile device support.
- We aimed to explore the feasibility of transfection methods for antisense imaging.Antisense
oligonucleotides (ASON) targeted to the mRNA of hTERT gene were synthesized and
labeled with Technetium-99m and fluorescein isothiocyanate (FITC), respectively.
Then, ASON was combined with transfection reagent Lipofectamine 2000 and Xfect(TM),
named Lipo-ASON and Xfect-ASON, respectively. After transfection, the labeled
ASON was characterized in hNPCs-G3 and hRPE cells. Reverse transcription polymerase
chain reaction (RT-PCR) and Western blotting were performed to assay the hTERT
mRNA and protein levels after hNPCs-G3 cells were incubated with Lipo-ASON, Xfect-ASON,
and naked ASON. In addition, Lipo-ASON, Xfect-ASON, and naked ASON were injected
into tumor-bearing mice, and the biodistribution in vivo was performed.The presence
of two transfection reagents significantly increased intracellular uptake of radiolabeled
ASON in both cell lines compared with naked ASON (p < 0.05). However, there was
no significant difference in cellular uptake rates of Lipo-ASON and Xfect-ASON
between hNPCs-G3 and hRPE cells. In comparison with naked ASON, the fluorescence
intensity was strongly enhanced after binding to transfection reagents. Furthermore,
the levels of hTERT mRNA and protein were significantly reduced in cells treated
with Lipo-ASON and Xfect-ASON (p < 0.05), but naked ASON had no significant effect
on hTERT expression level. The biodistribution study indicated that tumor radioactivity
uptake of radiolabeled ASON for naked ASON, Lipo-ASON, and Xfect-ASON group was
low and shown no significant difference in vivo.
- source_sentence: Does early second-trimester sonography predict adverse perinatal
outcomes in monochorionic diamniotic twin pregnancies?
sentences:
- Calcium and vitamin D are essential nutrients for bone metabolism Vitamin D can
either be obtained from dietary sources or cutaneous synthesis. The study was
conducted in subtropic weather; therefore, some might believe that the levels
of solar radiation would be sufficient in this area.To evaluate calcium and vitamin
D supplementation in postmenopausal women with osteoporosis living in a sunny
country.A 3-month controlled clinical trial with 64 postmenopausal women with
osteoporosis, mean age 62 + or - 8 years. They were randomly assigned to either
the supplement group, who received 1,200 mg of calcium carbonate and 400 IU (10
microg) of vitamin D(3,) or the control group. Dietary intake assessment was performed,
bone mineral density and body composition were measured, and biochemical markers
of bone metabolism were analyzed.Considering all participants at baseline, serum
vitamin D was under 75 nmol/l in 91.4% of the participants. The concentration
of serum 25(OH)D increased significantly (p = 0.023) after 3 months of supplementation
from 46.67 + or - 13.97 to 59.47 + or - 17.50 nmol/l. However, the dose given
was limited in effect, and 86.2% of the supplement group did not reach optimal
levels of 25(OH)D. Parathyroid hormone was elevated in 22.4% of the study group.
After the intervention period, mean parathyroid hormone tended to decrease in
the supplement group (p = 0.063).
- 'To determine whether intertwin discordant abdominal circumference, femur length,
head circumference, and estimated fetal weight sonographic measurements in early
second-trimester monochorionic diamniotic twins predict adverse obstetric and
neonatal outcomes.We conducted a multicenter retrospective cohort study involving
9 regional perinatal centers in the United States. We examined the records of
all monochorionic diamniotic twin pregnancies with two live fetuses at the 16-
to 18-week sonographic examination who had serial follow-up sonography until delivery.
The intertwin discordance in abdominal circumference, femur length, head circumference,
and estimated fetal weight was calculated as the difference between the two fetuses,
expressed as a percentage of the larger using the 16- to 18-week sonographic measurements.
An adverse composite obstetric outcome was defined as the occurrence of 1 or more
of the following in either fetus: intrauterine growth restriction, twin-twin transfusion
syndrome, intrauterine fetal death, abnormal growth discordance (≥20% difference),
and very preterm birth at or before 28 weeks. An adverse composite neonatal outcome
was defined as the occurrence of 1 or more of the following: respiratory distress
syndrome, any stage of intraventricular hemorrhage, 5-minute Apgar score less
than 7, necrotizing enterocolitis, culture-proven early-onset sepsis, and neonatal
death. Receiver operating characteristic and logistic regression-with-generalized
estimating equation analyses were constructed.Among the 177 monochorionic diamniotic
twin pregnancies analyzed, intertwin abdominal circumference and estimated fetal
weight discordances were only predictive of adverse composite obstetric outcomes
(areas under the curve, 79% and 80%, respectively). Receiver operating characteristic
curves showed that intertwin discordances in abdominal circumference, femur length,
head circumference, and estimated fetal weight were not acceptable predictors
of twin-twin transfusion syndrome or adverse neonatal outcomes.'
- We aimed to investigate our results of carotid endarterectomy operations in symptomatic
patients operated by using an intraluminal shunt and without use of an intraluminal
shunt in patients with contralateral carotid artery stenosis.We reviewed the results
of 144 carotid endarterectomy operations in patients with contralateral carotid
artery stenosis from January 2007 to December 2012. These patients were allocated
in 2 groups. Group 1 (n = 70) consisted of the patients operated by using an intraluminal
shunt and Group 2 (n = 74) consisted of the patients operated without use of an
intraluminal shunt. Postoperative neurologic complications were recorded.Temporary
neurologic impairment developed in 3 (4.3%) patients postoperatively in group
1 and in 2 (2.7%) patients postoperatively in group 2. This difference was not
statistically significant between groups (p = 0.675). None of the patients returned
to operation theatre due to excessive bleeding postoperatively. The stroke/death
rate was 0.7% in the study group.
model-index:
- name: all-MiniLM-L6-v2 trained on MEDI-MTEB triplets
  results:
  - task:
      type: triplet
      name: Triplet
    dataset:
      name: medi mteb dev
      type: medi-mteb-dev
    metrics:
    - type: cosine_accuracy
      value: 0.9152662981006076
      name: Cosine Accuracy
---
# all-MiniLM-L6-v2 trained on MEDI-MTEB triplets
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) on the MEDI-MTEB triplet datasets, listed in full under [Training Datasets](#training-datasets) below. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Datasets:**
- NQ
- pubmed
- specter_train_triples
- S2ORC_citations_abstracts
- fever
- gooaq_pairs
- codesearchnet
- wikihow
- WikiAnswers
- eli5_question_answer
- amazon-qa
- medmcqa
- zeroshot
- TriviaQA_pairs
- PAQ_pairs
- stackexchange_duplicate_questions_title-body_title-body
- trex
- flickr30k_captions
- hotpotqa
- task671_ambigqa_text_generation
- task061_ropes_answer_generation
- task285_imdb_answer_generation
- task905_hate_speech_offensive_classification
- task566_circa_classification
- task184_snli_entailment_to_neutral_text_modification
- task280_stereoset_classification_stereotype_type
- task1599_smcalflow_classification
- task1384_deal_or_no_dialog_classification
- task591_sciq_answer_generation
- task823_peixian-rtgender_sentiment_analysis
- task023_cosmosqa_question_generation
- task900_freebase_qa_category_classification
- task924_event2mind_word_generation
- task152_tomqa_find_location_easy_noise
- task1368_healthfact_sentence_generation
- task1661_super_glue_classification
- task1187_politifact_classification
- task1728_web_nlg_data_to_text
- task112_asset_simple_sentence_identification
- task1340_msr_text_compression_compression
- task072_abductivenli_answer_generation
- task1504_hatexplain_answer_generation
- task684_online_privacy_policy_text_information_type_generation
- task1290_xsum_summarization
- task075_squad1.1_answer_generation
- task1587_scifact_classification
- task384_socialiqa_question_classification
- task1555_scitail_answer_generation
- task1532_daily_dialog_emotion_classification
- task239_tweetqa_answer_generation
- task596_mocha_question_generation
- task1411_dart_subject_identification
- task1359_numer_sense_answer_generation
- task329_gap_classification
- task220_rocstories_title_classification
- task316_crows-pairs_classification_stereotype
- task495_semeval_headline_classification
- task1168_brown_coarse_pos_tagging
- task348_squad2.0_unanswerable_question_generation
- task049_multirc_questions_needed_to_answer
- task1534_daily_dialog_question_classification
- task322_jigsaw_classification_threat
- task295_semeval_2020_task4_commonsense_reasoning
- task186_snli_contradiction_to_entailment_text_modification
- task034_winogrande_question_modification_object
- task160_replace_letter_in_a_sentence
- task469_mrqa_answer_generation
- task105_story_cloze-rocstories_sentence_generation
- task649_race_blank_question_generation
- task1536_daily_dialog_happiness_classification
- task683_online_privacy_policy_text_purpose_answer_generation
- task024_cosmosqa_answer_generation
- task584_udeps_eng_fine_pos_tagging
- task066_timetravel_binary_consistency_classification
- task413_mickey_en_sentence_perturbation_generation
- task182_duorc_question_generation
- task028_drop_answer_generation
- task1601_webquestions_answer_generation
- task1295_adversarial_qa_question_answering
- task201_mnli_neutral_classification
- task038_qasc_combined_fact
- task293_storycommonsense_emotion_text_generation
- task572_recipe_nlg_text_generation
- task517_emo_classify_emotion_of_dialogue
- task382_hybridqa_answer_generation
- task176_break_decompose_questions
- task1291_multi_news_summarization
- task155_count_nouns_verbs
- task031_winogrande_question_generation_object
- task279_stereoset_classification_stereotype
- task1336_peixian_equity_evaluation_corpus_gender_classifier
- task508_scruples_dilemmas_more_ethical_isidentifiable
- task518_emo_different_dialogue_emotions
- task077_splash_explanation_to_sql
- task923_event2mind_classifier
- task470_mrqa_question_generation
- task638_multi_woz_classification
- task1412_web_questions_question_answering
- task847_pubmedqa_question_generation
- task678_ollie_actual_relationship_answer_generation
- task290_tellmewhy_question_answerability
- task575_air_dialogue_classification
- task189_snli_neutral_to_contradiction_text_modification
- task026_drop_question_generation
- task162_count_words_starting_with_letter
- task079_conala_concat_strings
- task610_conllpp_ner
- task046_miscellaneous_question_typing
- task197_mnli_domain_answer_generation
- task1325_qa_zre_question_generation_on_subject_relation
- task430_senteval_subject_count
- task672_nummersense
- task402_grailqa_paraphrase_generation
- task904_hate_speech_offensive_classification
- task192_hotpotqa_sentence_generation
- task069_abductivenli_classification
- task574_air_dialogue_sentence_generation
- task187_snli_entailment_to_contradiction_text_modification
- task749_glucose_reverse_cause_emotion_detection
- task1552_scitail_question_generation
- task750_aqua_multiple_choice_answering
- task327_jigsaw_classification_toxic
- task1502_hatexplain_classification
- task328_jigsaw_classification_insult
- task304_numeric_fused_head_resolution
- task1293_kilt_tasks_hotpotqa_question_answering
- task216_rocstories_correct_answer_generation
- task1326_qa_zre_question_generation_from_answer
- task1338_peixian_equity_evaluation_corpus_sentiment_classifier
- task1729_personachat_generate_next
- task1202_atomic_classification_xneed
- task400_paws_paraphrase_classification
- task502_scruples_anecdotes_whoiswrong_verification
- task088_identify_typo_verification
- task221_rocstories_two_choice_classification
- task200_mnli_entailment_classification
- task074_squad1.1_question_generation
- task581_socialiqa_question_generation
- task1186_nne_hrngo_classification
- task898_freebase_qa_answer_generation
- task1408_dart_similarity_classification
- task168_strategyqa_question_decomposition
- task1357_xlsum_summary_generation
- task390_torque_text_span_selection
- task165_mcscript_question_answering_commonsense
- task1533_daily_dialog_formal_classification
- task002_quoref_answer_generation
- task1297_qasc_question_answering
- task305_jeopardy_answer_generation_normal
- task029_winogrande_full_object
- task1327_qa_zre_answer_generation_from_question
- task326_jigsaw_classification_obscene
- task1542_every_ith_element_from_starting
- task570_recipe_nlg_ner_generation
- task1409_dart_text_generation
- task401_numeric_fused_head_reference
- task846_pubmedqa_classification
- task1712_poki_classification
- task344_hybridqa_answer_generation
- task875_emotion_classification
- task1214_atomic_classification_xwant
- task106_scruples_ethical_judgment
- task238_iirc_answer_from_passage_answer_generation
- task1391_winogrande_easy_answer_generation
- task195_sentiment140_classification
- task163_count_words_ending_with_letter
- task579_socialiqa_classification
- task569_recipe_nlg_text_generation
- task1602_webquestion_question_genreation
- task747_glucose_cause_emotion_detection
- task219_rocstories_title_answer_generation
- task178_quartz_question_answering
- task103_facts2story_long_text_generation
- task301_record_question_generation
- task1369_healthfact_sentence_generation
- task515_senteval_odd_word_out
- task496_semeval_answer_generation
- task1658_billsum_summarization
- task1204_atomic_classification_hinderedby
- task1392_superglue_multirc_answer_verification
- task306_jeopardy_answer_generation_double
- task1286_openbookqa_question_answering
- task159_check_frequency_of_words_in_sentence_pair
- task151_tomqa_find_location_easy_clean
- task323_jigsaw_classification_sexually_explicit
- task037_qasc_generate_related_fact
- task027_drop_answer_type_generation
- task1596_event2mind_text_generation_2
- task141_odd-man-out_classification_category
- task194_duorc_answer_generation
- task679_hope_edi_english_text_classification
- task246_dream_question_generation
- task1195_disflqa_disfluent_to_fluent_conversion
- task065_timetravel_consistent_sentence_classification
- task351_winomt_classification_gender_identifiability_anti
- task580_socialiqa_answer_generation
- task583_udeps_eng_coarse_pos_tagging
- task202_mnli_contradiction_classification
- task222_rocstories_two_chioce_slotting_classification
- task498_scruples_anecdotes_whoiswrong_classification
- task067_abductivenli_answer_generation
- task616_cola_classification
- task286_olid_offense_judgment
- task188_snli_neutral_to_entailment_text_modification
- task223_quartz_explanation_generation
- task820_protoqa_answer_generation
- task196_sentiment140_answer_generation
- task1678_mathqa_answer_selection
- task349_squad2.0_answerable_unanswerable_question_classification
- task154_tomqa_find_location_hard_noise
- task333_hateeval_classification_hate_en
- task235_iirc_question_from_subtext_answer_generation
- task1554_scitail_classification
- task210_logic2text_structured_text_generation
- task035_winogrande_question_modification_person
- task230_iirc_passage_classification
- task1356_xlsum_title_generation
- task1726_mathqa_correct_answer_generation
- task302_record_classification
- task380_boolq_yes_no_question
- task212_logic2text_classification
- task748_glucose_reverse_cause_event_detection
- task834_mathdataset_classification
- task350_winomt_classification_gender_identifiability_pro
- task191_hotpotqa_question_generation
- task236_iirc_question_from_passage_answer_generation
- task217_rocstories_ordering_answer_generation
- task568_circa_question_generation
- task614_glucose_cause_event_detection
- task361_spolin_yesand_prompt_response_classification
- task421_persent_sentence_sentiment_classification
- task203_mnli_sentence_generation
- task420_persent_document_sentiment_classification
- task153_tomqa_find_location_hard_clean
- task346_hybridqa_classification
- task1211_atomic_classification_hassubevent
- task360_spolin_yesand_response_generation
- task510_reddit_tifu_title_summarization
- task511_reddit_tifu_long_text_summarization
- task345_hybridqa_answer_generation
- task270_csrg_counterfactual_context_generation
- task307_jeopardy_answer_generation_final
- task001_quoref_question_generation
- task089_swap_words_verification
- task1196_atomic_classification_oeffect
- task080_piqa_answer_generation
- task1598_nyc_long_text_generation
- task240_tweetqa_question_generation
- task615_moviesqa_answer_generation
- task1347_glue_sts-b_similarity_classification
- task114_is_the_given_word_longest
- task292_storycommonsense_character_text_generation
- task115_help_advice_classification
- task431_senteval_object_count
- task1360_numer_sense_multiple_choice_qa_generation
- task177_para-nmt_paraphrasing
- task132_dais_text_modification
- task269_csrg_counterfactual_story_generation
- task233_iirc_link_exists_classification
- task161_count_words_containing_letter
- task1205_atomic_classification_isafter
- task571_recipe_nlg_ner_generation
- task1292_yelp_review_full_text_categorization
- task428_senteval_inversion
- task311_race_question_generation
- task429_senteval_tense
- task403_creak_commonsense_inference
- task929_products_reviews_classification
- task582_naturalquestion_answer_generation
- task237_iirc_answer_from_subtext_answer_generation
- task050_multirc_answerability
- task184_break_generate_question
- task669_ambigqa_answer_generation
- task169_strategyqa_sentence_generation
- task500_scruples_anecdotes_title_generation
- task241_tweetqa_classification
- task1345_glue_qqp_question_paraprashing
- task218_rocstories_swap_order_answer_generation
- task613_politifact_text_generation
- task1167_penn_treebank_coarse_pos_tagging
- task1422_mathqa_physics
- task247_dream_answer_generation
- task199_mnli_classification
- task164_mcscript_question_answering_text
- task1541_agnews_classification
- task516_senteval_conjoints_inversion
- task294_storycommonsense_motiv_text_generation
- task501_scruples_anecdotes_post_type_verification
- task213_rocstories_correct_ending_classification
- task821_protoqa_question_generation
- task493_review_polarity_classification
- task308_jeopardy_answer_generation_all
- task1595_event2mind_text_generation_1
- task040_qasc_question_generation
- task231_iirc_link_classification
- task1727_wiqa_what_is_the_effect
- task578_curiosity_dialogs_answer_generation
- task310_race_classification
- task309_race_answer_generation
- task379_agnews_topic_classification
- task030_winogrande_full_person
- task1540_parsed_pdfs_summarization
- task039_qasc_find_overlapping_words
- task1206_atomic_classification_isbefore
- task157_count_vowels_and_consonants
- task339_record_answer_generation
- task453_swag_answer_generation
- task848_pubmedqa_classification
- task673_google_wellformed_query_classification
- task676_ollie_relationship_answer_generation
- task268_casehold_legal_answer_generation
- task844_financial_phrasebank_classification
- task330_gap_answer_generation
- task595_mocha_answer_generation
- task1285_kpa_keypoint_matching
- task234_iirc_passage_line_answer_generation
- task494_review_polarity_answer_generation
- task670_ambigqa_question_generation
- task289_gigaword_summarization
- npr
- nli
- SimpleWiki
- amazon_review_2018
- ccnews_title_text
- agnews
- xsum
- msmarco
- yahoo_answers_title_answer
- squad_pairs
- wow
- mteb-amazon_counterfactual-avs_triplets
- mteb-amazon_massive_intent-avs_triplets
- mteb-amazon_massive_scenario-avs_triplets
- mteb-amazon_reviews_multi-avs_triplets
- mteb-banking77-avs_triplets
- mteb-emotion-avs_triplets
- mteb-imdb-avs_triplets
- mteb-mtop_domain-avs_triplets
- mteb-mtop_intent-avs_triplets
- mteb-toxic_conversations_50k-avs_triplets
- mteb-tweet_sentiment_extraction-avs_triplets
- covid-bing-query-gpt4-avs_triplets
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
  (1): RandomProjection({'in_features': 384, 'out_features': 768, 'seed': 42})
)
```
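The `RandomProjection` module is custom rather than one of the stock Sentence Transformers modules, so its exact implementation lives with this repository. Below is a minimal sketch of the presumed behavior, assuming a frozen, seed-42 Gaussian projection from the 384-dimensional MiniLM embedding to 768 dimensions; the actual module may differ:
```python
import torch
from torch import nn

class RandomProjection(nn.Module):
    """Hypothetical sketch of a fixed random-projection module.

    Assumes each 384-d sentence embedding is multiplied by a matrix
    drawn once from a seeded Gaussian; the module shipped with this
    repository may be implemented differently.
    """

    def __init__(self, in_features: int = 384, out_features: int = 768, seed: int = 42):
        super().__init__()
        generator = torch.Generator().manual_seed(seed)
        weight = torch.randn(in_features, out_features, generator=generator)
        # Registered as a buffer, not a Parameter: the projection stays fixed during training.
        self.register_buffer("weight", weight / out_features**0.5)

    def forward(self, features: dict) -> dict:
        # Sentence Transformers modules pass a feature dict between pipeline stages.
        features["sentence_embedding"] = features["sentence_embedding"] @ self.weight
        return features
```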
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("avsolatorio/all-MiniLM-L6-v2-MEDI-MTEB-triplet-randproj-64-final")
# Run inference
sentences = [
'Does early second-trimester sonography predict adverse perinatal outcomes in monochorionic diamniotic twin pregnancies?',
'To determine whether intertwin discordant abdominal circumference, femur length, head circumference, and estimated fetal weight sonographic measurements in early second-trimester monochorionic diamniotic twins predict adverse obstetric and neonatal outcomes.We conducted a multicenter retrospective cohort study involving 9 regional perinatal centers in the United States. We examined the records of all monochorionic diamniotic twin pregnancies with two live fetuses at the 16- to 18-week sonographic examination who had serial follow-up sonography until delivery. The intertwin discordance in abdominal circumference, femur length, head circumference, and estimated fetal weight was calculated as the difference between the two fetuses, expressed as a percentage of the larger using the 16- to 18-week sonographic measurements. An adverse composite obstetric outcome was defined as the occurrence of 1 or more of the following in either fetus: intrauterine growth restriction, twin-twin transfusion syndrome, intrauterine fetal death, abnormal growth discordance (≥20% difference), and very preterm birth at or before 28 weeks. An adverse composite neonatal outcome was defined as the occurrence of 1 or more of the following: respiratory distress syndrome, any stage of intraventricular hemorrhage, 5-minute Apgar score less than 7, necrotizing enterocolitis, culture-proven early-onset sepsis, and neonatal death. Receiver operating characteristic and logistic regression-with-generalized estimating equation analyses were constructed.Among the 177 monochorionic diamniotic twin pregnancies analyzed, intertwin abdominal circumference and estimated fetal weight discordances were only predictive of adverse composite obstetric outcomes (areas under the curve, 79% and 80%, respectively). Receiver operating characteristic curves showed that intertwin discordances in abdominal circumference, femur length, head circumference, and estimated fetal weight were not acceptable predictors of twin-twin transfusion syndrome or adverse neonatal outcomes.',
'Calcium and vitamin D are essential nutrients for bone metabolism Vitamin D can either be obtained from dietary sources or cutaneous synthesis. The study was conducted in subtropic weather; therefore, some might believe that the levels of solar radiation would be sufficient in this area.To evaluate calcium and vitamin D supplementation in postmenopausal women with osteoporosis living in a sunny country.A 3-month controlled clinical trial with 64 postmenopausal women with osteoporosis, mean age 62 + or - 8 years. They were randomly assigned to either the supplement group, who received 1,200 mg of calcium carbonate and 400 IU (10 microg) of vitamin D(3,) or the control group. Dietary intake assessment was performed, bone mineral density and body composition were measured, and biochemical markers of bone metabolism were analyzed.Considering all participants at baseline, serum vitamin D was under 75 nmol/l in 91.4% of the participants. The concentration of serum 25(OH)D increased significantly (p = 0.023) after 3 months of supplementation from 46.67 + or - 13.97 to 59.47 + or - 17.50 nmol/l. However, the dose given was limited in effect, and 86.2% of the supplement group did not reach optimal levels of 25(OH)D. Parathyroid hormone was elevated in 22.4% of the study group. After the intervention period, mean parathyroid hormone tended to decrease in the supplement group (p = 0.063).',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
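Note that inputs are truncated to the 256-token maximum sequence length, so for long passages such as the abstracts above only the first 256 tokens contribute to the embedding.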
## Evaluation
### Metrics
#### Triplet
* Dataset: `medi-mteb-dev`
* Evaluated with [TripletEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| **cosine_accuracy** | **0.9153** |
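The `medi-mteb-dev` triplets themselves are not bundled with the model, but the same evaluator can be run on any anchor/positive/negative lists. A minimal sketch, with toy triplets standing in for the dev split:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("avsolatorio/all-MiniLM-L6-v2-MEDI-MTEB-triplet-randproj-64-final")

# Toy triplets standing in for the medi-mteb-dev split.
anchors = ["when did the samsung galaxy s8 come out"]
positives = ["The S8 and S8+ were unveiled on 29 March 2017 ..."]
negatives = ["The British Raj was the rule by the British Crown ..."]

evaluator = TripletEvaluator(anchors, positives, negatives, name="medi-mteb-dev")
# Accuracy = fraction of triplets where the anchor embeds closer to the positive
# than to the negative.
print(evaluator(model))
```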
## Training Details
### Training Datasets
#### NQ
* Dataset: NQ
* Size: 49,548 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
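All of the training datasets below share this loss configuration. As a minimal sketch of how it is constructed with the Sentence Transformers API (anything beyond the two listed parameters is an assumption about the training script):
```python
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
# scale=20.0 and cosine similarity match the parameters listed above; the loss
# treats the in-batch positives/negatives of other anchors as additional
# negatives alongside each triplet's explicit negative.
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)
```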
#### pubmed
* Dataset: pubmed
* Size: 29,716 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### specter_train_triples
* Dataset: specter_train_triples
* Size: 49,548 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### S2ORC_citations_abstracts
* Dataset: S2ORC_citations_abstracts
* Size: 99,032 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### fever
* Dataset: fever
* Size: 74,258 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### gooaq_pairs
* Dataset: gooaq_pairs
* Size: 24,774 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### codesearchnet
* Dataset: codesearchnet
* Size: 14,890 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### wikihow
* Dataset: wikihow
* Size: 5,006 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### WikiAnswers
* Dataset: WikiAnswers
* Size: 24,774 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### eli5_question_answer
* Dataset: eli5_question_answer
* Size: 24,774 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### amazon-qa
* Dataset: amazon-qa
* Size: 99,032 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### medmcqa
* Dataset: medmcqa
* Size: 29,716 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### zeroshot
* Dataset: zeroshot
* Size: 14,890 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | MultipleNegativesRankingLoss
](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### TriviaQA_pairs
* Dataset: TriviaQA_pairs
* Size: 49,548 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### PAQ_pairs
* Dataset: PAQ_pairs
* Size: 24,774 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### stackexchange_duplicate_questions_title-body_title-body
* Dataset: stackexchange_duplicate_questions_title-body_title-body
* Size: 24,774 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### trex
* Dataset: trex
* Size: 29,716 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### flickr30k_captions
* Dataset: flickr30k_captions
* Size: 24,774 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### hotpotqa
* Dataset: hotpotqa
* Size: 39,600 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task671_ambigqa_text_generation
* Dataset: task671_ambigqa_text_generation
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task061_ropes_answer_generation
* Dataset: task061_ropes_answer_generation
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task285_imdb_answer_generation
* Dataset: task285_imdb_answer_generation
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task905_hate_speech_offensive_classification
* Dataset: task905_hate_speech_offensive_classification
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task566_circa_classification
* Dataset: task566_circa_classification
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task184_snli_entailment_to_neutral_text_modification
* Dataset: task184_snli_entailment_to_neutral_text_modification
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task280_stereoset_classification_stereotype_type
* Dataset: task280_stereoset_classification_stereotype_type
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1599_smcalflow_classification
* Dataset: task1599_smcalflow_classification
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1384_deal_or_no_dialog_classification
* Dataset: task1384_deal_or_no_dialog_classification
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task591_sciq_answer_generation
* Dataset: task591_sciq_answer_generation
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task823_peixian-rtgender_sentiment_analysis
* Dataset: task823_peixian-rtgender_sentiment_analysis
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task023_cosmosqa_question_generation
* Dataset: task023_cosmosqa_question_generation
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task900_freebase_qa_category_classification
* Dataset: task900_freebase_qa_category_classification
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task924_event2mind_word_generation
* Dataset: task924_event2mind_word_generation
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task152_tomqa_find_location_easy_noise
* Dataset: task152_tomqa_find_location_easy_noise
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1368_healthfact_sentence_generation
* Dataset: task1368_healthfact_sentence_generation
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1661_super_glue_classification
* Dataset: task1661_super_glue_classification
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1187_politifact_classification
* Dataset: task1187_politifact_classification
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1728_web_nlg_data_to_text
* Dataset: task1728_web_nlg_data_to_text
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task112_asset_simple_sentence_identification
* Dataset: task112_asset_simple_sentence_identification
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1340_msr_text_compression_compression
* Dataset: task1340_msr_text_compression_compression
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task072_abductivenli_answer_generation
* Dataset: task072_abductivenli_answer_generation
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1504_hatexplain_answer_generation
* Dataset: task1504_hatexplain_answer_generation
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task684_online_privacy_policy_text_information_type_generation
* Dataset: task684_online_privacy_policy_text_information_type_generation
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1290_xsum_summarization
* Dataset: task1290_xsum_summarization
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task075_squad1.1_answer_generation
* Dataset: task075_squad1.1_answer_generation
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1587_scifact_classification
* Dataset: task1587_scifact_classification
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task384_socialiqa_question_classification
* Dataset: task384_socialiqa_question_classification
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1555_scitail_answer_generation
* Dataset: task1555_scitail_answer_generation
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1532_daily_dialog_emotion_classification
* Dataset: task1532_daily_dialog_emotion_classification
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task239_tweetqa_answer_generation
* Dataset: task239_tweetqa_answer_generation
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task596_mocha_question_generation
* Dataset: task596_mocha_question_generation
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1411_dart_subject_identification
* Dataset: task1411_dart_subject_identification
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1359_numer_sense_answer_generation
* Dataset: task1359_numer_sense_answer_generation
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task329_gap_classification
* Dataset: task329_gap_classification
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task220_rocstories_title_classification
* Dataset: task220_rocstories_title_classification
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task316_crows-pairs_classification_stereotype
* Dataset: task316_crows-pairs_classification_stereotype
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task495_semeval_headline_classification
* Dataset: task495_semeval_headline_classification
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1168_brown_coarse_pos_tagging
* Dataset: task1168_brown_coarse_pos_tagging
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task348_squad2.0_unanswerable_question_generation
* Dataset: task348_squad2.0_unanswerable_question_generation
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task049_multirc_questions_needed_to_answer
* Dataset: task049_multirc_questions_needed_to_answer
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1534_daily_dialog_question_classification
* Dataset: task1534_daily_dialog_question_classification
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task322_jigsaw_classification_threat
* Dataset: task322_jigsaw_classification_threat
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task295_semeval_2020_task4_commonsense_reasoning
* Dataset: task295_semeval_2020_task4_commonsense_reasoning
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task186_snli_contradiction_to_entailment_text_modification
* Dataset: task186_snli_contradiction_to_entailment_text_modification
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task034_winogrande_question_modification_object
* Dataset: task034_winogrande_question_modification_object
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task160_replace_letter_in_a_sentence
* Dataset: task160_replace_letter_in_a_sentence
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task469_mrqa_answer_generation
* Dataset: task469_mrqa_answer_generation
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task105_story_cloze-rocstories_sentence_generation
* Dataset: task105_story_cloze-rocstories_sentence_generation
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task649_race_blank_question_generation
* Dataset: task649_race_blank_question_generation
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1536_daily_dialog_happiness_classification
* Dataset: task1536_daily_dialog_happiness_classification
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task683_online_privacy_policy_text_purpose_answer_generation
* Dataset: task683_online_privacy_policy_text_purpose_answer_generation
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task024_cosmosqa_answer_generation
* Dataset: task024_cosmosqa_answer_generation
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task584_udeps_eng_fine_pos_tagging
* Dataset: task584_udeps_eng_fine_pos_tagging
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task066_timetravel_binary_consistency_classification
* Dataset: task066_timetravel_binary_consistency_classification
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task413_mickey_en_sentence_perturbation_generation
* Dataset: task413_mickey_en_sentence_perturbation_generation
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task182_duorc_question_generation
* Dataset: task182_duorc_question_generation
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task028_drop_answer_generation
* Dataset: task028_drop_answer_generation
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1601_webquestions_answer_generation
* Dataset: task1601_webquestions_answer_generation
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1295_adversarial_qa_question_answering
* Dataset: task1295_adversarial_qa_question_answering
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task201_mnli_neutral_classification
* Dataset: task201_mnli_neutral_classification
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task038_qasc_combined_fact
* Dataset: task038_qasc_combined_fact
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task293_storycommonsense_emotion_text_generation
* Dataset: task293_storycommonsense_emotion_text_generation
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task572_recipe_nlg_text_generation
* Dataset: task572_recipe_nlg_text_generation
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task517_emo_classify_emotion_of_dialogue
* Dataset: task517_emo_classify_emotion_of_dialogue
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task382_hybridqa_answer_generation
* Dataset: task382_hybridqa_answer_generation
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task176_break_decompose_questions
* Dataset: task176_break_decompose_questions
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1291_multi_news_summarization
* Dataset: task1291_multi_news_summarization
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task155_count_nouns_verbs
* Dataset: task155_count_nouns_verbs
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task031_winogrande_question_generation_object
* Dataset: task031_winogrande_question_generation_object
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task279_stereoset_classification_stereotype
* Dataset: task279_stereoset_classification_stereotype
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1336_peixian_equity_evaluation_corpus_gender_classifier
* Dataset: task1336_peixian_equity_evaluation_corpus_gender_classifier
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task508_scruples_dilemmas_more_ethical_isidentifiable
* Dataset: task508_scruples_dilemmas_more_ethical_isidentifiable
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task518_emo_different_dialogue_emotions
* Dataset: task518_emo_different_dialogue_emotions
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task077_splash_explanation_to_sql
* Dataset: task077_splash_explanation_to_sql
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task923_event2mind_classifier
* Dataset: task923_event2mind_classifier
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task470_mrqa_question_generation
* Dataset: task470_mrqa_question_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task638_multi_woz_classification
* Dataset: task638_multi_woz_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1412_web_questions_question_answering
* Dataset: task1412_web_questions_question_answering
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task847_pubmedqa_question_generation
* Dataset: task847_pubmedqa_question_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task678_ollie_actual_relationship_answer_generation
* Dataset: task678_ollie_actual_relationship_answer_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task290_tellmewhy_question_answerability
* Dataset: task290_tellmewhy_question_answerability
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task575_air_dialogue_classification
* Dataset: task575_air_dialogue_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task189_snli_neutral_to_contradiction_text_modification
* Dataset: task189_snli_neutral_to_contradiction_text_modification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task026_drop_question_generation
* Dataset: task026_drop_question_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task162_count_words_starting_with_letter
* Dataset: task162_count_words_starting_with_letter
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task079_conala_concat_strings
* Dataset: task079_conala_concat_strings
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task610_conllpp_ner
* Dataset: task610_conllpp_ner
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task046_miscellaneous_question_typing
* Dataset: task046_miscellaneous_question_typing
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task197_mnli_domain_answer_generation
* Dataset: task197_mnli_domain_answer_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1325_qa_zre_question_generation_on_subject_relation
* Dataset: task1325_qa_zre_question_generation_on_subject_relation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task430_senteval_subject_count
* Dataset: task430_senteval_subject_count
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task672_nummersense
* Dataset: task672_nummersense
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task402_grailqa_paraphrase_generation
* Dataset: task402_grailqa_paraphrase_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task904_hate_speech_offensive_classification
* Dataset: task904_hate_speech_offensive_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task192_hotpotqa_sentence_generation
* Dataset: task192_hotpotqa_sentence_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task069_abductivenli_classification
* Dataset: task069_abductivenli_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task574_air_dialogue_sentence_generation
* Dataset: task574_air_dialogue_sentence_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task187_snli_entailment_to_contradiction_text_modification
* Dataset: task187_snli_entailment_to_contradiction_text_modification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task749_glucose_reverse_cause_emotion_detection
* Dataset: task749_glucose_reverse_cause_emotion_detection
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1552_scitail_question_generation
* Dataset: task1552_scitail_question_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task750_aqua_multiple_choice_answering
* Dataset: task750_aqua_multiple_choice_answering
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task327_jigsaw_classification_toxic
* Dataset: task327_jigsaw_classification_toxic
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1502_hatexplain_classification
* Dataset: task1502_hatexplain_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task328_jigsaw_classification_insult
* Dataset: task328_jigsaw_classification_insult
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task304_numeric_fused_head_resolution
* Dataset: task304_numeric_fused_head_resolution
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1293_kilt_tasks_hotpotqa_question_answering
* Dataset: task1293_kilt_tasks_hotpotqa_question_answering
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task216_rocstories_correct_answer_generation
* Dataset: task216_rocstories_correct_answer_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1326_qa_zre_question_generation_from_answer
* Dataset: task1326_qa_zre_question_generation_from_answer
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1338_peixian_equity_evaluation_corpus_sentiment_classifier
* Dataset: task1338_peixian_equity_evaluation_corpus_sentiment_classifier
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1729_personachat_generate_next
* Dataset: task1729_personachat_generate_next
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1202_atomic_classification_xneed
* Dataset: task1202_atomic_classification_xneed
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task400_paws_paraphrase_classification
* Dataset: task400_paws_paraphrase_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task502_scruples_anecdotes_whoiswrong_verification
* Dataset: task502_scruples_anecdotes_whoiswrong_verification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task088_identify_typo_verification
* Dataset: task088_identify_typo_verification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task221_rocstories_two_choice_classification
* Dataset: task221_rocstories_two_choice_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task200_mnli_entailment_classification
* Dataset: task200_mnli_entailment_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task074_squad1.1_question_generation
* Dataset: task074_squad1.1_question_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task581_socialiqa_question_generation
* Dataset: task581_socialiqa_question_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1186_nne_hrngo_classification
* Dataset: task1186_nne_hrngo_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task898_freebase_qa_answer_generation
* Dataset: task898_freebase_qa_answer_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1408_dart_similarity_classification
* Dataset: task1408_dart_similarity_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task168_strategyqa_question_decomposition
* Dataset: task168_strategyqa_question_decomposition
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1357_xlsum_summary_generation
* Dataset: task1357_xlsum_summary_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task390_torque_text_span_selection
* Dataset: task390_torque_text_span_selection
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task165_mcscript_question_answering_commonsense
* Dataset: task165_mcscript_question_answering_commonsense
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1533_daily_dialog_formal_classification
* Dataset: task1533_daily_dialog_formal_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task002_quoref_answer_generation
* Dataset: task002_quoref_answer_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1297_qasc_question_answering
* Dataset: task1297_qasc_question_answering
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task305_jeopardy_answer_generation_normal
* Dataset: task305_jeopardy_answer_generation_normal
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task029_winogrande_full_object
* Dataset: task029_winogrande_full_object
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1327_qa_zre_answer_generation_from_question
* Dataset: task1327_qa_zre_answer_generation_from_question
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task326_jigsaw_classification_obscene
* Dataset: task326_jigsaw_classification_obscene
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1542_every_ith_element_from_starting
* Dataset: task1542_every_ith_element_from_starting
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task570_recipe_nlg_ner_generation
* Dataset: task570_recipe_nlg_ner_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1409_dart_text_generation
* Dataset: task1409_dart_text_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task401_numeric_fused_head_reference
* Dataset: task401_numeric_fused_head_reference
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task846_pubmedqa_classification
* Dataset: task846_pubmedqa_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1712_poki_classification
* Dataset: task1712_poki_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task344_hybridqa_answer_generation
* Dataset: task344_hybridqa_answer_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task875_emotion_classification
* Dataset: task875_emotion_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1214_atomic_classification_xwant
* Dataset: task1214_atomic_classification_xwant
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task106_scruples_ethical_judgment
* Dataset: task106_scruples_ethical_judgment
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task238_iirc_answer_from_passage_answer_generation
* Dataset: task238_iirc_answer_from_passage_answer_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1391_winogrande_easy_answer_generation
* Dataset: task1391_winogrande_easy_answer_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task195_sentiment140_classification
* Dataset: task195_sentiment140_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task163_count_words_ending_with_letter
* Dataset: task163_count_words_ending_with_letter
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task579_socialiqa_classification
* Dataset: task579_socialiqa_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task569_recipe_nlg_text_generation
* Dataset: task569_recipe_nlg_text_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1602_webquestion_question_genreation
* Dataset: task1602_webquestion_question_genreation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task747_glucose_cause_emotion_detection
* Dataset: task747_glucose_cause_emotion_detection
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task219_rocstories_title_answer_generation
* Dataset: task219_rocstories_title_answer_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task178_quartz_question_answering
* Dataset: task178_quartz_question_answering
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task103_facts2story_long_text_generation
* Dataset: task103_facts2story_long_text_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task301_record_question_generation
* Dataset: task301_record_question_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1369_healthfact_sentence_generation
* Dataset: task1369_healthfact_sentence_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task515_senteval_odd_word_out
* Dataset: task515_senteval_odd_word_out
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task496_semeval_answer_generation
* Dataset: task496_semeval_answer_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1658_billsum_summarization
* Dataset: task1658_billsum_summarization
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1204_atomic_classification_hinderedby
* Dataset: task1204_atomic_classification_hinderedby
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1392_superglue_multirc_answer_verification
* Dataset: task1392_superglue_multirc_answer_verification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task306_jeopardy_answer_generation_double
* Dataset: task306_jeopardy_answer_generation_double
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1286_openbookqa_question_answering
* Dataset: task1286_openbookqa_question_answering
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task159_check_frequency_of_words_in_sentence_pair
* Dataset: task159_check_frequency_of_words_in_sentence_pair
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task151_tomqa_find_location_easy_clean
* Dataset: task151_tomqa_find_location_easy_clean
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task323_jigsaw_classification_sexually_explicit
* Dataset: task323_jigsaw_classification_sexually_explicit
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task037_qasc_generate_related_fact
* Dataset: task037_qasc_generate_related_fact
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task027_drop_answer_type_generation
* Dataset: task027_drop_answer_type_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1596_event2mind_text_generation_2
* Dataset: task1596_event2mind_text_generation_2
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task141_odd-man-out_classification_category
* Dataset: task141_odd-man-out_classification_category
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task194_duorc_answer_generation
* Dataset: task194_duorc_answer_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task679_hope_edi_english_text_classification
* Dataset: task679_hope_edi_english_text_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task246_dream_question_generation
* Dataset: task246_dream_question_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1195_disflqa_disfluent_to_fluent_conversion
* Dataset: task1195_disflqa_disfluent_to_fluent_conversion
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task065_timetravel_consistent_sentence_classification
* Dataset: task065_timetravel_consistent_sentence_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task351_winomt_classification_gender_identifiability_anti
* Dataset: task351_winomt_classification_gender_identifiability_anti
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task580_socialiqa_answer_generation
* Dataset: task580_socialiqa_answer_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task583_udeps_eng_coarse_pos_tagging
* Dataset: task583_udeps_eng_coarse_pos_tagging
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task202_mnli_contradiction_classification
* Dataset: task202_mnli_contradiction_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task222_rocstories_two_chioce_slotting_classification
* Dataset: task222_rocstories_two_chioce_slotting_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task498_scruples_anecdotes_whoiswrong_classification
* Dataset: task498_scruples_anecdotes_whoiswrong_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task067_abductivenli_answer_generation
* Dataset: task067_abductivenli_answer_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task616_cola_classification
* Dataset: task616_cola_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task286_olid_offense_judgment
* Dataset: task286_olid_offense_judgment
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task188_snli_neutral_to_entailment_text_modification
* Dataset: task188_snli_neutral_to_entailment_text_modification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task223_quartz_explanation_generation
* Dataset: task223_quartz_explanation_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task820_protoqa_answer_generation
* Dataset: task820_protoqa_answer_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task196_sentiment140_answer_generation
* Dataset: task196_sentiment140_answer_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1678_mathqa_answer_selection
* Dataset: task1678_mathqa_answer_selection
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task349_squad2.0_answerable_unanswerable_question_classification
* Dataset: task349_squad2.0_answerable_unanswerable_question_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task154_tomqa_find_location_hard_noise
* Dataset: task154_tomqa_find_location_hard_noise
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task333_hateeval_classification_hate_en
* Dataset: task333_hateeval_classification_hate_en
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task235_iirc_question_from_subtext_answer_generation
* Dataset: task235_iirc_question_from_subtext_answer_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1554_scitail_classification
* Dataset: task1554_scitail_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task210_logic2text_structured_text_generation
* Dataset: task210_logic2text_structured_text_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task035_winogrande_question_modification_person
* Dataset: task035_winogrande_question_modification_person
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task230_iirc_passage_classification
* Dataset: task230_iirc_passage_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1356_xlsum_title_generation
* Dataset: task1356_xlsum_title_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1726_mathqa_correct_answer_generation
* Dataset: task1726_mathqa_correct_answer_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task302_record_classification
* Dataset: task302_record_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task380_boolq_yes_no_question
* Dataset: task380_boolq_yes_no_question
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task212_logic2text_classification
* Dataset: task212_logic2text_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task748_glucose_reverse_cause_event_detection
* Dataset: task748_glucose_reverse_cause_event_detection
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task834_mathdataset_classification
* Dataset: task834_mathdataset_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task350_winomt_classification_gender_identifiability_pro
* Dataset: task350_winomt_classification_gender_identifiability_pro
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task191_hotpotqa_question_generation
* Dataset: task191_hotpotqa_question_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task236_iirc_question_from_passage_answer_generation
* Dataset: task236_iirc_question_from_passage_answer_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task217_rocstories_ordering_answer_generation
* Dataset: task217_rocstories_ordering_answer_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task568_circa_question_generation
* Dataset: task568_circa_question_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task614_glucose_cause_event_detection
* Dataset: task614_glucose_cause_event_detection
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task361_spolin_yesand_prompt_response_classification
* Dataset: task361_spolin_yesand_prompt_response_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task421_persent_sentence_sentiment_classification
* Dataset: task421_persent_sentence_sentiment_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task203_mnli_sentence_generation
* Dataset: task203_mnli_sentence_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task420_persent_document_sentiment_classification
* Dataset: task420_persent_document_sentiment_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task153_tomqa_find_location_hard_clean
* Dataset: task153_tomqa_find_location_hard_clean
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task346_hybridqa_classification
* Dataset: task346_hybridqa_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1211_atomic_classification_hassubevent
* Dataset: task1211_atomic_classification_hassubevent
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task360_spolin_yesand_response_generation
* Dataset: task360_spolin_yesand_response_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task510_reddit_tifu_title_summarization
* Dataset: task510_reddit_tifu_title_summarization
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task511_reddit_tifu_long_text_summarization
* Dataset: task511_reddit_tifu_long_text_summarization
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task345_hybridqa_answer_generation
* Dataset: task345_hybridqa_answer_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task270_csrg_counterfactual_context_generation
* Dataset: task270_csrg_counterfactual_context_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task307_jeopardy_answer_generation_final
* Dataset: task307_jeopardy_answer_generation_final
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task001_quoref_question_generation
* Dataset: task001_quoref_question_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
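
Each of these splits consists of (anchor, positive, negative) text triplets. A hedged sketch of how such a split could be assembled with the `datasets` library; the strings are invented placeholders, not real samples:

```python
# Invented placeholder triplets with the column layout described above.
from datasets import Dataset

triplets = Dataset.from_dict({
    "anchor":   ["question: where is the Eiffel Tower?"],
    "positive": ["The Eiffel Tower is in Paris, France."],
    "negative": ["The Colosseum is in Rome, Italy."],
})
print(triplets.column_names)  # ['anchor', 'positive', 'negative']
```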
#### task089_swap_words_verification
* Dataset: task089_swap_words_verification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1196_atomic_classification_oeffect
* Dataset: task1196_atomic_classification_oeffect
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task080_piqa_answer_generation
* Dataset: task080_piqa_answer_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1598_nyc_long_text_generation
* Dataset: task1598_nyc_long_text_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task240_tweetqa_question_generation
* Dataset: task240_tweetqa_question_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task615_moviesqa_answer_generation
* Dataset: task615_moviesqa_answer_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1347_glue_sts-b_similarity_classification
* Dataset: task1347_glue_sts-b_similarity_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task114_is_the_given_word_longest
* Dataset: task114_is_the_given_word_longest
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task292_storycommonsense_character_text_generation
* Dataset: task292_storycommonsense_character_text_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task115_help_advice_classification
* Dataset: task115_help_advice_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task431_senteval_object_count
* Dataset: task431_senteval_object_count
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1360_numer_sense_multiple_choice_qa_generation
* Dataset: task1360_numer_sense_multiple_choice_qa_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task177_para-nmt_paraphrasing
* Dataset: task177_para-nmt_paraphrasing
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task132_dais_text_modification
* Dataset: task132_dais_text_modification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task269_csrg_counterfactual_story_generation
* Dataset: task269_csrg_counterfactual_story_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task233_iirc_link_exists_classification
* Dataset: task233_iirc_link_exists_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task161_count_words_containing_letter
* Dataset: task161_count_words_containing_letter
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1205_atomic_classification_isafter
* Dataset: task1205_atomic_classification_isafter
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task571_recipe_nlg_ner_generation
* Dataset: task571_recipe_nlg_ner_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1292_yelp_review_full_text_categorization
* Dataset: task1292_yelp_review_full_text_categorization
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task428_senteval_inversion
* Dataset: task428_senteval_inversion
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task311_race_question_generation
* Dataset: task311_race_question_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task429_senteval_tense
* Dataset: task429_senteval_tense
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task403_creak_commonsense_inference
* Dataset: task403_creak_commonsense_inference
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task929_products_reviews_classification
* Dataset: task929_products_reviews_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task582_naturalquestion_answer_generation
* Dataset: task582_naturalquestion_answer_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task237_iirc_answer_from_subtext_answer_generation
* Dataset: task237_iirc_answer_from_subtext_answer_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
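
For intuition, the core computation behind MultipleNegativesRankingLoss can be sketched in plain PyTorch. This is an illustration of the mechanics under the parameters above, not the library's exact implementation:

```python
# Illustrative sketch of in-batch-negatives ranking loss (not the
# sentence-transformers source). Explicit negatives, when present,
# are treated as extra candidates.
import torch
import torch.nn.functional as F

def mnrl(anchor_emb, positive_emb, negative_emb=None, scale=20.0):
    a = F.normalize(anchor_emb, dim=-1)
    candidates = F.normalize(positive_emb, dim=-1)
    if negative_emb is not None:
        candidates = torch.cat([candidates, F.normalize(negative_emb, dim=-1)])
    # Scaled cosine similarity of every anchor against every candidate;
    # the true pair for anchor i sits at column i.
    logits = (a @ candidates.T) * scale
    labels = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, labels)
```

With scale=20.0, cosine similarities in [-1, 1] are stretched to logits in [-20, 20], which sharpens the softmax over the candidates.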
#### task050_multirc_answerability
* Dataset: task050_multirc_answerability
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task184_break_generate_question
* Dataset: task184_break_generate_question
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task669_ambigqa_answer_generation
* Dataset: task669_ambigqa_answer_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task169_strategyqa_sentence_generation
* Dataset: task169_strategyqa_sentence_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task500_scruples_anecdotes_title_generation
* Dataset: task500_scruples_anecdotes_title_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task241_tweetqa_classification
* Dataset: task241_tweetqa_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1345_glue_qqp_question_paraprashing
* Dataset: task1345_glue_qqp_question_paraprashing
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task218_rocstories_swap_order_answer_generation
* Dataset: task218_rocstories_swap_order_answer_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task613_politifact_text_generation
* Dataset: task613_politifact_text_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1167_penn_treebank_coarse_pos_tagging
* Dataset: task1167_penn_treebank_coarse_pos_tagging
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1422_mathqa_physics
* Dataset: task1422_mathqa_physics
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task247_dream_answer_generation
* Dataset: task247_dream_answer_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task199_mnli_classification
* Dataset: task199_mnli_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task164_mcscript_question_answering_text
* Dataset: task164_mcscript_question_answering_text
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1541_agnews_classification
* Dataset: task1541_agnews_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task516_senteval_conjoints_inversion
* Dataset: task516_senteval_conjoints_inversion
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task294_storycommonsense_motiv_text_generation
* Dataset: task294_storycommonsense_motiv_text_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task501_scruples_anecdotes_post_type_verification
* Dataset: task501_scruples_anecdotes_post_type_verification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task213_rocstories_correct_ending_classification
* Dataset: task213_rocstories_correct_ending_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
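
Because every split shares one loss, the many named datasets can be wired through a single trainer. A rough sketch assuming sentence-transformers v3+; the tiny split below is an invented stand-in for the real 634-sample tasks:

```python
# Rough sketch (assumes sentence-transformers >= 3.0). The toy dataset
# below is a placeholder, not real task data.
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

toy = Dataset.from_dict({
    "anchor":   ["question a", "question b"],
    "positive": ["answer a", "answer b"],
    "negative": ["answer b", "answer a"],
})

trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset={
        "task213_rocstories_correct_ending_classification": toy,
        "task821_protoqa_question_generation": toy,
    },
    loss=loss,  # the same loss object is applied to every named split
)
trainer.train()
```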
#### task821_protoqa_question_generation
* Dataset: task821_protoqa_question_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task493_review_polarity_classification
* Dataset: task493_review_polarity_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task308_jeopardy_answer_generation_all
* Dataset: task308_jeopardy_answer_generation_all
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1595_event2mind_text_generation_1
* Dataset: task1595_event2mind_text_generation_1
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task040_qasc_question_generation
* Dataset: task040_qasc_question_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task231_iirc_link_classification
* Dataset: task231_iirc_link_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1727_wiqa_what_is_the_effect
* Dataset: task1727_wiqa_what_is_the_effect
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task578_curiosity_dialogs_answer_generation
* Dataset: task578_curiosity_dialogs_answer_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task310_race_classification
* Dataset: task310_race_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task309_race_answer_generation
* Dataset: task309_race_answer_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task379_agnews_topic_classification
* Dataset: task379_agnews_topic_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task030_winogrande_full_person
* Dataset: task030_winogrande_full_person
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1540_parsed_pdfs_summarization
* Dataset: task1540_parsed_pdfs_summarization
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task039_qasc_find_overlapping_words
* Dataset: task039_qasc_find_overlapping_words
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1206_atomic_classification_isbefore
* Dataset: task1206_atomic_classification_isbefore
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task157_count_vowels_and_consonants
* Dataset: task157_count_vowels_and_consonants
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task339_record_answer_generation
* Dataset: task339_record_answer_generation
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task453_swag_answer_generation
* Dataset: task453_swag_answer_generation
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task848_pubmedqa_classification
* Dataset: task848_pubmedqa_classification
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task673_google_wellformed_query_classification
* Dataset: task673_google_wellformed_query_classification
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task676_ollie_relationship_answer_generation
* Dataset: task676_ollie_relationship_answer_generation
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task268_casehold_legal_answer_generation
* Dataset: task268_casehold_legal_answer_generation
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task844_financial_phrasebank_classification
* Dataset: task844_financial_phrasebank_classification
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task330_gap_answer_generation
* Dataset: task330_gap_answer_generation
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task595_mocha_answer_generation
* Dataset: task595_mocha_answer_generation
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1285_kpa_keypoint_matching
* Dataset: task1285_kpa_keypoint_matching
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task234_iirc_passage_line_answer_generation
* Dataset: task234_iirc_passage_line_answer_generation
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task494_review_polarity_answer_generation
* Dataset: task494_review_polarity_answer_generation
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task670_ambigqa_question_generation
* Dataset: task670_ambigqa_question_generation
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task289_gigaword_summarization
* Dataset: task289_gigaword_summarization
* Size: 634 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 634 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### npr
* Dataset: npr
* Size: 24,774 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### nli
* Dataset: nli
* Size: 49,548 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### SimpleWiki
* Dataset: SimpleWiki
* Size: 5,006 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### amazon_review_2018
* Dataset: amazon_review_2018
* Size: 99,032 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### ccnews_title_text
* Dataset: ccnews_title_text
* Size: 24,774 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### agnews
* Dataset: agnews
* Size: 44,606 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### xsum
* Dataset: xsum
* Size: 9,948 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### msmarco
* Dataset: msmarco
* Size: 173,290 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### yahoo_answers_title_answer
* Dataset: yahoo_answers_title_answer
* Size: 24,774 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### squad_pairs
* Dataset: squad_pairs
* Size: 24,774 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### wow
* Dataset: wow
* Size: 29,716 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-amazon_counterfactual-avs_triplets
* Dataset: mteb-amazon_counterfactual-avs_triplets
* Size: 3,991 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-amazon_massive_intent-avs_triplets
* Dataset: mteb-amazon_massive_intent-avs_triplets
* Size: 11,405 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-amazon_massive_scenario-avs_triplets
* Dataset: mteb-amazon_massive_scenario-avs_triplets
* Size: 11,405 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-amazon_reviews_multi-avs_triplets
* Dataset: mteb-amazon_reviews_multi-avs_triplets
* Size: 198,000 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-banking77-avs_triplets
* Dataset: mteb-banking77-avs_triplets
* Size: 9,947 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-emotion-avs_triplets
* Dataset: mteb-emotion-avs_triplets
* Size: 15,840 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-imdb-avs_triplets
* Dataset: mteb-imdb-avs_triplets
* Size: 24,647 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-mtop_domain-avs_triplets
* Dataset: mteb-mtop_domain-avs_triplets
* Size: 15,523 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-mtop_intent-avs_triplets
* Dataset: mteb-mtop_intent-avs_triplets
* Size: 15,523 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-toxic_conversations_50k-avs_triplets
* Dataset: mteb-toxic_conversations_50k-avs_triplets
* Size: 49,421 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-tweet_sentiment_extraction-avs_triplets
* Dataset: mteb-tweet_sentiment_extraction-avs_triplets
* Size: 27,245 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### covid-bing-query-gpt4-avs_triplets
* Dataset: covid-bing-query-gpt4-avs_triplets
* Size: 4,942 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
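All of the named training datasets above share the same anchor/positive/negative schema, so they can be passed to the trainer as one dictionary of named datasets. A hedged sketch under that assumption (the tiny inline dataset is an illustrative stand-in, not one of the actual data sources):

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    losses,
)

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

# Illustrative stand-in: each dataset has anchor/positive/negative columns.
nli = Dataset.from_dict({
    "anchor": ["A man is eating food."],
    "positive": ["A man is eating something."],
    "negative": ["A man is riding a horse."],
})

trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset={"nli": nli},  # dict keys name the datasets, as listed above
    loss=loss,
)
trainer.train()
```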
### Evaluation Dataset
#### Unnamed Dataset
* Size: 18,269 evaluation samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:-----|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
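The evaluation split uses the same triplet layout, which is what TripletEvaluator (the source of the cosine_accuracy metric) consumes. A minimal sketch with made-up triplets:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Made-up triplets; the real evaluation set holds 18,269 samples.
evaluator = TripletEvaluator(
    anchors=["How do I reset my password?"],
    positives=["Steps for changing a forgotten account password."],
    negatives=["Opening hours for the downtown branch."],
    name="triplet-dev",
)
results = evaluator(model)  # fraction of triplets where the anchor is closer to the positive
```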
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `learning_rate`: 2e-05
- `num_train_epochs`: 10
- `warmup_ratio`: 0.1
- `fp16`: True
- `gradient_checkpointing`: True
- `batch_sampler`: no_duplicates
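Assembled into code, these non-default values would look roughly as follows (`output_dir` is an assumption, since the card does not report it):

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # assumption: not reported in this card
    eval_strategy="steps",
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    learning_rate=2e-5,
    num_train_epochs=10,
    warmup_ratio=0.1,
    fp16=True,
    gradient_checkpointing=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```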
#### All Hyperparameters