--- base_model: sentence-transformers/all-MiniLM-L6-v2 language: - en library_name: sentence-transformers license: apache-2.0 metrics: - cosine_accuracy pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:1821475 - loss:MultipleNegativesRankingLoss widget: - source_sentence: Estimating User Location in Social Media with Stacked Denoising Auto-encoders sentences: - 'Domain Adaptation for Large-Scale Sentiment Classification: A Deep Learning Approach' - Conventional sphygmomanometers are being replaced by automated devices; can they be used to accurately calculate ABPI?Thirty-six volunteers (72 legs) attending a vascular clinic had their ankle, brachial blood pressure and ABPIs calculated using each of these 3 methods. (1) Conventional aneuroid BP cuff with hand held doppler. (2) OMRON HEM 705CP portable automated BP monitor. (3) The hand held doppler to determine systolic BP measured by the OMRON.Conventional doppler readings for brachial and ankle pressures were generally higher than those obtained digitally by less than 3 mmHg but this was not statistically significant. This did not translate into a significant difference in ABPIs obtained using all 3 techniques; the correlation coefficient of conventional ABPI with automated ABPI (method 2) was 0.746, this was improved to 0.899 using method 3. The OMRON failed to detect a signal in 16 of the 72 legs, 11 of these legs had ABPIs<0.66. - Deep neural networks based user interface detection for mobile applications using symbol marker - source_sentence: 'Central mesenteric lymph node BER-Ep4+ cells in colorectal cancer: challenge to sentinel node concept?' sentences: - The Lovely Bones (film) the film had many positive messages about life." The Lovely Bones (film) The Lovely Bones is a 2009 supernatural drama film directed by Peter Jackson, and starring Mark Wahlberg, Rachel Weisz, Susan Sarandon, Stanley Tucci, Michael Imperioli, and Saoirse Ronan. The screenplay by Fran Walsh, Philippa Boyens, and Jackson was based on Alice Sebold’s award-winning and bestselling 2002 novel of the same name. It follows a girl who is murdered and watches over her family from the in-between, and is torn between seeking vengeance on her killer and allowing her family to heal. An international co-production between the United States, - Postoperative intracranial hematoma (POIH) is a frequent sequela secondary to cranial surgery. The role of routine early postoperative computed tomography (CT) scanning in the detection of POIH remains controversial. The study was aimed at analyzing the effect of routine early CT scanning after craniotomy for the early detection of POIH.Routine early postoperative CT scanning was performed at our institute, and a retrospective study was conducted to analyze the data. POIH was defined as an intracranial hematoma requiring surgical management.A total of 1,148 patients undergoing craniotomy were included in this study; 28 of these patients developed POIH. The majority of POIH cases (15/28, 54 %) were detected during the first 6 h following craniotomy. A routine CT scan was performed on all included patients but two; however, CT scans detected only 16 POIH cases. During the first 6 h, the rate at which CT scans detected POIH was 1.9 % (15/786); subsequently, the rate decreased to only 0.3 % (1/360; p < 0.05, compared with the rate during the first 6 h). 
Among patients without clinical manifestations, the rate at which the routine post-craniotomy CT scan detected POIH was only 0.7 % (5/721) (p < 0.05, compared with the incidence of POIH). Finally, among high-risk POIH patients, the POIH-positive rate of routine CT scanning was elevated. - The role of sentinel lymph nodes in colorectal cancer remains unclear.Cryosections from central para-aortic mesenterial lymph nodes were stained using mAb BER-Ep4. Overall survival and distant recurrence were calculated using Kaplan-Meier plots.All patients (n = 48) were free of distant metastases and curatively resected (R0). 23 pN0, 13 pN1 and 12 pN2 stages were found. 21/48 patients (44%) showed BER-Ep4+ cells in their central lymph nodes (7/23 pN0, 8/13 pN1, 6/12 pN2). In 6/23 pN0 patients, BER-Ep4+ cells were also found in locoregional nodes (p = 0.03, Fisher's exact test). pN status predicted overall survival (p = 0.006, Kaplan-Meier curve, log-rank test). An impact was exerted by central mesenteric BER-Ep4+ cells on overall survival (p = 0.009 in pN0 patients, p = 0.07 for all pN) and distant recurrence-free survival (p = 0.001 in pN0 patients, p = 0.007 for all pN). Multivariate analysis showed an independent prognostic effect on overall survival in pN0 patients (p = 0.022). - source_sentence: when did the samsung galaxy s8 come out sentences: - Samsung Galaxy S8 support for Daydream. The Galaxy S8 was one of the first Android phones to support ARCore, Google's augmented reality engine. In February 2018, the official Android 8.0 Oreo update began rolling out to the Samsung Galaxy S8, Samsung Galaxy S8+, and Samsung Galaxy S8 Active. Besides the phone's protective case reportedly cracking and peeling away in under 2 months of use, Dan Seifert of "The Verge" praised the design of the Galaxy S8, describing it as a "stunning device to look at and hold" that was "refined and polished to a literal shine", and adding that it "truly doesn't look - British Raj British Raj The British Raj (; from "rāj", literally, "rule" in Hindustani) was the rule by the British Crown in the Indian subcontinent between 1858 and 1947. The rule is also called Crown rule in India, or direct rule in India. The region under British control was commonly called British India or simply India in contemporaneous usage, and included areas directly administered by the United Kingdom, which were collectively called British India, and those ruled by indigenous rulers, but under British tutelage or paramountcy, and called the princely states. The whole was also informally called the Indian Empire. As India, - Samsung Galaxy S8 Samsung Galaxy S8 The Samsung Galaxy S8, Samsung Galaxy S8+ (shortened to S8 and S8+, respectively) and Samsung Galaxy S8 Active are Android smartphones (with the S8+ being the phablet smartphone) produced by Samsung Electronics as the eighth generation of the Samsung Galaxy S series. The S8 and S8+ were unveiled on 29 March 2017 and directly succeeded the Samsung Galaxy S7 and S7 edge, with a North American release on 21 April 2017 and international rollout throughout April and May. The S8 Active was announced on 8 August 2017 and is exclusive to certain U.S. cellular carriers. The S8 - source_sentence: Can Carrier-Mediated Delivery System Promote the Development of Antisense Imaging? sentences: - 8-track tape month of the vinyl release. The eight-track format became by far the most popular and offered the largest music library of all the tape systems. 
Eight-track players were fitted as standard equipment in most Rolls-Royce and Bentley cars of the period for sale in Great Britain and worldwide. Optional 8-track players were available in many cars and trucks through the early 1980s. Ampex, based in Elk Grove Village, Illinois, set up a European operation (Ampex Stereo Tapes) in London, England, in 1970 under general manager Gerry Hall, with manufacturing in Nivelles, Belgium, to promote 8-track product (as well as musicassettes) - Heterotopic heart transplantation (HHTx) is a therapeutic option in heart failure patients with fixed elevated pulmonary hypertension. However, survival is poorer in HHTx recipients, and with improving results in continuous flow ventricular assist devices (VADs), many patients can be bridged to allow normalization of pulmonary artery pressures, making them orthotopic heart transplant (OHTx) candidates. Thus, the aim of this study was to analyse the survival of our HHTx cohort and compare them with our VAD bridge patients.A retrospective review of 342 heart transplant patients (315 OHTx and 27 HHTx) performed at our institution over 15 years was compared with 124 bridge-to-transplant VAD patients over the same time period, of whom 69 received an OHTx. Pulmonary artery pressures before and after VAD implant were analysed. Survival was analysed using both univariate and multivariate analyses.HHTx recipients were significantly older, and the donor allografts were older, smaller and had longer ischaemic times than the OHTx cohort. Comparison of the VAD types implanted (pulsatile vs continuous) showed significantly longer time supported on the continuous devices with significantly fewer deaths than the pulsatile devices. The continuous devices were successful in reducing pulmonary artery pressures pretransplant. The HHTx cohort had a significantly poorer survival than the OHTx cohort (P=0.002). Survival on a continuous device and then OHTx was significantly better than either HHTx or pulsatile device support. - We aimed to explore the feasibility of transfection methods for antisense imaging.Antisense oligonucleotides (ASON) targeted to the mRNA of hTERT gene were synthesized and labeled with Technetium-99m and fluorescein isothiocyanate (FITC), respectively. Then, ASON was combined with transfection reagent Lipofectamine 2000 and Xfect(TM), named Lipo-ASON and Xfect-ASON, respectively. After transfection, the labeled ASON was characterized in hNPCs-G3 and hRPE cells. Reverse transcription polymerase chain reaction (RT-PCR) and Western blotting were performed to assay the hTERT mRNA and protein levels after hNPCs-G3 cells were incubated with Lipo-ASON, Xfect-ASON, and naked ASON. In addition, Lipo-ASON, Xfect-ASON, and naked ASON were injected into tumor-bearing mice, and the biodistribution in vivo was performed.The presence of two transfection reagents significantly increased intracellular uptake of radiolabeled ASON in both cell lines compared with naked ASON (p < 0.05). However, there was no significant difference in cellular uptake rates of Lipo-ASON and Xfect-ASON between hNPCs-G3 and hRPE cells. In comparison with naked ASON, the fluorescence intensity was strongly enhanced after binding to transfection reagents. Furthermore, the levels of hTERT mRNA and protein were significantly reduced in cells treated with Lipo-ASON and Xfect-ASON (p < 0.05), but naked ASON had no significant effect on hTERT expression level. 
The biodistribution study indicated that tumor radioactivity uptake of radiolabeled ASON for naked ASON, Lipo-ASON, and Xfect-ASON group was low and shown no significant difference in vivo. - source_sentence: Does early second-trimester sonography predict adverse perinatal outcomes in monochorionic diamniotic twin pregnancies? sentences: - Calcium and vitamin D are essential nutrients for bone metabolism Vitamin D can either be obtained from dietary sources or cutaneous synthesis. The study was conducted in subtropic weather; therefore, some might believe that the levels of solar radiation would be sufficient in this area.To evaluate calcium and vitamin D supplementation in postmenopausal women with osteoporosis living in a sunny country.A 3-month controlled clinical trial with 64 postmenopausal women with osteoporosis, mean age 62 + or - 8 years. They were randomly assigned to either the supplement group, who received 1,200 mg of calcium carbonate and 400 IU (10 microg) of vitamin D(3,) or the control group. Dietary intake assessment was performed, bone mineral density and body composition were measured, and biochemical markers of bone metabolism were analyzed.Considering all participants at baseline, serum vitamin D was under 75 nmol/l in 91.4% of the participants. The concentration of serum 25(OH)D increased significantly (p = 0.023) after 3 months of supplementation from 46.67 + or - 13.97 to 59.47 + or - 17.50 nmol/l. However, the dose given was limited in effect, and 86.2% of the supplement group did not reach optimal levels of 25(OH)D. Parathyroid hormone was elevated in 22.4% of the study group. After the intervention period, mean parathyroid hormone tended to decrease in the supplement group (p = 0.063). - 'To determine whether intertwin discordant abdominal circumference, femur length, head circumference, and estimated fetal weight sonographic measurements in early second-trimester monochorionic diamniotic twins predict adverse obstetric and neonatal outcomes.We conducted a multicenter retrospective cohort study involving 9 regional perinatal centers in the United States. We examined the records of all monochorionic diamniotic twin pregnancies with two live fetuses at the 16- to 18-week sonographic examination who had serial follow-up sonography until delivery. The intertwin discordance in abdominal circumference, femur length, head circumference, and estimated fetal weight was calculated as the difference between the two fetuses, expressed as a percentage of the larger using the 16- to 18-week sonographic measurements. An adverse composite obstetric outcome was defined as the occurrence of 1 or more of the following in either fetus: intrauterine growth restriction, twin-twin transfusion syndrome, intrauterine fetal death, abnormal growth discordance (≥20% difference), and very preterm birth at or before 28 weeks. An adverse composite neonatal outcome was defined as the occurrence of 1 or more of the following: respiratory distress syndrome, any stage of intraventricular hemorrhage, 5-minute Apgar score less than 7, necrotizing enterocolitis, culture-proven early-onset sepsis, and neonatal death. Receiver operating characteristic and logistic regression-with-generalized estimating equation analyses were constructed.Among the 177 monochorionic diamniotic twin pregnancies analyzed, intertwin abdominal circumference and estimated fetal weight discordances were only predictive of adverse composite obstetric outcomes (areas under the curve, 79% and 80%, respectively). 
Receiver operating characteristic curves showed that intertwin discordances in abdominal circumference, femur length, head circumference, and estimated fetal weight were not acceptable predictors of twin-twin transfusion syndrome or adverse neonatal outcomes.' - We aimed to investigate our results of carotid endarterectomy operations in symptomatic patients operated by using an intraluminal shunt and without use of an intraluminal shunt in patients with contralateral carotid artery stenosis.We reviewed the results of 144 carotid endarterectomy operations in patients with contralateral carotid artery stenosis from January 2007 to December 2012. These patients were allocated in 2 groups. Group 1 (n = 70) consisted of the patients operated by using an intraluminal shunt and Group 2 (n = 74) consisted of the patients operated without use of an intraluminal shunt. Postoperative neurologic complications were recorded.Temporary neurologic impairment developed in 3 (4.3%) patients postoperatively in group 1 and in 2 (2.7%) patients postoperatively in group 2. This difference was not statistically significant between groups (p = 0.675). None of the patients returned to operation theatre due to excessive bleeding postoperatively. The stroke/death rate was 0.7% in the study group. model-index: - name: all-MiniLM-L6-v2 trained on MEDI-MTEB triplets results: - task: type: triplet name: Triplet dataset: name: medi mteb dev type: medi-mteb-dev metrics: - type: cosine_accuracy value: 0.9152662981006076 name: Cosine Accuracy --- # all-MiniLM-L6-v2 trained on MEDI-MTEB triplets This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) on the MEDI-MTEB collection of triplet datasets; the full list, from NQ and pubmed through the Natural Instructions tasks and the MTEB-derived avs_triplets sets, appears under Training Datasets below.
It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) - **Maximum Sequence Length:** 256 tokens - **Output Dimensionality:** 768 dimensions - **Similarity Function:** Cosine Similarity - **Training Datasets:** - NQ - pubmed - specter_train_triples - S2ORC_citations_abstracts - fever - gooaq_pairs - codesearchnet - wikihow - WikiAnswers - eli5_question_answer - amazon-qa - medmcqa - zeroshot - TriviaQA_pairs - PAQ_pairs - stackexchange_duplicate_questions_title-body_title-body - trex - flickr30k_captions - hotpotqa - task671_ambigqa_text_generation - task061_ropes_answer_generation - task285_imdb_answer_generation - task905_hate_speech_offensive_classification - task566_circa_classification - task184_snli_entailment_to_neutral_text_modification - task280_stereoset_classification_stereotype_type - task1599_smcalflow_classification - task1384_deal_or_no_dialog_classification - task591_sciq_answer_generation - task823_peixian-rtgender_sentiment_analysis - task023_cosmosqa_question_generation - task900_freebase_qa_category_classification - task924_event2mind_word_generation - task152_tomqa_find_location_easy_noise - task1368_healthfact_sentence_generation - task1661_super_glue_classification - task1187_politifact_classification - task1728_web_nlg_data_to_text - task112_asset_simple_sentence_identification - task1340_msr_text_compression_compression - task072_abductivenli_answer_generation - task1504_hatexplain_answer_generation - task684_online_privacy_policy_text_information_type_generation - task1290_xsum_summarization - task075_squad1.1_answer_generation - task1587_scifact_classification - task384_socialiqa_question_classification - task1555_scitail_answer_generation - task1532_daily_dialog_emotion_classification - task239_tweetqa_answer_generation - task596_mocha_question_generation - task1411_dart_subject_identification - task1359_numer_sense_answer_generation - task329_gap_classification - task220_rocstories_title_classification - task316_crows-pairs_classification_stereotype - task495_semeval_headline_classification - task1168_brown_coarse_pos_tagging - task348_squad2.0_unanswerable_question_generation - task049_multirc_questions_needed_to_answer - task1534_daily_dialog_question_classification - task322_jigsaw_classification_threat - task295_semeval_2020_task4_commonsense_reasoning - task186_snli_contradiction_to_entailment_text_modification - task034_winogrande_question_modification_object - task160_replace_letter_in_a_sentence - task469_mrqa_answer_generation - task105_story_cloze-rocstories_sentence_generation - task649_race_blank_question_generation - task1536_daily_dialog_happiness_classification -
task683_online_privacy_policy_text_purpose_answer_generation - task024_cosmosqa_answer_generation - task584_udeps_eng_fine_pos_tagging - task066_timetravel_binary_consistency_classification - task413_mickey_en_sentence_perturbation_generation - task182_duorc_question_generation - task028_drop_answer_generation - task1601_webquestions_answer_generation - task1295_adversarial_qa_question_answering - task201_mnli_neutral_classification - task038_qasc_combined_fact - task293_storycommonsense_emotion_text_generation - task572_recipe_nlg_text_generation - task517_emo_classify_emotion_of_dialogue - task382_hybridqa_answer_generation - task176_break_decompose_questions - task1291_multi_news_summarization - task155_count_nouns_verbs - task031_winogrande_question_generation_object - task279_stereoset_classification_stereotype - task1336_peixian_equity_evaluation_corpus_gender_classifier - task508_scruples_dilemmas_more_ethical_isidentifiable - task518_emo_different_dialogue_emotions - task077_splash_explanation_to_sql - task923_event2mind_classifier - task470_mrqa_question_generation - task638_multi_woz_classification - task1412_web_questions_question_answering - task847_pubmedqa_question_generation - task678_ollie_actual_relationship_answer_generation - task290_tellmewhy_question_answerability - task575_air_dialogue_classification - task189_snli_neutral_to_contradiction_text_modification - task026_drop_question_generation - task162_count_words_starting_with_letter - task079_conala_concat_strings - task610_conllpp_ner - task046_miscellaneous_question_typing - task197_mnli_domain_answer_generation - task1325_qa_zre_question_generation_on_subject_relation - task430_senteval_subject_count - task672_nummersense - task402_grailqa_paraphrase_generation - task904_hate_speech_offensive_classification - task192_hotpotqa_sentence_generation - task069_abductivenli_classification - task574_air_dialogue_sentence_generation - task187_snli_entailment_to_contradiction_text_modification - task749_glucose_reverse_cause_emotion_detection - task1552_scitail_question_generation - task750_aqua_multiple_choice_answering - task327_jigsaw_classification_toxic - task1502_hatexplain_classification - task328_jigsaw_classification_insult - task304_numeric_fused_head_resolution - task1293_kilt_tasks_hotpotqa_question_answering - task216_rocstories_correct_answer_generation - task1326_qa_zre_question_generation_from_answer - task1338_peixian_equity_evaluation_corpus_sentiment_classifier - task1729_personachat_generate_next - task1202_atomic_classification_xneed - task400_paws_paraphrase_classification - task502_scruples_anecdotes_whoiswrong_verification - task088_identify_typo_verification - task221_rocstories_two_choice_classification - task200_mnli_entailment_classification - task074_squad1.1_question_generation - task581_socialiqa_question_generation - task1186_nne_hrngo_classification - task898_freebase_qa_answer_generation - task1408_dart_similarity_classification - task168_strategyqa_question_decomposition - task1357_xlsum_summary_generation - task390_torque_text_span_selection - task165_mcscript_question_answering_commonsense - task1533_daily_dialog_formal_classification - task002_quoref_answer_generation - task1297_qasc_question_answering - task305_jeopardy_answer_generation_normal - task029_winogrande_full_object - task1327_qa_zre_answer_generation_from_question - task326_jigsaw_classification_obscene - task1542_every_ith_element_from_starting - task570_recipe_nlg_ner_generation - task1409_dart_text_generation - 
task401_numeric_fused_head_reference - task846_pubmedqa_classification - task1712_poki_classification - task344_hybridqa_answer_generation - task875_emotion_classification - task1214_atomic_classification_xwant - task106_scruples_ethical_judgment - task238_iirc_answer_from_passage_answer_generation - task1391_winogrande_easy_answer_generation - task195_sentiment140_classification - task163_count_words_ending_with_letter - task579_socialiqa_classification - task569_recipe_nlg_text_generation - task1602_webquestion_question_genreation - task747_glucose_cause_emotion_detection - task219_rocstories_title_answer_generation - task178_quartz_question_answering - task103_facts2story_long_text_generation - task301_record_question_generation - task1369_healthfact_sentence_generation - task515_senteval_odd_word_out - task496_semeval_answer_generation - task1658_billsum_summarization - task1204_atomic_classification_hinderedby - task1392_superglue_multirc_answer_verification - task306_jeopardy_answer_generation_double - task1286_openbookqa_question_answering - task159_check_frequency_of_words_in_sentence_pair - task151_tomqa_find_location_easy_clean - task323_jigsaw_classification_sexually_explicit - task037_qasc_generate_related_fact - task027_drop_answer_type_generation - task1596_event2mind_text_generation_2 - task141_odd-man-out_classification_category - task194_duorc_answer_generation - task679_hope_edi_english_text_classification - task246_dream_question_generation - task1195_disflqa_disfluent_to_fluent_conversion - task065_timetravel_consistent_sentence_classification - task351_winomt_classification_gender_identifiability_anti - task580_socialiqa_answer_generation - task583_udeps_eng_coarse_pos_tagging - task202_mnli_contradiction_classification - task222_rocstories_two_chioce_slotting_classification - task498_scruples_anecdotes_whoiswrong_classification - task067_abductivenli_answer_generation - task616_cola_classification - task286_olid_offense_judgment - task188_snli_neutral_to_entailment_text_modification - task223_quartz_explanation_generation - task820_protoqa_answer_generation - task196_sentiment140_answer_generation - task1678_mathqa_answer_selection - task349_squad2.0_answerable_unanswerable_question_classification - task154_tomqa_find_location_hard_noise - task333_hateeval_classification_hate_en - task235_iirc_question_from_subtext_answer_generation - task1554_scitail_classification - task210_logic2text_structured_text_generation - task035_winogrande_question_modification_person - task230_iirc_passage_classification - task1356_xlsum_title_generation - task1726_mathqa_correct_answer_generation - task302_record_classification - task380_boolq_yes_no_question - task212_logic2text_classification - task748_glucose_reverse_cause_event_detection - task834_mathdataset_classification - task350_winomt_classification_gender_identifiability_pro - task191_hotpotqa_question_generation - task236_iirc_question_from_passage_answer_generation - task217_rocstories_ordering_answer_generation - task568_circa_question_generation - task614_glucose_cause_event_detection - task361_spolin_yesand_prompt_response_classification - task421_persent_sentence_sentiment_classification - task203_mnli_sentence_generation - task420_persent_document_sentiment_classification - task153_tomqa_find_location_hard_clean - task346_hybridqa_classification - task1211_atomic_classification_hassubevent - task360_spolin_yesand_response_generation - task510_reddit_tifu_title_summarization - task511_reddit_tifu_long_text_summarization - 
task345_hybridqa_answer_generation - task270_csrg_counterfactual_context_generation - task307_jeopardy_answer_generation_final - task001_quoref_question_generation - task089_swap_words_verification - task1196_atomic_classification_oeffect - task080_piqa_answer_generation - task1598_nyc_long_text_generation - task240_tweetqa_question_generation - task615_moviesqa_answer_generation - task1347_glue_sts-b_similarity_classification - task114_is_the_given_word_longest - task292_storycommonsense_character_text_generation - task115_help_advice_classification - task431_senteval_object_count - task1360_numer_sense_multiple_choice_qa_generation - task177_para-nmt_paraphrasing - task132_dais_text_modification - task269_csrg_counterfactual_story_generation - task233_iirc_link_exists_classification - task161_count_words_containing_letter - task1205_atomic_classification_isafter - task571_recipe_nlg_ner_generation - task1292_yelp_review_full_text_categorization - task428_senteval_inversion - task311_race_question_generation - task429_senteval_tense - task403_creak_commonsense_inference - task929_products_reviews_classification - task582_naturalquestion_answer_generation - task237_iirc_answer_from_subtext_answer_generation - task050_multirc_answerability - task184_break_generate_question - task669_ambigqa_answer_generation - task169_strategyqa_sentence_generation - task500_scruples_anecdotes_title_generation - task241_tweetqa_classification - task1345_glue_qqp_question_paraprashing - task218_rocstories_swap_order_answer_generation - task613_politifact_text_generation - task1167_penn_treebank_coarse_pos_tagging - task1422_mathqa_physics - task247_dream_answer_generation - task199_mnli_classification - task164_mcscript_question_answering_text - task1541_agnews_classification - task516_senteval_conjoints_inversion - task294_storycommonsense_motiv_text_generation - task501_scruples_anecdotes_post_type_verification - task213_rocstories_correct_ending_classification - task821_protoqa_question_generation - task493_review_polarity_classification - task308_jeopardy_answer_generation_all - task1595_event2mind_text_generation_1 - task040_qasc_question_generation - task231_iirc_link_classification - task1727_wiqa_what_is_the_effect - task578_curiosity_dialogs_answer_generation - task310_race_classification - task309_race_answer_generation - task379_agnews_topic_classification - task030_winogrande_full_person - task1540_parsed_pdfs_summarization - task039_qasc_find_overlapping_words - task1206_atomic_classification_isbefore - task157_count_vowels_and_consonants - task339_record_answer_generation - task453_swag_answer_generation - task848_pubmedqa_classification - task673_google_wellformed_query_classification - task676_ollie_relationship_answer_generation - task268_casehold_legal_answer_generation - task844_financial_phrasebank_classification - task330_gap_answer_generation - task595_mocha_answer_generation - task1285_kpa_keypoint_matching - task234_iirc_passage_line_answer_generation - task494_review_polarity_answer_generation - task670_ambigqa_question_generation - task289_gigaword_summarization - npr - nli - SimpleWiki - amazon_review_2018 - ccnews_title_text - agnews - xsum - msmarco - yahoo_answers_title_answer - squad_pairs - wow - mteb-amazon_counterfactual-avs_triplets - mteb-amazon_massive_intent-avs_triplets - mteb-amazon_massive_scenario-avs_triplets - mteb-amazon_reviews_multi-avs_triplets - mteb-banking77-avs_triplets - mteb-emotion-avs_triplets - mteb-imdb-avs_triplets - mteb-mtop_domain-avs_triplets - 
mteb-mtop_intent-avs_triplets - mteb-toxic_conversations_50k-avs_triplets - mteb-tweet_sentiment_extraction-avs_triplets - covid-bing-query-gpt4-avs_triplets - **Language:** en - **License:** apache-2.0 ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): RandomProjection({'in_features': 384, 'out_features': 768, 'seed': 42}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("avsolatorio/all-MiniLM-L6-v2-MEDI-MTEB-triplet-randproj-64-final") # Run inference sentences = [ 'Does early second-trimester sonography predict adverse perinatal outcomes in monochorionic diamniotic twin pregnancies?', 'To determine whether intertwin discordant abdominal circumference, femur length, head circumference, and estimated fetal weight sonographic measurements in early second-trimester monochorionic diamniotic twins predict adverse obstetric and neonatal outcomes.We conducted a multicenter retrospective cohort study involving 9 regional perinatal centers in the United States. We examined the records of all monochorionic diamniotic twin pregnancies with two live fetuses at the 16- to 18-week sonographic examination who had serial follow-up sonography until delivery. The intertwin discordance in abdominal circumference, femur length, head circumference, and estimated fetal weight was calculated as the difference between the two fetuses, expressed as a percentage of the larger using the 16- to 18-week sonographic measurements. An adverse composite obstetric outcome was defined as the occurrence of 1 or more of the following in either fetus: intrauterine growth restriction, twin-twin transfusion syndrome, intrauterine fetal death, abnormal growth discordance (≥20% difference), and very preterm birth at or before 28 weeks. An adverse composite neonatal outcome was defined as the occurrence of 1 or more of the following: respiratory distress syndrome, any stage of intraventricular hemorrhage, 5-minute Apgar score less than 7, necrotizing enterocolitis, culture-proven early-onset sepsis, and neonatal death. Receiver operating characteristic and logistic regression-with-generalized estimating equation analyses were constructed.Among the 177 monochorionic diamniotic twin pregnancies analyzed, intertwin abdominal circumference and estimated fetal weight discordances were only predictive of adverse composite obstetric outcomes (areas under the curve, 79% and 80%, respectively). Receiver operating characteristic curves showed that intertwin discordances in abdominal circumference, femur length, head circumference, and estimated fetal weight were not acceptable predictors of twin-twin transfusion syndrome or adverse neonatal outcomes.', 'Calcium and vitamin D are essential nutrients for bone metabolism Vitamin D can either be obtained from dietary sources or cutaneous synthesis. 
The study was conducted in subtropic weather; therefore, some might believe that the levels of solar radiation would be sufficient in this area.To evaluate calcium and vitamin D supplementation in postmenopausal women with osteoporosis living in a sunny country.A 3-month controlled clinical trial with 64 postmenopausal women with osteoporosis, mean age 62 + or - 8 years. They were randomly assigned to either the supplement group, who received 1,200 mg of calcium carbonate and 400 IU (10 microg) of vitamin D(3,) or the control group. Dietary intake assessment was performed, bone mineral density and body composition were measured, and biochemical markers of bone metabolism were analyzed.Considering all participants at baseline, serum vitamin D was under 75 nmol/l in 91.4% of the participants. The concentration of serum 25(OH)D increased significantly (p = 0.023) after 3 months of supplementation from 46.67 + or - 13.97 to 59.47 + or - 17.50 nmol/l. However, the dose given was limited in effect, and 86.2% of the supplement group did not reach optimal levels of 25(OH)D. Parathyroid hormone was elevated in 22.4% of the study group. After the intervention period, mean parathyroid hormone tended to decrease in the supplement group (p = 0.063).', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` ## Evaluation ### Metrics #### Triplet * Dataset: `medi-mteb-dev` * Evaluated with [TripletEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) | Metric | Value | |:--------------------|:-----------| | **cosine_accuracy** | **0.9153** | ## Training Details ### Training Datasets #### NQ * Dataset: NQ * Size: 49,548 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### pubmed * Dataset: pubmed * Size: 29,716 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### specter_train_triples * Dataset: specter_train_triples * Size: 49,548 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | 
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### S2ORC_citations_abstracts * Dataset: S2ORC_citations_abstracts * Size: 99,032 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### fever * Dataset: fever * Size: 74,258 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### gooaq_pairs * Dataset: gooaq_pairs * Size: 24,774 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### codesearchnet * Dataset: codesearchnet * Size: 14,890 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, 
"similarity_fct": "cos_sim" } ``` #### wikihow * Dataset: wikihow * Size: 5,006 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### WikiAnswers * Dataset: WikiAnswers * Size: 24,774 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### eli5_question_answer * Dataset: eli5_question_answer * Size: 24,774 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### amazon-qa * Dataset: amazon-qa * Size: 99,032 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### medmcqa * Dataset: medmcqa * Size: 29,716 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | 
| details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### zeroshot * Dataset: zeroshot * Size: 14,890 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### TriviaQA_pairs * Dataset: TriviaQA_pairs * Size: 49,548 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### PAQ_pairs * Dataset: PAQ_pairs * Size: 24,774 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### stackexchange_duplicate_questions_title-body_title-body * Dataset: stackexchange_duplicate_questions_title-body_title-body * Size: 24,774 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### trex * Dataset: trex * Size: 29,716 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | 
|:--------|:---------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### flickr30k_captions * Dataset: flickr30k_captions * Size: 24,774 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### hotpotqa * Dataset: hotpotqa * Size: 39,600 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task671_ambigqa_text_generation * Dataset: task671_ambigqa_text_generation * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task061_ropes_answer_generation * Dataset: task061_ropes_answer_generation * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: 
[MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task285_imdb_answer_generation * Dataset: task285_imdb_answer_generation * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task905_hate_speech_offensive_classification * Dataset: task905_hate_speech_offensive_classification * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task566_circa_classification * Dataset: task566_circa_classification * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task184_snli_entailment_to_neutral_text_modification * Dataset: task184_snli_entailment_to_neutral_text_modification * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task280_stereoset_classification_stereotype_type * Dataset: task280_stereoset_classification_stereotype_type 
* Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1599_smcalflow_classification * Dataset: task1599_smcalflow_classification * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1384_deal_or_no_dialog_classification * Dataset: task1384_deal_or_no_dialog_classification * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task591_sciq_answer_generation * Dataset: task591_sciq_answer_generation * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task823_peixian-rtgender_sentiment_analysis * Dataset: task823_peixian-rtgender_sentiment_analysis * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | 
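For reference, here is a minimal sketch of how this loss configuration could be constructed with the `sentence-transformers` library. It mirrors the parameters above and uses this card's base model; it is an illustration, not the card's original training script:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.util import cos_sim

# Base model listed at the top of this card.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Matches the JSON above: scale=20.0 and cosine similarity.
# With (anchor, positive, negative) triplets, every other in-batch positive
# and negative also serves as an additional negative for each anchor.
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=cos_sim)
```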
The datasets and their sizes:

| Dataset | Training samples |
|:--------|-----------------:|
| wikihow | 5,006 |
| WikiAnswers | 24,774 |
| eli5_question_answer | 24,774 |
| amazon-qa | 99,032 |
| medmcqa | 29,716 |
| zeroshot | 14,890 |
| TriviaQA_pairs | 49,548 |
| PAQ_pairs | 24,774 |
| stackexchange_duplicate_questions_title-body_title-body | 24,774 |
| trex | 29,716 |
| flickr30k_captions | 24,774 |
| hotpotqa | 39,600 |
| task671_ambigqa_text_generation | 634 |
| task061_ropes_answer_generation | 634 |
| task285_imdb_answer_generation | 634 |
| task905_hate_speech_offensive_classification | 634 |
| task566_circa_classification | 634 |
| task184_snli_entailment_to_neutral_text_modification | 634 |
| task280_stereoset_classification_stereotype_type | 634 |
| task1599_smcalflow_classification | 634 |
| task1384_deal_or_no_dialog_classification | 634 |
| task591_sciq_answer_generation | 634 |
| task823_peixian-rtgender_sentiment_analysis | 634 |
| task023_cosmosqa_question_generation | 634 |
| task900_freebase_qa_category_classification | 634 |
| task924_event2mind_word_generation | 634 |
| task152_tomqa_find_location_easy_noise | 634 |
| task1368_healthfact_sentence_generation | 634 |
| task1661_super_glue_classification | 634 |
| task1187_politifact_classification | 634 |
| task1728_web_nlg_data_to_text | 634 |
| task112_asset_simple_sentence_identification | 634 |
| task1340_msr_text_compression_compression | 634 |
| task072_abductivenli_answer_generation | 634 |
| task1504_hatexplain_answer_generation | 634 |
| task684_online_privacy_policy_text_information_type_generation | 634 |
| task1290_xsum_summarization | 634 |
| task075_squad1.1_answer_generation | 634 |
| task1587_scifact_classification | 634 |
| task384_socialiqa_question_classification | 634 |
| task1555_scitail_answer_generation | 634 |
| task1532_daily_dialog_emotion_classification | 634 |
| task239_tweetqa_answer_generation | 634 |
| task596_mocha_question_generation | 634 |
| task1411_dart_subject_identification | 634 |
| task1359_numer_sense_answer_generation | 634 |
| task329_gap_classification | 634 |
| task220_rocstories_title_classification | 634 |
| task316_crows-pairs_classification_stereotype | 634 |
| task495_semeval_headline_classification | 634 |
| task1168_brown_coarse_pos_tagging | 634 |
| task348_squad2.0_unanswerable_question_generation | 634 |
| task049_multirc_questions_needed_to_answer | 634 |
| task1534_daily_dialog_question_classification | 634 |
| task322_jigsaw_classification_threat | 634 |
| task295_semeval_2020_task4_commonsense_reasoning | 634 |
| task186_snli_contradiction_to_entailment_text_modification | 634 |
| task034_winogrande_question_modification_object | 634 |
| task160_replace_letter_in_a_sentence | 634 |
| task469_mrqa_answer_generation | 634 |
| task105_story_cloze-rocstories_sentence_generation | 634 |
| task649_race_blank_question_generation | 634 |
| task1536_daily_dialog_happiness_classification | 634 |
| task683_online_privacy_policy_text_purpose_answer_generation | 634 |
| task024_cosmosqa_answer_generation | 634 |
| task584_udeps_eng_fine_pos_tagging | 634 |
| task066_timetravel_binary_consistency_classification | 634 |
| task413_mickey_en_sentence_perturbation_generation | 634 |
| task182_duorc_question_generation | 634 |
| task028_drop_answer_generation | 634 |
| task1601_webquestions_answer_generation | 634 |
| task1295_adversarial_qa_question_answering | 634 |
| task201_mnli_neutral_classification | 634 |
| task038_qasc_combined_fact | 634 |
| task293_storycommonsense_emotion_text_generation | 634 |
| task572_recipe_nlg_text_generation | 634 |
| task517_emo_classify_emotion_of_dialogue | 634 |
| task382_hybridqa_answer_generation | 634 |
| task176_break_decompose_questions | 634 |
| task1291_multi_news_summarization | 634 |
| task155_count_nouns_verbs | 634 |
| task031_winogrande_question_generation_object | 634 |
| task279_stereoset_classification_stereotype | 634 |
| task1336_peixian_equity_evaluation_corpus_gender_classifier | 634 |
| task508_scruples_dilemmas_more_ethical_isidentifiable | 634 |
| task518_emo_different_dialogue_emotions | 634 |
| task077_splash_explanation_to_sql | 634 |
| task923_event2mind_classifier | 634 |
| task470_mrqa_question_generation | 634 |
| task638_multi_woz_classification | 634 |
| task1412_web_questions_question_answering | 634 |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task470_mrqa_question_generation * Dataset: task470_mrqa_question_generation * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task638_multi_woz_classification * Dataset: task638_multi_woz_classification * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1412_web_questions_question_answering * Dataset: task1412_web_questions_question_answering * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task847_pubmedqa_question_generation * Dataset: task847_pubmedqa_question_generation * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: 
[MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task678_ollie_actual_relationship_answer_generation * Dataset: task678_ollie_actual_relationship_answer_generation * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task290_tellmewhy_question_answerability * Dataset: task290_tellmewhy_question_answerability * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task575_air_dialogue_classification * Dataset: task575_air_dialogue_classification * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task189_snli_neutral_to_contradiction_text_modification * Dataset: task189_snli_neutral_to_contradiction_text_modification * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task026_drop_question_generation * Dataset: 
task026_drop_question_generation * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task162_count_words_starting_with_letter * Dataset: task162_count_words_starting_with_letter * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task079_conala_concat_strings * Dataset: task079_conala_concat_strings * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task610_conllpp_ner * Dataset: task610_conllpp_ner * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task046_miscellaneous_question_typing * Dataset: task046_miscellaneous_question_typing * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | 
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task197_mnli_domain_answer_generation * Dataset: task197_mnli_domain_answer_generation * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1325_qa_zre_question_generation_on_subject_relation * Dataset: task1325_qa_zre_question_generation_on_subject_relation * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task430_senteval_subject_count * Dataset: task430_senteval_subject_count * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task672_nummersense * Dataset: task672_nummersense * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: 
[MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task402_grailqa_paraphrase_generation * Dataset: task402_grailqa_paraphrase_generation * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task904_hate_speech_offensive_classification * Dataset: task904_hate_speech_offensive_classification * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task192_hotpotqa_sentence_generation * Dataset: task192_hotpotqa_sentence_generation * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task069_abductivenli_classification * Dataset: task069_abductivenli_classification * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task574_air_dialogue_sentence_generation * Dataset: task574_air_dialogue_sentence_generation * Size: 634 
training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task187_snli_entailment_to_contradiction_text_modification * Dataset: task187_snli_entailment_to_contradiction_text_modification * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task749_glucose_reverse_cause_emotion_detection * Dataset: task749_glucose_reverse_cause_emotion_detection * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1552_scitail_question_generation * Dataset: task1552_scitail_question_generation * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task750_aqua_multiple_choice_answering * Dataset: task750_aqua_multiple_choice_answering * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | 
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task327_jigsaw_classification_toxic * Dataset: task327_jigsaw_classification_toxic * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1502_hatexplain_classification * Dataset: task1502_hatexplain_classification * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task328_jigsaw_classification_insult * Dataset: task328_jigsaw_classification_insult * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task304_numeric_fused_head_resolution * Dataset: task304_numeric_fused_head_resolution * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: 
[MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1293_kilt_tasks_hotpotqa_question_answering * Dataset: task1293_kilt_tasks_hotpotqa_question_answering * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task216_rocstories_correct_answer_generation * Dataset: task216_rocstories_correct_answer_generation * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1326_qa_zre_question_generation_from_answer * Dataset: task1326_qa_zre_question_generation_from_answer * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1338_peixian_equity_evaluation_corpus_sentiment_classifier * Dataset: task1338_peixian_equity_evaluation_corpus_sentiment_classifier * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### 
task1729_personachat_generate_next * Dataset: task1729_personachat_generate_next * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1202_atomic_classification_xneed * Dataset: task1202_atomic_classification_xneed * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task400_paws_paraphrase_classification * Dataset: task400_paws_paraphrase_classification * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task502_scruples_anecdotes_whoiswrong_verification * Dataset: task502_scruples_anecdotes_whoiswrong_verification * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task088_identify_typo_verification * Dataset: task088_identify_typo_verification * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | 
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task221_rocstories_two_choice_classification * Dataset: task221_rocstories_two_choice_classification * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task200_mnli_entailment_classification * Dataset: task200_mnli_entailment_classification * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task074_squad1.1_question_generation * Dataset: task074_squad1.1_question_generation * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task581_socialiqa_question_generation * Dataset: task581_socialiqa_question_generation * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * 
Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1186_nne_hrngo_classification * Dataset: task1186_nne_hrngo_classification * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task898_freebase_qa_answer_generation * Dataset: task898_freebase_qa_answer_generation * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1408_dart_similarity_classification * Dataset: task1408_dart_similarity_classification * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task168_strategyqa_question_decomposition * Dataset: task168_strategyqa_question_decomposition * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1357_xlsum_summary_generation * Dataset: task1357_xlsum_summary_generation * Size: 634 training samples * Columns: 
anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task390_torque_text_span_selection * Dataset: task390_torque_text_span_selection * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task165_mcscript_question_answering_commonsense * Dataset: task165_mcscript_question_answering_commonsense * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1533_daily_dialog_formal_classification * Dataset: task1533_daily_dialog_formal_classification * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task002_quoref_answer_generation * Dataset: task002_quoref_answer_generation * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | 
|:--------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1297_qasc_question_answering * Dataset: task1297_qasc_question_answering * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task305_jeopardy_answer_generation_normal * Dataset: task305_jeopardy_answer_generation_normal * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task029_winogrande_full_object * Dataset: task029_winogrande_full_object * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1327_qa_zre_answer_generation_from_question * Dataset: task1327_qa_zre_answer_generation_from_question * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: 
[MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task326_jigsaw_classification_obscene * Dataset: task326_jigsaw_classification_obscene * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1542_every_ith_element_from_starting * Dataset: task1542_every_ith_element_from_starting * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task570_recipe_nlg_ner_generation * Dataset: task570_recipe_nlg_ner_generation * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1409_dart_text_generation * Dataset: task1409_dart_text_generation * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task401_numeric_fused_head_reference * Dataset: task401_numeric_fused_head_reference * Size: 634 training samples * Columns: anchor, 
positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task846_pubmedqa_classification * Dataset: task846_pubmedqa_classification * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1712_poki_classification * Dataset: task1712_poki_classification * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task344_hybridqa_answer_generation * Dataset: task344_hybridqa_answer_generation * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task875_emotion_classification * Dataset: task875_emotion_classification * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string 
| string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1214_atomic_classification_xwant * Dataset: task1214_atomic_classification_xwant * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task106_scruples_ethical_judgment * Dataset: task106_scruples_ethical_judgment * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task238_iirc_answer_from_passage_answer_generation * Dataset: task238_iirc_answer_from_passage_answer_generation * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1391_winogrande_easy_answer_generation * Dataset: task1391_winogrande_easy_answer_generation * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task195_sentiment140_classification * Dataset: 
#### task195_sentiment140_classification
* Dataset: task195_sentiment140_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task163_count_words_ending_with_letter
* Dataset: task163_count_words_ending_with_letter
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task579_socialiqa_classification
* Dataset: task579_socialiqa_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task569_recipe_nlg_text_generation
* Dataset: task569_recipe_nlg_text_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task1602_webquestion_question_genreation
* Dataset: task1602_webquestion_question_genreation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task747_glucose_cause_emotion_detection
* Dataset: task747_glucose_cause_emotion_detection
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task219_rocstories_title_answer_generation
* Dataset: task219_rocstories_title_answer_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task178_quartz_question_answering
* Dataset: task178_quartz_question_answering
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task103_facts2story_long_text_generation
* Dataset: task103_facts2story_long_text_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task301_record_question_generation
* Dataset: task301_record_question_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task1369_healthfact_sentence_generation
* Dataset: task1369_healthfact_sentence_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task515_senteval_odd_word_out
* Dataset: task515_senteval_odd_word_out
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task496_semeval_answer_generation
* Dataset: task496_semeval_answer_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task1658_billsum_summarization
* Dataset: task1658_billsum_summarization
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task1204_atomic_classification_hinderedby
* Dataset: task1204_atomic_classification_hinderedby
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task1392_superglue_multirc_answer_verification
* Dataset: task1392_superglue_multirc_answer_verification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task306_jeopardy_answer_generation_double
* Dataset: task306_jeopardy_answer_generation_double
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task1286_openbookqa_question_answering
* Dataset: task1286_openbookqa_question_answering
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task159_check_frequency_of_words_in_sentence_pair
* Dataset: task159_check_frequency_of_words_in_sentence_pair
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task151_tomqa_find_location_easy_clean
* Dataset: task151_tomqa_find_location_easy_clean
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task323_jigsaw_classification_sexually_explicit
* Dataset: task323_jigsaw_classification_sexually_explicit
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task037_qasc_generate_related_fact
* Dataset: task037_qasc_generate_related_fact
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task027_drop_answer_type_generation
* Dataset: task027_drop_answer_type_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task1596_event2mind_text_generation_2
* Dataset: task1596_event2mind_text_generation_2
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task141_odd-man-out_classification_category
* Dataset: task141_odd-man-out_classification_category
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task194_duorc_answer_generation
* Dataset: task194_duorc_answer_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```
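For intuition on what this loss optimizes: within a batch, each anchor is scored against every candidate (all positives plus all explicit negatives), and the anchor's own positive must win a softmax over the scaled cosine similarities. A toy PyTorch sketch of that computation (illustrative only, not the library's implementation):

```python
import torch
import torch.nn.functional as F

def mnrl_sketch(anchor, positive, negative, scale=20.0):
    # Cosine similarity = dot product of L2-normalized embeddings.
    a = F.normalize(anchor, dim=-1)
    candidates = F.normalize(torch.cat([positive, negative]), dim=-1)
    scores = a @ candidates.T * scale   # shape (batch, 2 * batch), scaled by 20.0
    labels = torch.arange(a.size(0))    # anchor i pairs with candidate i
    return F.cross_entropy(scores, labels)

# Toy batch: 4 triplets of 384-dim embeddings (all-MiniLM-L6-v2's output size).
print(mnrl_sketch(torch.randn(4, 384), torch.randn(4, 384), torch.randn(4, 384)))
```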
#### task679_hope_edi_english_text_classification
* Dataset: task679_hope_edi_english_text_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task246_dream_question_generation
* Dataset: task246_dream_question_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task1195_disflqa_disfluent_to_fluent_conversion
* Dataset: task1195_disflqa_disfluent_to_fluent_conversion
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task065_timetravel_consistent_sentence_classification
* Dataset: task065_timetravel_consistent_sentence_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task351_winomt_classification_gender_identifiability_anti
* Dataset: task351_winomt_classification_gender_identifiability_anti
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task580_socialiqa_answer_generation
* Dataset: task580_socialiqa_answer_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task583_udeps_eng_coarse_pos_tagging
* Dataset: task583_udeps_eng_coarse_pos_tagging
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task202_mnli_contradiction_classification
* Dataset: task202_mnli_contradiction_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task222_rocstories_two_chioce_slotting_classification
* Dataset: task222_rocstories_two_chioce_slotting_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task498_scruples_anecdotes_whoiswrong_classification
* Dataset: task498_scruples_anecdotes_whoiswrong_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task067_abductivenli_answer_generation
* Dataset: task067_abductivenli_answer_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task616_cola_classification
* Dataset: task616_cola_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task286_olid_offense_judgment
* Dataset: task286_olid_offense_judgment
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task188_snli_neutral_to_entailment_text_modification
* Dataset: task188_snli_neutral_to_entailment_text_modification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task223_quartz_explanation_generation
* Dataset: task223_quartz_explanation_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task820_protoqa_answer_generation
* Dataset: task820_protoqa_answer_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task196_sentiment140_answer_generation
* Dataset: task196_sentiment140_answer_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task1678_mathqa_answer_selection
* Dataset: task1678_mathqa_answer_selection
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task349_squad2.0_answerable_unanswerable_question_classification
* Dataset: task349_squad2.0_answerable_unanswerable_question_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task154_tomqa_find_location_hard_noise
* Dataset: task154_tomqa_find_location_hard_noise
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task333_hateeval_classification_hate_en
* Dataset: task333_hateeval_classification_hate_en
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task235_iirc_question_from_subtext_answer_generation
* Dataset: task235_iirc_question_from_subtext_answer_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task1554_scitail_classification
* Dataset: task1554_scitail_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task210_logic2text_structured_text_generation
* Dataset: task210_logic2text_structured_text_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task035_winogrande_question_modification_person
* Dataset: task035_winogrande_question_modification_person
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task230_iirc_passage_classification
* Dataset: task230_iirc_passage_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task1356_xlsum_title_generation
* Dataset: task1356_xlsum_title_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task1726_mathqa_correct_answer_generation
* Dataset: task1726_mathqa_correct_answer_generation
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task302_record_classification
* Dataset: task302_record_classification
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```

#### task380_boolq_yes_no_question
* Dataset: task380_boolq_yes_no_question
* Size: 634 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 634 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details |        |          |          |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```
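Because every subset shares the same columns and loss, training can treat them as one dictionary of named datasets. A hedged sketch follows (the in-memory triplet is a placeholder; `SentenceTransformerTrainer` accepts a dict of datasets in sentence-transformers v3+):

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

# Placeholder triplet; each task subset here would really hold 634 samples.
train_datasets = {
    "task380_boolq_yes_no_question": Dataset.from_dict({
        "anchor":   ["is the sky blue on a clear day"],
        "positive": ["On a clear day the sky appears blue."],
        "negative": ["The stock market closed higher on Friday."],
    }),
    # ... one entry per task subset listed in this section ...
}

# The trainer draws batches from the named datasets and applies the shared
# loss; columns are consumed in order (anchor, positive, negative).
trainer = SentenceTransformerTrainer(model=model, train_dataset=train_datasets, loss=loss)
trainer.train()
```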
and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task748_glucose_reverse_cause_event_detection * Dataset: task748_glucose_reverse_cause_event_detection * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task834_mathdataset_classification * Dataset: task834_mathdataset_classification * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task350_winomt_classification_gender_identifiability_pro * Dataset: task350_winomt_classification_gender_identifiability_pro * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task191_hotpotqa_question_generation * Dataset: task191_hotpotqa_question_generation * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | 
|:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task236_iirc_question_from_passage_answer_generation * Dataset: task236_iirc_question_from_passage_answer_generation * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task217_rocstories_ordering_answer_generation * Dataset: task217_rocstories_ordering_answer_generation * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task568_circa_question_generation * Dataset: task568_circa_question_generation * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task614_glucose_cause_event_detection * Dataset: task614_glucose_cause_event_detection * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | 
string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task361_spolin_yesand_prompt_response_classification * Dataset: task361_spolin_yesand_prompt_response_classification * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task421_persent_sentence_sentiment_classification * Dataset: task421_persent_sentence_sentiment_classification * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task203_mnli_sentence_generation * Dataset: task203_mnli_sentence_generation * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task420_persent_document_sentiment_classification * Dataset: task420_persent_document_sentiment_classification * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### 
All of the remaining training subsets listed below follow the same pattern as the task361 section above: each one has 634 training samples with `anchor`, `positive`, and `negative` string columns, and each is trained with [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) using these parameters:

```json
{
    "scale": 20.0,
    "similarity_fct": "cos_sim"
}
```

* task421_persent_sentence_sentiment_classification
* task203_mnli_sentence_generation
* task420_persent_document_sentiment_classification
* task153_tomqa_find_location_hard_clean
* task346_hybridqa_classification
* task1211_atomic_classification_hassubevent
* task360_spolin_yesand_response_generation
* task510_reddit_tifu_title_summarization
* task511_reddit_tifu_long_text_summarization
* task345_hybridqa_answer_generation
* task270_csrg_counterfactual_context_generation
* task307_jeopardy_answer_generation_final
* task001_quoref_question_generation
* task089_swap_words_verification
* task1196_atomic_classification_oeffect
* task080_piqa_answer_generation
* task1598_nyc_long_text_generation
* task240_tweetqa_question_generation
* task615_moviesqa_answer_generation
* task1347_glue_sts-b_similarity_classification
* task114_is_the_given_word_longest
* task292_storycommonsense_character_text_generation
* task115_help_advice_classification
* task431_senteval_object_count
* task1360_numer_sense_multiple_choice_qa_generation
* task177_para-nmt_paraphrasing
* task132_dais_text_modification
* task269_csrg_counterfactual_story_generation
* task233_iirc_link_exists_classification
* task161_count_words_containing_letter
* task1205_atomic_classification_isafter
* task571_recipe_nlg_ner_generation
* task1292_yelp_review_full_text_categorization
* task428_senteval_inversion
* task311_race_question_generation
* task429_senteval_tense
* task403_creak_commonsense_inference
* task929_products_reviews_classification
* task582_naturalquestion_answer_generation
* task237_iirc_answer_from_subtext_answer_generation
* task050_multirc_answerability
* task184_break_generate_question
* task669_ambigqa_answer_generation
* task169_strategyqa_sentence_generation
* task500_scruples_anecdotes_title_generation
* task241_tweetqa_classification
* task1345_glue_qqp_question_paraprashing
* task218_rocstories_swap_order_answer_generation
* task613_politifact_text_generation
* task1167_penn_treebank_coarse_pos_tagging
* task1422_mathqa_physics
* task247_dream_answer_generation
* task199_mnli_classification
* task164_mcscript_question_answering_text
* task1541_agnews_classification
* task516_senteval_conjoints_inversion
* task294_storycommonsense_motiv_text_generation
* task501_scruples_anecdotes_post_type_verification
* task213_rocstories_correct_ending_classification
* task821_protoqa_question_generation
* task493_review_polarity_classification
* task308_jeopardy_answer_generation_all
* task1595_event2mind_text_generation_1
* task040_qasc_question_generation
* task231_iirc_link_classification
* task1727_wiqa_what_is_the_effect
* task578_curiosity_dialogs_answer_generation
* task310_race_classification
* task309_race_answer_generation
* task379_agnews_topic_classification
* task030_winogrande_full_person
* task1540_parsed_pdfs_summarization
* task039_qasc_find_overlapping_words
* task1206_atomic_classification_isbefore
* task157_count_vowels_and_consonants
* task339_record_answer_generation
* task453_swag_answer_generation
* task848_pubmedqa_classification
* task673_google_wellformed_query_classification
* task676_ollie_relationship_answer_generation
* task268_casehold_legal_answer_generation
* task844_financial_phrasebank_classification
* task330_gap_answer_generation
* task595_mocha_answer_generation
* task1285_kpa_keypoint_matching
* task234_iirc_passage_line_answer_generation
* task494_review_polarity_answer_generation
* task670_ambigqa_question_generation
* task289_gigaword_summarization
|:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task494_review_polarity_answer_generation * Dataset: task494_review_polarity_answer_generation * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task670_ambigqa_question_generation * Dataset: task670_ambigqa_question_generation * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task289_gigaword_summarization * Dataset: task289_gigaword_summarization * Size: 634 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### npr * Dataset: npr * Size: 24,774 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: 
[MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### nli * Dataset: nli * Size: 49,548 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### SimpleWiki * Dataset: SimpleWiki * Size: 5,006 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### amazon_review_2018 * Dataset: amazon_review_2018 * Size: 99,032 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### ccnews_title_text * Dataset: ccnews_title_text * Size: 24,774 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### agnews * Dataset: agnews * Size: 44,606 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | 
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### xsum * Dataset: xsum * Size: 9,948 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### msmarco * Dataset: msmarco * Size: 173,290 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### yahoo_answers_title_answer * Dataset: yahoo_answers_title_answer * Size: 24,774 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### squad_pairs * Dataset: squad_pairs * Size: 24,774 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": 
"cos_sim" } ``` #### wow * Dataset: wow * Size: 29,716 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### mteb-amazon_counterfactual-avs_triplets * Dataset: mteb-amazon_counterfactual-avs_triplets * Size: 3,991 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### mteb-amazon_massive_intent-avs_triplets * Dataset: mteb-amazon_massive_intent-avs_triplets * Size: 11,405 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### mteb-amazon_massive_scenario-avs_triplets * Dataset: mteb-amazon_massive_scenario-avs_triplets * Size: 11,405 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### mteb-amazon_reviews_multi-avs_triplets * Dataset: mteb-amazon_reviews_multi-avs_triplets * Size: 198,000 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | 
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### mteb-banking77-avs_triplets * Dataset: mteb-banking77-avs_triplets * Size: 9,947 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### mteb-emotion-avs_triplets * Dataset: mteb-emotion-avs_triplets * Size: 15,840 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### mteb-imdb-avs_triplets * Dataset: mteb-imdb-avs_triplets * Size: 24,647 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### mteb-mtop_domain-avs_triplets * Dataset: mteb-mtop_domain-avs_triplets * Size: 15,523 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: 
[MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### mteb-mtop_intent-avs_triplets * Dataset: mteb-mtop_intent-avs_triplets * Size: 15,523 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### mteb-toxic_conversations_50k-avs_triplets * Dataset: mteb-toxic_conversations_50k-avs_triplets * Size: 49,421 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### mteb-tweet_sentiment_extraction-avs_triplets * Dataset: mteb-tweet_sentiment_extraction-avs_triplets * Size: 27,245 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### covid-bing-query-gpt4-avs_triplets * Dataset: covid-bing-query-gpt4-avs_triplets * Size: 4,942 training samples * Columns: anchor, positive, and negative * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | | | | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Evaluation Dataset #### Unnamed Dataset * Size: 18,269 evaluation samples * Columns: anchor, positive, and negative * 
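All of the training datasets above share the same anchor/positive/negative schema and the same loss configuration. As a minimal sketch (not this repository's training script), the JSON parameters map onto the Sentence Transformers API like this:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.util import cos_sim

# The base model being fine-tuned, from this card's metadata.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# scale=20.0 and similarity_fct="cos_sim" mirror the JSON above: each anchor is
# scored against its positive and every other in-batch positive (the in-batch
# negatives) with cosine similarity, multiplied by the scale before the
# softmax cross-entropy loss.
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=cos_sim)
```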
### Evaluation Dataset

#### Unnamed Dataset

* Size: 18,269 evaluation samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:--------|:---------|:---------|:---------|
  | type | string | string | string |
  | details | | | |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  { "scale": 20.0, "similarity_fct": "cos_sim" }
  ```
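The `cosine_accuracy` reported for this development set is a triplet accuracy. A hedged sketch of computing such a metric with `TripletEvaluator` follows; the example strings are hypothetical stand-ins for the unnamed evaluation dataset's columns:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Hypothetical triplets with the same anchor/positive/negative layout.
anchors = ["how do solar panels generate electricity"]
positives = ["Photovoltaic cells convert sunlight directly into electric current."]
negatives = ["The recipe calls for two cups of flour and a pinch of salt."]

evaluator = TripletEvaluator(
    anchors=anchors,
    positives=positives,
    negatives=negatives,
    name="medi-mteb-dev",
)
# Accuracy is the fraction of triplets where the anchor is closer (by cosine
# similarity) to its positive than to its negative.
print(evaluator(model))
```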
### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `learning_rate`: 2e-05
- `num_train_epochs`: 10
- `warmup_ratio`: 0.1
- `fp16`: True
- `gradient_checkpointing`: True
- `batch_sampler`: no_duplicates

#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: True
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional

</details>
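For reference, a minimal sketch of how the non-default hyperparameters above can be passed to the Sentence Transformers v3 trainer. The `toy` dataset, output path, and variable names are hypothetical; a real run would use the named training datasets and the 18,269-sample development set from this card:

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers, MultiDatasetBatchSamplers

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
loss = MultipleNegativesRankingLoss(model, scale=20.0)

# Hypothetical stand-in for the many named datasets listed above: a dict
# mapping dataset names to Datasets with anchor/positive/negative columns.
toy = Dataset.from_dict({
    "anchor": ["what is the capital of france"],
    "positive": ["Paris is the capital and largest city of France."],
    "negative": ["The mitochondrion is the powerhouse of the cell."],
})
train_datasets = {"toy": toy}

args = SentenceTransformerTrainingArguments(
    output_dir="mnrl-multi-dataset-sketch",  # hypothetical output path
    eval_strategy="steps",
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    learning_rate=2e-5,
    num_train_epochs=10,
    warmup_ratio=0.1,
    fp16=True,                    # assumes a CUDA-capable GPU
    gradient_checkpointing=True,
    # no_duplicates keeps repeated texts out of a batch, where they would
    # otherwise act as false negatives for MultipleNegativesRankingLoss.
    batch_sampler=BatchSamplers.NO_DUPLICATES,
    # proportional draws from each named dataset in proportion to its size.
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.PROPORTIONAL,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_datasets,
    eval_dataset=toy,  # a real run would use the dev set described above
    loss=loss,
)
trainer.train()
```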
### Training Logs
<details><summary>Click to expand</summary>

| Epoch | Step | Training Loss | Validation Loss | medi-mteb-dev_cosine_accuracy |
|:------:|:------:|:-------------:|:---------------:|:-----------------------------:|
| 0 | 0 | - | - | 0.8503 |
| 0.0175 | 500 | 1.9411 | 1.9039 | 0.8588 |
| 0.0351 | 1000 | 1.5495 | 0.9698 | 0.8698 |
| 0.0526 | 1500 | 1.3527 | 0.7684 | 0.8753 |
| 0.0701 | 2000 | 1.1995 | 0.7102 | 0.8777 |
| 0.0877 | 2500 | 1.1782 | 0.6829 | 0.8793 |
| 0.1052 | 3000 | 1.1662 | 0.6633 | 0.8830 |
| 0.1227 | 3500 | 1.139 | 0.6510 | 0.8844 |
| 0.1403 | 4000 | 1.1389 | 0.6429 | 0.8851 |
| 0.1578 | 4500 | 1.1381 | 0.6273 | 0.8863 |
| 0.1753 | 5000 | 1.0616 | 0.6225 | 0.8869 |
| 0.1929 | 5500 | 1.114 | 0.6169 | 0.8872 |
| 0.2104 | 6000 | 0.9854 | 0.6108 | 0.8886 |
| 0.2279 | 6500 | 1.081 | 0.6047 | 0.8900 |
| 0.2455 | 7000 | 0.9899 | 0.5983 | 0.8912 |
| 0.2630 | 7500 | 1.0551 | 0.5931 | 0.8921 |
| 0.2805 | 8000 | 1.0515 | 0.5882 | 0.8930 |
| 0.2981 | 8500 | 1.0384 | 0.5768 | 0.8946 |
| 0.3156 | 9000 | 1.0545 | 0.5716 | 0.8945 |
| 0.3331 | 9500 | 1.006 | 0.5744 | 0.8959 |
| 0.3507 | 10000 | 0.9629 | 0.5719 | 0.8960 |
| 0.3682 | 10500 | 1.0877 | 0.5600 | 0.8958 |
| 0.3857 | 11000 | 1.0594 | 0.5639 | 0.8975 |
| 0.4033 | 11500 | 1.0708 | 0.5672 | 0.8975 |
| 0.4208 | 12000 | 1.0275 | 0.5481 | 0.8986 |
| 0.4383 | 12500 | 0.9467 | 0.5552 | 0.9007 |
| 0.4559 | 13000 | 1.0048 | 0.5524 | 0.9008 |
| 0.4734 | 13500 | 1.0135 | 0.5482 | 0.9002 |
| 0.4909 | 14000 | 0.9579 | 0.5428 | 0.9002 |
| 0.5085 | 14500 | 0.9534 | 0.5373 | 0.9015 |
| 0.5260 | 15000 | 0.9225 | 0.5347 | 0.9025 |
| 0.5435 | 15500 | 0.9936 | 0.5384 | 0.9011 |
| 0.5611 | 16000 | 0.926 | 0.5298 | 0.9028 |
| 0.5786 | 16500 | 0.9904 | 0.5338 | 0.9034 |
| 0.5961 | 17000 | 0.9302 | 0.5281 | 0.9033 |
| 0.6137 | 17500 | 0.908 | 0.5332 | 0.9025 |
| 0.6312 | 18000 | 0.8936 | 0.5322 | 0.9046 |
| 0.6487 | 18500 | 0.9549 | 0.5312 | 0.9039 |
| 0.6663 | 19000 | 0.9498 | 0.5319 | 0.9030 |
| 0.6838 | 19500 | 0.9291 | 0.5279 | 0.9038 |
| 0.7013 | 20000 | 0.9573 | 0.5165 | 0.9017 |
| 0.7189 | 20500 | 0.9395 | 0.5223 | 0.9039 |
| 0.7364 | 21000 | 0.8753 | 0.5335 | 0.9009 |
| 0.7539 | 21500 | 0.95 | 0.5173 | 0.9040 |
| 0.7715 | 22000 | 0.9656 | 0.5451 | 0.9043 |
| 0.7890 | 22500 | 0.9145 | 0.5305 | 0.9033 |
| 0.8065 | 23000 | 0.9768 | 0.5135 | 0.9041 |
| 0.8241 | 23500 | 0.8779 | 0.5185 | 0.9037 |
| 0.8416 | 24000 | 0.9603 | 0.5338 | 0.9036 |
| 0.8591 | 24500 | 0.9045 | 0.5090 | 0.9056 |
| 0.8767 | 25000 | 0.9536 | 0.5254 | 0.9043 |
| 0.8942 | 25500 | 0.8499 | 0.5388 | 0.9023 |
| 0.9117 | 26000 | 0.88 | 0.5676 | 0.9011 |
| 0.9293 | 26500 | 0.8884 | 0.5127 | 0.9046 |
| 0.9468 | 27000 | 0.8556 | 0.5227 | 0.9065 |
| 0.9643 | 27500 | 0.8641 | 0.5901 | 0.9027 |
| 0.9819 | 28000 | 0.8884 | 0.4982 | 0.9054 |
| 0.9994 | 28500 | 0.8404 | 0.5078 | 0.9064 |
| 1.0169 | 29000 | 0.8613 | 0.5211 | 0.9052 |
| 1.0345 | 29500 | 0.8971 | 0.5061 | 0.9065 |
| 1.0520 | 30000 | 0.9426 | 0.5118 | 0.9062 |
| 1.0695 | 30500 | 0.8791 | 0.5062 | 0.9062 |
| 1.0871 | 31000 | 0.8953 | 0.5056 | 0.9044 |
| 1.1046 | 31500 | 0.9229 | 0.5002 | 0.9065 |
| 1.1221 | 32000 | 0.8914 | 0.4912 | 0.9088 |
| 1.1397 | 32500 | 0.9105 | 0.4973 | 0.9086 |
| 1.1572 | 33000 | 0.9168 | 0.4954 | 0.9074 |
| 1.1747 | 33500 | 0.845 | 0.5073 | 0.9088 |
| 1.1923 | 34000 | 0.9209 | 0.4890 | 0.9088 |
| 1.2098 | 34500 | 0.8014 | 0.5063 | 0.9063 |
| 1.2273 | 35000 | 0.8888 | 0.5270 | 0.9070 |
| 1.2449 | 35500 | 0.8269 | 0.5062 | 0.9059 |
| 1.2624 | 36000 | 0.8637 | 0.4951 | 0.9054 |
| 1.2799 | 36500 | 0.8796 | 0.4922 | 0.9083 |
| 1.2975 | 37000 | 0.8644 | 0.4851 | 0.9068 |
| 1.3150 | 37500 | 0.8907 | 0.5396 | 0.9069 |
| 1.3325 | 38000 | 0.8477 | 0.4944 | 0.9082 |
| 1.3501 | 38500 | 0.8237 | 0.4915 | 0.9081 |
| 1.3676 | 39000 | 0.9217 | 0.4918 | 0.9083 |
| 1.3851 | 39500 | 0.887 | 0.4955 | 0.9064 |
| 1.4027 | 40000 | 0.9172 | 0.5259 | 0.9077 |
| 1.4202 | 40500 | 0.8693 | 0.5002 | 0.9092 |
| 1.4377 | 41000 | 0.8223 | 0.5109 | 0.9084 |
| 1.4553 | 41500 | 0.8554 | 0.4859 | 0.9079 |
| 1.4728 | 42000 | 0.8772 | 0.4850 | 0.9079 |
| 1.4903 | 42500 | 0.8232 | 0.4860 | 0.9088 |
| 1.5079 | 43000 | 0.8218 | 0.4917 | 0.9083 |
| 1.5254 | 43500 | 0.7905 | 0.4839 | 0.9094 |
| 1.5429 | 44000 | 0.847 | 0.5150 | 0.9081 |
| 1.5605 | 44500 | 0.7929 | 0.5234 | 0.9082 |
| 1.5780 | 45000 | 0.8621 | 0.5084 | 0.9094 |
| 1.5955 | 45500 | 0.7908 | 0.4980 | 0.9092 |
| 1.6131 | 46000 | 0.792 | 0.5385 | 0.9071 |
| 1.6306 | 46500 | 0.7569 | 0.5405 | 0.9088 |
| 1.6481 | 47000 | 0.8178 | 0.5172 | 0.9078 |
| 1.6657 | 47500 | 0.8101 | 0.5379 | 0.9082 |
| 1.6832 | 48000 | 0.8013 | 0.5627 | 0.9068 |
| 1.7007 | 48500 | 0.8298 | 0.5947 | 0.9072 |
| 1.7183 | 49000 | 0.8028 | 0.5302 | 0.9076 |
| 1.7358 | 49500 | 0.7663 | 0.5523 | 0.9066 |
| 1.7533 | 50000 | 0.8255 | 0.5361 | 0.9080 |
| 1.7709 | 50500 | 0.8354 | 0.5373 | 0.9080 |
| 1.7884 | 51000 | 0.7917 | 0.5546 | 0.9079 |
| 1.8059 | 51500 | 0.837 | 0.5113 | 0.9085 |
| 1.8235 | 52000 | 0.7488 | 0.5037 | 0.9082 |
| 1.8410 | 52500 | 0.8439 | 0.5349 | 0.9084 |
| 1.8585 | 53000 | 0.7688 | 0.5279 | 0.9083 |
| 1.8761 | 53500 | 0.8205 | 0.5496 | 0.9071 |
| 1.8936 | 54000 | 0.7256 | 0.5454 | 0.9075 |
| 1.9111 | 54500 | 0.7536 | 0.5582 | 0.9060 |
| 1.9287 | 55000 | 0.7544 | 0.5331 | 0.9075 |
| 1.9462 | 55500 | 0.7332 | 0.5139 | 0.9091 |
| 1.9637 | 56000 | 0.7244 | 0.5767 | 0.9078 |
| 1.9813 | 56500 | 0.7574 | 0.4962 | 0.9084 |
| 1.9988 | 57000 | 0.7116 | 0.5210 | 0.9090 |
| 2.0163 | 57500 | 0.7376 | 0.5196 | 0.9088 |
| 2.0339 | 58000 | 0.768 | 0.5609 | 0.9086 |
| 2.0514 | 58500 | 0.8056 | 0.5230 | 0.9081 |
| 2.0689 | 59000 | 0.7744 | 0.5527 | 0.9077 |
| 2.0865 | 59500 | 0.7543 | 0.4949 | 0.9090 |
| 2.1040 | 60000 | 0.8 | 0.4925 | 0.9095 |
| 2.1215 | 60500 | 0.7664 | 0.4989 | 0.9093 |
| 2.1391 | 61000 | 0.7849 | 0.4956 | 0.9106 |
| 2.1566 | 61500 | 0.7955 | 0.5312 | 0.9099 |
| 2.1741 | 62000 | 0.7326 | 0.5126 | 0.9112 |
| 2.1917 | 62500 | 0.7975 | 0.4701 | 0.9114 |
| 2.2092 | 63000 | 0.7001 | 0.5118 | 0.9093 |
| 2.2267 | 63500 | 0.7477 | 0.5371 | 0.9102 |
| 2.2443 | 64000 | 0.7227 | 0.5536 | 0.9083 |
| 2.2618 | 64500 | 0.7687 | 0.5174 | 0.9102 |
| 2.2793 | 65000 | 0.7633 | 0.4925 | 0.9102 |
| 2.2969 | 65500 | 0.7572 | 0.5059 | 0.9093 |
| 2.3144 | 66000 | 0.7846 | 0.5391 | 0.9088 |
| 2.3319 | 66500 | 0.7434 | 0.4991 | 0.9111 |
| 2.3495 | 67000 | 0.7124 | 0.5115 | 0.9107 |
| 2.3670 | 67500 | 0.8085 | 0.4974 | 0.9086 |
| 2.3845 | 68000 | 0.7879 | 0.5114 | 0.9089 |
| 2.4021 | 68500 | 0.7977 | 0.5297 | 0.9086 |
| 2.4196 | 69000 | 0.782 | 0.5251 | 0.9103 |
| 2.4371 | 69500 | 0.7237 | 0.5568 | 0.9088 |
| 2.4547 | 70000 | 0.7556 | 0.5008 | 0.9098 |
| 2.4722 | 70500 | 0.777 | 0.4784 | 0.9097 |
| 2.4897 | 71000 | 0.7205 | 0.4993 | 0.9097 |
| 2.5073 | 71500 | 0.7237 | 0.5096 | 0.9102 |
| 2.5248 | 72000 | 0.6976 | 0.4833 | 0.9107 |
| 2.5423 | 72500 | 0.7572 | 0.5234 | 0.9092 |
| 2.5599 | 73000 | 0.7012 | 0.5339 | 0.9096 |
| 2.5774 | 73500 | 0.7799 | 0.5056 | 0.9107 |
| 2.5949 | 74000 | 0.7036 | 0.4961 | 0.9101 |
| 2.6125 | 74500 | 0.6932 | 0.5656 | 0.9088 |
| 2.6300 | 75000 | 0.6676 | 0.5347 | 0.9097 |
| 2.6475 | 75500 | 0.7246 | 0.5110 | 0.9101 |
| 2.6651 | 76000 | 0.715 | 0.5551 | 0.9096 |
| 2.6826 | 76500 | 0.7298 | 0.5658 | 0.9106 |
| 2.7001 | 77000 | 0.7349 | 0.5571 | 0.9106 |
| 2.7177 | 77500 | 0.721 | 0.5667 | 0.9100 |
| 2.7352 | 78000 | 0.6863 | 0.5616 | 0.9066 |
| 2.7527 | 78500 | 0.739 | 0.5419 | 0.9101 |
| 2.7703 | 79000 | 0.7529 | 0.5343 | 0.9107 |
| 2.7878 | 79500 | 0.7008 | 0.5601 | 0.9107 |
| 2.8053 | 80000 | 0.7655 | 0.5189 | 0.9097 |
| 2.8229 | 80500 | 0.6666 | 0.5073 | 0.9106 |
| 2.8404 | 81000 | 0.7551 | 0.5381 | 0.9102 |
| 2.8579 | 81500 | 0.6769 | 0.5650 | 0.9092 |
| 2.8755 | 82000 | 0.7508 | 0.5189 | 0.9097 |
| 2.8930 | 82500 | 0.6418 | 0.5521 | 0.9094 |
| 2.9105 | 83000 | 0.6808 | 0.5490 | 0.9095 |
| 2.9281 | 83500 | 0.6833 | 0.5524 | 0.9092 |
| 2.9456 | 84000 | 0.6508 | 0.5229 | 0.9105 |
| 2.9631 | 84500 | 0.6576 | 0.5789 | 0.9100 |
| 2.9807 | 85000 | 0.6778 | 0.5075 | 0.9108 |
| 2.9982 | 85500 | 0.642 | 0.5139 | 0.9107 |
| 3.0157 | 86000 | 0.6596 | 0.5337 | 0.9104 |
| 3.0333 | 86500 | 0.6769 | 0.5713 | 0.9106 |
| 3.0508 | 87000 | 0.7349 | 0.5374 | 0.9103 |
| 3.0683 | 87500 | 0.7034 | 0.5680 | 0.9094 |
| 3.0859 | 88000 | 0.6853 | 0.5130 | 0.9106 |
| 3.1034 | 88500 | 0.726 | 0.5093 | 0.9123 |
| 3.1209 | 89000 | 0.6939 | 0.5078 | 0.9104 |
| 3.1385 | 89500 | 0.7085 | 0.4847 | 0.9125 |
| 3.1560 | 90000 | 0.7118 | 0.5154 | 0.9113 |
| 3.1735 | 90500 | 0.6755 | 0.5066 | 0.9121 |
| 3.1911 | 91000 | 0.718 | 0.4665 | 0.9129 |
| 3.2086 | 91500 | 0.6277 | 0.5047 | 0.9111 |
| 3.2261 | 92000 | 0.6907 | 0.5292 | 0.9123 |
| 3.2437 | 92500 | 0.6624 | 0.5414 | 0.9103 |
| 3.2612 | 93000 | 0.6943 | 0.5274 | 0.9101 |
| 3.2787 | 93500 | 0.6979 | 0.4985 | 0.9110 |
| 3.2963 | 94000 | 0.6858 | 0.5156 | 0.9099 |
| 3.3138 | 94500 | 0.7221 | 0.5062 | 0.9114 |
| 3.3313 | 95000 | 0.6647 | 0.5129 | 0.9108 |
| 3.3489 | 95500 | 0.6572 | 0.5213 | 0.9127 |
| 3.3664 | 96000 | 0.7417 | 0.4926 | 0.9119 |
| 3.3839 | 96500 | 0.7237 | 0.5090 | 0.9104 |
| 3.4015 | 97000 | 0.7218 | 0.5336 | 0.9111 |
| 3.4190 | 97500 | 0.7091 | 0.5062 | 0.9128 |
| 3.4365 | 98000 | 0.668 | 0.5727 | 0.9118 |
| 3.4541 | 98500 | 0.6724 | 0.5106 | 0.9119 |
| 3.4716 | 99000 | 0.7331 | 0.4740 | 0.9130 |
| 3.4891 | 99500 | 0.6427 | 0.5021 | 0.9119 |
| 3.5067 | 100000 | 0.6659 | 0.5037 | 0.9119 |
| 3.5242 | 100500 | 0.6413 | 0.5024 | 0.9109 |
| 3.5417 | 101000 | 0.6889 | 0.5277 | 0.9109 |
| 3.5593 | 101500 | 0.6401 | 0.5389 | 0.9103 |
| 3.5768 | 102000 | 0.7116 | 0.5114 | 0.9111 |
| 3.5943 | 102500 | 0.6511 | 0.5124 | 0.9112 |
| 3.6119 | 103000 | 0.6392 | 0.5505 | 0.9096 |
| 3.6294 | 103500 | 0.6049 | 0.5306 | 0.9099 |
| 3.6469 | 104000 | 0.675 | 0.5219 | 0.9098 |
| 3.6645 | 104500 | 0.6498 | 0.5392 | 0.9100 |
| 3.6820 | 105000 | 0.6774 | 0.5609 | 0.9097 |
| 3.6995 | 105500 | 0.6655 | 0.5441 | 0.9107 |
| 3.7171 | 106000 | 0.6664 | 0.5713 | 0.9113 |
| 3.7346 | 106500 | 0.6343 | 0.5742 | 0.9086 |
| 3.7521 | 107000 | 0.6686 | 0.5225 | 0.9113 |
| 3.7697 | 107500 | 0.7018 | 0.5221 | 0.9111 |
| 3.7872 | 108000 | 0.6479 | 0.5641 | 0.9113 |
| 3.8047 | 108500 | 0.7005 | 0.5352 | 0.9123 |
| 3.8223 | 109000 | 0.6068 | 0.5007 | 0.9107 |
| 3.8398 | 109500 | 0.6846 | 0.5593 | 0.9102 |
| 3.8573 | 110000 | 0.6272 | 0.5458 | 0.9107 |
| 3.8749 | 110500 | 0.685 | 0.5178 | 0.9100 |
| 3.8924 | 111000 | 0.5992 | 0.5200 | 0.9102 |
| 3.9099 | 111500 | 0.6231 | 0.5488 | 0.9101 |
| 3.9275 | 112000 | 0.6343 | 0.5496 | 0.9100 |
| 3.9450 | 112500 | 0.593 | 0.5207 | 0.9115 |
| 3.9625 | 113000 | 0.6017 | 0.5679 | 0.9108 |
| 3.9801 | 113500 | 0.6218 | 0.5174 | 0.9113 |
| 3.9976 | 114000 | 0.5916 | 0.5108 | 0.9118 |
| 4.0151 | 114500 | 0.603 | 0.5259 | 0.9117 |
| 4.0327 | 115000 | 0.6215 | 0.5362 | 0.9121 |
| 4.0502 | 115500 | 0.6784 | 0.5343 | 0.9112 |
| 4.0677 | 116000 | 0.65 | 0.5488 | 0.9114 |
| 4.0853 | 116500 | 0.632 | 0.4905 | 0.9119 |
| 4.1028 | 117000 | 0.6708 | 0.5091 | 0.9129 |
| 4.1203 | 117500 | 0.6374 | 0.5228 | 0.9124 |
| 4.1379 | 118000 | 0.6593 | 0.4976 | 0.9125 |
| 4.1554 | 118500 | 0.649 | 0.5151 | 0.9109 |
| 4.1729 | 119000 | 0.629 | 0.5303 | 0.9124 |
| 4.1905 | 119500 | 0.6709 | 0.4868 | 0.9121 |
| 4.2080 | 120000 | 0.5803 | 0.5177 | 0.9130 |
| 4.2255 | 120500 | 0.6356 | 0.5329 | 0.9140 |
| 4.2431 | 121000 | 0.6075 | 0.5057 | 0.9129 |
| 4.2606 | 121500 | 0.6463 | 0.5084 | 0.9126 |
| 4.2781 | 122000 | 0.6408 | 0.4859 | 0.9127 |
| 4.2957 | 122500 | 0.6331 | 0.5210 | 0.9114 |
| 4.3132 | 123000 | 0.6719 | 0.4893 | 0.9122 |
| 4.3308 | 123500 | 0.6227 | 0.5126 | 0.9129 |
| 4.3483 | 124000 | 0.6144 | 0.5293 | 0.9136 |
| 4.3658 | 124500 | 0.6589 | 0.4978 | 0.9127 |
| 4.3834 | 125000 | 0.6849 | 0.5195 | 0.9122 |
| 4.4009 | 125500 | 0.6731 | 0.5150 | 0.9119 |
| 4.4184 | 126000 | 0.658 | 0.4890 | 0.9136 |
| 4.4360 | 126500 | 0.6256 | 0.5271 | 0.9134 |
| 4.4535 | 127000 | 0.6295 | 0.5182 | 0.9129 |
| 4.4710 | 127500 | 0.6804 | 0.4870 | 0.9133 |
| 4.4886 | 128000 | 0.5868 | 0.4831 | 0.9129 |
| 4.5061 | 128500 | 0.6316 | 0.4963 | 0.9135 |
| 4.5236 | 129000 | 0.5873 | 0.5179 | 0.9149 |
| 4.5412 | 129500 | 0.6383 | 0.5188 | 0.9126 |
| 4.5587 | 130000 | 0.5936 | 0.5420 | 0.9117 |
| 4.5762 | 130500 | 0.654 | 0.5248 | 0.9123 |
| 4.5938 | 131000 | 0.6172 | 0.5067 | 0.9130 |
| 4.6113 | 131500 | 0.5766 | 0.5335 | 0.9117 |
| 4.6288 | 132000 | 0.5688 | 0.5345 | 0.9106 |
| 4.6464 | 132500 | 0.6254 | 0.5352 | 0.9115 |
| 4.6639 | 133000 | 0.5978 | 0.5244 | 0.9117 |
| 4.6814 | 133500 | 0.6332 | 0.5511 | 0.9119 |
| 4.6990 | 134000 | 0.6209 | 0.5356 | 0.9120 |
| 4.7165 | 134500 | 0.6166 | 0.5532 | 0.9125 |
| 4.7340 | 135000 | 0.5897 | 0.5888 | 0.9105 |
| 4.7516 | 135500 | 0.624 | 0.5153 | 0.9123 |
| 4.7691 | 136000 | 0.6563 | 0.5260 | 0.9134 |
| 4.7866 | 136500 | 0.6098 | 0.5603 | 0.9122 |
| 4.8042 | 137000 | 0.6313 | 0.5390 | 0.9124 |
| 4.8217 | 137500 | 0.5737 | 0.5093 | 0.9129 |
| 4.8392 | 138000 | 0.6475 | 0.5320 | 0.9114 |
| 4.8568 | 138500 | 0.5752 | 0.5531 | 0.9120 |
| 4.8743 | 139000 | 0.6378 | 0.4997 | 0.9114 |
| 4.8918 | 139500 | 0.5641 | 0.5121 | 0.9120 |
| 4.9094 | 140000 | 0.5771 | 0.5343 | 0.9114 |
| 4.9269 | 140500 | 0.5869 | 0.5277 | 0.9124 |
| 4.9444 | 141000 | 0.5417 | 0.5105 | 0.9143 |
| 4.9620 | 141500 | 0.5517 | 0.5664 | 0.9133 |
| 4.9795 | 142000 | 0.589 | 0.5326 | 0.9122 |
| 4.9970 | 142500 | 0.5449 | 0.5236 | 0.9136 |
| 5.0146 | 143000 | 0.5687 | 0.5217 | 0.9141 |
| 5.0321 | 143500 | 0.5815 | 0.5520 | 0.9131 |
| 5.0496 | 144000 | 0.6309 | 0.5290 | 0.9125 |
| 5.0672 | 144500 | 0.6086 | 0.5305 | 0.9128 |
| 5.0847 | 145000 | 0.5905 | 0.5044 | 0.9135 |
| 5.1022 | 145500 | 0.6242 | 0.5113 | 0.9144 |
| 5.1198 | 146000 | 0.603 | 0.5263 | 0.9137 |
| 5.1373 | 146500 | 0.6187 | 0.5086 | 0.9131 |
| 5.1548 | 147000 | 0.6007 | 0.5291 | 0.9136 |
| 5.1724 | 147500 | 0.5934 | 0.5113 | 0.9131 |
| 5.1899 | 148000 | 0.6208 | 0.4981 | 0.9142 |
| 5.2074 | 148500 | 0.5524 | 0.5414 | 0.9146 |
| 5.2250 | 149000 | 0.5941 | 0.5274 | 0.9146 |
| 5.2425 | 149500 | 0.5694 | 0.5315 | 0.9140 |
| 5.2600 | 150000 | 0.6045 | 0.5177 | 0.9138 |
| 5.2776 | 150500 | 0.5928 | 0.4923 | 0.9146 |
| 5.2951 | 151000 | 0.594 | 0.5209 | 0.9138 |
| 5.3126 | 151500 | 0.6303 | 0.5014 | 0.9137 |
| 5.3302 | 152000 | 0.5867 | 0.5151 | 0.9135 |
| 5.3477 | 152500 | 0.5686 | 0.5244 | 0.9142 |
| 5.3652 | 153000 | 0.6198 | 0.5063 | 0.9140 |
| 5.3828 | 153500 | 0.6458 | 0.5403 | 0.9131 |
| 5.4003 | 154000 | 0.6284 | 0.4988 | 0.9140 |
| 5.4178 | 154500 | 0.6192 | 0.5008 | 0.9143 |
| 5.4354 | 155000 | 0.5943 | 0.5334 | 0.9134 |
| 5.4529 | 155500 | 0.5725 | 0.5270 | 0.9141 |
| 5.4704 | 156000 | 0.656 | 0.4985 | 0.9146 |
| 5.4880 | 156500 | 0.5562 | 0.4863 | 0.9137 |
| 5.5055 | 157000 | 0.5888 | 0.5099 | 0.9141 |
| 5.5230 | 157500 | 0.5329 | 0.5039 | 0.9149 |
| 5.5406 | 158000 | 0.619 | 0.5232 | 0.9136 |
| 5.5581 | 158500 | 0.5528 | 0.5471 | 0.9135 |
| 5.5756 | 159000 | 0.6086 | 0.5226 | 0.9125 |
| 5.5932 | 159500 | 0.5895 | 0.5072 | 0.9132 |
| 5.6107 | 160000 | 0.5358 | 0.5419 | 0.9139 |
| 5.6282 | 160500 | 0.5438 | 0.5334 | 0.9121 |
| 5.6458 | 161000 | 0.579 | 0.5548 | 0.9118 |
| 5.6633 | 161500 | 0.5636 | 0.5257 | 0.9127 |
| 5.6808 | 162000 | 0.5984 | 0.5520 | 0.9136 |
| 5.6984 | 162500 | 0.581 | 0.5314 | 0.9135 |
| 5.7159 | 163000 | 0.5923 | 0.5665 | 0.9132 |
| 5.7334 | 163500 | 0.5433 | 0.5717 | 0.9121 |
| 5.7510 | 164000 | 0.583 | 0.5338 | 0.9137 |
| 5.7685 | 164500 | 0.6272 | 0.5275 | 0.9137 |
| 5.7860 | 165000 | 0.576 | 0.5657 | 0.9130 |
| 5.8036 | 165500 | 0.5983 | 0.5457 | 0.9131 |
| 5.8211 | 166000 | 0.5389 | 0.5252 | 0.9141 |
| 5.8386 | 166500 | 0.6035 | 0.5478 | 0.9131 |
| 5.8562 | 167000 | 0.5398 | 0.5334 | 0.9136 |
| 5.8737 | 167500 | 0.5986 | 0.5021 | 0.9136 |
| 5.8912 | 168000 | 0.5383 | 0.5261 | 0.9137 |
| 5.9088 | 168500 | 0.5376 | 0.5374 | 0.9128 |
| 5.9263 | 169000 | 0.5555 | 0.5375 | 0.9136 |
| 5.9438 | 169500 | 0.5182 | 0.5230 | 0.9137 |
| 5.9614 | 170000 | 0.5175 | 0.5653 | 0.9143 |
| 5.9789 | 170500 | 0.5572 | 0.5433 | 0.9141 |
| 5.9964 | 171000 | 0.5169 | 0.5035 | 0.9151 |
| 6.0140 | 171500 | 0.5336 | 0.5178 | 0.9149 |
| 6.0315 | 172000 | 0.5479 | 0.5427 | 0.9141 |
| 6.0490 | 172500 | 0.5885 | 0.5417 | 0.9137 |
| 6.0666 | 173000 | 0.5694 | 0.5232 | 0.9138 |
| 6.0841 | 173500 | 0.5634 | 0.5074 | 0.9142 |
| 6.1016 | 174000 | 0.5888 | 0.5102 | 0.9145 |
| 6.1192 | 174500 | 0.576 | 0.5225 | 0.9148 |
| 6.1367 | 175000 | 0.5843 | 0.5161 | 0.9144 |
| 6.1542 | 175500 | 0.5635 | 0.5244 | 0.9141 |
| 6.1718 | 176000 | 0.5666 | 0.5088 | 0.9149 |
| 6.1893 | 176500 | 0.5868 | 0.5185 | 0.9150 |
| 6.2068 | 177000 | 0.5211 | 0.5348 | 0.9154 |
| 6.2244 | 177500 | 0.5672 | 0.5268 | 0.9150 |
| 6.2419 | 178000 | 0.5286 | 0.5431 | 0.9141 |
| 6.2594 | 178500 | 0.5723 | 0.5359 | 0.9154 |
| 6.2770 | 179000 | 0.5648 | 0.5016 | 0.9154 |
| 6.2945 | 179500 | 0.5566 | 0.5200 | 0.9145 |
| 6.3120 | 180000 | 0.6074 | 0.5132 | 0.9145 |
| 6.3296 | 180500 | 0.5473 | 0.5294 | 0.9145 |
| 6.3471 | 181000 | 0.5325 | 0.5380 | 0.9150 |
| 6.3646 | 181500 | 0.5868 | 0.5243 | 0.9149 |
| 6.3822 | 182000 | 0.6155 | 0.5368 | 0.9143 |
| 6.3997 | 182500 | 0.5944 | 0.4978 | 0.9149 |
| 6.4172 | 183000 | 0.5838 | 0.5224 | 0.9146 |
| 6.4348 | 183500 | 0.5644 | 0.5384 | 0.9146 |
| 6.4523 | 184000 | 0.5471 | 0.5549 | 0.9152 |
| 6.4698 | 184500 | 0.6198 | 0.5101 | 0.9147 |
| 6.4874 | 185000 | 0.5304 | 0.5016 | 0.9152 |
| 6.5049 | 185500 | 0.5621 | 0.5076 | 0.9155 |
| 6.5224 | 186000 | 0.5027 | 0.5085 | 0.9148 |
| 6.5400 | 186500 | 0.5882 | 0.5293 | 0.9147 |
| 6.5575 | 187000 | 0.5228 | 0.5374 | 0.9152 |
| 6.5750 | 187500 | 0.5717 | 0.5233 | 0.9140 |
| 6.5926 | 188000 | 0.5651 | 0.5269 | 0.9136 |
| 6.6101 | 188500 | 0.5182 | 0.5328 | 0.9140 |
| 6.6276 | 189000 | 0.508 | 0.5250 | 0.9134 |
| 6.6452 | 189500 | 0.5464 | 0.5427 | 0.9128 |
| 6.6627 | 190000 | 0.5362 | 0.5137 | 0.9136 |
| 6.6802 | 190500 | 0.5732 | 0.5161 | 0.9148 |
| 6.6978 | 191000 | 0.5466 | 0.5416 | 0.9136 |
| 6.7153 | 191500 | 0.5501 | 0.5736 | 0.9137 |
| 6.7328 | 192000 | 0.5258 | 0.5528 | 0.9130 |
| 6.7504 | 192500 | 0.5589 | 0.5380 | 0.9142 |
| 6.7679 | 193000 | 0.5947 | 0.5297 | 0.9148 |
| 6.7854 | 193500 | 0.5579 | 0.5590 | 0.9145 |
| 6.8030 | 194000 | 0.5644 | 0.5412 | 0.9142 |
| 6.8205 | 194500 | 0.5128 | 0.5181 | 0.9137 |
| 6.8380 | 195000 | 0.5802 | 0.5451 | 0.9136 |
| 6.8556 | 195500 | 0.5002 | 0.5293 | 0.9144 |
| 6.8731 | 196000 | 0.5763 | 0.5153 | 0.9140 |
| 6.8906 | 196500 | 0.5205 | 0.5261 | 0.9144 |
| 6.9082 | 197000 | 0.5112 | 0.5342 | 0.9149 |
| 6.9257 | 197500 | 0.523 | 0.5503 | 0.9140 |
| 6.9432 | 198000 | 0.4875 | 0.5420 | 0.9148 |
| 6.9608 | 198500 | 0.4963 | 0.5638 | 0.9142 |
| 6.9783 | 199000 | 0.5327 | 0.5536 | 0.9149 |
| 6.9958 | 199500 | 0.4822 | 0.5224 | 0.9141 |
| 7.0134 | 200000 | 0.5078 | 0.5300 | 0.9140 |
| 7.0309 | 200500 | 0.5208 | 0.5486 | 0.9149 |
| 7.0484 | 201000 | 0.5641 | 0.5442 | 0.9148 |
| 7.0660 | 201500 | 0.5484 | 0.5165 | 0.9143 |
| 7.0835 | 202000 | 0.5289 | 0.5206 | 0.9142 |
| 7.1010 | 202500 | 0.557 | 0.5178 | 0.9146 |
| 7.1186 | 203000 | 0.556 | 0.5190 | 0.9147 |
| 7.1361 | 203500 | 0.5567 | 0.5244 | 0.9143 |
| 7.1536 | 204000 | 0.5376 | 0.5212 | 0.9148 |
| 7.1712 | 204500 | 0.5448 | 0.5138 | 0.9150 |
| 7.1887 | 205000 | 0.5541 | 0.5231 | 0.9155 |
| 7.2062 | 205500 | 0.5006 | 0.5261 | 0.9155 |
| 7.2238 | 206000 | 0.5366 | 0.5184 | 0.9159 |
| 7.2413 | 206500 | 0.5127 | 0.5360 | 0.9148 |
| 7.2588 | 207000 | 0.5469 | 0.5225 | 0.9148 |
| 7.2764 | 207500 | 0.5414 | 0.5080 | 0.9152 |
| 7.2939 | 208000 | 0.5361 | 0.5135 | 0.9151 |
| 7.3114 | 208500 | 0.5833 | 0.5132 | 0.9147 |
| 7.3290 | 209000 | 0.515 | 0.5282 | 0.9137 |
| 7.3465 | 209500 | 0.5165 | 0.5362 | 0.9154 |
| 7.3640 | 210000 | 0.5551 | 0.5327 | 0.9159 |
| 7.3816 | 210500 | 0.5845 | 0.5409 | 0.9143 |
| 7.3991 | 211000 | 0.5798 | 0.5057 | 0.9147 |
| 7.4166 | 211500 | 0.5614 | 0.5275 | 0.9149 |
| 7.4342 | 212000 | 0.5445 | 0.5175 | 0.9153 |
| 7.4517 | 212500 | 0.5175 | 0.5424 | 0.9139 |
| 7.4692 | 213000 | 0.6043 | 0.5075 | 0.9148 |
| 7.4868 | 213500 | 0.5051 | 0.5067 | 0.9154 |
| 7.5043 | 214000 | 0.5337 | 0.5143 | 0.9153 |
| 7.5218 | 214500 | 0.4822 | 0.5049 | 0.9156 |
| 7.5394 | 215000 | 0.5722 | 0.5359 | 0.9153 |
| 7.5569 | 215500 | 0.5014 | 0.5306 | 0.9147 |
| 7.5744 | 216000 | 0.5441 | 0.5222 | 0.9138 |
| 7.5920 | 216500 | 0.5391 | 0.5261 | 0.9138 |
| 7.6095 | 217000 | 0.494 | 0.5275 | 0.9144 |
| 7.6270 | 217500 | 0.4881 | 0.5268 | 0.9141 |
| 7.6446 | 218000 | 0.5263 | 0.5381 | 0.9138 |
| 7.6621 | 218500 | 0.5017 | 0.5209 | 0.9134 |
| 7.6796 | 219000 | 0.5566 | 0.5347 | 0.9138 |
| 7.6972 | 219500 | 0.5201 | 0.5519 | 0.9135 |
| 7.7147 | 220000 | 0.5269 | 0.5718 | 0.9143 |
| 7.7322 | 220500 | 0.5125 | 0.5442 | 0.9135 |
| 7.7498 | 221000 | 0.5307 | 0.5292 | 0.9142 |
| 7.7673 | 221500 | 0.5718 | 0.5179 | 0.9140 |
| 7.7848 | 222000 | 0.5345 | 0.5512 | 0.9147 |
| 7.8024 | 222500 | 0.5456 | 0.5447 | 0.9143 |
| 7.8199 | 223000 | 0.4889 | 0.5197 | 0.9144 |
| 7.8374 | 223500 | 0.5532 | 0.5487 | 0.9146 |
| 7.8550 | 224000 | 0.4902 | 0.5257 | 0.9137 |
| 7.8725 | 224500 | 0.5535 | 0.5095 | 0.9135 |
| 7.8900 | 225000 | 0.4988 | 0.5404 | 0.9141 |
| 7.9076 | 225500 | 0.4883 | 0.5280 | 0.9143 |
| 7.9251 | 226000 | 0.4975 | 0.5458 | 0.9133 |
| 7.9426 | 226500 | 0.4698 | 0.5357 | 0.9147 |
| 7.9602 | 227000 | 0.4831 | 0.5391 | 0.9143 |
| 7.9777 | 227500 | 0.5073 | 0.5492 | 0.9148 |
| 7.9952 | 228000 | 0.4637 | 0.5140 | 0.9148 |
| 8.0128 | 228500 | 0.4817 | 0.5200 | 0.9137 |
| 8.0303 | 229000 | 0.5078 | 0.5370 | 0.9146 |
| 8.0478 | 229500 | 0.5342 | 0.5497 | 0.9149 |
| 8.0654 | 230000 | 0.5317 | 0.5179 | 0.9156 |
| 8.0829 | 230500 | 0.5074 | 0.5286 | 0.9151 |
| 8.1004 | 231000 | 0.5302 | 0.5165 | 0.9162 |
| 8.1180 | 231500 | 0.5481 | 0.5200 | 0.9163 |
| 8.1355 | 232000 | 0.538 | 0.5216 | 0.9161 |
| 8.1530 | 232500 | 0.5168 | 0.5189 | 0.9152 |
| 8.1706 | 233000 | 0.5118 | 0.5195 | 0.9153 |
| 8.1881 | 233500 | 0.5394 | 0.5192 | 0.9155 |
| 8.2056 | 234000 | 0.488 | 0.5100 | 0.9153 |
| 8.2232 | 234500 | 0.5214 | 0.5162 | 0.9161 |
| 8.2407 | 235000 | 0.4944 | 0.5343 | 0.9149 |
| 8.2582 | 235500 | 0.5226 | 0.5190 | 0.9152 |
| 8.2758 | 236000 | 0.5234 | 0.5146 | 0.9159 |
| 8.2933 | 236500 | 0.5165 | 0.5011 | 0.9153 |
| 8.3108 | 237000 | 0.5599 | 0.5129 | 0.9152 |
| 8.3284 | 237500 | 0.4991 | 0.5212 | 0.9154 |
| 8.3459 | 238000 | 0.5007 | 0.5383 | 0.9148 |
| 8.3634 | 238500 | 0.5406 | 0.5394 | 0.9154 |
| 8.3810 | 239000 | 0.5606 | 0.5445 | 0.9147 |
| 8.3985 | 239500 | 0.5626 | 0.5143 | 0.9149 |
| 8.4160 | 240000 | 0.5353 | 0.5338 | 0.9156 |
| 8.4336 | 240500 | 0.5168 | 0.5208 | 0.9158 |
| 8.4511 | 241000 | 0.5058 | 0.5312 | 0.9146 |
| 8.4686 | 241500 | 0.5919 | 0.5143 | 0.9149 |
| 8.4862 | 242000 | 0.4883 | 0.5149 | 0.9159 |
| 8.5037 | 242500 | 0.5072 | 0.5132 | 0.9156 |
| 8.5212 | 243000 | 0.4655 | 0.5111 | 0.9148 |
| 8.5388 | 243500 | 0.5592 | 0.5269 | 0.9155 |
| 8.5563 | 244000 | 0.4836 | 0.5217 | 0.9152 |
| 8.5738 | 244500 | 0.5299 | 0.5269 | 0.9143 |
| 8.5914 | 245000 | 0.5081 | 0.5206 | 0.9136 |
| 8.6089 | 245500 | 0.48 | 0.5159 | 0.9144 |
| 8.6264 | 246000 | 0.4713 | 0.5272 | 0.9141 |
| 8.6440 | 246500 | 0.5038 | 0.5287 | 0.9139 |
| 8.6615 | 247000 | 0.4872 | 0.5199 | 0.9142 |
| 8.6790 | 247500 | 0.5429 | 0.5227 | 0.9138 |
| 8.6966 | 248000 | 0.5042 | 0.5402 | 0.9136 |
| 8.7141 | 248500 | 0.511 | 0.5530 | 0.9141 |
| 8.7316 | 249000 | 0.5097 | 0.5374 | 0.9131 |
| 8.7492 | 249500 | 0.4974 | 0.5312 | 0.9138 |
| 8.7667 | 250000 | 0.5617 | 0.5381 | 0.9148 |
| 8.7842 | 250500 | 0.5234 | 0.5476 | 0.9150 |
| 8.8018 | 251000 | 0.5133 | 0.5447 | 0.9147 |
| 8.8193 | 251500 | 0.488 | 0.5270 | 0.9148 |
| 8.8368 | 252000 | 0.5377 | 0.5325 | 0.9144 |
| 8.8544 | 252500 | 0.479 | 0.5324 | 0.9145 |
| 8.8719 | 253000 | 0.5329 | 0.5200 | 0.9140 |
| 8.8894 | 253500 | 0.4744 | 0.5346 | 0.9140 |
| 8.9070 | 254000 | 0.4827 | 0.5333 | 0.9145 |
| 8.9245 | 254500 | 0.4757 | 0.5415 | 0.9139 |
| 8.9420 | 255000 | 0.4504 | 0.5307 | 0.9147 |
| 8.9596 | 255500 | 0.4657 | 0.5337 | 0.9146 |
| 8.9771 | 256000 | 0.4976 | 0.5473 | 0.9150 |
| 8.9946 | 256500 | 0.459 | 0.5214 | 0.9144 |
| 9.0122 | 257000 | 0.4615 | 0.5296 | 0.9147 |
| 9.0297 | 257500 | 0.5019 | 0.5312 | 0.9149 |
| 9.0472 | 258000 | 0.5142 | 0.5379 | 0.9152 |
| 9.0648 | 258500 | 0.5174 | 0.5197 | 0.9150 |
| 9.0823 | 259000 | 0.4896 | 0.5277 | 0.9155 |
| 9.0998 | 259500 | 0.5114 | 0.5240 | 0.9161 |
| 9.1174 | 260000 | 0.529 | 0.5293 | 0.9155 |
| 9.1349 | 260500 | 0.5305 | 0.5242 | 0.9157 |
| 9.1524 | 261000 | 0.4941 | 0.5160 | 0.9155 |
| 9.1700 | 261500 | 0.5025 | 0.5274 | 0.9153 |
| 9.1875 | 262000 | 0.5148 | 0.5198 | 0.9155 |
| 9.2050 | 262500 | 0.4882 | 0.5116 | 0.9160 |
| 9.2226 | 263000 | 0.4964 | 0.5139 | 0.9155 |
| 9.2401 | 263500 | 0.4792 | 0.5284 | 0.9153 |
| 9.2576 | 264000 | 0.5089 | 0.5175 | 0.9154 |
| 9.2752 | 264500 | 0.5124 | 0.5188 | 0.9154 |
| 9.2927 | 265000 | 0.4968 | 0.5153 | 0.9152 |
| 9.3102 | 265500 | 0.5454 | 0.5129 | 0.9152 |
| 9.3278 | 266000 | 0.4858 | 0.5209 | 0.9147 |
| 9.3453 | 266500 | 0.4822 | 0.5257 | 0.9148 |
| 9.3628 | 267000 | 0.5343 | 0.5298 | 0.9148 |
| 9.3804 | 267500 | 0.5443 | 0.5303 | 0.9145 |
| 9.3979 | 268000 | 0.546 | 0.5204 | 0.9153 |
| 9.4154 | 268500 | 0.5253 | 0.5326 | 0.9154 |
| 9.4330 | 269000 | 0.5062 | 0.5270 | 0.9154 |
| 9.4505 | 269500 | 0.4901 | 0.5284 | 0.9150 |
| 9.4680 | 270000 | 0.5675 | 0.5271 | 0.9154 |
| 9.4856 | 270500 | 0.4831 | 0.5263 | 0.9152 |
| 9.5031 | 271000 | 0.4873 | 0.5256 | 0.9152 |
| 9.5206 | 271500 | 0.4576 | 0.5208 | 0.9155 |
| 9.5382 | 272000 | 0.5392 | 0.5250 | 0.9154 |
| 9.5557 | 272500 | 0.4716 | 0.5238 | 0.9158 |
| 9.5732 | 273000 | 0.5202 | 0.5282 | 0.9156 |
| 9.5908 | 273500 | 0.5036 | 0.5284 | 0.9149 |
| 9.6083 | 274000 | 0.4645 | 0.5216 | 0.9151 |
| 9.6258 | 274500 | 0.4683 | 0.5273 | 0.9154 |
| 9.6434 | 275000 | 0.4881 | 0.5307 | 0.9154 |
| 9.6609 | 275500 | 0.4677 | 0.5234 | 0.9155 |
| 9.6784 | 276000 | 0.54 | 0.5212 | 0.9153 |
| 9.6960 | 276500 | 0.4948 | 0.5277 | 0.9150 |
| 9.7135 | 277000 | 0.5008 | 0.5293 | 0.9150 |
| 9.7310 | 277500 | 0.4907 | 0.5307 | 0.9147 |
| 9.7486 | 278000 | 0.4876 | 0.5276 | 0.9144 |
| 9.7661 | 278500 | 0.539 | 0.5324 | 0.9145 |
| 9.7836 | 279000 | 0.5147 | 0.5325 | 0.9145 |
| 9.8012 | 279500 | 0.5095 | 0.5367 | 0.9150 |
| 9.8187 | 280000 | 0.476 | 0.5333 | 0.9147 |
| 9.8362 | 280500 | 0.5189 | 0.5325 | 0.9150 |
| 9.8538 | 281000 | 0.4633 | 0.5342 | 0.9149 |
| 9.8713 | 281500 | 0.5199 | 0.5314 | 0.9146 |
| 9.8888 | 282000 | 0.4645 | 0.5312 | 0.9151 |
| 9.9064 | 282500 | 0.4702 | 0.5339 | 0.9151 |
| 9.9239 | 283000 | 0.4609 | 0.5362 | 0.9151 |
| 9.9414 | 283500 | 0.4365 | 0.5340 | 0.9152 |
| 9.9590 | 284000 | 0.4587 | 0.5339 | 0.9152 |
| 9.9765 | 284500 | 0.4861 | 0.5355 | 0.9153 |
| 9.9940 | 285000 | 0.4473 | 0.5352 | 0.9153 |

</details>
### Framework Versions

- Python: 3.10.10
- Sentence Transformers: 3.4.0.dev0
- Transformers: 4.46.3
- PyTorch: 2.5.1+cu124
- Accelerate: 0.34.2
- Datasets: 2.21.0
- Tokenizers: 0.20.3

## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MultipleNegativesRankingLoss

```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```