---
base_model: sentence-transformers/all-MiniLM-L6-v2
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1943715
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: percentage of irrigated land in india is about
sentences:
- Irrigation in India Irrigation in India Irrigation in India includes a network
of major and minor canals from Indian rivers, groundwater well based systems,
tanks, and other rainwater harvesting projects for agricultural activities. Of
these groundwater system is the largest. In 2013-14, only about 47.7% of total
agricultural land in India was reliably irrigated. The largest canal in India
is Indira Gandhi Canal, which is about 650 km long. About 2/3rd cultivated land
in India is dependent on monsoons. Irrigation in India helps improve food security,
reduce dependence on monsoons, improve agricultural productivity and create rural
job opportunities. Dams used for irrigation projects
- Waiting for a Girl Like You Waiting for a Girl Like You "Waiting for a Girl Like
You" is a 1981 power ballad by the British-American rock band Foreigner. The distinctive
synthesizer theme was performed by the then-little-known Thomas Dolby, and this
song also marked a major departure from their earlier singles because their previous
singles were mid to upper tempo rock songs while this song was a softer love song
with the energy of a power ballad. It was the second single released from the
album "4" (1981) and was co-written by Lou Gramm and Mick Jones. It has become
one of the band's most
- Agriculture in India 2010, only about 35% of agricultural land in India was reliably
irrigated. About 2/3rd cultivated land in India is dependent on monsoons. The
improvements in irrigation infrastructure in the last 50 years have helped India
improve food security, reduce dependence on monsoons, improve agricultural productivity
and create rural job opportunities. Dams used for irrigation projects have helped
provide drinking water to a growing rural population, control flood and prevent
drought-related damage to agriculture. , India had a large and diverse agricultural
sector, accounting, on average, for about 16% of GDP and 10% of export earnings.
India's arable land area of
- source_sentence: 'Use of multiple antimicrobial drugs by clinical patients: a prognostic
index of hospital mortality?'
sentences:
- Recent reports have suggested that extramedullary (EM) relapse of acute myeloid
leukemia (AML) post-hematopoietic stem cell transplantation (HSCT), unlike isolated
bone marrow (BM) relapse, is associated with improved prognosis. We reviewed the
outcomes of relapsed AML post-HSCT at our institution to determine whether survival
for patients with EM relapse was truly improved in comparison to patients suffering
BM relapses treated in a similar (active) way.Outcomes of all 274 allogeneic HSCT
performed for adult AML between 2000 and 2010 at our institution were retrospectively
reviewed.As of January 2011, 72 relapses post-HSCT had occurred, including 64
BM relapses (89%), two concomitant BM and EM relapses (3%), and six EM relapses
alone (8%). EM relapses occurred significantly later post-HSCT than BM relapses
(median 25.2 vs 3.9 months, respectively; P = 0.001). Patients suffering an EM
relapse were significantly more likely to receive active therapy at relapse (7/8;
88%) than those suffering a BM relapse alone (28/64; 44%; P = 0.026). When survival
analysis was restricted to outcomes of patients treated actively (i.e., with curative
intent), no difference in outcome between EM and BM relapses was observed (median
survival 13.5 vs 8 months for EM vs BM relapses, respectively, P = 0.44).
- 'Laparoscopic box model trainers have been used in training curricula for a long
time, however data on their impact on skills acquisition is still limited. Our
aim was to validate a low cost box model trainer as a tool for the training of
skills relevant to laparoscopic surgery.Randomised, controlled trial (Canadian
Task Force Classification I).University Hospital.Sixteen gynaecologic residents
with limited laparoscopic experience were randomised to a group that received
a structured box model training curriculum, and a control group. Performance before
and after the training was assessed in a virtual reality laparoscopic trainer
(LapSim and was based on objective parameters, registered by the computer system
(time, error, and economy of motion scores). Group A showed significantly greater
improvement in all performance parameters compared with the control group: economy
of movement (p=0.001), time (p=0.001) and tissue damage (p=0.036), confirming
the positive impact of box-trainer curriculum on laparoscopic skills acquisition.'
- To quantify the use of multiple and prolonged antibiotics and anti-infective drug
therapy in clinical patients in a 144-bed hospital.Adult patients (2,790 patients
with 3,706 admissions over a period of 19 months) were investigated prospectively
regarding treatment with anti-infective agents. The mean age was 57.4 (range,
18.8-97 years), and 54.3% were females (2012).Hospital stay was 5.5 (6.7 days
(range, 2-226 days), with duration up to 10 days for 91.9% of the subjects. Antibiotics
or other agents were administered to 1,166 subjects (31.5%), 325 (8.8%) required
assistance in the ICU, and a total of 141 (3.8%) died. The association between
anti-infective drug therapy and hospital mortality was statistically significant
(P<.01) with a strong linear correlation (r = 0.902, P = .014). The quantity of
prescribed antimicrobial drugs, age, and need for ICU assistance were independent
variables for death by logistic regression analysis. The odds ratio for anti-infective
drug therapy was 1.341 (1.043 to 1.725); for age, 1.042 ( 1.026 to 1.058); and
for stay in the ICU, 11.226 ( 6.648 to 18.957).
- source_sentence: who is notre dame de paris dedicated to
sentences:
- Musée de Notre Dame de Paris paintings; and historical documents including a
petition to restore the cathedral signed by, among others, Victor Hugo and Jean
Auguste Dominique Ingres. The museum closed in November 2008. [and opened again
in 2013] Musée de Notre Dame de Paris The Musée de Notre Dame de Paris was a small
museum dedicated to the cathedral of Notre Dame de Paris and its archaeology.
It stands at 10 Rue du Cloître Notre Dame, Paris, France, and was open to the
public several afternoons per week; an admission fee was charged. The museum was
established in 1951 to present the cathedral's history, as
- 'Smoking serves different functions for men and women. Thus, we wanted to investigate
the association between smoking behaviour and intakes of selected healthy foods
in men and women with special focus on differences and similarities between the
two genders.In 1993-1997, a random sample of 80 996 men and 79 729 women aged
50-64 y was invited to participate in the study ''Diet, Cancer and Health''. In
all, 27 179 men and 29 876 women attended a health examination and completed a
192-item food-frequency questionnaire (FFQ). The association between smoking status
and low, median and high intakes of selected foods was examined among 25 821 men
and 28 596 women.The greater Copenhagen and Aarhus area, Denmark.For both men
and women, smoking status group was associated with diet, such that increasing
level of smoking status ranging from never smokers over ex-smokers to currently
heavy smokers was associated with a lower intake of the healthy foods: fresh fruit,
cooked vegetables, raw vegetables/salad, and olive oil. For wine, increasing level
of smoking status category was associated with a higher fraction of abstainers
and heavy drinkers. The difference between the extreme smoking status categories
was larger than the difference between men and women within smoking status categories
such that never smoking men in general had a higher intake of healthy foods than
heavy smoking women. Correction for age, educational level, and body mass index
(BMI) did not affect the results.'
- Notre-Dame de Paris rededicated to the Cult of Reason, and then to the Cult of
the Supreme Being. During this time, many of the treasures of the cathedral were
either destroyed or plundered. The twenty-eight statues of biblical kings located
at the west facade, mistaken for statues of French kings, were beheaded. Many
of the heads were found during a 1977 excavation nearby, and are on display at
the Musée de Cluny. For a time the Goddess of Liberty replaced the Virgin Mary
on several altars. The cathedral's great bells escaped being melted down. All
of the other large statues on the facade,
- source_sentence: who sang schoolhouse rock i 'm just a bill
sentences:
- Grand Hotel (Mackinac Island) In 1886, the Michigan Central Railroad, Grand Rapids
and Indiana Railroad, and Detroit and Cleveland Steamship Navigation Company formed
the Mackinac Island Hotel Company. The group purchased the land on which the hotel
was built and construction began, based upon the design by Detroit architects
Mason and Rice. When it opened the following year, the hotel was advertised to
Chicago, Erie, Montreal and Detroit residents as a summer retreat for vacationers
who arrived by lake steamer and by rail from across the continent. The hotel opened
on July 10, 1887 and took a mere 93 days to complete. At its
- Jack Sheldon He was Griffin's sidekick for many years. His voice is perhaps best
known from the "Schoolhouse Rock!" cartoons of the 1970s, such as "Conjunction
Junction" and "I'm Just a Bill." He appeared in one episode of "Johnny Bravo"
as the Sensitive Man. He sang a few songs in the episode similar to the "Schoolhouse
Rock!" style. Sheldon returned to the "Schoolhouse Rock!" series for a 2002 episode
titled "I'm Gonna Send Your Vote to College," explaining the electoral college
process, and distributed on the series' DVD collection that same year. Sheldon
sang and played trumpet for the new segment. Sheldon
- I'm Just a Bill I'm Just a Bill "I'm Just a Bill" is a 1976 "Schoolhouse Rock!"
segment, featuring a song of the same title written by Dave Frishberg. The segment
debuted as part of "America Rock", the third season of the Schoolhouse Rock series.
The song featured in the segment is sung by Jack Sheldon (the voice of the Bill),
with dialogue by Sheldon's son John as the boy learning the process. It is about
how a bill becomes a law, how it must go through Congress, and how it can be vetoed,
etc. The Bill is for the law that school buses
- source_sentence: who does the chief risk officer report to
sentences:
- Chief risk officer a company's executive chief officer and chief financial officer
to clarify the precision of its financial reports. Moreover, to ensure the mentioned
accuracy of financial reports, internal controls are required. Accordingly, each
financial report required an internal control report to prevent fraud. Furthermore,
the CRO has to be aware of everything occurring in his company on a daily basis,
but he must also be current on all of the requirements from the SEC. In addition,
the CRO restrains corporate risk by managing compliance. Why is a CRO so important
in financial institutions? There is a report of having a CRO
- Chief risk officer Chief risk officer The chief risk officer (CRO) or chief risk
management officer (CRMO) of a firm or corporation is the executive accountable
for enabling the efficient and effective governance of significant risks, and
related opportunities, to a business and its various segments. Risks are commonly
categorized as strategic, reputational, operational, financial, or compliance-related.
CROs are accountable to the Executive Committee and The Board for enabling the
business to balance risk and reward. In more complex organizations, they are generally
responsible for coordinating the organization's Enterprise Risk Management (ERM)
approach. The CRO is responsible for assessing and mitigating significant competitive,
- Foundations of Constraint Satisfaction
model-index:
- name: all-MiniLM-L6-v2 trained on MEDI-MTEB triplets
results:
- task:
type: triplet
name: Triplet
dataset:
name: medi mteb dev
type: medi-mteb-dev
metrics:
- type: cosine_accuracy
value: 0.9156494608352947
name: Cosine Accuracy
---
# all-MiniLM-L6-v2 trained on MEDI-MTEB triplets
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) on the NQ, pubmed, specter_train_triples, S2ORC_citations_abstracts, fever, gooaq_pairs, codesearchnet, wikihow, WikiAnswers, eli5_question_answer, amazon-qa, medmcqa, zeroshot, TriviaQA_pairs, PAQ_pairs, stackexchange_duplicate_questions_title-body_title-body, trex, flickr30k_captions, hotpotqa, task671_ambigqa_text_generation, task061_ropes_answer_generation, task285_imdb_answer_generation, task905_hate_speech_offensive_classification, task566_circa_classification, task184_snli_entailment_to_neutral_text_modification, task280_stereoset_classification_stereotype_type, task1599_smcalflow_classification, task1384_deal_or_no_dialog_classification, task591_sciq_answer_generation, task823_peixian-rtgender_sentiment_analysis, task023_cosmosqa_question_generation, task900_freebase_qa_category_classification, task924_event2mind_word_generation, task152_tomqa_find_location_easy_noise, task1368_healthfact_sentence_generation, task1661_super_glue_classification, task1187_politifact_classification, task1728_web_nlg_data_to_text, task112_asset_simple_sentence_identification, task1340_msr_text_compression_compression, task072_abductivenli_answer_generation, task1504_hatexplain_answer_generation, task684_online_privacy_policy_text_information_type_generation, task1290_xsum_summarization, task075_squad1.1_answer_generation, task1587_scifact_classification, task384_socialiqa_question_classification, task1555_scitail_answer_generation, task1532_daily_dialog_emotion_classification, task239_tweetqa_answer_generation, task596_mocha_question_generation, task1411_dart_subject_identification, task1359_numer_sense_answer_generation, task329_gap_classification, task220_rocstories_title_classification, task316_crows-pairs_classification_stereotype, task495_semeval_headline_classification, task1168_brown_coarse_pos_tagging, task348_squad2.0_unanswerable_question_generation, task049_multirc_questions_needed_to_answer, task1534_daily_dialog_question_classification, task322_jigsaw_classification_threat, task295_semeval_2020_task4_commonsense_reasoning, task186_snli_contradiction_to_entailment_text_modification, task034_winogrande_question_modification_object, task160_replace_letter_in_a_sentence, task469_mrqa_answer_generation, task105_story_cloze-rocstories_sentence_generation, task649_race_blank_question_generation, task1536_daily_dialog_happiness_classification, task683_online_privacy_policy_text_purpose_answer_generation, task024_cosmosqa_answer_generation, task584_udeps_eng_fine_pos_tagging, task066_timetravel_binary_consistency_classification, task413_mickey_en_sentence_perturbation_generation, task182_duorc_question_generation, task028_drop_answer_generation, task1601_webquestions_answer_generation, task1295_adversarial_qa_question_answering, task201_mnli_neutral_classification, task038_qasc_combined_fact, task293_storycommonsense_emotion_text_generation, task572_recipe_nlg_text_generation, task517_emo_classify_emotion_of_dialogue, task382_hybridqa_answer_generation, task176_break_decompose_questions, task1291_multi_news_summarization, task155_count_nouns_verbs, task031_winogrande_question_generation_object, task279_stereoset_classification_stereotype, task1336_peixian_equity_evaluation_corpus_gender_classifier, task508_scruples_dilemmas_more_ethical_isidentifiable, task518_emo_different_dialogue_emotions, 
task077_splash_explanation_to_sql, task923_event2mind_classifier, task470_mrqa_question_generation, task638_multi_woz_classification, task1412_web_questions_question_answering, task847_pubmedqa_question_generation, task678_ollie_actual_relationship_answer_generation, task290_tellmewhy_question_answerability, task575_air_dialogue_classification, task189_snli_neutral_to_contradiction_text_modification, task026_drop_question_generation, task162_count_words_starting_with_letter, task079_conala_concat_strings, task610_conllpp_ner, task046_miscellaneous_question_typing, task197_mnli_domain_answer_generation, task1325_qa_zre_question_generation_on_subject_relation, task430_senteval_subject_count, task672_nummersense, task402_grailqa_paraphrase_generation, task904_hate_speech_offensive_classification, task192_hotpotqa_sentence_generation, task069_abductivenli_classification, task574_air_dialogue_sentence_generation, task187_snli_entailment_to_contradiction_text_modification, task749_glucose_reverse_cause_emotion_detection, task1552_scitail_question_generation, task750_aqua_multiple_choice_answering, task327_jigsaw_classification_toxic, task1502_hatexplain_classification, task328_jigsaw_classification_insult, task304_numeric_fused_head_resolution, task1293_kilt_tasks_hotpotqa_question_answering, task216_rocstories_correct_answer_generation, task1326_qa_zre_question_generation_from_answer, task1338_peixian_equity_evaluation_corpus_sentiment_classifier, task1729_personachat_generate_next, task1202_atomic_classification_xneed, task400_paws_paraphrase_classification, task502_scruples_anecdotes_whoiswrong_verification, task088_identify_typo_verification, task221_rocstories_two_choice_classification, task200_mnli_entailment_classification, task074_squad1.1_question_generation, task581_socialiqa_question_generation, task1186_nne_hrngo_classification, task898_freebase_qa_answer_generation, task1408_dart_similarity_classification, task168_strategyqa_question_decomposition, task1357_xlsum_summary_generation, task390_torque_text_span_selection, task165_mcscript_question_answering_commonsense, task1533_daily_dialog_formal_classification, task002_quoref_answer_generation, task1297_qasc_question_answering, task305_jeopardy_answer_generation_normal, task029_winogrande_full_object, task1327_qa_zre_answer_generation_from_question, task326_jigsaw_classification_obscene, task1542_every_ith_element_from_starting, task570_recipe_nlg_ner_generation, task1409_dart_text_generation, task401_numeric_fused_head_reference, task846_pubmedqa_classification, task1712_poki_classification, task344_hybridqa_answer_generation, task875_emotion_classification, task1214_atomic_classification_xwant, task106_scruples_ethical_judgment, task238_iirc_answer_from_passage_answer_generation, task1391_winogrande_easy_answer_generation, task195_sentiment140_classification, task163_count_words_ending_with_letter, task579_socialiqa_classification, task569_recipe_nlg_text_generation, task1602_webquestion_question_genreation, task747_glucose_cause_emotion_detection, task219_rocstories_title_answer_generation, task178_quartz_question_answering, task103_facts2story_long_text_generation, task301_record_question_generation, task1369_healthfact_sentence_generation, task515_senteval_odd_word_out, task496_semeval_answer_generation, task1658_billsum_summarization, task1204_atomic_classification_hinderedby, task1392_superglue_multirc_answer_verification, task306_jeopardy_answer_generation_double, task1286_openbookqa_question_answering, 
task159_check_frequency_of_words_in_sentence_pair, task151_tomqa_find_location_easy_clean, task323_jigsaw_classification_sexually_explicit, task037_qasc_generate_related_fact, task027_drop_answer_type_generation, task1596_event2mind_text_generation_2, task141_odd-man-out_classification_category, task194_duorc_answer_generation, task679_hope_edi_english_text_classification, task246_dream_question_generation, task1195_disflqa_disfluent_to_fluent_conversion, task065_timetravel_consistent_sentence_classification, task351_winomt_classification_gender_identifiability_anti, task580_socialiqa_answer_generation, task583_udeps_eng_coarse_pos_tagging, task202_mnli_contradiction_classification, task222_rocstories_two_chioce_slotting_classification, task498_scruples_anecdotes_whoiswrong_classification, task067_abductivenli_answer_generation, task616_cola_classification, task286_olid_offense_judgment, task188_snli_neutral_to_entailment_text_modification, task223_quartz_explanation_generation, task820_protoqa_answer_generation, task196_sentiment140_answer_generation, task1678_mathqa_answer_selection, task349_squad2.0_answerable_unanswerable_question_classification, task154_tomqa_find_location_hard_noise, task333_hateeval_classification_hate_en, task235_iirc_question_from_subtext_answer_generation, task1554_scitail_classification, task210_logic2text_structured_text_generation, task035_winogrande_question_modification_person, task230_iirc_passage_classification, task1356_xlsum_title_generation, task1726_mathqa_correct_answer_generation, task302_record_classification, task380_boolq_yes_no_question, task212_logic2text_classification, task748_glucose_reverse_cause_event_detection, task834_mathdataset_classification, task350_winomt_classification_gender_identifiability_pro, task191_hotpotqa_question_generation, task236_iirc_question_from_passage_answer_generation, task217_rocstories_ordering_answer_generation, task568_circa_question_generation, task614_glucose_cause_event_detection, task361_spolin_yesand_prompt_response_classification, task421_persent_sentence_sentiment_classification, task203_mnli_sentence_generation, task420_persent_document_sentiment_classification, task153_tomqa_find_location_hard_clean, task346_hybridqa_classification, task1211_atomic_classification_hassubevent, task360_spolin_yesand_response_generation, task510_reddit_tifu_title_summarization, task511_reddit_tifu_long_text_summarization, task345_hybridqa_answer_generation, task270_csrg_counterfactual_context_generation, task307_jeopardy_answer_generation_final, task001_quoref_question_generation, task089_swap_words_verification, task1196_atomic_classification_oeffect, task080_piqa_answer_generation, task1598_nyc_long_text_generation, task240_tweetqa_question_generation, task615_moviesqa_answer_generation, task1347_glue_sts-b_similarity_classification, task114_is_the_given_word_longest, task292_storycommonsense_character_text_generation, task115_help_advice_classification, task431_senteval_object_count, task1360_numer_sense_multiple_choice_qa_generation, task177_para-nmt_paraphrasing, task132_dais_text_modification, task269_csrg_counterfactual_story_generation, task233_iirc_link_exists_classification, task161_count_words_containing_letter, task1205_atomic_classification_isafter, task571_recipe_nlg_ner_generation, task1292_yelp_review_full_text_categorization, task428_senteval_inversion, task311_race_question_generation, task429_senteval_tense, task403_creak_commonsense_inference, task929_products_reviews_classification, 
task582_naturalquestion_answer_generation, task237_iirc_answer_from_subtext_answer_generation, task050_multirc_answerability, task184_break_generate_question, task669_ambigqa_answer_generation, task169_strategyqa_sentence_generation, task500_scruples_anecdotes_title_generation, task241_tweetqa_classification, task1345_glue_qqp_question_paraprashing, task218_rocstories_swap_order_answer_generation, task613_politifact_text_generation, task1167_penn_treebank_coarse_pos_tagging, task1422_mathqa_physics, task247_dream_answer_generation, task199_mnli_classification, task164_mcscript_question_answering_text, task1541_agnews_classification, task516_senteval_conjoints_inversion, task294_storycommonsense_motiv_text_generation, task501_scruples_anecdotes_post_type_verification, task213_rocstories_correct_ending_classification, task821_protoqa_question_generation, task493_review_polarity_classification, task308_jeopardy_answer_generation_all, task1595_event2mind_text_generation_1, task040_qasc_question_generation, task231_iirc_link_classification, task1727_wiqa_what_is_the_effect, task578_curiosity_dialogs_answer_generation, task310_race_classification, task309_race_answer_generation, task379_agnews_topic_classification, task030_winogrande_full_person, task1540_parsed_pdfs_summarization, task039_qasc_find_overlapping_words, task1206_atomic_classification_isbefore, task157_count_vowels_and_consonants, task339_record_answer_generation, task453_swag_answer_generation, task848_pubmedqa_classification, task673_google_wellformed_query_classification, task676_ollie_relationship_answer_generation, task268_casehold_legal_answer_generation, task844_financial_phrasebank_classification, task330_gap_answer_generation, task595_mocha_answer_generation, task1285_kpa_keypoint_matching, task234_iirc_passage_line_answer_generation, task494_review_polarity_answer_generation, task670_ambigqa_question_generation, task289_gigaword_summarization, npr, nli, SimpleWiki, amazon_review_2018, ccnews_title_text, agnews, xsum, msmarco, yahoo_answers_title_answer, squad_pairs, wow, mteb-amazon_counterfactual-avs_triplets, mteb-amazon_massive_intent-avs_triplets, mteb-amazon_massive_scenario-avs_triplets, mteb-amazon_reviews_multi-avs_triplets, mteb-banking77-avs_triplets, mteb-emotion-avs_triplets, mteb-imdb-avs_triplets, mteb-mtop_domain-avs_triplets, mteb-mtop_intent-avs_triplets, mteb-toxic_conversations_50k-avs_triplets, mteb-tweet_sentiment_extraction-avs_triplets and covid-bing-query-gpt4-avs_triplets datasets. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Datasets:**
- NQ
- pubmed
- specter_train_triples
- S2ORC_citations_abstracts
- fever
- gooaq_pairs
- codesearchnet
- wikihow
- WikiAnswers
- eli5_question_answer
- amazon-qa
- medmcqa
- zeroshot
- TriviaQA_pairs
- PAQ_pairs
- stackexchange_duplicate_questions_title-body_title-body
- trex
- flickr30k_captions
- hotpotqa
- task671_ambigqa_text_generation
- task061_ropes_answer_generation
- task285_imdb_answer_generation
- task905_hate_speech_offensive_classification
- task566_circa_classification
- task184_snli_entailment_to_neutral_text_modification
- task280_stereoset_classification_stereotype_type
- task1599_smcalflow_classification
- task1384_deal_or_no_dialog_classification
- task591_sciq_answer_generation
- task823_peixian-rtgender_sentiment_analysis
- task023_cosmosqa_question_generation
- task900_freebase_qa_category_classification
- task924_event2mind_word_generation
- task152_tomqa_find_location_easy_noise
- task1368_healthfact_sentence_generation
- task1661_super_glue_classification
- task1187_politifact_classification
- task1728_web_nlg_data_to_text
- task112_asset_simple_sentence_identification
- task1340_msr_text_compression_compression
- task072_abductivenli_answer_generation
- task1504_hatexplain_answer_generation
- task684_online_privacy_policy_text_information_type_generation
- task1290_xsum_summarization
- task075_squad1.1_answer_generation
- task1587_scifact_classification
- task384_socialiqa_question_classification
- task1555_scitail_answer_generation
- task1532_daily_dialog_emotion_classification
- task239_tweetqa_answer_generation
- task596_mocha_question_generation
- task1411_dart_subject_identification
- task1359_numer_sense_answer_generation
- task329_gap_classification
- task220_rocstories_title_classification
- task316_crows-pairs_classification_stereotype
- task495_semeval_headline_classification
- task1168_brown_coarse_pos_tagging
- task348_squad2.0_unanswerable_question_generation
- task049_multirc_questions_needed_to_answer
- task1534_daily_dialog_question_classification
- task322_jigsaw_classification_threat
- task295_semeval_2020_task4_commonsense_reasoning
- task186_snli_contradiction_to_entailment_text_modification
- task034_winogrande_question_modification_object
- task160_replace_letter_in_a_sentence
- task469_mrqa_answer_generation
- task105_story_cloze-rocstories_sentence_generation
- task649_race_blank_question_generation
- task1536_daily_dialog_happiness_classification
- task683_online_privacy_policy_text_purpose_answer_generation
- task024_cosmosqa_answer_generation
- task584_udeps_eng_fine_pos_tagging
- task066_timetravel_binary_consistency_classification
- task413_mickey_en_sentence_perturbation_generation
- task182_duorc_question_generation
- task028_drop_answer_generation
- task1601_webquestions_answer_generation
- task1295_adversarial_qa_question_answering
- task201_mnli_neutral_classification
- task038_qasc_combined_fact
- task293_storycommonsense_emotion_text_generation
- task572_recipe_nlg_text_generation
- task517_emo_classify_emotion_of_dialogue
- task382_hybridqa_answer_generation
- task176_break_decompose_questions
- task1291_multi_news_summarization
- task155_count_nouns_verbs
- task031_winogrande_question_generation_object
- task279_stereoset_classification_stereotype
- task1336_peixian_equity_evaluation_corpus_gender_classifier
- task508_scruples_dilemmas_more_ethical_isidentifiable
- task518_emo_different_dialogue_emotions
- task077_splash_explanation_to_sql
- task923_event2mind_classifier
- task470_mrqa_question_generation
- task638_multi_woz_classification
- task1412_web_questions_question_answering
- task847_pubmedqa_question_generation
- task678_ollie_actual_relationship_answer_generation
- task290_tellmewhy_question_answerability
- task575_air_dialogue_classification
- task189_snli_neutral_to_contradiction_text_modification
- task026_drop_question_generation
- task162_count_words_starting_with_letter
- task079_conala_concat_strings
- task610_conllpp_ner
- task046_miscellaneous_question_typing
- task197_mnli_domain_answer_generation
- task1325_qa_zre_question_generation_on_subject_relation
- task430_senteval_subject_count
- task672_nummersense
- task402_grailqa_paraphrase_generation
- task904_hate_speech_offensive_classification
- task192_hotpotqa_sentence_generation
- task069_abductivenli_classification
- task574_air_dialogue_sentence_generation
- task187_snli_entailment_to_contradiction_text_modification
- task749_glucose_reverse_cause_emotion_detection
- task1552_scitail_question_generation
- task750_aqua_multiple_choice_answering
- task327_jigsaw_classification_toxic
- task1502_hatexplain_classification
- task328_jigsaw_classification_insult
- task304_numeric_fused_head_resolution
- task1293_kilt_tasks_hotpotqa_question_answering
- task216_rocstories_correct_answer_generation
- task1326_qa_zre_question_generation_from_answer
- task1338_peixian_equity_evaluation_corpus_sentiment_classifier
- task1729_personachat_generate_next
- task1202_atomic_classification_xneed
- task400_paws_paraphrase_classification
- task502_scruples_anecdotes_whoiswrong_verification
- task088_identify_typo_verification
- task221_rocstories_two_choice_classification
- task200_mnli_entailment_classification
- task074_squad1.1_question_generation
- task581_socialiqa_question_generation
- task1186_nne_hrngo_classification
- task898_freebase_qa_answer_generation
- task1408_dart_similarity_classification
- task168_strategyqa_question_decomposition
- task1357_xlsum_summary_generation
- task390_torque_text_span_selection
- task165_mcscript_question_answering_commonsense
- task1533_daily_dialog_formal_classification
- task002_quoref_answer_generation
- task1297_qasc_question_answering
- task305_jeopardy_answer_generation_normal
- task029_winogrande_full_object
- task1327_qa_zre_answer_generation_from_question
- task326_jigsaw_classification_obscene
- task1542_every_ith_element_from_starting
- task570_recipe_nlg_ner_generation
- task1409_dart_text_generation
- task401_numeric_fused_head_reference
- task846_pubmedqa_classification
- task1712_poki_classification
- task344_hybridqa_answer_generation
- task875_emotion_classification
- task1214_atomic_classification_xwant
- task106_scruples_ethical_judgment
- task238_iirc_answer_from_passage_answer_generation
- task1391_winogrande_easy_answer_generation
- task195_sentiment140_classification
- task163_count_words_ending_with_letter
- task579_socialiqa_classification
- task569_recipe_nlg_text_generation
- task1602_webquestion_question_genreation
- task747_glucose_cause_emotion_detection
- task219_rocstories_title_answer_generation
- task178_quartz_question_answering
- task103_facts2story_long_text_generation
- task301_record_question_generation
- task1369_healthfact_sentence_generation
- task515_senteval_odd_word_out
- task496_semeval_answer_generation
- task1658_billsum_summarization
- task1204_atomic_classification_hinderedby
- task1392_superglue_multirc_answer_verification
- task306_jeopardy_answer_generation_double
- task1286_openbookqa_question_answering
- task159_check_frequency_of_words_in_sentence_pair
- task151_tomqa_find_location_easy_clean
- task323_jigsaw_classification_sexually_explicit
- task037_qasc_generate_related_fact
- task027_drop_answer_type_generation
- task1596_event2mind_text_generation_2
- task141_odd-man-out_classification_category
- task194_duorc_answer_generation
- task679_hope_edi_english_text_classification
- task246_dream_question_generation
- task1195_disflqa_disfluent_to_fluent_conversion
- task065_timetravel_consistent_sentence_classification
- task351_winomt_classification_gender_identifiability_anti
- task580_socialiqa_answer_generation
- task583_udeps_eng_coarse_pos_tagging
- task202_mnli_contradiction_classification
- task222_rocstories_two_chioce_slotting_classification
- task498_scruples_anecdotes_whoiswrong_classification
- task067_abductivenli_answer_generation
- task616_cola_classification
- task286_olid_offense_judgment
- task188_snli_neutral_to_entailment_text_modification
- task223_quartz_explanation_generation
- task820_protoqa_answer_generation
- task196_sentiment140_answer_generation
- task1678_mathqa_answer_selection
- task349_squad2.0_answerable_unanswerable_question_classification
- task154_tomqa_find_location_hard_noise
- task333_hateeval_classification_hate_en
- task235_iirc_question_from_subtext_answer_generation
- task1554_scitail_classification
- task210_logic2text_structured_text_generation
- task035_winogrande_question_modification_person
- task230_iirc_passage_classification
- task1356_xlsum_title_generation
- task1726_mathqa_correct_answer_generation
- task302_record_classification
- task380_boolq_yes_no_question
- task212_logic2text_classification
- task748_glucose_reverse_cause_event_detection
- task834_mathdataset_classification
- task350_winomt_classification_gender_identifiability_pro
- task191_hotpotqa_question_generation
- task236_iirc_question_from_passage_answer_generation
- task217_rocstories_ordering_answer_generation
- task568_circa_question_generation
- task614_glucose_cause_event_detection
- task361_spolin_yesand_prompt_response_classification
- task421_persent_sentence_sentiment_classification
- task203_mnli_sentence_generation
- task420_persent_document_sentiment_classification
- task153_tomqa_find_location_hard_clean
- task346_hybridqa_classification
- task1211_atomic_classification_hassubevent
- task360_spolin_yesand_response_generation
- task510_reddit_tifu_title_summarization
- task511_reddit_tifu_long_text_summarization
- task345_hybridqa_answer_generation
- task270_csrg_counterfactual_context_generation
- task307_jeopardy_answer_generation_final
- task001_quoref_question_generation
- task089_swap_words_verification
- task1196_atomic_classification_oeffect
- task080_piqa_answer_generation
- task1598_nyc_long_text_generation
- task240_tweetqa_question_generation
- task615_moviesqa_answer_generation
- task1347_glue_sts-b_similarity_classification
- task114_is_the_given_word_longest
- task292_storycommonsense_character_text_generation
- task115_help_advice_classification
- task431_senteval_object_count
- task1360_numer_sense_multiple_choice_qa_generation
- task177_para-nmt_paraphrasing
- task132_dais_text_modification
- task269_csrg_counterfactual_story_generation
- task233_iirc_link_exists_classification
- task161_count_words_containing_letter
- task1205_atomic_classification_isafter
- task571_recipe_nlg_ner_generation
- task1292_yelp_review_full_text_categorization
- task428_senteval_inversion
- task311_race_question_generation
- task429_senteval_tense
- task403_creak_commonsense_inference
- task929_products_reviews_classification
- task582_naturalquestion_answer_generation
- task237_iirc_answer_from_subtext_answer_generation
- task050_multirc_answerability
- task184_break_generate_question
- task669_ambigqa_answer_generation
- task169_strategyqa_sentence_generation
- task500_scruples_anecdotes_title_generation
- task241_tweetqa_classification
- task1345_glue_qqp_question_paraprashing
- task218_rocstories_swap_order_answer_generation
- task613_politifact_text_generation
- task1167_penn_treebank_coarse_pos_tagging
- task1422_mathqa_physics
- task247_dream_answer_generation
- task199_mnli_classification
- task164_mcscript_question_answering_text
- task1541_agnews_classification
- task516_senteval_conjoints_inversion
- task294_storycommonsense_motiv_text_generation
- task501_scruples_anecdotes_post_type_verification
- task213_rocstories_correct_ending_classification
- task821_protoqa_question_generation
- task493_review_polarity_classification
- task308_jeopardy_answer_generation_all
- task1595_event2mind_text_generation_1
- task040_qasc_question_generation
- task231_iirc_link_classification
- task1727_wiqa_what_is_the_effect
- task578_curiosity_dialogs_answer_generation
- task310_race_classification
- task309_race_answer_generation
- task379_agnews_topic_classification
- task030_winogrande_full_person
- task1540_parsed_pdfs_summarization
- task039_qasc_find_overlapping_words
- task1206_atomic_classification_isbefore
- task157_count_vowels_and_consonants
- task339_record_answer_generation
- task453_swag_answer_generation
- task848_pubmedqa_classification
- task673_google_wellformed_query_classification
- task676_ollie_relationship_answer_generation
- task268_casehold_legal_answer_generation
- task844_financial_phrasebank_classification
- task330_gap_answer_generation
- task595_mocha_answer_generation
- task1285_kpa_keypoint_matching
- task234_iirc_passage_line_answer_generation
- task494_review_polarity_answer_generation
- task670_ambigqa_question_generation
- task289_gigaword_summarization
- npr
- nli
- SimpleWiki
- amazon_review_2018
- ccnews_title_text
- agnews
- xsum
- msmarco
- yahoo_answers_title_answer
- squad_pairs
- wow
- mteb-amazon_counterfactual-avs_triplets
- mteb-amazon_massive_intent-avs_triplets
- mteb-amazon_massive_scenario-avs_triplets
- mteb-amazon_reviews_multi-avs_triplets
- mteb-banking77-avs_triplets
- mteb-emotion-avs_triplets
- mteb-imdb-avs_triplets
- mteb-mtop_domain-avs_triplets
- mteb-mtop_intent-avs_triplets
- mteb-toxic_conversations_50k-avs_triplets
- mteb-tweet_sentiment_extraction-avs_triplets
- covid-bing-query-gpt4-avs_triplets
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): RandomProjection({'in_features': 384, 'out_features': 768, 'seed': 42})
)
```
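The `RandomProjection` head is a custom module rather than a standard Sentence Transformers component. As a rough mental model only (not this repository's actual implementation, whose initialization and scaling may differ), a seeded random projection applies a fixed, non-trainable linear map from the 384-dimensional MiniLM space to the 768-dimensional output space:
```python
import numpy as np

# Illustrative sketch of a seeded random projection; the RandomProjection
# module shipped with this model may initialize or scale its weights differently.
rng = np.random.default_rng(seed=42)
W = rng.standard_normal((384, 768)) / np.sqrt(768)  # fixed, non-trainable map

pooled = rng.standard_normal((3, 384))  # stand-in for 384-d MiniLM embeddings
projected = pooled @ W                  # -> shape (3, 768)
print(projected.shape)
```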
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("avsolatorio/all-MiniLM-L6-v2-MEDI-MTEB-triplet-randproj-512-final")
# Run inference
sentences = [
    'who does the chief risk officer report to',
    "Chief risk officer Chief risk officer The chief risk officer (CRO) or chief risk management officer (CRMO) of a firm or corporation is the executive accountable for enabling the efficient and effective governance of significant risks, and related opportunities, to a business and its various segments. Risks are commonly categorized as strategic, reputational, operational, financial, or compliance-related. CROs are accountable to the Executive Committee and The Board for enabling the business to balance risk and reward. In more complex organizations, they are generally responsible for coordinating the organization's Enterprise Risk Management (ERM) approach. The CRO is responsible for assessing and mitigating significant competitive,",
    "Chief risk officer a company's executive chief officer and chief financial officer to clarify the precision of its financial reports. Moreover, to ensure the mentioned accuracy of financial reports, internal controls are required. Accordingly, each financial report required an internal control report to prevent fraud. Furthermore, the CRO has to be aware of everything occurring in his company on a daily basis, but he must also be current on all of the requirements from the SEC. In addition, the CRO restrains corporate risk by managing compliance. Why is a CRO so important in financial institutions? There is a report of having a CRO",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
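The same `encode`/`similarity` calls support semantic search: embed a query and a corpus, then rank corpus entries by cosine similarity. A minimal sketch, with an illustrative two-document corpus:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("avsolatorio/all-MiniLM-L6-v2-MEDI-MTEB-triplet-randproj-512-final")

query = "who does the chief risk officer report to"
corpus = [
    "CROs are accountable to the Executive Committee and The Board.",
    "The Grand Hotel opened on July 10, 1887.",
]
query_embedding = model.encode([query])
corpus_embeddings = model.encode(corpus)

# Rank corpus entries by cosine similarity to the query
scores = model.similarity(query_embedding, corpus_embeddings)  # shape [1, 2]
best_idx = scores.argmax().item()
print(corpus[best_idx])
```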
## Evaluation
### Metrics
#### Triplet
* Dataset: `medi-mteb-dev`
* Evaluated with [TripletEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| **cosine_accuracy** | **0.9156** |
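Cosine accuracy is the fraction of dev triplets for which the anchor is closer (by cosine similarity) to the positive than to the negative. A minimal sketch of that computation, using hypothetical stand-in data rather than the actual MEDI-MTEB dev triplets:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("avsolatorio/all-MiniLM-L6-v2-MEDI-MTEB-triplet-randproj-512-final")

# Hypothetical triplets for illustration; TripletEvaluator implements the
# same idea over the full dev set, with batching and extra options.
anchors = ["who does the chief risk officer report to"]
positives = ["CROs are accountable to the Executive Committee and The Board."]
negatives = ["The Grand Hotel opened on July 10, 1887."]

pos_scores = model.similarity_pairwise(model.encode(anchors), model.encode(positives))
neg_scores = model.similarity_pairwise(model.encode(anchors), model.encode(negatives))
accuracy = (pos_scores > neg_scores).float().mean().item()
print(accuracy)
```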
## Training Details
### Training Datasets
#### NQ
* Dataset: NQ
* Size: 49,676 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
* Loss: [`MultipleNegativesRankingLoss`](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
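With `scale: 20.0` and `cos_sim`, MultipleNegativesRankingLoss treats the other positives in a batch as in-batch negatives and applies cross-entropy over the scaled cosine-similarity matrix. A minimal sketch, assuming the standard `sentence_transformers.losses` API, of constructing a loss with these parameters (the remaining dataset sections below reuse the same configuration):
```python
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
loss = losses.MultipleNegativesRankingLoss(
    model,
    scale=20.0,                  # multiplier applied to the similarity scores
    similarity_fct=util.cos_sim,  # cosine similarity between anchor and candidates
)
```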
#### pubmed
* Dataset: pubmed
* Size: 29,908 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
* Loss: [`MultipleNegativesRankingLoss`](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### specter_train_triples
* Dataset: specter_train_triples
* Size: 49,676 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
* Loss: [`MultipleNegativesRankingLoss`](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### S2ORC_citations_abstracts
* Dataset: S2ORC_citations_abstracts
* Size: 99,352 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
* Loss: [`MultipleNegativesRankingLoss`](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### fever
* Dataset: fever
* Size: 74,514 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
* Loss: [`MultipleNegativesRankingLoss`](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### gooaq_pairs
* Dataset: gooaq_pairs
* Size: 24,838 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
* Loss: [`MultipleNegativesRankingLoss`](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### codesearchnet
* Dataset: codesearchnet
* Size: 15,210 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
* Loss: [`MultipleNegativesRankingLoss`](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### wikihow
* Dataset: wikihow
* Size: 5,070 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
* Loss: [`MultipleNegativesRankingLoss`](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### WikiAnswers
* Dataset: WikiAnswers
* Size: 24,838 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
* Loss: [`MultipleNegativesRankingLoss`](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### eli5_question_answer
* Dataset: eli5_question_answer
* Size: 24,838 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
* Loss: [`MultipleNegativesRankingLoss`](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### amazon-qa
* Dataset: amazon-qa
* Size: 99,352 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
* Loss: [`MultipleNegativesRankingLoss`](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### medmcqa
* Dataset: medmcqa
* Size: 29,908 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
* Loss: [`MultipleNegativesRankingLoss`](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### zeroshot
* Dataset: zeroshot
* Size: 15,210 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
* Loss: [`MultipleNegativesRankingLoss`](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### TriviaQA_pairs
* Dataset: TriviaQA_pairs
* Size: 49,676 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
* Loss: [`MultipleNegativesRankingLoss`](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### PAQ_pairs
* Dataset: PAQ_pairs
* Size: 24,838 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
* Loss: [`MultipleNegativesRankingLoss`](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### stackexchange_duplicate_questions_title-body_title-body
* Dataset: stackexchange_duplicate_questions_title-body_title-body
* Size: 24,838 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
* Loss: [`MultipleNegativesRankingLoss`](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### trex
* Dataset: trex
* Size: 29,908 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
* Loss: [`MultipleNegativesRankingLoss`](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### flickr30k_captions
* Dataset: flickr30k_captions
* Size: 24,838 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
* Loss: [`MultipleNegativesRankingLoss`](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### hotpotqa
* Dataset: hotpotqa
* Size: 40,048 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
* Loss: [`MultipleNegativesRankingLoss`](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task671_ambigqa_text_generation
* Dataset: task671_ambigqa_text_generation
* Size: 1,018 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
* Loss: [`MultipleNegativesRankingLoss`](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task061_ropes_answer_generation
* Dataset: task061_ropes_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task285_imdb_answer_generation
* Dataset: task285_imdb_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task905_hate_speech_offensive_classification
* Dataset: task905_hate_speech_offensive_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task566_circa_classification
* Dataset: task566_circa_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task184_snli_entailment_to_neutral_text_modification
* Dataset: task184_snli_entailment_to_neutral_text_modification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task280_stereoset_classification_stereotype_type
* Dataset: task280_stereoset_classification_stereotype_type
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1599_smcalflow_classification
* Dataset: task1599_smcalflow_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1384_deal_or_no_dialog_classification
* Dataset: task1384_deal_or_no_dialog_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task591_sciq_answer_generation
* Dataset: task591_sciq_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task823_peixian-rtgender_sentiment_analysis
* Dataset: task823_peixian-rtgender_sentiment_analysis
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task023_cosmosqa_question_generation
* Dataset: task023_cosmosqa_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task900_freebase_qa_category_classification
* Dataset: task900_freebase_qa_category_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task924_event2mind_word_generation
* Dataset: task924_event2mind_word_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task152_tomqa_find_location_easy_noise
* Dataset: task152_tomqa_find_location_easy_noise
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1368_healthfact_sentence_generation
* Dataset: task1368_healthfact_sentence_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1661_super_glue_classification
* Dataset: task1661_super_glue_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1187_politifact_classification
* Dataset: task1187_politifact_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1728_web_nlg_data_to_text
* Dataset: task1728_web_nlg_data_to_text
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task112_asset_simple_sentence_identification
* Dataset: task112_asset_simple_sentence_identification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1340_msr_text_compression_compression
* Dataset: task1340_msr_text_compression_compression
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task072_abductivenli_answer_generation
* Dataset: task072_abductivenli_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1504_hatexplain_answer_generation
* Dataset: task1504_hatexplain_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task684_online_privacy_policy_text_information_type_generation
* Dataset: task684_online_privacy_policy_text_information_type_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1290_xsum_summarization
* Dataset: task1290_xsum_summarization
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task075_squad1.1_answer_generation
* Dataset: task075_squad1.1_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1587_scifact_classification
* Dataset: task1587_scifact_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task384_socialiqa_question_classification
* Dataset: task384_socialiqa_question_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1555_scitail_answer_generation
* Dataset: task1555_scitail_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1532_daily_dialog_emotion_classification
* Dataset: task1532_daily_dialog_emotion_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task239_tweetqa_answer_generation
* Dataset: task239_tweetqa_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task596_mocha_question_generation
* Dataset: task596_mocha_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1411_dart_subject_identification
* Dataset: task1411_dart_subject_identification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1359_numer_sense_answer_generation
* Dataset: task1359_numer_sense_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task329_gap_classification
* Dataset: task329_gap_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task220_rocstories_title_classification
* Dataset: task220_rocstories_title_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task316_crows-pairs_classification_stereotype
* Dataset: task316_crows-pairs_classification_stereotype
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task495_semeval_headline_classification
* Dataset: task495_semeval_headline_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1168_brown_coarse_pos_tagging
* Dataset: task1168_brown_coarse_pos_tagging
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task348_squad2.0_unanswerable_question_generation
* Dataset: task348_squad2.0_unanswerable_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task049_multirc_questions_needed_to_answer
* Dataset: task049_multirc_questions_needed_to_answer
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1534_daily_dialog_question_classification
* Dataset: task1534_daily_dialog_question_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task322_jigsaw_classification_threat
* Dataset: task322_jigsaw_classification_threat
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task295_semeval_2020_task4_commonsense_reasoning
* Dataset: task295_semeval_2020_task4_commonsense_reasoning
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task186_snli_contradiction_to_entailment_text_modification
* Dataset: task186_snli_contradiction_to_entailment_text_modification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task034_winogrande_question_modification_object
* Dataset: task034_winogrande_question_modification_object
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task160_replace_letter_in_a_sentence
* Dataset: task160_replace_letter_in_a_sentence
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task469_mrqa_answer_generation
* Dataset: task469_mrqa_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task105_story_cloze-rocstories_sentence_generation
* Dataset: task105_story_cloze-rocstories_sentence_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task649_race_blank_question_generation
* Dataset: task649_race_blank_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1536_daily_dialog_happiness_classification
* Dataset: task1536_daily_dialog_happiness_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task683_online_privacy_policy_text_purpose_answer_generation
* Dataset: task683_online_privacy_policy_text_purpose_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task024_cosmosqa_answer_generation
* Dataset: task024_cosmosqa_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task584_udeps_eng_fine_pos_tagging
* Dataset: task584_udeps_eng_fine_pos_tagging
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task066_timetravel_binary_consistency_classification
* Dataset: task066_timetravel_binary_consistency_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task413_mickey_en_sentence_perturbation_generation
* Dataset: task413_mickey_en_sentence_perturbation_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task182_duorc_question_generation
* Dataset: task182_duorc_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task028_drop_answer_generation
* Dataset: task028_drop_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1601_webquestions_answer_generation
* Dataset: task1601_webquestions_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1295_adversarial_qa_question_answering
* Dataset: task1295_adversarial_qa_question_answering
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task201_mnli_neutral_classification
* Dataset: task201_mnli_neutral_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task038_qasc_combined_fact
* Dataset: task038_qasc_combined_fact
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task293_storycommonsense_emotion_text_generation
* Dataset: task293_storycommonsense_emotion_text_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task572_recipe_nlg_text_generation
* Dataset: task572_recipe_nlg_text_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task517_emo_classify_emotion_of_dialogue
* Dataset: task517_emo_classify_emotion_of_dialogue
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task382_hybridqa_answer_generation
* Dataset: task382_hybridqa_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task176_break_decompose_questions
* Dataset: task176_break_decompose_questions
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1291_multi_news_summarization
* Dataset: task1291_multi_news_summarization
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task155_count_nouns_verbs
* Dataset: task155_count_nouns_verbs
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task031_winogrande_question_generation_object
* Dataset: task031_winogrande_question_generation_object
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task279_stereoset_classification_stereotype
* Dataset: task279_stereoset_classification_stereotype
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1336_peixian_equity_evaluation_corpus_gender_classifier
* Dataset: task1336_peixian_equity_evaluation_corpus_gender_classifier
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task508_scruples_dilemmas_more_ethical_isidentifiable
* Dataset: task508_scruples_dilemmas_more_ethical_isidentifiable
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task518_emo_different_dialogue_emotions
* Dataset: task518_emo_different_dialogue_emotions
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task077_splash_explanation_to_sql
* Dataset: task077_splash_explanation_to_sql
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task923_event2mind_classifier
* Dataset: task923_event2mind_classifier
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task470_mrqa_question_generation
* Dataset: task470_mrqa_question_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task638_multi_woz_classification
* Dataset: task638_multi_woz_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1412_web_questions_question_answering
* Dataset: task1412_web_questions_question_answering
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task847_pubmedqa_question_generation
* Dataset: task847_pubmedqa_question_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task678_ollie_actual_relationship_answer_generation
* Dataset: task678_ollie_actual_relationship_answer_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task290_tellmewhy_question_answerability
* Dataset: task290_tellmewhy_question_answerability
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task575_air_dialogue_classification
* Dataset: task575_air_dialogue_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task189_snli_neutral_to_contradiction_text_modification
* Dataset: task189_snli_neutral_to_contradiction_text_modification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task026_drop_question_generation
* Dataset: task026_drop_question_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task162_count_words_starting_with_letter
* Dataset: task162_count_words_starting_with_letter
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task079_conala_concat_strings
* Dataset: task079_conala_concat_strings
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task610_conllpp_ner
* Dataset: task610_conllpp_ner
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task046_miscellaneous_question_typing
* Dataset: task046_miscellaneous_question_typing
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task197_mnli_domain_answer_generation
* Dataset: task197_mnli_domain_answer_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1325_qa_zre_question_generation_on_subject_relation
* Dataset: task1325_qa_zre_question_generation_on_subject_relation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task430_senteval_subject_count
* Dataset: task430_senteval_subject_count
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task672_nummersense
* Dataset: task672_nummersense
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task402_grailqa_paraphrase_generation
* Dataset: task402_grailqa_paraphrase_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task904_hate_speech_offensive_classification
* Dataset: task904_hate_speech_offensive_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task192_hotpotqa_sentence_generation
* Dataset: task192_hotpotqa_sentence_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task069_abductivenli_classification
* Dataset: task069_abductivenli_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task574_air_dialogue_sentence_generation
* Dataset: task574_air_dialogue_sentence_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task187_snli_entailment_to_contradiction_text_modification
* Dataset: task187_snli_entailment_to_contradiction_text_modification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task749_glucose_reverse_cause_emotion_detection
* Dataset: task749_glucose_reverse_cause_emotion_detection
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1552_scitail_question_generation
* Dataset: task1552_scitail_question_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task750_aqua_multiple_choice_answering
* Dataset: task750_aqua_multiple_choice_answering
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task327_jigsaw_classification_toxic
* Dataset: task327_jigsaw_classification_toxic
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1502_hatexplain_classification
* Dataset: task1502_hatexplain_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task328_jigsaw_classification_insult
* Dataset: task328_jigsaw_classification_insult
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task304_numeric_fused_head_resolution
* Dataset: task304_numeric_fused_head_resolution
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1293_kilt_tasks_hotpotqa_question_answering
* Dataset: task1293_kilt_tasks_hotpotqa_question_answering
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task216_rocstories_correct_answer_generation
* Dataset: task216_rocstories_correct_answer_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1326_qa_zre_question_generation_from_answer
* Dataset: task1326_qa_zre_question_generation_from_answer
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1338_peixian_equity_evaluation_corpus_sentiment_classifier
* Dataset: task1338_peixian_equity_evaluation_corpus_sentiment_classifier
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1729_personachat_generate_next
* Dataset: task1729_personachat_generate_next
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1202_atomic_classification_xneed
* Dataset: task1202_atomic_classification_xneed
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task400_paws_paraphrase_classification
* Dataset: task400_paws_paraphrase_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task502_scruples_anecdotes_whoiswrong_verification
* Dataset: task502_scruples_anecdotes_whoiswrong_verification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task088_identify_typo_verification
* Dataset: task088_identify_typo_verification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task221_rocstories_two_choice_classification
* Dataset: task221_rocstories_two_choice_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task200_mnli_entailment_classification
* Dataset: task200_mnli_entailment_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task074_squad1.1_question_generation
* Dataset: task074_squad1.1_question_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task581_socialiqa_question_generation
* Dataset: task581_socialiqa_question_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1186_nne_hrngo_classification
* Dataset: task1186_nne_hrngo_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task898_freebase_qa_answer_generation
* Dataset: task898_freebase_qa_answer_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1408_dart_similarity_classification
* Dataset: task1408_dart_similarity_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task168_strategyqa_question_decomposition
* Dataset: task168_strategyqa_question_decomposition
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1357_xlsum_summary_generation
* Dataset: task1357_xlsum_summary_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task390_torque_text_span_selection
* Dataset: task390_torque_text_span_selection
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task165_mcscript_question_answering_commonsense
* Dataset: task165_mcscript_question_answering_commonsense
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1533_daily_dialog_formal_classification
* Dataset: task1533_daily_dialog_formal_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task002_quoref_answer_generation
* Dataset: task002_quoref_answer_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1297_qasc_question_answering
* Dataset: task1297_qasc_question_answering
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task305_jeopardy_answer_generation_normal
* Dataset: task305_jeopardy_answer_generation_normal
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task029_winogrande_full_object
* Dataset: task029_winogrande_full_object
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1327_qa_zre_answer_generation_from_question
* Dataset: task1327_qa_zre_answer_generation_from_question
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task326_jigsaw_classification_obscene
* Dataset: task326_jigsaw_classification_obscene
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1542_every_ith_element_from_starting
* Dataset: task1542_every_ith_element_from_starting
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task570_recipe_nlg_ner_generation
* Dataset: task570_recipe_nlg_ner_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1409_dart_text_generation
* Dataset: task1409_dart_text_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task401_numeric_fused_head_reference
* Dataset: task401_numeric_fused_head_reference
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task846_pubmedqa_classification
* Dataset: task846_pubmedqa_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1712_poki_classification
* Dataset: task1712_poki_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task344_hybridqa_answer_generation
* Dataset: task344_hybridqa_answer_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task875_emotion_classification
* Dataset: task875_emotion_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1214_atomic_classification_xwant
* Dataset: task1214_atomic_classification_xwant
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task106_scruples_ethical_judgment
* Dataset: task106_scruples_ethical_judgment
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task238_iirc_answer_from_passage_answer_generation
* Dataset: task238_iirc_answer_from_passage_answer_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1391_winogrande_easy_answer_generation
* Dataset: task1391_winogrande_easy_answer_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task195_sentiment140_classification
* Dataset: task195_sentiment140_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task163_count_words_ending_with_letter
* Dataset: task163_count_words_ending_with_letter
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task579_socialiqa_classification
* Dataset: task579_socialiqa_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task569_recipe_nlg_text_generation
* Dataset: task569_recipe_nlg_text_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1602_webquestion_question_genreation
* Dataset: task1602_webquestion_question_genreation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task747_glucose_cause_emotion_detection
* Dataset: task747_glucose_cause_emotion_detection
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task219_rocstories_title_answer_generation
* Dataset: task219_rocstories_title_answer_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task178_quartz_question_answering
* Dataset: task178_quartz_question_answering
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task103_facts2story_long_text_generation
* Dataset: task103_facts2story_long_text_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task301_record_question_generation
* Dataset: task301_record_question_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1369_healthfact_sentence_generation
* Dataset: task1369_healthfact_sentence_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task515_senteval_odd_word_out
* Dataset: task515_senteval_odd_word_out
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task496_semeval_answer_generation
* Dataset: task496_semeval_answer_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1658_billsum_summarization
* Dataset: task1658_billsum_summarization
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1204_atomic_classification_hinderedby
* Dataset: task1204_atomic_classification_hinderedby
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1392_superglue_multirc_answer_verification
* Dataset: task1392_superglue_multirc_answer_verification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task306_jeopardy_answer_generation_double
* Dataset: task306_jeopardy_answer_generation_double
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1286_openbookqa_question_answering
* Dataset: task1286_openbookqa_question_answering
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task159_check_frequency_of_words_in_sentence_pair
* Dataset: task159_check_frequency_of_words_in_sentence_pair
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task151_tomqa_find_location_easy_clean
* Dataset: task151_tomqa_find_location_easy_clean
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task323_jigsaw_classification_sexually_explicit
* Dataset: task323_jigsaw_classification_sexually_explicit
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task037_qasc_generate_related_fact
* Dataset: task037_qasc_generate_related_fact
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task027_drop_answer_type_generation
* Dataset: task027_drop_answer_type_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1596_event2mind_text_generation_2
* Dataset: task1596_event2mind_text_generation_2
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task141_odd-man-out_classification_category
* Dataset: task141_odd-man-out_classification_category
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task194_duorc_answer_generation
* Dataset: task194_duorc_answer_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task679_hope_edi_english_text_classification
* Dataset: task679_hope_edi_english_text_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task246_dream_question_generation
* Dataset: task246_dream_question_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1195_disflqa_disfluent_to_fluent_conversion
* Dataset: task1195_disflqa_disfluent_to_fluent_conversion
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task065_timetravel_consistent_sentence_classification
* Dataset: task065_timetravel_consistent_sentence_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task351_winomt_classification_gender_identifiability_anti
* Dataset: task351_winomt_classification_gender_identifiability_anti
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task580_socialiqa_answer_generation
* Dataset: task580_socialiqa_answer_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task583_udeps_eng_coarse_pos_tagging
* Dataset: task583_udeps_eng_coarse_pos_tagging
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task202_mnli_contradiction_classification
* Dataset: task202_mnli_contradiction_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task222_rocstories_two_chioce_slotting_classification
* Dataset: task222_rocstories_two_chioce_slotting_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task498_scruples_anecdotes_whoiswrong_classification
* Dataset: task498_scruples_anecdotes_whoiswrong_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task067_abductivenli_answer_generation
* Dataset: task067_abductivenli_answer_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task616_cola_classification
* Dataset: task616_cola_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task286_olid_offense_judgment
* Dataset: task286_olid_offense_judgment
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task188_snli_neutral_to_entailment_text_modification
* Dataset: task188_snli_neutral_to_entailment_text_modification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task223_quartz_explanation_generation
* Dataset: task223_quartz_explanation_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task820_protoqa_answer_generation
* Dataset: task820_protoqa_answer_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task196_sentiment140_answer_generation
* Dataset: task196_sentiment140_answer_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1678_mathqa_answer_selection
* Dataset: task1678_mathqa_answer_selection
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task349_squad2.0_answerable_unanswerable_question_classification
* Dataset: task349_squad2.0_answerable_unanswerable_question_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task154_tomqa_find_location_hard_noise
* Dataset: task154_tomqa_find_location_hard_noise
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task333_hateeval_classification_hate_en
* Dataset: task333_hateeval_classification_hate_en
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task235_iirc_question_from_subtext_answer_generation
* Dataset: task235_iirc_question_from_subtext_answer_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1554_scitail_classification
* Dataset: task1554_scitail_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task210_logic2text_structured_text_generation
* Dataset: task210_logic2text_structured_text_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task035_winogrande_question_modification_person
* Dataset: task035_winogrande_question_modification_person
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task230_iirc_passage_classification
* Dataset: task230_iirc_passage_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1356_xlsum_title_generation
* Dataset: task1356_xlsum_title_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1726_mathqa_correct_answer_generation
* Dataset: task1726_mathqa_correct_answer_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task302_record_classification
* Dataset: task302_record_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task380_boolq_yes_no_question
* Dataset: task380_boolq_yes_no_question
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task212_logic2text_classification
* Dataset: task212_logic2text_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task748_glucose_reverse_cause_event_detection
* Dataset: task748_glucose_reverse_cause_event_detection
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task834_mathdataset_classification
* Dataset: task834_mathdataset_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task350_winomt_classification_gender_identifiability_pro
* Dataset: task350_winomt_classification_gender_identifiability_pro
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task191_hotpotqa_question_generation
* Dataset: task191_hotpotqa_question_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task236_iirc_question_from_passage_answer_generation
* Dataset: task236_iirc_question_from_passage_answer_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task217_rocstories_ordering_answer_generation
* Dataset: task217_rocstories_ordering_answer_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task568_circa_question_generation
* Dataset: task568_circa_question_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task614_glucose_cause_event_detection
* Dataset: task614_glucose_cause_event_detection
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task361_spolin_yesand_prompt_response_classification
* Dataset: task361_spolin_yesand_prompt_response_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task421_persent_sentence_sentiment_classification
* Dataset: task421_persent_sentence_sentiment_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task203_mnli_sentence_generation
* Dataset: task203_mnli_sentence_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task420_persent_document_sentiment_classification
* Dataset: task420_persent_document_sentiment_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task153_tomqa_find_location_hard_clean
* Dataset: task153_tomqa_find_location_hard_clean
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task346_hybridqa_classification
* Dataset: task346_hybridqa_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1211_atomic_classification_hassubevent
* Dataset: task1211_atomic_classification_hassubevent
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task360_spolin_yesand_response_generation
* Dataset: task360_spolin_yesand_response_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task510_reddit_tifu_title_summarization
* Dataset: task510_reddit_tifu_title_summarization
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task511_reddit_tifu_long_text_summarization
* Dataset: task511_reddit_tifu_long_text_summarization
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task345_hybridqa_answer_generation
* Dataset: task345_hybridqa_answer_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task270_csrg_counterfactual_context_generation
* Dataset: task270_csrg_counterfactual_context_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task307_jeopardy_answer_generation_final
* Dataset: task307_jeopardy_answer_generation_final
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task001_quoref_question_generation
* Dataset: task001_quoref_question_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task089_swap_words_verification
* Dataset: task089_swap_words_verification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1196_atomic_classification_oeffect
* Dataset: task1196_atomic_classification_oeffect
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task080_piqa_answer_generation
* Dataset: task080_piqa_answer_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1598_nyc_long_text_generation
* Dataset: task1598_nyc_long_text_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task240_tweetqa_question_generation
* Dataset: task240_tweetqa_question_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task615_moviesqa_answer_generation
* Dataset: task615_moviesqa_answer_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1347_glue_sts-b_similarity_classification
* Dataset: task1347_glue_sts-b_similarity_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task114_is_the_given_word_longest
* Dataset: task114_is_the_given_word_longest
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task292_storycommonsense_character_text_generation
* Dataset: task292_storycommonsense_character_text_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task115_help_advice_classification
* Dataset: task115_help_advice_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task431_senteval_object_count
* Dataset: task431_senteval_object_count
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1360_numer_sense_multiple_choice_qa_generation
* Dataset: task1360_numer_sense_multiple_choice_qa_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task177_para-nmt_paraphrasing
* Dataset: task177_para-nmt_paraphrasing
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task132_dais_text_modification
* Dataset: task132_dais_text_modification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task269_csrg_counterfactual_story_generation
* Dataset: task269_csrg_counterfactual_story_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task233_iirc_link_exists_classification
* Dataset: task233_iirc_link_exists_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task161_count_words_containing_letter
* Dataset: task161_count_words_containing_letter
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1205_atomic_classification_isafter
* Dataset: task1205_atomic_classification_isafter
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task571_recipe_nlg_ner_generation
* Dataset: task571_recipe_nlg_ner_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1292_yelp_review_full_text_categorization
* Dataset: task1292_yelp_review_full_text_categorization
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task428_senteval_inversion
* Dataset: task428_senteval_inversion
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task311_race_question_generation
* Dataset: task311_race_question_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task429_senteval_tense
* Dataset: task429_senteval_tense
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task403_creak_commonsense_inference
* Dataset: task403_creak_commonsense_inference
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task929_products_reviews_classification
* Dataset: task929_products_reviews_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task582_naturalquestion_answer_generation
* Dataset: task582_naturalquestion_answer_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task237_iirc_answer_from_subtext_answer_generation
* Dataset: task237_iirc_answer_from_subtext_answer_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task050_multirc_answerability
* Dataset: task050_multirc_answerability
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task184_break_generate_question
* Dataset: task184_break_generate_question
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task669_ambigqa_answer_generation
* Dataset: task669_ambigqa_answer_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task169_strategyqa_sentence_generation
* Dataset: task169_strategyqa_sentence_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task500_scruples_anecdotes_title_generation
* Dataset: task500_scruples_anecdotes_title_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task241_tweetqa_classification
* Dataset: task241_tweetqa_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1345_glue_qqp_question_paraprashing
* Dataset: task1345_glue_qqp_question_paraprashing
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task218_rocstories_swap_order_answer_generation
* Dataset: task218_rocstories_swap_order_answer_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task613_politifact_text_generation
* Dataset: task613_politifact_text_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1167_penn_treebank_coarse_pos_tagging
* Dataset: task1167_penn_treebank_coarse_pos_tagging
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1422_mathqa_physics
* Dataset: task1422_mathqa_physics
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task247_dream_answer_generation
* Dataset: task247_dream_answer_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task199_mnli_classification
* Dataset: task199_mnli_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task164_mcscript_question_answering_text
* Dataset: task164_mcscript_question_answering_text
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1541_agnews_classification
* Dataset: task1541_agnews_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task516_senteval_conjoints_inversion
* Dataset: task516_senteval_conjoints_inversion
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task294_storycommonsense_motiv_text_generation
* Dataset: task294_storycommonsense_motiv_text_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task501_scruples_anecdotes_post_type_verification
* Dataset: task501_scruples_anecdotes_post_type_verification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task213_rocstories_correct_ending_classification
* Dataset: task213_rocstories_correct_ending_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task821_protoqa_question_generation
* Dataset: task821_protoqa_question_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task493_review_polarity_classification
* Dataset: task493_review_polarity_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task308_jeopardy_answer_generation_all
* Dataset: task308_jeopardy_answer_generation_all
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1595_event2mind_text_generation_1
* Dataset: task1595_event2mind_text_generation_1
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task040_qasc_question_generation
* Dataset: task040_qasc_question_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task231_iirc_link_classification
* Dataset: task231_iirc_link_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1727_wiqa_what_is_the_effect
* Dataset: task1727_wiqa_what_is_the_effect
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task578_curiosity_dialogs_answer_generation
* Dataset: task578_curiosity_dialogs_answer_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task310_race_classification
* Dataset: task310_race_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task309_race_answer_generation
* Dataset: task309_race_answer_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task379_agnews_topic_classification
* Dataset: task379_agnews_topic_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task030_winogrande_full_person
* Dataset: task030_winogrande_full_person
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1540_parsed_pdfs_summarization
* Dataset: task1540_parsed_pdfs_summarization
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task039_qasc_find_overlapping_words
* Dataset: task039_qasc_find_overlapping_words
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1206_atomic_classification_isbefore
* Dataset: task1206_atomic_classification_isbefore
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task157_count_vowels_and_consonants
* Dataset: task157_count_vowels_and_consonants
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task339_record_answer_generation
* Dataset: task339_record_answer_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task453_swag_answer_generation
* Dataset: task453_swag_answer_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task848_pubmedqa_classification
* Dataset: task848_pubmedqa_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task673_google_wellformed_query_classification
* Dataset: task673_google_wellformed_query_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task676_ollie_relationship_answer_generation
* Dataset: task676_ollie_relationship_answer_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task268_casehold_legal_answer_generation
* Dataset: task268_casehold_legal_answer_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task844_financial_phrasebank_classification
* Dataset: task844_financial_phrasebank_classification
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task330_gap_answer_generation
* Dataset: task330_gap_answer_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task595_mocha_answer_generation
* Dataset: task595_mocha_answer_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1285_kpa_keypoint_matching
* Dataset: task1285_kpa_keypoint_matching
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task234_iirc_passage_line_answer_generation
* Dataset: task234_iirc_passage_line_answer_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task494_review_polarity_answer_generation
* Dataset: task494_review_polarity_answer_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task670_ambigqa_question_generation
* Dataset: task670_ambigqa_question_generation
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task289_gigaword_summarization
* Dataset: task289_gigaword_summarization
* Size: 1,018 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### npr
* Dataset: npr
* Size: 24,838 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### nli
* Dataset: nli
* Size: 49,676 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### SimpleWiki
* Dataset: SimpleWiki
* Size: 5,070 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### amazon_review_2018
* Dataset: amazon_review_2018
* Size: 99,352 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### ccnews_title_text
* Dataset: ccnews_title_text
* Size: 24,838 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### agnews
* Dataset: agnews
* Size: 44,606 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### xsum
* Dataset: xsum
* Size: 10,140 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### msmarco
* Dataset: msmarco
* Size: 173,354 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### yahoo_answers_title_answer
* Dataset: yahoo_answers_title_answer
* Size: 24,838 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### squad_pairs
* Dataset: squad_pairs
* Size: 24,838 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### wow
* Dataset: wow
* Size: 29,908 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-amazon_counterfactual-avs_triplets
* Dataset: mteb-amazon_counterfactual-avs_triplets
* Size: 4,055 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-amazon_massive_intent-avs_triplets
* Dataset: mteb-amazon_massive_intent-avs_triplets
* Size: 11,661 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-amazon_massive_scenario-avs_triplets
* Dataset: mteb-amazon_massive_scenario-avs_triplets
* Size: 11,661 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-amazon_reviews_multi-avs_triplets
* Dataset: mteb-amazon_reviews_multi-avs_triplets
* Size: 198,192 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-banking77-avs_triplets
* Dataset: mteb-banking77-avs_triplets
* Size: 10,139 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-emotion-avs_triplets
* Dataset: mteb-emotion-avs_triplets
* Size: 16,224 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-imdb-avs_triplets
* Dataset: mteb-imdb-avs_triplets
* Size: 24,839 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-mtop_domain-avs_triplets
* Dataset: mteb-mtop_domain-avs_triplets
* Size: 15,715 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-mtop_intent-avs_triplets
* Dataset: mteb-mtop_intent-avs_triplets
* Size: 15,715 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-toxic_conversations_50k-avs_triplets
* Dataset: mteb-toxic_conversations_50k-avs_triplets
* Size: 49,677 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-tweet_sentiment_extraction-avs_triplets
* Dataset: mteb-tweet_sentiment_extraction-avs_triplets
* Size: 27,373 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### covid-bing-query-gpt4-avs_triplets
* Dataset: covid-bing-query-gpt4-avs_triplets
* Size: 5,070 training samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Evaluation Dataset
#### Unnamed Dataset
* Size: 18,269 evaluation samples
* Columns: anchor, positive, and negative
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------|:---------|:---------|
| type | string | string | string |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
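
Every training and evaluation dataset above shares this same three-column triplet layout. As a hedged sketch of that schema (the rows below are invented placeholders, not samples from the actual data):

```python
from datasets import Dataset

# Invented placeholder rows illustrating the (anchor, positive, negative)
# schema that the datasets above and the loss configuration expect.
triplets = Dataset.from_dict({
    "anchor": ["what is the capital of france"],
    "positive": ["Paris is the capital and largest city of France."],
    "negative": ["The Nile is the longest river in Africa."],
})
print(triplets.column_names)  # ['anchor', 'positive', 'negative']
```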
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 512
- `per_device_eval_batch_size`: 512
- `learning_rate`: 5.656854249492381e-05
- `num_train_epochs`: 10
- `warmup_ratio`: 0.1
- `fp16`: True
- `gradient_checkpointing`: True
- `batch_sampler`: no_duplicates
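
As a rough guide, these non-default values map onto `SentenceTransformerTrainingArguments` as in the sketch below; `output_dir` is an illustrative assumption, while every other value is copied from the list above:

```python
from sentence_transformers.training_args import (
    BatchSamplers,
    SentenceTransformerTrainingArguments,
)

args = SentenceTransformerTrainingArguments(
    output_dir="outputs",          # assumption; not stated in this card
    eval_strategy="steps",
    per_device_train_batch_size=512,
    per_device_eval_batch_size=512,
    learning_rate=5.656854249492381e-05,
    num_train_epochs=10,
    warmup_ratio=0.1,
    fp16=True,
    gradient_checkpointing=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # "no_duplicates"
)
```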
#### All Hyperparameters