all-MiniLM-L6-v2 trained on MEDI-MTEB triplets

This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2 on the MEDI-MTEB triplet collection: NQ, pubmed, specter_train_triples, S2ORC_citations_abstracts, fever, gooaq_pairs, codesearchnet, wikihow, WikiAnswers, and over 300 further datasets (the full list appears under Training Datasets below). It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-MiniLM-L6-v2
  • Maximum Sequence Length: 256 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Datasets:
    • NQ
    • pubmed
    • specter_train_triples
    • S2ORC_citations_abstracts
    • fever
    • gooaq_pairs
    • codesearchnet
    • wikihow
    • WikiAnswers
    • eli5_question_answer
    • amazon-qa
    • medmcqa
    • zeroshot
    • TriviaQA_pairs
    • PAQ_pairs
    • stackexchange_duplicate_questions_title-body_title-body
    • trex
    • flickr30k_captions
    • hotpotqa
    • task671_ambigqa_text_generation
    • task061_ropes_answer_generation
    • task285_imdb_answer_generation
    • task905_hate_speech_offensive_classification
    • task566_circa_classification
    • task184_snli_entailment_to_neutral_text_modification
    • task280_stereoset_classification_stereotype_type
    • task1599_smcalflow_classification
    • task1384_deal_or_no_dialog_classification
    • task591_sciq_answer_generation
    • task823_peixian-rtgender_sentiment_analysis
    • task023_cosmosqa_question_generation
    • task900_freebase_qa_category_classification
    • task924_event2mind_word_generation
    • task152_tomqa_find_location_easy_noise
    • task1368_healthfact_sentence_generation
    • task1661_super_glue_classification
    • task1187_politifact_classification
    • task1728_web_nlg_data_to_text
    • task112_asset_simple_sentence_identification
    • task1340_msr_text_compression_compression
    • task072_abductivenli_answer_generation
    • task1504_hatexplain_answer_generation
    • task684_online_privacy_policy_text_information_type_generation
    • task1290_xsum_summarization
    • task075_squad1.1_answer_generation
    • task1587_scifact_classification
    • task384_socialiqa_question_classification
    • task1555_scitail_answer_generation
    • task1532_daily_dialog_emotion_classification
    • task239_tweetqa_answer_generation
    • task596_mocha_question_generation
    • task1411_dart_subject_identification
    • task1359_numer_sense_answer_generation
    • task329_gap_classification
    • task220_rocstories_title_classification
    • task316_crows-pairs_classification_stereotype
    • task495_semeval_headline_classification
    • task1168_brown_coarse_pos_tagging
    • task348_squad2.0_unanswerable_question_generation
    • task049_multirc_questions_needed_to_answer
    • task1534_daily_dialog_question_classification
    • task322_jigsaw_classification_threat
    • task295_semeval_2020_task4_commonsense_reasoning
    • task186_snli_contradiction_to_entailment_text_modification
    • task034_winogrande_question_modification_object
    • task160_replace_letter_in_a_sentence
    • task469_mrqa_answer_generation
    • task105_story_cloze-rocstories_sentence_generation
    • task649_race_blank_question_generation
    • task1536_daily_dialog_happiness_classification
    • task683_online_privacy_policy_text_purpose_answer_generation
    • task024_cosmosqa_answer_generation
    • task584_udeps_eng_fine_pos_tagging
    • task066_timetravel_binary_consistency_classification
    • task413_mickey_en_sentence_perturbation_generation
    • task182_duorc_question_generation
    • task028_drop_answer_generation
    • task1601_webquestions_answer_generation
    • task1295_adversarial_qa_question_answering
    • task201_mnli_neutral_classification
    • task038_qasc_combined_fact
    • task293_storycommonsense_emotion_text_generation
    • task572_recipe_nlg_text_generation
    • task517_emo_classify_emotion_of_dialogue
    • task382_hybridqa_answer_generation
    • task176_break_decompose_questions
    • task1291_multi_news_summarization
    • task155_count_nouns_verbs
    • task031_winogrande_question_generation_object
    • task279_stereoset_classification_stereotype
    • task1336_peixian_equity_evaluation_corpus_gender_classifier
    • task508_scruples_dilemmas_more_ethical_isidentifiable
    • task518_emo_different_dialogue_emotions
    • task077_splash_explanation_to_sql
    • task923_event2mind_classifier
    • task470_mrqa_question_generation
    • task638_multi_woz_classification
    • task1412_web_questions_question_answering
    • task847_pubmedqa_question_generation
    • task678_ollie_actual_relationship_answer_generation
    • task290_tellmewhy_question_answerability
    • task575_air_dialogue_classification
    • task189_snli_neutral_to_contradiction_text_modification
    • task026_drop_question_generation
    • task162_count_words_starting_with_letter
    • task079_conala_concat_strings
    • task610_conllpp_ner
    • task046_miscellaneous_question_typing
    • task197_mnli_domain_answer_generation
    • task1325_qa_zre_question_generation_on_subject_relation
    • task430_senteval_subject_count
    • task672_nummersense
    • task402_grailqa_paraphrase_generation
    • task904_hate_speech_offensive_classification
    • task192_hotpotqa_sentence_generation
    • task069_abductivenli_classification
    • task574_air_dialogue_sentence_generation
    • task187_snli_entailment_to_contradiction_text_modification
    • task749_glucose_reverse_cause_emotion_detection
    • task1552_scitail_question_generation
    • task750_aqua_multiple_choice_answering
    • task327_jigsaw_classification_toxic
    • task1502_hatexplain_classification
    • task328_jigsaw_classification_insult
    • task304_numeric_fused_head_resolution
    • task1293_kilt_tasks_hotpotqa_question_answering
    • task216_rocstories_correct_answer_generation
    • task1326_qa_zre_question_generation_from_answer
    • task1338_peixian_equity_evaluation_corpus_sentiment_classifier
    • task1729_personachat_generate_next
    • task1202_atomic_classification_xneed
    • task400_paws_paraphrase_classification
    • task502_scruples_anecdotes_whoiswrong_verification
    • task088_identify_typo_verification
    • task221_rocstories_two_choice_classification
    • task200_mnli_entailment_classification
    • task074_squad1.1_question_generation
    • task581_socialiqa_question_generation
    • task1186_nne_hrngo_classification
    • task898_freebase_qa_answer_generation
    • task1408_dart_similarity_classification
    • task168_strategyqa_question_decomposition
    • task1357_xlsum_summary_generation
    • task390_torque_text_span_selection
    • task165_mcscript_question_answering_commonsense
    • task1533_daily_dialog_formal_classification
    • task002_quoref_answer_generation
    • task1297_qasc_question_answering
    • task305_jeopardy_answer_generation_normal
    • task029_winogrande_full_object
    • task1327_qa_zre_answer_generation_from_question
    • task326_jigsaw_classification_obscene
    • task1542_every_ith_element_from_starting
    • task570_recipe_nlg_ner_generation
    • task1409_dart_text_generation
    • task401_numeric_fused_head_reference
    • task846_pubmedqa_classification
    • task1712_poki_classification
    • task344_hybridqa_answer_generation
    • task875_emotion_classification
    • task1214_atomic_classification_xwant
    • task106_scruples_ethical_judgment
    • task238_iirc_answer_from_passage_answer_generation
    • task1391_winogrande_easy_answer_generation
    • task195_sentiment140_classification
    • task163_count_words_ending_with_letter
    • task579_socialiqa_classification
    • task569_recipe_nlg_text_generation
    • task1602_webquestion_question_genreation
    • task747_glucose_cause_emotion_detection
    • task219_rocstories_title_answer_generation
    • task178_quartz_question_answering
    • task103_facts2story_long_text_generation
    • task301_record_question_generation
    • task1369_healthfact_sentence_generation
    • task515_senteval_odd_word_out
    • task496_semeval_answer_generation
    • task1658_billsum_summarization
    • task1204_atomic_classification_hinderedby
    • task1392_superglue_multirc_answer_verification
    • task306_jeopardy_answer_generation_double
    • task1286_openbookqa_question_answering
    • task159_check_frequency_of_words_in_sentence_pair
    • task151_tomqa_find_location_easy_clean
    • task323_jigsaw_classification_sexually_explicit
    • task037_qasc_generate_related_fact
    • task027_drop_answer_type_generation
    • task1596_event2mind_text_generation_2
    • task141_odd-man-out_classification_category
    • task194_duorc_answer_generation
    • task679_hope_edi_english_text_classification
    • task246_dream_question_generation
    • task1195_disflqa_disfluent_to_fluent_conversion
    • task065_timetravel_consistent_sentence_classification
    • task351_winomt_classification_gender_identifiability_anti
    • task580_socialiqa_answer_generation
    • task583_udeps_eng_coarse_pos_tagging
    • task202_mnli_contradiction_classification
    • task222_rocstories_two_chioce_slotting_classification
    • task498_scruples_anecdotes_whoiswrong_classification
    • task067_abductivenli_answer_generation
    • task616_cola_classification
    • task286_olid_offense_judgment
    • task188_snli_neutral_to_entailment_text_modification
    • task223_quartz_explanation_generation
    • task820_protoqa_answer_generation
    • task196_sentiment140_answer_generation
    • task1678_mathqa_answer_selection
    • task349_squad2.0_answerable_unanswerable_question_classification
    • task154_tomqa_find_location_hard_noise
    • task333_hateeval_classification_hate_en
    • task235_iirc_question_from_subtext_answer_generation
    • task1554_scitail_classification
    • task210_logic2text_structured_text_generation
    • task035_winogrande_question_modification_person
    • task230_iirc_passage_classification
    • task1356_xlsum_title_generation
    • task1726_mathqa_correct_answer_generation
    • task302_record_classification
    • task380_boolq_yes_no_question
    • task212_logic2text_classification
    • task748_glucose_reverse_cause_event_detection
    • task834_mathdataset_classification
    • task350_winomt_classification_gender_identifiability_pro
    • task191_hotpotqa_question_generation
    • task236_iirc_question_from_passage_answer_generation
    • task217_rocstories_ordering_answer_generation
    • task568_circa_question_generation
    • task614_glucose_cause_event_detection
    • task361_spolin_yesand_prompt_response_classification
    • task421_persent_sentence_sentiment_classification
    • task203_mnli_sentence_generation
    • task420_persent_document_sentiment_classification
    • task153_tomqa_find_location_hard_clean
    • task346_hybridqa_classification
    • task1211_atomic_classification_hassubevent
    • task360_spolin_yesand_response_generation
    • task510_reddit_tifu_title_summarization
    • task511_reddit_tifu_long_text_summarization
    • task345_hybridqa_answer_generation
    • task270_csrg_counterfactual_context_generation
    • task307_jeopardy_answer_generation_final
    • task001_quoref_question_generation
    • task089_swap_words_verification
    • task1196_atomic_classification_oeffect
    • task080_piqa_answer_generation
    • task1598_nyc_long_text_generation
    • task240_tweetqa_question_generation
    • task615_moviesqa_answer_generation
    • task1347_glue_sts-b_similarity_classification
    • task114_is_the_given_word_longest
    • task292_storycommonsense_character_text_generation
    • task115_help_advice_classification
    • task431_senteval_object_count
    • task1360_numer_sense_multiple_choice_qa_generation
    • task177_para-nmt_paraphrasing
    • task132_dais_text_modification
    • task269_csrg_counterfactual_story_generation
    • task233_iirc_link_exists_classification
    • task161_count_words_containing_letter
    • task1205_atomic_classification_isafter
    • task571_recipe_nlg_ner_generation
    • task1292_yelp_review_full_text_categorization
    • task428_senteval_inversion
    • task311_race_question_generation
    • task429_senteval_tense
    • task403_creak_commonsense_inference
    • task929_products_reviews_classification
    • task582_naturalquestion_answer_generation
    • task237_iirc_answer_from_subtext_answer_generation
    • task050_multirc_answerability
    • task184_break_generate_question
    • task669_ambigqa_answer_generation
    • task169_strategyqa_sentence_generation
    • task500_scruples_anecdotes_title_generation
    • task241_tweetqa_classification
    • task1345_glue_qqp_question_paraprashing
    • task218_rocstories_swap_order_answer_generation
    • task613_politifact_text_generation
    • task1167_penn_treebank_coarse_pos_tagging
    • task1422_mathqa_physics
    • task247_dream_answer_generation
    • task199_mnli_classification
    • task164_mcscript_question_answering_text
    • task1541_agnews_classification
    • task516_senteval_conjoints_inversion
    • task294_storycommonsense_motiv_text_generation
    • task501_scruples_anecdotes_post_type_verification
    • task213_rocstories_correct_ending_classification
    • task821_protoqa_question_generation
    • task493_review_polarity_classification
    • task308_jeopardy_answer_generation_all
    • task1595_event2mind_text_generation_1
    • task040_qasc_question_generation
    • task231_iirc_link_classification
    • task1727_wiqa_what_is_the_effect
    • task578_curiosity_dialogs_answer_generation
    • task310_race_classification
    • task309_race_answer_generation
    • task379_agnews_topic_classification
    • task030_winogrande_full_person
    • task1540_parsed_pdfs_summarization
    • task039_qasc_find_overlapping_words
    • task1206_atomic_classification_isbefore
    • task157_count_vowels_and_consonants
    • task339_record_answer_generation
    • task453_swag_answer_generation
    • task848_pubmedqa_classification
    • task673_google_wellformed_query_classification
    • task676_ollie_relationship_answer_generation
    • task268_casehold_legal_answer_generation
    • task844_financial_phrasebank_classification
    • task330_gap_answer_generation
    • task595_mocha_answer_generation
    • task1285_kpa_keypoint_matching
    • task234_iirc_passage_line_answer_generation
    • task494_review_polarity_answer_generation
    • task670_ambigqa_question_generation
    • task289_gigaword_summarization
    • npr
    • nli
    • SimpleWiki
    • amazon_review_2018
    • ccnews_title_text
    • agnews
    • xsum
    • msmarco
    • yahoo_answers_title_answer
    • squad_pairs
    • wow
    • mteb-amazon_counterfactual-avs_triplets
    • mteb-amazon_massive_intent-avs_triplets
    • mteb-amazon_massive_scenario-avs_triplets
    • mteb-amazon_reviews_multi-avs_triplets
    • mteb-banking77-avs_triplets
    • mteb-emotion-avs_triplets
    • mteb-imdb-avs_triplets
    • mteb-mtop_domain-avs_triplets
    • mteb-mtop_intent-avs_triplets
    • mteb-toxic_conversations_50k-avs_triplets
    • mteb-tweet_sentiment_extraction-avs_triplets
    • covid-bing-query-gpt4-avs_triplets
  • Language: en
  • License: apache-2.0

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): RandomProjection({'in_features': 384, 'out_features': 768, 'seed': 42})
)
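
The RandomProjection module does not appear among the stock Sentence Transformers modules, so the model repository ships its own implementation mapping the 384-dimensional MiniLM embeddings to 768 dimensions. As a rough illustration only, such a module might look like the minimal sketch below, which applies a fixed, seeded Gaussian projection to the pooled sentence embedding; the internals are assumptions, not the repository's actual code.

import torch
from torch import nn

class RandomProjection(nn.Module):
    """Fixed (untrained) random linear projection: 384 -> 768 dimensions."""

    def __init__(self, in_features: int = 384, out_features: int = 768, seed: int = 42):
        super().__init__()
        generator = torch.Generator().manual_seed(seed)
        # Frozen Gaussian projection matrix, registered as a buffer so it is
        # serialized with the model but never touched by the optimizer.
        self.register_buffer(
            "projection", torch.randn(in_features, out_features, generator=generator)
        )

    def forward(self, features: dict) -> dict:
        # Project the pooled sentence embedding; other features pass through.
        features["sentence_embedding"] = features["sentence_embedding"] @ self.projection
        return features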

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("avsolatorio/all-MiniLM-L6-v2-MEDI-MTEB-triplet-randproj-64-final")
# Run inference
sentences = [
    'Does early second-trimester sonography predict adverse perinatal outcomes in monochorionic diamniotic twin pregnancies?',
    'To determine whether intertwin discordant abdominal circumference, femur length, head circumference, and estimated fetal weight sonographic measurements in early second-trimester monochorionic diamniotic twins predict adverse obstetric and neonatal outcomes.We conducted a multicenter retrospective cohort study involving 9 regional perinatal centers in the United States. We examined the records of all monochorionic diamniotic twin pregnancies with two live fetuses at the 16- to 18-week sonographic examination who had serial follow-up sonography until delivery. The intertwin discordance in abdominal circumference, femur length, head circumference, and estimated fetal weight was calculated as the difference between the two fetuses, expressed as a percentage of the larger using the 16- to 18-week sonographic measurements. An adverse composite obstetric outcome was defined as the occurrence of 1 or more of the following in either fetus: intrauterine growth restriction, twin-twin transfusion syndrome, intrauterine fetal death, abnormal growth discordance (≥20% difference), and very preterm birth at or before 28 weeks. An adverse composite neonatal outcome was defined as the occurrence of 1 or more of the following: respiratory distress syndrome, any stage of intraventricular hemorrhage, 5-minute Apgar score less than 7, necrotizing enterocolitis, culture-proven early-onset sepsis, and neonatal death. Receiver operating characteristic and logistic regression-with-generalized estimating equation analyses were constructed.Among the 177 monochorionic diamniotic twin pregnancies analyzed, intertwin abdominal circumference and estimated fetal weight discordances were only predictive of adverse composite obstetric outcomes (areas under the curve, 79% and 80%, respectively). Receiver operating characteristic curves showed that intertwin discordances in abdominal circumference, femur length, head circumference, and estimated fetal weight were not acceptable predictors of twin-twin transfusion syndrome or adverse neonatal outcomes.',
    'Calcium and vitamin D are essential nutrients for bone metabolism Vitamin D can either be obtained from dietary sources or cutaneous synthesis. The study was conducted in subtropic weather; therefore, some might believe that the levels of solar radiation would be sufficient in this area.To evaluate calcium and vitamin D supplementation in postmenopausal women with osteoporosis living in a sunny country.A 3-month controlled clinical trial with 64 postmenopausal women with osteoporosis, mean age 62 + or - 8 years. They were randomly assigned to either the supplement group, who received 1,200 mg of calcium carbonate and 400 IU (10 microg) of vitamin D(3,) or the control group. Dietary intake assessment was performed, bone mineral density and body composition were measured, and biochemical markers of bone metabolism were analyzed.Considering all participants at baseline, serum vitamin D was under 75 nmol/l in 91.4% of the participants. The concentration of serum 25(OH)D increased significantly (p = 0.023) after 3 months of supplementation from 46.67 + or - 13.97 to 59.47 + or - 17.50 nmol/l. However, the dose given was limited in effect, and 86.2% of the supplement group did not reach optimal levels of 25(OH)D. Parathyroid hormone was elevated in 22.4% of the study group. After the intervention period, mean parathyroid hormone tended to decrease in the supplement group (p = 0.063).',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
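
The similarity matrix can also drive retrieval directly. A minimal semantic-search sketch reusing the sentences above, treating the first entry as the query and the remaining two as candidate documents:

# Rank the candidate documents against the query
query_embedding = model.encode([sentences[0]])   # shape [1, 768]
doc_embeddings = model.encode(sentences[1:])     # shape [2, 768]
scores = model.similarity(query_embedding, doc_embeddings)  # shape [1, 2]
best = int(scores.argmax())
print(f"Best match: sentences[{best + 1}] (score={float(scores[0, best]):.4f})")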

Evaluation

Metrics

Triplet

  • cosine_accuracy: 0.9153
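
Here cosine_accuracy is the fraction of evaluation triplets for which the anchor embedding is closer, by cosine similarity, to the positive than to the negative. A score like this can be computed with the library's TripletEvaluator; a minimal sketch with hypothetical triplets (the actual evaluation split is not reproduced here):

from sentence_transformers.evaluation import TripletEvaluator

# Hypothetical example triplets; substitute the real dev split to reproduce
# the figure above.
evaluator = TripletEvaluator(
    anchors=["Does early sonography predict adverse twin outcomes?"],
    positives=["Intertwin discordance at 16-18 weeks predicts adverse obstetric outcomes."],
    negatives=["Calcium and vitamin D supplementation in postmenopausal women."],
    name="dev",
)
results = evaluator(model)
print(results)  # e.g. {'dev_cosine_accuracy': ...} on recent library versions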

Training Details

Training Datasets

NQ

  • Dataset: NQ
  • Size: 49,548 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 10, mean 11.77, max 22 tokens
    • positive (string): min 113, mean 137.23, max 220 tokens
    • negative (string): min 110, mean 138.25, max 239 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
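
The same loss configuration recurs for every training dataset below: MultipleNegativesRankingLoss scores each anchor against all positives and negatives in the batch, so the explicit negative column supplies hard negatives on top of the in-batch ones. A minimal sketch of how the listed parameters map onto the library API (shown on the base model for illustration):

from sentence_transformers import SentenceTransformer, util
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
# scale=20.0 and "cos_sim" correspond to the parameters listed above.
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)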
    

pubmed

  • Dataset: pubmed
  • Size: 29,716 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 6, mean 22.99, max 62 tokens
    • positive (string): min 78, mean 240.63, max 256 tokens
    • negative (string): min 50, mean 239.04, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

specter_train_triples

  • Dataset: specter_train_triples
  • Size: 49,548 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 5, mean 15.21, max 42 tokens
    • positive (string): min 4, mean 13.87, max 45 tokens
    • negative (string): min 4, mean 16.01, max 70 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

S2ORC_citations_abstracts

  • Dataset: S2ORC_citations_abstracts
  • Size: 99,032 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 22, mean 198.64, max 256 tokens
    • positive (string): min 30, mean 203.8, max 256 tokens
    • negative (string): min 24, mean 203.03, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

fever

  • Dataset: fever
  • Size: 74,258 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 6, mean 12.23, max 43 tokens
    • positive (string): min 37, mean 111.79, max 150 tokens
    • negative (string): min 42, mean 113.24, max 179 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

gooaq_pairs

  • Dataset: gooaq_pairs
  • Size: 24,774 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 8, mean 11.86, max 26 tokens
    • positive (string): min 16, mean 59.94, max 138 tokens
    • negative (string): min 15, mean 63.35, max 149 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

codesearchnet

  • Dataset: codesearchnet
  • Size: 14,890 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 4, mean 29.54, max 124 tokens
    • positive (string): min 28, mean 132.91, max 256 tokens
    • negative (string): min 27, mean 163.79, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

wikihow

  • Dataset: wikihow
  • Size: 5,006 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 3, mean 8.16, max 21 tokens
    • positive (string): min 11, mean 44.62, max 117 tokens
    • negative (string): min 8, mean 36.33, max 100 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

WikiAnswers

  • Dataset: WikiAnswers
  • Size: 24,774 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 6, mean 12.83, max 34 tokens
    • positive (string): min 6, mean 12.7, max 36 tokens
    • negative (string): min 7, mean 13.12, max 42 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

eli5_question_answer

  • Dataset: eli5_question_answer
  • Size: 24,774 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 4, mean 20.98, max 75 tokens
    • positive (string): min 12, mean 103.88, max 256 tokens
    • negative (string): min 15, mean 111.38, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

amazon-qa

  • Dataset: amazon-qa
  • Size: 99,032 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 6, mean 23.07, max 256 tokens
    • positive (string): min 15, mean 54.48, max 256 tokens
    • negative (string): min 16, mean 61.35, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

medmcqa

  • Dataset: medmcqa
  • Size: 29,716 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 5, mean 19.86, max 176 tokens
    • positive (string): min 3, mean 113.43, max 256 tokens
    • negative (string): min 3, mean 108.04, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

zeroshot

  • Dataset: zeroshot
  • Size: 14,890 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 5, mean 8.65, max 21 tokens
    • positive (string): min 22, mean 112.61, max 163 tokens
    • negative (string): min 13, mean 117.07, max 214 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

TriviaQA_pairs

  • Dataset: TriviaQA_pairs
  • Size: 49,548 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 7, mean 19.79, max 83 tokens
    • positive (string): min 15, mean 245.73, max 256 tokens
    • negative (string): min 26, mean 231.5, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

PAQ_pairs

  • Dataset: PAQ_pairs
  • Size: 24,774 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 7, mean 12.67, max 21 tokens
    • positive (string): min 110, mean 135.61, max 223 tokens
    • negative (string): min 111, mean 135.86, max 254 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

stackexchange_duplicate_questions_title-body_title-body

  • Dataset: stackexchange_duplicate_questions_title-body_title-body
  • Size: 24,774 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 21, mean 146.64, max 256 tokens
    • positive (string): min 18, mean 141.12, max 256 tokens
    • negative (string): min 25, mean 200.51, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

trex

  • Dataset: trex
  • Size: 29,716 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 5, mean 9.43, max 20 tokens
    • positive (string): min 23, mean 102.9, max 166 tokens
    • negative (string): min 19, mean 118.59, max 236 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

flickr30k_captions

  • Dataset: flickr30k_captions
  • Size: 24,774 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 7, mean 15.87, max 61 tokens
    • positive (string): min 6, mean 15.83, max 48 tokens
    • negative (string): min 7, mean 17.13, max 61 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

hotpotqa

  • Dataset: hotpotqa
  • Size: 39,600 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 9, mean 24.46, max 97 tokens
    • positive (string): min 23, mean 113.58, max 176 tokens
    • negative (string): min 23, mean 114.85, max 167 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task671_ambigqa_text_generation

  • Dataset: task671_ambigqa_text_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 11, mean 12.64, max 26 tokens
    • positive (string): min 11, mean 12.44, max 23 tokens
    • negative (string): min 11, mean 12.2, max 19 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task061_ropes_answer_generation

  • Dataset: task061_ropes_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 117, mean 209.31, max 256 tokens
    • positive (string): min 117, mean 208.62, max 256 tokens
    • negative (string): min 119, mean 211.39, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task285_imdb_answer_generation

  • Dataset: task285_imdb_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 46, mean 209.96, max 256 tokens
    • positive (string): min 49, mean 205.18, max 256 tokens
    • negative (string): min 46, mean 209.96, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task905_hate_speech_offensive_classification

  • Dataset: task905_hate_speech_offensive_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 15, mean 41.48, max 164 tokens
    • positive (string): min 13, mean 40.59, max 198 tokens
    • negative (string): min 13, mean 32.37, max 135 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task566_circa_classification

  • Dataset: task566_circa_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 20, mean 27.85, max 48 tokens
    • positive (string): min 19, mean 27.3, max 44 tokens
    • negative (string): min 20, mean 27.5, max 47 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task184_snli_entailment_to_neutral_text_modification

  • Dataset: task184_snli_entailment_to_neutral_text_modification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 17, mean 29.79, max 72 tokens
    • positive (string): min 16, mean 28.88, max 60 tokens
    • negative (string): min 17, mean 30.16, max 100 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task280_stereoset_classification_stereotype_type

  • Dataset: task280_stereoset_classification_stereotype_type
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 8, mean 18.4, max 53 tokens
    • positive (string): min 8, mean 16.82, max 53 tokens
    • negative (string): min 8, mean 16.81, max 51 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1599_smcalflow_classification

  • Dataset: task1599_smcalflow_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 3, mean 11.32, max 37 tokens
    • positive (string): min 3, mean 10.48, max 38 tokens
    • negative (string): min 5, mean 16.23, max 45 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1384_deal_or_no_dialog_classification

  • Dataset: task1384_deal_or_no_dialog_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 14, mean 59.18, max 256 tokens
    • positive (string): min 12, mean 58.75, max 256 tokens
    • negative (string): min 15, mean 58.81, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task591_sciq_answer_generation

  • Dataset: task591_sciq_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 8, mean 17.64, max 70 tokens
    • positive (string): min 7, mean 17.17, max 43 tokens
    • negative (string): min 6, mean 16.76, max 75 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task823_peixian-rtgender_sentiment_analysis

  • Dataset: task823_peixian-rtgender_sentiment_analysis
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 16, mean 57.03, max 129 tokens
    • positive (string): min 16, mean 59.85, max 153 tokens
    • negative (string): min 14, mean 60.39, max 169 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task023_cosmosqa_question_generation

  • Dataset: task023_cosmosqa_question_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 35, mean 79.22, max 159 tokens
    • positive (string): min 34, mean 80.25, max 165 tokens
    • negative (string): min 35, mean 79.05, max 161 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task900_freebase_qa_category_classification

  • Dataset: task900_freebase_qa_category_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 8, mean 20.33, max 88 tokens
    • positive (string): min 8, mean 18.3, max 62 tokens
    • negative (string): min 8, mean 19.08, max 69 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task924_event2mind_word_generation

  • Dataset: task924_event2mind_word_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 17, mean 32.19, max 64 tokens
    • positive (string): min 17, mean 32.09, max 70 tokens
    • negative (string): min 17, mean 31.45, max 68 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task152_tomqa_find_location_easy_noise

  • Dataset: task152_tomqa_find_location_easy_noise
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 37, mean 52.67, max 79 tokens
    • positive (string): min 37, mean 52.21, max 78 tokens
    • negative (string): min 37, mean 52.78, max 82 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1368_healthfact_sentence_generation

  • Dataset: task1368_healthfact_sentence_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 91, mean 240.92, max 256 tokens
    • positive (string): min 84, mean 239.86, max 256 tokens
    • negative (string): min 97, mean 245.16, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1661_super_glue_classification

  • Dataset: task1661_super_glue_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 35, mean 140.96, max 256 tokens
    • positive (string): min 31, mean 144.29, max 256 tokens
    • negative (string): min 31, mean 143.59, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1187_politifact_classification

  • Dataset: task1187_politifact_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 14, mean 33.19, max 79 tokens
    • positive (string): min 12, mean 31.7, max 75 tokens
    • negative (string): min 13, mean 31.87, max 71 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1728_web_nlg_data_to_text

  • Dataset: task1728_web_nlg_data_to_text
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 7, mean 42.96, max 152 tokens
    • positive (string): min 7, mean 46.52, max 152 tokens
    • negative (string): min 8, mean 42.39, max 152 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task112_asset_simple_sentence_identification

  • Dataset: task112_asset_simple_sentence_identification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 18, mean 51.98, max 136 tokens
    • positive (string): min 18, mean 51.84, max 144 tokens
    • negative (string): min 22, mean 51.97, max 114 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1340_msr_text_compression_compression

  • Dataset: task1340_msr_text_compression_compression
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 14, mean 42.15, max 116 tokens
    • positive (string): min 14, mean 44.46, max 133 tokens
    • negative (string): min 12, mean 40.14, max 141 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task072_abductivenli_answer_generation

  • Dataset: task072_abductivenli_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 17, mean 26.9, max 56 tokens
    • positive (string): min 16, mean 26.28, max 47 tokens
    • negative (string): min 16, mean 26.46, max 55 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1504_hatexplain_answer_generation

  • Dataset: task1504_hatexplain_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 7, mean 29.09, max 72 tokens
    • positive (string): min 5, mean 24.67, max 86 tokens
    • negative (string): min 5, mean 27.96, max 67 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task684_online_privacy_policy_text_information_type_generation

  • Dataset: task684_online_privacy_policy_text_information_type_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 10, mean 30.02, max 68 tokens
    • positive (string): min 10, mean 30.19, max 61 tokens
    • negative (string): min 14, mean 30.18, max 68 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1290_xsum_summarization

  • Dataset: task1290_xsum_summarization
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 39, mean 226.27, max 256 tokens
    • positive (string): min 50, mean 228.93, max 256 tokens
    • negative (string): min 34, mean 229.41, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task075_squad1.1_answer_generation

  • Dataset: task075_squad1.1_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 48, mean 168.58, max 256 tokens
    • positive (string): min 45, mean 172.1, max 256 tokens
    • negative (string): min 46, mean 181.15, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1587_scifact_classification

  • Dataset: task1587_scifact_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 88, mean 242.35, max 256 tokens
    • positive (string): min 90, mean 246.75, max 256 tokens
    • negative (string): min 86, mean 244.87, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task384_socialiqa_question_classification

  • Dataset: task384_socialiqa_question_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 24, mean 35.44, max 78 tokens
    • positive (string): min 22, mean 34.35, max 59 tokens
    • negative (string): min 22, mean 34.51, max 57 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1555_scitail_answer_generation

  • Dataset: task1555_scitail_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 18, mean 36.72, max 90 tokens
    • positive (string): min 18, mean 36.31, max 80 tokens
    • negative (string): min 18, mean 36.73, max 92 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1532_daily_dialog_emotion_classification

  • Dataset: task1532_daily_dialog_emotion_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 16, mean 137.07, max 256 tokens
    • positive (string): min 15, mean 140.81, max 256 tokens
    • negative (string): min 17, mean 132.89, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task239_tweetqa_answer_generation

  • Dataset: task239_tweetqa_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 28, mean 55.78, max 85 tokens
    • positive (string): min 29, mean 56.32, max 92 tokens
    • negative (string): min 25, mean 55.92, max 81 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task596_mocha_question_generation

  • Dataset: task596_mocha_question_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 34, mean 80.49, max 163 tokens
    • positive (string): min 12, mean 95.93, max 256 tokens
    • negative (string): min 10, mean 44.93, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1411_dart_subject_identification

  • Dataset: task1411_dart_subject_identification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 7, mean 14.86, max 74 tokens
    • positive (string): min 6, mean 14.02, max 37 tokens
    • negative (string): min 6, mean 14.25, max 38 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1359_numer_sense_answer_generation

  • Dataset: task1359_numer_sense_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 10, mean 18.67, max 30 tokens
    • positive (string): min 10, mean 18.43, max 33 tokens
    • negative (string): min 10, mean 18.34, max 30 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task329_gap_classification

  • Dataset: task329_gap_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 40, mean 122.88, max 256 tokens
    • positive (string): min 62, mean 127.47, max 256 tokens
    • negative (string): min 58, mean 127.71, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task220_rocstories_title_classification

  • Dataset: task220_rocstories_title_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 53, mean 80.81, max 116 tokens
    • positive (string): min 51, mean 81.08, max 108 tokens
    • negative (string): min 55, mean 79.99, max 115 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task316_crows-pairs_classification_stereotype

  • Dataset: task316_crows-pairs_classification_stereotype
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 8, mean 19.78, max 51 tokens
    • positive (string): min 8, mean 18.31, max 41 tokens
    • negative (string): min 7, mean 19.87, max 52 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task495_semeval_headline_classification

  • Dataset: task495_semeval_headline_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 17, mean 24.57, max 42 tokens
    • positive (string): min 15, mean 24.29, max 41 tokens
    • negative (string): min 15, mean 24.14, max 38 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1168_brown_coarse_pos_tagging

  • Dataset: task1168_brown_coarse_pos_tagging
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 13, mean 43.61, max 142 tokens
    • positive (string): min 12, mean 42.6, max 197 tokens
    • negative (string): min 12, mean 44.23, max 197 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task348_squad2.0_unanswerable_question_generation

  • Dataset: task348_squad2.0_unanswerable_question_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 30, mean 153.88, max 256 tokens
    • positive (string): min 38, mean 161.26, max 256 tokens
    • negative (string): min 33, mean 166.13, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task049_multirc_questions_needed_to_answer

  • Dataset: task049_multirc_questions_needed_to_answer
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 174, mean 252.7, max 256 tokens
    • positive (string): min 169, mean 252.85, max 256 tokens
    • negative (string): min 178, mean 252.93, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1534_daily_dialog_question_classification

  • Dataset: task1534_daily_dialog_question_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 17, mean 124.7, max 256 tokens
    • positive (string): min 15, mean 130.68, max 256 tokens
    • negative (string): min 16, mean 135.16, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task322_jigsaw_classification_threat

  • Dataset: task322_jigsaw_classification_threat
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 7, mean 54.9, max 256 tokens
    • positive (string): min 6, mean 62.74, max 249 tokens
    • negative (string): min 6, mean 61.92, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task295_semeval_2020_task4_commonsense_reasoning

  • Dataset: task295_semeval_2020_task4_commonsense_reasoning
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 25, mean 45.35, max 92 tokens
    • positive (string): min 25, mean 44.74, max 95 tokens
    • negative (string): min 25, mean 44.53, max 88 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task186_snli_contradiction_to_entailment_text_modification

  • Dataset: task186_snli_contradiction_to_entailment_text_modification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 18, mean 31.09, max 102 tokens
    • positive (string): min 18, mean 30.26, max 65 tokens
    • negative (string): min 18, mean 32.22, max 67 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task034_winogrande_question_modification_object

  • Dataset: task034_winogrande_question_modification_object
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 29, mean 36.26, max 53 tokens
    • positive (string): min 29, mean 35.64, max 54 tokens
    • negative (string): min 29, mean 34.85, max 55 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task160_replace_letter_in_a_sentence

  • Dataset: task160_replace_letter_in_a_sentence
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 29, mean 32.03, max 49 tokens
    • positive (string): min 28, mean 31.76, max 41 tokens
    • negative (string): min 29, mean 31.77, max 48 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task469_mrqa_answer_generation

  • Dataset: task469_mrqa_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 27, mean 182.13, max 256 tokens
    • positive (string): min 25, mean 180.78, max 256 tokens
    • negative (string): min 27, mean 183.72, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task105_story_cloze-rocstories_sentence_generation

  • Dataset: task105_story_cloze-rocstories_sentence_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 36, mean 55.65, max 75 tokens
    • positive (string): min 35, mean 55.02, max 76 tokens
    • negative (string): min 36, mean 55.88, max 76 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task649_race_blank_question_generation

  • Dataset: task649_race_blank_question_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 36, mean 252.95, max 256 tokens
    • positive (string): min 36, mean 252.78, max 256 tokens
    • negative (string): min 157, mean 253.91, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1536_daily_dialog_happiness_classification

  • Dataset: task1536_daily_dialog_happiness_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 13, mean 127.91, max 256 tokens
    • positive (string): min 13, mean 134.02, max 256 tokens
    • negative (string): min 16, mean 143.7, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task683_online_privacy_policy_text_purpose_answer_generation

  • Dataset: task683_online_privacy_policy_text_purpose_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 10, mean 30.09, max 68 tokens
    • positive (string): min 10, mean 30.5, max 64 tokens
    • negative (string): min 14, mean 30.07, max 68 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task024_cosmosqa_answer_generation

  • Dataset: task024_cosmosqa_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 45, mean 92.62, max 176 tokens
    • positive (string): min 47, mean 93.35, max 174 tokens
    • negative (string): min 42, mean 94.9, max 183 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task584_udeps_eng_fine_pos_tagging

  • Dataset: task584_udeps_eng_fine_pos_tagging
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 12, mean 40.09, max 120 tokens
    • positive (string): min 12, mean 39.35, max 186 tokens
    • negative (string): min 12, mean 40.38, max 148 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task066_timetravel_binary_consistency_classification

  • Dataset: task066_timetravel_binary_consistency_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 42, mean 66.69, max 93 tokens
    • positive (string): min 43, mean 67.34, max 94 tokens
    • negative (string): min 45, mean 67.19, max 92 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task413_mickey_en_sentence_perturbation_generation

  • Dataset: task413_mickey_en_sentence_perturbation_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 7, mean 13.71, max 21 tokens
    • positive (string): min 7, mean 13.75, max 21 tokens
    • negative (string): min 7, mean 13.29, max 20 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task182_duorc_question_generation

  • Dataset: task182_duorc_question_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 99, mean 242.77, max 256 tokens
    • positive (string): min 120, mean 246.47, max 256 tokens
    • negative (string): min 99, mean 246.38, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task028_drop_answer_generation

  • Dataset: task028_drop_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 76, mean 230.94, max 256 tokens
    • positive (string): min 86, mean 234.89, max 256 tokens
    • negative (string): min 81, mean 235.48, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1601_webquestions_answer_generation

  • Dataset: task1601_webquestions_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 9, mean 16.49, max 28 tokens
    • positive (string): min 11, mean 16.71, max 28 tokens
    • negative (string): min 9, mean 16.76, max 27 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1295_adversarial_qa_question_answering

  • Dataset: task1295_adversarial_qa_question_answering
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 45, mean 163.69, max 256 tokens
    • positive (string): min 54, mean 166.23, max 256 tokens
    • negative (string): min 48, mean 166.52, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task201_mnli_neutral_classification

  • Dataset: task201_mnli_neutral_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 24, mean 72.97, max 218 tokens
    • positive (string): min 25, mean 73.29, max 170 tokens
    • negative (string): min 27, mean 72.24, max 205 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task038_qasc_combined_fact

  • Dataset: task038_qasc_combined_fact
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 18, mean 31.25, max 57 tokens
    • positive (string): min 19, mean 30.61, max 53 tokens
    • negative (string): min 18, mean 30.86, max 53 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task293_storycommonsense_emotion_text_generation

  • Dataset: task293_storycommonsense_emotion_text_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 14, mean 40.25, max 86 tokens
    • positive (string): min 15, mean 40.27, max 86 tokens
    • negative (string): min 14, mean 38.11, max 86 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task572_recipe_nlg_text_generation

  • Dataset: task572_recipe_nlg_text_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 24, mean 115.66, max 256 tokens
    • positive (string): min 24, mean 122.27, max 256 tokens
    • negative (string): min 24, mean 124.11, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task517_emo_classify_emotion_of_dialogue

  • Dataset: task517_emo_classify_emotion_of_dialogue
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 7, mean 18.13, max 78 tokens
    • positive (string): min 7, mean 17.07, max 59 tokens
    • negative (string): min 7, mean 18.5, max 67 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task382_hybridqa_answer_generation

  • Dataset: task382_hybridqa_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 29, mean 42.28, max 70 tokens
    • positive (string): min 29, mean 41.56, max 74 tokens
    • negative (string): min 28, mean 41.74, max 75 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task176_break_decompose_questions

  • Dataset: task176_break_decompose_questions
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 9, mean 17.48, max 41 tokens
    • positive (string): min 8, mean 17.2, max 39 tokens
    • negative (string): min 8, mean 15.6, max 38 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1291_multi_news_summarization

  • Dataset: task1291_multi_news_summarization
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 116, mean 255.49, max 256 tokens
    • positive (string): min 146, mean 255.55, max 256 tokens
    • negative (string): min 68, mean 251.87, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task155_count_nouns_verbs

  • Dataset: task155_count_nouns_verbs
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 23, mean 27.05, max 56 tokens
    • positive (string): min 23, mean 26.81, max 43 tokens
    • negative (string): min 23, mean 26.98, max 46 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task031_winogrande_question_generation_object

  • Dataset: task031_winogrande_question_generation_object
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 7, mean 7.42, max 11 tokens
    • positive (string): min 7, mean 7.3, max 11 tokens
    • negative (string): min 7, mean 7.25, max 11 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task279_stereoset_classification_stereotype

  • Dataset: task279_stereoset_classification_stereotype
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 8, mean 17.85, max 41 tokens
    • positive (string): min 8, mean 15.47, max 43 tokens
    • negative (string): min 8, mean 17.28, max 50 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1336_peixian_equity_evaluation_corpus_gender_classifier

  • Dataset: task1336_peixian_equity_evaluation_corpus_gender_classifier
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 6, mean 9.66, max 17 tokens
    • positive (string): min 6, mean 9.61, max 16 tokens
    • negative (string): min 6, mean 9.72, max 16 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task508_scruples_dilemmas_more_ethical_isidentifiable

  • Dataset: task508_scruples_dilemmas_more_ethical_isidentifiable
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 12, mean 29.84, max 94 tokens
    • positive (string): min 12, mean 28.5, max 94 tokens
    • negative (string): min 12, mean 28.66, max 86 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task518_emo_different_dialogue_emotions

  • Dataset: task518_emo_different_dialogue_emotions
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 28, mean 47.9, max 106 tokens
    • positive (string): min 28, mean 45.44, max 116 tokens
    • negative (string): min 26, mean 46.17, max 123 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task077_splash_explanation_to_sql

  • Dataset: task077_splash_explanation_to_sql
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 8, mean 39.24, max 126 tokens
    • positive (string): min 8, mean 39.15, max 126 tokens
    • negative (string): min 8, mean 35.65, max 111 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task923_event2mind_classifier

  • Dataset: task923_event2mind_classifier
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 10, mean 20.63, max 46 tokens
    • positive (string): min 11, mean 18.75, max 41 tokens
    • negative (string): min 11, mean 19.63, max 46 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task470_mrqa_question_generation

  • Dataset: task470_mrqa_question_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 13, mean 173.13, max 256 tokens
    • positive (string): min 11, mean 175.67, max 256 tokens
    • negative (string): min 14, mean 181.16, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task638_multi_woz_classification

  • Dataset: task638_multi_woz_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 78, mean 223.5, max 256 tokens
    • positive (string): min 76, mean 220.15, max 256 tokens
    • negative (string): min 64, mean 220.29, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1412_web_questions_question_answering

  • Dataset: task1412_web_questions_question_answering
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 6, mean 10.32, max 17 tokens
    • positive (string): min 6, mean 10.23, max 17 tokens
    • negative (string): min 6, mean 10.06, max 16 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task847_pubmedqa_question_generation

  • Dataset: task847_pubmedqa_question_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 21, mean 249.15, max 256 tokens
    • positive (string): min 21, mean 248.61, max 256 tokens
    • negative (string): min 43, mean 248.86, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task678_ollie_actual_relationship_answer_generation

  • Dataset: task678_ollie_actual_relationship_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 20, mean 40.63, max 95 tokens
    • positive (string): min 19, mean 38.38, max 102 tokens
    • negative (string): min 18, mean 40.99, max 104 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task290_tellmewhy_question_answerability

  • Dataset: task290_tellmewhy_question_answerability
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 37, mean 62.58, max 95 tokens
    • positive (string): min 36, mean 62.21, max 94 tokens
    • negative (string): min 37, mean 62.91, max 95 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task575_air_dialogue_classification

  • Dataset: task575_air_dialogue_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 4, mean 14.18, max 45 tokens
    • positive (string): min 4, mean 13.6, max 43 tokens
    • negative (string): min 4, mean 12.3, max 42 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task189_snli_neutral_to_contradiction_text_modification

  • Dataset: task189_snli_neutral_to_contradiction_text_modification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 18, mean 31.89, max 60 tokens
    • positive (string): min 18, mean 30.66, max 57 tokens
    • negative (string): min 18, mean 33.29, max 105 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task026_drop_question_generation

  • Dataset: task026_drop_question_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 82, mean 219.82, max 256 tokens
    • positive (string): min 57, mean 222.71, max 256 tokens
    • negative (string): min 96, mean 232.56, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task162_count_words_starting_with_letter

  • Dataset: task162_count_words_starting_with_letter
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 28, mean 32.16, max 56 tokens
    • positive (string): min 28, mean 31.77, max 45 tokens
    • negative (string): min 28, mean 31.65, max 46 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task079_conala_concat_strings

  • Dataset: task079_conala_concat_strings
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 11, mean 39.94, max 76 tokens
    • positive (string): min 11, mean 34.24, max 80 tokens
    • negative (string): min 11, mean 33.86, max 76 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task610_conllpp_ner

  • Dataset: task610_conllpp_ner
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 4, mean 19.74, max 62 tokens
    • positive (string): min 4, mean 20.71, max 62 tokens
    • negative (string): min 4, mean 14.24, max 54 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task046_miscellaneous_question_typing

  • Dataset: task046_miscellaneous_question_typing
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 16, mean 25.26, max 70 tokens
    • positive (string): min 16, mean 24.84, max 70 tokens
    • negative (string): min 16, mean 25.2, max 57 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task197_mnli_domain_answer_generation

  • Dataset: task197_mnli_domain_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 15, mean 44.08, max 197 tokens
    • positive (string): min 12, mean 44.95, max 211 tokens
    • negative (string): min 11, mean 39.27, max 115 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1325_qa_zre_question_generation_on_subject_relation

  • Dataset: task1325_qa_zre_question_generation_on_subject_relation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 18, mean 50.63, max 256 tokens
    • positive (string): min 20, mean 49.26, max 180 tokens
    • negative (string): min 22, mean 54.42, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task430_senteval_subject_count

  • Dataset: task430_senteval_subject_count
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 7, mean 17.26, max 35 tokens
    • positive (string): min 7, mean 15.37, max 34 tokens
    • negative (string): min 7, mean 16.07, max 34 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task672_nummersense

  • Dataset: task672_nummersense
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 7, mean 15.66, max 30 tokens
    • positive (string): min 7, mean 15.43, max 27 tokens
    • negative (string): min 7, mean 15.25, max 30 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task402_grailqa_paraphrase_generation

  • Dataset: task402_grailqa_paraphrase_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 23, mean 129.84, max 256 tokens
    • positive (string): min 24, mean 139.54, max 256 tokens
    • negative (string): min 22, mean 136.75, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task904_hate_speech_offensive_classification

  • Dataset: task904_hate_speech_offensive_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 8, mean 34.35, max 157 tokens
    • positive (string): min 8, mean 34.38, max 256 tokens
    • negative (string): min 5, mean 27.8, max 148 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task192_hotpotqa_sentence_generation

  • Dataset: task192_hotpotqa_sentence_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 37, mean 124.56, max 256 tokens
    • positive (string): min 35, mean 123.35, max 256 tokens
    • negative (string): min 33, mean 132.67, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task069_abductivenli_classification

  • Dataset: task069_abductivenli_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 33, mean 52.03, max 86 tokens
    • positive (string): min 33, mean 51.87, max 95 tokens
    • negative (string): min 33, mean 52.01, max 95 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task574_air_dialogue_sentence_generation

  • Dataset: task574_air_dialogue_sentence_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 54, mean 144.28, max 256 tokens
    • positive (string): min 57, mean 144.0, max 256 tokens
    • negative (string): min 66, mean 148.22, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task187_snli_entailment_to_contradiction_text_modification

  • Dataset: task187_snli_entailment_to_contradiction_text_modification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 16, mean 30.35, max 69 tokens
    • positive (string): min 16, mean 29.87, max 104 tokens
    • negative (string): min 17, mean 29.47, max 71 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task749_glucose_reverse_cause_emotion_detection

  • Dataset: task749_glucose_reverse_cause_emotion_detection
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 38, mean 67.51, max 106 tokens
    • positive (string): min 37, mean 67.07, max 104 tokens
    • negative (string): min 39, mean 68.56, max 107 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1552_scitail_question_generation

  • Dataset: task1552_scitail_question_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 7, mean 18.34, max 53 tokens
    • positive (string): min 7, mean 17.5, max 46 tokens
    • negative (string): min 7, mean 15.81, max 54 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task750_aqua_multiple_choice_answering

  • Dataset: task750_aqua_multiple_choice_answering
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 33, mean 69.8, max 194 tokens
    • positive (string): min 32, mean 68.34, max 194 tokens
    • negative (string): min 28, mean 68.21, max 165 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task327_jigsaw_classification_toxic

  • Dataset: task327_jigsaw_classification_toxic
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 5, mean 36.99, max 234 tokens
    • positive (string): min 5, mean 41.72, max 256 tokens
    • negative (string): min 5, mean 44.88, max 244 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1502_hatexplain_classification

  • Dataset: task1502_hatexplain_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 5, mean 28.7, max 73 tokens
    • positive (string): min 5, mean 26.89, max 110 tokens
    • negative (string): min 5, mean 26.9, max 90 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task328_jigsaw_classification_insult

  • Dataset: task328_jigsaw_classification_insult
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 5, mean 50.28, max 247 tokens
    • positive (string): min 5, mean 60.6, max 256 tokens
    • negative (string): min 5, mean 64.07, max 249 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task304_numeric_fused_head_resolution

  • Dataset: task304_numeric_fused_head_resolution
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 15, mean 116.82, max 256 tokens
    • positive (string): min 12, mean 118.84, max 256 tokens
    • negative (string): min 11, mean 131.78, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1293_kilt_tasks_hotpotqa_question_answering

  • Dataset: task1293_kilt_tasks_hotpotqa_question_answering
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 10, mean 24.8, max 114 tokens
    • positive (string): min 9, mean 24.33, max 114 tokens
    • negative (string): min 8, mean 23.79, max 84 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task216_rocstories_correct_answer_generation

  • Dataset: task216_rocstories_correct_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 39, mean 59.37, max 83 tokens
    • positive (string): min 36, mean 58.11, max 92 tokens
    • negative (string): min 39, mean 58.26, max 95 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1326_qa_zre_question_generation_from_answer

  • Dataset: task1326_qa_zre_question_generation_from_answer
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 17, mean 46.71, max 256 tokens
    • positive (string): min 14, mean 45.51, max 256 tokens
    • negative (string): min 18, mean 49.23, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1338_peixian_equity_evaluation_corpus_sentiment_classifier

  • Dataset: task1338_peixian_equity_evaluation_corpus_sentiment_classifier
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 6, mean: 9.72, max: 16 tokens
    positive (string): min: 6, mean: 9.73, max: 16 tokens
    negative (string): min: 6, mean: 9.61, max: 17 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1729_personachat_generate_next

  • Dataset: task1729_personachat_generate_next
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 44, mean: 147.13, max: 256 tokens
    positive (string): min: 43, mean: 142.78, max: 256 tokens
    negative (string): min: 50, mean: 144.33, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1202_atomic_classification_xneed

  • Dataset: task1202_atomic_classification_xneed
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 14, mean: 19.54, max: 32 tokens
    positive (string): min: 14, mean: 19.41, max: 31 tokens
    negative (string): min: 14, mean: 19.22, max: 28 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task400_paws_paraphrase_classification

  • Dataset: task400_paws_paraphrase_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 19, mean: 52.25, max: 97 tokens
    positive (string): min: 18, mean: 51.75, max: 98 tokens
    negative (string): min: 19, mean: 52.95, max: 97 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task502_scruples_anecdotes_whoiswrong_verification

  • Dataset: task502_scruples_anecdotes_whoiswrong_verification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 12, mean: 230.24, max: 256 tokens
    positive (string): min: 12, mean: 236.91, max: 256 tokens
    negative (string): min: 23, mean: 235.21, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task088_identify_typo_verification

  • Dataset: task088_identify_typo_verification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 11, mean: 15.12, max: 48 tokens
    positive (string): min: 10, mean: 15.06, max: 47 tokens
    negative (string): min: 10, mean: 15.45, max: 47 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task221_rocstories_two_choice_classification

  • Dataset: task221_rocstories_two_choice_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 47, mean: 72.36, max: 108 tokens
    positive (string): min: 48, mean: 72.48, max: 109 tokens
    negative (string): min: 46, mean: 73.1, max: 108 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task200_mnli_entailment_classification

  • Dataset: task200_mnli_entailment_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 24, mean: 72.71, max: 198 tokens
    positive (string): min: 23, mean: 73.01, max: 224 tokens
    negative (string): min: 23, mean: 73.39, max: 226 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task074_squad1.1_question_generation

  • Dataset: task074_squad1.1_question_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 30, mean: 150.13, max: 256 tokens
    positive (string): min: 33, mean: 160.24, max: 256 tokens
    negative (string): min: 38, mean: 164.44, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task581_socialiqa_question_generation

  • Dataset: task581_socialiqa_question_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 12, mean: 26.5, max: 69 tokens
    positive (string): min: 14, mean: 25.65, max: 48 tokens
    negative (string): min: 15, mean: 25.77, max: 48 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1186_nne_hrngo_classification

  • Dataset: task1186_nne_hrngo_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 19, mean: 33.8, max: 79 tokens
    positive (string): min: 19, mean: 33.54, max: 74 tokens
    negative (string): min: 20, mean: 33.65, max: 77 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task898_freebase_qa_answer_generation

  • Dataset: task898_freebase_qa_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 8, mean: 19.39, max: 125 tokens
    positive (string): min: 8, mean: 17.69, max: 49 tokens
    negative (string): min: 8, mean: 17.38, max: 79 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1408_dart_similarity_classification

  • Dataset: task1408_dart_similarity_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 22, mean: 59.5, max: 147 tokens
    positive (string): min: 22, mean: 61.89, max: 154 tokens
    negative (string): min: 20, mean: 48.9, max: 124 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task168_strategyqa_question_decomposition

  • Dataset: task168_strategyqa_question_decomposition
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 42, mean: 79.99, max: 181 tokens
    positive (string): min: 42, mean: 79.63, max: 179 tokens
    negative (string): min: 42, mean: 76.6, max: 166 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1357_xlsum_summary_generation

  • Dataset: task1357_xlsum_summary_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 67, mean: 241.38, max: 256 tokens
    positive (string): min: 69, mean: 243.16, max: 256 tokens
    negative (string): min: 67, mean: 246.78, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task390_torque_text_span_selection

  • Dataset: task390_torque_text_span_selection
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 47, mean: 110.58, max: 196 tokens
    positive (string): min: 42, mean: 110.41, max: 195 tokens
    negative (string): min: 48, mean: 111.15, max: 196 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task165_mcscript_question_answering_commonsense

  • Dataset: task165_mcscript_question_answering_commonsense
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 147, mean: 199.7, max: 256 tokens
    positive (string): min: 145, mean: 198.04, max: 256 tokens
    negative (string): min: 147, mean: 200.11, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1533_daily_dialog_formal_classification

  • Dataset: task1533_daily_dialog_formal_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 13, mean: 130.14, max: 256 tokens
    positive (string): min: 15, mean: 136.4, max: 256 tokens
    negative (string): min: 17, mean: 137.09, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task002_quoref_answer_generation

  • Dataset: task002_quoref_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 214, mean: 255.53, max: 256 tokens
    positive (string): min: 214, mean: 255.5, max: 256 tokens
    negative (string): min: 224, mean: 255.58, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1297_qasc_question_answering

  • Dataset: task1297_qasc_question_answering
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 61, mean: 84.44, max: 134 tokens
    positive (string): min: 59, mean: 85.31, max: 130 tokens
    negative (string): min: 58, mean: 84.94, max: 125 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task305_jeopardy_answer_generation_normal

  • Dataset: task305_jeopardy_answer_generation_normal
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 9, mean: 27.68, max: 59 tokens
    positive (string): min: 9, mean: 27.48, max: 45 tokens
    negative (string): min: 11, mean: 27.42, max: 46 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task029_winogrande_full_object

  • Dataset: task029_winogrande_full_object
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 7, mean: 7.38, max: 12 tokens
    positive (string): min: 7, mean: 7.34, max: 11 tokens
    negative (string): min: 7, mean: 7.24, max: 10 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1327_qa_zre_answer_generation_from_question

  • Dataset: task1327_qa_zre_answer_generation_from_question
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 24, mean: 54.88, max: 256 tokens
    positive (string): min: 23, mean: 52.02, max: 256 tokens
    negative (string): min: 27, mean: 56.19, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task326_jigsaw_classification_obscene

  • Dataset: task326_jigsaw_classification_obscene
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 5, mean: 63.85, max: 256 tokens
    positive (string): min: 5, mean: 76.17, max: 256 tokens
    negative (string): min: 5, mean: 72.28, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1542_every_ith_element_from_starting

  • Dataset: task1542_every_ith_element_from_starting
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 13, mean: 125.18, max: 245 tokens
    positive (string): min: 13, mean: 123.56, max: 244 tokens
    negative (string): min: 13, mean: 121.24, max: 238 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task570_recipe_nlg_ner_generation

  • Dataset: task570_recipe_nlg_ner_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 9, mean: 74.84, max: 250 tokens
    positive (string): min: 5, mean: 73.97, max: 256 tokens
    negative (string): min: 8, mean: 76.51, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1409_dart_text_generation

  • Dataset: task1409_dart_text_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 18, mean: 67.5, max: 174 tokens
    positive (string): min: 18, mean: 72.28, max: 170 tokens
    negative (string): min: 17, mean: 67.22, max: 164 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task401_numeric_fused_head_reference

  • Dataset: task401_numeric_fused_head_reference
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 16, mean: 109.31, max: 256 tokens
    positive (string): min: 16, mean: 114.71, max: 256 tokens
    negative (string): min: 18, mean: 120.55, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task846_pubmedqa_classification

  • Dataset: task846_pubmedqa_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 32, mean: 86.22, max: 246 tokens
    positive (string): min: 33, mean: 85.64, max: 225 tokens
    negative (string): min: 28, mean: 94.03, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1712_poki_classification

  • Dataset: task1712_poki_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 6, mean: 53.16, max: 256 tokens
    positive (string): min: 7, mean: 56.97, max: 256 tokens
    negative (string): min: 7, mean: 63.57, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task344_hybridqa_answer_generation

  • Dataset: task344_hybridqa_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 9, mean: 22.21, max: 50 tokens
    positive (string): min: 8, mean: 21.92, max: 58 tokens
    negative (string): min: 7, mean: 22.19, max: 55 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task875_emotion_classification

  • Dataset: task875_emotion_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 4, mean: 23.18, max: 75 tokens
    positive (string): min: 4, mean: 18.52, max: 63 tokens
    negative (string): min: 5, mean: 20.35, max: 68 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1214_atomic_classification_xwant

  • Dataset: task1214_atomic_classification_xwant
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 14, mean: 19.64, max: 32 tokens
    positive (string): min: 14, mean: 19.36, max: 29 tokens
    negative (string): min: 14, mean: 19.54, max: 31 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task106_scruples_ethical_judgment

  • Dataset: task106_scruples_ethical_judgment
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 12, mean: 30.0, max: 70 tokens
    positive (string): min: 14, mean: 28.89, max: 86 tokens
    negative (string): min: 14, mean: 28.73, max: 58 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task238_iirc_answer_from_passage_answer_generation

  • Dataset: task238_iirc_answer_from_passage_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 138, mean: 242.78, max: 256 tokens
    positive (string): min: 165, mean: 242.57, max: 256 tokens
    negative (string): min: 173, mean: 243.0, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1391_winogrande_easy_answer_generation

  • Dataset: task1391_winogrande_easy_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 26, mean: 31.63, max: 54 tokens
    positive (string): min: 26, mean: 31.36, max: 48 tokens
    negative (string): min: 25, mean: 31.3, max: 49 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task195_sentiment140_classification

  • Dataset: task195_sentiment140_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 4, mean: 22.47, max: 118 tokens
    positive (string): min: 4, mean: 18.84, max: 79 tokens
    negative (string): min: 5, mean: 21.25, max: 51 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task163_count_words_ending_with_letter

  • Dataset: task163_count_words_ending_with_letter
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 28, mean: 32.05, max: 54 tokens
    positive (string): min: 28, mean: 31.69, max: 57 tokens
    negative (string): min: 28, mean: 31.58, max: 43 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task579_socialiqa_classification

  • Dataset: task579_socialiqa_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 39, mean: 54.11, max: 132 tokens
    positive (string): min: 36, mean: 53.52, max: 103 tokens
    negative (string): min: 40, mean: 54.12, max: 84 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task569_recipe_nlg_text_generation

  • Dataset: task569_recipe_nlg_text_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 25, mean: 192.16, max: 256 tokens
    positive (string): min: 55, mean: 193.74, max: 256 tokens
    negative (string): min: 37, mean: 199.11, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1602_webquestion_question_genreation

  • Dataset: task1602_webquestion_question_genreation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 12, mean: 23.95, max: 112 tokens
    positive (string): min: 12, mean: 24.6, max: 112 tokens
    negative (string): min: 12, mean: 22.6, max: 120 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task747_glucose_cause_emotion_detection

  • Dataset: task747_glucose_cause_emotion_detection
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 35, mean: 68.23, max: 112 tokens
    positive (string): min: 36, mean: 68.25, max: 108 tokens
    negative (string): min: 36, mean: 68.75, max: 99 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task219_rocstories_title_answer_generation

  • Dataset: task219_rocstories_title_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 42, mean: 67.62, max: 97 tokens
    positive (string): min: 45, mean: 66.65, max: 97 tokens
    negative (string): min: 41, mean: 66.89, max: 96 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task178_quartz_question_answering

  • Dataset: task178_quartz_question_answering
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 28, mean: 57.96, max: 110 tokens
    positive (string): min: 28, mean: 57.18, max: 111 tokens
    negative (string): min: 28, mean: 56.74, max: 102 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task103_facts2story_long_text_generation

  • Dataset: task103_facts2story_long_text_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 52, mean: 80.34, max: 143 tokens
    positive (string): min: 51, mean: 82.24, max: 157 tokens
    negative (string): min: 49, mean: 78.57, max: 136 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task301_record_question_generation

  • Dataset: task301_record_question_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 140, mean: 210.76, max: 256 tokens
    positive (string): min: 139, mean: 209.62, max: 256 tokens
    negative (string): min: 143, mean: 209.06, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1369_healthfact_sentence_generation

  • Dataset: task1369_healthfact_sentence_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 110, mean: 243.14, max: 256 tokens
    positive (string): min: 101, mean: 242.95, max: 256 tokens
    negative (string): min: 113, mean: 251.89, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task515_senteval_odd_word_out

  • Dataset: task515_senteval_odd_word_out
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 7, mean: 19.75, max: 36 tokens
    positive (string): min: 7, mean: 19.02, max: 38 tokens
    negative (string): min: 7, mean: 18.93, max: 35 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task496_semeval_answer_generation

  • Dataset: task496_semeval_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 4, mean: 28.06, max: 46 tokens
    positive (string): min: 18, mean: 27.74, max: 45 tokens
    negative (string): min: 19, mean: 27.69, max: 45 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1658_billsum_summarization

  • Dataset: task1658_billsum_summarization
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 256, mean: 256.0, max: 256 tokens
    positive (string): min: 256, mean: 256.0, max: 256 tokens
    negative (string): min: 256, mean: 256.0, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1204_atomic_classification_hinderedby

  • Dataset: task1204_atomic_classification_hinderedby
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 14, mean: 21.98, max: 35 tokens
    positive (string): min: 14, mean: 22.01, max: 34 tokens
    negative (string): min: 14, mean: 21.48, max: 38 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1392_superglue_multirc_answer_verification

  • Dataset: task1392_superglue_multirc_answer_verification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 128, mean: 241.47, max: 256 tokens
    positive (string): min: 127, mean: 241.68, max: 256 tokens
    negative (string): min: 136, mean: 241.8, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task306_jeopardy_answer_generation_double

  • Dataset: task306_jeopardy_answer_generation_double
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 10, mean: 27.73, max: 47 tokens
    positive (string): min: 10, mean: 27.13, max: 46 tokens
    negative (string): min: 11, mean: 27.69, max: 47 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1286_openbookqa_question_answering

  • Dataset: task1286_openbookqa_question_answering
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 22, mean: 39.38, max: 85 tokens
    positive (string): min: 23, mean: 38.71, max: 96 tokens
    negative (string): min: 22, mean: 38.22, max: 89 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task159_check_frequency_of_words_in_sentence_pair

  • Dataset: task159_check_frequency_of_words_in_sentence_pair
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 44, mean: 50.34, max: 67 tokens
    positive (string): min: 44, mean: 50.29, max: 67 tokens
    negative (string): min: 44, mean: 50.51, max: 66 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task151_tomqa_find_location_easy_clean

  • Dataset: task151_tomqa_find_location_easy_clean
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 37, mean: 50.63, max: 79 tokens
    positive (string): min: 37, mean: 50.35, max: 74 tokens
    negative (string): min: 37, mean: 50.53, max: 74 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task323_jigsaw_classification_sexually_explicit

  • Dataset: task323_jigsaw_classification_sexually_explicit
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 6, mean: 66.74, max: 248 tokens
    positive (string): min: 5, mean: 77.15, max: 248 tokens
    negative (string): min: 6, mean: 75.88, max: 251 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task037_qasc_generate_related_fact

  • Dataset: task037_qasc_generate_related_fact
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 13, mean: 22.02, max: 50 tokens
    positive (string): min: 13, mean: 21.97, max: 42 tokens
    negative (string): min: 13, mean: 21.87, max: 40 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task027_drop_answer_type_generation

  • Dataset: task027_drop_answer_type_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 87, mean: 229.25, max: 256 tokens
    positive (string): min: 74, mean: 230.99, max: 256 tokens
    negative (string): min: 71, mean: 232.46, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1596_event2mind_text_generation_2

  • Dataset: task1596_event2mind_text_generation_2
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 6, mean: 9.92, max: 18 tokens
    positive (string): min: 6, mean: 10.0, max: 19 tokens
    negative (string): min: 6, mean: 10.05, max: 18 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task141_odd-man-out_classification_category

  • Dataset: task141_odd-man-out_classification_category
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 16, mean: 18.45, max: 28 tokens
    positive (string): min: 16, mean: 18.39, max: 26 tokens
    negative (string): min: 16, mean: 18.46, max: 25 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task194_duorc_answer_generation

  • Dataset: task194_duorc_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 149, mean: 251.91, max: 256 tokens
    positive (string): min: 147, mean: 252.15, max: 256 tokens
    negative (string): min: 148, mean: 251.93, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task679_hope_edi_english_text_classification

  • Dataset: task679_hope_edi_english_text_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 5, mean: 27.42, max: 199 tokens
    positive (string): min: 4, mean: 26.83, max: 205 tokens
    negative (string): min: 5, mean: 29.66, max: 194 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task246_dream_question_generation

  • Dataset: task246_dream_question_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 17, mean: 80.19, max: 256 tokens
    positive (string): min: 14, mean: 80.98, max: 256 tokens
    negative (string): min: 15, mean: 86.73, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1195_disflqa_disfluent_to_fluent_conversion

  • Dataset: task1195_disflqa_disfluent_to_fluent_conversion
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 9, mean: 19.8, max: 41 tokens
    positive (string): min: 9, mean: 19.78, max: 40 tokens
    negative (string): min: 2, mean: 20.34, max: 44 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task065_timetravel_consistent_sentence_classification

  • Dataset: task065_timetravel_consistent_sentence_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 55, mean: 79.64, max: 117 tokens
    positive (string): min: 51, mean: 79.21, max: 110 tokens
    negative (string): min: 53, mean: 79.78, max: 110 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task351_winomt_classification_gender_identifiability_anti

  • Dataset: task351_winomt_classification_gender_identifiability_anti
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 16, mean: 21.77, max: 30 tokens
    positive (string): min: 16, mean: 21.69, max: 31 tokens
    negative (string): min: 16, mean: 21.8, max: 30 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task580_socialiqa_answer_generation

  • Dataset: task580_socialiqa_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 35, mean: 52.45, max: 107 tokens
    positive (string): min: 35, mean: 51.1, max: 86 tokens
    negative (string): min: 35, mean: 50.97, max: 87 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task583_udeps_eng_coarse_pos_tagging

  • Dataset: task583_udeps_eng_coarse_pos_tagging
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 12, mean: 40.78, max: 185 tokens
    positive (string): min: 12, mean: 40.09, max: 185 tokens
    negative (string): min: 12, mean: 40.55, max: 185 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task202_mnli_contradiction_classification

  • Dataset: task202_mnli_contradiction_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 24, mean: 74.1, max: 190 tokens
    positive (string): min: 28, mean: 76.44, max: 256 tokens
    negative (string): min: 23, mean: 75.12, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task222_rocstories_two_chioce_slotting_classification

  • Dataset: task222_rocstories_two_chioce_slotting_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 48, mean: 73.15, max: 105 tokens
    positive (string): min: 48, mean: 73.22, max: 100 tokens
    negative (string): min: 49, mean: 72.05, max: 102 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task498_scruples_anecdotes_whoiswrong_classification

  • Dataset: task498_scruples_anecdotes_whoiswrong_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 24, mean: 225.53, max: 256 tokens
    positive (string): min: 47, mean: 231.91, max: 256 tokens
    negative (string): min: 47, mean: 230.65, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task067_abductivenli_answer_generation

  • Dataset: task067_abductivenli_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 14, mean: 26.79, max: 40 tokens
    positive (string): min: 14, mean: 26.12, max: 42 tokens
    negative (string): min: 15, mean: 26.33, max: 38 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task616_cola_classification

  • Dataset: task616_cola_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 5, mean: 12.79, max: 33 tokens
    positive (string): min: 5, mean: 12.55, max: 33 tokens
    negative (string): min: 6, mean: 12.25, max: 29 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task286_olid_offense_judgment

  • Dataset: task286_olid_offense_judgment
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 5, mean: 33.05, max: 145 tokens
    positive (string): min: 5, mean: 31.09, max: 171 tokens
    negative (string): min: 5, mean: 30.89, max: 169 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task188_snli_neutral_to_entailment_text_modification

  • Dataset: task188_snli_neutral_to_entailment_text_modification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 18, mean: 31.81, max: 79 tokens
    positive (string): min: 18, mean: 31.16, max: 84 tokens
    negative (string): min: 18, mean: 33.04, max: 84 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task223_quartz_explanation_generation

  • Dataset: task223_quartz_explanation_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 12, mean: 31.45, max: 68 tokens
    positive (string): min: 13, mean: 31.82, max: 68 tokens
    negative (string): min: 13, mean: 29.1, max: 96 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task820_protoqa_answer_generation

  • Dataset: task820_protoqa_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 6, mean: 14.84, max: 29 tokens
    positive (string): min: 7, mean: 14.52, max: 27 tokens
    negative (string): min: 6, mean: 14.23, max: 29 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task196_sentiment140_answer_generation

  • Dataset: task196_sentiment140_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 17, mean: 36.15, max: 72 tokens
    positive (string): min: 17, mean: 32.89, max: 61 tokens
    negative (string): min: 17, mean: 36.14, max: 72 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1678_mathqa_answer_selection

  • Dataset: task1678_mathqa_answer_selection
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 33, mean: 69.95, max: 177 tokens
    positive (string): min: 30, mean: 68.73, max: 146 tokens
    negative (string): min: 33, mean: 69.24, max: 160 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task349_squad2.0_answerable_unanswerable_question_classification

  • Dataset: task349_squad2.0_answerable_unanswerable_question_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 53, mean: 175.57, max: 256 tokens
    positive (string): min: 57, mean: 175.84, max: 256 tokens
    negative (string): min: 53, mean: 175.49, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task154_tomqa_find_location_hard_noise

  • Dataset: task154_tomqa_find_location_hard_noise
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 129, mean: 175.63, max: 253 tokens
    positive (string): min: 126, mean: 175.85, max: 249 tokens
    negative (string): min: 128, mean: 177.2, max: 254 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task333_hateeval_classification_hate_en

  • Dataset: task333_hateeval_classification_hate_en
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 8, mean: 38.62, max: 117 tokens
    positive (string): min: 7, mean: 37.48, max: 109 tokens
    negative (string): min: 7, mean: 36.83, max: 113 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task235_iirc_question_from_subtext_answer_generation

  • Dataset: task235_iirc_question_from_subtext_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 14, mean: 52.54, max: 256 tokens
    positive (string): min: 12, mean: 50.77, max: 256 tokens
    negative (string): min: 12, mean: 55.44, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1554_scitail_classification

  • Dataset: task1554_scitail_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 7, mean: 16.72, max: 38 tokens
    positive (string): min: 7, mean: 25.6, max: 68 tokens
    negative (string): min: 7, mean: 24.39, max: 59 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task210_logic2text_structured_text_generation

  • Dataset: task210_logic2text_structured_text_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 13, mean: 31.83, max: 101 tokens
    positive (string): min: 13, mean: 30.89, max: 94 tokens
    negative (string): min: 12, mean: 32.76, max: 89 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task035_winogrande_question_modification_person

  • Dataset: task035_winogrande_question_modification_person
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 31, mean: 36.24, max: 50 tokens
    positive (string): min: 31, mean: 35.8, max: 55 tokens
    negative (string): min: 31, mean: 35.46, max: 48 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task230_iirc_passage_classification

  • Dataset: task230_iirc_passage_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 256, mean: 256.0, max: 256 tokens
    positive (string): min: 256, mean: 256.0, max: 256 tokens
    negative (string): min: 256, mean: 256.0, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1356_xlsum_title_generation

  • Dataset: task1356_xlsum_title_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    anchor   (string): min: 59, mean: 239.39, max: 256 tokens
    positive (string): min: 58, mean: 241.03, max: 256 tokens
    negative (string): min: 64, mean: 248.12, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1726_mathqa_correct_answer_generation

  • Dataset: task1726_mathqa_correct_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 10 / mean 43.95 / max 156 tokens
    • positive (string): min 12 / mean 42.44 / max 129 tokens
    • negative (string): min 11 / mean 42.8 / max 133 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task302_record_classification

  • Dataset: task302_record_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 194 / mean 253.52 / max 256 tokens
    • positive (string): min 198 / mean 252.98 / max 256 tokens
    • negative (string): min 195 / mean 252.9 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task380_boolq_yes_no_question

  • Dataset: task380_boolq_yes_no_question
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 26 / mean 133.18 / max 256 tokens
    • positive (string): min 26 / mean 138.06 / max 256 tokens
    • negative (string): min 27 / mean 137.06 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task212_logic2text_classification

  • Dataset: task212_logic2text_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 14 / mean 33.56 / max 146 tokens
    • positive (string): min 14 / mean 32.24 / max 146 tokens
    • negative (string): min 14 / mean 33.17 / max 127 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task748_glucose_reverse_cause_event_detection

  • Dataset: task748_glucose_reverse_cause_event_detection
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 35 / mean 68.0 / max 105 tokens
    • positive (string): min 38 / mean 67.24 / max 106 tokens
    • negative (string): min 39 / mean 68.82 / max 105 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task834_mathdataset_classification

  • Dataset: task834_mathdataset_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 6 / mean 27.89 / max 83 tokens
    • positive (string): min 6 / mean 28.2 / max 83 tokens
    • negative (string): min 5 / mean 27.11 / max 93 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task350_winomt_classification_gender_identifiability_pro

  • Dataset: task350_winomt_classification_gender_identifiability_pro
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 16 / mean 21.8 / max 30 tokens
    • positive (string): min 16 / mean 21.62 / max 30 tokens
    • negative (string): min 16 / mean 21.81 / max 30 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task191_hotpotqa_question_generation

  • Dataset: task191_hotpotqa_question_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 198 / mean 255.91 / max 256 tokens
    • positive (string): min 238 / mean 255.94 / max 256 tokens
    • negative (string): min 255 / mean 256.0 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task236_iirc_question_from_passage_answer_generation

  • Dataset: task236_iirc_question_from_passage_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 135 / mean 238.16 / max 256 tokens
    • positive (string): min 155 / mean 237.5 / max 256 tokens
    • negative (string): min 154 / mean 239.56 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task217_rocstories_ordering_answer_generation

  • Dataset: task217_rocstories_ordering_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 45 / mean 72.48 / max 107 tokens
    • positive (string): min 48 / mean 72.44 / max 107 tokens
    • negative (string): min 48 / mean 71.11 / max 105 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task568_circa_question_generation

  • Dataset: task568_circa_question_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 4 / mean 9.65 / max 25 tokens
    • positive (string): min 4 / mean 9.52 / max 20 tokens
    • negative (string): min 4 / mean 8.98 / max 20 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task614_glucose_cause_event_detection

  • Dataset: task614_glucose_cause_event_detection
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 39 / mean 67.94 / max 102 tokens
    • positive (string): min 39 / mean 67.3 / max 106 tokens
    • negative (string): min 38 / mean 68.61 / max 103 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task361_spolin_yesand_prompt_response_classification

  • Dataset: task361_spolin_yesand_prompt_response_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 18 / mean 46.89 / max 137 tokens
    • positive (string): min 17 / mean 46.11 / max 119 tokens
    • negative (string): min 17 / mean 47.3 / max 128 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task421_persent_sentence_sentiment_classification

  • Dataset: task421_persent_sentence_sentiment_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 22 / mean 67.26 / max 256 tokens
    • positive (string): min 22 / mean 70.21 / max 256 tokens
    • negative (string): min 19 / mean 72.11 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task203_mnli_sentence_generation

  • Dataset: task203_mnli_sentence_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 14 / mean 38.83 / max 175 tokens
    • positive (string): min 14 / mean 35.68 / max 175 tokens
    • negative (string): min 13 / mean 33.77 / max 170 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task420_persent_document_sentiment_classification

  • Dataset: task420_persent_document_sentiment_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 22 / mean 222.98 / max 256 tokens
    • positive (string): min 22 / mean 233.17 / max 256 tokens
    • negative (string): min 22 / mean 228.48 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task153_tomqa_find_location_hard_clean

  • Dataset: task153_tomqa_find_location_hard_clean
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 39 / mean 161.63 / max 256 tokens
    • positive (string): min 39 / mean 160.81 / max 256 tokens
    • negative (string): min 39 / mean 164.26 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task346_hybridqa_classification

  • Dataset: task346_hybridqa_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 18 / mean 32.85 / max 68 tokens
    • positive (string): min 18 / mean 32.03 / max 63 tokens
    • negative (string): min 19 / mean 31.88 / max 75 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1211_atomic_classification_hassubevent

  • Dataset: task1211_atomic_classification_hassubevent
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 11 / mean 16.25 / max 31 tokens
    • positive (string): min 11 / mean 16.07 / max 29 tokens
    • negative (string): min 11 / mean 16.8 / max 29 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task360_spolin_yesand_response_generation

  • Dataset: task360_spolin_yesand_response_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 7 / mean 22.68 / max 89 tokens
    • positive (string): min 6 / mean 21.02 / max 92 tokens
    • negative (string): min 7 / mean 20.67 / max 67 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task510_reddit_tifu_title_summarization

  • Dataset: task510_reddit_tifu_title_summarization
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 9 / mean 216.21 / max 256 tokens
    • positive (string): min 20 / mean 218.0 / max 256 tokens
    • negative (string): min 10 / mean 221.49 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task511_reddit_tifu_long_text_summarization

  • Dataset: task511_reddit_tifu_long_text_summarization
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 29 / mean 239.99 / max 256 tokens
    • positive (string): min 76 / mean 239.55 / max 256 tokens
    • negative (string): min 43 / mean 244.85 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task345_hybridqa_answer_generation

  • Dataset: task345_hybridqa_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 9 / mean 22.24 / max 50 tokens
    • positive (string): min 10 / mean 21.66 / max 70 tokens
    • negative (string): min 8 / mean 20.97 / max 47 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task270_csrg_counterfactual_context_generation

  • Dataset: task270_csrg_counterfactual_context_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 63 / mean 100.12 / max 158 tokens
    • positive (string): min 63 / mean 98.52 / max 142 tokens
    • negative (string): min 62 / mean 100.4 / max 141 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task307_jeopardy_answer_generation_final

  • Dataset: task307_jeopardy_answer_generation_final
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 15 / mean 29.63 / max 46 tokens
    • positive (string): min 15 / mean 29.27 / max 53 tokens
    • negative (string): min 15 / mean 29.25 / max 43 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task001_quoref_question_generation

  • Dataset: task001_quoref_question_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 201 / mean 255.1 / max 256 tokens
    • positive (string): min 99 / mean 254.46 / max 256 tokens
    • negative (string): min 173 / mean 255.11 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task089_swap_words_verification

  • Dataset: task089_swap_words_verification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 9 / mean 12.91 / max 28 tokens
    • positive (string): min 9 / mean 12.67 / max 24 tokens
    • negative (string): min 9 / mean 12.26 / max 22 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1196_atomic_classification_oeffect

  • Dataset: task1196_atomic_classification_oeffect
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 14 / mean 18.77 / max 41 tokens
    • positive (string): min 14 / mean 18.57 / max 30 tokens
    • negative (string): min 14 / mean 18.5 / max 29 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task080_piqa_answer_generation

  • Dataset: task080_piqa_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 3 / mean 10.89 / max 33 tokens
    • positive (string): min 3 / mean 10.71 / max 24 tokens
    • negative (string): min 3 / mean 10.16 / max 26 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1598_nyc_long_text_generation

  • Dataset: task1598_nyc_long_text_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 17 / mean 35.48 / max 56 tokens
    • positive (string): min 17 / mean 35.6 / max 56 tokens
    • negative (string): min 20 / mean 36.56 / max 55 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task240_tweetqa_question_generation

  • Dataset: task240_tweetqa_question_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 27 / mean 51.19 / max 94 tokens
    • positive (string): min 25 / mean 50.8 / max 92 tokens
    • negative (string): min 20 / mean 51.63 / max 95 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task615_moviesqa_answer_generation

  • Dataset: task615_moviesqa_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 6 / mean 11.44 / max 23 tokens
    • positive (string): min 7 / mean 11.45 / max 19 tokens
    • negative (string): min 5 / mean 11.41 / max 22 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1347_glue_sts-b_similarity_classification

  • Dataset: task1347_glue_sts-b_similarity_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 17 / mean 31.16 / max 88 tokens
    • positive (string): min 16 / mean 31.12 / max 92 tokens
    • negative (string): min 16 / mean 31.04 / max 92 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task114_is_the_given_word_longest

  • Dataset: task114_is_the_given_word_longest
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 25 / mean 28.95 / max 68 tokens
    • positive (string): min 25 / mean 28.46 / max 48 tokens
    • negative (string): min 25 / mean 28.75 / max 47 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task292_storycommonsense_character_text_generation

  • Dataset: task292_storycommonsense_character_text_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 43 / mean 68.1 / max 98 tokens
    • positive (string): min 46 / mean 67.4 / max 104 tokens
    • negative (string): min 43 / mean 69.04 / max 96 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task115_help_advice_classification

  • Dataset: task115_help_advice_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 2 / mean 19.9 / max 91 tokens
    • positive (string): min 3 / mean 18.14 / max 92 tokens
    • negative (string): min 4 / mean 19.28 / max 137 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task431_senteval_object_count

  • Dataset: task431_senteval_object_count
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 7 / mean 16.75 / max 37 tokens
    • positive (string): min 7 / mean 15.14 / max 36 tokens
    • negative (string): min 7 / mean 15.78 / max 35 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1360_numer_sense_multiple_choice_qa_generation

  • Dataset: task1360_numer_sense_multiple_choice_qa_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 32 / mean 40.58 / max 54 tokens
    • positive (string): min 32 / mean 40.28 / max 53 tokens
    • negative (string): min 32 / mean 40.2 / max 60 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task177_para-nmt_paraphrasing

  • Dataset: task177_para-nmt_paraphrasing
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 8 / mean 19.73 / max 59 tokens
    • positive (string): min 9 / mean 18.88 / max 58 tokens
    • negative (string): min 9 / mean 18.29 / max 36 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task132_dais_text_modification

  • Dataset: task132_dais_text_modification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 6 / mean 9.3 / max 15 tokens
    • positive (string): min 6 / mean 9.1 / max 15 tokens
    • negative (string): min 6 / mean 10.14 / max 15 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task269_csrg_counterfactual_story_generation

  • Dataset: task269_csrg_counterfactual_story_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 49 / mean 79.75 / max 111 tokens
    • positive (string): min 53 / mean 79.41 / max 116 tokens
    • negative (string): min 48 / mean 79.46 / max 114 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task233_iirc_link_exists_classification

  • Dataset: task233_iirc_link_exists_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 145 / mean 235.19 / max 256 tokens
    • positive (string): min 142 / mean 233.32 / max 256 tokens
    • negative (string): min 151 / mean 234.78 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task161_count_words_containing_letter

  • Dataset: task161_count_words_containing_letter
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 27 / mean 31.0 / max 53 tokens
    • positive (string): min 27 / mean 30.83 / max 61 tokens
    • negative (string): min 27 / mean 30.52 / max 42 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1205_atomic_classification_isafter

  • Dataset: task1205_atomic_classification_isafter
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 14 / mean 20.94 / max 37 tokens
    • positive (string): min 14 / mean 20.64 / max 35 tokens
    • negative (string): min 14 / mean 21.51 / max 37 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task571_recipe_nlg_ner_generation

  • Dataset: task571_recipe_nlg_ner_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 5 / mean 117.62 / max 256 tokens
    • positive (string): min 7 / mean 117.51 / max 256 tokens
    • negative (string): min 6 / mean 109.25 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1292_yelp_review_full_text_categorization

  • Dataset: task1292_yelp_review_full_text_categorization
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 4 / mean 135.37 / max 256 tokens
    • positive (string): min 7 / mean 144.75 / max 256 tokens
    • negative (string): min 3 / mean 145.27 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task428_senteval_inversion

  • Dataset: task428_senteval_inversion
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 7 / mean 16.59 / max 32 tokens
    • positive (string): min 7 / mean 14.63 / max 31 tokens
    • negative (string): min 7 / mean 15.31 / max 34 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task311_race_question_generation

  • Dataset: task311_race_question_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 115 / mean 254.55 / max 256 tokens
    • positive (string): min 137 / mean 254.56 / max 256 tokens
    • negative (string): min 171 / mean 255.54 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task429_senteval_tense

  • Dataset: task429_senteval_tense
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 7 / mean 15.9 / max 37 tokens
    • positive (string): min 6 / mean 14.12 / max 33 tokens
    • negative (string): min 7 / mean 15.33 / max 36 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task403_creak_commonsense_inference

  • Dataset: task403_creak_commonsense_inference
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 13 / mean 30.04 / max 104 tokens
    • positive (string): min 13 / mean 29.3 / max 108 tokens
    • negative (string): min 13 / mean 29.47 / max 122 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task929_products_reviews_classification

  • Dataset: task929_products_reviews_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 5 / mean 69.18 / max 126 tokens
    • positive (string): min 6 / mean 70.54 / max 123 tokens
    • negative (string): min 6 / mean 70.28 / max 123 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task582_naturalquestion_answer_generation

  • Dataset: task582_naturalquestion_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 10 / mean 11.69 / max 25 tokens
    • positive (string): min 10 / mean 11.64 / max 24 tokens
    • negative (string): min 10 / mean 11.72 / max 25 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task237_iirc_answer_from_subtext_answer_generation

  • Dataset: task237_iirc_answer_from_subtext_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 22 / mean 66.47 / max 256 tokens
    • positive (string): min 25 / mean 64.67 / max 256 tokens
    • negative (string): min 23 / mean 61.4 / max 161 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task050_multirc_answerability

  • Dataset: task050_multirc_answerability
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 15 / mean 32.35 / max 112 tokens
    • positive (string): min 14 / mean 31.51 / max 83 tokens
    • negative (string): min 15 / mean 32.03 / max 159 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task184_break_generate_question

  • Dataset: task184_break_generate_question
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 13 / mean 39.76 / max 147 tokens
    • positive (string): min 13 / mean 38.97 / max 149 tokens
    • negative (string): min 13 / mean 39.62 / max 148 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task669_ambigqa_answer_generation

  • Dataset: task669_ambigqa_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 10 / mean 12.91 / max 23 tokens
    • positive (string): min 10 / mean 12.88 / max 27 tokens
    • negative (string): min 11 / mean 12.72 / max 22 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task169_strategyqa_sentence_generation

  • Dataset: task169_strategyqa_sentence_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 19 / mean 35.3 / max 65 tokens
    • positive (string): min 22 / mean 34.36 / max 60 tokens
    • negative (string): min 19 / mean 33.36 / max 65 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task500_scruples_anecdotes_title_generation

  • Dataset: task500_scruples_anecdotes_title_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 14 / mean 224.51 / max 256 tokens
    • positive (string): min 31 / mean 232.39 / max 256 tokens
    • negative (string): min 27 / mean 234.4 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task241_tweetqa_classification

  • Dataset: task241_tweetqa_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 31 / mean 61.75 / max 92 tokens
    • positive (string): min 36 / mean 61.98 / max 106 tokens
    • negative (string): min 31 / mean 61.67 / max 92 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1345_glue_qqp_question_paraprashing

  • Dataset: task1345_glue_qqp_question_paraprashing
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 6 / mean 16.62 / max 60 tokens
    • positive (string): min 6 / mean 15.77 / max 69 tokens
    • negative (string): min 6 / mean 16.61 / max 51 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task218_rocstories_swap_order_answer_generation

  • Dataset: task218_rocstories_swap_order_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 48 / mean 72.42 / max 118 tokens
    • positive (string): min 48 / mean 72.62 / max 102 tokens
    • negative (string): min 47 / mean 72.14 / max 106 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task613_politifact_text_generation

  • Dataset: task613_politifact_text_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 4 / mean 24.71 / max 75 tokens
    • positive (string): min 7 / mean 23.58 / max 56 tokens
    • negative (string): min 5 / mean 22.87 / max 61 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1167_penn_treebank_coarse_pos_tagging

  • Dataset: task1167_penn_treebank_coarse_pos_tagging
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 16 / mean 53.81 / max 200 tokens
    • positive (string): min 16 / mean 53.49 / max 220 tokens
    • negative (string): min 16 / mean 54.95 / max 202 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1422_mathqa_physics

  • Dataset: task1422_mathqa_physics
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 34 / mean 72.14 / max 164 tokens
    • positive (string): min 38 / mean 71.53 / max 157 tokens
    • negative (string): min 39 / mean 72.08 / max 155 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task247_dream_answer_generation

  • Dataset: task247_dream_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 38 / mean 159.4 / max 256 tokens
    • positive (string): min 39 / mean 157.79 / max 256 tokens
    • negative (string): min 41 / mean 167.32 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task199_mnli_classification

  • Dataset: task199_mnli_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 13 / mean 43.33 / max 127 tokens
    • positive (string): min 11 / mean 44.68 / max 149 tokens
    • negative (string): min 11 / mean 44.31 / max 113 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task164_mcscript_question_answering_text

  • Dataset: task164_mcscript_question_answering_text
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 150 / mean 200.67 / max 256 tokens
    • positive (string): min 150 / mean 200.46 / max 256 tokens
    • negative (string): min 142 / mean 200.89 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1541_agnews_classification

  • Dataset: task1541_agnews_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 21 / mean 53.39 / max 256 tokens
    • positive (string): min 18 / mean 52.89 / max 256 tokens
    • negative (string): min 18 / mean 53.84 / max 161 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task516_senteval_conjoints_inversion

  • Dataset: task516_senteval_conjoints_inversion
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 8 / mean 20.31 / max 34 tokens
    • positive (string): min 8 / mean 18.97 / max 34 tokens
    • negative (string): min 8 / mean 18.91 / max 34 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task294_storycommonsense_motiv_text_generation

  • Dataset: task294_storycommonsense_motiv_text_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 14 / mean 40.09 / max 86 tokens
    • positive (string): min 14 / mean 40.44 / max 86 tokens
    • negative (string): min 14 / mean 39.58 / max 86 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task501_scruples_anecdotes_post_type_verification

  • Dataset: task501_scruples_anecdotes_post_type_verification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 18 / mean 231.44 / max 256 tokens
    • positive (string): min 12 / mean 235.23 / max 256 tokens
    • negative (string): min 18 / mean 234.84 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task213_rocstories_correct_ending_classification

  • Dataset: task213_rocstories_correct_ending_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 62 / mean 86.03 / max 125 tokens
    • positive (string): min 60 / mean 85.66 / max 131 tokens
    • negative (string): min 59 / mean 86.01 / max 131 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task821_protoqa_question_generation

  • Dataset: task821_protoqa_question_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 5 / mean 14.61 / max 61 tokens
    • positive (string): min 5 / mean 14.97 / max 35 tokens
    • negative (string): min 5 / mean 13.79 / max 93 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task493_review_polarity_classification

  • Dataset: task493_review_polarity_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 18 / mean 99.85 / max 256 tokens
    • positive (string): min 19 / mean 104.97 / max 256 tokens
    • negative (string): min 14 / mean 112.97 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task308_jeopardy_answer_generation_all

  • Dataset: task308_jeopardy_answer_generation_all
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 12 / mean 27.97 / max 50 tokens
    • positive (string): min 10 / mean 27.0 / max 44 tokens
    • negative (string): min 9 / mean 27.52 / max 48 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1595_event2mind_text_generation_1

  • Dataset: task1595_event2mind_text_generation_1
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 6 / mean 9.9 / max 18 tokens
    • positive (string): min 6 / mean 9.96 / max 20 tokens
    • negative (string): min 6 / mean 10.03 / max 20 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task040_qasc_question_generation

  • Dataset: task040_qasc_question_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 8 / mean 15.03 / max 29 tokens
    • positive (string): min 7 / mean 15.04 / max 30 tokens
    • negative (string): min 8 / mean 13.79 / max 32 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task231_iirc_link_classification

  • Dataset: task231_iirc_link_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 179 / mean 246.14 / max 256 tokens
    • positive (string): min 170 / mean 246.33 / max 256 tokens
    • negative (string): min 161 / mean 246.99 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1727_wiqa_what_is_the_effect

  • Dataset: task1727_wiqa_what_is_the_effect
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 44 / mean 95.04 / max 183 tokens
    • positive (string): min 44 / mean 95.1 / max 185 tokens
    • negative (string): min 43 / mean 95.37 / max 183 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task578_curiosity_dialogs_answer_generation

  • Dataset: task578_curiosity_dialogs_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 10 / mean 230.36 / max 256 tokens
    • positive (string): min 118 / mean 235.58 / max 256 tokens
    • negative (string): min 12 / mean 229.92 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task310_race_classification

  • Dataset: task310_race_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 101 / mean 254.92 / max 256 tokens
    • positive (string): min 218 / mean 255.81 / max 256 tokens
    • negative (string): min 101 / mean 254.92 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task309_race_answer_generation

  • Dataset: task309_race_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 75 / mean 254.76 / max 256 tokens
    • positive (string): min 204 / mean 255.48 / max 256 tokens
    • negative (string): min 75 / mean 255.23 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task379_agnews_topic_classification

  • Dataset: task379_agnews_topic_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 20 / mean 54.44 / max 193 tokens
    • positive (string): min 20 / mean 54.58 / max 175 tokens
    • negative (string): min 21 / mean 55.12 / max 187 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task030_winogrande_full_person

  • Dataset: task030_winogrande_full_person
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 7 / mean 7.63 / max 12 tokens
    • positive (string): min 7 / mean 7.52 / max 12 tokens
    • negative (string): min 7 / mean 7.39 / max 11 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1540_parsed_pdfs_summarization

  • Dataset: task1540_parsed_pdfs_summarization
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 3 / mean 188.0 / max 256 tokens
    • positive (string): min 46 / mean 189.34 / max 256 tokens
    • negative (string): min 3 / mean 192.03 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task039_qasc_find_overlapping_words

  • Dataset: task039_qasc_find_overlapping_words
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 16 / mean 30.57 / max 55 tokens
    • positive (string): min 16 / mean 30.03 / max 57 tokens
    • negative (string): min 16 / mean 30.68 / max 60 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1206_atomic_classification_isbefore

  • Dataset: task1206_atomic_classification_isbefore
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 14 / mean 21.27 / max 40 tokens
    • positive (string): min 14 / mean 20.85 / max 31 tokens
    • negative (string): min 14 / mean 21.37 / max 31 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task157_count_vowels_and_consonants

  • Dataset: task157_count_vowels_and_consonants
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 24 / mean 27.98 / max 41 tokens
    • positive (string): min 24 / mean 27.87 / max 41 tokens
    • negative (string): min 24 / mean 28.32 / max 39 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task339_record_answer_generation

  • Dataset: task339_record_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 171 / mean 234.55 / max 256 tokens
    • positive (string): min 171 / mean 233.87 / max 256 tokens
    • negative (string): min 171 / mean 232.63 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task453_swag_answer_generation

  • Dataset: task453_swag_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 9, mean 18.38, max 60 tokens
    • positive (string): min 9, mean 18.13, max 63 tokens
    • negative (string): min 9, mean 17.47, max 55 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task848_pubmedqa_classification

  • Dataset: task848_pubmedqa_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 21, mean 249.24, max 256 tokens
    • positive (string): min 21, mean 249.85, max 256 tokens
    • negative (string): min 84, mean 251.72, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task673_google_wellformed_query_classification

  • Dataset: task673_google_wellformed_query_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 6, mean 11.6, max 27 tokens
    • positive (string): min 6, mean 11.2, max 24 tokens
    • negative (string): min 6, mean 11.37, max 22 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task676_ollie_relationship_answer_generation

  • Dataset: task676_ollie_relationship_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 29, mean 50.98, max 113 tokens
    • positive (string): min 29, mean 48.82, max 134 tokens
    • negative (string): min 30, mean 51.69, max 113 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task268_casehold_legal_answer_generation

  • Dataset: task268_casehold_legal_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 235, mean 255.94, max 256 tokens
    • positive (string): min 156, mean 255.5, max 256 tokens
    • negative (string): min 226, mean 255.95, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task844_financial_phrasebank_classification

  • Dataset: task844_financial_phrasebank_classification
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 14, mean 40.06, max 86 tokens
    • positive (string): min 13, mean 38.31, max 78 tokens
    • negative (string): min 15, mean 38.91, max 86 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task330_gap_answer_generation

  • Dataset: task330_gap_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 26, mean 107.15, max 256 tokens
    • positive (string): min 44, mean 108.5, max 256 tokens
    • negative (string): min 45, mean 111.29, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task595_mocha_answer_generation

  • Dataset: task595_mocha_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 44, mean 94.29, max 178 tokens
    • positive (string): min 21, mean 95.79, max 256 tokens
    • negative (string): min 19, mean 117.82, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1285_kpa_keypoint_matching

  • Dataset: task1285_kpa_keypoint_matching
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 30, mean 52.19, max 92 tokens
    • positive (string): min 29, mean 50.09, max 84 tokens
    • negative (string): min 31, mean 53.0, max 88 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task234_iirc_passage_line_answer_generation

  • Dataset: task234_iirc_passage_line_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 143, mean 234.48, max 256 tokens
    • positive (string): min 155, mean 235.32, max 256 tokens
    • negative (string): min 146, mean 236.21, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task494_review_polarity_answer_generation

  • Dataset: task494_review_polarity_answer_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 3, mean 107.59, max 256 tokens
    • positive (string): min 23, mean 114.18, max 256 tokens
    • negative (string): min 20, mean 114.95, max 249 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task670_ambigqa_question_generation

  • Dataset: task670_ambigqa_question_generation
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 11, mean 12.7, max 26 tokens
    • positive (string): min 11, mean 12.46, max 23 tokens
    • negative (string): min 11, mean 12.24, max 18 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task289_gigaword_summarization

  • Dataset: task289_gigaword_summarization
  • Size: 634 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 634 samples:
    • anchor (string): min 25, mean 51.28, max 87 tokens
    • positive (string): min 27, mean 51.71, max 87 tokens
    • negative (string): min 25, mean 51.14, max 87 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

npr

  • Dataset: npr
  • Size: 24,774 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 4, mean 12.18, max 34 tokens
    • positive (string): min 12, mean 146.68, max 256 tokens
    • negative (string): min 17, mean 109.65, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

nli

  • Dataset: nli
  • Size: 49,548 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 5, mean 21.0, max 229 tokens
    • positive (string): min 4, mean 11.74, max 38 tokens
    • negative (string): min 5, mean 11.98, max 45 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

SimpleWiki

  • Dataset: SimpleWiki
  • Size: 5,006 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 8, mean 29.27, max 256 tokens
    • positive (string): min 8, mean 33.55, max 256 tokens
    • negative (string): min 8, mean 55.34, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

amazon_review_2018

  • Dataset: amazon_review_2018
  • Size: 99,032 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 4, mean 11.29, max 31 tokens
    • positive (string): min 11, mean 87.93, max 256 tokens
    • negative (string): min 11, mean 69.37, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

ccnews_title_text

  • Dataset: ccnews_title_text
  • Size: 24,774 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 6, mean 15.71, max 57 tokens
    • positive (string): min 22, mean 209.36, max 256 tokens
    • negative (string): min 21, mean 197.52, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

agnews

  • Dataset: agnews
  • Size: 44,606 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 5, mean 11.84, max 46 tokens
    • positive (string): min 11, mean 40.9, max 256 tokens
    • negative (string): min 13, mean 44.47, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

xsum

  • Dataset: xsum
  • Size: 9,948 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 8, mean 27.96, max 86 tokens
    • positive (string): min 36, mean 227.43, max 256 tokens
    • negative (string): min 18, mean 229.78, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

msmarco

  • Dataset: msmarco
  • Size: 173,290 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 4, mean 9.09, max 39 tokens
    • positive (string): min 19, mean 82.25, max 256 tokens
    • negative (string): min 18, mean 79.69, max 220 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

yahoo_answers_title_answer

  • Dataset: yahoo_answers_title_answer
  • Size: 24,774 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 6, mean 16.8, max 69 tokens
    • positive (string): min 6, mean 78.53, max 256 tokens
    • negative (string): min 10, mean 87.35, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

squad_pairs

  • Dataset: squad_pairs
  • Size: 24,774 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 4, mean 14.48, max 33 tokens
    • positive (string): min 28, mean 152.39, max 256 tokens
    • negative (string): min 28, mean 160.54, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

wow

  • Dataset: wow
  • Size: 29,716 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 5, mean 90.07, max 256 tokens
    • positive (string): min 29, mean 111.81, max 150 tokens
    • negative (string): min 92, mean 113.15, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

mteb-amazon_counterfactual-avs_triplets

  • Dataset: mteb-amazon_counterfactual-avs_triplets
  • Size: 3,991 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 12, mean 27.26, max 137 tokens
    • positive (string): min 12, mean 26.57, max 96 tokens
    • negative (string): min 12, mean 26.88, max 96 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

mteb-amazon_massive_intent-avs_triplets

  • Dataset: mteb-amazon_massive_intent-avs_triplets
  • Size: 11,405 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 3, mean 9.49, max 30 tokens
    • positive (string): min 3, mean 9.19, max 32 tokens
    • negative (string): min 3, mean 9.49, max 25 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

mteb-amazon_massive_scenario-avs_triplets

  • Dataset: mteb-amazon_massive_scenario-avs_triplets
  • Size: 11,405 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 3, mean 9.59, max 30 tokens
    • positive (string): min 3, mean 8.97, max 30 tokens
    • negative (string): min 3, mean 9.69, max 29 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

mteb-amazon_reviews_multi-avs_triplets

  • Dataset: mteb-amazon_reviews_multi-avs_triplets
  • Size: 198,000 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 6, mean 49.83, max 256 tokens
    • positive (string): min 7, mean 51.32, max 256 tokens
    • negative (string): min 7, mean 49.66, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

mteb-banking77-avs_triplets

  • Dataset: mteb-banking77-avs_triplets
  • Size: 9,947 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 4, mean 16.19, max 68 tokens
    • positive (string): min 5, mean 15.76, max 79 tokens
    • negative (string): min 5, mean 15.78, max 87 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

mteb-emotion-avs_triplets

  • Dataset: mteb-emotion-avs_triplets
  • Size: 15,840 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 4, mean 21.76, max 65 tokens
    • positive (string): min 5, mean 17.13, max 62 tokens
    • negative (string): min 5, mean 21.95, max 65 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

mteb-imdb-avs_triplets

  • Dataset: mteb-imdb-avs_triplets
  • Size: 24,647 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 34, mean 207.65, max 256 tokens
    • positive (string): min 57, mean 222.57, max 256 tokens
    • negative (string): min 15, mean 207.98, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

mteb-mtop_domain-avs_triplets

  • Dataset: mteb-mtop_domain-avs_triplets
  • Size: 15,523 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 4, mean 10.29, max 31 tokens
    • positive (string): min 4, mean 9.7, max 28 tokens
    • negative (string): min 4, mean 10.01, max 28 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

mteb-mtop_intent-avs_triplets

  • Dataset: mteb-mtop_intent-avs_triplets
  • Size: 15,523 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 4, mean 10.11, max 33 tokens
    • positive (string): min 4, mean 9.64, max 34 tokens
    • negative (string): min 4, mean 10.13, max 33 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

mteb-toxic_conversations_50k-avs_triplets

  • Dataset: mteb-toxic_conversations_50k-avs_triplets
  • Size: 49,421 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 3, mean 68.39, max 256 tokens
    • positive (string): min 3, mean 91.3, max 256 tokens
    • negative (string): min 4, mean 70.1, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

mteb-tweet_sentiment_extraction-avs_triplets

  • Dataset: mteb-tweet_sentiment_extraction-avs_triplets
  • Size: 27,245 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 3, mean 20.32, max 49 tokens
    • positive (string): min 3, mean 20.24, max 51 tokens
    • negative (string): min 3, mean 20.98, max 51 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

covid-bing-query-gpt4-avs_triplets

  • Dataset: covid-bing-query-gpt4-avs_triplets
  • Size: 4,942 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 6, mean 15.22, max 49 tokens
    • positive (string): min 14, mean 37.46, max 167 tokens
    • negative (string): min 18, mean 37.77, max 128 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

Evaluation Dataset

Unnamed Dataset

  • Size: 18,269 evaluation samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 5, mean 15.5, max 56 tokens
    • positive (string): min 6, mean 143.45, max 256 tokens
    • negative (string): min 4, mean 145.01, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
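
Every training and evaluation dataset above uses this same loss configuration. A minimal sketch of constructing it with the sentence-transformers v3 API (the base checkpoint here stands in for whichever model is being trained):

    from sentence_transformers import SentenceTransformer, util
    from sentence_transformers.losses import MultipleNegativesRankingLoss

    model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

    # scale=20.0 and cosine similarity match the parameters reported above.
    loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)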
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • learning_rate: 2e-05
  • num_train_epochs: 10
  • warmup_ratio: 0.1
  • fp16: True
  • gradient_checkpointing: True
  • batch_sampler: no_duplicates
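
These non-default values map directly onto SentenceTransformerTrainingArguments. A minimal sketch, assuming the sentence-transformers v3 trainer API (output_dir is an arbitrary local path, not taken from this card):

    from sentence_transformers.training_args import (
        BatchSamplers,
        MultiDatasetBatchSamplers,
        SentenceTransformerTrainingArguments,
    )

    args = SentenceTransformerTrainingArguments(
        output_dir="output",  # assumption: any local directory
        eval_strategy="steps",
        per_device_train_batch_size=64,
        per_device_eval_batch_size=64,
        learning_rate=2e-5,
        num_train_epochs=10,
        warmup_ratio=0.1,
        fp16=True,
        gradient_checkpointing=True,
        # Avoid duplicate texts within a batch, as required by the in-batch
        # negatives of MultipleNegativesRankingLoss.
        batch_sampler=BatchSamplers.NO_DUPLICATES,
        # "proportional" is listed under All Hyperparameters below.
        multi_dataset_batch_sampler=MultiDatasetBatchSamplers.PROPORTIONAL,
    )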

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: True
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss Validation Loss medi-mteb-dev_cosine_accuracy
0 0 - - 0.8503
0.0175 500 1.9411 1.9039 0.8588
0.0351 1000 1.5495 0.9698 0.8698
0.0526 1500 1.3527 0.7684 0.8753
0.0701 2000 1.1995 0.7102 0.8777
0.0877 2500 1.1782 0.6829 0.8793
0.1052 3000 1.1662 0.6633 0.8830
0.1227 3500 1.139 0.6510 0.8844
0.1403 4000 1.1389 0.6429 0.8851
0.1578 4500 1.1381 0.6273 0.8863
0.1753 5000 1.0616 0.6225 0.8869
0.1929 5500 1.114 0.6169 0.8872
0.2104 6000 0.9854 0.6108 0.8886
0.2279 6500 1.081 0.6047 0.8900
0.2455 7000 0.9899 0.5983 0.8912
0.2630 7500 1.0551 0.5931 0.8921
0.2805 8000 1.0515 0.5882 0.8930
0.2981 8500 1.0384 0.5768 0.8946
0.3156 9000 1.0545 0.5716 0.8945
0.3331 9500 1.006 0.5744 0.8959
0.3507 10000 0.9629 0.5719 0.8960
0.3682 10500 1.0877 0.5600 0.8958
0.3857 11000 1.0594 0.5639 0.8975
0.4033 11500 1.0708 0.5672 0.8975
0.4208 12000 1.0275 0.5481 0.8986
0.4383 12500 0.9467 0.5552 0.9007
0.4559 13000 1.0048 0.5524 0.9008
0.4734 13500 1.0135 0.5482 0.9002
0.4909 14000 0.9579 0.5428 0.9002
0.5085 14500 0.9534 0.5373 0.9015
0.5260 15000 0.9225 0.5347 0.9025
0.5435 15500 0.9936 0.5384 0.9011
0.5611 16000 0.926 0.5298 0.9028
0.5786 16500 0.9904 0.5338 0.9034
0.5961 17000 0.9302 0.5281 0.9033
0.6137 17500 0.908 0.5332 0.9025
0.6312 18000 0.8936 0.5322 0.9046
0.6487 18500 0.9549 0.5312 0.9039
0.6663 19000 0.9498 0.5319 0.9030
0.6838 19500 0.9291 0.5279 0.9038
0.7013 20000 0.9573 0.5165 0.9017
0.7189 20500 0.9395 0.5223 0.9039
0.7364 21000 0.8753 0.5335 0.9009
0.7539 21500 0.95 0.5173 0.9040
0.7715 22000 0.9656 0.5451 0.9043
0.7890 22500 0.9145 0.5305 0.9033
0.8065 23000 0.9768 0.5135 0.9041
0.8241 23500 0.8779 0.5185 0.9037
0.8416 24000 0.9603 0.5338 0.9036
0.8591 24500 0.9045 0.5090 0.9056
0.8767 25000 0.9536 0.5254 0.9043
0.8942 25500 0.8499 0.5388 0.9023
0.9117 26000 0.88 0.5676 0.9011
0.9293 26500 0.8884 0.5127 0.9046
0.9468 27000 0.8556 0.5227 0.9065
0.9643 27500 0.8641 0.5901 0.9027
0.9819 28000 0.8884 0.4982 0.9054
0.9994 28500 0.8404 0.5078 0.9064
1.0169 29000 0.8613 0.5211 0.9052
1.0345 29500 0.8971 0.5061 0.9065
1.0520 30000 0.9426 0.5118 0.9062
1.0695 30500 0.8791 0.5062 0.9062
1.0871 31000 0.8953 0.5056 0.9044
1.1046 31500 0.9229 0.5002 0.9065
1.1221 32000 0.8914 0.4912 0.9088
1.1397 32500 0.9105 0.4973 0.9086
1.1572 33000 0.9168 0.4954 0.9074
1.1747 33500 0.845 0.5073 0.9088
1.1923 34000 0.9209 0.4890 0.9088
1.2098 34500 0.8014 0.5063 0.9063
1.2273 35000 0.8888 0.5270 0.9070
1.2449 35500 0.8269 0.5062 0.9059
1.2624 36000 0.8637 0.4951 0.9054
1.2799 36500 0.8796 0.4922 0.9083
1.2975 37000 0.8644 0.4851 0.9068
1.3150 37500 0.8907 0.5396 0.9069
1.3325 38000 0.8477 0.4944 0.9082
1.3501 38500 0.8237 0.4915 0.9081
1.3676 39000 0.9217 0.4918 0.9083
1.3851 39500 0.887 0.4955 0.9064
1.4027 40000 0.9172 0.5259 0.9077
1.4202 40500 0.8693 0.5002 0.9092
1.4377 41000 0.8223 0.5109 0.9084
1.4553 41500 0.8554 0.4859 0.9079
1.4728 42000 0.8772 0.4850 0.9079
1.4903 42500 0.8232 0.4860 0.9088
1.5079 43000 0.8218 0.4917 0.9083
1.5254 43500 0.7905 0.4839 0.9094
1.5429 44000 0.847 0.5150 0.9081
1.5605 44500 0.7929 0.5234 0.9082
1.5780 45000 0.8621 0.5084 0.9094
1.5955 45500 0.7908 0.4980 0.9092
1.6131 46000 0.792 0.5385 0.9071
1.6306 46500 0.7569 0.5405 0.9088
1.6481 47000 0.8178 0.5172 0.9078
1.6657 47500 0.8101 0.5379 0.9082
1.6832 48000 0.8013 0.5627 0.9068
1.7007 48500 0.8298 0.5947 0.9072
1.7183 49000 0.8028 0.5302 0.9076
1.7358 49500 0.7663 0.5523 0.9066
1.7533 50000 0.8255 0.5361 0.9080
1.7709 50500 0.8354 0.5373 0.9080
1.7884 51000 0.7917 0.5546 0.9079
1.8059 51500 0.837 0.5113 0.9085
1.8235 52000 0.7488 0.5037 0.9082
1.8410 52500 0.8439 0.5349 0.9084
1.8585 53000 0.7688 0.5279 0.9083
1.8761 53500 0.8205 0.5496 0.9071
1.8936 54000 0.7256 0.5454 0.9075
1.9111 54500 0.7536 0.5582 0.9060
1.9287 55000 0.7544 0.5331 0.9075
1.9462 55500 0.7332 0.5139 0.9091
1.9637 56000 0.7244 0.5767 0.9078
1.9813 56500 0.7574 0.4962 0.9084
1.9988 57000 0.7116 0.5210 0.9090
2.0163 57500 0.7376 0.5196 0.9088
2.0339 58000 0.768 0.5609 0.9086
2.0514 58500 0.8056 0.5230 0.9081
2.0689 59000 0.7744 0.5527 0.9077
2.0865 59500 0.7543 0.4949 0.9090
2.1040 60000 0.8 0.4925 0.9095
2.1215 60500 0.7664 0.4989 0.9093
2.1391 61000 0.7849 0.4956 0.9106
2.1566 61500 0.7955 0.5312 0.9099
2.1741 62000 0.7326 0.5126 0.9112
2.1917 62500 0.7975 0.4701 0.9114
2.2092 63000 0.7001 0.5118 0.9093
2.2267 63500 0.7477 0.5371 0.9102
2.2443 64000 0.7227 0.5536 0.9083
2.2618 64500 0.7687 0.5174 0.9102
2.2793 65000 0.7633 0.4925 0.9102
2.2969 65500 0.7572 0.5059 0.9093
2.3144 66000 0.7846 0.5391 0.9088
2.3319 66500 0.7434 0.4991 0.9111
2.3495 67000 0.7124 0.5115 0.9107
2.3670 67500 0.8085 0.4974 0.9086
2.3845 68000 0.7879 0.5114 0.9089
2.4021 68500 0.7977 0.5297 0.9086
2.4196 69000 0.782 0.5251 0.9103
2.4371 69500 0.7237 0.5568 0.9088
2.4547 70000 0.7556 0.5008 0.9098
2.4722 70500 0.777 0.4784 0.9097
2.4897 71000 0.7205 0.4993 0.9097
2.5073 71500 0.7237 0.5096 0.9102
2.5248 72000 0.6976 0.4833 0.9107
2.5423 72500 0.7572 0.5234 0.9092
2.5599 73000 0.7012 0.5339 0.9096
2.5774 73500 0.7799 0.5056 0.9107
2.5949 74000 0.7036 0.4961 0.9101
2.6125 74500 0.6932 0.5656 0.9088
2.6300 75000 0.6676 0.5347 0.9097
2.6475 75500 0.7246 0.5110 0.9101
2.6651 76000 0.715 0.5551 0.9096
2.6826 76500 0.7298 0.5658 0.9106
2.7001 77000 0.7349 0.5571 0.9106
2.7177 77500 0.721 0.5667 0.9100
2.7352 78000 0.6863 0.5616 0.9066
2.7527 78500 0.739 0.5419 0.9101
2.7703 79000 0.7529 0.5343 0.9107
2.7878 79500 0.7008 0.5601 0.9107
2.8053 80000 0.7655 0.5189 0.9097
2.8229 80500 0.6666 0.5073 0.9106
2.8404 81000 0.7551 0.5381 0.9102
2.8579 81500 0.6769 0.5650 0.9092
2.8755 82000 0.7508 0.5189 0.9097
2.8930 82500 0.6418 0.5521 0.9094
2.9105 83000 0.6808 0.5490 0.9095
2.9281 83500 0.6833 0.5524 0.9092
2.9456 84000 0.6508 0.5229 0.9105
2.9631 84500 0.6576 0.5789 0.9100
2.9807 85000 0.6778 0.5075 0.9108
2.9982 85500 0.642 0.5139 0.9107
3.0157 86000 0.6596 0.5337 0.9104
3.0333 86500 0.6769 0.5713 0.9106
3.0508 87000 0.7349 0.5374 0.9103
3.0683 87500 0.7034 0.5680 0.9094
3.0859 88000 0.6853 0.5130 0.9106
3.1034 88500 0.726 0.5093 0.9123
3.1209 89000 0.6939 0.5078 0.9104
3.1385 89500 0.7085 0.4847 0.9125
3.1560 90000 0.7118 0.5154 0.9113
3.1735 90500 0.6755 0.5066 0.9121
3.1911 91000 0.718 0.4665 0.9129
3.2086 91500 0.6277 0.5047 0.9111
3.2261 92000 0.6907 0.5292 0.9123
3.2437 92500 0.6624 0.5414 0.9103
3.2612 93000 0.6943 0.5274 0.9101
3.2787 93500 0.6979 0.4985 0.9110
3.2963 94000 0.6858 0.5156 0.9099
3.3138 94500 0.7221 0.5062 0.9114
3.3313 95000 0.6647 0.5129 0.9108
3.3489 95500 0.6572 0.5213 0.9127
3.3664 96000 0.7417 0.4926 0.9119
3.3839 96500 0.7237 0.5090 0.9104
3.4015 97000 0.7218 0.5336 0.9111
3.4190 97500 0.7091 0.5062 0.9128
3.4365 98000 0.668 0.5727 0.9118
3.4541 98500 0.6724 0.5106 0.9119
3.4716 99000 0.7331 0.4740 0.9130
3.4891 99500 0.6427 0.5021 0.9119
3.5067 100000 0.6659 0.5037 0.9119
3.5242 100500 0.6413 0.5024 0.9109
3.5417 101000 0.6889 0.5277 0.9109
3.5593 101500 0.6401 0.5389 0.9103
3.5768 102000 0.7116 0.5114 0.9111
3.5943 102500 0.6511 0.5124 0.9112
3.6119 103000 0.6392 0.5505 0.9096
3.6294 103500 0.6049 0.5306 0.9099
3.6469 104000 0.675 0.5219 0.9098
3.6645 104500 0.6498 0.5392 0.9100
3.6820 105000 0.6774 0.5609 0.9097
3.6995 105500 0.6655 0.5441 0.9107
3.7171 106000 0.6664 0.5713 0.9113
3.7346 106500 0.6343 0.5742 0.9086
3.7521 107000 0.6686 0.5225 0.9113
3.7697 107500 0.7018 0.5221 0.9111
3.7872 108000 0.6479 0.5641 0.9113
3.8047 108500 0.7005 0.5352 0.9123
3.8223 109000 0.6068 0.5007 0.9107
3.8398 109500 0.6846 0.5593 0.9102
3.8573 110000 0.6272 0.5458 0.9107
3.8749 110500 0.685 0.5178 0.9100
3.8924 111000 0.5992 0.5200 0.9102
3.9099 111500 0.6231 0.5488 0.9101
3.9275 112000 0.6343 0.5496 0.9100
3.9450 112500 0.593 0.5207 0.9115
3.9625 113000 0.6017 0.5679 0.9108
3.9801 113500 0.6218 0.5174 0.9113
3.9976 114000 0.5916 0.5108 0.9118
4.0151 114500 0.603 0.5259 0.9117
4.0327 115000 0.6215 0.5362 0.9121
4.0502 115500 0.6784 0.5343 0.9112
4.0677 116000 0.65 0.5488 0.9114
4.0853 116500 0.632 0.4905 0.9119
4.1028 117000 0.6708 0.5091 0.9129
4.1203 117500 0.6374 0.5228 0.9124
4.1379 118000 0.6593 0.4976 0.9125
4.1554 118500 0.649 0.5151 0.9109
4.1729 119000 0.629 0.5303 0.9124
4.1905 119500 0.6709 0.4868 0.9121
4.2080 120000 0.5803 0.5177 0.9130
4.2255 120500 0.6356 0.5329 0.9140
4.2431 121000 0.6075 0.5057 0.9129
4.2606 121500 0.6463 0.5084 0.9126
4.2781 122000 0.6408 0.4859 0.9127
4.2957 122500 0.6331 0.5210 0.9114
4.3132 123000 0.6719 0.4893 0.9122
4.3308 123500 0.6227 0.5126 0.9129
4.3483 124000 0.6144 0.5293 0.9136
4.3658 124500 0.6589 0.4978 0.9127
4.3834 125000 0.6849 0.5195 0.9122
4.4009 125500 0.6731 0.5150 0.9119
4.4184 126000 0.658 0.4890 0.9136
4.4360 126500 0.6256 0.5271 0.9134
4.4535 127000 0.6295 0.5182 0.9129
4.4710 127500 0.6804 0.4870 0.9133
4.4886 128000 0.5868 0.4831 0.9129
4.5061 128500 0.6316 0.4963 0.9135
4.5236 129000 0.5873 0.5179 0.9149
4.5412 129500 0.6383 0.5188 0.9126
4.5587 130000 0.5936 0.5420 0.9117
4.5762 130500 0.654 0.5248 0.9123
4.5938 131000 0.6172 0.5067 0.9130
4.6113 131500 0.5766 0.5335 0.9117
4.6288 132000 0.5688 0.5345 0.9106
4.6464 132500 0.6254 0.5352 0.9115
4.6639 133000 0.5978 0.5244 0.9117
4.6814 133500 0.6332 0.5511 0.9119
4.6990 134000 0.6209 0.5356 0.9120
4.7165 134500 0.6166 0.5532 0.9125
4.7340 135000 0.5897 0.5888 0.9105
4.7516 135500 0.624 0.5153 0.9123
4.7691 136000 0.6563 0.5260 0.9134
4.7866 136500 0.6098 0.5603 0.9122
4.8042 137000 0.6313 0.5390 0.9124
4.8217 137500 0.5737 0.5093 0.9129
4.8392 138000 0.6475 0.5320 0.9114
4.8568 138500 0.5752 0.5531 0.9120
4.8743 139000 0.6378 0.4997 0.9114
4.8918 139500 0.5641 0.5121 0.9120
4.9094 140000 0.5771 0.5343 0.9114
4.9269 140500 0.5869 0.5277 0.9124
4.9444 141000 0.5417 0.5105 0.9143
4.9620 141500 0.5517 0.5664 0.9133
4.9795 142000 0.589 0.5326 0.9122
4.9970 142500 0.5449 0.5236 0.9136
5.0146 143000 0.5687 0.5217 0.9141
5.0321 143500 0.5815 0.5520 0.9131
5.0496 144000 0.6309 0.5290 0.9125
5.0672 144500 0.6086 0.5305 0.9128
5.0847 145000 0.5905 0.5044 0.9135
5.1022 145500 0.6242 0.5113 0.9144
5.1198 146000 0.603 0.5263 0.9137
5.1373 146500 0.6187 0.5086 0.9131
5.1548 147000 0.6007 0.5291 0.9136
5.1724 147500 0.5934 0.5113 0.9131
5.1899 148000 0.6208 0.4981 0.9142
5.2074 148500 0.5524 0.5414 0.9146
5.2250 149000 0.5941 0.5274 0.9146
5.2425 149500 0.5694 0.5315 0.9140
5.2600 150000 0.6045 0.5177 0.9138
5.2776 150500 0.5928 0.4923 0.9146
5.2951 151000 0.594 0.5209 0.9138
5.3126 151500 0.6303 0.5014 0.9137
5.3302 152000 0.5867 0.5151 0.9135
5.3477 152500 0.5686 0.5244 0.9142
5.3652 153000 0.6198 0.5063 0.9140
5.3828 153500 0.6458 0.5403 0.9131
5.4003 154000 0.6284 0.4988 0.9140
5.4178 154500 0.6192 0.5008 0.9143
5.4354 155000 0.5943 0.5334 0.9134
5.4529 155500 0.5725 0.5270 0.9141
5.4704 156000 0.656 0.4985 0.9146
5.4880 156500 0.5562 0.4863 0.9137
5.5055 157000 0.5888 0.5099 0.9141
5.5230 157500 0.5329 0.5039 0.9149
5.5406 158000 0.619 0.5232 0.9136
5.5581 158500 0.5528 0.5471 0.9135
5.5756 159000 0.6086 0.5226 0.9125
5.5932 159500 0.5895 0.5072 0.9132
5.6107 160000 0.5358 0.5419 0.9139
5.6282 160500 0.5438 0.5334 0.9121
5.6458 161000 0.579 0.5548 0.9118
5.6633 161500 0.5636 0.5257 0.9127
5.6808 162000 0.5984 0.5520 0.9136
5.6984 162500 0.581 0.5314 0.9135
5.7159 163000 0.5923 0.5665 0.9132
5.7334 163500 0.5433 0.5717 0.9121
5.7510 164000 0.583 0.5338 0.9137
5.7685 164500 0.6272 0.5275 0.9137
5.7860 165000 0.576 0.5657 0.9130
5.8036 165500 0.5983 0.5457 0.9131
5.8211 166000 0.5389 0.5252 0.9141
5.8386 166500 0.6035 0.5478 0.9131
5.8562 167000 0.5398 0.5334 0.9136
5.8737 167500 0.5986 0.5021 0.9136
5.8912 168000 0.5383 0.5261 0.9137
5.9088 168500 0.5376 0.5374 0.9128
5.9263 169000 0.5555 0.5375 0.9136
5.9438 169500 0.5182 0.5230 0.9137
5.9614 170000 0.5175 0.5653 0.9143
5.9789 170500 0.5572 0.5433 0.9141
5.9964 171000 0.5169 0.5035 0.9151
6.0140 171500 0.5336 0.5178 0.9149
6.0315 172000 0.5479 0.5427 0.9141
6.0490 172500 0.5885 0.5417 0.9137
6.0666 173000 0.5694 0.5232 0.9138
6.0841 173500 0.5634 0.5074 0.9142
6.1016 174000 0.5888 0.5102 0.9145
6.1192 174500 0.576 0.5225 0.9148
6.1367 175000 0.5843 0.5161 0.9144
6.1542 175500 0.5635 0.5244 0.9141
6.1718 176000 0.5666 0.5088 0.9149
6.1893 176500 0.5868 0.5185 0.9150
6.2068 177000 0.5211 0.5348 0.9154
6.2244 177500 0.5672 0.5268 0.9150
6.2419 178000 0.5286 0.5431 0.9141
6.2594 178500 0.5723 0.5359 0.9154
6.2770 179000 0.5648 0.5016 0.9154
6.2945 179500 0.5566 0.5200 0.9145
6.3120 180000 0.6074 0.5132 0.9145
6.3296 180500 0.5473 0.5294 0.9145
6.3471 181000 0.5325 0.5380 0.9150
6.3646 181500 0.5868 0.5243 0.9149
6.3822 182000 0.6155 0.5368 0.9143
6.3997 182500 0.5944 0.4978 0.9149
6.4172 183000 0.5838 0.5224 0.9146
6.4348 183500 0.5644 0.5384 0.9146
6.4523 184000 0.5471 0.5549 0.9152
6.4698 184500 0.6198 0.5101 0.9147
6.4874 185000 0.5304 0.5016 0.9152
6.5049 185500 0.5621 0.5076 0.9155
6.5224 186000 0.5027 0.5085 0.9148
6.5400 186500 0.5882 0.5293 0.9147
6.5575 187000 0.5228 0.5374 0.9152
6.5750 187500 0.5717 0.5233 0.9140
6.5926 188000 0.5651 0.5269 0.9136
6.6101 188500 0.5182 0.5328 0.9140
6.6276 189000 0.508 0.5250 0.9134
6.6452 189500 0.5464 0.5427 0.9128
6.6627 190000 0.5362 0.5137 0.9136
6.6802 190500 0.5732 0.5161 0.9148
6.6978 191000 0.5466 0.5416 0.9136
6.7153 191500 0.5501 0.5736 0.9137
6.7328 192000 0.5258 0.5528 0.9130
6.7504 192500 0.5589 0.5380 0.9142
6.7679 193000 0.5947 0.5297 0.9148
6.7854 193500 0.5579 0.5590 0.9145
6.8030 194000 0.5644 0.5412 0.9142
6.8205 194500 0.5128 0.5181 0.9137
6.8380 195000 0.5802 0.5451 0.9136
6.8556 195500 0.5002 0.5293 0.9144
6.8731 196000 0.5763 0.5153 0.9140
6.8906 196500 0.5205 0.5261 0.9144
6.9082 197000 0.5112 0.5342 0.9149
6.9257 197500 0.523 0.5503 0.9140
6.9432 198000 0.4875 0.5420 0.9148
6.9608 198500 0.4963 0.5638 0.9142
6.9783 199000 0.5327 0.5536 0.9149
6.9958 199500 0.4822 0.5224 0.9141
7.0134 200000 0.5078 0.5300 0.9140
7.0309 200500 0.5208 0.5486 0.9149
7.0484 201000 0.5641 0.5442 0.9148
7.0660 201500 0.5484 0.5165 0.9143
7.0835 202000 0.5289 0.5206 0.9142
7.1010 202500 0.557 0.5178 0.9146
7.1186 203000 0.556 0.5190 0.9147
7.1361 203500 0.5567 0.5244 0.9143
7.1536 204000 0.5376 0.5212 0.9148
7.1712 204500 0.5448 0.5138 0.9150
7.1887 205000 0.5541 0.5231 0.9155
7.2062 205500 0.5006 0.5261 0.9155
7.2238 206000 0.5366 0.5184 0.9159
7.2413 206500 0.5127 0.5360 0.9148
7.2588 207000 0.5469 0.5225 0.9148
7.2764 207500 0.5414 0.5080 0.9152
7.2939 208000 0.5361 0.5135 0.9151
7.3114 208500 0.5833 0.5132 0.9147
7.3290 209000 0.515 0.5282 0.9137
7.3465 209500 0.5165 0.5362 0.9154
7.3640 210000 0.5551 0.5327 0.9159
7.3816 210500 0.5845 0.5409 0.9143
7.3991 211000 0.5798 0.5057 0.9147
7.4166 211500 0.5614 0.5275 0.9149
7.4342 212000 0.5445 0.5175 0.9153
7.4517 212500 0.5175 0.5424 0.9139
7.4692 213000 0.6043 0.5075 0.9148
7.4868 213500 0.5051 0.5067 0.9154
7.5043 214000 0.5337 0.5143 0.9153
7.5218 214500 0.4822 0.5049 0.9156
7.5394 215000 0.5722 0.5359 0.9153
7.5569 215500 0.5014 0.5306 0.9147
7.5744 216000 0.5441 0.5222 0.9138
7.5920 216500 0.5391 0.5261 0.9138
7.6095 217000 0.494 0.5275 0.9144
7.6270 217500 0.4881 0.5268 0.9141
7.6446 218000 0.5263 0.5381 0.9138
7.6621 218500 0.5017 0.5209 0.9134
7.6796 219000 0.5566 0.5347 0.9138
7.6972 219500 0.5201 0.5519 0.9135
7.7147 220000 0.5269 0.5718 0.9143
7.7322 220500 0.5125 0.5442 0.9135
7.7498 221000 0.5307 0.5292 0.9142
7.7673 221500 0.5718 0.5179 0.9140
7.7848 222000 0.5345 0.5512 0.9147
7.8024 222500 0.5456 0.5447 0.9143
7.8199 223000 0.4889 0.5197 0.9144
7.8374 223500 0.5532 0.5487 0.9146
7.8550 224000 0.4902 0.5257 0.9137
7.8725 224500 0.5535 0.5095 0.9135
7.8900 225000 0.4988 0.5404 0.9141
7.9076 225500 0.4883 0.5280 0.9143
7.9251 226000 0.4975 0.5458 0.9133
7.9426 226500 0.4698 0.5357 0.9147
7.9602 227000 0.4831 0.5391 0.9143
7.9777 227500 0.5073 0.5492 0.9148
7.9952 228000 0.4637 0.5140 0.9148
8.0128 228500 0.4817 0.5200 0.9137
8.0303 229000 0.5078 0.5370 0.9146
8.0478 229500 0.5342 0.5497 0.9149
8.0654 230000 0.5317 0.5179 0.9156
8.0829 230500 0.5074 0.5286 0.9151
8.1004 231000 0.5302 0.5165 0.9162
8.1180 231500 0.5481 0.5200 0.9163
8.1355 232000 0.538 0.5216 0.9161
8.1530 232500 0.5168 0.5189 0.9152
8.1706 233000 0.5118 0.5195 0.9153
8.1881 233500 0.5394 0.5192 0.9155
8.2056 234000 0.488 0.5100 0.9153
8.2232 234500 0.5214 0.5162 0.9161
8.2407 235000 0.4944 0.5343 0.9149
8.2582 235500 0.5226 0.5190 0.9152
8.2758 236000 0.5234 0.5146 0.9159
8.2933 236500 0.5165 0.5011 0.9153
8.3108 237000 0.5599 0.5129 0.9152
8.3284 237500 0.4991 0.5212 0.9154
8.3459 238000 0.5007 0.5383 0.9148
8.3634 238500 0.5406 0.5394 0.9154
8.3810 239000 0.5606 0.5445 0.9147
8.3985 239500 0.5626 0.5143 0.9149
8.4160 240000 0.5353 0.5338 0.9156
8.4336 240500 0.5168 0.5208 0.9158
8.4511 241000 0.5058 0.5312 0.9146
8.4686 241500 0.5919 0.5143 0.9149
8.4862 242000 0.4883 0.5149 0.9159
8.5037 242500 0.5072 0.5132 0.9156
8.5212 243000 0.4655 0.5111 0.9148
8.5388 243500 0.5592 0.5269 0.9155
8.5563 244000 0.4836 0.5217 0.9152
8.5738 244500 0.5299 0.5269 0.9143
8.5914 245000 0.5081 0.5206 0.9136
8.6089 245500 0.48 0.5159 0.9144
8.6264 246000 0.4713 0.5272 0.9141
8.6440 246500 0.5038 0.5287 0.9139
8.6615 247000 0.4872 0.5199 0.9142
8.6790 247500 0.5429 0.5227 0.9138
8.6966 248000 0.5042 0.5402 0.9136
8.7141 248500 0.511 0.5530 0.9141
8.7316 249000 0.5097 0.5374 0.9131
8.7492 249500 0.4974 0.5312 0.9138
8.7667 250000 0.5617 0.5381 0.9148
8.7842 250500 0.5234 0.5476 0.9150
8.8018 251000 0.5133 0.5447 0.9147
8.8193 251500 0.488 0.5270 0.9148
8.8368 252000 0.5377 0.5325 0.9144
8.8544 252500 0.479 0.5324 0.9145
8.8719 253000 0.5329 0.5200 0.9140
8.8894 253500 0.4744 0.5346 0.9140
8.9070 254000 0.4827 0.5333 0.9145
8.9245 254500 0.4757 0.5415 0.9139
8.9420 255000 0.4504 0.5307 0.9147
8.9596 255500 0.4657 0.5337 0.9146
8.9771 256000 0.4976 0.5473 0.9150
8.9946 256500 0.459 0.5214 0.9144
9.0122 257000 0.4615 0.5296 0.9147
9.0297 257500 0.5019 0.5312 0.9149
9.0472 258000 0.5142 0.5379 0.9152
9.0648 258500 0.5174 0.5197 0.9150
9.0823 259000 0.4896 0.5277 0.9155
9.0998 259500 0.5114 0.5240 0.9161
9.1174 260000 0.529 0.5293 0.9155
9.1349 260500 0.5305 0.5242 0.9157
9.1524 261000 0.4941 0.5160 0.9155
9.1700 261500 0.5025 0.5274 0.9153
9.1875 262000 0.5148 0.5198 0.9155
9.2050 262500 0.4882 0.5116 0.9160
9.2226 263000 0.4964 0.5139 0.9155
9.2401 263500 0.4792 0.5284 0.9153
9.2576 264000 0.5089 0.5175 0.9154
9.2752 264500 0.5124 0.5188 0.9154
9.2927 265000 0.4968 0.5153 0.9152
9.3102 265500 0.5454 0.5129 0.9152
9.3278 266000 0.4858 0.5209 0.9147
9.3453 266500 0.4822 0.5257 0.9148
9.3628 267000 0.5343 0.5298 0.9148
9.3804 267500 0.5443 0.5303 0.9145
9.3979 268000 0.546 0.5204 0.9153
9.4154 268500 0.5253 0.5326 0.9154
9.4330 269000 0.5062 0.5270 0.9154
9.4505 269500 0.4901 0.5284 0.9150
9.4680 270000 0.5675 0.5271 0.9154
9.4856 270500 0.4831 0.5263 0.9152
9.5031 271000 0.4873 0.5256 0.9152
9.5206 271500 0.4576 0.5208 0.9155
9.5382 272000 0.5392 0.5250 0.9154
9.5557 272500 0.4716 0.5238 0.9158
9.5732 273000 0.5202 0.5282 0.9156
9.5908 273500 0.5036 0.5284 0.9149
9.6083 274000 0.4645 0.5216 0.9151
9.6258 274500 0.4683 0.5273 0.9154
9.6434 275000 0.4881 0.5307 0.9154
9.6609 275500 0.4677 0.5234 0.9155
9.6784 276000 0.54 0.5212 0.9153
9.6960 276500 0.4948 0.5277 0.9150
9.7135 277000 0.5008 0.5293 0.9150
9.7310 277500 0.4907 0.5307 0.9147
9.7486 278000 0.4876 0.5276 0.9144
9.7661 278500 0.539 0.5324 0.9145
9.7836 279000 0.5147 0.5325 0.9145
9.8012 279500 0.5095 0.5367 0.9150
9.8187 280000 0.476 0.5333 0.9147
9.8362 280500 0.5189 0.5325 0.9150
9.8538 281000 0.4633 0.5342 0.9149
9.8713 281500 0.5199 0.5314 0.9146
9.8888 282000 0.4645 0.5312 0.9151
9.9064 282500 0.4702 0.5339 0.9151
9.9239 283000 0.4609 0.5362 0.9151
9.9414 283500 0.4365 0.5340 0.9152
9.9590 284000 0.4587 0.5339 0.9152
9.9765 284500 0.4861 0.5355 0.9153
9.9940 285000 0.4473 0.5352 0.9153
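
The medi-mteb-dev_cosine_accuracy column reports the fraction of dev triplets whose anchor embeds closer to the positive than to the negative under cosine similarity. A minimal sketch of computing such a metric with a TripletEvaluator (the triplets shown are illustrative, not drawn from the actual dev set):

    from sentence_transformers import SentenceTransformer
    from sentence_transformers.evaluation import TripletEvaluator

    model = SentenceTransformer("avsolatorio/all-MiniLM-L6-v2-MEDI-MTEB-triplet-randproj-64-final")

    dev_evaluator = TripletEvaluator(
        anchors=["what is the capital of france"],
        positives=["Paris is the capital and largest city of France."],
        negatives=["Berlin is the capital of Germany."],
        name="medi-mteb-dev",
    )
    print(dev_evaluator(model))  # keys include "medi-mteb-dev_cosine_accuracy"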

Framework Versions

  • Python: 3.10.10
  • Sentence Transformers: 3.4.0.dev0
  • Transformers: 4.46.3
  • PyTorch: 2.5.1+cu124
  • Accelerate: 0.34.2
  • Datasets: 2.21.0
  • Tokenizers: 0.20.3

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
Model size: 22.7M params · Tensor type: F32 · Format: Safetensors
Inference Examples

This model does not yet have enough usage activity to be deployed to the serverless Inference API. It can be deployed to dedicated Inference Endpoints, or run locally as in the sketch below.
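
A minimal local-usage sketch (the sentences are illustrative):

    from sentence_transformers import SentenceTransformer

    # Load the finetuned model from the Hugging Face Hub (or a local path).
    model = SentenceTransformer("avsolatorio/all-MiniLM-L6-v2-MEDI-MTEB-triplet-randproj-64-final")

    sentences = [
        "A query about model usage.",
        "A passage that might answer the query.",
    ]
    embeddings = model.encode(sentences)
    print(embeddings.shape)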

Model tree for avsolatorio/all-MiniLM-L6-v2-MEDI-MTEB-triplet-randproj-64-final

This model is a finetune of sentence-transformers/all-MiniLM-L6-v2.