What is the best-performing "dataset" in MS MARCO Mined Triplets?

#1
by kyeongpil - opened

Thank you for sharing these valuable corpora. I appreciate your contribution to the research community.

I noticed that there are several datasets available in the MS MARCO Mined Triplets collection.
However, I'm facing a dilemma in selecting the most suitable dataset for training my reranking model, as I can only choose one.

Have you conducted any analysis on these corpora regarding their performance in training retrieval and reranking models? Such insights would be tremendously helpful in making an informed decision.

Thank you for your time and assistance.

Best regards

Sentence Transformers org

Hello!

The datasets that I've made available in the collection you mention all originate from the https://huggingface.co/datasets/sentence-transformers/msmarco-hard-negatives dataset, set up by my predecessor @nreimers. Sadly, I'm not sure if he ever ran experiments on which of these datasets has the highest quality.
Beyond that, I know that @antoinelouis used this hard negatives dataset to finetune some camembert bi-encoder models, e.g.: https://huggingface.co/antoinelouis/biencoder-camembert-base-mmarcoFR
He opted to use all datasets except for the BM25 negatives. Perhaps he can elaborate on why he made that choice.

Lastly, each of these datasets has four subsets (see the loading sketch after the list):

  • triplet: top 1 negative per pair
  • triplet-all: top 50 negatives per pair
  • triplet-hard: top 50 negatives per pair, but filtering such that similarity(query, positive) > similarity(query, negative) + margin
  • triplet-50: top 50 negatives per pair, formatted as a 52-tuple
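For context, here is a minimal loading sketch with the datasets library. It assumes the subset names listed above double as config names and that the triplet-style subsets expose "query"/"positive"/"negative" columns:

```python
from datasets import load_dataset

# Assumption: subset names double as config names, and triplet-style subsets
# expose "query", "positive", "negative" columns.
triplets = load_dataset(
    "sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1",
    "triplet",
    split="train",
)
print(triplets[0])
```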

The third one is especially interesting from a quality perspective. You could argue that if more samples were filtered away, it is because the bi-encoder used for mining agrees more closely with the cross-encoder that we use for the filtering: the negatives it mined are the ones the cross-encoder also scores as very close to the positive. If we assume that the cross-encoder model is "gold", then the dataset with the fewest samples in the triplet-hard subset is the "best" dataset, i.e. the dataset with the most similar negatives.
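To illustrate that criterion (this is not the original mining code; the cross-encoder checkpoint and margin value here are assumptions), the per-triplet check roughly amounts to:

```python
from sentence_transformers import CrossEncoder

# Illustrative only: triplet-hard was already filtered upstream.
# The checkpoint and margin below are assumptions, not necessarily what was used.
ce = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
MARGIN = 3.0

def keep_triplet(query: str, positive: str, negative: str) -> bool:
    pos_score, neg_score = ce.predict([(query, positive), (query, negative)])
    return pos_score > neg_score + MARGIN
```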

  • sentence-transformers/msmarco-msmarco-distilbert-base-tas-b: 16M
  • sentence-transformers/msmarco-msmarco-distilbert-base-v3: 17.3M
  • sentence-transformers/msmarco-msmarco-MiniLM-L-6-v3: 13.6M
  • sentence-transformers/msmarco-distilbert-margin-mse-cls-dot-v2: 12.5M
  • sentence-transformers/msmarco-distilbert-margin-mse-cls-dot-v1: 12.8M
  • sentence-transformers/msmarco-distilbert-margin-mse-mean-dot-v1: 12.6M
  • sentence-transformers/msmarco-mpnet-margin-mse-mean-v1: 12M
  • sentence-transformers/msmarco-co-condenser-margin-mse-cls-v1: 11.8M
  • sentence-transformers/msmarco-distilbert-margin-mse-mnrl-mean-v1: 12.1M
  • sentence-transformers/msmarco-distilbert-margin-mse-sym-mnrl-mean-v1: 12.1M
  • sentence-transformers/msmarco-distilbert-margin-mse-sym-mnrl-mean-v2: 12.1M
  • sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1: 11.7M
  • sentence-transformers/msmarco-bm25: 19.1M

So, then BM25 is the "worst" and sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1 is the "best". But I'm afraid that negative hardness is a tricky beast: you don't want your negatives to be too similar (as then they should instead just be positives), and you don't want them to be too dissimilar, as then they don't add much. Taking the dataset with the most similar negatives is not guaranteed to result in the best model.

I would actually be very grateful if someone created a training setup with Sentence Transformers to finetune e.g. bert-base-cased on each of these datasets with the same seeds etc. (a rough sketch of such a setup follows below). We would then learn which of these datasets is most useful, and which of the training subsets is most useful. Right now I've just released the 4 kinds of subsets so people can figure out what works best for them.
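For what it's worth, a comparison run with the Sentence Transformers v3+ trainer could look roughly like this; the base model, sample count, and hyperparameters are placeholders, and the only thing that should change between runs is the dataset name:

```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

# Placeholder settings: swap in each mined-triplets dataset, keep everything else identical.
dataset_name = "sentence-transformers/msmarco-bm25"
train_dataset = load_dataset(dataset_name, "triplet", split="train").select(range(100_000))

model = SentenceTransformer("bert-base-cased")
loss = losses.MultipleNegativesRankingLoss(model)

args = SentenceTransformerTrainingArguments(
    output_dir=f"models/{dataset_name.split('/')[-1]}",
    seed=42,  # fixed seed so only the training data differs between runs
    per_device_train_batch_size=64,
    num_train_epochs=1,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```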

If I were in your shoes: I would take the triplet-hard subset from https://huggingface.co/datasets/sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1 if you can handle 11M training samples. If that is too much, then I would filter the triplet-hard subset to only take the first triplet for each pair. That should be the top negative for which similarity(query, positive) > similarity(query, negative) + margin still holds.
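A minimal sketch of that filtering step with the datasets library; it assumes the triplet-hard rows are grouped per (query, positive) pair, ordered with the top negative first, and that the columns are named "query" and "positive":

```python
from datasets import load_dataset

ds = load_dataset(
    "sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1",
    "triplet-hard",
    split="train",
)

# Assumption: rows for the same (query, positive) pair are adjacent and ordered
# by cross-encoder score, so the first row per pair is the top surviving negative.
seen = set()

def first_per_pair(example):
    key = (example["query"], example["positive"])
    if key in seen:
        return False
    seen.add(key)
    return True

ds_top1 = ds.filter(first_per_pair)
```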

  • Tom Aarsen

@tomaarsen I really appreciate your careful answer! It is very helpful for choosing a dataset :)
I have another question: are the hard negatives sorted by their cross-encoder scores in descending order?
