bm25s
Repository for storing the evaluation results of the bm25s implementation in MTEB. BM25 acts as a competitive, hard-to-beat baseline.
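For reference, BM25 ranks a document $D$ against a query $Q$ with a term-frequency/inverse-document-frequency score. In the common Okapi formulation (bm25s implements this and several closely related variants), the score is:

$$
\operatorname{score}(Q, D) = \sum_{t \in Q} \operatorname{IDF}(t) \cdot \frac{f(t, D)\,(k_1 + 1)}{f(t, D) + k_1\left(1 - b + b\,\frac{|D|}{\operatorname{avgdl}}\right)}
$$

where $f(t, D)$ is the frequency of term $t$ in $D$, $|D|$ is the document length, $\operatorname{avgdl}$ is the average document length in the corpus, and $k_1$, $b$ are free parameters (typically $k_1 \approx 1.2\text{–}2.0$ and $b = 0.75$).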
Note that this repository does not contain the model implementation itself; you can fetch the implementation through MTEB using:
import mteb
task = mteb.get_benchmark("RTEB(beta)")  # the RTEB retrieval benchmark (beta)
model = mteb.get_model("mteb/baseline-bm25s")
results = mteb.evaluate(model, task)
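If you want to try the underlying bm25s library directly (outside of MTEB), a minimal sketch based on its quickstart looks like this; the corpus and query are toy examples, and argument names may differ slightly between bm25s versions:

```python
import bm25s

# Toy corpus; in a real setup these would be the documents of a retrieval task.
corpus = [
    "a cat is a feline and likes to purr",
    "a dog is the human's best friend and loves to play",
    "a bird is a beautiful animal that can fly",
]

# Tokenize the corpus and build the BM25 index.
retriever = bm25s.BM25()
retriever.index(bm25s.tokenize(corpus, stopwords="en"))

# Retrieve the top-2 documents for a query.
query = "does the fish purr like a cat?"
docs, scores = retriever.retrieve(bm25s.tokenize(query), corpus=corpus, k=2)
print(docs[0], scores[0])
```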
Evaluation results
- AILACasedocs Default Test on
mteb/AILA_casedocs source
27.837 * - AILACasedocs on
mteb/AILA_casedocs source
27.837 * - AILAStatutes Default Test on
mteb/AILA_statutes source
21.618 * - AILAStatutes on
mteb/AILA_statutes source
21.618 * - AppsRetrieval Default Test on
CoIR-Retrieval/apps source
4.764 * - AppsRetrieval on
CoIR-Retrieval/apps source
4.764 * - ArguAna Default Test on
mteb/arguana source leaderboard
49.276 * - ArguAna on
mteb/arguana source leaderboard
49.276 * - CQADupstackAndroidRetrieval Default Test on
mteb/CQADupstackAndroidRetrieval source
39.693 * - CQADupstackAndroidRetrieval on
mteb/CQADupstackAndroidRetrieval source
39.693 *