I tested the Snowflake models on the SciFact dataset, and the results were strange.

#143
by whatsupbro - opened

I just used the mteb GitHub code with cos_sim. NDCG@10 (the number in parentheses is the MTEB leaderboard score):

Snowflake/snowflake-arctic-embed-s: 0.6008 (69.92)
Snowflake/snowflake-arctic-embed-xs: 0.4828 (64.51)
Snowflake/snowflake-arctic-embed-m-v1.5: 0.3646 (71.59)
Snowflake/snowflake-arctic-embed-l: 0.1776 (73.82)
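For reference, the NDCG@10 numbers above are computed from the ranked retrieval results per query; a minimal standalone sketch of the metric (not mteb's own implementation, which lives in the repo linked below) looks like this:

```python
import math

def ndcg_at_k(relevances, k=10):
    """NDCG@k for one query, given graded relevance scores in ranked order."""
    def dcg(rels):
        # Discounted cumulative gain: relevance discounted by log2 of rank.
        return sum(r / math.log2(i + 2) for i, r in enumerate(rels[:k]))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# A perfect ranking scores 1.0; pushing the relevant doc down lowers the score.
print(ndcg_at_k([1, 0, 0]))               # 1.0
print(round(ndcg_at_k([0, 1, 0]), 4))     # 0.6309
```

So a model whose embeddings rank the gold SciFact documents lower will show exactly the kind of depressed NDCG@10 seen above.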
I also ran the same code on other models such as all-MiniLM, e5, bge, and gte, and those results were correct.
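For context, the cos_sim scoring I used can be sketched as follows (a minimal NumPy version for illustration; the actual scoring function I ran is the one in the mteb code):

```python
import numpy as np

def cos_sim(query, docs):
    """Cosine similarity between one query vector and a matrix of doc vectors."""
    query = query / np.linalg.norm(query)
    docs = docs / np.linalg.norm(docs, axis=-1, keepdims=True)
    return docs @ query

q = np.array([1.0, 0.0])
d = np.array([[1.0, 0.0], [0.0, 1.0]])
print(cos_sim(q, d))  # [1. 0.]
```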

I want to know whether my testing steps contain an error.

Massive Text Embedding Benchmark org

Hi @ilovestudy! Happy that you are testing out the benchmark. We are more than happy to take questions over at: https://github.com/embeddings-benchmark/mteb

KennethEnevoldsen changed discussion status to closed