14B model detected as 7B

#1049
by djuna - opened

I've been working on merging a 14-billion-parameter model recently, but when it came time to evaluate it, the system reported that the model has only 7 billion parameters instead of the expected 14 billion. It's funny that the top-ranked 7B model is actually a 14B one.

Open LLM Leaderboard org

Hi @djuna ,

Could you please provide the request file for the model you submitted so we can check the number of parameters?

When you filter by the 7-8B size range on the Space, more than 10 of the models are actually 14B.

There are quite a few models on the leaderboard where the indicated size is half the actual size:

  • maldv/Qwentile2.5-32B-Instruct
  • CultriX/Qwen2.5-14B-Wernickev3

...and many others, most of them Qwen-derived.
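
For what it's worth, here is a rough way to sanity-check what a repo's safetensors headers actually report, independent of whatever ends up in the request file. This is just a sketch using huggingface_hub's `get_safetensors_metadata` helper (assuming the repo ships safetensors weights), not the leaderboard's own detection code:

```python
# Minimal sketch: read the parameter count straight from the safetensors
# metadata of a Hub repo. Assumes huggingface_hub is installed and the
# repo publishes .safetensors weight files.
from huggingface_hub import get_safetensors_metadata

def actual_param_count(repo_id: str) -> float:
    """Return the total parameter count (in billions) reported by the safetensors headers."""
    meta = get_safetensors_metadata(repo_id)
    # parameter_count maps dtype -> number of parameters stored in that dtype
    total = sum(meta.parameter_count.values())
    return total / 1e9

for repo in ["maldv/Qwentile2.5-32B-Instruct", "CultriX/Qwen2.5-14B-Wernickev3"]:
    print(f"{repo}: ~{actual_param_count(repo):.1f}B parameters")
```

If the number printed here disagrees with the size shown on the leaderboard, the problem is presumably on the leaderboard's side of the detection, not in the model repo itself.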
