
Incorrect number of parameters in models?

#12
by UssuHug - opened

I've worked with the gliner_medium and gliner_multi models. Although their model cards state that both have 209M parameters, this isn't accurate: the number of trainable parameters is actually 195M for gliner_medium and 289M for gliner_multi. At least part of the discrepancy comes from the difference in vocabulary sizes: 128,004 tokens for gliner_medium versus 250,105 for gliner_multi. I noticed this while comparing the quantized versions of the models and trying to understand the size differences between them.
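For anyone who wants to verify these numbers themselves, here is a minimal PyTorch sketch of how trainable parameters can be counted, and how vocabulary size alone drives a gap of this magnitude. The 768 embedding dimension below is an assumption for illustration (DeBERTa-v3-base-sized backbones use it), not a value taken from the model cards:

```python
import torch.nn as nn

def count_trainable_params(model: nn.Module) -> int:
    # Sum the element counts of all parameters that require gradients
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Hypothetical embedding dim of 768, paired with the two vocab sizes above.
# The embedding table alone accounts for most of the parameter gap:
emb_medium = nn.Embedding(128_004, 768)  # gliner_medium vocab size
emb_multi = nn.Embedding(250_105, 768)   # gliner_multi vocab size

diff = count_trainable_params(emb_multi) - count_trainable_params(emb_medium)
print(diff)  # (250_105 - 128_004) * 768 = 93_773_568
```

Under this assumed embedding dimension, the vocabulary difference alone contributes roughly 94M parameters, which is close to the 289M − 195M gap reported above. The same `count_trainable_params` helper can be pointed at a loaded GLiNER model to get the exact counts.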
