Model Summary

This repository hosts quantized versions of the bge-m3 embedding model.

Format: GGUF
Converter: llama.cpp 82e3b03c11826d20a24ab66d60f4de58f48ddcdb
Quantizer: LM-Kit.NET 2024.9.0

For more detailed information, please refer to the base model's documentation.

Model size: 567M params
Architecture: bert

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit
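A GGUF embedding model like this one is typically loaded with a llama.cpp binding and its output vectors compared by cosine similarity. Below is a minimal sketch: the model filename and the llama-cpp-python snippet in the comments are assumptions (pick whichever quantization you downloaded); the similarity helper itself is self-contained and runnable.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# With llama-cpp-python installed and a quant downloaded from this repo,
# producing embeddings looks roughly like this (filename is an assumption):
#
#   from llama_cpp import Llama
#   llm = Llama(model_path="bge-m3-q4_k_m.gguf", embedding=True)
#   vec_a = llm.embed("first sentence")
#   vec_b = llm.embed("second sentence")
#   print(cosine_similarity(vec_a, vec_b))

# Toy vectors stand in for real embeddings here:
print(round(cosine_similarity([1.0, 0.0, 1.0], [1.0, 1.0, 0.0]), 4))  # → 0.5
```

Lower-bit quantizations trade a small amount of embedding quality for a much smaller file and faster inference, so it is worth comparing similarity scores across quants on your own data before committing to one.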
