---
license: mit
language:
- en
tags:
- sparse
- sparsity
- quantized
- onnx
- embeddings
- int8
---

This is the sparsified ONNX variant of the [bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) embeddings model, created with [DeepSparse Optimum](https://github.com/neuralmagic/optimum-deepsparse) for the ONNX export/inference pipeline and Neural Magic's [Sparsify](https://github.com/neuralmagic/sparsify) for one-shot quantization (INT8) and unstructured pruning (50%).

Current list of sparse and quantized bge ONNX models:

[zeroshot/bge-large-en-v1.5-sparse](https://huggingface.co/zeroshot/bge-large-en-v1.5-sparse)

[zeroshot/bge-large-en-v1.5-quant](https://huggingface.co/zeroshot/bge-large-en-v1.5-quant)

[zeroshot/bge-base-en-v1.5-sparse](https://huggingface.co/zeroshot/bge-base-en-v1.5-sparse)

[zeroshot/bge-base-en-v1.5-quant](https://huggingface.co/zeroshot/bge-base-en-v1.5-quant)

[zeroshot/bge-small-en-v1.5-sparse](https://huggingface.co/zeroshot/bge-small-en-v1.5-sparse)

[zeroshot/bge-small-en-v1.5-quant](https://huggingface.co/zeroshot/bge-small-en-v1.5-quant)
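
A minimal inference sketch is shown below. It assumes the `optimum-deepsparse` package is installed and exposes a `DeepSparseModelForFeatureExtraction` class for feature-extraction ONNX models, and that this repository's model id is `zeroshot/bge-base-en-v1.5-sparse`; adjust those names for your setup or for another variant from the list above.

```python
from optimum.deepsparse import DeepSparseModelForFeatureExtraction  # assumed class name
from transformers import AutoTokenizer

# Assumed model id for this repository; swap in any of the variants listed above.
model_id = "zeroshot/bge-base-en-v1.5-sparse"

# Load the already-exported ONNX model with the DeepSparse runtime.
model = DeepSparseModelForFeatureExtraction.from_pretrained(model_id, export=False)
tokenizer = AutoTokenizer.from_pretrained(model_id)

sentences = ["Where is the nearest train station?"]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="np")

outputs = model(**inputs)

# bge models use the [CLS] token's final hidden state as the sentence embedding.
embeddings = outputs[0][:, 0]
print(embeddings.shape)
```

For retrieval use cases, the embeddings are typically L2-normalized before computing cosine or dot-product similarity, matching the recommended usage of the original bge models.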