---
pipeline_tag: sentence-similarity
tags:
- feature-extraction
- sentence-similarity
language:
- en
---
# Model Card for `vectorizer.vanilla`
This model is a vectorizer developed by Sinequa. It produces an embedding vector given a passage or a query. The passage vectors are stored in our vector index and the query vector is used at query time to look up relevant passages in the index.
Model name: `vectorizer.vanilla`
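The snippet below is a minimal sketch of this query/passage embedding pattern. It uses the base checkpoint `nreimers/MiniLM-L6-H384-uncased` with mean pooling purely as a stand-in: the production model adds a dense layer that reduces the output to 256 dimensions and is served through the Sinequa platform via ONNX, so the exact loading path differs.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Illustrative only: the base checkpoint is a stand-in for the Sinequa
# vectorizer, which adds a dense projection to 256 dimensions and is
# served via ONNX inside the Sinequa platform.
tokenizer = AutoTokenizer.from_pretrained("nreimers/MiniLM-L6-H384-uncased")
model = AutoModel.from_pretrained("nreimers/MiniLM-L6-H384-uncased")

def embed(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state           # (batch, seq, 384)
    mask = batch["attention_mask"].unsqueeze(-1).float()
    return (hidden * mask).sum(1) / mask.sum(1)              # mean pooling

query_vec = embed(["how do vector indexes work?"])
passage_vec = embed(["A vector index stores passage embeddings for similarity search."])
score = torch.nn.functional.cosine_similarity(query_vec, passage_vec)
```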
## Supported Languages
The model was trained and tested in the following languages:
- English
## Scores
| Metric | Value |
|:-----------------------|------:|
| Relevance (Recall@100) | 0.639 |
Note that the relevance score is computed as an average over 14 retrieval datasets (see
[details below](#evaluation-metrics)).
## Inference Times
| GPU | Quantization type | Batch size 1 | Batch size 32 |
|:------------------------------------------|:------------------|---------------:|---------------:|
| NVIDIA A10 | FP16 | 1 ms | 5 ms |
| NVIDIA A10 | FP32 | 2 ms | 20 ms |
| NVIDIA T4 | FP16 | 1 ms | 14 ms |
| NVIDIA T4 | FP32 | 2 ms | 53 ms |
| NVIDIA L4 | FP16 | 1 ms | 5 ms |
| NVIDIA L4 | FP32 | 3 ms | 25 ms |
## GPU Memory Usage
| Quantization type | Memory |
|:-------------------------------------------------|-----------:|
| FP16 | 300 MiB |
| FP32 | 500 MiB |
Note that the GPU memory usage figures above only cover the memory consumed by the model itself on an NVIDIA T4 GPU with a batch
size of 32. They do not include the fixed amount of memory consumed by the ONNX Runtime upon initialization, which
can be around 0.5 to 1 GiB depending on the GPU used.
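As an illustration of where that fixed cost comes from, the sketch below creates an ONNX Runtime CUDA session and runs a batch of 32. The file name, sequence length, and input names are hypothetical placeholders; the actual model is deployed inside the Sinequa platform.

```python
import numpy as np
import onnxruntime as ort

# Creating the CUDA session already allocates a fixed amount of GPU memory
# (roughly 0.5 to 1 GiB) before any inference happens, on top of the
# per-model figures in the table above.
session = ort.InferenceSession("vectorizer.onnx", providers=["CUDAExecutionProvider"])

# Dummy batch of 32 sequences (input names assumed for illustration).
batch = {
    "input_ids": np.zeros((32, 256), dtype=np.int64),
    "attention_mask": np.ones((32, 256), dtype=np.int64),
}
embeddings = session.run(None, batch)[0]
```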
## Requirements
- Minimum Sinequa version: 11.10.0
- Minimum Sinequa version for using FP16 models and GPUs with CUDA compute capability of 8.9+ (like NVIDIA L4): 11.11.0
- [CUDA compute capability](https://developer.nvidia.com/cuda-gpus): above 5.0 (above 6.0 for FP16 use)
## Model Details
### Overview
- Number of parameters: 23 million
- Base language model: [English MiniLM-L6-H384](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased)
- Insensitive to casing and accents
- Output dimensions: 256 (reduced with an additional dense layer)
- Training procedure: query-passage-negative triplets for datasets with mined hard negatives, and query-passage pairs for the rest; the number of negatives is further augmented with an in-batch negatives strategy (see the sketch after this list)
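The following is a rough sketch of such a setup with the `sentence-transformers` library, assuming the base checkpoint and a dense projection to 256 dimensions as described above; the actual training data, hyperparameters, and loss configuration used by Sinequa are not published here. With `MultipleNegativesRankingLoss`, every other passage in the batch serves as an additional negative for each query.

```python
from sentence_transformers import InputExample, SentenceTransformer, losses, models
from torch.utils.data import DataLoader

# Base encoder + mean pooling + dense layer reducing the output to 256 dims.
word = models.Transformer("nreimers/MiniLM-L6-H384-uncased", max_seq_length=256)
pool = models.Pooling(word.get_word_embedding_dimension())
dense = models.Dense(in_features=pool.get_sentence_embedding_dimension(), out_features=256)
model = SentenceTransformer(modules=[word, pool, dense])

# Query-passage pairs; for datasets with mined hard negatives, a third text
# (the negative passage) would be appended to each example.
train_examples = [
    InputExample(texts=["what is a vector index?",
                        "A vector index stores passage embeddings for similarity search."]),
    InputExample(texts=["capital of france",
                        "Paris is the capital and largest city of France."]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)

# In-batch negatives: other passages in the same batch act as negatives.
train_loss = losses.MultipleNegativesRankingLoss(model)
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=0)
```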
### Training Data
The model has been trained on all of the datasets cited in the [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) model card.
### Evaluation Metrics
To determine the relevance score, we averaged the Recall@100 obtained on the datasets of the
[BEIR benchmark](https://github.com/beir-cellar/beir) listed below. Note that all of these datasets are in English.
| Dataset | Recall@100 |
|:------------------|-----------:|
| Average | 0.639 |
| | |
| Arguana | 0.969 |
| CLIMATE-FEVER | 0.509 |
| DBPedia Entity | 0.409 |
| FEVER | 0.839 |
| FiQA-2018 | 0.702 |
| HotpotQA | 0.609 |
| MS MARCO | 0.849 |
| NFCorpus | 0.315 |
| NQ | 0.786 |
| Quora | 0.995 |
| SCIDOCS | 0.497 |
| SciFact | 0.911 |
| TREC-COVID | 0.129 |
| Webis-Touche-2020 | 0.427 |
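For reference, the sketch below shows how per-dataset Recall@100 numbers of this kind can be produced with the [BEIR](https://github.com/beir-cellar/beir) toolkit, again using the base checkpoint as a stand-in encoder rather than the deployed Sinequa model; repeating it over the 14 datasets above and averaging Recall@100 corresponds to the headline relevance score. The download URL follows BEIR's standard dataset scheme.

```python
from beir import util
from beir.datasets.data_loader import GenericDataLoader
from beir.retrieval import models
from beir.retrieval.evaluation import EvaluateRetrieval
from beir.retrieval.search.dense import DenseRetrievalExactSearch as DRES

# Download and load one BEIR dataset (SciFact as an example).
url = "https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip"
data_path = util.download_and_unzip(url, "datasets")
corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")

# Stand-in dense encoder; the production vectorizer is served through Sinequa.
encoder = DRES(models.SentenceBERT("nreimers/MiniLM-L6-H384-uncased"), batch_size=32)
retriever = EvaluateRetrieval(encoder, score_function="cos_sim", k_values=[100])

results = retriever.retrieve(corpus, queries)
ndcg, _map, recall, precision = retriever.evaluate(qrels, results, retriever.k_values)
print(recall)  # e.g. {'Recall@100': ...}
```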