---
language:
- de
- en
- es
- fr
---
# Model Card for `answer-finder-v1-L-multilingual`
This model is a question answering model developed by Sinequa. It produces two lists of logit scores corresponding to the start token and end token of an answer.
Model name: `answer-finder-v1-L-multilingual`
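To illustrate what "two lists of logit scores" means in practice, here is a minimal, self-contained sketch of how an answer span can be selected from per-token start and end logits. This is an illustrative example only, not Sinequa's actual decoding code; the function name and the `max_answer_len` parameter are assumptions.

```python
def best_span(start_logits, end_logits, max_answer_len=30):
    """Return (start, end, score) for the highest-scoring valid span,
    where start <= end and the span is at most max_answer_len tokens.
    The score of a span is the sum of its start and end logits."""
    best = (0, 0, float("-inf"))
    for s, s_logit in enumerate(start_logits):
        for e in range(s, min(s + max_answer_len, len(end_logits))):
            score = s_logit + end_logits[e]
            if score > best[2]:
                best = (s, e, score)
    return best

# Toy logits: the strongest start is token 2, the strongest end is token 3.
start, end, score = best_span([0.1, 0.2, 5.0, 0.3], [0.1, 0.2, 0.3, 1.0])
# start == 2, end == 3, score == 6.0
```

The start/end constraint matters: taking the argmax of each list independently can yield an end position before the start position, whereas scoring only valid pairs cannot.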
## Supported Languages
The model was trained and tested in the following languages:
- English
- French
- German
- Spanish
## Scores
| Metric | Value |
|:--------------------------------------------------------------|-------:|
| F1 Score on SQuAD v2 EN with Hugging Face evaluation pipeline | 75 |
| F1 Score on SQuAD v2 EN with Haystack evaluation pipeline | 75 |
| F1 Score on SQuAD v2 FR with Haystack evaluation pipeline | 73.4 |
| F1 Score on SQuAD v2 DE with Haystack evaluation pipeline | 90.8 |
| F1 Score on SQuAD v2 ES with Haystack evaluation pipeline | 67.1 |
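For readers unfamiliar with the metric, the F1 scores above are SQuAD-style token-overlap F1 between the predicted and gold answer strings. A simplified sketch of the computation is shown below; the official SQuAD evaluation script additionally normalizes punctuation and articles, which this illustration omits.

```python
from collections import Counter

def token_f1(prediction, ground_truth):
    """Simplified SQuAD-style token-overlap F1 between two answer strings."""
    pred_tokens = prediction.lower().split()
    gold_tokens = ground_truth.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# Two of three predicted tokens overlap with the two gold tokens:
# precision = 2/3, recall = 1, F1 = 0.8
print(token_f1("the cat sat", "the cat"))
```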
## Inference Time
| GPU | Quantization type | Batch size 1 | Batch size 32 |
|:------------------------------------------|:------------------|---------------:|---------------:|
| NVIDIA A10 | FP16 | 2 ms | 30 ms |
| NVIDIA A10 | FP32 | 4 ms | 83 ms |
| NVIDIA T4 | FP16 | 3 ms | 65 ms |
| NVIDIA T4 | FP32 | 14 ms | 373 ms |
| NVIDIA L4 | FP16 | 2 ms | 38 ms |
| NVIDIA L4 | FP32 | 5 ms | 124 ms |
**Note that the Answer Finder models are only used at query time.**
## GPU Memory Usage
| Quantization type | Memory |
|:-------------------------------------------------|-----------:|
| FP16 | 550 MiB |
| FP32 | 1050 MiB |
Note that the GPU memory usage above reflects only what the model itself consumes on an NVIDIA T4 GPU with a batch size of 32. It does not include the fixed amount of memory that the ONNX Runtime consumes upon initialization, which can be around 0.5 to 1 GiB depending on the GPU used.
## Requirements
- Minimal Sinequa version: 11.10.0
- Minimal Sinequa version for using FP16 models and GPUs with CUDA compute capability of 8.9+ (like NVIDIA L4): 11.11.0
- [CUDA compute capability](https://developer.nvidia.com/cuda-gpus): above 5.0 (above 6.0 for FP16 use)
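The compute-capability requirement above can be expressed as a simple check. This is an illustrative helper, not part of Sinequa's product; the function name is an assumption. With PyTorch installed, the actual capability of the current GPU can be read via `torch.cuda.get_device_capability()` (e.g. `(7, 5)` on an NVIDIA T4).

```python
def meets_requirement(capability, fp16=False):
    """Return True if a GPU's CUDA compute capability (major, minor)
    satisfies the requirement above: above 5.0, or above 6.0 for FP16."""
    needed = (6, 0) if fp16 else (5, 0)
    return capability > needed  # tuple comparison: (7, 5) > (6, 0)

# An NVIDIA T4 (compute capability 7.5) supports both FP32 and FP16 use.
print(meets_requirement((7, 5), fp16=True))
```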
## Model Details
### Overview
- Number of parameters: 110 million
- Base language model: [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased)
pre-trained by Sinequa in English, French, German and Spanish
- Insensitive to casing and accents
### Training Data
- [SQuAD v2](https://rajpurkar.github.io/SQuAD-explorer/)
- [French-SQuAD](https://github.com/Alikabbadj/French-SQuAD) + French translation of SQuAD v2 "impossible" query-passage pairs
- [GermanQuAD](https://www.deepset.ai/germanquad) + German translation of SQuAD v2 "impossible" query-passage pairs
- [SQuAD-es-v2](https://github.com/ccasimiro88/TranslateAlignRetrieve)