Cyrile Delestre committed 99e23eb (1 parent: a1383a1)

Update README.md

Files changed (1): README.md (+1 −1)
README.md CHANGED

```diff
@@ -12,7 +12,7 @@ widget:
 DistilCamemBERT-QA
 ==================
 
-We present DistilCamemBERT-QA which is [DistilCamemBERT](https://huggingface.co/cmarkea/distilcamembert-base) fine-tuned for the Quention-Answering task for the french language. This model is constructed over two datasets Fquad v1.0 and Piaf which are composed of contexts and questions with their answers inside the context.
+We present DistilCamemBERT-QA which is [DistilCamemBERT](https://huggingface.co/cmarkea/distilcamembert-base) fine-tuned for the Quention-Answering task for the french language. This model is constructed over two datasets FQuAD v1.0 and Piaf which are composed of contexts and questions with their answers inside the context.
 
 This modelization is close to [etalab-ia/camembert-base-squadFR-fquad-piaf](https://huggingface.co/etalab-ia/camembert-base-squadFR-fquad-piaf) based on [CamemBERT](https://huggingface.co/camembert-base) model. The problem of the modelizations based on CamemBERT is at the scaling moment, for the production phase for example. Indeed, inference cost can be a technological issue especially as in a context of cross-encoding like for this task. To counteract this effect, we propose this modelization which divides the inference time by 2 with the same consumption power thanks to DistilCamemBERT.
```