This is a fine-tuned XLM-RoBERTa model for natural language inference (NLI). It has been trained on a large amount of data following the ANLI training pipeline. We include data from:

  • mnli {train, dev and test}
  • snli {train, dev and test}
  • xnli {train, dev and test}
  • fever {train, dev and test}
  • anli {train}

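For reference, below is a minimal inference sketch using the transformers library. The model identifier is a placeholder for this repository's actual id, and the premise/hypothesis pair is only illustrative.

```python
# Minimal inference sketch (assumes PyTorch and transformers are installed).
# "your-org/xlm-roberta-anli" is a placeholder; use this model's actual repo id.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "your-org/xlm-roberta-anli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "A man is playing a guitar on stage."
hypothesis = "Someone is performing music."

# NLI models score a (premise, hypothesis) sentence pair.
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```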
The model is validated on ANLI, covering rounds R1, R2, and R3. The following accuracies can be expected on the test splits.

| Split | Accuracy |
|-------|----------|
| R1    | 0.6610   |
| R2    | 0.4990   |
| R3    | 0.4425   |
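The numbers above can be sanity-checked with a small evaluation loop. The sketch below assumes the `datasets` library, reuses a model and tokenizer loaded as in the snippet above, and remaps the Hugging Face `anli` dataset's label convention (0 = entailment, 1 = neutral, 2 = contradiction) to this model's label ids (listed below).

```python
# Evaluation sketch for one ANLI test round; batching and device handling omitted for brevity.
import torch
from datasets import load_dataset

def accuracy_on_anli(model, tokenizer, split="test_r1"):
    data = load_dataset("anli", split=split)
    # The anli dataset uses 0 = entailment, 1 = neutral, 2 = contradiction,
    # which differs from this model's label2id, so remap before comparing.
    anli_to_model = {0: 1, 1: 2, 2: 0}
    correct = 0
    for example in data:
        inputs = tokenizer(example["premise"], example["hypothesis"],
                           return_tensors="pt", truncation=True)
        with torch.no_grad():
            pred = model(**inputs).logits.argmax(dim=-1).item()
        correct += int(pred == anli_to_model[example["label"]])
    return correct / len(data)

# Example: accuracy_on_anli(model, tokenizer, "test_r1") should land near 0.66.
```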

Label mapping

label2id = {
    "contradiction": 0,
    "entailment": 1,
    "neutral": 2,
}
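When working with raw logits, the inverse of this mapping is handy for turning a predicted class index back into a label string; a minimal sketch with illustrative logit values:

```python
# Build id2label by inverting label2id, then read off the argmax of a logits vector.
label2id = {"contradiction": 0, "entailment": 1, "neutral": 2}
id2label = {v: k for k, v in label2id.items()}

logits = [0.1, 2.3, -0.4]  # illustrative values only
predicted_id = max(range(len(logits)), key=lambda i: logits[i])
print(id2label[predicted_id])  # -> "entailment"
```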