---
language:
  - en
tags:
  - text-classification
widget:
  - text: I almost forgot to eat lunch.</s></s>I didn't forget to eat lunch.
  - text: I almost forgot to eat lunch.</s></s>I forgot to eat lunch.
  - text: I ate lunch.</s></s>I almost forgot to eat lunch.
datasets:
  - alisawuffles/WANLI
---

This model is roberta-large finetuned on WANLI, the Worker-AI Collaboration NLI dataset (Liu et al., 2022). It outperforms the roberta-large-mnli model on seven out-of-domain test sets, including by 11% on HANS and 9% on Adversarial NLI.

## How to use

```python
import torch
from transformers import RobertaTokenizer, RobertaForSequenceClassification

model = RobertaForSequenceClassification.from_pretrained('alisawuffles/roberta-large-wanli')
tokenizer = RobertaTokenizer.from_pretrained('alisawuffles/roberta-large-wanli')

# Encode the premise and hypothesis as a single sentence pair
x = tokenizer("I almost forgot to eat lunch.", "I didn't forget to eat lunch.", return_tensors='pt', max_length=128, truncation=True)
logits = model(**x).logits
probs = logits.softmax(dim=1).squeeze(0)

# Map the highest-probability class id to its label name
label_id = torch.argmax(probs).item()
prediction = model.config.id2label[label_id]
```
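
To score several premise-hypothesis pairs at once, the tokenizer also accepts parallel lists of premises and hypotheses. A minimal sketch, reusing the `model` and `tokenizer` objects from above (the example sentences here are just the widget pairs from this card's metadata):

```python
import torch

premises = ["I almost forgot to eat lunch.", "I ate lunch."]
hypotheses = ["I forgot to eat lunch.", "I almost forgot to eat lunch."]

# Tokenize all pairs into one padded batch
batch = tokenizer(premises, hypotheses, return_tensors='pt', padding=True, truncation=True, max_length=128)
with torch.no_grad():
    logits = model(**batch).logits

# One predicted label per pair, read from the model's own id2label mapping
for pair_probs in logits.softmax(dim=-1):
    print(model.config.id2label[pair_probs.argmax().item()])
```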

## Citation

```bibtex
@misc{liu-etal-2022-wanli,
    title = "WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation",
    author = "Liu, Alisa and
      Swayamdipta, Swabha and
      Smith, Noah A. and
      Choi, Yejin",
    month = jan,
    year = "2022",
    url = "https://arxiv.org/pdf/2201.05955",
}
```