---
license: apache-2.0
datasets:
- multi_nli
language:
- en
pipeline_tag: text-classification
---

# DeBERTa-v3 (large) fine-tuned on Multi-NLI (MNLI)

This model is for textual entailment (a.k.a. NLI), i.e., predicting whether `textA` is supported by `textB`. More specifically, it is a 2-way classification where the relationship between `textA` and `textB` (A -> B) can be **entail** or **contradict**.

- Input: (`textA`, `textB`)
- Output: prob(entail), prob(contradict)

Note that all 3 labels (entail, neutral, contradict) were used during training, but the neutral output head has been removed from this model.

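Removing the neutral head presumably amounts to keeping only the entail and contradict rows of the final classification layer. The sketch below is only an illustration of what that could look like, not the conversion actually used for this checkpoint; the checkpoint name is a hypothetical placeholder, and the assumed 3-way label order (0 = entailment, 1 = neutral, 2 = contradiction) should be checked against `config.id2label` before slicing.

```python
import torch
from transformers import AutoModelForSequenceClassification

# Hypothetical 3-way MNLI checkpoint (placeholder name, not a real repo id)
three_way = AutoModelForSequenceClassification.from_pretrained("path/to/deberta-v3-large-mnli-3way")

# Assumed label order: 0 = entailment, 1 = neutral, 2 = contradiction.
# Keep entail and contradict, drop the neutral row (index 1).
keep = [0, 2]
with torch.no_grad():
    three_way.classifier.weight = torch.nn.Parameter(three_way.classifier.weight[keep].clone())
    three_way.classifier.bias = torch.nn.Parameter(three_way.classifier.bias[keep].clone())

# Update the config so downstream code sees a 2-way classifier
three_way.config.num_labels = 2
three_way.config.id2label = {0: "entailment", 1: "contradiction"}
three_way.config.label2id = {"entailment": 0, "contradiction": 1}
```
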
## Model Details

- Base model: [deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large)
- Training data: [MNLI](https://huggingface.co/datasets/multi_nli)
- Training details: num_epochs = 3, batch_size = 16, `textA=hypothesis`, `textB=premise` (see the sketch after this list)

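As a rough illustration of these settings, a fine-tuning run might be set up as in the sketch below. This is not the author's training script: the optimizer, learning rate, output path, and other defaults are assumptions; only the epoch count, batch size, and the hypothesis/premise pairing come from the details above.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-large")
model = AutoModelForSequenceClassification.from_pretrained("microsoft/deberta-v3-large", num_labels=3)

mnli = load_dataset("multi_nli")

def encode(example):
    # textA = hypothesis, textB = premise, matching the pairing stated above
    return tokenizer(example["hypothesis"], example["premise"], truncation=True)

encoded = mnli.map(encode)

args = TrainingArguments(
    output_dir="deberta-v3-large-mnli",  # hypothetical output path
    num_train_epochs=3,                  # num_epochs = 3
    per_device_train_batch_size=16,      # batch_size = 16
)
trainer = Trainer(model=model, args=args, train_dataset=encoded["train"], tokenizer=tokenizer)
# trainer.train()
```
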
## Example

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("potsawee/deberta-v3-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("potsawee/deberta-v3-large-mnli")

textA = "Kyle Walker has a personal issue"
textB = "Kyle Walker will remain Manchester City captain following reports about his private life, says boss Pep Guardiola."

inputs = tokenizer.batch_encode_plus(
    batch_text_or_text_pairs=[(textA, textB)],
    add_special_tokens=True, return_tensors="pt",
)
logits = model(**inputs).logits  # the neutral head has already been removed
probs = torch.softmax(logits, dim=-1)[0]
# probs = [0.7080, 0.2920], i.e., prob(entail) = 0.708, prob(contradict) = 0.292
```

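Several (`textA`, `textB`) pairs can also be scored in one call. The snippet below is a small extension of the example above (reusing the same `tokenizer` and `model`); the sample sentences and the `padding=True` flag are illustrative additions, not part of the original example.

```python
import torch

pairs = [
    ("The cat is on the mat", "A cat sits on a mat"),   # made-up pair
    ("The cat is on the mat", "The mat is empty"),      # made-up pair
]
batch = tokenizer.batch_encode_plus(
    batch_text_or_text_pairs=pairs,
    add_special_tokens=True, padding=True, return_tensors="pt",
)
with torch.no_grad():
    probs = torch.softmax(model(**batch).logits, dim=-1)
# probs[i, 0] = prob(entail), probs[i, 1] = prob(contradict) for the i-th pair
```
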
## Citation

```bibtex
@article{manakul2023selfcheckgpt,
  title={Selfcheckgpt: Zero-resource black-box hallucination detection for generative large language models},
  author={Manakul, Potsawee and Liusie, Adian and Gales, Mark JF},
  journal={arXiv preprint arXiv:2303.08896},
  year={2023}
}
```