---
language:
- en
tags:
- text2text-generation
- mednli
datasets:
- pubmed
- pmc/open_access
widget:
- text: "mednli: sentence1: In the ED, initial VS revealed T 98.9, HR 73, BP 121/90, RR 15, O2 sat 98% on RA. sentence2: The patient is hemodynamically stable"
---

# SciFive Pubmed+PMC Large on MedNLI

## Introduction

The finetuned SciFive Pubmed+PMC Large model achieves state-of-the-art results on [MedNLI (Medical Natural Language Inference)](https://paperswithcode.com/sota/natural-language-inference-on-mednli).
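
MedNLI pairs a clinical premise with a hypothesis and asks for a three-way judgment (entailment, neutral, contradiction). In SciFive's text-to-text formulation, the pair is serialized into a single prompt and the label is generated as text. A minimal sketch of that mapping (the prompt template matches this card's widget; the exact label strings the model emits are assumed to follow the standard MedNLI label set):

```python
# Hypothetical helper: serialize a MedNLI example into the text-to-text
# prompt format shown in this card's widget.
def to_mednli_prompt(premise: str, hypothesis: str) -> str:
    return f"mednli: sentence1: {premise} sentence2: {hypothesis}"

# Standard MedNLI label set; the model is expected to generate one of these.
MEDNLI_LABELS = ("entailment", "neutral", "contradiction")
```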

Paper: [SciFive: a text-to-text transformer model for biomedical literature](https://arxiv.org/abs/2106.03598)

Authors: _Long N. Phan, James T. Anibal, Hieu Tran, Shaurya Chanana, Erol Bahadroglu, Alec Peltekian, Grégoire Altan-Bonnet_

## How to use
For more details, check out [our GitHub repo](https://github.com/justinphan3110/SciFive).
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("razent/SciFive-large-Pubmed_PMC-MedNLI")
model = AutoModelForSeq2SeqLM.from_pretrained("razent/SciFive-large-Pubmed_PMC-MedNLI")

# Fall back to CPU if no GPU is available.
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

sent_1 = "In the ED, initial VS revealed T 98.9, HR 73, BP 121/90, RR 15, O2 sat 98% on RA."
sent_2 = "The patient is hemodynamically stable"

# Serialize the sentence pair into the model's text-to-text prompt format.
text = f"mednli: sentence1: {sent_1} sentence2: {sent_2}"

encoding = tokenizer(text, padding="max_length", truncation=True, max_length=256, return_tensors="pt")
input_ids = encoding["input_ids"].to(device)
attention_mask = encoding["attention_mask"].to(device)

# The label is generated as text, so only a few output tokens are needed.
outputs = model.generate(
    input_ids=input_ids,
    attention_mask=attention_mask,
    max_length=8,
    early_stopping=True,
)

for output in outputs:
    line = tokenizer.decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True)
    print(line)
```
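
The snippet above classifies a single pair; for several pairs it is usually cheaper to pad and generate in one batch. A minimal batched sketch under the same assumptions, reusing `tokenizer` and `model` from above (the example pairs and expected label strings are illustrative, not from the original card):

```python
# Hypothetical clinical premise/hypothesis pairs for illustration only.
pairs = [
    ("The patient denies chest pain.", "The patient has chest pain."),
    ("Blood cultures grew E. coli.", "The patient has a bacterial infection."),
]
texts = [f"mednli: sentence1: {p} sentence2: {h}" for p, h in pairs]

# Pad to the longest prompt in the batch and move tensors to the model's device.
batch = tokenizer(texts, padding=True, truncation=True, max_length=256,
                  return_tensors="pt").to(model.device)
preds = model.generate(**batch, max_length=8)
print(tokenizer.batch_decode(preds, skip_special_tokens=True))
# Expected generations are MedNLI label strings, e.g. "contradiction".
```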