## Intended use
The model is finetuned for negation detection on Dutch clinical text. Since it is a domain-specific model trained on medical data, it is meant to be used for medical NLP tasks in Dutch. This particular model was trained on windows of at most 32 tokens surrounding the concept to be negated.
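The exact windowing procedure used during training is not described in this card. As an illustration only, the sketch below shows one possible way to cut such a window around a concept before passing it to the model; the `build_window` helper and the word-level windowing strategy are assumptions, not part of the released code.

```python
# Illustration only: one possible way to cut a window of at most 32 (word) tokens
# around a concept before passing it to the model. The helper name and the
# word-level windowing strategy are assumptions, not part of this model card.
def build_window(words, concept_start, concept_end, max_tokens=32):
    """Return a window of (when possible) at most `max_tokens` words,
    centred on the concept spanning words[concept_start:concept_end]."""
    concept_len = concept_end - concept_start
    budget = max(max_tokens - concept_len, 0)
    left = max(concept_start - budget // 2, 0)
    right = min(concept_end + (budget - (concept_start - left)), len(words))
    return words[left:right]

words = "De patient was niet aanspreekbaar en hij zag er grauw uit .".split()
window = build_window(words, concept_start=4, concept_end=5)  # concept: "aanspreekbaar"
print(" ".join(window))
```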
 
## Minimal example

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("UMCU/MedRoBERTa.nl_NegationDetection")
model = AutoModelForTokenClassification.from_pretrained("UMCU/MedRoBERTa.nl_NegationDetection")

some_text = "De patient was niet aanspreekbaar en hij zag er grauw uit. \
Hij heeft de inspanningstest echter goed doorstaan."
inputs = tokenizer(some_text, return_tensors='pt')
output = model(**inputs)
probas = torch.nn.functional.softmax(output.logits[0], dim=-1).detach().numpy()

# map the token-level probabilities back to the input tokens
input_tokens = tokenizer.convert_ids_to_tokens(inputs['input_ids'][0])
# class index -> IOB label
target_map = {0: 'B-Negated', 1: 'B-NotNegated', 2: 'I-Negated', 3: 'I-NotNegated'}
results = [{'token': input_tokens[idx],
            'proba_negated': proba_arr[0] + proba_arr[2],
            'proba_not_negated': proba_arr[1] + proba_arr[3]}
           for idx, proba_arr in enumerate(probas)]
```

Note that we assume the [Inside-Outside-Beginning](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)) (IOB) tagging format.
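As an illustration of the IOB scheme, the token-level probabilities from the minimal example above can be decoded into labels and grouped into negated spans. The argmax decoding and the span-grouping logic below are a sketch, not part of the model card:

```python
# Sketch: decode the argmax class per token into an IOB label (using target_map,
# input_tokens and probas from the minimal example above) and collect the
# tokens predicted as negated. The grouping logic is an illustration only.
predicted_labels = [target_map[int(proba_arr.argmax())] for proba_arr in probas]

negated_spans, current = [], []
for token, label in zip(input_tokens, predicted_labels):
    if label == 'B-Negated':                # a new negated span starts here
        if current:
            negated_spans.append(current)
        current = [token]
    elif label == 'I-Negated' and current:  # the current negated span continues
        current.append(token)
    else:                                   # token not negated: close any open span
        if current:
            negated_spans.append(current)
        current = []
if current:
    negated_spans.append(current)

print(negated_spans)  # lists of (sub-)tokens flagged as negated
```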
## Data
The pre-trained model was trained on the Dutch section of OSCAR (about 39 GB), and is described here: http://dx.doi.org/10.18653/v1/2020.findings-emnlp.292.