---
language: nl
license: mit
pipeline_tag: text-classification
inference: false
---

# Regression Model for Exercise Tolerance Functioning Levels (ICF b455)

## Description
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing exercise tolerance functions. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model trained from scratch on clinical notes from the Amsterdam UMC. To detect sentences about exercise tolerance functions in Dutch clinical text, first use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model.
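
A hedged sketch of that two-stage setup is below; the `MultiLabelClassificationModel` class and the position of the exercise-tolerance (INS) label in the prediction vector are assumptions here, so verify them against the icf-domains model card before use:
```
from simpletransformers.classification import MultiLabelClassificationModel

# Assumption: icf-domains is a Simple Transformers multi-label model that
# outputs one 0/1 prediction per ICF domain.
domain_model = MultiLabelClassificationModel('roberta', 'CLTL/icf-domains', use_cuda=False)
INS_INDEX = 0  # hypothetical index of the exercise-tolerance (INS) domain

sentences = ['kan nog goed traplopen, maar flink ingeleverd aan conditie na Corona']
predictions, _ = domain_model.predict(sentences)

# Keep only the sentences flagged as describing exercise tolerance; these can
# then be passed to this regression model (see "How to use" below).
ins_sentences = [s for s, p in zip(sentences, predictions) if p[INS_INDEX] == 1]
```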

## Functioning levels
Level | Meaning
---|---
5 | MET>6. Can tolerate jogging, hard exercises, running, climbing stairs fast, sports.
4 | 4≤MET≤6. Can tolerate walking / cycling at a brisk pace, considerable effort (e.g. cycling at 16 km/h or faster), heavy housework.
3 | 3≤MET<4. Can tolerate walking / cycling at a normal pace, gardening, exercises without equipment.
2 | 2≤MET<3. Can tolerate walking at a slow to moderate pace, grocery shopping, light housework.
1 | 1≤MET<2. Can tolerate sitting activities.
0 | 0≤MET<1. Can physically tolerate only recumbent activities.

The predictions generated by the model can sometimes fall outside the 0–5 scale (e.g. 5.2); this is normal for a regression model.
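
If downstream use requires scores within the scale, out-of-scale predictions can be clipped after inference; a minimal sketch (this post-processing step is not part of the model itself):
```
import numpy as np

# Clip out-of-scale regression outputs to the 0-5 functioning-level range.
predictions = np.clip(predictions, 0, 5)
```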

## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers, but the model cannot be used directly with the Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the inference API on this page is disabled.

## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```
import numpy as np
from simpletransformers.classification import ClassificationModel

model = ClassificationModel(
    'roberta',
    'CLTL/icf-levels-ins',
    use_cuda=False,
)

# 'can still climb stairs well, but fitness considerably reduced after Corona'
example = 'kan nog goed traplopen, maar flink ingeleverd aan conditie na Corona'
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```
The prediction on the example is:
```
3.13
```
The raw outputs look like this:
```
[[3.1300993]]
```
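
`predict` also accepts multiple sentences in one call; a small usage sketch (the second sentence is a hypothetical example):
```
sentences = [
    'kan nog goed traplopen, maar flink ingeleverd aan conditie na Corona',
    # hypothetical example: 'mobilizes with difficulty, mostly bedridden'
    'mobiliseert moeizaam, is vooral bedlegerig',
]
_, raw_outputs = model.predict(sentences)
predictions = np.squeeze(raw_outputs)  # one functioning level per sentence
```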

## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).

## Training procedure
The default training parameters of Simple Transformers were used, including the following (a sketch of an equivalent training call follows the list):
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
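
A minimal sketch of how such a regression fine-tuning run can be set up with Simple Transformers; the base-model path and the training DataFrame are placeholders, not the project's actual data:
```
import pandas as pd
from simpletransformers.classification import ClassificationModel

# Placeholder data: the real training set (annotated Amsterdam UMC notes)
# is not publicly available.
train_df = pd.DataFrame({
    'text': ['kan nog goed traplopen, maar flink ingeleverd aan conditie na Corona'],
    'labels': [3.0],  # functioning level as a float target
})

model = ClassificationModel(
    'roberta',
    'path/to/dutch-medical-roberta',  # placeholder for the pre-trained base model
    num_labels=1,
    args={
        'regression': True,       # regression head: num_labels=1 + regression
        'learning_rate': 4e-5,
        'num_train_epochs': 1,
        'train_batch_size': 8,
    },
    use_cuda=False,
)
model.train_model(train_df)
```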

## Evaluation results
The evaluation is done on the sentence level (the classification unit) and on the note level (the aggregated unit, which is meaningful for healthcare professionals).

| | Sentence-level | Note-level |
|---|---|---|
| mean absolute error | 0.69 | 0.61 |
| mean squared error | 0.80 | 0.64 |
| root mean squared error | 0.89 | 0.80 |
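
How sentence scores were aggregated to the note level is not specified on this card; the sketch below assumes a simple mean per note, purely for illustration:
```
import numpy as np

# Hypothetical sentence-level predictions grouped per note; aggregating by
# mean is an assumption and may differ from the project's actual method.
note_sentence_preds = {'note_1': [3.1, 2.8], 'note_2': [4.2]}
note_preds = np.array([np.mean(v) for v in note_sentence_preds.values()])

gold = np.array([3.0, 4.0])  # hypothetical gold note-level labels
mae = np.mean(np.abs(note_preds - gold))
mse = np.mean((note_preds - gold) ** 2)
rmse = np.sqrt(mse)
```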

## Authors and references
### Authors
Jenia Kim, Piek Vossen

### References
TBD