---
language: nl
license: mit
pipeline_tag: text-classification
inference: false
---

# Regression Model for Weight Maintenance Functioning Levels (ICF b530)

## Description
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing weight maintenance functions. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model trained from scratch on clinical notes from the Amsterdam UMC. To detect sentences about weight maintenance functions in Dutch clinical text, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model; a sketch of this two-step setup is shown below.
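
A minimal sketch of chaining the two models, assuming the domain model loads as a Simple Transformers `MultiLabelClassificationModel` and that the weight-maintenance (MBW) domain sits at a known index in its label vector; the index used below is hypothetical and should be checked against the icf-domains model card:
```
import numpy as np
from simpletransformers.classification import (
    ClassificationModel,
    MultiLabelClassificationModel,
)

# Assumption: icf-domains is a Simple Transformers multi-label model.
domain_model = MultiLabelClassificationModel('roberta', 'CLTL/icf-domains', use_cuda=False)
level_model = ClassificationModel('roberta', 'CLTL/icf-levels-mbw', use_cuda=False)

sentences = [
    'Tijdens opname >10 kg afgevallen.',  # "Lost >10 kg during admission."
    'Patient slaapt slecht.',             # "Patient sleeps poorly." (not MBW)
]

MBW_INDEX = 7  # hypothetical position of the MBW domain in the label vector

# Keep only the sentences flagged for the MBW domain.
domain_preds, _ = domain_model.predict(sentences)
mbw_sentences = [s for s, p in zip(sentences, domain_preds) if p[MBW_INDEX] == 1]

# Assign a functioning level to each MBW sentence.
if mbw_sentences:
    _, raw_outputs = level_model.predict(mbw_sentences)
    levels = np.squeeze(raw_outputs, axis=-1)
    for sentence, level in zip(mbw_sentences, levels):
        print(f'{level:.2f}  {sentence}')
```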

## Functioning levels
Level | Meaning
---|---
4 | Healthy weight, no unintentional weight loss or gain, SNAQ 0 or 1.
3 | Some unintentional weight loss or gain, or lost a lot of weight but gained some of it back afterwards.
2 | Moderate unintentional weight loss or gain (more than 3 kg in the last month), SNAQ 2.
1 | Severe unintentional weight loss or gain (more than 6 kg in the last 6 months), SNAQ ≥ 3.
0 | Severe unintentional weight loss or gain (more than 6 kg in the last 6 months) and admitted to ICU.

The predictions generated by the model can fall outside the scale (e.g. 4.2); this is normal for a regression model.
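
If a downstream application requires values on the 0 to 4 scale, the raw prediction can be clipped and, where a discrete level is needed, rounded; this post-processing is a suggestion, not part of the model:
```
import numpy as np

raw_prediction = 4.2  # example of an out-of-scale output

clipped = float(np.clip(raw_prediction, 0, 4))  # 4.0
level = int(round(clipped))                     # 4, as a discrete functioning level
```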

## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers, but the model cannot be used directly with the Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the inference API on this page is disabled.

## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```
import numpy as np
from simpletransformers.classification import ClassificationModel

model = ClassificationModel(
    'roberta',
    'CLTL/icf-levels-mbw',
    use_cuda=False,
)

example = 'Tijdens opname >10 kg afgevallen.'  # "Lost >10 kg during admission."
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```
The prediction on the example is:
```
1.95
```
The raw outputs look like this:
```
[[1.95429301]]
```

## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).

## Training procedure
The default training parameters of Simple Transformers were used, including:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
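
For reference, a fine-tuning setup with these parameters might look as follows; this is a sketch, not the project's actual training script. The path to the pre-trained medical RoBERTa is a placeholder, and `train_df` stands for a pandas DataFrame with a `text` column and a float `labels` column:
```
from simpletransformers.classification import ClassificationModel, ClassificationArgs

model_args = ClassificationArgs(
    regression=True,      # regression head with a single output
    num_train_epochs=1,
    learning_rate=4e-5,
    train_batch_size=8,   # Simple Transformers uses AdamW by default
)

model = ClassificationModel(
    'roberta',
    'path/to/medroberta',  # placeholder: the pre-trained Dutch medical RoBERTa
    num_labels=1,
    args=model_args,
    use_cuda=False,
)

model.train_model(train_df)  # train_df: columns 'text' and (float) 'labels'
```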

## Evaluation results
The evaluation is done at the sentence level (the classification unit) and at the note level (the aggregated unit, which is meaningful for healthcare professionals).

| Metric | Sentence-level | Note-level |
|---|---|---|
| Mean absolute error | 0.81 | 0.60 |
| Mean squared error | 0.83 | 0.56 |
| Root mean squared error | 0.91 | 0.75 |
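
To compute such scores on one's own labeled data, the metrics can be derived with scikit-learn; the numbers below are hypothetical, and how sentence predictions were aggregated to the note level is not documented here:
```
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

y_true = np.array([4.0, 3.0, 1.0])  # hypothetical gold functioning levels
y_pred = np.array([4.2, 2.1, 1.9])  # hypothetical model predictions

mae = mean_absolute_error(y_true, y_pred)
mse = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)
print(f'MAE={mae:.2f}  MSE={mse:.2f}  RMSE={rmse:.2f}')
```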

## Authors and references
### Authors
Jenia Kim, Piek Vossen

### References
TBD