---
library_name: transformers
base_model: moritzbur/lilt-GottBERT-base
tags:
- generated_from_trainer
datasets:
- xfund
model-index:
- name: lilt-GottBERT-base-xfund-de
  results: []
---
# lilt-GottBERT-base-xfund-de
This model is a fine-tuned version of [moritzbur/lilt-GottBERT-base](https://huggingface.co/moritzbur/lilt-GottBERT-base) on the xfund dataset. It achieves the following results on the evaluation set:
- Loss: 1.7402
- Answer: precision 0.7932, recall 0.8590, F1 0.8248 (support: 1085)
- Header: precision 0.5581, recall 0.4138, F1 0.4752 (support: 58)
- Question: precision 0.7878, recall 0.7466, F1 0.7666 (support: 726)
- Overall Precision: 0.7859
- Overall Recall: 0.8015
- Overall F1: 0.7936
- Overall Accuracy: 0.7255
## Model description
More information needed
## Intended uses & limitations
More information needed
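
As a starting point, here is a minimal inference sketch for this LiLT token classifier. It assumes the checkpoint is published on the Hub under the repo id `moritzbur/lilt-GottBERT-base-xfund-de` and ships a fast tokenizer; the example words and bounding boxes (normalized to LiLT's 0–1000 page coordinate space) are made up and would normally come from an OCR step:

```python
# Minimal inference sketch (assumed repo id; words/boxes are illustrative).
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "moritzbur/lilt-GottBERT-base-xfund-de"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# OCR output: words plus layout boxes normalized to 0-1000.
words = ["Name:", "Max", "Mustermann"]
boxes = [[48, 52, 120, 70], [130, 52, 165, 70], [170, 52, 270, 70]]

encoding = tokenizer(words, is_split_into_words=True, return_tensors="pt")
# LiLT needs one box per sub-word token; special tokens get a zero box.
bbox = [[0, 0, 0, 0] if i is None else boxes[i] for i in encoding.word_ids()]
encoding["bbox"] = torch.tensor([bbox])

with torch.no_grad():
    logits = model(**encoding).logits
labels = [model.config.id2label[p] for p in logits.argmax(-1)[0].tolist()]
tokens = tokenizer.convert_ids_to_tokens(encoding["input_ids"][0].tolist())
print(list(zip(tokens, labels)))
```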
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after the list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 2000
- mixed_precision_training: Native AMP
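
These settings map onto 🤗 Transformers `TrainingArguments` roughly as follows. This is a sketch, not the author's exact script: `output_dir` is hypothetical, and the Adam betas/epsilon listed above are the library defaults.

```python
from transformers import TrainingArguments

# Sketch reproducing the hyperparameters above; output_dir is hypothetical.
training_args = TrainingArguments(
    output_dir="lilt-GottBERT-base-xfund-de",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=2000,  # training_steps: 2000
    fp16=True,       # "Native AMP" mixed-precision training
    # adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8 are the defaults
)
```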
### Training results
Per-label cells show precision / recall / F1; label support is constant across evaluations (Answer: 1085, Header: 58, Question: 726).

| Training Loss | Epoch | Step | Validation Loss | Answer (P / R / F1) | Header (P / R / F1) | Question (P / R / F1) | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:--------------|:------|:-----|:----------------|:--------------------------|:--------------------------|:--------------------------|:------------------|:---------------|:-----------|:-----------------|
| 0.0373 | 20.0 | 200 | 1.8211 | 0.7351 / 0.8387 / 0.7835 | 0.5135 / 0.3276 / 0.4000 | 0.7130 / 0.7700 / 0.7404 | 0.7227 | 0.7961 | 0.7576 | 0.7076 |
| 0.0345 | 40.0 | 400 | 2.1454 | 0.7413 / 0.8608 / 0.7966 | 0.4815 / 0.4483 / 0.4643 | 0.6555 / 0.8072 / 0.7235 | 0.7002 | 0.8272 | 0.7584 | 0.6866 |
| 0.0114 | 60.0 | 600 | 2.0185 | 0.8493 / 0.7530 / 0.7982 | 0.7857 / 0.3793 / 0.5116 | 0.7317 / 0.7851 / 0.7575 | 0.7965 | 0.7539 | 0.7746 | 0.7294 |
| 0.0043 | 80.0 | 800 | 1.7402 | 0.7932 / 0.8590 / 0.8248 | 0.5581 / 0.4138 / 0.4752 | 0.7878 / 0.7466 / 0.7666 | 0.7859 | 0.8015 | 0.7936 | 0.7255 |
| 0.0013 | 100.0 | 1000 | 1.8975 | 0.8073 / 0.8184 / 0.8128 | 0.5000 / 0.4138 / 0.4528 | 0.7246 / 0.8154 / 0.7673 | 0.7654 | 0.8047 | 0.7846 | 0.7248 |
| 0.0009 | 120.0 | 1200 | 1.8875 | 0.8050 / 0.8258 / 0.8153 | 0.6667 / 0.3793 / 0.4835 | 0.7094 / 0.8003 / 0.7521 | 0.7628 | 0.8020 | 0.7820 | 0.7334 |
| 0.0003 | 140.0 | 1400 | 1.9918 | 0.8247 / 0.8323 / 0.8284 | 0.4717 / 0.4310 / 0.4505 | 0.7354 / 0.8003 / 0.7665 | 0.7786 | 0.8074 | 0.7928 | 0.7316 |
| 0.0003 | 160.0 | 1600 | 2.4537 | 0.7633 / 0.8737 / 0.8148 | 0.6857 / 0.4138 / 0.5161 | 0.7536 / 0.7879 / 0.7704 | 0.7583 | 0.8261 | 0.7908 | 0.6903 |
| 0.0004 | 180.0 | 1800 | 2.1619 | 0.7856 / 0.8544 / 0.8185 | 0.5641 / 0.3793 / 0.4536 | 0.7719 / 0.7782 / 0.7750 | 0.7760 | 0.8101 | 0.7927 | 0.7197 |
| 0.0003 | 200.0 | 2000 | 2.1507 | 0.7948 / 0.8461 / 0.8196 | 0.6316 / 0.4138 / 0.5000 | 0.7439 / 0.7920 / 0.7672 | 0.7716 | 0.8117 | 0.7911 | 0.7207 |
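
The per-label precision/recall/F1/support figures above are in the format produced by the seqeval library on IOB-tagged token predictions (entity-level scoring). A minimal sketch, assuming seqeval was the metric backend; the tag sequences below are illustrative, not taken from xfund:

```python
from seqeval.metrics import classification_report

# Entity-level evaluation over IOB tags, as for ANSWER/HEADER/QUESTION.
y_true = [["B-QUESTION", "I-QUESTION", "B-ANSWER", "O"]]
y_pred = [["B-QUESTION", "I-QUESTION", "B-ANSWER", "B-HEADER"]]
print(classification_report(y_true, y_pred, digits=4))
```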
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0