---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Dramatiſch war der Stoff vor Sophokles von Äſchylos behandelt worden in
den Θροῇσσαι , denen vielleicht in der Trilogie das Stüc>"OnJw» κοίσις vorherging
, das Stück Σαλαμίνιαι folgte .
---
# Fine-tuned Flair Model on AjMC German NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[AjMC German](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md)
NER Dataset using hmBERT as the backbone language model.
The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics,
and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/)
project.
The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`.
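A minimal inference sketch with the Flair library is shown below. The model identifier used here is a placeholder (this card does not state the exact repository name), so replace it with the actual Hub identifier of this model:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger from the Hugging Face Hub.
# NOTE: "stefan-it/hmbench-ajmc-de-hmbert" is a placeholder identifier,
# not necessarily this repository's name.
tagger = SequenceTagger.load("stefan-it/hmbench-ajmc-de-hmbert")

# Example sentence in historical German (OCR noise is expected, as in the widget above).
sentence = Sentence("Dramatiſch war der Stoff vor Sophokles von Äſchylos behandelt worden .")

# Predict NER tags and print the detected entity spans.
tagger.predict(sentence)
for entity in sentence.get_spans("ner"):
    print(entity)
```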
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set (per-run scores are shown as fractions, the averages in percent):
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr5e-05 | [0.8937][1] | [0.8849][2] | [0.8977][3] | [0.8867][4] | [0.886][5] | 88.98 ± 0.5 |
| bs8-e10-lr5e-05 | [0.8816][6] | [0.8952][7] | [0.8766][8] | [0.8934][9] | [0.8875][10] | 88.69 ± 0.7 |
| bs4-e10-lr3e-05 | [0.8738][11] | [0.879][12] | [0.8951][13] | [0.8889][14] | [0.8772][15] | 88.28 ± 0.79 |
| bs8-e10-lr3e-05 | [0.8743][16] | [0.8741][17] | [0.8722][18] | [0.8932][19] | [0.8809][20] | 87.89 ± 0.77 |
[1]: https://hf.co/hmbench/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/hmbench/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/hmbench/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/hmbench/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/hmbench/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/hmbench/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/hmbench/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/hmbench/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/hmbench/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/hmbench/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/hmbench/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/hmbench/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/hmbench/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/hmbench/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/hmbench/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/hmbench/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/hmbench/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/hmbench/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/hmbench/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/hmbench/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
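The averaged scores above are consistent with the mean ± population standard deviation over the five seeds, converted to percent. A small sketch of that computation, using the run scores of the first configuration:

```python
import statistics

# Development F1 scores of the five runs for bs4-e10-lr5e-05 (as fractions, from the table above).
run_scores = [0.8937, 0.8849, 0.8977, 0.8867, 0.886]

# Convert to percent and report mean ± population standard deviation,
# which matches the "Avg." column (88.98 ± 0.5).
scores_pct = [s * 100 for s in run_scores]
mean = statistics.mean(scores_pct)
std = statistics.pstdev(scores_pct)
print(f"{round(mean, 2)} ± {round(std, 2)}")
```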
The [training log](training.log) and TensorBoard logs (only for hmByT5- and hmTEAMS-based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
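For illustration, here is a hedged sketch of how the best configuration above (batch size 4, learning rate 5e-05, 10 epochs) could be reproduced with Flair. The corpus loader arguments, output path, and trainer settings are assumptions based on this card and may differ from the actual hmBench setup:

```python
from flair.datasets import NER_HIPE_2022
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# Load the AjMC German corpus (loader arguments are assumptions;
# see the hmBench repository for the exact data handling).
corpus = NER_HIPE_2022(dataset_name="ajmc", language="de")
label_dict = corpus.make_label_dictionary(label_type="ner")

# hmBERT backbone with first-subtoken pooling of the last layer and fine-tuning enabled,
# mirroring the "poolingfirst" / "layers-1" / "crfFalse" naming in the run links above.
embeddings = TransformerWordEmbeddings(
    model="dbmdz/bert-base-historic-multilingual-cased",
    layers="-1",
    subtoken_pooling="first",
    fine_tune=True,
)

tagger = SequenceTagger(
    hidden_size=256,
    embeddings=embeddings,
    tag_dictionary=label_dict,
    tag_type="ner",
    use_crf=False,
    use_rnn=False,
    reproject_embeddings=False,
)

# Best configuration from the table above: batch size 4, learning rate 5e-05, 10 epochs.
trainer = ModelTrainer(tagger, corpus)
trainer.fine_tune(
    "resources/taggers/ajmc-de-hmbert",  # hypothetical output path
    learning_rate=5e-05,
    mini_batch_size=4,
    max_epochs=10,
)
```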
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️