
MediAlbertina

The first publicly available medical language models trained on real European Portuguese data.

MediAlbertina is a family of BERT-family encoders based on DeBERTaV2, resulting from continued pre-training of PORTULAN's Albertina models on Electronic Medical Records (EMRs) shared by Portugal's largest public hospital.

Like its predecessors, MediAlbertina models are distributed under the MIT license.

Model Description

MediAlbertina PT-PT 900M was created through domain adaptation of Albertina PT-PT 900M on real European Portuguese EMRs using masked language modeling. It was evaluated through fine-tuning on the Information Extraction (IE) tasks Named Entity Recognition (NER) and Assertion Status (AStatus), over more than 10k manually annotated entities belonging to the following classes: Diagnosis, Symptom, Vital Sign, Result, Medical Procedure, Medication, Dosage, and Progress. In both tasks, MediAlbertina achieved superior results to its predecessors, demonstrating the effectiveness of this domain adaptation and its potential for medical AI in Portugal.

| Model | NER (single-model) F1 | NER (multi-model) F1 | Assertion Status F1 |
|---|---|---|---|
| albertina-900m-portuguese-ptpt-encoder | 0.813 | 0.811 | 0.687 |
| medialbertina_pt-pt_900m | 0.832 | 0.848 | 0.755 |
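The NER fine-tuning described above can be sketched as a token-classification setup over the eight annotated entity classes. This is a minimal sketch assuming a BIO tagging scheme; the exact scheme and label names used in the paper may differ:

```python
# Hypothetical BIO label scheme over the eight annotated entity classes.
entity_classes = [
    "Diagnosis", "Symptom", "Vital Sign", "Result",
    "Medical Procedure", "Medication", "Dosage", "Progress",
]

# One "O" (outside) tag plus B-/I- tags for each class: 1 + 8*2 = 17 labels.
labels = ["O"] + [f"{prefix}-{c}" for c in entity_classes for prefix in ("B", "I")]
label2id = {label: i for i, label in enumerate(labels)}
id2label = {i: label for label, i in label2id.items()}

print(len(labels))  # 17

# The encoder would then be loaded with a token-classification head and
# fine-tuned on the annotated EMR spans, e.g.:
# from transformers import AutoModelForTokenClassification
# model = AutoModelForTokenClassification.from_pretrained(
#     "portugueseNLP/medialbertina_pt-pt_900m",
#     num_labels=len(labels), id2label=id2label, label2id=label2id,
# )
```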

Data

MediAlbertina PT-PT 900M was trained on more than 15M sentences and 300M tokens from 2.6M fully anonymized and unique Electronic Medical Records (EMRs) from Portugal's largest public hospital. This data was acquired under the framework of the FCT project DSAIPA/AI/0122/2020 (AIMHealth – Mobile Applications Based on Artificial Intelligence).

How to use

from transformers import pipeline

# Fill-mask pipeline over the MediAlbertina encoder
unmasker = pipeline('fill-mask', model='portugueseNLP/medialbertina_pt-pt_900m')

# Predict the masked token ("Analgesia with morphine infusion (15 [MASK]/kg/h)")
unmasker("Analgesia com morfina em perfusão (15 [MASK]/kg/h)")
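The fill-mask pipeline returns a list of candidate completions, each a dict with a `token_str`, a `score`, and the completed `sequence` (per the transformers fill-mask API). A minimal sketch of ranking them; the candidates below are illustrative placeholders, not actual model output:

```python
# Illustrative fill-mask candidates (placeholder values, not real model output).
candidates = [
    {"token_str": "mg", "score": 0.62, "sequence": "Analgesia com morfina em perfusão (15 mg/kg/h)"},
    {"token_str": "ml", "score": 0.21, "sequence": "Analgesia com morfina em perfusão (15 ml/kg/h)"},
    {"token_str": "mcg", "score": 0.05, "sequence": "Analgesia com morfina em perfusão (15 mcg/kg/h)"},
]

# Pick the highest-scoring fill.
best = max(candidates, key=lambda c: c["score"])
print(best["token_str"])  # mg
```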

Citation

MediAlbertina is developed by a joint team from ISCTE-IUL, Portugal, and Select Data (California, USA). For a fully detailed description, see the respective publication:

@article{nunes2024medialbertina,
      title={MediAlbertina: An European Portuguese medical language model}, 
      author={Miguel Nunes and João Boné and João Ferreira
              and Pedro Chaves and Luís Elvas},
      year={2024},
      journal={Computers in Biology and Medicine},
      volume={182},
      url={https://doi.org/10.1016/j.compbiomed.2024.109233}
}

Please use the above canonical reference when using or citing this model.

Acknowledgements

This work was financially supported by Project Blockchain.PT – Decentralize Portugal with Blockchain Agenda (Project no. 51), WP2, Call no. 02/C05-i01.01/2022, funded by the Portuguese Recovery and Resilience Plan (PRR), the Portuguese Republic, and the European Union (EU) under the framework of the Next Generation EU program.
