
Model Card for Biomedical Named Entity Recognition in Spanish Clinical Texts

Our model performs Biomedical Named Entity Recognition (NER) on Spanish clinical texts, a key step for automated information extraction in medical research and treatment improvement. It uses a novel Multi-Head Conditional Random Field (CRF) classifier, with one CRF head per entity class, to tackle multi-class NER while handling overlapping entity mentions. The classes it recognizes are symptoms, procedures, diseases, chemicals, and proteins.
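Because each CRF head emits its own BIO tag sequence for a single entity class, mentions of different classes may overlap on the same tokens. The following is a minimal illustrative sketch (not the repository's actual implementation) of how per-head BIO predictions can be merged into possibly overlapping spans:

```python
def bio_to_spans(tags, label):
    """Convert one head's BIO tag sequence into (start, end, label)
    spans, with `end` exclusive."""
    spans, start = [], None
    for i, tag in enumerate(tags):
        if tag == "B":            # a new entity begins here
            if start is not None:
                spans.append((start, i, label))
            start = i
        elif tag == "O":          # outside any entity
            if start is not None:
                spans.append((start, i, label))
                start = None
        # tag == "I": continue the current entity
    if start is not None:
        spans.append((start, len(tags), label))
    return spans

def merge_heads(head_outputs):
    """Each CRF head tags one class independently, so entities from
    different heads may cover the same tokens."""
    spans = []
    for label, tags in head_outputs.items():
        spans.extend(bio_to_spans(tags, label))
    return sorted(spans)

# Hypothetical predictions: a SYMPTOM span overlapping a DISEASE span.
heads = {
    "SYMPTOM": ["B", "I", "I", "O"],
    "DISEASE": ["O", "B", "I", "O"],
}
print(merge_heads(heads))  # [(0, 3, 'SYMPTOM'), (1, 3, 'DISEASE')]
```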

We provide 4 different models, available as branches of this repository.

Model Details

Model Description

  • Developed by: IEETA
  • Model type: Multi-Head-CRF on a RoBERTa-base encoder
  • Language(s) (NLP): Spanish
  • License: MIT
  • Finetuned from model: lcampillos/roberta-es-clinical-trials-ner

Model Sources

  • Repository: IEETA Multi-Head-CRF GitHub
  • Paper: Multi-head CRF classifier for biomedical multi-class Named Entity Recognition on Spanish clinical notes [Awaiting Publication]

Authors:

Uses

Note that we do not accept any liability for use of this model in professional or medical settings; it is intended for academic purposes only. It performs Named Entity Recognition over 5 classes: SYMPTOM, PROCEDURE, DISEASE, PROTEIN, and CHEMICAL.

How to Get Started with the Model

Please refer to our GitHub repository for more information on how to train the model and run inference: IEETA Multi-Head-CRF GitHub

Training Details

Training Data

The training data can be found at IEETA/SPACCC-Spanish-NER and is further described on the dataset card. The dataset consists of 4 separate datasets:

Speeds, Sizes, Times

The models were trained on an Nvidia Quadro RTX 8000. A 5-class model takes approximately 1 hour to train and occupies around 1 GB of disk space. Training time grows linearly with the number of entity classes, at roughly +8 minutes per additional class.

Testing Data, Factors & Metrics

Testing Data

The testing data can be found on IEETA/SPACCC-Spanish-NER, which is further described on the dataset card.

Metrics

The models were evaluated using micro-averaged F1-score, the standard metric for entity recognition tasks.
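For reference, micro-averaged F1 pools true positives, false positives, and false negatives across all entity classes before computing precision and recall. A small illustrative sketch (not the evaluation script used for the results below):

```python
def micro_f1(gold, pred):
    """Micro-averaged F1 over entity spans.

    gold, pred: sets of (start, end, label) tuples pooled over all
    documents and all entity classes.
    """
    tp = len(gold & pred)   # exact span + label matches
    fp = len(pred - gold)   # predicted but not in gold
    fn = len(gold - pred)   # gold but not predicted
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: one exact match, one boundary error.
gold = {(0, 3, "SYMPTOM"), (5, 7, "DISEASE")}
pred = {(0, 3, "SYMPTOM"), (5, 6, "DISEASE")}
print(round(micro_f1(gold, pred), 2))  # 0.5
```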

Results

We provide 4 separate models trained with different hyperparameter settings:

| HLs per head | Augmentation | Percentage Tags | Augmentation Probability | F1    |
|--------------|--------------|-----------------|--------------------------|-------|
| 3            | Random       | 0.25            | 0.50                     | 78.73 |
| 3            | Unknown      | 0.50            | 0.25                     | 78.50 |
| 3            | None         | -               | -                        | 78.89 |
| 1            | Random       | 0.25            | 0.50                     | 78.89 |

All models are trained with a context size of 32 tokens for 60 epochs.

Citation

BibTeX:

[Awaiting Publication]

