---
pretty_name: BioBERT-ITA
license: cc-by-sa-4.0
dataset_info:
  features:
    - name: text
      dtype: string
  splits:
    - name: train
      num_bytes: 27319024484
      num_examples: 17203146
  download_size: 14945984639
  dataset_size: 27319024484
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
task_categories:
  - text-generation
language:
  - it
tags:
  - medical
  - biology
size_categories:
  - 1B<n<10B
---

From this repository you can download the BioBERT_Italian dataset.

BioBERT_Italian is the Italian translation of the original BioBERT dataset, composed of millions of abstracts of PubMed papers.

Because no Italian equivalent exists for the millions of abstracts and full-text scientific papers used to pretrain English BERT-based biomedical models, we leveraged machine translation to obtain an Italian biomedical corpus based on PubMed abstracts, which we used to train BioBIT.
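The metadata above describes a single `train` split with one `text` field per example. As a minimal sketch, the corpus can be streamed with the `datasets` library instead of downloading the full ~15 GB archive; the Hub repo id below is a placeholder, not confirmed by this card:

```python
def preview(dataset, n=3, width=80):
    """Return the text of the first `n` examples, truncated to `width` characters.

    Works on any iterable of dicts with a "text" field, so it runs
    unchanged on a streamed Hub dataset or on a plain list.
    """
    out = []
    for i, example in enumerate(dataset):
        if i >= n:
            break
        out.append(example["text"][:width])
    return out


if __name__ == "__main__":
    # Assumption: replace "user/BioBERT_Italian" with this dataset's real Hub id.
    from datasets import load_dataset  # pip install datasets

    ds = load_dataset("user/BioBERT_Italian", split="train", streaming=True)
    for line in preview(ds):
        print(line)
```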

Corpus statistics:

- Total tokens^: 6.2 billion
- Average tokens per example: 359
- Max tokens per example: 2132
- Min tokens per example: 5
- Standard deviation: 137

^Token counts computed with the BioBIT tokenizer.
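The per-example statistics above can be reproduced with a short script. The card's figures were computed with the BioBIT tokenizer; the sketch below defaults to whitespace splitting as a stand-in, so its absolute counts will differ:

```python
import statistics


def token_stats(texts, tokenize=str.split):
    """Per-example token statistics of the kind reported in this card.

    `tokenize` defaults to whitespace splitting as a stand-in; the card's
    numbers use the BioBIT tokenizer instead.
    """
    counts = [len(tokenize(t)) for t in texts]
    return {
        "total": sum(counts),
        "mean": sum(counts) / len(counts),
        "max": max(counts),
        "min": min(counts),
        "stdev": statistics.pstdev(counts),  # population standard deviation
    }
```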

## BioBIT Model

BioBIT has been evaluated on three downstream tasks: NER (Named Entity Recognition), extractive QA (Question Answering), and RE (Relation Extraction). The results are reported in the papers cited below.
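As a usage sketch for the NER task, a fine-tuned checkpoint can be run through the `transformers` token-classification pipeline. The model id is a placeholder and the example sentence is illustrative, not taken from the dataset:

```python
def format_entities(entities):
    """Render pipeline output as readable "span -> label" strings.

    Expects dicts with "word" and "entity_group" keys, the shape produced
    by `pipeline(..., aggregation_strategy="simple")`.
    """
    return [f'{e["word"]} -> {e["entity_group"]}' for e in entities]


if __name__ == "__main__":
    # Assumption: substitute a real fine-tuned BioBIT NER checkpoint id.
    from transformers import pipeline  # pip install transformers

    ner = pipeline(
        "token-classification",
        model="path/to/biobit-ner-checkpoint",
        aggregation_strategy="simple",
    )
    for line in format_entities(ner("Il paziente presenta cefalea e nausea.")):
        print(line)
```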

## MedPsyNIT Model

We also fine-tuned BioBIT on PsyNIT (Psychiatric NER for ITalian), a native Italian NER (Named Entity Recognition) dataset created by the Italian research hospital Centro San Giovanni Di Dio Fatebenefratelli, obtaining the MedPsyNIT model.

## Correspondence to

Claudio Crema ([email protected]), Tommaso Mario Buonocore ([email protected])

## Citation

@article{BUONOCORE2023104431,
title = {Localizing in-domain adaptation of transformer-based biomedical language models},
journal = {Journal of Biomedical Informatics},
volume = {144},
pages = {104431},
year = {2023},
issn = {1532-0464},
doi = {10.1016/j.jbi.2023.104431},
url = {https://www.sciencedirect.com/science/article/pii/S1532046423001521},
author = {Tommaso Mario Buonocore and Claudio Crema and Alberto Redolfi and Riccardo Bellazzi and Enea Parimbelli},
keywords = {Natural language processing, Deep learning, Language model, Biomedical text mining, Transformer}
}

@article{CREMA2023104557,
title = {Advancing Italian biomedical information extraction with transformers-based models: Methodological insights and multicenter practical application},
journal = {Journal of Biomedical Informatics},
volume = {148},
pages = {104557},
year = {2023},
issn = {1532-0464},
doi = {10.1016/j.jbi.2023.104557},
url = {https://www.sciencedirect.com/science/article/pii/S1532046423002782},
author = {Claudio Crema and Tommaso Mario Buonocore and Silvia Fostinelli and Enea Parimbelli and Federico Verde and Cira Fundarò and Marina Manera and Matteo Cotta Ramusino and Marco Capelli and Alfredo Costa and Giuliano Binetti and Riccardo Bellazzi and Alberto Redolfi},
keywords = {Natural language processing, Deep learning, Biomedical text mining, Language model, Transformer}
}