SurgicBERTa
SurgicBERTa is a language model based on the RoBERTa-base (Liu et al., 2019) architecture. We adapted RoBERTa-base to the surgical domain via continued pretraining on surgical textbooks and academic papers, amounting to about 7 million words and 300k surgical sentences. We used the full text of the books and papers in training, not just the abstracts. Specific details of the adaptive pretraining procedure and of the evaluation tasks can be found in the paper cited below.
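As a minimal sketch, the model can be loaded for masked-token prediction with the Hugging Face Transformers library. The repository identifier below is a placeholder, not the official one: replace it with the actual Hub ID of SurgicBERTa.

```python
# Minimal usage sketch (assumes the model is hosted on the Hugging Face Hub;
# "path/to/SurgicBERTa" is a placeholder repository ID, not the real one).
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

model_id = "path/to/SurgicBERTa"  # placeholder: substitute the actual Hub repository ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# RoBERTa-style models use "<mask>" as the mask token.
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
for prediction in fill_mask("The surgeon dissects the <mask> from the surrounding tissue."):
    print(prediction["token_str"], prediction["score"])
```

The same checkpoint can also be fine-tuned on downstream surgical NLP tasks (e.g., Semantic Role Labeling, as described in the papers below) in the standard Transformers fashion.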
Citation
If using this model, please cite the following paper:
@article{bombieri_et_al_SurgicBERTa_2023,
title = {{SurgicBERTa}: a pre-trained language model for procedural surgical language},
journal = {International Journal of Data Science and Analytics},
year = {2023},
doi = {10.1007/s41060-023-00433-5},
url = {https://link.springer.com/article/10.1007/s41060-023-00433-5},
author = {Marco Bombieri and Marco Rospocher and Simone Paolo Ponzetto and Paolo Fiorini},
}
If using this model for Semantic Role Labeling, please also cite the following paper:
@article{bombieri_et_al_surgical_srl_2023,
title = {Machine understanding surgical actions from intervention procedure textbooks},
journal = {Computers in Biology and Medicine},
volume = {152},
pages = {106415},
year = {2023},
issn = {0010-4825},
doi = {10.1016/j.compbiomed.2022.106415},
url = {https://www.sciencedirect.com/science/article/pii/S0010482522011234},
author = {Marco Bombieri and Marco Rospocher and Simone Paolo Ponzetto and Paolo Fiorini},
keywords = {Semantic role labeling, Surgical data science, Procedural knowledge, Information extraction, Natural language processing}
}