TaughtNet: Learning Multi-Task Biomedical Named Entity Recognition From Single-Task Teachers

Vincenzo Moscato;Marco Postiglione;Carlo Sansone;Giancarlo Sperli
2023

Abstract

In Biomedical Named Entity Recognition (BioNER), the use of current cutting-edge deep learning-based methods, such as deep bidirectional transformers (e.g., BERT, GPT-3), can be substantially hampered by the absence of publicly accessible annotated datasets. When a BioNER system is required to annotate multiple entity types, various challenges arise because the majority of currently available public datasets contain annotations for just one entity type: for example, mentions of disease entities may not be annotated in a dataset specialized in the recognition of drugs, resulting in a poor ground truth when the two datasets are used to train a single multi-task model. In this work, we propose TaughtNet, a knowledge distillation-based framework that allows us to fine-tune a single multi-task student model by leveraging both the ground truth and the knowledge of single-task teachers. Our experiments on the recognition of mentions of diseases, chemical compounds and genes show the appropriateness and relevance of our approach with respect to strong state-of-the-art baselines in terms of precision, recall and F1 scores. Moreover, TaughtNet allows us to train smaller and lighter student models, which are easier to use in real-world scenarios where they must be deployed on memory-limited hardware and guarantee fast inference, and it shows a high potential to provide explainability. We publicly release both our code on GitHub and our multi-task model on the Hugging Face repository.
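As a rough illustration of the kind of training objective such a teacher-to-student distillation setup could use, the sketch below is not taken from the authors' released code: the function, tensor names, mixing weight alpha and temperature are our own assumptions. It simply combines a standard cross-entropy term on the available ground-truth tags with a Hinton-style KL-divergence term against soft teacher predictions projected onto the shared multi-task label space.

    # Illustrative sketch only, not the authors' released implementation.
    # Assumed setup: a student token classifier trained against (i) partial
    # gold labels and (ii) soft labels from single-task teachers already
    # mapped onto the shared multi-task label space.
    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, gold_labels, teacher_logits,
                          alpha=0.5, temperature=2.0, ignore_index=-100):
        """student_logits, teacher_logits: (batch, seq_len, num_labels)
        gold_labels: (batch, seq_len), padding marked with ignore_index."""
        num_labels = student_logits.size(-1)

        # (i) supervised term on the (possibly incomplete) ground truth
        ce = F.cross_entropy(student_logits.reshape(-1, num_labels),
                             gold_labels.reshape(-1),
                             ignore_index=ignore_index)

        # (ii) distillation term: KL divergence between temperature-softened
        # student and teacher distributions (padding masking omitted for brevity)
        t = temperature
        kd = F.kl_div(F.log_softmax(student_logits / t, dim=-1),
                      F.softmax(teacher_logits / t, dim=-1),
                      reduction="batchmean") * (t * t)

        return alpha * ce + (1.0 - alpha) * kd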
TaughtNet: Learning Multi-Task Biomedical Named Entity Recognition From Single-Task Teachers / Moscato, Vincenzo; Postiglione, Marco; Sansone, Carlo; Sperli', Giancarlo. - In: IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS. - ISSN 2168-2194. - 27:5(2023), pp. 2512-2523. [10.1109/JBHI.2023.3244044]
Files in this product:
File: TaughtNet_Learning_Multi-Task_Biomedical_Named_Entity_Recognition_From_Single-Task_Teachers.pdf
Access: open access
License: Creative Commons
Size: 3.31 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11588/911266
Citations
  • PubMed Central: not available
  • Scopus: 3
  • Web of Science (ISI): 0