A review on Natural Language Processing Models for COVID-19 research

Karl Hall, Victor Chang, Chrisina Jayne

Research output: Contribution to journal › Article › peer-review


Abstract

This survey paper reviews natural language processing (NLP) models and their use in COVID-19 research in two main areas. Firstly, a range of transformer-based biomedical pretrained language models (T-BPLMs) are evaluated using the BLURB benchmark. Secondly, models used for sentiment analysis of COVID-19 vaccination are evaluated. We filtered literature curated from repositories such as PubMed and Scopus and reviewed 27 papers. When evaluated on the BLURB benchmark, the novel T-BPLM BioLinkBERT achieves state-of-the-art results by incorporating document-link knowledge from hyperlinks into its pretraining. Sentiment analysis of COVID-19 vaccination discourse collected through various Twitter API tools has shown public sentiment towards vaccination to be mostly positive. Finally, we outline some limitations and potential solutions to help the research community improve the models used for NLP tasks.
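As a minimal illustrative sketch only (not the pipeline used by any of the surveyed papers), the snippet below shows how tweet-level sentiment scoring of vaccination-related text can be run with a general-purpose pretrained transformer through the Hugging Face pipeline API. The example tweets are invented, and the default sentiment model is a generic English classifier rather than the Twitter-specific tools and models reviewed in the paper.

```python
from transformers import pipeline

# Invented example tweets; not drawn from the paper's dataset.
tweets = [
    "Just got my second COVID-19 vaccine dose, feeling grateful!",
    "Still worried about possible side effects of the new booster.",
]

# The default "sentiment-analysis" pipeline loads a general-purpose
# English sentiment classifier; the surveyed studies instead use a
# range of Twitter-oriented tools, lexicons, and fine-tuned models.
classifier = pipeline("sentiment-analysis")

for text, result in zip(tweets, classifier(tweets)):
    print(f"{result['label']:>8}  {result['score']:.3f}  {text}")
```

A production study would additionally handle tweet collection via the Twitter API, deduplication, and preprocessing (hashtags, mentions, emojis) before classification.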

Original language: English
Article number: 100078
Number of pages: 11
Journal: Healthcare Analytics
Volume: 2
Early online date: 19 Jul 2022
DOIs
Publication status: Published - 1 Nov 2022
