Text-based predictions of COVID-19 diagnosis from self-reported chemosensory descriptions

Hongyang Li, Richard C. Gerkin, Alyssa Bakke, Raquel Norel, Guillermo Cecchi, Christophe Laudamiel, Masha Y. Niv, Kathrin Ohla, John E. Hayes, Valentina Parma, Pablo Meyer

Research output: Contribution to journal › Article › peer-review


Background: There is a prevailing view that humans' capacity to use language to characterize sensations such as odors or tastes is poor, making verbal reports an unreliable source of information.

Methods: Here, we developed a machine learning method based on Natural Language Processing (NLP), using Large Language Models (LLMs), to predict COVID-19 diagnosis solely from text descriptions of acute changes in chemosensation (i.e., smell, taste, and chemesthesis) caused by the disease. The dataset of more than 1,500 subjects was obtained from survey responses collected early in the COVID-19 pandemic, in Spring 2020.

Results: When predicting COVID-19 diagnosis, our NLP model performed comparably (AUC ROC ≈ 0.65) to models based on self-reported changes in function collected via quantitative rating scales. Further, our NLP model could attribute importance to individual words when making predictions; sentiment and descriptive words such as "smell", "taste", and "sense" contributed strongly to the predictions. In addition, adjectives describing specific tastes or smells, such as "salty", "sweet", "spicy", and "sour", also contributed considerably.

Conclusions: Our results show that descriptions of perceptual symptoms caused by a viral infection can be used to fine-tune an LLM to correctly predict and interpret a subject's diagnostic status. In the future, similar models may have utility for analyzing patient verbatims from online health portals or electronic health records.
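The core idea in the Methods — predicting a diagnostic label from free-text chemosensory descriptions while exposing each word's contribution — can be sketched with a deliberately simplified stand-in. This is not the paper's fine-tuned LLM: it is a bag-of-words log-odds scorer on invented toy sentences (all example texts and labels below are hypothetical), shown only to make the "predict plus attribute word importance" pattern concrete.

```python
from collections import Counter
import math

def train_word_scores(texts, labels, alpha=1.0):
    """Per-word log-odds of the positive (COVID-19) class, Laplace-smoothed."""
    pos, neg = Counter(), Counter()
    for text, label in zip(texts, labels):
        (pos if label else neg).update(text.lower().split())
    vocab = set(pos) | set(neg)
    n_pos, n_neg = sum(pos.values()), sum(neg.values())
    return {
        w: math.log((pos[w] + alpha) / (n_pos + alpha * len(vocab)))
           - math.log((neg[w] + alpha) / (n_neg + alpha * len(vocab)))
        for w in vocab
    }

def predict(text, scores):
    """Sum the word scores; a positive total predicts the positive class.
    Each word's score doubles as a crude importance attribution."""
    contribs = {w: scores.get(w, 0.0) for w in text.lower().split()}
    return sum(contribs.values()) > 0, contribs

# Toy, hypothetical training data (not the study's survey responses).
texts = [
    "i suddenly lost my sense of smell and taste",   # diagnosed
    "food tastes bland and smells are gone",         # diagnosed
    "mild headache but smell and taste are normal",  # not diagnosed
    "just a runny nose no change in taste",          # not diagnosed
]
labels = [1, 1, 0, 0]

scores = train_word_scores(texts, labels)
pred, contribs = predict("everything tastes bland", scores)
```

In this sketch, words seen only in positive-labeled texts (e.g. "bland") receive positive log-odds and push the prediction toward a COVID-19 label, mirroring, in miniature, how the paper's model attributes importance to descriptive words such as "smell" and "taste". The actual study fine-tunes an LLM, whose attributions come from the model itself rather than from class-conditional word counts.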
Original language: English
Issue number: 1
State: Published - 27 Jul 2023


