Deep Learning-Based Multi-modal COVID-19 Screening by Socially Assistive Robots Using Cough and Breathing Symptoms
14th International Conference on Social Robotics (ICSR 2022), LNAI 13818:217-227, 2022.
Article in English | Scopus | ID: covidwho-2257940
ABSTRACT
In this paper, we present the development of a novel deep learning architecture for autonomous social robots, capable of real-time COVID-19 screening during human-robot interactions. The architecture enables autonomous, preliminary, multi-modal COVID-19 detection from cough and breathing symptoms using a VGG16 deep learning framework. We train and validate our VGG16 network using existing COVID-19 datasets. We then perform real-time, non-contact, preliminary COVID-19 screening experiments with the Pepper robot. The results for our deep learning architecture demonstrate 1) an average computation time of 4.57 s for detection, and 2) an accuracy of 84.4% with respect to self-reported COVID symptoms. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
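The abstract gives no implementation details, but a common way to realize a VGG16-based classifier for cough and breathing audio is to convert each recording into a log-mel spectrogram and treat it as an image. The sketch below illustrates that general pattern with PyTorch; it is not the authors' released code, and the class name, 16 kHz sample rate, mel parameters, and two-class output head are illustrative assumptions rather than values taken from the paper.

```python
# Minimal sketch of a VGG16 audio classifier (assumed pipeline, not the
# authors' code): waveform -> log-mel spectrogram -> stock torchvision VGG16
# with its final layer resized for a binary screening decision.
import torch
import torch.nn as nn
import torchaudio
from torchvision.models import vgg16


class CoughBreathScreen(nn.Module):
    def __init__(self, num_classes: int = 2, sample_rate: int = 16000):
        super().__init__()
        # Waveform -> mel spectrogram "image" for the CNN (parameters assumed).
        self.to_mel = torchaudio.transforms.MelSpectrogram(
            sample_rate=sample_rate, n_fft=1024, hop_length=512, n_mels=128
        )
        self.to_db = torchaudio.transforms.AmplitudeToDB()
        # Stock torchvision VGG16; only the last classifier layer is
        # replaced to emit num_classes scores.
        self.backbone = vgg16(weights=None)
        self.backbone.classifier[6] = nn.Linear(4096, num_classes)

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, samples) mono audio.
        mel = self.to_db(self.to_mel(waveform))    # (batch, 128, frames)
        mel = mel.unsqueeze(1).repeat(1, 3, 1, 1)  # duplicate to 3 channels
        return self.backbone(mel)                  # (batch, num_classes)


# Usage example: score a 5-second clip (random tensor stands in for audio).
model = CoughBreathScreen().eval()
clip = torch.randn(1, 5 * 16000)
with torch.no_grad():
    probs = torch.softmax(model(clip), dim=-1)
print(probs)  # e.g. tensor([[p_negative, p_positive]])
```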
Full text: Available
Collection: Databases of international organizations
Database: Scopus
Language: English
Journal: 14th International Conference on Social Robotics, ICSR 2022
Year: 2022
Document Type: Article
