CNN-LSTM Hybrid Real-Time IoT-Based Cognitive Approaches for ISLR with WebRTC: Auditory Impaired Assistive Technology.
Gupta, Meenu; Thakur, Narina; Bansal, Dhruvi; Chaudhary, Gopal; Davaasambuu, Battulga; Hua, Qiaozhi.
  • Gupta M; Department of Computer Science and Engineering, Chandigarh University, Punjab, India.
  • Thakur N; CSE Department, Bhagwan Parshuram Institute of Technology, New Delhi, India.
  • Bansal D; Department of Electrical and Electronics Engineering, Bharati Vidyapeeth's College of Engineering, New Delhi, India.
  • Chaudhary G; Bharati Vidyapeeth's College of Engineering, New Delhi, India.
  • Davaasambuu B; Department of Electronics and Communication Engineering, School of Engineering and Applied Sciences, National University of Mongolia, Ulan Bator, Mongolia.
  • Hua Q; Computer School, Hubei University of Arts and Science, Xiangyang 441000, China.
J Healthc Eng; 2022: 3978627.
Article in English | MEDLINE | ID: covidwho-1997246
ABSTRACT
In the era of modern technology, people readily communicate through facial expressions, body language, and other means. As use of the Internet evolves, it can be a boon to the medical field. Recently, the Internet of Medical Things (IoMT) has provided a broader platform for handling healthcare challenges, including hearing impairment. Although many translators exist to help people of different linguistic backgrounds communicate more effectively, kinesics linguistics allows one to assess or comprehend the communication of hearing-impaired persons only when they are standing next to each other. In the present COVID-19 scenario, individuals remain connected through online platforms; however, persons with disabilities face communication challenges on these platforms. The work presented in this research serves as a communication bridge between the challenged community and the rest of the globe. The proposed approach to Indian Sign Linguistic Recognition (ISLR) uses three-dimensional convolutional neural networks (3D-CNNs) and the long short-term memory (LSTM) technique for analysis. A conventional hand gesture recognition system identifies the hand and its location or orientation, extracts essential features, and applies an appropriate machine learning algorithm to recognise the performed action. WebRTC has been implemented in the calling interface of the web application. A teleprompting technology in the web app transforms sign language into audible speech. The proposed web app's average recognition rate is 97.21%.
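The 3D-CNN + LSTM pipeline described in the abstract can be illustrated with a minimal sketch: 3D convolutions extract spatio-temporal features from a short video clip, and an LSTM models the frame-to-frame sequence before a final classification layer. This is a hypothetical PyTorch illustration, not the paper's actual architecture; all layer sizes, the class count, and the model name are assumptions.

```python
import torch
import torch.nn as nn

class CNNLSTMSignClassifier(nn.Module):
    """Hypothetical 3D-CNN + LSTM sketch for sign-clip classification.

    All dimensions and hyperparameters below are illustrative assumptions,
    not taken from the paper.
    """

    def __init__(self, num_classes: int = 10):
        super().__init__()
        # 3D convolution over (channels, time, height, width) frames
        self.conv3d = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),  # pool spatially, keep time axis
        )
        # LSTM consumes one flattened feature vector per time step
        self.lstm = nn.LSTM(input_size=16 * 16 * 16, hidden_size=128,
                            batch_first=True)
        self.fc = nn.Linear(128, num_classes)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, 3, frames, 32, 32)
        feats = self.conv3d(clips)                     # (B, 16, T, 16, 16)
        b, c, t, h, w = feats.shape
        seq = feats.permute(0, 2, 1, 3, 4).reshape(b, t, c * h * w)
        out, _ = self.lstm(seq)                        # (B, T, 128)
        return self.fc(out[:, -1])                     # classify from last step

model = CNNLSTMSignClassifier(num_classes=10)
logits = model(torch.randn(2, 3, 8, 32, 32))  # 2 clips, 8 RGB frames of 32x32
print(tuple(logits.shape))  # (2, 10): one score per class for each clip
```

In this arrangement the convolutional stage handles hand shape and motion within a short window, while the recurrent stage captures how the gesture unfolds over time, which is the division of labour the abstract attributes to the 3D-CNN and LSTM components.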
Full text: Available Collection: International databases Database: MEDLINE Main subject: Self-Help Devices / COVID-19 Limits: Humans Language: English Journal: J Healthc Eng Year: 2022 Document Type: Article