Results 1 - 9 of 9
1.
Comput Intell Neurosci ; 2022: 3019194, 2022.
Article in English | MEDLINE | ID: mdl-35463246

ABSTRACT

A novel multimodal biometric system using the three-dimensional (3D) face and ear is proposed for human recognition. The proposed model overcomes the drawbacks of unimodal biometric systems and mitigates 2D biometric problems such as occlusion and illumination variation. In the proposed model, principal component analysis (PCA) is first used for 3D face recognition. The iterative closest point (ICP) algorithm is then used for 3D ear recognition. Finally, the 3D face and 3D ear results are combined using score-level fusion. Simulations are performed on the Face Recognition Grand Challenge database and the University of Notre Dame Collection F database for the 3D face and 3D ear datasets, respectively. Experimental results reveal that the proposed model achieves an accuracy of 99.25% using the proposed score-level fusion. Comparative analyses show that the proposed method outperforms other state-of-the-art biometric algorithms in terms of accuracy.
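The fusion step can be illustrated with a short sketch: each unimodal matcher (PCA-based face, ICP-based ear) produces a match score, the scores are normalized, and a weighted sum yields the fused score. The min-max normalization, the equal weighting, and the toy scores below are assumptions for illustration, not the paper's exact fusion rule.

```python
# Minimal sketch of score-level fusion of 3D face and 3D ear match scores.
# Normalization scheme, weights, and scores are illustrative assumptions.
import numpy as np

def min_max_normalize(scores):
    """Scale raw matcher scores to the [0, 1] range."""
    scores = np.asarray(scores, dtype=float)
    return (scores - scores.min()) / (scores.max() - scores.min() + 1e-12)

def fuse_scores(face_scores, ear_scores, w_face=0.5):
    """Weighted-sum score-level fusion of two unimodal matchers."""
    face_n = min_max_normalize(face_scores)   # e.g., PCA-based face matcher scores
    ear_n = min_max_normalize(ear_scores)     # e.g., ICP-based ear matcher scores
    return w_face * face_n + (1.0 - w_face) * ear_n

# Hypothetical gallery scores for one probe against 5 enrolled subjects.
face = [0.91, 0.40, 0.35, 0.62, 0.28]            # similarity: higher is better
ear = [12.1, 30.5, 28.0, 18.3, 33.9]             # ICP distance: lower is better, so invert
fused = fuse_scores(face, 1.0 - min_max_normalize(ear))
print("Predicted identity:", int(np.argmax(fused)))
```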


Subject(s)
Biometric Identification , Biometry , Algorithms , Biometric Identification/methods , Biometry/methods , Face/anatomy & histology , Humans , Principal Component Analysis
2.
J Healthc Eng ; 2022: 6005446, 2022.
Article in English | MEDLINE | ID: mdl-35388315

ABSTRACT

Human-computer interaction (HCI) has seen a paradigm shift from textual or display-based control toward more intuitive control modalities such as voice, gesture, and mimicry. Speech in particular carries a great deal of information, conveying the speaker's inner state, aims, and desires. While word analysis enables the speaker's request to be understood, other speech features disclose the speaker's mood, purpose, and motive. As a result, emotion recognition from speech has become critical in current human-computer interaction systems. Moreover, the findings of the several disciplines involved in emotion recognition are difficult to combine. Many sound analysis methods have been developed in the past, but they could not provide emotional analysis of people in live speech. Today, the development of artificial intelligence and the high performance of deep learning methods bring studies on live data to the fore. This study aims to detect emotions in the human voice using artificial intelligence methods. One of the most important requirements of such work is data. The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), an open-source dataset, was used in the study. The RAVDESS dataset contains more than 2000 recordings of speech and song by 24 actors, collected for eight different moods. The goal was to detect eight emotion classes: neutral, calm, happy, sad, angry, fearful, disgusted, and surprised. The multilayer perceptron (MLP) classifier, a widely used supervised learning algorithm, was chosen for classification. The proposed model's performance was compared with that of similar studies, and the results were evaluated. An overall accuracy of 81% was obtained for classifying the eight emotions with the proposed model on the RAVDESS dataset.
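A minimal sketch of this kind of pipeline is shown below, assuming MFCC features extracted with librosa and scikit-learn's MLPClassifier; the local RAVDESS path, the feature choice, and the hyperparameters are illustrative assumptions rather than the study's exact setup.

```python
# Sketch: MFCC features from RAVDESS clips classified with an MLP.
# Paths, features, and hyperparameters are assumptions, not the paper's setup.
import glob
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def extract_features(path, n_mfcc=40):
    """Load an audio clip and return its mean MFCC vector."""
    signal, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# RAVDESS file names encode the emotion as the third dash-separated field (01-08).
files = glob.glob("RAVDESS/**/*.wav", recursive=True)   # hypothetical local path
X = np.array([extract_features(f) for f in files])
y = np.array([int(f.split("/")[-1].split("-")[2]) for f in files])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(300,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("Test accuracy:", clf.score(X_te, y_te))
```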


Subject(s)
Artificial Intelligence , Speech , Computers , Emotions , Female , Humans , Male , Neural Networks, Computer
3.
Comput Intell Neurosci ; 2022: 7463091, 2022.
Article in English | MEDLINE | ID: mdl-35401731

ABSTRACT

Emotions play an essential role in human relationships, and many real-time applications rely on interpreting the speaker's emotion from their words. Speech emotion recognition (SER) modules aid human-computer interface (HCI) applications, but they are challenging to implement because of the lack of balanced training data and the lack of clarity about which features are sufficient for categorization. This research examines the impact of the classification approach, the choice of feature combination, and data augmentation on speech emotion detection accuracy. Selecting the right combination of handcrafted features for the classifier plays an integral part in reducing computational complexity. The suggested classification model, a 1D convolutional neural network (1D CNN), outperforms traditional machine learning approaches in classification. Unlike most earlier studies, which examined emotions primarily through a single-language lens, our analysis covers multiple language datasets. With the most discriminating features and data augmentation, our technique achieves 97.09%, 96.44%, and 83.33% accuracy on the BAVED, ANAD, and SAVEE datasets, respectively.
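The classification stage can be sketched as a small 1D CNN over handcrafted feature vectors, with additive noise as one common SER augmentation; the layer sizes, feature length, and class count below are assumptions, not the paper's reported architecture.

```python
# Sketch of a 1D CNN emotion classifier over handcrafted feature vectors.
# Architecture details and sizes are illustrative assumptions.
import numpy as np
from tensorflow.keras import layers, models

NUM_FEATURES = 180   # hypothetical length of the handcrafted feature vector
NUM_CLASSES = 7      # e.g., SAVEE has 7 emotion classes

model = models.Sequential([
    layers.Input(shape=(NUM_FEATURES, 1)),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(128, kernel_size=5, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# A simple augmentation often used in SER: additive Gaussian noise on the features.
def augment_with_noise(X, noise_std=0.01):
    return X + np.random.normal(0.0, noise_std, size=X.shape)

# model.fit(augment_with_noise(X_train)[..., None], y_train, epochs=50, batch_size=32)
```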


Subject(s)
Neural Networks, Computer , Speech , Computers , Emotions , Humans , Language
4.
J Healthc Eng ; 2022: 8732213, 2022.
Article in English | MEDLINE | ID: mdl-35273786

ABSTRACT

Telehealth and remote patient monitoring (RPM) have been critical components that have received substantial attention and gained hold since the pandemic began. Telehealth and RPM allow easy access to patient data and help provide high-quality care to patients at a low cost. This article proposes an Intelligent Remote Patient Activity Tracking System that can monitor patient activities and vitals during those activities based on the attached sensors. An Internet of Things- (IoT-) enabled health monitoring device is designed, using machine learning models, to track the patient's activities such as running, sleeping, walking, and exercising, the vitals during those activities such as body temperature and heart rate, and the patient's breathing pattern during such activities. Machine learning models are used to identify the different activities of the patient and to analyze the patient's respiratory health during those activities. Currently, the machine learning models detect only coughing and healthy breathing. A web application is also designed to track the data uploaded by the proposed devices.
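The activity-recognition step can be sketched as windowed sensor features fed to a standard classifier; the sensor layout, window length, synthetic data, and upload step below are assumptions for illustration only, not the article's deployed system.

```python
# Sketch: windowed accelerometer features classified into activity classes.
# Sensor setup, window size, and the synthetic data are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

WINDOW = 128  # samples per window (hypothetical sampling setup)

def window_features(acc_xyz):
    """Summarize one window of 3-axis accelerometer data (WINDOW x 3)."""
    return np.concatenate([acc_xyz.mean(axis=0), acc_xyz.std(axis=0),
                           np.abs(np.diff(acc_xyz, axis=0)).mean(axis=0)])

# Hypothetical training windows labeled as sleeping/walking/running/exercising.
rng = np.random.default_rng(0)
X = np.stack([window_features(rng.normal(size=(WINDOW, 3))) for _ in range(200)])
y = rng.integers(0, 4, size=200)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

new_window = rng.normal(size=(WINDOW, 3))
activity = int(clf.predict(window_features(new_window)[None, :])[0])
print("Predicted activity class:", activity)
# A deployed device would then upload {activity, heart rate, temperature} to the
# web application over HTTP; the endpoint and payload format are not specified here.
```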


Subject(s)
Internet of Things , Telemedicine , Artificial Intelligence , Humans , Machine Learning , Monitoring, Physiologic
5.
Sensors (Basel) ; 22(6)2022 Mar 19.
Article in English | MEDLINE | ID: mdl-35336548

ABSTRACT

Recognizing human emotions by machine is a complex task. Deep learning models attempt to automate this process by giving machines learning capabilities, yet identifying human emotions from speech with good performance remains challenging. With the advent of deep learning algorithms, this problem has recently been addressed, but most past research focused on a single feature extraction method for training. In this research, we explore two different methods of extracting features for effective speech emotion recognition. First, two-way feature extraction is proposed by utilizing super convergence to extract two sets of potential features from the speech data. For the first set, principal component analysis (PCA) is applied to the extracted features, and a deep neural network (DNN) with dense and dropout layers is then implemented. In the second approach, mel-spectrogram images are extracted from the audio files, and the 2D images are given as input to a pre-trained VGG-16 model. Extensive experiments and an in-depth comparative analysis over both feature extraction methods, with multiple algorithms and two datasets, are performed in this work. On the RAVDESS dataset, significantly better accuracy was obtained than with numeric features on a DNN.
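The second approach can be sketched as follows, assuming librosa for the mel-spectrogram and the Keras VGG-16 backbone with a new classification head; the image size, head layers, and eight-class output are illustrative assumptions rather than the paper's exact configuration.

```python
# Sketch: audio clip -> log-mel spectrogram "image" -> pre-trained VGG-16 backbone.
# Image size, head layers, and class count are assumptions.
import numpy as np
import librosa
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

def mel_spectrogram_image(path, size=(224, 224)):
    """Compute a log-mel spectrogram and tile it into a 3-channel image."""
    y, sr = librosa.load(path, sr=None)
    mel = librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr))
    mel = (mel - mel.min()) / (mel.max() - mel.min() + 1e-12)   # scale to [0, 1]
    mel = np.resize(mel, size)                                   # crude resize for the sketch
    return np.repeat(mel[..., None], 3, axis=-1)

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                                           # transfer learning
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.4),
    layers.Dense(8, activation="softmax"),                       # 8 RAVDESS emotion classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```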


Subject(s)
Deep Learning , Speech , Algorithms , Emotions , Humans , Neural Networks, Computer
6.
Comput Intell Neurosci ; 2022: 2103975, 2022.
Article in English | MEDLINE | ID: mdl-35116063

ABSTRACT

Drones can be used to detect groups of people who are unmasked or who do not maintain social distance. In this paper, a deep learning-enabled drone is designed for mask detection and social distance monitoring. A drone is an unmanned system that can be automated, and this system mainly focuses on Industrial Internet of Things (IIoT) monitoring using a Raspberry Pi 4. The drone automation system sends alerts to people via a speaker to maintain social distance. The system captures images and detects unmasked persons using a Faster Region-based Convolutional Neural Network (Faster R-CNN) model. When the system detects unmasked persons, it sends their details to the respective authorities and the nearest police station. The trained model covers the majority of face detection cases across different benchmark datasets. The OpenCV camera pipeline runs 24/7 and produces daily service reports using the Raspberry Pi 4 and the Faster R-CNN algorithm.
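The detection step can be sketched with a COCO-pretrained Faster R-CNN from torchvision locating people in an OpenCV frame; classifying masked versus unmasked faces would require a model fine-tuned for that task, which is assumed here and not shown, and the camera index and score threshold are also assumptions.

```python
# Sketch: person detection with a pre-trained Faster R-CNN on an OpenCV frame.
# The mask/no-mask classifier itself is assumed and not shown.
import cv2
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()  # COCO-pretrained detector

cap = cv2.VideoCapture(0)          # drone/Pi camera (device index assumed)
ok, frame = cap.read()
cap.release()
if ok:
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        pred = model([to_tensor(rgb)])[0]
    # COCO label 1 == "person"; keep confident detections only.
    people = [(box, score) for box, label, score in
              zip(pred["boxes"], pred["labels"], pred["scores"])
              if label.item() == 1 and score.item() > 0.7]
    print(f"Detected {len(people)} people in the frame")
    # A deployed system would run the mask classifier on each face crop and then
    # trigger the speaker alert / notification logic described in the abstract.
```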


Subject(s)
Internet of Things , Algorithms , Humans , Neural Networks, Computer
7.
Expert Syst ; 39(6): e12834, 2022 Jul.
Article in English | MEDLINE | ID: mdl-34898797

ABSTRACT

Following the COVID-19 pandemic, there has been an increase in interest in using digital resources to contain pandemics. To prevent, detect, monitor, regulate, track, and manage diseases, predict outbreaks, and support data analysis and decision-making, a variety of digital technologies are used, ranging from applications focused on artificial intelligence (AI)-powered machine learning (ML) or deep learning (DL) to blockchain technology and big data analytics enabled by cloud computing and the Internet of Things (IoT). In this paper, we look at how emerging technologies such as the IoT and sensors, AI, ML, DL, blockchain, augmented reality, virtual reality, cloud computing, big data, robots and drones, intelligent mobile apps, and 5G are advancing health care and paving the way to combat the COVID-19 pandemic. The aim of this research is to examine possible technologies, processes, and tools for addressing COVID-19 issues such as pre-screening, early detection, monitoring infected or quarantined individuals, forecasting future infection rates, and more. We also look at the research opportunities that have arisen as a result of using emerging technology to handle the COVID-19 crisis.

8.
Environ Res ; 199: 111370, 2021 08.
Article in English | MEDLINE | ID: mdl-34043971

ABSTRACT

Heavy metal ions in aqueous solutions are considered one of the most harmful environmental issues, as they seriously affect human health. Pb(II) is a common heavy-metal pollutant found in industrial wastewater, and various methods have been developed to remove it. Adsorption is an efficient, inexpensive, and eco-friendly method for removing Pb(II) from aqueous solutions. The removal efficiency depends on the process parameters (initial concentration, adsorbent dosage of T-Fe3O4 nanocomposites, residence time, and adsorbent pH), and the relationship between these parameters and the output is non-linear and complex. The purpose of the present study is to develop an artificial neural network (ANN) model to estimate and analyze the relationship between Pb(II) removal and the adsorption process parameters. The model was trained with the backpropagation algorithm and validated with unseen datasets. The adjusted R² value for the total dataset is 0.991. The relationship between the parameters and Pb(II) removal was analyzed by sensitivity analysis and by creating a virtual adsorption process. The study determined that ANN modeling is a reliable tool for predicting and optimizing adsorption process parameters for maximum lead removal from aqueous solutions.
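A minimal sketch of such an ANN regression model follows, assuming scikit-learn's MLPRegressor and synthetic data in place of the paper's experimental measurements; the network size and the invented input ranges are illustrative only.

```python
# Sketch: four adsorption process parameters mapped to Pb(II) removal efficiency
# with a backpropagation-trained neural network. Data and sizes are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(42)
# Columns: initial concentration, adsorbent dosage (T-Fe3O4), residence time, pH.
X = rng.uniform([10, 0.1, 5, 2], [200, 2.0, 120, 8], size=(300, 4))
# Hypothetical removal efficiency (%) with a nonlinear-ish dependence plus noise.
y = 95 - 0.1 * X[:, 0] + 10 * X[:, 1] + 0.05 * X[:, 2] + 1.5 * X[:, 3] \
    + rng.normal(0, 1.5, size=300)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=5000, random_state=0),
)
model.fit(X, y)
print("R^2 on training data:", round(model.score(X, y), 3))
```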


Subject(s)
Nanocomposites , Water Pollutants, Chemical , Adsorption , Ferric Compounds , Humans , Hydrogen-Ion Concentration , Kinetics , Lead , Neural Networks, Computer , Solutions , Water Pollutants, Chemical/analysis
9.
Comput Intell Neurosci ; 2021: 8522839, 2021.
Article in English | MEDLINE | ID: mdl-34987569

ABSTRACT

Security of the software system is a prime focus area for software development teams. This paper explores data science methods to build a knowledge management system that can assist the software development team in ensuring that a secure software system is developed. Various approaches in this context are explored using data from insurance-domain software development. These approaches facilitate an understanding of the practical challenges associated with real-world implementation. This paper also discusses the capabilities of language modeling and its role in the knowledge system. The source code is modeled to build a deep software security analysis model. The proposed model can help software engineers build secure software by assessing the software's security during development. Extensive experiments show that the proposed models can efficiently exploit software language modeling capabilities to classify security vulnerabilities in software systems.
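One way to illustrate treating source code as text for vulnerability classification is a token-n-gram baseline; the toy snippets, labels, and the linear classifier below are assumptions for illustration and not the paper's deep language model or insurance-domain corpus.

```python
# Sketch: code snippets as text, character n-grams, linear classifier.
# Snippets and labels are invented toy examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

snippets = [
    'query = "SELECT * FROM users WHERE id = " + user_input',        # string-built SQL
    'cursor.execute("SELECT * FROM users WHERE id = %s", (uid,))',   # parameterized query
    'os.system("ping " + host)',                                      # shell command from input
    'subprocess.run(["ping", host], check=True)',                     # argument-list invocation
]
labels = [1, 0, 1, 0]   # 1 = potentially vulnerable pattern, 0 = safer pattern (toy labels)

clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),          # character n-grams over code
    LogisticRegression(max_iter=1000),
)
clf.fit(snippets, labels)
print(clf.predict(['cmd = "rm -rf " + path']))   # likely flagged as the vulnerable class
```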


Subject(s)
Language , Software , Computer Security