Results 1 - 11 of 11
1.
Brain Res; 1806: 148300, 2023 May 1.
Article in English | MEDLINE | ID: mdl-36842569

ABSTRACT

Irregular growth of cells within the skull is recognized as a brain tumor, which can be of two types: benign and malignant. Oncologists use various methods, such as blood tests and visual assessments, to assess the presence of brain tumors. Moreover, the noninvasive magnetic resonance imaging (MRI) technique, which involves no ionizing radiation, is commonly used for diagnosis. However, segmentation of 3-dimensional MRI is time-consuming, and the outcome depends mainly on the operator's experience. Therefore, a novel and robust automated brain tumor detector based on segmentation and feature fusion is proposed. To improve the localization results, the images were pre-processed with a Gaussian filter (GF) and SynthStrip, a brain skull-stripping tool. Two well-known benchmarks, Figshare and Harvard, were used for training and testing. The proposed methodology attained 99.8% accuracy, 99.3% recall, 99.4% precision, 99.5% F1 score, and 0.989 AUC. A comparative analysis against prevailing deep learning, classical, and segmentation-based approaches was performed. Additionally, cross-validation on the Harvard dataset attained 99.3% identification accuracy. The results show that the proposed approach outperforms existing methods.
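As an illustration only (not code from the paper), the following minimal sketch shows the kind of Gaussian pre-filtering step the abstract mentions, assuming a single 2-D MRI slice held as a NumPy array; the random input stands in for real Figshare or Harvard data.

```python
# Illustrative sketch (not from the paper): Gaussian pre-filtering of an MRI slice,
# one of the pre-processing steps mentioned before skull stripping.
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess_slice(slice_2d: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Denoise a single 2-D MRI slice with a Gaussian filter and rescale to [0, 1]."""
    smoothed = gaussian_filter(slice_2d.astype(np.float32), sigma=sigma)
    lo, hi = smoothed.min(), smoothed.max()
    return (smoothed - lo) / (hi - lo + 1e-8)

# Synthetic data standing in for a real MRI slice.
noisy = np.random.rand(256, 256).astype(np.float32)
print(preprocess_slice(noisy).shape)  # (256, 256)
```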


Subjects
Brain Neoplasms; Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Algorithms; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/pathology; Brain/diagnostic imaging; Brain/pathology; Magnetic Resonance Imaging/methods
2.
Comput Intell Neurosci; 2022: 3019194, 2022.
Article in English | MEDLINE | ID: mdl-35463246

ABSTRACT

A novel multimodal biometric system using the three-dimensional (3D) face and ear is proposed for human recognition. The proposed model overcomes the drawbacks of unimodal biometric systems and mitigates 2D biometric problems such as occlusion and illumination. In the proposed model, principal component analysis (PCA) is first used for 3D face recognition. Thereafter, the iterative closest point (ICP) algorithm is used for 3D ear recognition. Finally, the 3D face and 3D ear are combined using score-level fusion. Simulations are performed on the Face Recognition Grand Challenge database and the University of Notre Dame Collection F database for the 3D face and 3D ear datasets, respectively. Experimental results reveal that the proposed model achieves an accuracy of 99.25% using the proposed score-level fusion. Comparative analyses show that the proposed method outperforms other state-of-the-art biometric algorithms in terms of accuracy.
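For readers unfamiliar with score-level fusion, here is a minimal, hypothetical sketch of the idea (not the authors' implementation): match scores from the face and ear matchers are min-max normalized and combined with a weighted sum; the weight and scores are invented for illustration.

```python
# Illustrative sketch: score-level fusion of two matchers, assuming each produces
# one similarity score per gallery subject.
import numpy as np

def min_max_normalize(scores: np.ndarray) -> np.ndarray:
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo + 1e-8)

def fuse_scores(face_scores, ear_scores, w_face: float = 0.5) -> int:
    """Weighted-sum fusion of normalized 3D-face and 3D-ear match scores."""
    fused = (w_face * min_max_normalize(np.asarray(face_scores, float))
             + (1 - w_face) * min_max_normalize(np.asarray(ear_scores, float)))
    return int(np.argmax(fused))  # index of the best-matching gallery subject

print(fuse_scores([0.2, 0.9, 0.4], [0.3, 0.7, 0.8]))  # 1
```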


Subjects
Biometric Identification; Biometry; Algorithms; Biometric Identification/methods; Biometry/methods; Face/anatomy & histology; Humans; Principal Component Analysis
3.
Comput Intell Neurosci; 2022: 8777355, 2022.
Article in English | MEDLINE | ID: mdl-35378817

ABSTRACT

Sign language is the native language of deaf people, which they use in daily life and which facilitates communication among them. Sign language refers to the use of the arms and hands to communicate, particularly among those who are deaf, and it varies with the person and the region they come from. As a result, there is no single standard sign language; for example, American, British, Chinese, and Arabic sign languages are all distinct. In this study, we trained a model to classify Arabic sign language, which consists of 32 Arabic alphabet sign classes. In images, sign language is detected through the pose of the hand. We propose a framework consisting of two CNN models, each trained individually on the training set, whose final predictions are ensembled to achieve higher accuracy. The dataset used in this study, ArSL2018, was released in 2019 by Prince Mohammad Bin Fahd University, Al Khobar, Saudi Arabia. The main contribution of this study is the preprocessing pipeline: resizing the images to 64 x 64 pixels, converting grayscale images to three-channel images, and then applying a median filter, which acts as low-pass filtering to smooth the images, reduce noise, and make the model more robust against overfitting. The preprocessed images are fed into two different models, ResNet50 and MobileNetV2, which are used together. On the test set for the whole dataset, we achieved an accuracy of about 97% after applying these preprocessing techniques, model-specific hyperparameters, and various data augmentation techniques.
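Below is an illustrative sketch (not the study's code) of the described preprocessing steps and of averaging two models' class probabilities as a simple ensemble; it assumes OpenCV and NumPy, and the input image is synthetic.

```python
# Illustrative sketch: resize to 64x64, replicate grayscale to 3 channels,
# apply a median filter, then average two classifiers' softmax outputs.
import numpy as np
import cv2  # OpenCV

def preprocess(gray_img: np.ndarray) -> np.ndarray:
    resized = cv2.resize(gray_img, (64, 64), interpolation=cv2.INTER_AREA)
    denoised = cv2.medianBlur(resized, 3)           # median filter to suppress noise
    three_ch = cv2.cvtColor(denoised, cv2.COLOR_GRAY2RGB)
    return three_ch.astype(np.float32) / 255.0      # (64, 64, 3), values in [0, 1]

def ensemble_predict(probs_resnet: np.ndarray, probs_mobilenet: np.ndarray) -> int:
    """Average the two models' class probabilities and return the predicted class."""
    return int(np.argmax((probs_resnet + probs_mobilenet) / 2.0))

# Synthetic grayscale image standing in for an ArSL2018 sample.
fake = (np.random.rand(200, 200) * 255).astype(np.uint8)
print(preprocess(fake).shape)  # (64, 64, 3)
```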


Assuntos
Auxiliares de Comunicação para Pessoas com Deficiência , Gestos , Computadores , Humanos , Idioma , Língua de Sinais , Estados Unidos
4.
J Healthc Eng; 2022: 6005446, 2022.
Article in English | MEDLINE | ID: mdl-35388315

ABSTRACT

Human-computer interaction (HCI) has seen a paradigm shift from textual or display-based control toward more intuitive control modalities such as voice, gesture, and mimicry. Speech in particular carries a great deal of information about the speaker's inner state, aims, and desires. While word analysis enables the speaker's request to be understood, other speech features disclose the speaker's mood, purpose, and motive. As a result, emotion recognition from speech has become critical in current human-computer interaction systems. Moreover, findings from the several disciplines involved in emotion recognition are difficult to combine. Many sound analysis methods have been developed in the past, but they could not provide emotional analysis of live speech. Today, advances in artificial intelligence and the high performance of deep learning methods bring studies on live data to the fore. This study aims to detect emotions in the human voice using artificial intelligence methods. One of the most important requirements of artificial intelligence work is data. The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), an open-source dataset, was used in the study. The RAVDESS dataset contains more than 2000 recordings of speech and song by 24 actors, covering eight emotion classes: neutral, calm, happy, sad, angry, fearful, disgusted, and surprised. The multilayer perceptron (MLP) classifier, a widely used supervised learning algorithm, was chosen for classification. The proposed model's performance was compared with that of similar studies, and the results were evaluated. An overall accuracy of 81% was obtained for classifying the eight emotions with the proposed model on the RAVDESS dataset.
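As a hedged example of the classification step only (the study's exact features and hyperparameters are not given in the abstract), the sketch below trains scikit-learn's MLPClassifier on stand-in feature vectors for eight emotion classes; random data replace real features extracted from RAVDESS.

```python
# Illustrative sketch: an MLP classifier for 8 emotion classes, assuming MFCC-style
# feature vectors have already been extracted from the audio (e.g., with librosa).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 40))        # stand-in for 40-dimensional audio features
y = rng.integers(0, 8, size=2000)      # 8 emotions: neutral ... surprised

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(256, 128), max_iter=300, random_state=0)
clf.fit(X_tr, y_tr)
print("accuracy on held-out data:", clf.score(X_te, y_te))
```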


Assuntos
Inteligência Artificial , Fala , Computadores , Emoções , Feminino , Humanos , Masculino , Redes Neurais de Computação
5.
Comput Intell Neurosci; 2022: 7463091, 2022.
Article in English | MEDLINE | ID: mdl-35401731

ABSTRACT

Emotions play an essential role in human relationships, and many real-time applications rely on interpreting the speaker's emotion from their words. Speech emotion recognition (SER) modules aid human-computer interface (HCI) applications, but they are challenging to implement because of the lack of balanced training data and the lack of clarity about which features are sufficient for categorization. This research examines how the classification approach, the choice of feature combination, and data augmentation affect speech emotion detection accuracy. Selecting the right combination of handcrafted features for the classifier plays an integral part in reducing computational complexity. The suggested classification model, a 1D convolutional neural network (1D CNN), outperforms traditional machine learning approaches. Unlike most earlier studies, which examined emotions primarily through a single-language lens, our analysis covers multiple language datasets. With the most discriminating features and data augmentation, our technique achieves 97.09%, 96.44%, and 83.33% accuracy on the BAVED, ANAD, and SAVEE datasets, respectively.
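The abstract does not specify the 1D CNN architecture, so the following is only a plausible Keras sketch of a small 1D CNN over fixed-length handcrafted feature sequences; the input shape and class count are assumptions.

```python
# Illustrative sketch: a compact 1D CNN for speech emotion classification.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 7          # assumption: number of emotion labels in the target dataset
SEQ_LEN, N_FEATS = 200, 40

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, N_FEATS)),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(128, kernel_size=5, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```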


Assuntos
Redes Neurais de Computação , Fala , Computadores , Emoções , Humanos , Idioma
6.
J Healthc Eng; 2022: 8732213, 2022.
Article in English | MEDLINE | ID: mdl-35273786

ABSTRACT

Telehealth and remote patient monitoring (RPM) have been critical components that have received substantial attention and gained traction since the beginning of the pandemic. Telehealth and RPM allow easy access to patient data and help provide high-quality care to patients at low cost. This article proposes an Intelligent Remote Patient Activity Tracking System that can monitor patient activities and vitals during those activities based on the attached sensors. An Internet of Things (IoT)-enabled health monitoring device is designed that uses machine learning models to track patient activities such as running, sleeping, walking, and exercising; vitals during those activities, such as body temperature and heart rate; and the patient's breathing pattern during those activities. Machine learning models are used to identify the patient's different activities and to analyze the patient's respiratory health during them. Currently, the machine learning models detect only cough and healthy breathing. A web application is also designed to track the data uploaded by the proposed devices.
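As a rough, hypothetical sketch of the activity-recognition component (the paper's sensors, features, and models are not detailed in the abstract), the code below computes simple statistics over fixed-length sensor windows and classifies them with a random forest on synthetic data.

```python
# Illustrative sketch: classifying fixed-length windows of wearable-sensor readings
# into activities; the windows and labels here are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

ACTIVITIES = ["sleeping", "walking", "running", "exercising"]

def window_features(window: np.ndarray) -> np.ndarray:
    """Per-channel mean and standard deviation for one window of shape (samples, channels)."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

rng = np.random.default_rng(1)
windows = rng.normal(size=(500, 128, 4))              # 500 windows, 128 samples, 4 channels
labels = rng.integers(0, len(ACTIVITIES), size=500)   # stand-in activity labels

X = np.array([window_features(w) for w in windows])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(ACTIVITIES[clf.predict(X[:1])[0]])
```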


Assuntos
Internet das Coisas , Telemedicina , Inteligência Artificial , Humanos , Aprendizado de Máquina , Monitorização Fisiológica
7.
Sensors (Basel); 22(6), 2022 Mar 19.
Article in English | MEDLINE | ID: mdl-35336548

ABSTRACT

Recognizing human emotions by machine is a complex task. Deep learning models attempt to automate this process by giving machines learning capabilities; however, identifying human emotions from speech with good performance is still challenging. With the advent of deep learning algorithms, this problem has been addressed recently, but most past research relied on a single method of feature extraction for training. In this research, we explore two different methods of extracting features for effective speech emotion recognition. First, two-way feature extraction is proposed, using super-convergence to extract two sets of potential features from the speech data. For the first feature set, principal component analysis (PCA) is applied, and a deep neural network (DNN) with dense and dropout layers is then implemented. In the second approach, mel-spectrogram images are extracted from the audio files, and the 2D images are given as input to a pre-trained VGG-16 model. Extensive experiments and an in-depth comparative analysis of both feature extraction methods, with multiple algorithms and two datasets, are performed in this work. On the RAVDESS dataset, the mel-spectrogram approach provided significantly better accuracy than using numeric features with a DNN.
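The following sketch illustrates the second approach in spirit only (it is not the paper's pipeline): a mel-spectrogram is computed with librosa, resized to 224 x 224, replicated to three channels, and passed through a pre-trained VGG-16 feature extractor; a synthetic tone stands in for a real audio clip.

```python
# Illustrative sketch: mel-spectrogram "image" features through pre-trained VGG-16.
import numpy as np
import librosa
import tensorflow as tf

sr = 22050
t = np.linspace(0, 2.0, int(sr * 2.0), endpoint=False)
y = 0.5 * np.sin(2 * np.pi * 440 * t)                  # synthetic tone standing in for speech

mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
mel_db = librosa.power_to_db(mel, ref=np.max)
img = (mel_db - mel_db.min()) / (mel_db.max() - mel_db.min() + 1e-8)
img = tf.image.resize(img[..., np.newaxis], (224, 224)).numpy()
img = np.repeat(img, 3, axis=-1)                       # (224, 224, 3)

vgg = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                  input_shape=(224, 224, 3))
features = vgg.predict(img[np.newaxis, ...])
print(features.shape)                                  # (1, 7, 7, 512)
```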


Assuntos
Aprendizado Profundo , Fala , Algoritmos , Emoções , Humanos , Redes Neurais de Computação
8.
Comput Intell Neurosci; 2022: 2103975, 2022.
Article in English | MEDLINE | ID: mdl-35116063

ABSTRACT

Drones can be used to detect groups of people who are unmasked or do not maintain social distance. In this paper, a deep learning-enabled drone is designed for mask detection and social distance monitoring. A drone is an unmanned system that can be automated. This system focuses on Industrial Internet of Things (IIoT) monitoring using a Raspberry Pi 4. The drone automation system sends alerts to people via a speaker to remind them to maintain social distance. The system captures images and detects unmasked persons using a faster regions with convolutional neural network (faster R-CNN) model. When the system detects unmasked persons, it sends their details to the respective authorities and the nearest police station. The trained model covers the majority of face detection cases across different benchmark datasets. An OpenCV camera provides 24/7 service and daily reports using the Raspberry Pi 4 and the faster R-CNN algorithm.
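As an illustration only, the sketch below runs a pre-trained Faster R-CNN from torchvision to detect persons and computes pairwise box-centre distances as a crude, pixel-space stand-in for the social-distance check; it is not the deployed drone software, and the random frame replaces a real camera capture.

```python
# Illustrative sketch: person detection with a pre-trained Faster R-CNN.
import numpy as np
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# torchvision >= 0.13 API; older versions use pretrained=True instead of weights.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def detect_people(frame: Image.Image, score_thr: float = 0.7) -> torch.Tensor:
    """Return bounding boxes of detected persons (COCO class 1) above the threshold."""
    with torch.no_grad():
        out = model([to_tensor(frame)])[0]
    keep = (out["labels"] == 1) & (out["scores"] > score_thr)
    return out["boxes"][keep]

# Synthetic frame standing in for an image captured by the drone camera.
frame = Image.fromarray((np.random.rand(480, 640, 3) * 255).astype("uint8"))
boxes = detect_people(frame)
centres = (boxes[:, :2] + boxes[:, 2:]) / 2
print(torch.cdist(centres, centres))    # pairwise pixel distances between detected people
```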


Assuntos
Internet das Coisas , Algoritmos , Humanos , Redes Neurais de Computação
9.
Expert Syst; 39(6): e12834, 2022 Jul.
Article in English | MEDLINE | ID: mdl-34898797

ABSTRACT

Following the COVID-19 pandemic, there has been an increase in interest in using digital resources to contain pandemics. To avoid, detect, monitor, regulate, track, and manage diseases, predict outbreaks and conduct data analysis and decision-making processes, a variety of digital technologies are used, ranging from artificial intelligence (AI)-powered machine learning (ML) or deep learning (DL) focused applications to blockchain technology and big data analytics enabled by cloud computing and the internet of things (IoT). In this paper, we look at how emerging technologies such as the IoT and sensors, AI, ML, DL, blockchain, augmented reality, virtual reality, cloud computing, big data, robots and drones, intelligent mobile apps, and 5G are advancing health care and paving the way to combat the COVID-19 pandemic. The aim of this research is to look at possible technologies, processes, and tools for addressing COVID-19 issues such as pre-screening, early detection, monitoring infected/quarantined individuals, forecasting future infection rates, and more. We also look at the research possibilities that have arisen as a result of the use of emerging technology to handle the COVID-19 crisis.

10.
Environ Res; 199: 111370, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34043971

ABSTRACT

Heavy metal ions in aqueous solutions are considered one of the most harmful environmental issues, seriously affecting human health. Pb(II) is a common heavy-metal pollutant found in industrial wastewater, and various methods have been developed to remove it. Adsorption is an efficient, cheap, and eco-friendly method for removing Pb(II) from aqueous solutions. The removal efficiency depends on the process parameters (initial concentration, adsorbent dosage of T-Fe3O4 nanocomposites, residence time, and adsorbent pH), and the relationship between these parameters and the output is non-linear and complex. The purpose of the present study is to develop an artificial neural network (ANN) model to estimate and analyze the relationship between Pb(II) removal and the adsorption process parameters. The model was trained with the backpropagation algorithm and validated on unseen datasets. The adjusted R2 for the complete dataset is 0.991. The relationship between the parameters and Pb(II) removal was analyzed by sensitivity analysis and by creating a virtual adsorption process. The study determined that ANN modeling is a reliable tool for predicting and optimizing adsorption process parameters for maximum lead removal from aqueous solutions.
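Since the abstract gives only the four input parameters and the training algorithm, the following is a hedged sketch of such an ANN using scikit-learn's backpropagation-trained MLPRegressor on synthetic data; the synthetic response function is invented purely so the example runs.

```python
# Illustrative sketch: a small ANN mapping (initial concentration, adsorbent dosage,
# residence time, pH) to Pb(II) removal efficiency on synthetic data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.uniform([10, 0.1, 5, 2], [200, 2.0, 120, 8], size=(300, 4))   # synthetic inputs
# Invented smooth response standing in for measured removal efficiency (%):
y = 100 / (1 + np.exp(-(0.02 * X[:, 1] * X[:, 2] - 0.01 * X[:, 0] + 0.3 * X[:, 3] - 2)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0))
ann.fit(X_tr, y_tr)
print("R^2 on held-out data:", ann.score(X_te, y_te))
```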


Assuntos
Nanocompostos , Poluentes Químicos da Água , Adsorção , Compostos Férricos , Humanos , Concentração de Íons de Hidrogênio , Cinética , Chumbo , Redes Neurais de Computação , Soluções , Poluentes Químicos da Água/análise
11.
Comput Intell Neurosci; 2021: 8522839, 2021.
Article in English | MEDLINE | ID: mdl-34987569

ABSTRACT

Security of the software system is a prime focus area for software development teams. This paper explores data science methods to build a knowledge management system that can assist the software development team in ensuring that a secure software system is being developed. Various approaches in this context are explored using data from insurance domain-based software development. These approaches facilitate an understanding of the practical challenges associated with real-world implementation. This paper also discusses the capabilities of language modeling and its role in the knowledge system. The source code is modeled to build a deep software security analysis model. The proposed model can help software engineers build secure software by assessing software security during development. Extensive experiments show that the proposed models can efficiently exploit software language modeling capabilities to classify the security vulnerabilities of software systems.
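As a deliberately simple, hypothetical stand-in for the deep language model described here, the sketch below treats code snippets as text and classifies them with TF-IDF features and logistic regression; the snippets and labels are invented toy examples, not data from the study.

```python
# Illustrative sketch: source code treated as text for vulnerability classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

snippets = [
    'query = "SELECT * FROM users WHERE id = " + user_input',      # string-built SQL
    'cursor.execute("SELECT * FROM users WHERE id = %s", (uid,))',  # parameterized query
    'os.system("ping " + host)',                                    # shell string concat
    'subprocess.run(["ping", host], check=True)',                   # argument list
]
labels = [1, 0, 1, 0]   # 1 = potentially vulnerable, 0 = safer pattern (toy labels)

clf = make_pipeline(TfidfVectorizer(token_pattern=r"[A-Za-z_]+"), LogisticRegression())
clf.fit(snippets, labels)
print(clf.predict(['db.execute("DELETE FROM t WHERE id = " + raw_id)']))
```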


Subjects
Language; Software; Computer Security