Results 1 - 7 of 7
1.
Sensors (Basel) ; 24(12)2024 Jun 09.
Article in English | MEDLINE | ID: mdl-38931541

ABSTRACT

Driving while drowsy poses significant risks, including reduced cognitive function and accidents that can lead to severe consequences such as trauma, economic losses, injuries, or death. Artificial intelligence can enable effective detection of driver drowsiness, helping to prevent accidents and enhance driver performance. This research addresses the need for real-time, accurate drowsiness detection to mitigate the impact of fatigue-related accidents. Ultra-wideband radar data collected over five minutes were segmented into one-minute chunks and transformed into grayscale images. Spatial features were extracted from the images using a two-dimensional convolutional neural network and then used to train and test multiple machine learning classifiers. The ensemble classifier RF-XGB-SVM, which combines Random Forest, XGBoost, and Support Vector Machine under a hard voting criterion, performed best with an accuracy of 96.6%. The approach was further validated with a robust k-fold score of 97% and a standard deviation of 0.018. Augmenting the dataset with Generative Adversarial Networks improved the accuracy of all models; among them, RF-XGB-SVM again outperformed the rest, with an accuracy of 99.58%.
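The hard-voting ensemble described above can be sketched with scikit-learn's `VotingClassifier`. This is a minimal illustration, not the paper's implementation: the radar dataset is not public, so synthetic features stand in for the CNN-extracted image features, and scikit-learn's `GradientBoostingClassifier` stands in for XGBoost to keep the example dependency-free.

```python
# Hedged sketch of an RF + boosting + SVM hard-voting ensemble.
# Synthetic data stands in for the CNN features from the radar images;
# GradientBoostingClassifier stands in for XGBoost.
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.svm import SVC

# Stand-in for the per-chunk feature vectors (real dataset not public).
X, y = make_classification(n_samples=300, n_features=64, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("gb", GradientBoostingClassifier(random_state=0)),
                ("svm", SVC(random_state=0))],
    voting="hard")  # majority vote across the three models
ensemble.fit(X_tr, y_tr)
acc = ensemble.score(X_te, y_te)

# k-fold validation, as the abstract reports (here k = 5).
scores = cross_val_score(ensemble, X, y, cv=5)
```

With hard voting, each base model casts one vote per sample and the majority class wins; soft voting (averaging predicted probabilities) is the usual alternative when all estimators expose `predict_proba`.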


Subjects
Artificial Intelligence , Automobile Driving , Neural Networks, Computer , Radar , Support Vector Machine , Humans , Algorithms , Machine Learning
2.
Cogn Neurodyn ; 17(5): 1229-1259, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37786662

ABSTRACT

Driving a vehicle is a complex, multidimensional, and potentially risky activity that demands full mobilization of physiological and cognitive abilities. Drowsiness, often caused by stress, fatigue, or illness, degrades the cognitive capabilities that driving depends on and causes many accidents. Drowsiness-related road accidents are associated with trauma, physical injuries, and fatalities, and are often accompanied by economic loss. Drowsiness-related crashes are most common among young people and night-shift workers. Real-time, accurate driver drowsiness detection is necessary to bring down the drowsy-driving accident rate. Many researchers have developed systems that detect drowsiness using features related to the vehicle, the driver's behavior, or physiological measures. In view of the rising use of physiological measures, this study presents a comprehensive and systematic review of recent techniques for detecting driver drowsiness from physiological signals. Various sensors augmented with machine learning have been utilized, generally yielding better results. These techniques are analyzed with respect to several aspects, such as the data collection sensor, the environment (controlled or dynamic), and the experimental setup (real traffic or driving simulators). By examining the types of sensors involved, the study discusses the advantages and disadvantages of existing work and points out research gaps. Finally, future research directions are provided for drowsiness detection techniques based on physiological signals.

3.
Diagnostics (Basel) ; 13(18)2023 Sep 08.
Article in English | MEDLINE | ID: mdl-37761248

ABSTRACT

A novel approach is presented in this study for the classification of lower limb disorders, with a specific emphasis on the knee, hip, and ankle. The research employs gait analysis and the extraction of PoseNet features from video data to identify and categorize these disorders. The PoseNet algorithm extracts key body joint movements and positions from videos in a non-invasive and user-friendly manner, offering a comprehensive representation of lower limb movement. The extracted features are standardized and used as inputs to a range of machine learning algorithms, including Random Forest, Extra Tree Classifier, Multilayer Perceptron, Artificial Neural Networks, and Convolutional Neural Networks. The models are trained and tested on a dataset of 174 patients and normal individuals collected at the Tehsil Headquarter Hospital Sadiq Abad, and their performance is evaluated with k-fold cross-validation. The findings show a notable level of accuracy and precision in the classification of various lower limb disorders; the Artificial Neural Networks model achieves the highest accuracy, 98.84%. The proposed methodology offers a non-invasive and efficient way to analyze gait patterns and identify particular conditions, with potential to enhance the diagnosis and treatment planning of lower limb disorders.
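The standardize-then-classify pipeline with k-fold evaluation described above can be sketched as follows. This is an assumption-laden toy: random arrays stand in for the PoseNet keypoint features (here imagined as 17 joints with x/y coordinates), the four class labels are illustrative, and the MLP hyperparameters are not from the paper.

```python
# Toy sketch of the abstract's pipeline: standardize joint features,
# then evaluate a classifier with k-fold cross-validation.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(174, 34))      # 174 subjects; 17 joints x (x, y), invented
y = rng.integers(0, 4, size=174)    # e.g. normal / knee / hip / ankle, invented

clf = make_pipeline(
    StandardScaler(),               # the standardization step the abstract mentions
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0))

scores = cross_val_score(clf, X, y, cv=5)   # k-fold evaluation (k = 5 assumed)
```

Because the features here are random noise, the cross-validation scores will hover near chance; with real gait features the same pipeline structure applies unchanged.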

4.
Diagnostics (Basel) ; 13(6)2023 Mar 14.
Article in English | MEDLINE | ID: mdl-36980404

ABSTRACT

Chronic obstructive pulmonary disease (COPD) is a severe, chronic ailment that currently ranks as the third most common cause of mortality worldwide. COPD patients often experience debilitating symptoms such as chronic coughing, shortness of breath, and fatigue. The disease frequently goes undiagnosed until it is too late, leaving patients without the care they need, so detection at an early stage is crucial to prevent further lung damage and improve quality of life. Traditional COPD detection relies on physical examinations and tests such as spirometry, chest radiography, blood gas tests, and genetic tests, but these methods are not always accurate or accessible. One key vital sign for detecting COPD is the patient's respiration rate; however, a patient's medical and demographic characteristics should be considered simultaneously for better detection results. To address this, this study detects COPD patients using artificial intelligence techniques. A novel framework is proposed that uses ultra-wideband (UWB) radar-based temporal and spectral features to build machine learning and deep learning models. This new set of temporal and spectral features is extracted from respiration data collected non-invasively with a UWB radar at a distance of 1.5 m. Different machine learning and deep learning models are trained and tested on the collected dataset, with a high accuracy score of 100% for COPD detection, meaning the proposed framework could potentially identify COPD patients at an early stage. K-fold cross-validation and performance comparison with state-of-the-art studies are applied to validate the robustness and reliability of the results, suggesting the framework's potential for efficient early-stage detection of COPD.
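A sketch of what "temporal and spectral features from respiration data" can look like in practice. The paper does not specify its exact feature set, so the features below (summary statistics plus the dominant FFT frequency) and the sampling rate are assumptions, and a synthetic sine wave stands in for the radar-derived chest displacement.

```python
# Illustrative temporal and spectral features from a respiration waveform.
# Signal, sampling rate, and feature choices are assumptions for the sketch.
import numpy as np

fs = 20.0                                   # assumed sampling rate, Hz
t = np.arange(0, 60, 1 / fs)                # one minute of data
breath = np.sin(2 * np.pi * 0.25 * t)       # synthetic 15-breaths/min signal

# Temporal features: simple summary statistics of the chest displacement.
temporal = {"mean": breath.mean(),
            "std": breath.std(),
            "ptp": np.ptp(breath)}          # peak-to-peak amplitude

# Spectral feature: dominant breathing frequency from the FFT magnitude.
spec = np.abs(np.fft.rfft(breath))
freqs = np.fft.rfftfreq(breath.size, d=1 / fs)
dominant_hz = freqs[spec[1:].argmax() + 1]  # skip the DC bin
rpm = dominant_hz * 60                      # breaths per minute
```

For the 0.25 Hz synthetic signal this recovers 15 breaths per minute; a feature vector concatenating such temporal and spectral values per recording would then feed the classifiers.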

5.
Sensors (Basel) ; 22(20)2022 Oct 20.
Article in English | MEDLINE | ID: mdl-36298382

ABSTRACT

Noisy environments, changes and variations in speech volume, and non-face-to-face conversations impair the user experience with hearing aids. Generally, a hearing aid amplifies sounds so that a hearing-impaired person can listen, converse, and actively engage in daily activities. Sophisticated hearing aid algorithms now operate on numerous frequency bands to not only amplify sound but also provide tuning and noise filtering that minimize background distractions. One of these is the BioAid assistive hearing system, an open-source, freely downloadable app with twenty-four tuning settings. Critically, with this device, a person suffering from hearing loss must manually change the settings when their surroundings change in order to attain a comfortable level of hearing. This manual switching among multiple tuning settings is inconvenient and cumbersome, since the user is forced to switch to the state that best matches the scene every time the auditory environment changes. The goal of this study is to eliminate this manual switching and automate BioAid with a scene classification algorithm, so that the system automatically applies the user-selected preferences after adequate training. The aim of acoustic scene classification is to recognize the audio signature of one of the predefined scene classes that best represents the environment in which the audio was recorded. BioAid, an open-source, biologically inspired hearing aid algorithm, is used after conversion to Python. The proposed method consists of two main parts: classification of auditory scenes and selection of hearing aid tuning settings based on user experience. The DCASE2017 dataset is utilized for scene classification; among the many classifiers trained and tested, random forest achieved the highest accuracy, 99.7%. In the second part, clean speech audio from the LJ Speech dataset is combined with scenes, and users are asked to listen to the resulting audio and adjust the presets and subsets. A CSV file stores the preset and subset selections at which the user hears clearly against each scene, and various classifiers are trained on this dataset of user preferences. After training, clean speech audio is convolved with a scene and fed to the scene classifier, which predicts the scene; the predicted scene is then fed to the preset classifier, which predicts the user's choice of preset and subset, and BioAid is automatically tuned to that selection. The accuracy of random forest in predicting presets and subsets was 100%. This approach has great potential to eliminate the tedious manual switching of hearing assistive device parameters, allowing hearing-impaired individuals to participate actively in daily life while hearing aid settings adjust automatically to the acoustic scene.
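The two-stage automation described above (scene classifier feeding a preset classifier) can be sketched as two chained random forests. Everything here is invented for illustration: the acoustic features, the number of scene classes, and the toy scene-to-preset mapping are stand-ins for the DCASE2017 features and the user's stored CSV preferences.

```python
# Hedged sketch of the two-stage pipeline: predict the acoustic scene,
# then predict the user's preset choice from the predicted scene.
# All data and mappings below are invented stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_audio = rng.normal(size=(200, 40))    # stand-in acoustic features (e.g. MFCCs)
scene = rng.integers(0, 4, size=200)    # 4 toy scene classes
preset = scene % 3                      # toy user-preference mapping

# Stage 1: scene classifier, trained on audio features.
scene_clf = RandomForestClassifier(random_state=0).fit(X_audio, scene)
# Stage 2: preset classifier, trained on (scene label -> chosen preset).
preset_clf = RandomForestClassifier(random_state=0).fit(
    scene.reshape(-1, 1), preset)

# At run time: audio -> predicted scene -> predicted preset.
pred_scene = scene_clf.predict(X_audio[:1])
pred_preset = preset_clf.predict(pred_scene.reshape(-1, 1))
```

The design point is the chaining: the second classifier never sees audio, only the first classifier's scene label, which mirrors how the abstract routes the predicted scene into the preset predictor.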


Subjects
Hearing Aids , Hearing Loss , Speech Perception , Humans , Noise , Hearing Loss/therapy , Acoustics
6.
Sensors (Basel) ; 21(24)2021 Dec 13.
Article in English | MEDLINE | ID: mdl-34960430

ABSTRACT

Emotion recognition has recently gained prominent attention from a multitude of fields due to its wide use in human-computer interaction, therapy, and advanced robotics. Human speech, gestures, facial expressions, and physiological signals can all be used to recognize different emotions. Despite their discriminating properties, the first three methods are regarded as less reliable because humans can voluntarily or involuntarily conceal their real emotions. Physiological signals, on the other hand, can provide more objective and reliable emotion recognition. Several physiological-signal-based methods have been introduced, yet such approaches are predominantly invasive, involving the placement of on-body sensors, and their efficacy and accuracy are hindered by sensor malfunction and erroneous data caused by limb movement. This study presents a non-invasive approach in which machine learning complements impulse radio ultra-wideband (IR-UWB) signals for emotion recognition. First, the feasibility of using IR-UWB for emotion recognition is analyzed, followed by classification of emotional state as happiness, disgust, or fear. These emotions are triggered in human subjects, both male and female, using carefully selected video clips. The convincing evidence that different breathing patterns are linked to different emotions is leveraged to discriminate between them. Chest movement of thirty-five subjects is obtained with IR-UWB radar while they watch the video clips in solitude, and extensive signal processing is applied to the chest movement signals to estimate respiration rate per minute (RPM). The RPM estimated by the algorithm is validated against repeated measurements with a commercially available pulse oximeter. A dataset comprising gender, RPM, age, and the associated emotion is maintained and used with several machine learning algorithms for automatic recognition of human emotions. Experiments reveal that IR-UWB can differentiate between human emotions with a decent accuracy of 76% without any on-body sensors. Separate analysis of male and female participants reveals that males experience high arousal for happiness while females experience intense fear; for disgust, no large difference is found between male and female participants. To the best of the authors' knowledge, this study presents the first non-invasive approach using IR-UWB radar for emotion recognition.
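The RPM-estimation step can be sketched as peak counting on the chest-displacement signal. The study applied more extensive signal processing to the raw IR-UWB returns; this minimal stand-in uses a synthetic waveform and assumed sampling rate and thresholds.

```python
# Minimal sketch of respiration-rate (RPM) estimation: count breathing
# peaks in one minute of chest displacement. Signal and thresholds assumed.
import numpy as np
from scipy.signal import find_peaks

fs = 20.0                                   # assumed sampling rate, Hz
t = np.arange(0, 60, 1 / fs)                # one-minute window
chest = np.sin(2 * np.pi * 0.3 * t)         # synthetic 18-breaths/min signal
chest += 0.05 * np.random.default_rng(0).normal(size=t.size)  # mild noise

# One peak per breath; `distance` enforces a minimum plausible breath period
# (2 s here) so noise ripples near a maximum are not double-counted.
peaks, _ = find_peaks(chest, height=0.5, distance=fs * 2)
rpm = len(peaks)                            # peaks in a one-minute window
```

On real radar data, bandpass filtering around typical breathing frequencies (roughly 0.1 to 0.5 Hz) before peak detection is the usual extra step.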


Subjects
Radar , Signal Processing, Computer-Assisted , Emotions , Female , Humans , Machine Learning , Male , Respiration
7.
Sensors (Basel) ; 21(14)2021 Jul 15.
Article in English | MEDLINE | ID: mdl-34300572

ABSTRACT

Drowsiness when in command of a vehicle leads to a decline in cognitive performance that affects driver behavior, potentially causing accidents. Drowsiness-related road accidents lead to severe trauma, economic consequences, impact on others, physical injury, and even death. Real-time, accurate driver drowsiness detection and warning systems are necessary to reduce tiredness-related driving accident rates. The research presented here classifies drowsy and non-drowsy driver states based on respiration rate detected by non-invasive, non-touch, impulse radio ultra-wideband (IR-UWB) radar. Chest movements of 40 subjects were acquired for 5 min each using a lab-placed IR-UWB radar system, and respiration per minute was extracted from the resulting signals. A structured dataset was obtained comprising respiration per minute, age, and label (drowsy/non-drowsy). Several machine learning models, namely Support Vector Machine, Decision Tree, Logistic Regression, Gradient Boosting Machine, Extra Tree Classifier, and Multilayer Perceptron, were trained on the dataset; the Support Vector Machine showed the best accuracy, 87%. This research provides a ground truth for verifying and assessing the effective use of UWB for respiration-based driver drowsiness detection.
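The final classification step on the structured (respiration-per-minute, age) records can be sketched with a scaled SVM. The values below are invented: the class means assume slower breathing when drowsy, which is an illustration device, not data from the study.

```python
# Toy sketch of an SVM on structured (RPM, age) drowsiness records.
# All numbers are invented stand-ins for the study's 40-subject dataset.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
rpm = np.concatenate([rng.normal(12, 2, 40),    # assumed drowsy distribution
                      rng.normal(17, 2, 40)])   # assumed alert distribution
age = rng.integers(20, 60, size=80)
X = np.column_stack([rpm, age])
y = np.array([1] * 40 + [0] * 40)               # 1 = drowsy, 0 = alert

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC()).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

Scaling matters here because RPM and age live on different ranges, and the RBF kernel used by `SVC` by default is distance-based.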


Subjects
Automobile Driving , Humans , Neural Networks, Computer , Respiratory Rate , Support Vector Machine , Wakefulness