1.
Sensors (Basel) ; 23(14)2023 Jul 16.
Article in English | MEDLINE | ID: mdl-37514735

ABSTRACT

Earthquakes are cataclysmic events that can damage structures and endanger human lives. Estimating seismic damage to buildings remains a challenging task due to several environmental uncertainties, and categorizing a building's damage grade takes a significant amount of time and work. Early analysis of the damage rate of concrete building structures is essential for prioritizing repairs and avoiding accidents. With this motivation, an ANOVA-Statistic-Reduced Deep Fully Connected Neural Network (ASR-DFCNN) model is proposed that can grade damage accurately by considering significant damage features. A dataset containing 26 attributes from 762,106 damaged buildings was used for model building. This work focused on analyzing the importance of feature selection and on enhancing the accuracy of damage grade categorization. Initially, the dataset without prior feature selection was used for damage grade categorization with various machine learning (ML) classifiers, and the performance was recorded. Next, ANOVA was applied to the original dataset to eliminate the attributes insignificant for determining the damage grade. The selected features were then subjected to a 10-component principal component analysis (PCA) to identify the top ten significant features contributing to the building damage grade. This 10-component ANOVA-PCA-reduced (ASR) dataset was applied to the classifiers for damage grade prediction. With an 80:20 split of the data between the training and testing phases, the Bagging classifier produced the greatest accuracy on the reduced dataset, 83%, among all the classifiers. To enhance prediction performance, a deep fully connected neural network (DFCNN) was implemented on the reduced (ASR) dataset. The proposed ASR-DFCNN model was designed as a Keras Sequential model with four dense layers: the first three use the ReLU activation function, and the final dense layer uses a tanh activation function with a dropout of 0.2. The model was compiled with the Nadam optimizer and L2 regularization as weight decay. The damage grade categorization performance of the ASR-DFCNN model was compared with that of the other ML classifiers using precision, recall, F-score, and accuracy; the ASR-DFCNN model performed best, with 98% accuracy.
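The abstract spells out most of this pipeline: ANOVA feature selection, 10-component PCA, and a four-dense-layer Keras network with three ReLU layers, a tanh output, dropout 0.2, and Nadam with L2 weight decay. A minimal sketch follows; the layer widths, the number of ANOVA-selected attributes, the number of damage grades, and the loss function are not stated above and are assumptions here.

```python
# Sketch of the ASR feature pipeline and DFCNN classifier described in the
# abstract. Hidden-layer widths, k, n_grades, and the loss are assumed.
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.decomposition import PCA
from tensorflow import keras
from tensorflow.keras import layers, regularizers

def asr_reduce(X, y, k=15, n_components=10):
    """ANOVA-select k significant attributes, then keep 10 principal components."""
    X_sel = SelectKBest(f_classif, k=k).fit_transform(X, y)
    return PCA(n_components=n_components).fit_transform(X_sel)

def build_dfcnn(n_features=10, n_grades=5):
    model = keras.Sequential([
        keras.Input(shape=(n_features,)),
        layers.Dense(128, activation="relu",
                     kernel_regularizer=regularizers.l2(1e-4)),
        layers.Dense(64, activation="relu",
                     kernel_regularizer=regularizers.l2(1e-4)),
        layers.Dense(32, activation="relu",
                     kernel_regularizer=regularizers.l2(1e-4)),
        layers.Dropout(0.2),
        # The abstract specifies tanh on the final dense layer.
        layers.Dense(n_grades, activation="tanh"),
    ])
    # tanh outputs are not probabilities, so MSE on one-hot targets is used
    # here; the paper's actual loss is not given in the abstract.
    model.compile(optimizer=keras.optimizers.Nadam(), loss="mse",
                  metrics=["accuracy"])
    return model
```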

2.
Sensors (Basel) ; 22(19)2022 Sep 26.
Article in English | MEDLINE | ID: mdl-36236392

ABSTRACT

Earthquakes cause liquefaction, which complicates the design phase of building construction. The potential for earthquake-induced liquefaction was initially estimated with analytical and numerical methods, but these conventional methods struggle to provide empirical formulations in the presence of uncertainties. Accordingly, machine learning (ML) algorithms have been implemented to predict liquefaction potential. Although ML models perform well on a specific liquefaction dataset, they fail to produce accurate results when applied to other datasets. This study proposes a stacked generalization model (SGM), constructed by aggregating the best-performing algorithms, such as the multilayer perceptron regressor (MLPR), support vector regression (SVR), and linear regression, to build an efficient prediction model for estimating the potential of earthquake-induced liquefaction on settlements. The dataset from the Korean Geotechnical Information database system and standard penetration tests conducted for the 2017 Pohang earthquake in South Korea were used. Model performance was evaluated using the R2 score, mean squared error (MSE), standard deviation, covariance, and root mean squared error (RMSE). Model validation compared the performance of the proposed SGM with standalone SVR and MLPR models. The proposed SGM yielded the best performance compared with the other base models.
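The SGM as described maps directly onto scikit-learn's stacking API. A minimal sketch follows, assuming an RBF-kernel SVR, a small MLP, and a linear meta-learner; none of the hyperparameters or the meta-learner choice are given in the abstract.

```python
# Sketch of the stacked generalization model (SGM): MLPR, SVR, and linear
# regression base learners combined by a meta-learner trained on
# cross-validated out-of-fold predictions. Hyperparameters are assumptions.
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

sgm = StackingRegressor(
    estimators=[
        ("mlpr", make_pipeline(StandardScaler(),
                               MLPRegressor(hidden_layer_sizes=(64, 32),
                                            max_iter=2000, random_state=0))),
        ("svr", make_pipeline(StandardScaler(), SVR(kernel="rbf"))),
        ("lr", LinearRegression()),
    ],
    final_estimator=LinearRegression(),  # assumed meta-learner
    cv=5,
)
# Usage: sgm.fit(X_train, y_settlement); y_pred = sgm.predict(X_test)
```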


Subject(s)
Earthquakes , Algorithms , Machine Learning , Neural Networks, Computer , Soil
3.
J Healthc Eng ; 2022: 5691203, 2022.
Article in English | MEDLINE | ID: mdl-35047153

ABSTRACT

In 6G edge communication networks, machine learning models play a major role in enabling intelligent decision-making for optimal resource allocation in healthcare systems. However, they create a bottleneck in the form of sophisticated memory calculations across the hidden layers, plus the cost of communication between the edge devices/edge nodes and the cloud centres while transmitting data from the healthcare management system to the cloud centre via edge nodes. To reduce these hurdles, it is important to share workloads and thereby eliminate the problems related to complicated memory calculations and transmission costs. The effort aims mainly to reduce the storage and cloud computing costs associated with neural networks, since the complexity of the computations increases with the number of hidden layers. This study modifies federated learning to function in distributed resource-assignment settings as a distributed deep learning model. It improves the capacity to learn from the data and assigns an ideal workload depending on the limited available resources, slow network connections, and the number of edge devices. The edge devices and edge nodes can autonomously send the current network status to the cloud centre using cybertwin, so that local updates are regularly aggregated into the global state. Simulations show that the proposed resource management and allocation outperforms standard approaches: the proposed method achieves a higher resource utilization and success rate than existing methods. Index terms: fuzzy, healthcare, bioinformatics, 6G wireless communication, cybertwin, machine learning, neural network, edge.
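For concreteness, a minimal sketch of the federated-averaging step that such a distributed model relies on is shown below: each edge node trains locally and the cloud centre aggregates weights in proportion to local data size, so only model updates rather than raw healthcare data cross the network. The paper's resource-aware workload assignment and cybertwin signalling are not reproduced here.

```python
# FedAvg-style aggregation: the cloud centre averages per-client weights,
# weighted by each client's local sample count. A simplified sketch; the
# paper's modified, resource-aware variant is not specified in the abstract.
import numpy as np

def federated_average(client_weights, client_sizes):
    """client_weights: one list of numpy arrays per edge node (same shapes);
    client_sizes: number of local samples held by each node."""
    total = float(sum(client_sizes))
    n_layers = len(client_weights[0])
    return [
        sum(w[layer] * (n / total)
            for w, n in zip(client_weights, client_sizes))
        for layer in range(n_layers)
    ]

# Each round, the averaged weights are redistributed to the edge devices,
# which resume training on their local healthcare data.
```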


Subject(s)
Cloud Computing , Delivery of Health Care , Computer Simulation , Humans , Resource Allocation , Technology
4.
Sensors (Basel) ; 21(21)2021 Oct 23.
Article in English | MEDLINE | ID: mdl-34770332

ABSTRACT

In recent years, speech recognition technology has become increasingly common. Speech quality and intelligibility are critical for the convenience and accuracy of information transmission in speech recognition. The speech processing systems used to converse or store speech are usually designed for an environment without background noise. In a real-world environment, however, background interference in the form of background noise and channel noise drastically reduces the performance of speech recognition systems, resulting in imprecise information transfer and fatiguing the listener. When the input or output signals of communication systems are affected by noise, speech enhancement techniques try to improve their performance. To ensure the correctness of the text produced from speech, it is necessary to reduce the external noise in the speech audio. This is difficult because the speech may consist of isolated, continuous, or spontaneous words. Various typical speech enhancement algorithms for automatic speech recognition have gained considerable attention, but they work well only on simple, continuous audio signals. Thus, this study proposes a hybridized speech recognition algorithm to enhance recognition accuracy. Non-linear spectral subtraction, a well-known speech enhancement algorithm, is optimized with a hidden Markov model and tested on 6660 medical speech transcription audio files and 1440 Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) audio files. The performance of the proposed model is compared with those of various typical speech enhancement algorithms, such as the iterative signal enhancement algorithm, subspace-based speech enhancement, and non-linear spectral subtraction alone. The proposed cascaded hybrid algorithm achieves a minimum word error rate of 9.5% for medical speech and 7.6% for RAVDESS speech. Cascading the speech enhancement and speech-to-text conversion architectures yields higher accuracy for enhanced speech recognition. The evaluation results support incorporating the proposed method into real-time automatic speech recognition for medical applications, where the terminology involved is highly complex.
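As background for the enhancement front end named above, a minimal sketch of magnitude spectral subtraction follows: the noise spectrum is estimated from the first few frames (assumed speech-free) and subtracted with over-subtraction and a spectral floor. The parameter values are illustrative, and the paper's specific non-linear subtraction rule and the HMM recognition stage are not reproduced here.

```python
# Basic over-subtraction with a spectral floor, the idea underlying
# (non-linear) spectral subtraction. Frame counts and alpha/beta are assumed.
import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(x, fs, noise_frames=10, alpha=2.0, beta=0.02):
    f, t, X = stft(x, fs=fs, nperseg=512)
    mag, phase = np.abs(X), np.angle(X)
    # Estimate the noise magnitude spectrum from the leading frames.
    noise_est = mag[:, :noise_frames].mean(axis=1, keepdims=True)
    # Over-subtract (alpha) and clamp to a floor (beta) to limit musical noise.
    cleaned = np.maximum(mag - alpha * noise_est, beta * noise_est)
    _, x_hat = istft(cleaned * np.exp(1j * phase), fs=fs, nperseg=512)
    return x_hat
```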


Subject(s)
Speech , Voice , Algorithms , Emotions , Noise
5.
Front Public Health ; 8: 599550, 2020.
Article in English | MEDLINE | ID: mdl-33330341

ABSTRACT

In this paper, a data mining model on a hybrid deep learning framework is designed to diagnose the medical conditions of patients infected with the coronavirus disease 2019 (COVID-19) virus. The hybrid deep learning model, named DeepSense, combines a convolutional neural network (CNN) and a recurrent neural network (RNN). It is designed as a series of layers that extract and classify features of COVID-19 infection from the lungs. Computed tomography (CT) images are used as input data, and the classifier is designed to ease the classification process by learning the multidimensional input through expert hidden layers. The model is validated against medical image datasets to predict infections using deep learning classifiers. The results show that the DeepSense classifier offers better accuracy than conventional deep learning and machine learning classifiers. The proposed method is validated on three different datasets with 70%, 80%, and 90% training splits, which specifically establishes the quality of the diagnostic method adopted for predicting COVID-19 infection in a patient.
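A minimal sketch of a CNN+RNN hybrid of the kind described follows: convolutional blocks extract spatial features from a CT slice, the feature map is read row-wise by an LSTM, and a sigmoid head predicts infection. The input size, layer widths, and binary output are assumptions; the paper's exact DeepSense architecture and its expert hidden layers are not specified in the abstract.

```python
# Hypothetical CNN+RNN hybrid for CT-slice classification. All layer sizes
# are illustrative assumptions, not the paper's published configuration.
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn_rnn(input_shape=(128, 128, 1)):
    model = keras.Sequential([
        keras.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        # Collapse the 30x30x64 feature map into a 30-step sequence.
        layers.Reshape((30, 30 * 64)),
        layers.LSTM(64),
        layers.Dense(32, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # infected vs. not infected
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```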


Subject(s)
COVID-19/diagnosis , COVID-19/physiopathology , Lung/diagnostic imaging , SARS-CoV-2/pathogenicity , Symptom Assessment/methods , Tomography, X-Ray Computed/methods , Algorithms , Deep Learning , Humans , Machine Learning , Neural Networks, Computer , Sensitivity and Specificity
6.
Mater Today Proc ; 2020 Dec 09.
Article in English | MEDLINE | ID: mdl-33318952

ABSTRACT

Computational methods for machine learning (ML) have shown their value in projecting potential outcomes to support informed decisions. Machine learning algorithms have long been applied in applications requiring the detection of adverse risk factors. This study demonstrates the ability of ML modelling to predict the number of individuals affected by COVID-19, a potential threat to human beings. In this analysis, four standard forecasting models were applied to the COVID-19 risk factors: linear regression (LR), the least absolute shrinkage and selection operator (LASSO), the support vector machine (SVM), and exponential smoothing (ES). Each of these machine learning models produces three distinct kinds of predictions: the number of newly infected COVID-19 cases, mortality rates, and the recovered COVID-19 estimates over the next 10 days. The findings of the analysis show that these approaches are well suited to the current COVID-19 situation, with LR proving effective in predicting new corona cases, death counts, and recoveries.
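A minimal sketch of the four forecasting models on a day-indexed series follows; the data here are placeholders, and the hyperparameters are assumptions rather than the study's settings.

```python
# LR, LASSO, SVM regression, and exponential smoothing fitted to a toy
# cumulative-case series to project the next 10 days. Placeholder data.
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.svm import SVR
from statsmodels.tsa.holtwinters import ExponentialSmoothing

days = np.arange(60).reshape(-1, 1)                          # day index
cases = np.cumsum(np.random.poisson(50, 60)).astype(float)   # toy series
future = np.arange(60, 70).reshape(-1, 1)                    # next 10 days

models = {"LR": LinearRegression(),
          "LASSO": Lasso(alpha=1.0),
          "SVM": SVR(kernel="rbf", C=100.0)}
forecasts = {name: m.fit(days, cases).predict(future)
             for name, m in models.items()}
forecasts["ES"] = ExponentialSmoothing(cases, trend="add").fit().forecast(10)
```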
