Results 1 - 20 of 40
1.
Article in English | MEDLINE | ID: mdl-38959147

ABSTRACT

All three contrast-enhanced (CE) phases (i.e., arterial, portal venous, and delayed) are crucial for diagnosing liver tumors. However, acquiring all three phases is constrained by contrast agent (CA) risks, long imaging times, and strict imaging criteria. In this paper, we propose a novel Common-Unique Decomposition Driven Diffusion Model (CUDD-DM) capable of converting any two of the three phases into the remaining one, thereby reducing patient wait time, conserving medical resources, and reducing CA use. 1) The Common-Unique Feature Decomposition Module uses spectral decomposition to capture both common and unique features among the inputs; it learns correlations in the highly similar areas shared by the two input phases as well as their differences elsewhere, laying a foundation for synthesizing the remaining phase. 2) The Multi-scale Temporal Reset Gates Module bidirectionally compares lesions in the current slice and multiple historical slices, maximizing reliance on previous slices when no lesions are present and minimizing it when they are, thereby preventing interference between consecutive slices. 3) The Diffusion Model-Driven Lesion Detail Synthesis Module employs a continuous, progressive generation process to accurately capture detailed features between data distributions, avoiding the loss of detail caused by traditional methods (e.g., GANs) that overfocus on global distributions. Extensive experiments on a generalized CE liver tumor dataset demonstrate that CUDD-DM achieves state-of-the-art performance, improving SSIM by at least 2.2% (5.3% in lesion areas) over seven leading methods. These results show that CUDD-DM advances CE liver tumor imaging technology.

2.
PeerJ Comput Sci ; 10: e2035, 2024.
Article in English | MEDLINE | ID: mdl-38855251

ABSTRACT

Currently, most traffic simulations require residents' travel plans as input data; in real scenarios, however, genuine travel behavior data is difficult to obtain for reasons such as the sheer volume of data and the protection of residents' privacy. This study proposes a method combining a convolutional neural network (CNN) and a long short-term memory network (LSTM) to analyze and compensate for spatiotemporal features in residents' travel data. By exploiting the spatial feature extraction capability of CNNs and the strength of LSTMs in processing time-series data, the aim is to achieve a traffic simulation close to the real scenario from limited data by modeling travel in both time and space. The experimental results show that the proposed method is closer to the real data in terms of average traveling distance than the modulation method and the statistical estimation method. The proposed strategy substantially reduces the model's deviation from the original data, cutting the baseline error rate by about 50%.
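The CNN-plus-LSTM pipeline described above can be illustrated with a minimal NumPy sketch: a 1D convolution stands in for the CNN's spatial feature extraction over a road grid, and a recurrent update stands in for the LSTM over time. The shapes, the smoothing kernel, and the tanh cell below are illustrative assumptions, not the authors' architecture:

```python
import numpy as np

def conv1d(x, kernel):
    """Spatial feature extraction: valid 1D convolution, as a CNN layer would do."""
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel) for i in range(len(x) - k + 1)])

def rnn_step(h, x, W_h, W_x):
    """One recurrent update over the time axis (simple stand-in for an LSTM cell)."""
    return np.tanh(W_h @ h + W_x @ x)

# Toy spatiotemporal input: 5 time steps over a 10-cell road grid.
rng = np.random.default_rng(0)
series = rng.normal(size=(5, 10))
kernel = np.array([0.25, 0.5, 0.25])   # fixed smoothing filter as "learned" weights

hidden = np.zeros(8)
W_h = rng.normal(scale=0.1, size=(8, 8))
W_x = rng.normal(scale=0.1, size=(8, 8))
for t in range(series.shape[0]):
    spatial_feat = conv1d(series[t], kernel)   # (10 - 3 + 1,) = (8,)
    hidden = rnn_step(hidden, spatial_feat, W_h, W_x)

print(hidden.shape)  # (8,)
```

In the real model the convolutional and recurrent weights would be learned, and the final hidden state would feed a predictor of travel time and location.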

3.
Neural Netw ; 169: 685-697, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37972512

ABSTRACT

With the growing exploration of marine resources, underwater image enhancement has gained significant attention. Recent advances in convolutional neural networks (CNNs) have greatly impacted underwater image enhancement techniques. However, conventional CNN-based methods typically employ a single network structure, which may compromise robustness in challenging conditions. Additionally, commonly used UNet networks generally force fusion from low to high resolution at each layer, leading to inaccurate contextual information encoding. To address these issues, we propose a novel network called Cascaded Network with Multi-level Sub-networks (CNMS), which comprises the following key components: (a) a cascade mechanism based on local modules and global networks for extracting feature representations with richer semantics and enhanced spatial precision, (b) information exchange between different resolution streams, and (c) a triple attention module for extracting attention-based features. CNMS selectively cascades multiple sub-networks through triple attention modules to extract distinct features from underwater images, bolstering the network's robustness and improving its generalization capability. Within each sub-network, we introduce a Multi-level Sub-network (MSN) that spans multiple resolution streams, combining contextual information from various scales while preserving the high-resolution spatial details of the original underwater images. Comprehensive experiments on multiple underwater datasets demonstrate that CNMS outperforms state-of-the-art methods in image enhancement tasks.


Subjects
Generalization, Psychological; Image Enhancement; Neural Networks, Computer; Semantics; Image Processing, Computer-Assisted
4.
PeerJ Comput Sci ; 9: e1599, 2023.
Article in English | MEDLINE | ID: mdl-38077566

ABSTRACT

Background: Alzheimer's disease (AD) manifests as a deterioration of mental activities, daily activities, and behaviors, especially memory, due to progressively increasing damage to parts of the brain as people age. Detecting AD at an early stage is a significant challenge. Various diagnostic devices are used to diagnose AD; magnetic resonance imaging (MRI) is widely used to analyze and classify its stages. However, the time-consuming process of delineating the affected brain areas in the images obtained from these devices is another challenge, and conventional techniques therefore cannot detect AD at an early stage. Methods: In this study, we propose a deep learning model, supported by a fusion loss model, that includes fully connected layers and residual blocks to address these challenges. The proposed model was trained and tested on the publicly available T1-weighted MRI-based Kaggle dataset. After various preliminary operations were applied to the dataset, data augmentation techniques were used. Results: The proposed model effectively classified the four AD classes in the Kaggle dataset, reaching a test accuracy of 0.973 in binary classification and 0.982 in multi-class classification, a classification performance superior to other studies in the literature. The proposed method can be used online to detect AD and can serve as a system that helps doctors in the decision-making process.

5.
Comput Biol Med ; 161: 107031, 2023 07.
Article in English | MEDLINE | ID: mdl-37211002

ABSTRACT

In this paper, we propose a novel approach to diagnosing and classifying Parkinson's disease (PD) using ensemble learning and 1D-PDCovNN, a novel deep learning technique. PD is a neurodegenerative disorder; early detection and correct classification are essential for better disease management. The primary aim of this study is to develop a robust approach to diagnosing and classifying PD from EEG signals. We used the San Diego Resting State EEG dataset to evaluate the proposed method, which consists of three stages. In the first stage, Independent Component Analysis (ICA) was used as a pre-processing step to filter blink artifacts out of the EEG signals; the contribution of the 7-30 Hz band, which reflects motor cortex activity, to diagnosing and classifying PD from EEG signals was also investigated. In the second stage, the Common Spatial Pattern (CSP) method was used for feature extraction. In the third stage, an ensemble learning approach, Dynamic Classifier Selection (DCS) with Modified Local Accuracy (MLA) comprising seven different classifiers, was employed; DCS with MLA, XGBoost, and the 1D-PDCovNN classifier were used to classify the EEG signals as PD or healthy control (HC). This study is, to our knowledge, the first to use dynamic classifier selection to diagnose and classify PD from EEG signals, and promising results were obtained. Performance was evaluated using classification accuracy, F1-score, kappa score, Jaccard score, ROC curve, recall, and precision. DCS with MLA achieved an accuracy of 99.31% in classifying PD.
The results of this study demonstrate that the proposed approach can serve as a reliable tool for early diagnosis and classification of PD.
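The CSP feature-extraction stage described above can be sketched in NumPy. This is the textbook whitening-plus-eigendecomposition formulation run on synthetic two-class "EEG" trials; the channel count, data, and number of retained filter pairs are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_pairs=1):
    """Common Spatial Patterns via whitening + eigendecomposition.

    trials_*: arrays of shape (n_trials, n_channels, n_samples).
    Returns spatial filters whose outputs have maximal variance for one
    class and minimal variance for the other.
    """
    def avg_cov(trials):
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]
        return np.mean(covs, axis=0)

    Ca, Cb = avg_cov(trials_a), avg_cov(trials_b)
    # Whiten the composite covariance Ca + Cb.
    evals, evecs = np.linalg.eigh(Ca + Cb)
    P = evecs @ np.diag(evals ** -0.5) @ evecs.T
    # Diagonalize the whitened class-A covariance.
    evals_a, evecs_a = np.linalg.eigh(P @ Ca @ P.T)
    order = np.argsort(evals_a)[::-1]
    W = evecs_a[:, order].T @ P
    # Keep the filters at both ends of the eigenvalue spectrum.
    picks = list(range(n_pairs)) + list(range(W.shape[0] - n_pairs, W.shape[0]))
    return W[picks]

def csp_features(trial, W):
    """Normalized log-variance features of the spatially filtered trial."""
    z = W @ trial
    var = z.var(axis=1)
    return np.log(var / var.sum())

rng = np.random.default_rng(1)
a = rng.normal(size=(20, 4, 128)) * np.array([2.0, 1, 1, 1])[:, None]  # class A: channel 0 stronger
b = rng.normal(size=(20, 4, 128)) * np.array([1.0, 1, 1, 2])[:, None]  # class B: channel 3 stronger
W = csp_filters(a, b)
f = csp_features(a[0], W)
print(f.shape)  # (2,)
```

The log-variance features from the two ends of the spectrum are what a downstream classifier (e.g., XGBoost in the study) would consume.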


Subjects
Electroencephalography; Parkinson Disease; Humans; Electroencephalography/methods; Parkinson Disease/diagnosis; Algorithms; Support Vector Machine; Cerebral Cortex
6.
Diagnostics (Basel) ; 13(7)2023 Mar 28.
Article in English | MEDLINE | ID: mdl-37046499

ABSTRACT

This paper investigates new feature extraction and regression methods for predicting cuffless blood pressure from PPG signals. Cuffless blood pressure technology measures blood pressure without a cuff and can be used in various medical settings, including home health monitoring, clinical use, and portable devices. The new feature extraction method extracts meaningful time and chaotic features from the PPG signals for the prediction of systolic blood pressure (SBP) and diastolic blood pressure (DBP). These features are then used as inputs to regression models that predict cuffless blood pressure. Model performance was evaluated using root mean squared error (RMSE), R2, mean squared error (MSE), and mean absolute error (MAE). The Matérn 5/2 Gaussian process regression model obtained an RMSE of 4.277 for SBP, and the rational quadratic Gaussian process regression model obtained an RMSE of 2.303 for DBP. The results show that the proposed feature extraction and regression models can predict cuffless blood pressure with reasonable accuracy, providing a novel approach that can underpin more accurate models in the future.
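The rational quadratic Gaussian process regression used above for DBP can be sketched from first principles: build the kernel matrix, solve the regularized linear system, and project onto test points. The kernel hyperparameters, noise level, and the toy feature/target pair below are assumptions for illustration:

```python
import numpy as np

def rational_quadratic(x1, x2, alpha=1.0, length=1.0):
    """Rational quadratic kernel: k(x, x') = (1 + d^2 / (2*alpha*l^2))^(-alpha)."""
    d2 = (x1[:, None] - x2[None, :]) ** 2
    return (1.0 + d2 / (2 * alpha * length ** 2)) ** (-alpha)

def gpr_predict(x_train, y_train, x_test, noise=1e-2):
    """Posterior mean of a zero-mean GP with the rational quadratic kernel."""
    K = rational_quadratic(x_train, x_train) + noise * np.eye(len(x_train))
    K_star = rational_quadratic(x_test, x_train)
    return K_star @ np.linalg.solve(K, y_train)

# Toy stand-in for a single PPG-derived feature vs. a blood-pressure target.
x = np.linspace(0, 5, 30)
y = np.sin(x)
x_new = np.array([1.5, 2.5])
pred = gpr_predict(x, y, x_new)
rmse = np.sqrt(np.mean((pred - np.sin(x_new)) ** 2))
print(pred.shape)  # (2,)
```

The Matérn 5/2 variant used for SBP differs only in the kernel function.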

7.
Diagnostics (Basel) ; 13(3)2023 Feb 03.
Article in English | MEDLINE | ID: mdl-36766680

ABSTRACT

This study uses machine learning to perform hearing test (audiometry) processes autonomously from EEG signals. Tones of different amplitudes and frequencies, as used in standard hearing tests, were presented in random order through an interface designed with the MATLAB GUI. The participant, listening through headphones, indicated when a sound was heard and took no action otherwise. Simultaneously, EEG (electroencephalography) signals were recorded, capturing the brain's responses to the sounds the participant did and did not hear. The EEG data generated at the end of the test were pre-processed, and feature extraction was performed. The heard/unheard information from the MATLAB interface was combined with the EEG signals to determine which sounds the participant heard and which they did not. During the waiting periods between sounds, no sound was presented, so these intervals were marked as not heard in the EEG signals. Brain signals were measured with a Brain Products Vamp 16 EEG device, and raw EEG data were created using the Brain Vision Recorder program and MATLAB. After the dataset was built from the signals produced by heard and unheard sounds, machine learning was carried out in the Python programming language: the raw data created with MATLAB were imported, pre-processing was completed, and classification algorithms were applied. Each raw EEG record was vectorized with the Count Vectorizer method, and the importance of each EEG signal within the full dataset was calculated using TF-IDF (Term Frequency-Inverse Document Frequency). The resulting dataset was classified according to whether the person could hear the sound.
Naïve Bayes, Light Gradient Boosting Machine (LGBM), support vector machine (SVM), decision tree, k-NN, logistic regression, and random forest classifiers were applied in the analysis. These algorithms were preferred because they have shown superior performance in ML, have succeeded in analyzing EEG signals, and can be used online. LGBM proved the best method for analyzing the EEG signals, emerging as the most successful algorithm with a prediction accuracy of 84%. This study shows that hearing tests can also be performed using brain waves detected by an EEG device. Although a completely independent hearing test can be created, an audiologist or doctor may still be needed to evaluate the results.
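The Count Vectorizer + TF-IDF weighting described above is a standard text-mining pipeline. A self-contained sketch from the definitions, with toy token "documents" standing in for quantized EEG segments:

```python
import math
from collections import Counter

def tfidf(docs):
    """TF-IDF over tokenized documents: tf(t, d) * log(N / df(t)).

    tf is the term's relative frequency in the document; df is the number
    of documents containing the term. Terms appearing in every document
    get weight 0; rare, frequent-within-document terms score highest.
    """
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    out = []
    for doc in docs:
        tf = Counter(doc)
        out.append({t: (c / len(doc)) * math.log(n / df[t]) for t, c in tf.items()})
    return out

docs = [["a", "b", "a"], ["b", "c"], ["c", "c", "d"]]
weights = tfidf(docs)
# "a" occurs twice in doc 0 and nowhere else, so it outweighs the shared term "b".
print(weights[0]["a"] > weights[0]["b"])  # True
```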

8.
Diagnostics (Basel) ; 13(2)2023 Jan 10.
Article in English | MEDLINE | ID: mdl-36673072

ABSTRACT

Melanoma is known worldwide as a malignant tumor and the fastest-growing type of skin cancer, a life-threatening disease with a high mortality rate. Automatic melanoma detection improves early detection of the disease and the survival rate. To this end, we present a multi-task learning approach for melanoma recognition in dermoscopy images. First, an effective pre-processing approach based on max pooling, contrast, and shape filters was used to eliminate hair details and enhance the images. Next, the lesion region was segmented from the enhanced images with a VGGNet-based FCN-layer architecture, and the detected lesions were cropped. The cropped images were then converted to the classifier's input size using the very deep super-resolution neural network approach, minimizing the loss of image resolution. Finally, a deep learning approach based on pre-trained convolutional neural networks was developed for melanoma classification. The experimental studies used the International Skin Imaging Collaboration dataset, a publicly available dermoscopic skin lesion dataset. Segmentation of the lesion region achieved accuracy, specificity, precision, and sensitivity of 96.99%, 92.53%, 97.65%, and 98.41%, respectively, while classification achieved 97.73%, 99.83%, 99.83%, and 95.67%, respectively.

9.
IEEE J Biomed Health Inform ; 27(2): 944-955, 2023 02.
Article in English | MEDLINE | ID: mdl-36367916

ABSTRACT

Atrial fibrillation (AF) is one of the most common arrhythmias in the clinic, with high morbidity and mortality, so developing an intelligent auxiliary diagnostic model for AF based on the body-surface electrocardiogram (ECG) is necessary. The convolutional neural network (CNN) is one of the most commonly used models for AF recognition; however, a typical CNN is not compatible with variable-duration ECG, making it hard to demonstrate universality and generalization in practical applications. Hence, this paper proposes a novel time-adaptive densely connected network named MP-DLNet-F. The MP-DLNet module solves the incompatibility between variable-duration ECG and 1D-CNNs, while a feature enhancement module and a data imbalance processing module respectively enhance the perception of temporal-quality information and decrease sensitivity to data imbalance. Experimental results indicate that the proposed MP-DLNet-F achieved 87.98% classification accuracy and an F1-score of 0.847 on the CinC2017 database for 10-second cropped/padded single-lead ECG fragments. Furthermore, deploying transfer learning on heterogeneous datasets, the method improved the average accuracy and F1-score on the CPSC2018 12-lead dataset by 21.81% and 16.14%, respectively. These results indicate that our method can update the constructed model's parameters and precisely detect AF across different duration and lead distributions. Combining these advantages, MP-DLNet-F can generalize to other variable-duration or imbalanced medical signal processing problems, such as electroencephalography (EEG) and photoplethysmography (PPG).
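The 10-second crop/pad preprocessing mentioned for the CinC2017 fragments is straightforward to sketch. The centre-crop/zero-pad policy and the 300 Hz sampling rate below are assumptions for illustration:

```python
import numpy as np

def crop_or_pad(ecg, target_len):
    """Force a variable-duration ECG fragment to a fixed length.

    Longer records are centre-cropped; shorter ones are zero-padded on
    both sides, so every fragment reaches the network with equal shape.
    """
    n = len(ecg)
    if n >= target_len:
        start = (n - target_len) // 2
        return ecg[start:start + target_len]
    pad = target_len - n
    return np.pad(ecg, (pad // 2, pad - pad // 2))

fs = 300                      # assumed sampling rate, Hz
target = 10 * fs              # 10-second fragments
short = np.ones(5 * fs)       # 5-second record -> padded
long_ = np.ones(30 * fs)      # 30-second record -> cropped
print(len(crop_or_pad(short, target)), len(crop_or_pad(long_, target)))  # 3000 3000
```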


Subjects
Atrial Fibrillation; Humans; Atrial Fibrillation/diagnosis; Neural Networks, Computer; Signal Processing, Computer-Assisted; Electrocardiography/methods; Photoplethysmography/methods; Algorithms
10.
Comput Math Methods Med ; 2022: 2157322, 2022.
Article in English | MEDLINE | ID: mdl-35936380

ABSTRACT

Segmentation of skin lesions plays a very important role in the early detection of skin cancer. However, artifacts such as hair and the low contrast between normal and lesioned skin make lesions difficult to distinguish, an important challenge even for specialist dermatologists. Computer-aided diagnostic systems using deep convolutional neural networks are gaining importance as a way to cope with these difficulties. This study focuses on deep learning-based fusion networks and fusion loss functions. For the automatic segmentation of skin lesions, a U-Net with 2D residual blocks (U-Net + ResNet 2D) and 2D volumetric convolutional neural networks were fused for the first time in this study. A new fusion loss function is also proposed, combining Dice Loss (DL) and Focal Tversky Loss (FTL), to make the fused model more robust. Of the 2,594-image dataset, 20% was reserved for testing and 80% for training. On the test data, a Jaccard score of 0.837 and a Dice score of 0.918 were obtained. The proposed model was also scored on the ISIC 2018 Task 1 test images, whose ground truths were not shared, performing well with a Jaccard index of 0.800 and a Dice score of 0.880. In addition, the new loss function obtained by fusing Focal Tversky Loss and Dice Loss was observed to increase the robustness of the model in the tests. The proposed loss-fusion model outperformed the state-of-the-art approaches in the literature.
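The fused loss combining Dice Loss and Focal Tversky Loss can be written directly from their standard definitions. The weighting `w` and the Tversky/focal hyperparameters below are common defaults, not necessarily the paper's values:

```python
import numpy as np

def dice_loss(y_true, y_pred, eps=1e-7):
    """Soft Dice loss: 1 - 2|A∩B| / (|A| + |B|)."""
    inter = np.sum(y_true * y_pred)
    return 1.0 - (2 * inter + eps) / (np.sum(y_true) + np.sum(y_pred) + eps)

def focal_tversky_loss(y_true, y_pred, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Focal Tversky loss: (1 - TI)^gamma, where the Tversky index TI
    weights false negatives by alpha and false positives by beta."""
    tp = np.sum(y_true * y_pred)
    fn = np.sum(y_true * (1 - y_pred))
    fp = np.sum((1 - y_true) * y_pred)
    ti = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1.0 - ti) ** gamma

def fused_loss(y_true, y_pred, w=0.5):
    """Weighted combination of the two losses."""
    return w * dice_loss(y_true, y_pred) + (1 - w) * focal_tversky_loss(y_true, y_pred)

mask = np.array([[1, 1, 0], [0, 0, 0]], dtype=float)
perfect = mask.copy()
poor = 1.0 - mask
print(fused_loss(mask, perfect) < fused_loss(mask, poor))  # True
```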


Subjects
Skin Diseases; Skin Neoplasms; Artifacts; Humans; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Skin Neoplasms/diagnostic imaging; Skin Neoplasms/pathology
11.
Comput Math Methods Med ; 2022: 8717263, 2022.
Article in English | MEDLINE | ID: mdl-35924113

ABSTRACT

Speech is a biometric that combines physiological and behavioral features and is beneficial for remote-access transactions over telecommunication networks, yet it remains one of the most challenging modalities for researchers. A person's mental status, expressed as emotion, is complex, and its complexity depends on internal behavior. Emotion and facial behavior are essential characteristics through which human internal thought can be predicted, and speech is one mechanism for extracting these internal reflections by focusing on the vocal tract, the flow of the voice, voice frequency, and similar cues. Predicting emotion from voice specimens of different ages with a deep learning approach and feature-extraction-based behavior prediction can help build a robust intelligent healthcare system and provide data that helps doctors in medical institutes and hospitals understand human physiological behavior. Healthcare is a data-intensive clinical area in which many details are accessed, generated, and circulated periodically. Existing approaches such as tracing and tracking continuously expose the system's limitations in controlling the privacy and security of patient data, since most of the work involves exchanging decidedly confidential and personal data. A key issue is therefore modeling approaches that preserve the value of health-related data while protecting privacy and observing high behavioral standards; this will encourage large-scale adoption, especially as healthcare information collection is expected to continue well beyond the current pandemic. The research community is accordingly looking for a privacy-preserving, secure, and sustainable system built on blockchain technology, since sharing healthcare data among institutions is a very challenging task.
Storing records centrally makes them a targeted choice for cyber attackers and yields a fragmented view of patients' records that hinders sharing over a network. This paper therefore presents a blockchain-based approach for sharing patient data in a secure manner. Finally, the proposed model was analyzed with different feature extraction approaches to determine which performs best across voice specimen variations in terms of error rate and accuracy. The proposed method increases the rate of correct evidence collection, minimizes loss and authentication issues, and, by using feature extraction based on text validation, increases the sustainability of the healthcare system.
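The blockchain component rests on a hash chain: each block stores the previous block's hash, so tampering anywhere invalidates every subsequent link. A minimal sketch with Python's `hashlib` (the payload strings are hypothetical record references; a real deployment adds consensus, signatures, and encryption):

```python
import hashlib
import json

def block_hash(block):
    """SHA-256 over the block body (everything except its own hash field)."""
    body = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def make_block(data, prev_hash):
    """A minimal block: payload plus a link to the previous block's hash."""
    block = {"data": data, "prev": prev_hash}
    block["hash"] = block_hash(block)
    return block

def verify(chain):
    """Recompute every hash and every link; tampering anywhere is detected."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

# Chain of (hypothetical) patient-record references.
chain = [make_block("genesis", "0" * 64)]
for payload in ["record:xyz", "record:abc"]:
    chain.append(make_block(payload, chain[-1]["hash"]))

print(verify(chain))            # True
chain[1]["data"] = "tampered"
print(verify(chain))            # False
```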


Subjects
Blockchain; Computer Communication Networks; Computer Security; Confidentiality; Delivery of Health Care; Humans; Privacy
12.
Comput Math Methods Med ; 2022: 5714454, 2022.
Article in English | MEDLINE | ID: mdl-35903432

ABSTRACT

Objective: Measurement and monitoring of blood pressure are of great importance for preventing diseases caused by hypertension, such as cardiovascular disease and stroke. There is therefore a need for advanced artificial intelligence-based systolic and diastolic blood pressure systems built on a new, noninvasive technological infrastructure. This study aims to determine the minimum ECG duration required to calculate systolic and diastolic blood pressure from the electrocardiography (ECG) signal. Methodology: The study includes ECG recordings of five individuals taken from the IEEE database, measured during daily activity. Each signal was divided into epochs of 2, 4, 6, 8, 10, 12, 14, 16, 18, and 20 seconds, and twenty-five features were extracted from each epoched signal. The dimensionality of the dataset was reduced using Spearman's feature selection algorithm, and machine learning algorithms were applied to the resulting dataset and evaluated on standard metrics. The exponential Gaussian process regression (GPR) algorithm was preferred because it is easy to integrate into embedded systems. Results: For 16-second epochs, the MAPE estimation performance values for diastolic and systolic blood pressure were 2.44 mmHg and 1.92 mmHg, respectively. Conclusion: The results indicate that systolic and diastolic blood pressure values can be calculated with a high performance ratio from 16-second ECG signals.
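Spearman-based feature selection ranks each feature's monotonic association with the target and keeps those above a threshold. A sketch from the definition (rank correlation is the Pearson correlation of the ranks); the threshold and toy features are assumptions, and tie handling is omitted since ECG features are continuous:

```python
import numpy as np

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the ranks (no tie correction)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return (rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry))

def select_features(X, y, threshold=0.3):
    """Keep feature columns whose |rho| with the target exceeds the threshold."""
    rhos = np.array([spearman(X[:, j], y) for j in range(X.shape[1])])
    return np.where(np.abs(rhos) > threshold)[0], rhos

rng = np.random.default_rng(2)
n = 200
y = rng.normal(size=n)                       # stand-in for a blood-pressure target
X = np.column_stack([
    y + 0.1 * rng.normal(size=n),            # strongly related feature
    rng.normal(size=n),                      # pure noise feature
    -y ** 3 + 0.1 * rng.normal(size=n),      # monotone but nonlinear relation
])
kept, rhos = select_features(X, y)
print(sorted(kept.tolist()))  # the related features (0 and 2) survive; the noise does not
```

Because Spearman works on ranks, the nonlinear-but-monotone feature is retained even though its Pearson correlation would understate the relationship.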


Subjects
Artificial Intelligence; Blood Pressure Determination; Algorithms; Blood Pressure/physiology; Blood Pressure Determination/methods; Electrocardiography; Humans; Machine Learning
13.
Comput Math Methods Med ; 2022: 2733965, 2022.
Article in English | MEDLINE | ID: mdl-35693266

ABSTRACT

Lung cancer has emerged as a major cause of death across all demographics worldwide, driven largely by the proliferation of smoking. However, early detection and diagnosis of lung cancer enabled by technological improvements can save millions of lives globally. Computerized tomography (CT) imaging is a proven and popular technique in the medical field, but diagnosing cancer from CT scans alone is a difficult task even for doctors and experts, which is why computer-assisted diagnosis has revolutionized disease diagnosis, especially cancer detection. This study examines 20 CT scan images of lungs. In a pre-processing step, we compared median, Gaussian, 2D convolution, and mean filters on the medical CT images and established that the median filter is the most appropriate. Next, we improved image contrast by applying adaptive histogram equalization. The pre-processed, higher-quality images were then subjected to two clustering algorithms, fuzzy c-means and k-means, and their performance was compared; fuzzy c-means showed the highest accuracy, at 98%. Features were extracted using the Gray-Level Co-occurrence Matrix (GLCM). For classification, three approaches were compared: bagging, gradient boosting, and an ensemble (SVM, MLPNN, DT, logistic regression, and KNN). Gradient boosting performed best among the three, with an accuracy of 90.9%.
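Fuzzy c-means alternates between soft membership updates and weighted centroid updates. A sketch of the standard formulation on toy 2D "intensity" data (the fuzzifier m, iteration count, and extreme-point centroid initialization, chosen to keep the demo deterministic, are assumptions):

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=50):
    """Fuzzy c-means: soft memberships U and centroids V, updated alternately.

    U[i, k] is point i's degree of belonging to cluster k; rows sum to 1.
    """
    V = np.array([X.min(axis=0), X.max(axis=0)])       # deterministic init at data extremes
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - V[None], axis=2) + 1e-10
        U = 1.0 / (d ** (2.0 / (m - 1)))               # membership update
        U /= U.sum(axis=1, keepdims=True)
        W = U ** m
        V = (W.T @ X) / W.sum(axis=0)[:, None]         # fuzzified centroid update
    return U, V

# Two well-separated toy clusters.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.3, size=(30, 2)), rng.normal(5, 0.3, size=(30, 2))])
U, V = fuzzy_cmeans(X)
labels = U.argmax(axis=1)
print(labels[0] != labels[30])  # True: the two blobs end up in different clusters
```

Unlike k-means, each pixel keeps a graded membership in every cluster, which is what makes the method attractive for soft tissue boundaries.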


Subjects
Early Detection of Cancer; Lung Neoplasms; Algorithms; Humans; Lung Neoplasms/diagnostic imaging; Machine Learning; Tomography, X-Ray Computed/methods
14.
Comput Math Methods Med ; 2022: 1556025, 2022.
Article in English | MEDLINE | ID: mdl-35529266

ABSTRACT

Due to the proliferation of COVID-19, the world is in a terrible condition and human life is at risk. The SARS-CoV-2 virus has had a significant impact on public health and on social and financial issues. Thousands of individuals are infected regularly in India, one of the populations most seriously affected by the pandemic. Despite modern medical and technical capabilities, predicting the spread of the virus has been extremely difficult. Health systems such as hospitals have used predictive models to gain insight into the influence of COVID-19 on outbreaks and required resources while minimizing the dangers of transmission. As a result, the main focus of this research is building a COVID-19 predictive analytics technique. Prophet, ARIMA, and stacked LSTM-GRU models were employed to forecast the number of confirmed and active cases in the Indian dataset. State-of-the-art models, including the recurrent neural network (RNN), gated recurrent unit (GRU), long short-term memory (LSTM), linear regression, polynomial regression, autoregressive integrated moving average (ARIMA), and Prophet, were used to compare prediction outcomes. The stacked LSTM-GRU model's forecasts were found to be more consistent than those of the existing models, with better prediction results. Although the stacked model requires a large training dataset, it helps create a higher level of abstraction in the final results and maximizes the model's memory capacity, while the GRU component helps resolve vanishing gradients. The findings reveal that the proposed stacked LSTM-GRU model outperforms all other models in terms of R-squared and RMSE. This forecasting aids in determining the future transmission paths of the virus.
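The autoregressive core shared by these forecasters can be sketched with a least-squares AR(p) model. This is a stripped-down stand-in for ARIMA (no differencing or moving-average terms) fitted on toy case counts, not the study's implementation:

```python
import numpy as np

def fit_ar(series, p):
    """Least-squares AR(p): y_t ≈ c + a_1*y_{t-1} + ... + a_p*y_{t-p}."""
    rows = [series[t - p:t][::-1] for t in range(p, len(series))]
    A = np.hstack([np.ones((len(rows), 1)), np.array(rows)])
    coeffs, *_ = np.linalg.lstsq(A, series[p:], rcond=None)
    return coeffs

def forecast(series, coeffs, steps):
    """Roll the fitted model forward, feeding predictions back in as lags."""
    p = len(coeffs) - 1
    hist = list(series)
    for _ in range(steps):
        lags = hist[-p:][::-1]                       # [y_{t-1}, ..., y_{t-p}]
        hist.append(coeffs[0] + np.dot(coeffs[1:], lags))
    return np.array(hist[len(series):])

# Toy "cumulative confirmed cases": exact linear growth, which AR(2) extends exactly.
cases = np.arange(100, 200, 5, dtype=float)   # 100, 105, ..., 195
coeffs = fit_ar(cases, p=2)
pred = forecast(cases, coeffs, steps=3)
print(np.round(pred, 2))  # continues the +5 trend: 200, 205, 210
```

The LSTM-GRU stack plays the same role as `forecast` here, but with learned nonlinear state instead of fixed linear coefficients.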


Subjects
Acquired Immunodeficiency Syndrome; COVID-19; COVID-19/epidemiology; Forecasting; Humans; India/epidemiology; Pandemics; SARS-CoV-2
15.
Expert Syst Appl ; 180: 115141, 2021 Oct 15.
Article in English | MEDLINE | ID: mdl-33967405

ABSTRACT

X-ray units have become one of the most advantageous candidates for triaging patients infected with the new coronavirus disease, COVID-19, thanks to their relatively low radiation dose, ease of access, practicality, low cost, and quick imaging process. This research aimed to develop a reliable convolutional neural network (CNN) model for classifying COVID-19 from chest X-ray views while preventing database-related bias. A transfer learning-based CNN model was developed using a total of 1,218 chest X-ray images (CXIs), consisting of 368 COVID-19 pneumonia and 850 other pneumonia cases, with pre-trained architectures including DenseNet-201, ResNet-18, and SqueezeNet. The chest X-ray images were acquired from publicly available databases, and each image was carefully selected to prevent bias. A stratified 5-fold cross-validation approach was used with 90% of the data for training and 10% for testing (unseen folds); 20% of the training data served as a validation set to prevent overfitting. The binary classification performance of the proposed CNN models was evaluated on the testing data, and an activation mapping approach was implemented to improve the causality and visual interpretability of the radiographs. The CNN model built on the DenseNet-201 architecture outperformed the others, with the highest accuracy, precision, recall, and F1-score of 94.96%, 89.74%, 94.59%, and 92.11%, respectively. The results indicate that reliable CNN-based diagnosis of COVID-19 pneumonia from CXIs can accelerate triage, save critical time, and help prioritize resources while assisting radiologists.
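The stratified 5-fold split used above preserves the 368:850 class ratio within each fold. A minimal sketch from the definition (the shuffling seed and round-robin assignment are implementation choices, not the study's code):

```python
import random
from collections import defaultdict

def stratified_kfold(labels, k=5, seed=0):
    """Split sample indices into k folds, preserving each class's proportion."""
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    rng = random.Random(seed)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        rng.shuffle(idxs)
        for j, i in enumerate(idxs):
            folds[j % k].append(i)      # deal each class round-robin across folds
    return folds

# 368 "covid" vs 850 "other" labels, matching the dataset sizes above.
labels = ["covid"] * 368 + ["other"] * 850
folds = stratified_kfold(labels, k=5)
sizes = [(len(f), sum(labels[i] == "covid" for i in f)) for f in folds]
print(sizes)  # each fold holds ~1/5 of each class (73-74 covid, 170 other)
```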

16.
PeerJ Comput Sci ; 7: e405, 2021.
Article in English | MEDLINE | ID: mdl-33817048

ABSTRACT

BACKGROUND: Otitis media (OM) is the infection and inflammation of the mucous membrane covering the Eustachian tube and the air-filled cavities of the middle ear and temporal bone, and it is one of the most common ailments. In clinical practice, OM is diagnosed by visual inspection of otoscope images, a subjective and error-prone process. METHODS: In this study, a novel computer-aided decision support model based on a convolutional neural network (CNN) was developed. To improve the generalization ability of the proposed model, a combination of the convolutional block attention module (CBAM), which couples channel and spatial attention, residual blocks, and the hypercolumn technique was embedded into it. All experiments were performed on an open-access tympanic membrane dataset consisting of 956 otoscope images collected into five classes. RESULTS: The proposed model yielded satisfactory classification achievement, with an overall accuracy of 98.26%, sensitivity of 97.68%, and specificity of 99.30%, results rather superior to those of pre-trained CNNs such as AlexNet, VGG-Nets, GoogLeNet, and ResNets. Consequently, this study indicates that a CNN model equipped with advanced image processing techniques is useful for OM diagnosis. The proposed model may help field specialists achieve objective and repeatable results, decrease the misdiagnosis rate, and support decision-making processes.

17.
Appl Soft Comput ; 110: 107610, 2021 Oct.
Article in English | MEDLINE | ID: mdl-36569211

ABSTRACT

In this work, an artificial intelligence-based smart camera system prototype that tracks social distance from a bird's-eye perspective has been developed. The MobileNet SSD-v3, Faster R-CNN Inception-v2, and Faster R-CNN ResNet-50 models were utilized to identify people in video sequences. The final prototype, based on the Faster R-CNN model, is an integrated embedded system that detects social distance with the camera. The software, developed using the Nvidia Jetson Nano development kit and a Raspberry Pi camera module, performs all necessary computation on-device, detects social-distance violations, issues audible and light warnings, and reports the results to a server. The developed smart camera prototype is expected to be integrable into public spaces within the scope of "sustainable smart cities", a change the world is on the verge of.
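Once detections are projected to the ground plane, the social-distance check reduces to thresholded pairwise distances. A sketch in NumPy (the 2 m threshold and the already-homography-projected coordinates are assumptions; the detector and projection are outside this snippet):

```python
import numpy as np

def violations(points, min_dist=2.0):
    """Pairs of people closer than min_dist (metres) in the bird's-eye plane.

    `points` are ground-plane (x, y) coordinates, i.e. detections assumed
    to be already projected by a camera-to-ground homography.
    """
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=2)
    i, j = np.where((dist < min_dist) & (dist > 0))   # dist > 0 skips self-pairs
    return {(int(a), int(b)) for a, b in zip(i, j) if a < b}

people = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 10.0]])
print(violations(people))  # {(0, 1)}
```

Each returned pair would trigger the prototype's audible/light warning and be reported to the server.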

18.
Int J Med Inform ; 144: 104300, 2020 12.
Article in English | MEDLINE | ID: mdl-33069058

ABSTRACT

OBJECTIVE: Hospital performance evaluation is vital for managing hospitals and informing patients about what hospitals can offer. It also plays a key role in planning essential issues such as electrical energy management and cybersecurity in hospitals. Although this measurement can be made objectively with the help of various indicators, it can become very complicated when subjective expert opinions enter the process. METHOD: Budget cuts in health expenditures worldwide make it necessary to use hospital resources as efficiently as possible, and the most effective way to do this is to choose the evaluation criteria well. Machine learning (ML) is the current method for determining these criteria, which in the past were determined by consulting experts. ML methods, which remain entirely objective with respect to all indicators, offer fair and reliable results quickly and automatically. Based on this idea, this study provides an automated healthcare system evaluation framework that automatically assigns weights to specific indicators. First, the ability of each indicator to serve as an input or an output is measured. RESULTS: As a result of this measurement, the indicators are divided into an input-only group (group A) and a group usable as both input and output (group B). In the second step, the total effect of each input on the output is calculated by using each indicator in group B as the output in turn with a random forest regression model. CONCLUSION: Finally, the total effect of each indicator on the healthcare system is determined. Thus, the whole system is evaluated objectively instead of subjectively on the basis of a single output.
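The weighting scheme of taking each group-B indicator as the output in turn and scoring every other indicator's effect on it can be sketched as below. The study uses random-forest importances for the per-output scores; to keep this sketch dependency-free, absolute correlation is used as an illustrative stand-in, and the column-index interface is an assumption.

```python
import numpy as np

def indicator_weights(data, input_idx, io_idx):
    """Average each indicator's effect across all candidate outputs.
    data: (samples, indicators); input_idx: group-A columns (input only);
    io_idx: group-B columns (used as both input and output in turn).
    Per-output effect here is |correlation|, standing in for the paper's
    random-forest importance scores."""
    data = np.asarray(data, dtype=float)
    cols = list(input_idx) + list(io_idx)
    effects = {c: [] for c in cols}
    for out in io_idx:                      # each group-B indicator as output
        for c in cols:
            if c == out:
                continue
            r = np.corrcoef(data[:, c], data[:, out])[0, 1]
            effects[c].append(abs(r))
    return {c: float(np.mean(v)) for c, v in effects.items() if v}
```

The resulting per-indicator weights then rank the indicators by their total effect on the system, with no expert judgment in the loop.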


Subjects
Computer Security, Hospitals, Delivery of Health Care, Humans, Machine Learning
19.
Appl Soft Comput ; 97: 106580, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32837453

ABSTRACT

A pneumonia of unknown cause, first detected in Wuhan, China, and spreading rapidly throughout the world, was declared Coronavirus disease 2019 (COVID-19). Thousands of people have lost their lives to this disease, and its negative effects on public health are ongoing. In this study, an intelligent computer-aided model that can automatically detect positive COVID-19 cases is proposed to support daily clinical applications. The proposed model is based on the convolutional neural network (CNN) architecture and can automatically reveal discriminative features in chest X-ray images through its rich filter families, abstraction, and weight-sharing characteristics. Contrary to the commonly used transfer learning approach, the proposed deep CNN model was trained from scratch: instead of a pre-trained CNN, a novel serial network consisting of five convolution layers was designed and used as a deep feature extractor. The extracted deep discriminative features fed the machine learning algorithms, namely k-nearest neighbor, support vector machine (SVM), and decision tree classifiers, whose hyperparameters were optimized with the Bayesian optimization algorithm. The experiments were conducted on a public COVID-19 radiology database, split into training and test sets at rates of 70% and 30%, respectively. The most efficient results were obtained by the SVM classifier, with an accuracy of 98.97%, a sensitivity of 89.39%, a specificity of 99.75%, and an F-score of 96.72%. Consequently, a cheap, fast, and reliable intelligent tool has been provided for COVID-19 infection detection. The developed model can assist field specialists, physicians, and radiologists in the decision-making process: it can reduce misdiagnosis rates and serve as a retrospective evaluation tool to validate positive COVID-19 cases.
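The handoff from extracted deep features to a classical classifier can be illustrated with a minimal linear SVM trained by Pegasos-style subgradient descent. This is a dependency-free stand-in for the Bayesian-optimized SVM in the paper; the regularization constant, epoch count, and linear kernel are illustrative assumptions.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style linear SVM on feature matrix X with labels y in {-1, +1}.
    In the paper's pipeline, X would hold the CNN's deep features."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)                 # decaying step size
            if y[i] * (w @ X[i]) < 1:             # margin violated: hinge step
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:                                 # only regularization shrink
                w = (1 - eta * lam) * w
    return w

def predict(w, X):
    """Classify rows of X by the sign of the decision value."""
    return np.sign(X @ w)
```

In practice the hyperparameter `lam` is exactly the kind of knob the paper tunes with Bayesian optimization against validation accuracy.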

20.
Med Hypotheses ; 141: 109690, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32278892

ABSTRACT

BACKGROUND AND OBJECTIVE: With the development of computer technology, brain-computer interfaces (BCI) have begun to be used to enable individuals with motor disabilities to communicate with their environment or to move. This study focused on the spelling system that transforms brain activity recorded as EEG signals into writing. In BCI systems working with the P300 obtained from 64 electrodes, data recording and processing impose high cost and a high processing load. By reducing the number of electrodes used, the physical dimensions, cost, and processing load of such systems can be reduced; the main problem is determining which electrodes are the most effective. Randomness-based optimization methods run their experiments within the framework of a specific fitness function and therefore yield near-best rather than best results. The electrodes chosen in this way are expected to contribute positively to classifier performance; at the same time, balancing the unbalanced dataset is expected to increase system performance. METHOD: Electrode selection was performed on both the original dataset and an ADASYN-balanced dataset using the Genetic Algorithm and Binary Particle Swarm Optimization (BPSO). The Wadsworth BCI Dataset (P300 Evoked Potentials) was used. The channels chosen most frequently by the optimization methods were determined and compared with the 64-channel classification results using LS-SVM and LDA. RESULT: The eight most frequently selected channels, the channels selected more often than the average across all selected channels, and the full 64-channel results were compared. The highest accuracy, 97.250%, was achieved for user A with the LDA classifier on the 29 channels selected by BPSO. CONCLUSIONS: The results show that reducing the number of channels with optimization methods increases classification performance and greatly reduces classifier training and test times. Applying the ADASYN method did not yield any significant difference.


Subjects
Brain-Computer Interfaces, Algorithms, Electrodes, Electroencephalography, Evoked Potentials, Humans