Results 1 - 4 of 4
1.
BMC Med Inform Decis Mak ; 24(1): 37, 2024 Feb 06.
Article in English | MEDLINE | ID: mdl-38321416

ABSTRACT

Diabetic retinopathy (DR) is the most common eye disease in people with diabetes and can cause blurred vision or even total blindness. Early detection is therefore essential to prevent or alleviate its impact. However, because symptoms may not be noticeable in the early stages, DR is difficult for doctors to identify. Numerous predictive models based on machine learning (ML) and deep learning (DL) have been developed to determine all stages of DR, but existing DR classification models either cannot classify every stage or rely on computationally heavy approaches. Moreover, common metrics such as accuracy, F1 score, precision, recall, and the AUC-ROC score are not reliable for assessing DR grading, because they account for neither the severity of the discrepancy between the assigned and predicted grades nor the ordered nature of the DR grading scale. This research proposes computationally efficient ensemble methods for DR classification. These methods leverage pre-trained model weights, reducing training time and resource requirements, and use data augmentation to address data limitations, enhance features, and improve generalization. This combination offers a promising approach to accurate and robust DR grading. In particular, we apply transfer learning with models trained on DR data and employ CLAHE for image enhancement and Gaussian blur for noise reduction. We propose a three-layer classifier that incorporates dropout and ReLU activation, designed to minimize overfitting while effectively extracting features and assigning DR grades. We prioritize the Quadratic Weighted Kappa (QWK) metric because of its sensitivity to label discrepancies, which is crucial for an accurate DR diagnosis. This combined approach achieves state-of-the-art QWK scores (0.901, 0.967, and 0.944) on the Eyepacs, Aptos, and Messidor datasets.
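The QWK metric emphasized above can be computed directly from a confusion matrix. The sketch below is a minimal NumPy implementation (an illustration, not the authors' code) for an ordinal 5-grade DR scale, where larger grade discrepancies incur quadratically larger penalties:

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes=5):
    """Quadratic Weighted Kappa: 1 means perfect agreement, 0 means
    chance-level agreement; far-off grades are penalized quadratically."""
    # Observed confusion matrix O[i, j]: true grade i predicted as j
    O = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    # Quadratic weight matrix, normalized so the max penalty is 1
    W = np.array([[(i - j) ** 2 for j in range(n_classes)]
                  for i in range(n_classes)], dtype=float)
    W /= (n_classes - 1) ** 2
    # Expected confusion matrix under independent marginals
    hist_true = O.sum(axis=1)
    hist_pred = O.sum(axis=0)
    E = np.outer(hist_true, hist_pred) / O.sum()
    return 1.0 - (W * O).sum() / (W * E).sum()
```

Unlike accuracy or F1, a prediction that is one grade off contributes a far smaller penalty than one several grades off, which is exactly the property the ordinal DR grading scale requires.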


Subject(s)
Diabetes Mellitus , Diabetic Retinopathy , Physicians , Humans , Diabetic Retinopathy/diagnosis , Algorithms , Machine Learning , Image Interpretation, Computer-Assisted/methods
2.
Big Data ; 2023 Mar 22.
Article in English | MEDLINE | ID: mdl-36946747

ABSTRACT

An efficient fake news detector becomes essential as access to social media platforms grows rapidly. Previous studies mainly designed models on individual data sets and may therefore suffer degraded performance on other data. Developing a robust model on a combined data set with diverse knowledge is thus crucial. However, training a model on a combined data set requires extensive training time and a sequential workload to reach optimal performance without prior knowledge of suitable parameters. This study addresses these issues by introducing a unified training strategy that derives a base classifier structure and all hyperparameters from the individual models, built on a pretrained transformer model. The performance of the proposed model is evaluated on three publicly available data sets, namely ISOT and two others from the Kaggle website. The results indicate that the proposed unified training strategy surpasses existing models such as random forests, convolutional neural networks, and long short-term memory networks, reaching 97% accuracy and an F1 score of 0.97. Furthermore, removing words shorter than three letters from the input samples reduced training time by roughly 1.5 to 1.8×. We also performed an extensive performance analysis, varying the number of encoder blocks to build compact models trained on the combined data set; the results show that reducing the number of encoder blocks lowers performance.
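The reported 1.5 to 1.8× training speed-up comes from a simple preprocessing step: dropping very short words, which shrinks each sample and hence the transformer's input sequence length. A minimal sketch of such a filter (an illustration, not the authors' pipeline; the tokenization regex is our assumption) might look like:

```python
import re

def drop_short_words(text, min_len=3):
    """Keep only alphabetic tokens of at least min_len letters,
    shortening each input sample before tokenization."""
    tokens = re.findall(r"[A-Za-z]+", text)
    return " ".join(t for t in tokens if len(t) >= min_len)
```

For example, `drop_short_words("It is a big scoop, or is it?")` returns `"big scoop"`: the two-letter and one-letter function words are dropped while the content-bearing tokens survive.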

3.
BMC Med Inform Decis Mak ; 21(1): 101, 2021 03 16.
Article in English | MEDLINE | ID: mdl-33726723

ABSTRACT

BACKGROUND: Blood glucose (BG) management is crucial for patients with type 1 diabetes, making reliable artificial pancreas and insulin infusion systems a necessity. In recent years, deep learning techniques have been used to build more accurate BG level prediction systems. However, continuous glucose monitoring (CGM) readings are susceptible to sensor errors; inaccurate CGM readings therefore degrade BG prediction and make it unreliable, even with the most optimal machine learning model. METHODS: In this work, we propose a novel approach to predicting blood glucose level with a stacked long short-term memory (LSTM) based deep recurrent neural network (RNN) model that accounts for sensor faults. We use Kalman smoothing to correct CGM readings that are inaccurate due to sensor error. RESULTS: On the OhioT1DM (2018) dataset, containing eight weeks of data from six patients, we achieve an average RMSE of 6.45 and 17.24 mg/dl for prediction horizons (PH) of 30 and 60 min, respectively. CONCLUSIONS: To the best of our knowledge, this is the best average prediction accuracy reported for the OhioT1DM dataset. Different physiological signals, e.g., Kalman-smoothed CGM data, carbohydrates from meals, bolus insulin, and cumulative step counts over a fixed time interval, are crafted into meaningful features used as input to the model. The goal of our approach is to reduce the difference between the predicted CGM values and the fingerstick blood glucose readings, the ground truth. Our results indicate that the proposed approach enables more reliable BG forecasting and might improve the performance of artificial pancreas and insulin infusion systems for T1D management.
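The Kalman smoothing step can be sketched with a 1-D random-walk state model over the CGM readings plus a Rauch-Tung-Striebel backward pass. The process and measurement variances below (`q`, `r`) are illustrative placeholders, not values from the paper:

```python
import numpy as np

def kalman_smooth(z, q=0.05, r=4.0):
    """Smooth noisy CGM readings z (mg/dl) under a random-walk model.
    q: process variance (how fast true BG drifts); r: sensor noise variance."""
    n = len(z)
    x = np.zeros(n); P = np.zeros(n)      # filtered estimates
    xp = np.zeros(n); Pp = np.zeros(n)    # one-step predictions
    x_est, P_est = z[0], r
    for k in range(n):
        # Predict: state carries over, uncertainty grows by q
        xp[k], Pp[k] = x_est, P_est + q
        # Update with the noisy reading z[k]
        K = Pp[k] / (Pp[k] + r)
        x_est = xp[k] + K * (z[k] - xp[k])
        P_est = (1 - K) * Pp[k]
        x[k], P[k] = x_est, P_est
    # Backward Rauch-Tung-Striebel pass refines earlier estimates
    xs = x.copy()
    for k in range(n - 2, -1, -1):
        C = P[k] / Pp[k + 1]
        xs[k] = x[k] + C * (xs[k + 1] - xp[k + 1])
    return xs
```

Feeding the smoothed series, rather than the raw sensor readings, into the stacked LSTM is what shields the predictor from transient sensor faults.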


Subject(s)
Blood Glucose , Diabetes Mellitus, Type 1 , Blood Glucose Self-Monitoring , Diabetes Mellitus, Type 1/drug therapy , Humans , Insulin Infusion Systems , Neural Networks, Computer
4.
IEEE Trans Neural Netw Learn Syst ; 29(11): 5277-5291, 2018 11.
Article in English | MEDLINE | ID: mdl-29994641

ABSTRACT

In this paper, we address the learning problem in adaptive filtering, where knowledge is obtained and updated from information extracted from sequential samples over time. Effective measurement of a sample's informativeness, followed by a reasonable treatment of that sample, improves learning performance. This paper proposes a sequential outlier criterion for the sparsification of online adaptive filtering, designed to measure informativeness effectively and to obtain a more accurate and more compact network during learning. In the proposed method, a sample's informativeness is measured against the historically adjacent samples, and each measured sample is then treated individually by the learning system according to whether it is informative, redundant, or abnormal. With our method, a more sensible learning process is achieved with valid knowledge extracted, and the optimal network in the learning system can be obtained. Simulations on static function estimation, Mackey-Glass time series prediction, and Lorenz chaotic time series prediction demonstrate that the proposed method provides more effective classification of samples and more accurate networks in online adaptive filtering.
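The informative/redundant/abnormal triage described above can be illustrated with a toy 1-D criterion. The distance measure and thresholds below are our assumptions for illustration, not the paper's actual sequential outlier criterion:

```python
import numpy as np

def classify_sample(x_new, y_new, recent_x, recent_y,
                    dist_tol=0.1, err_tol=3.0):
    """Toy triage of a new (x, y) sample against recent history:
    - 'redundant' if x_new is very close to a recent input (adds no coverage),
    - 'abnormal'  if y_new deviates far from the recent targets (likely outlier),
    - 'informative' otherwise (worth adding to the network)."""
    # Novelty in input space: distance to the nearest recent input
    d = np.min(np.abs(np.asarray(recent_x) - x_new))
    if d < dist_tol:
        return "redundant"
    # Consistency in output space: deviation from the recent target spread
    local_mean = np.mean(recent_y)
    spread = np.std(recent_y) + 1e-9
    if abs(y_new - local_mean) > err_tol * spread:
        return "abnormal"
    return "informative"
```

Only the "informative" branch would grow the filter's dictionary; redundant samples are skipped and abnormal ones are rejected, which is how the criterion keeps the network both compact and accurate.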
