Results 1 - 5 of 5
1.
BMC Med Inform Decis Mak ; 24(1): 37, 2024 Feb 06.
Article in English | MEDLINE | ID: mdl-38321416

ABSTRACT

The most common eye disease in people with diabetes is diabetic retinopathy (DR). It can cause blurred vision or even total blindness. It is therefore essential to promote early detection to prevent or alleviate the impact of DR. However, because symptoms may not be noticeable in the early stages of DR, it is difficult for doctors to identify the disease. Consequently, numerous predictive models based on machine learning (ML) and deep learning (DL) have been developed to detect all stages of DR. However, existing DR classification models either cannot classify every DR stage or rely on computationally heavy approaches. Moreover, common metrics such as accuracy, F1 score, precision, recall, and AUC-ROC are not reliable for assessing DR grading, because they do not account for two key factors: the severity of the discrepancy between the assigned and predicted grades, and the ordered nature of the DR grading scale. This research proposes computationally efficient ensemble methods for the classification of DR. These methods leverage pre-trained model weights, reducing training time and resource requirements. In addition, data augmentation techniques are used to address data limitations, enhance features, and improve generalization. This combination offers a promising approach for accurate and robust DR grading. In particular, we take advantage of transfer learning using models trained on DR data, and employ CLAHE for image enhancement and Gaussian blur for noise reduction. We propose a three-layer classifier that incorporates dropout and ReLU activation; this design aims to minimize overfitting while effectively extracting features and assigning DR grades. We prioritize the Quadratic Weighted Kappa (QWK) metric due to its sensitivity to label discrepancies, which is crucial for an accurate diagnosis of DR. This combined approach achieves state-of-the-art QWK scores (0.901, 0.967, and 0.944) on the EyePACS, APTOS, and Messidor datasets.
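
The abstract emphasizes the Quadratic Weighted Kappa (QWK) metric for ordinal DR grades. As a rough illustration of why QWK penalizes large grade discrepancies more heavily than small ones, here is a minimal NumPy sketch of the metric for a five-grade scale; the function name and the example grades are illustrative and are not taken from the paper's code.

```python
# Minimal sketch of Quadratic Weighted Kappa (QWK) for ordinal grades 0..num_grades-1.
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, num_grades=5):
    """Compute QWK between true and predicted ordinal grades."""
    O = np.zeros((num_grades, num_grades), dtype=float)   # observed agreement matrix
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1

    # Quadratic penalty grows with the distance between assigned and predicted grade.
    idx = np.arange(num_grades)
    W = (idx[:, None] - idx[None, :]) ** 2 / (num_grades - 1) ** 2

    # Expected agreement under independence of the two "raters".
    hist_true = O.sum(axis=1)
    hist_pred = O.sum(axis=0)
    E = np.outer(hist_true, hist_pred) / O.sum()

    return 1.0 - (W * O).sum() / (W * E).sum()

# Example: a prediction one grade off is penalized far less than one four grades off.
print(quadratic_weighted_kappa([0, 1, 2, 3, 4], [0, 1, 2, 3, 3]))
```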


Subject(s)
Diabetes Mellitus , Diabetic Retinopathy , Physicians , Humans , Diabetic Retinopathy/diagnosis , Algorithms , Machine Learning , Image Interpretation, Computer-Assisted/methods
2.
Neural Netw ; 151: 16-33, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35367735

ABSTRACT

Recent theoretical and experimental works have connected Hebbian plasticity with the reinforcement learning (RL) paradigm, producing a class of trial-and-error learning in artificial neural networks known as neo-Hebbian plasticity. Inspired by the role of the neuromodulator dopamine in synaptic modification, neo-Hebbian RL methods extend unsupervised Hebbian learning rules with value-based modulation to selectively reinforce associations. This reinforcement allows for learning exploitative behaviors and produces RL models with strong biological plausibility. The review begins with coverage of fundamental concepts in rate- and spike-coded models. We introduce Hebbian correlation detection as a basis for modification of synaptic weighting and progress to neo-Hebbian RL models guided solely by extrinsic rewards. We then analyze state-of-the-art neo-Hebbian approaches to the exploration-exploitation balance under the RL paradigm, emphasizing works that employ additional mechanisms to modulate that dynamic. Our review of neo-Hebbian RL methods in this context indicates substantial potential for novel improvements in exploratory learning, primarily through stronger incorporation of intrinsic motivators. We provide a number of research suggestions for this pursuit by drawing from modern theories and results in neuroscience and psychology. The exploration-exploitation balance is a central issue in RL research, and this review is the first to focus on it under the neo-Hebbian RL framework.
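
To make the idea of value-modulated Hebbian learning concrete, the following toy sketch gates a standard Hebbian correlation term with a scalar reward signal, loosely analogous to the dopamine-like modulation discussed in the review. The layer sizes, learning rate, and tanh nonlinearity are illustrative assumptions, not a rule proposed in any of the reviewed works.

```python
# Toy sketch of a reward-modulated (neo-Hebbian) weight update.
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post = 8, 4
W = rng.normal(scale=0.1, size=(n_post, n_pre))   # synaptic weights
eta = 0.05                                        # learning rate

def neo_hebbian_step(W, pre, reward):
    post = np.tanh(W @ pre)                # rate-coded postsynaptic activity
    hebb = np.outer(post, pre)             # classic Hebbian correlation term
    return W + eta * reward * hebb         # reward gates whether the association is reinforced

pre = rng.random(n_pre)
W = neo_hebbian_step(W, pre, reward=1.0)   # rewarded trial strengthens the active association
W = neo_hebbian_step(W, pre, reward=0.0)   # unrewarded trial leaves the weights unchanged
```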


Subject(s)
Neural Networks, Computer , Reinforcement, Psychology , Learning , Reward
3.
BMC Med Inform Decis Mak ; 21(1): 101, 2021 Mar 16.
Article in English | MEDLINE | ID: mdl-33726723

ABSTRACT

BACKGROUND: Blood glucose (BG) management is crucial for patients with type 1 diabetes, making reliable artificial pancreas or insulin infusion systems a necessity. In recent years, deep learning techniques have been used to build more accurate BG prediction systems. However, continuous glucose monitoring (CGM) readings are susceptible to sensor errors, and inaccurate CGM readings affect BG prediction and make it unreliable even when an optimal machine learning model is used. METHODS: In this work, we propose a novel approach to predicting blood glucose levels with a stacked long short-term memory (LSTM)-based deep recurrent neural network (RNN) model that accounts for sensor faults. We use Kalman smoothing to correct inaccurate CGM readings caused by sensor error. RESULTS: For the OhioT1DM (2018) dataset, containing eight weeks of data from six patients, we achieve an average RMSE of 6.45 and 17.24 mg/dl for prediction horizons (PH) of 30 min and 60 min, respectively. CONCLUSIONS: To the best of our knowledge, this is the leading average prediction accuracy for the OhioT1DM dataset. Different physiological signals, e.g., Kalman-smoothed CGM data, meal carbohydrates, bolus insulin, and cumulative step counts over a fixed time interval, are crafted into meaningful features used as input to the model. The goal of our approach is to lower the difference between the predicted CGM values and the fingerstick blood glucose readings, which serve as the ground truth. Our results indicate that the proposed approach is feasible for more reliable BG forecasting and might improve the performance of artificial pancreas and insulin infusion systems for type 1 diabetes management.
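
For readers unfamiliar with stacked recurrent forecasters, the PyTorch sketch below shows the general shape of a two-layer LSTM regressor over multivariate input (e.g., smoothed CGM, carbohydrates, bolus insulin, step counts). The layer sizes, feature count, and input window are illustrative assumptions and do not reproduce the paper's architecture or its Kalman smoothing stage.

```python
# Minimal PyTorch sketch of a stacked-LSTM regressor for CGM forecasting.
import torch
import torch.nn as nn

class StackedLSTMForecaster(nn.Module):
    def __init__(self, n_features=4, hidden=64, n_layers=2):
        super().__init__()
        # n_features: e.g. smoothed CGM, carbohydrates, bolus insulin, step count
        self.lstm = nn.LSTM(n_features, hidden, num_layers=n_layers, batch_first=True)
        self.head = nn.Linear(hidden, 1)    # predicted BG at the prediction horizon

    def forward(self, x):                   # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])     # regress from the last hidden state

model = StackedLSTMForecaster()
history = torch.randn(8, 12, 4)             # 8 samples, 12 past time steps, 4 features
print(model(history).shape)                 # torch.Size([8, 1])
```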


Subject(s)
Blood Glucose , Diabetes Mellitus, Type 1 , Blood Glucose Self-Monitoring , Diabetes Mellitus, Type 1/drug therapy , Humans , Insulin Infusion Systems , Neural Networks, Computer
4.
Biol Cybern ; 94(1): 33-45, 2006 Jan.
Article in English | MEDLINE | ID: mdl-16283375

ABSTRACT

Synchrony-driven recruitment learning addresses the question of how arbitrary concepts, represented by synchronously active ensembles, may be acquired within a randomly connected static graph of neuron-like elements. Recruitment learning in hierarchies is an inherently unstable process. This paper presents conditions on the parameters of a feedforward network that ensure stable recruitment hierarchies. The parameter analysis is conducted using a stochastic population approach to model a spiking neural network. The resulting network converges to activate a desired number of units at each stage of the hierarchy. The original recruitment method is modified first by increasing feedforward connection density to ensure sufficient activation, then by incorporating temporally distributed feedforward delays to separate inputs in time, and finally by limiting excess activation via lateral inhibition. The task of activating a desired number of units from a population is performed in a manner similar to a temporal k-winners-take-all network.
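
The recruitment step resembles a temporal k-winners-take-all selection, sketched below in NumPy: the units whose input crosses a firing threshold earliest are recruited, and later arrivals are suppressed, standing in for the lateral inhibition described above. All parameter values are made-up illustrations, not those derived in the paper's analysis.

```python
# Illustrative sketch of temporal k-winners-take-all recruitment.
import numpy as np

rng = np.random.default_rng(1)
n_units, k = 50, 10
threshold = 1.0

# Each unit integrates feedforward input at its own rate; distributed delays mean
# units cross the firing threshold at different times.
drive = rng.uniform(0.5, 2.0, size=n_units)    # input strength per unit
crossing_time = threshold / drive              # time at which each unit would fire

recruited = np.argsort(crossing_time)[:k]      # earliest k crossings win
print(f"recruited {len(recruited)} of {n_units} units:", np.sort(recruited))
```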


Subject(s)
Models, Neurological , Neural Networks, Computer , Signal Processing, Computer-Assisted , Synapses/physiology , Synaptic Transmission/physiology , Feedback, Physiological , Linear Models , Pattern Recognition, Automated , Stochastic Processes
5.
Neural Netw ; 16(5-6): 593-600, 2003.
Article in English | MEDLINE | ID: mdl-12850012

ABSTRACT

The temporal correlation hypothesis proposes using distributed synchrony for the binding of different stimulus features. However, synchronized spikes must travel over cortical circuits whose pathways vary in length, leading to mismatched arrival times. This raises the question of how initial stimulus-dependent synchrony might be preserved at a destination binding site. Earlier, we proposed constraints on tolerance and segregation parameters for a phase-coding approach within cortical circuits to address this question [Proceedings of the International Joint Conference on Neural Networks, Washington, DC, 2001]. The purpose of the present paper is twofold. First, we conduct simulation experiments to test the proposed constraints. Second, we explore the practicality of temporal binding to drive a process of long-term memory formation based on a recruitment learning method [Biol. Cybernet. 46 (1982) 27]. A network based on Valiant's neuroidal architecture [Circuits of the mind, 1994] is used to demonstrate the coalition between temporal binding and recruitment. Complementing similar approaches, we implement a continuous-time learning procedure that allows computation with spiking neurons. The viability of the proposed binding scheme is investigated through simulation studies that examine binding errors; in the simulation, binding errors cause the perception of illusory conjunctions among features belonging to separate objects. Our results indicate that when the tolerance and segregation parameters obey our proposed constraints, assemblies of correct bindings dominate over assemblies of spurious bindings under reasonable operating conditions.
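
The tolerance and segregation idea can be illustrated with a toy check on spike arrival times: a volley is treated as belonging to one object if its spread stays within the tolerance window, and two volleys are kept apart if their separation exceeds the segregation interval. The NumPy sketch below uses arbitrary window values and a hypothetical helper name, not the constraints derived in the paper.

```python
# Toy illustration of tolerance/segregation windows at a binding site.
import numpy as np

tolerance = 2.0      # ms: max spread tolerated within one object's spike volley
segregation = 5.0    # ms: min gap required between volleys of different objects

def bound_together(arrivals_a, arrivals_b):
    """Return True if the two spike volleys would be bound as a single object."""
    spread_a = np.ptp(arrivals_a)                          # within-volley spread
    spread_b = np.ptp(arrivals_b)
    gap = abs(np.mean(arrivals_a) - np.mean(arrivals_b))   # between-volley separation
    return spread_a <= tolerance and spread_b <= tolerance and gap < segregation

obj1 = np.array([10.0, 10.5, 11.2])       # synchronized features of object 1 (ms)
obj2 = np.array([20.1, 20.4, 21.0])       # object 2 arrives after the segregation gap
print(bound_together(obj1, obj1 + 0.3))   # True: same volley, bound as one object
print(bound_together(obj1, obj2))         # False: correctly segregated, no illusory conjunction
```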


Subject(s)
Learning , Models, Neurological , Learning/physiology