Results 1 - 5 of 5
1.
Biomedicines; 12(6), 2024 Jun 13.
Article in English | MEDLINE | ID: mdl-38927516

ABSTRACT

This article addresses the semantic segmentation of laparoscopic surgery images, with special emphasis on structures that have few observations. As a result of this study, adjustment parameters are proposed for deep neural network architectures that enable robust segmentation of all structures in the surgical scene. The U-Net architecture with five encoder-decoders (U-Net5ed), SegNet-VGG19, and DeepLabv3+ with different backbones are implemented. Three main experiments are conducted, working with the Rectified Linear Unit (ReLU), Gaussian Error Linear Unit (GELU), and Swish activation functions. The applied loss functions include Cross Entropy (CE), Focal Loss (FL), Tversky Loss (TL), Dice Loss (DiL), Cross Entropy Dice Loss (CEDL), and Cross Entropy Tversky Loss (CETL). The performance of the Stochastic Gradient Descent with momentum (SGDM) and Adaptive Moment Estimation (Adam) optimizers is also compared. It is confirmed qualitatively and quantitatively that the DeepLabv3+ and U-Net5ed architectures yield the best results. The DeepLabv3+ architecture with a ResNet-50 backbone, the Swish activation function, and the CETL loss function reports a Mean Accuracy (MAcc) of 0.976 and a Mean Intersection over Union (MIoU) of 0.977. The segmentation of structures with few observations, such as the hepatic vein, cystic duct, liver ligament, and blood, shows that the results are very competitive and promising compared with the consulted literature. The selected parameters were also validated in the YOLOv9 architecture, which showed improved semantic segmentation compared with the results obtained with the original architecture.
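
The combined Cross Entropy Tversky Loss (CETL) that performed best above can be sketched in a few lines. The following PyTorch snippet is a minimal illustration, not the authors' implementation: the alpha/beta values and the equal weighting of the two terms are assumptions.

```python
# A minimal PyTorch sketch of a combined Cross Entropy Tversky Loss (CETL).
# The alpha/beta values and the equal weighting of the two terms are
# assumptions, not the settings reported in the article.
import torch
import torch.nn.functional as F

def cetl(logits, target, alpha=0.7, beta=0.3, smooth=1e-6):
    """logits: (N, C, H, W) raw scores; target: (N, H, W) integer class labels."""
    ce = F.cross_entropy(logits, target)            # pixel-wise cross entropy term
    probs = torch.softmax(logits, dim=1)            # per-class probabilities
    onehot = F.one_hot(target, probs.shape[1]).permute(0, 3, 1, 2).float()
    tp = (probs * onehot).sum(dim=(0, 2, 3))        # soft true positives per class
    fp = (probs * (1 - onehot)).sum(dim=(0, 2, 3))  # soft false positives
    fn = ((1 - probs) * onehot).sum(dim=(0, 2, 3))  # soft false negatives
    tversky = (tp + smooth) / (tp + alpha * fp + beta * fn + smooth)
    return ce + (1 - tversky).mean()                # CE term + Tversky term
```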

2.
Physiol Meas; 44(3), 2023 Mar 10.
Article in English | MEDLINE | ID: mdl-36896841

ABSTRACT

Objective. Automatic detection of electrocardiogram (ECG) quality is fundamental to minimizing the costs and risks of delayed diagnosis due to low ECG quality. Most algorithms for assessing ECG quality rely on non-intuitive parameters and were developed on data that are not representative of a real-world scenario, both in the share of pathological ECGs and in an overrepresentation of low-quality ECGs. We therefore introduce an algorithm for assessing 12-lead ECG quality, the Noise Automatic Classification Algorithm (NACA), developed within the Telehealth Network of Minas Gerais (TNMG). Approach. NACA estimates a signal-to-noise ratio (SNR) for each ECG lead, where the 'signal' is an estimated heartbeat template and the 'noise' is the discrepancy between the template and the ECG heartbeat. Clinically inspired rules based on the SNR then classify the ECG as acceptable or unacceptable. NACA was compared with the Quality Measurement Algorithm (QMA), the winner of the Computing in Cardiology Challenge 2011 (ChallengeCinC), using five metrics: sensitivity (Se), specificity (Sp), positive predictive value (PPV), F2, and the cost reduction resulting from adopting the algorithm. Two datasets were used for validation: TestTNMG, consisting of 34,310 ECGs received by TNMG (1% unacceptable and 50% pathological), and ChallengeCinC, consisting of 1,000 ECGs (23% unacceptable, higher than in a real-world scenario). Main results. Both algorithms reached similar performance on ChallengeCinC, but NACA performed considerably better than QMA on TestTNMG (Se = 0.89 versus 0.21; Sp = 0.99 versus 0.98; PPV = 0.59 versus 0.08; F2 = 0.76 versus 0.16; and cost reduction 2.3 ± 1.8% versus 0.3 ± 0.3%, respectively). Significance. Implementing NACA in a telecardiology service yields evident health and financial benefits for patients and the healthcare system.


Subject(s)
Signal Processing, Computer-Assisted; Telemedicine; Humans; Electrocardiography/methods; Heart Rate; Algorithms
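
The template-based SNR at the core of NACA can be illustrated with a short sketch. This is a minimal reading of the description above, under assumptions: beats are already detected and aligned into equal-length windows, the template is the median beat, and the 10 dB acceptance threshold is illustrative rather than the published clinically inspired rule.

```python
# A minimal sketch of NACA's template-based SNR idea. Beat segmentation and
# alignment are assumed done; the 10 dB threshold is illustrative only.
import numpy as np

def lead_snr_db(beats: np.ndarray) -> float:
    """beats: (n_beats, beat_len) array of aligned heartbeats from one lead."""
    template = np.median(beats, axis=0)   # estimated heartbeat template ('signal')
    residual = beats - template           # discrepancy from template ('noise')
    p_signal = np.mean(template ** 2)
    p_noise = np.mean(residual ** 2) + 1e-12
    return 10.0 * np.log10(p_signal / p_noise)

def ecg_acceptable(leads: list[np.ndarray], snr_min_db: float = 10.0) -> bool:
    # Accept the 12-lead ECG only if every lead's SNR clears the threshold.
    return all(lead_snr_db(beats) >= snr_min_db for beats in leads)
```
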
3.
Brief Bioinform; 20(5): 1607-1620, 2019 Sep 27.
Article in English | MEDLINE | ID: mdl-29800232

ABSTRACT

MOTIVATION: The importance of microRNAs (miRNAs) is now widely recognized, because these short RNA segments play roles in almost all biological processes. The computational prediction of novel miRNAs involves training a classifier to identify the sequences with the highest chance of being miRNA precursors (pre-miRNAs). The main difficulty of this task is that well-known pre-miRNAs are usually few in comparison with the hundreds of thousands of candidate sequences in a genome, which results in high class imbalance. This imbalance strongly influences most standard classifiers; if it is not properly addressed in the model and the experiments, not only can the reported performance be completely unrealistic, but the classifier will also fail to work properly for pre-miRNA prediction. A further issue is that most of the machine learning (ML) approaches already in use are supervised and therefore require both positive and negative examples. The selection of positive examples is straightforward (well-known pre-miRNAs), but it is difficult to build a representative set of negative examples, because these should be sequences with a hairpin structure that do not contain a pre-miRNA. RESULTS: This review provides a comprehensive study and comparative assessment of methods from these two ML approaches for the prediction of novel pre-miRNAs: supervised and unsupervised training. We present and analyze the ML proposals that have appeared in the literature over the past 10 years, compared on several prediction tasks involving two model genomes and increasing imbalance levels. Rather than merely revisiting published software tools, this work reviews existing ML approaches for pre-miRNA prediction and compares the classifiers fairly, with the same features and data sets. The results and discussion can help the community select the most adequate bioinformatics approach for the prediction task at hand. The comparative results suggest that, from low to mid imbalance levels between classes, supervised methods can be the best; at very high imbalance levels, closer to real-case scenarios, models including unsupervised and deep learning can provide better performance.


Subject(s)
Machine Learning; MicroRNAs/physiology; Animals; Computational Biology; Humans; MicroRNAs/chemistry; MicroRNAs/genetics
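
The imbalance problem described in the motivation can be made concrete with a small numeric example (illustrative numbers, not results from the review): with sensitivity and specificity held fixed, the precision of a pre-miRNA classifier collapses as negatives come to dominate.

```python
# Illustration (not from the review) of why reported performance becomes
# unrealistic under high class imbalance: PPV by Bayes' rule.
def ppv(se: float, sp: float, prevalence: float) -> float:
    tp = se * prevalence              # expected true-positive rate in the data
    fp = (1 - sp) * (1 - prevalence)  # expected false-positive rate in the data
    return tp / (tp + fp)

for prev in (0.5, 0.01, 0.001, 0.00001):  # from balanced to genome-wide rarity
    print(f"prevalence={prev:g}  PPV={ppv(0.9, 0.99, prev):.4f}")
# At prevalence 0.5 the PPV is ~0.989, but at 1e-05 it falls to ~0.0009:
# almost every predicted candidate is a false positive despite 99% specificity.
```
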
4.
Methods Mol Biol; 1654: 29-37, 2017.
Article in English | MEDLINE | ID: mdl-28986781

ABSTRACT

The computational prediction of novel microRNAs (miRNAs) within a full genome involves identifying the sequences with the highest chance of being bona fide miRNA precursors (pre-miRNAs). These sequences are usually called miRNA candidates. Well-known pre-miRNAs are few in comparison with the hundreds of thousands of potential candidates that must be analyzed. Although the selection of positively labeled examples is straightforward, it is very difficult to build a set of negative examples and thereby obtain a good training set for a supervised method. In this chapter we describe an approach to this problem based on the unsupervised clustering of unlabeled sequences from genome-wide data together with the well-known miRNA precursors of the organism under study. The resulting protocol allows quick identification of the best miRNA candidates as those sequences that cluster together with known precursors.


Subject(s)
Computational Biology/methods; MicroRNAs/genetics; RNA, Long Noncoding/genetics; Animals; Humans
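
The clustering step described above can be sketched as follows. This is a minimal illustration under assumptions not taken from the chapter: hairpin sequences are already encoded as numeric feature vectors, k-means stands in for the protocol's actual clustering method, and k is chosen ad hoc.

```python
# A sketch of candidate selection by clustering unlabeled sequences together
# with known pre-miRNAs; k-means and k=100 are assumptions for illustration.
import numpy as np
from sklearn.cluster import KMeans

def select_candidates(unlabeled: np.ndarray, known: np.ndarray, k: int = 100):
    """Return indices of unlabeled sequences that cluster with known pre-miRNAs."""
    X = np.vstack([known, unlabeled])              # cluster both sets together
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    positive_clusters = set(labels[: len(known)])  # clusters holding known precursors
    cand_labels = labels[len(known):]
    return np.flatnonzero(np.isin(cand_labels, list(positive_clusters)))
```
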
5.
F1000Res; 5, 2016.
Article in English | MEDLINE | ID: mdl-28003875

ABSTRACT

Many bioinformatics algorithms can be understood as binary classifiers and are usually compared using the area under the receiver operating characteristic (ROC) curve. Choosing the best threshold for practical use, however, is a complex task, owing to uncertain and context-dependent skews in the abundance of positives in nature and in the yields/costs of correct/incorrect classification. We argue that treating a classifier as a player in a zero-sum game allows us to use the minimax principle from game theory to determine the optimal operating point. The proposed classifier threshold corresponds to the intersection of the ROC curve with the descending diagonal in ROC space and yields a minimax accuracy of 1 - FPR. Our proposal can be readily implemented in practice, and it reveals that the empirical condition for threshold estimation, 'specificity equals sensitivity', maximizes robustness against uncertainties in the abundance of positives in nature and in classification costs.
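
The proposed operating point is straightforward to compute from classifier scores: take the threshold where the ROC curve crosses the descending diagonal, i.e. where sensitivity equals specificity. The sketch below uses scikit-learn; the toy Gaussian scores and function name are illustrative.

```python
# A sketch of the minimax operating point: the ROC point where TPR = 1 - FPR,
# whose accuracy is 1 - FPR. Toy data; names are illustrative.
import numpy as np
from sklearn.metrics import roc_curve

def minimax_threshold(y_true: np.ndarray, scores: np.ndarray):
    fpr, tpr, thresholds = roc_curve(y_true, scores)
    i = np.argmin(np.abs(tpr - (1.0 - fpr)))  # closest to the descending diagonal
    return thresholds[i], 1.0 - fpr[i]        # threshold and its minimax accuracy

rng = np.random.default_rng(0)
y = np.r_[np.zeros(500), np.ones(500)]                     # labels
s = np.r_[rng.normal(0, 1, 500), rng.normal(1.5, 1, 500)]  # toy scores
thr, minimax_acc = minimax_threshold(y, s)
print(f"threshold={thr:.3f}, minimax accuracy={minimax_acc:.3f}")
```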
