Results 1 - 20 of 423
1.
Rev. colomb. cir ; 39(5): 691-701, September 16, 2024. fig
Article in Spanish | LILACS | ID: biblio-1571841

ABSTRACT



Introduction. The comprehensive training of residents exceeds theoretical knowledge and operative technique. Faced with the complexity of modern surgery, its uncertainty and dynamism, it is necessary to redefine the understanding of surgical education and promote adaptive capabilities in future surgeons for the effective management of the environment. These aspects refer to adaptive expertise. Methods. This narrative review proposes a definition of surgical education with an emphasis on adaptive expertise, and an approach for its adoption in practice. Results. Based on the available literature, surgical education represents a dynamic process situated at the intersection of the complexity of surgical culture, workplace learning, and quality in health care, aimed at developing the cognitive, manual, and adaptive capacities that allow the future surgeon to provide high-value care in a collective work system while strengthening their professional identity. The resident's adaptive expertise is a fundamental capacity for maximizing performance in the face of these characteristics of surgical education. The available literature describes six strategies to strengthen this capacity. Conclusion. Adaptive expertise is an expected and necessary capacity in the surgical resident to deal with the complexity of surgical education. Practical strategies exist that can help strengthen it, and these should be evaluated in future studies.


Subject(s)
Humans , Education, Medical, Graduate , Deep Learning , Professional Competence , General Surgery , Vocational Education , Metacognition
2.
Curr Med Chem ; 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39092736

ABSTRACT

BACKGROUND: Computational assessment of the energetics of protein-ligand complexes is a challenge in the early stages of drug discovery. Previous comparative studies on computational methods to calculate binding affinity showed that targeted scoring functions outperform universal models. OBJECTIVE: The goal here is to review the application of a simple physics-based model to estimate binding affinity. The focus is on a mass-spring system developed to predict binding affinity against cyclin-dependent kinase. METHOD: Publications in PubMed were searched to find mass-spring models to predict binding affinity. Crystal structures of cyclin-dependent kinases found in the Protein Data Bank and two web servers to calculate affinity based on atomic coordinates were employed. RESULTS: One recent study showed how a simple physics-based scoring function (named Taba) could contribute to the analysis of protein-ligand interactions. The Taba methodology outperforms robust physics-based models implemented in docking programs such as AutoDock4 and Molegro Virtual Docker. Predictive metrics of 27 scoring functions and energy terms highlight the superior performance of the Taba scoring function for cyclin-dependent kinase. CONCLUSION: The recent progress of machine learning methods and the availability of these techniques through free libraries boosted the development of more accurate models to address protein-ligand interactions. Combining a naïve mass-spring system with machine-learning techniques generated a targeted scoring function with superior predictive performance to estimate pKi.
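As a rough illustration of the kind of harmonic pseudo-energy a mass-spring scoring function computes over protein-ligand contacts, here is a minimal sketch; the spring constant and equilibrium distance below are placeholder values, not the fitted Taba parameters:

```python
import math

def mass_spring_score(distances, eq_distance=3.0, k=1.0):
    """Hookean (mass-spring) pseudo-energy: sum of k*(d - d0)^2 over
    protein-ligand atom-pair distances in angstroms. Illustrative only;
    the published model fits pair-specific constants with ML."""
    return sum(k * (d - eq_distance) ** 2 for d in distances)

# Example: three contact distances around the 3.0 A equilibrium
print(round(mass_spring_score([2.8, 3.0, 3.5]), 2))  # 0.04 + 0.00 + 0.25 = 0.29
```

A learned variant would replace the single `k` and `eq_distance` with per-atom-pair parameters regressed against experimental affinities.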

3.
Parasit Vectors ; 17(1): 329, 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39095920

ABSTRACT

BACKGROUND: Identifying mosquito vectors is crucial for controlling diseases. Automated identification studies using convolutional neural networks (CNNs) have been conducted for some urban mosquito vectors but not yet for the sylvatic mosquito vectors that transmit yellow fever. We evaluated the ability of the AlexNet CNN to identify four mosquito species: Aedes serratus, Aedes scapularis, Haemagogus leucocelaenus, and Sabethes albiprivus, and whether there is variation in AlexNet's ability to classify mosquitoes based on pictures of four different body regions. METHODS: The specimens were photographed using a cell phone connected to a stereoscope. Photographs were taken of the full body, the pronotum, and the lateral view of the thorax, and were pre-processed to train the AlexNet algorithm. The evaluation was based on the confusion matrix, the accuracy (ten pseudo-replicates), and the confidence interval for each experiment. RESULTS: Our study found that AlexNet can identify mosquito pictures of the genera Aedes, Sabethes, and Haemagogus with over 90% accuracy. Furthermore, the algorithm's performance did not change according to the body region submitted. It is worth noting that the state of preservation of the mosquitoes, which were often damaged, may have affected the network's ability to differentiate between these species, and thus accuracy rates could have been even higher. CONCLUSIONS: Our results support the idea of applying CNNs for artificial intelligence (AI)-driven identification of mosquito vectors of tropical diseases. This approach can potentially be used in the surveillance of yellow fever vectors by health services and the general population.
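The evaluation above reports accuracy across ten pseudo-replicates with a confidence interval. A minimal sketch of that aggregation (the replicate values below are invented, and a normal approximation is assumed rather than whatever interval the authors used):

```python
import statistics

def accuracy_ci(replicate_accuracies, z=1.96):
    """Mean accuracy and a normal-approximation 95% CI across
    pseudo-replicate runs of the same experiment."""
    mean = statistics.mean(replicate_accuracies)
    sem = statistics.stdev(replicate_accuracies) / len(replicate_accuracies) ** 0.5
    return mean, (mean - z * sem, mean + z * sem)

# Ten hypothetical pseudo-replicate accuracies
accs = [0.91, 0.93, 0.90, 0.92, 0.94, 0.91, 0.93, 0.92, 0.90, 0.94]
mean, (lo, hi) = accuracy_ci(accs)
print(round(mean, 2))  # 0.92
```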


Subject(s)
Aedes , Mosquito Vectors , Neural Networks, Computer , Yellow Fever , Animals , Mosquito Vectors/classification , Yellow Fever/transmission , Aedes/classification , Aedes/physiology , Algorithms , Image Processing, Computer-Assisted/methods , Culicidae/classification , Artificial Intelligence
4.
J Environ Manage ; 367: 121996, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39088905

ABSTRACT

Monitoring forest canopies is vital for ecological studies, particularly for assessing epiphytes in rain forest ecosystems. Traditional methods for studying epiphytes, such as climbing trees and building observation structures, are labor- and cost-intensive and risky. Unmanned Aerial Vehicles (UAVs) have emerged as a valuable tool in this domain, offering botanists a safer and more cost-effective means to collect data. This study leverages AI-assisted techniques to enhance the identification and mapping of epiphytes using UAV imagery. The primary objective of this research is to evaluate the effectiveness of AI-assisted methods compared to traditional approaches in segmenting and identifying epiphytes from UAV images collected in a reserve forest in Costa Rica. Specifically, the study investigates whether Deep Learning (DL) models can accurately identify epiphytes against complex backgrounds, even with a limited dataset of varying image quality. Systematically, this study compares three traditional image segmentation methods (Auto Cluster, Watershed, and Level Set) with two DL-based segmentation networks: the UNet and the Vision Transformer-based TransUNet. Results obtained from this study indicate that traditional methods struggle with the complexity of vegetation backgrounds and variability in target characteristics. Epiphyte identification results were quantitatively evaluated using the Jaccard score. Among traditional methods, Watershed scored 0.10, Auto Cluster 0.13, and Level Set failed to identify the target. In contrast, AI-assisted models performed better, with UNet scoring 0.60 and TransUNet 0.65. These results highlight the potential of DL approaches to improve the accuracy and efficiency of epiphyte identification and mapping, advancing ecological research and conservation.
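The Jaccard score used to compare the methods above is intersection over union on the predicted and reference masks. A minimal sketch on flattened binary masks:

```python
def jaccard_score(pred, truth):
    """Jaccard index |A ∩ B| / |A ∪ B| over binary masks given as
    flattened sequences of 0/1 pixel labels."""
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return inter / union if union else 1.0  # two empty masks agree perfectly

pred  = [1, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 1, 1, 0]
print(jaccard_score(pred, truth))  # 2 / 4 = 0.5
```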


Subject(s)
Unmanned Aerial Devices , Costa Rica , Ecosystem , Environmental Monitoring/methods , Deep Learning , Artificial Intelligence , Forests , Plants , Rainforest , Trees
5.
Food Res Int ; 192: 114836, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39147524

ABSTRACT

The classification of carambola, also known as starfruit, according to quality parameters is usually conducted by trained human evaluators through visual inspections. This is a costly and subjective method that can generate high variability in results. As an alternative, computer vision systems (CVS) combined with deep learning techniques (DCVS) have been introduced in the industry as a powerful and innovative tool for the rapid and non-invasive classification of fruits. However, validating the learning capability and trustworthiness of a DL model, aka a black box, to obtain insights can be challenging. To reduce this gap, we propose an integrated eXplainable Artificial Intelligence (XAI) method for the classification of carambolas at different maturity stages. We compared two architectures, Residual Neural Networks (ResNet) and Vision Transformers (ViT), to identify the image regions each one highlights, complemented by a Random Forest (RF) model, with the aim of providing more detailed information at the feature level for classifying the maturity stage. Changes in fruit colour and physicochemical data throughout the maturity stages were analysed, and the influence of these parameters on the maturity stages was evaluated using Gradient-weighted Class Activation Mapping (Grad-CAM), attention maps, and RF feature importances. The proposed approach provides a visualization and description of the most important regions that led to the model decision, relating the models' saliency to the feature importances from RF. Our approach has promising potential for standardized and rapid carambola classification, achieving 91% accuracy with ResNet and 95% with ViT, with potential application to other fruits.
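Grad-CAM, mentioned above, weights each convolutional channel by the spatial mean of its gradient and takes the ReLU of the weighted sum. A framework-free sketch of that combination step on toy 2x2 feature maps (the activation and gradient values are invented):

```python
def grad_cam(activations, gradients):
    """Grad-CAM heatmap: channel weights are the spatial mean of the
    class-score gradients; the map is ReLU(sum_c w_c * A_c).
    Inputs are per-channel 2D lists of equal shape."""
    n_ch = len(activations)
    h, w = len(activations[0]), len(activations[0][0])
    weights = [sum(sum(row) for row in g) / (h * w) for g in gradients]
    return [[max(0.0, sum(weights[c] * activations[c][i][j] for c in range(n_ch)))
             for j in range(w)] for i in range(h)]

acts  = [[[1.0, 0.0], [0.0, 2.0]], [[0.0, 1.0], [1.0, 0.0]]]
grads = [[[0.4, 0.4], [0.4, 0.4]], [[-0.2, -0.2], [-0.2, -0.2]]]
cam = grad_cam(acts, grads)
print(cam)  # negative contributions are clipped to 0 by the ReLU
```

In practice the activations and gradients come from the last convolutional block of the trained network, and the heatmap is upsampled onto the input image.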


Subject(s)
Averrhoa , Fruit , Neural Networks, Computer , Fruit/growth & development , Fruit/classification , Averrhoa/chemistry , Deep Learning , Artificial Intelligence , Color
6.
Ann Hepatol ; 29(5): 101528, 2024.
Article in English | MEDLINE | ID: mdl-38971372

ABSTRACT

INTRODUCTION AND OBJECTIVES: Despite the huge clinical burden of MASLD, validated tools for early risk stratification are lacking, and heterogeneous disease expression and a highly variable rate of progression to clinical outcomes result in prognostic uncertainty. We aimed to investigate longitudinal electronic health record-based outcome prediction in MASLD using a state-of-the-art machine learning model. PATIENTS AND METHODS: A cohort of n = 940 patients with histologically-defined MASLD was used to develop a deep-learning model for all-cause mortality prediction. Patient timelines, spanning 12 years, were fully annotated with demographic/clinical characteristics, ICD-9 and -10 codes, blood test results, prescribing data, and secondary care activity. A Transformer neural network (TNN) was trained to output concomitant probabilities of 12-, 24-, and 36-month all-cause mortality. In-sample performance was assessed using 5-fold cross-validation. Out-of-sample performance was assessed in an independent set of n = 528 MASLD patients. RESULTS: In-sample model performance achieved an AUROC of 0.74-0.90 (95% CI: 0.72-0.94), sensitivity 64%-82%, specificity 75%-92%, and Positive Predictive Value (PPV) 94%-98%. Out-of-sample model validation had an AUROC of 0.70-0.86 (95% CI: 0.67-0.90), sensitivity 69%-70%, specificity 96%-97%, and PPV 75%-77%. Key predictive factors, identified using coefficients of determination, were age, presence of type 2 diabetes, and history of hospital admissions with length of stay >14 days. CONCLUSIONS: A TNN, applied to routinely collected longitudinal electronic health records, achieved good performance in prediction of 12-, 24-, and 36-month all-cause mortality in patients with MASLD. Extrapolation of our technique to population-level data will enable scalable and accurate risk stratification to identify people most likely to benefit from anticipatory health care and personalized interventions.
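The sensitivity, specificity, and PPV figures reported above all derive from the same confusion-matrix counts. A minimal sketch of those definitions (the counts below are invented for illustration, not taken from the study):

```python
def binary_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and positive predictive value from
    confusion-matrix counts of a binary classifier."""
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # precision
    }

m = binary_metrics(tp=82, fp=8, tn=92, fn=18)
print(m["sensitivity"], m["specificity"])  # 0.82 0.92
```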


Subject(s)
Electronic Health Records , Humans , Male , Female , Middle Aged , Risk Assessment , Aged , Prognosis , Cause of Death , Deep Learning , Risk Factors , Predictive Value of Tests , Non-alcoholic Fatty Liver Disease/mortality , Non-alcoholic Fatty Liver Disease/diagnosis , Adult , Neural Networks, Computer , Retrospective Studies
7.
BMC Bioinformatics ; 25(1): 231, 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38969970

ABSTRACT

PURPOSE: In this study, we present DeepVirusClassifier, a tool capable of accurately classifying Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) viral sequences among other subtypes of the Coronaviridae family. This classification is achieved through a deep neural network model that relies on convolutional neural networks (CNNs). Since viruses within the same family share similar genetic and structural characteristics, the classification process becomes more challenging, necessitating more robust models. With the rapid evolution of viral genomes and the increasing need for timely classification, we aimed to provide a robust and efficient tool that could increase the accuracy of viral identification and classification processes, contribute to advancing research in viral genomics, and assist in the surveillance of emerging viral strains. METHODS: Based on a one-dimensional deep CNN, the proposed tool is capable of training and testing on the Coronaviridae family, including SARS-CoV-2. Our model's performance was assessed using various metrics, including F1-score and AUROC. Additionally, artificial mutation tests were conducted to evaluate the model's generalization ability across sequence variations. We also used the BLAST algorithm and conducted comprehensive processing time analyses for comparison. RESULTS: DeepVirusClassifier demonstrated exceptional performance across several evaluation metrics in the training and testing phases, indicating its robust learning capacity. Notably, during testing on more than 10,000 viral sequences, the model exhibited more than 99% sensitivity for sequences with fewer than 2,000 mutations. The tool achieves superior accuracy and significantly reduced processing times compared to the Basic Local Alignment Search Tool algorithm, indicating that it has great potential to advance viral genomic research.
CONCLUSION: DeepVirusClassifier is a powerful tool for accurately classifying viral sequences, specifically focusing on SARS-CoV-2 and other subtypes within the Coronaviridae family. The superiority of our model becomes evident through rigorous evaluation and comparison with existing methods. Introducing artificial mutations into the sequences demonstrates the tool's ability to identify variations and significantly contributes to viral classification and genomic research. As viral surveillance becomes increasingly critical, our model holds promise in aiding rapid and accurate identification of emerging viral strains.
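A one-dimensional CNN over genomic sequences typically consumes a one-hot channel-per-base encoding. A minimal sketch of that preprocessing step (the exact encoding used by DeepVirusClassifier is not specified in the abstract, so this is an assumption):

```python
def one_hot_dna(seq, alphabet="ACGT"):
    """One-hot encode a nucleotide sequence into a (length x channels)
    layout suitable for a 1D CNN; unknown bases (e.g. N) map to all zeros."""
    index = {base: i for i, base in enumerate(alphabet)}
    encoded = [[0] * len(alphabet) for _ in seq]
    for pos, base in enumerate(seq.upper()):
        if base in index:
            encoded[pos][index[base]] = 1
    return encoded

print(one_hot_dna("ACGN"))
# [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 0]]
```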


Subject(s)
COVID-19 , Deep Learning , Genome, Viral , SARS-CoV-2 , SARS-CoV-2/genetics , SARS-CoV-2/classification , Genome, Viral/genetics , COVID-19/virology , Coronaviridae/genetics , Coronaviridae/classification , Humans , Neural Networks, Computer
8.
Neurosurg Rev ; 47(1): 300, 2024 Jun 29.
Article in English | MEDLINE | ID: mdl-38951288

ABSTRACT

The diagnosis of Moyamoya disease (MMD) relies heavily on imaging, which could benefit from standardized machine learning tools. This study aims to evaluate the diagnostic efficacy of deep learning (DL) algorithms for MMD by analyzing sensitivity, specificity, and the area under the curve (AUC) compared to expert consensus. We conducted a systematic search of PubMed, Embase, and Web of Science for articles published from inception to February 2024. Eligible studies were required to report diagnostic accuracy metrics such as sensitivity, specificity, and AUC, excluding those not in English or using traditional machine learning methods. Seven studies were included, comprising a sample of 4,416 patients, of whom 1,358 had MMD. The pooled sensitivity for common and random effects models was 0.89 (95% CI: 0.85 to 0.92) and 0.92 (95% CI: 0.85 to 0.96), respectively. The pooled specificity was 0.89 (95% CI: 0.86 to 0.91) in the common effects model and 0.91 (95% CI: 0.75 to 0.97) in the random effects model. Two studies reported the AUC alongside their confidence intervals. A meta-analysis synthesizing these findings aggregated a mean AUC of 0.94 (95% CI: 0.92 to 0.96) for common effects and 0.89 (95% CI: 0.76 to 1.02) for random effects models. Deep learning models significantly enhance the diagnosis of MMD by efficiently extracting and identifying complex image patterns with high sensitivity and specificity. Trial registration: CRD42024524998 https://www.crd.york.ac.uk/prospero/displayrecord.php?RecordID=524998.
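The pooled sensitivities above come from fixed- and random-effects meta-analysis. A simplified sketch of fixed-effect (inverse-variance) pooling of sensitivities on the logit scale; the per-study values and sample sizes below are invented, and this omits the random-effects heterogeneity term:

```python
import math

def pooled_logit(proportions, ns):
    """Fixed-effect pooling of proportions (e.g. sensitivities) on the
    logit scale, weighting each study by the inverse of the approximate
    variance of its logit, then back-transforming."""
    weights, logits = [], []
    for p, n in zip(proportions, ns):
        logits.append(math.log(p / (1 - p)))
        weights.append(n * p * (1 - p))  # 1 / var(logit) ~= n * p * (1 - p)
    pooled = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    return 1.0 / (1.0 + math.exp(-pooled))  # inverse logit

# Three hypothetical studies
print(round(pooled_logit([0.90, 0.88, 0.92], [200, 150, 400]), 3))
```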


Subject(s)
Deep Learning , Moyamoya Disease , Moyamoya Disease/diagnosis , Humans , Algorithms , Sensitivity and Specificity
9.
PeerJ ; 12: e17686, 2024.
Article in English | MEDLINE | ID: mdl-39006015

ABSTRACT

In the present investigation, we employ a novel and meticulously structured database assembled by experts, encompassing macrofungi field-collected in Brazil, featuring upwards of 13,894 photographs representing 505 distinct species. The purpose of utilizing this database is twofold: firstly, to furnish training and validation for convolutional neural networks (CNNs) with the capacity for autonomous identification of macrofungal species; secondly, to develop a sophisticated mobile application replete with an advanced user interface. This interface is specifically crafted to acquire images, and, utilizing the image recognition capabilities afforded by the trained CNN, proffer potential identifications for the macrofungal species depicted therein. Such technological advancements democratize access to the Brazilian Funga, thereby enhancing public engagement and knowledge dissemination, and also facilitating contributions from the populace to the expanding body of knowledge concerning the conservation of macrofungal species of Brazil.
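An identification app like the one described typically shows the user the top-k most probable species from the classifier's output. A sketch of that last step (the logits and species labels below are placeholders, not entries from the actual Brazilian Funga database):

```python
import math

def top_k_suggestions(logits, labels, k=3):
    """Softmax over classifier logits, then the k most probable labels,
    as an app might present candidate identifications to the user."""
    shifted = [x - max(logits) for x in logits]  # numerical stability
    exps = [math.exp(x) for x in shifted]
    total = sum(exps)
    probs = [e / total for e in exps]
    ranked = sorted(zip(labels, probs), key=lambda t: t[1], reverse=True)
    return ranked[:k]

labels = ["species_a", "species_b", "species_c", "species_d"]
best = top_k_suggestions([2.0, 0.5, 1.0, -1.0], labels, k=2)
print([name for name, _ in best])  # ['species_a', 'species_c']
```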


Subject(s)
Deep Learning , Fungi , Brazil , Fungi/classification , Fungi/isolation & purification , Biodiversity , Neural Networks, Computer , Databases, Factual
10.
Sensors (Basel) ; 24(14)2024 Jul 18.
Article in English | MEDLINE | ID: mdl-39066062

ABSTRACT

Marker-less hand-eye calibration permits the acquisition of an accurate transformation between an optical sensor and a robot in unstructured environments. Single monocular cameras, despite their low cost and modest computation requirements, present difficulties for this purpose due to their incomplete correspondence of projected coordinates. In this work, we introduce a hand-eye calibration procedure based on the rotation representations inferred by an augmented autoencoder neural network. Learning-based models that attempt to directly regress the spatial transform of objects such as the links of robotic manipulators perform poorly in the orientation domain, but this can be overcome through the analysis of the latent space vectors constructed in the autoencoding process. This technique is computationally inexpensive and can be run in real time in markedly varied lighting and occlusion conditions. To evaluate the procedure, we use a color-depth camera and perform a registration step between the predicted and the captured point clouds to measure translation and orientation errors and compare the results to a baseline based on traditional checkerboard markers.
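The registration step above measures translation and orientation errors between predicted and captured poses. The orientation error is commonly the geodesic angle between two rotation matrices, acos((trace(R_pred^T R_true) - 1) / 2); a minimal sketch of that computation (not the authors' code):

```python
import math

def rotation_angle_error(r_pred, r_true):
    """Geodesic angle in degrees between two 3x3 rotation matrices,
    i.e. the rotation angle of R_pred^T @ R_true."""
    # trace(R_pred^T @ R_true) without building the product matrix
    trace = sum(r_pred[k][i] * r_true[k][i] for i in range(3) for k in range(3))
    c = max(-1.0, min(1.0, (trace - 1.0) / 2.0))  # clamp against round-off
    return math.degrees(math.acos(c))

eye = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
rz90 = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]  # 90 deg about z
print(rotation_angle_error(eye, rz90))  # ~90.0
```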

11.
Bioengineering (Basel) ; 11(7)2024 Jul 02.
Article in English | MEDLINE | ID: mdl-39061753

ABSTRACT

Signal processing is a very useful field of study in the interpretation of signals in many everyday applications. In the case of applications with time-varying signals, one possibility is to consider them as graphs, so graph theory arises, which extends classical methods to the non-Euclidean domain. In addition, machine learning techniques have been widely used in pattern recognition activities in a wide variety of tasks, including health sciences. The objective of this work is to identify and analyze the papers in the literature that address the use of machine learning applied to graph signal processing in health sciences. A search was performed in four databases (Science Direct, IEEE Xplore, ACM, and MDPI), using search strings to identify papers that are in the scope of this review. Finally, 45 papers were included in the analysis, the first being published in 2015, which indicates an emerging area. Among the gaps found, we can mention the need for better clinical interpretability of the results obtained in the papers; that is, results and conclusions should not be restricted simply to performance metrics. In addition, a possible research direction is the use of new transforms. It is also important to make new public datasets available that can be used to train the models.
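Graph signal processing builds on the graph Laplacian L = D - A, whose eigenvectors define the graph Fourier transform that generalizes classical spectral methods. A minimal sketch of constructing L from an adjacency matrix:

```python
def graph_laplacian(adjacency):
    """Combinatorial graph Laplacian L = D - A from a symmetric adjacency
    matrix (list of lists). Its eigendecomposition yields the graph
    Fourier basis used in graph signal processing."""
    n = len(adjacency)
    return [[(sum(adjacency[i]) if i == j else 0) - adjacency[i][j]
             for j in range(n)] for i in range(n)]

# Path graph 1 - 2 - 3
a = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
print(graph_laplacian(a))  # [[1, -1, 0], [-1, 2, -1], [0, -1, 1]]
```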

12.
Radiol Bras ; 57: e20230096en, 2024.
Article in English | MEDLINE | ID: mdl-38993952

ABSTRACT

Objective: To develop a natural language processing application capable of automatically identifying benign gallbladder diseases that require surgery, from radiology reports. Materials and Methods: We developed a text classifier to classify reports as describing benign diseases of the gallbladder that do or do not require surgery. We randomly selected 1,200 reports describing the gallbladder from our database, including different modalities. Four radiologists classified the reports as describing benign disease that should or should not be treated surgically. Two deep learning architectures were trained for classification: a convolutional neural network (CNN) and a bidirectional long short-term memory (BiLSTM) network. In order to represent words in vector form, the models included a Word2Vec representation, with dimensions of 300 or 1,000. The models were trained and evaluated by dividing the dataset into training, validation, and test subsets (80/10/10). Results: The CNN and BiLSTM performed well in both dimensional spaces. For the 300- and 1,000-dimensional spaces, respectively, the F1-scores were 0.95945 and 0.95302 for the CNN model, compared with 0.96732 and 0.96732 for the BiLSTM model. Conclusion: Our models achieved high performance, regardless of the architecture and dimensional space employed.
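The 80/10/10 train/validation/test split described above can be sketched as a seeded shuffle followed by index slicing (the seed and item count here are illustrative, not the study's):

```python
import random

def split_dataset(items, seed=42):
    """Shuffle and split into 80/10/10 train/validation/test subsets.
    A fixed seed keeps the split reproducible across runs."""
    rng = random.Random(seed)
    items = items[:]          # avoid mutating the caller's list
    rng.shuffle(items)
    n = len(items)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

train, val, test = split_dataset(list(range(1200)))
print(len(train), len(val), len(test))  # 960 120 120
```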



13.
Brief Bioinform ; 25(Supplement_1)2024 Jul 23.
Article in English | MEDLINE | ID: mdl-39041915

ABSTRACT

This manuscript describes the development of a resources module that is part of a learning platform named 'NIGMS Sandbox for Cloud-based Learning' https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox at the beginning of this Supplement. This module delivers learning materials on implementing deep learning algorithms for biomedical image data in an interactive format that uses appropriate cloud resources for data access and analyses. Biomedical-related datasets are widely used in both research and clinical settings, but the ability for professionally trained clinicians and researchers to interpret datasets becomes difficult as the size and breadth of these datasets increases. Artificial intelligence, and specifically deep learning neural networks, have recently become an important tool in novel biomedical research. However, use is limited due to their computational requirements and confusion regarding different neural network architectures. The goal of this learning module is to introduce types of deep learning neural networks and cover practices that are commonly used in biomedical research. This module is subdivided into four submodules that cover classification, augmentation, segmentation and regression. Each complementary submodule was written on the Google Cloud Platform and contains detailed code and explanations, as well as quizzes and challenges to facilitate user training. Overall, the goal of this learning module is to enable users to identify and integrate the correct type of neural network with their data while highlighting the ease-of-use of cloud computing for implementing neural networks.


Subject(s)
Deep Learning , Neural Networks, Computer , Humans , Biomedical Research , Algorithms , Cloud Computing
14.
J Imaging ; 10(7)2024 Jul 03.
Article in English | MEDLINE | ID: mdl-39057732

ABSTRACT

Precise annotations for large medical image datasets can be time-consuming. Additionally, when dealing with volumetric regions of interest, it is typical to apply segmentation techniques on 2D slices, compromising important information for accurately segmenting 3D structures. This study presents a deep learning pipeline that simultaneously tackles both challenges. Firstly, to streamline the annotation process, we employ a semi-automatic segmentation approach using bounding boxes as masks, which is less time-consuming than pixel-level delineation. Subsequently, recursive self-training is utilized to enhance annotation quality. Finally, a 2.5D segmentation technique is adopted, wherein a slice of a volumetric image is segmented using a pseudo-RGB image. The pipeline was applied to segment the carotid artery tree in T1-weighted brain magnetic resonance images. Utilizing 42 volumetric non-contrast T1-weighted brain scans from four datasets, we delineated bounding boxes around the carotid arteries in the axial slices. Pseudo-RGB images were generated from these slices, and recursive segmentation was conducted using a Res-Unet-based neural network architecture. The model's performance was tested on a separate dataset, with ground truth annotations provided by a radiologist. After recursive training, we achieved an Intersection over Union (IoU) score of (0.68 ± 0.08) on the unseen dataset, demonstrating commendable qualitative results.
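The 2.5D trick described above stacks a slice with its two neighbors as the three channels of a pseudo-RGB image. A minimal sketch on a toy grayscale volume (plain nested lists standing in for image arrays):

```python
def pseudo_rgb(volume, z):
    """Build a pseudo-RGB image for slice z of a grayscale volume by
    stacking slices z-1, z, z+1 as the R, G, B channels.
    volume is a list of 2D slices; z must be an interior index."""
    r, g, b = volume[z - 1], volume[z], volume[z + 1]
    h, w = len(g), len(g[0])
    return [[(r[i][j], g[i][j], b[i][j]) for j in range(w)] for i in range(h)]

# Toy 3-slice, 2x2 volume: slice s holds values s*10 + column
vol = [[[s * 10 + i for i in range(2)] for _ in range(2)] for s in range(3)]
img = pseudo_rgb(vol, 1)
print(img[0][0])  # (0, 10, 20) - the same pixel across the three slices
```

Feeding such images to an off-the-shelf 2D network like the Res-Unet mentioned above gives it through-plane context without a full 3D model.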

15.
Int. j. morphol ; 42(3): 826-832, jun. 2024. ilus, tab
Article in English | LILACS | ID: biblio-1564601

ABSTRACT

SUMMARY: The study aims to demonstrate the success of deep learning methods in sex prediction using the hyoid bone. Neck Computed Tomography (CT) images of people aged 15-94 years were retrospectively reviewed. The neck CT images were cleaned using the RadiAnt DICOM Viewer (version 2023.1) program, leaving only the hyoid bone. A total of 7 images, in the anterior, posterior, superior, inferior, right, left, and right-anterior-upward directions, were obtained from each patient's isolated hyoid bone image. In total, 2,170 images were obtained from 310 male hyoid bones and 1,820 images from 260 female hyoid bones. The 3,990 images were expanded to 5,000 through data augmentation. The dataset was divided into 80% for training, 10% for testing, and 10% for validation. Three deep learning models were compared: DenseNet121, ResNet152, and VGG19. An accuracy rate of 87% was achieved with the ResNet152 model and 80.2% with the VGG19 model. The highest rate among the models was 89%, with DenseNet121. This model had a specificity of 0.87, a sensitivity of 0.90, and an F1 score of 0.89 in women, and a specificity of 0.90, a sensitivity of 0.87, and an F1 score of 0.88 in men. It was observed that sex could be predicted from the hyoid bone using the deep learning methods DenseNet121, ResNet152, and VGG19. Thus, a method that had not been tried on this bone before was used. This study also brings us one step closer to strengthening and perfecting the use of these technologies, which will reduce the subjectivity of the methods and support the expert in the decision-making process of sex prediction.
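The per-sex F1 scores above are the harmonic mean of precision and recall (sensitivity). A one-line sketch of the definition; the precision value in the example is an assumed figure, since the abstract reports sensitivity and F1 but not precision directly:

```python
def f1_score(precision, recall):
    """F1 = harmonic mean of precision (PPV) and recall (sensitivity)."""
    return 2 * precision * recall / (precision + recall)

# e.g. recall 0.90 with an assumed precision of 0.88 gives F1 ~= 0.89
print(round(f1_score(0.88, 0.90), 2))  # 0.89
```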




Subject(s)
Humans , Male , Female , Adolescent , Adult , Middle Aged , Aged , Aged, 80 and over , Young Adult , Tomography, X-Ray Computed , Sex Determination by Skeleton , Deep Learning , Hyoid Bone/diagnostic imaging , Predictive Value of Tests , Sensitivity and Specificity , Hyoid Bone/anatomy & histology
16.
Biomedicines ; 12(6)2024 Jun 13.
Article in English | MEDLINE | ID: mdl-38927516

ABSTRACT

This article addresses the semantic segmentation of laparoscopic surgery images, placing special emphasis on the segmentation of structures with a smaller number of observations. As a result of this study, adjustment parameters are proposed for deep neural network architectures, enabling a robust segmentation of all structures in the surgical scene. The U-Net architecture with five encoder-decoders (U-Net5ed), SegNet-VGG19, and DeepLabv3+ employing different backbones are implemented. Three main experiments are conducted, working with Rectified Linear Unit (ReLU), Gaussian Error Linear Unit (GELU), and Swish activation functions. The applied loss functions include Cross Entropy (CE), Focal Loss (FL), Tversky Loss (TL), Dice Loss (DiL), Cross Entropy Dice Loss (CEDL), and Cross Entropy Tversky Loss (CETL). The performance of Stochastic Gradient Descent with momentum (SGDM) and Adaptive Moment Estimation (Adam) optimizers is compared. It is qualitatively and quantitatively confirmed that the DeepLabv3+ and U-Net5ed architectures yield the best results. The DeepLabv3+ architecture with the ResNet-50 backbone, Swish activation function, and CETL loss function reports a Mean Accuracy (MAcc) of 0.976 and a Mean Intersection over Union (MIoU) of 0.977. For the semantic segmentation of structures with a smaller number of observations, such as the hepatic vein, cystic duct, liver ligament, and blood, the obtained results are very competitive and promising compared with the consulted literature. The proposed selected parameters were validated in the YOLOv9 architecture, which showed an improvement in semantic segmentation compared to the results obtained with the original architecture.
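The compound losses listed above combine a pixel-wise term with a region-overlap term. A minimal sketch of the Cross Entropy Tversky Loss (CETL) on per-pixel foreground probabilities follows; the alpha, beta, and weighting values are illustrative assumptions, not the paper's settings, and real pipelines compute this over tensors rather than Python lists.

```python
import math

def tversky_loss(probs, targets, alpha=0.5, beta=0.5, eps=1e-7):
    # Soft Tversky index: TP / (TP + alpha*FP + beta*FN); alpha/beta trade off
    # false positives against false negatives (alpha = beta = 0.5 is the Dice case).
    tp = sum(p * t for p, t in zip(probs, targets))
    fp = sum(p * (1 - t) for p, t in zip(probs, targets))
    fn = sum((1 - p) * t for p, t in zip(probs, targets))
    return 1.0 - tp / (tp + alpha * fp + beta * fn + eps)

def cross_entropy_loss(probs, targets, eps=1e-7):
    # Mean binary cross entropy over pixels.
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for p, t in zip(probs, targets)) / len(probs)

def cetl(probs, targets, weight=0.5):
    # CETL: weighted sum of the pixel-wise and region-overlap terms.
    return (weight * cross_entropy_loss(probs, targets)
            + (1 - weight) * tversky_loss(probs, targets))
```

The region term makes the loss less dominated by the background class, which is one motivation for compound losses when segmenting rarely observed structures.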

17.
J Oral Pathol Med ; 53(7): 444-450, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38831737

ABSTRACT

BACKGROUND: Neural tumors are difficult to distinguish based solely on cellularity and often require immunohistochemical staining to aid in identifying the cell lineage. This article investigates the potential of a Convolutional Neural Network for the histopathological classification of the three most prevalent benign neural tumor types: neurofibroma, perineurioma, and schwannoma. METHODS: A model was developed, trained, and evaluated for classification using the ResNet-50 architecture, with a database of 30 whole-slide images stained with hematoxylin and eosin (106,782 patches were generated from them and divided among the training, validation, and testing subsets, with strategies to avoid data leakage). RESULTS: The model achieved an accuracy of 70% (64% normalized) and showed satisfactory results for differentiating two of the three classes, reaching true-positive rates of approximately 97% and 77% for the neurofibroma and schwannoma classes, respectively, but only 7% for the perineurioma class. The AUROC was 0.83 for the neurofibroma and schwannoma classes and 0.74 for perineurioma. However, the specificity rate for the perineurioma class was higher (83%) than for the other two classes (neurofibroma with 61%, and schwannoma with 60%). CONCLUSION: This investigation demonstrated significant potential for proficient performance, with a limitation regarding the perineurioma class (the limited feature variability observed contributed to a lower performance).
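A common strategy for the leakage avoidance mentioned in the methods is to assign whole slides to subsets before patch extraction, so that patches from one slide never appear in more than one subset. A minimal sketch follows; the 80/10/10 proportions and fixed seed are illustrative assumptions, not the paper's protocol.

```python
import random

def split_slides(slide_ids, train=0.8, val=0.1, seed=0):
    # Split at the slide level: all patches later cut from a given slide
    # inherit that slide's subset, preventing train/test leakage.
    ids = list(slide_ids)
    random.Random(seed).shuffle(ids)
    n = len(ids)
    n_train = int(n * train)
    n_val = int(n * val)
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]
```

Splitting at the patch level instead would let near-identical neighboring patches from one slide land in both training and testing, inflating the measured accuracy.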


Subject(s)
Feasibility Studies , Mouth Neoplasms , Nerve Sheath Neoplasms , Neural Networks, Computer , Neurilemmoma , Neurofibroma , Humans , Neurofibroma/pathology , Neurilemmoma/pathology , Nerve Sheath Neoplasms/pathology , Mouth Neoplasms/pathology , Diagnosis, Differential
18.
Brief Bioinform ; 25(4)2024 May 23.
Article in English | MEDLINE | ID: mdl-38856172

ABSTRACT

With their diverse biological activities, peptides are promising candidates for therapeutic applications, showing antimicrobial, antitumour and hormonal signalling capabilities. Despite their advantages, therapeutic peptides face challenges such as short half-life, limited oral bioavailability and susceptibility to plasma degradation. The rise of computational tools and artificial intelligence (AI) in peptide research has spurred the development of advanced methodologies and databases that are pivotal in the exploration of these complex macromolecules. This perspective delves into integrating AI in peptide development, encompassing classifier methods, predictive systems and the avant-garde design facilitated by deep-generative models like generative adversarial networks and variational autoencoders. There are still challenges, such as the need for processing optimization and careful validation of predictive models. This work outlines traditional strategies for machine learning model construction and training techniques and proposes a comprehensive AI-assisted peptide design and validation pipeline. The evolving landscape of peptide design using AI is emphasized, showcasing the practicality of these methods in expediting the development and discovery of novel peptides within the context of peptide-based drug discovery.
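As a concrete example of the featurization step on which classifier methods like those surveyed typically rest, the sketch below one-hot encodes a peptide over the 20 standard amino acids; this is a generic illustration of a common encoding, not a method taken from the review.

```python
# The 20 standard amino acids, in a fixed alphabetical one-letter order.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def one_hot(seq):
    """Encode a peptide as a list of 20-dimensional one-hot vectors, one per residue."""
    idx = {aa: i for i, aa in enumerate(AMINO_ACIDS)}
    return [[1 if idx[aa] == j else 0 for j in range(20)] for aa in seq]
```

The resulting length-by-20 matrix can be flattened for classical classifiers or fed position-wise into the neural and generative models (GANs, variational autoencoders) discussed above.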


Subject(s)
Artificial Intelligence , Drug Discovery , Peptides , Peptides/chemistry , Peptides/therapeutic use , Peptides/pharmacology , Drug Discovery/methods , Humans , Drug Design , Machine Learning , Computational Biology/methods
19.
J Lipid Atheroscler ; 13(2): 111-121, 2024 May.
Article in English | MEDLINE | ID: mdl-38826186

ABSTRACT

The development of advanced technologies in artificial intelligence (AI) has expanded its applications across various fields. Machine learning (ML), a subcategory of AI, enables computers to recognize patterns within extensive datasets. Furthermore, deep learning, a specialized form of ML, processes inputs through neural network architectures inspired by biological processes. The field of clinical lipidology has experienced significant growth over the past few years, and recently, it has begun to intersect with AI. Consequently, the purpose of this narrative review is to examine the applications of AI in clinical lipidology. This review evaluates various publications concerning the diagnosis of familial hypercholesterolemia, estimation of low-density lipoprotein cholesterol (LDL-C) levels, prediction of lipid goal attainment, challenges associated with statin use, and the influence of cardiometabolic and dietary factors on the discordance between apolipoprotein B and LDL-C. Given the concerns surrounding AI techniques, such as ethical dilemmas, opacity, limited reproducibility, and methodological constraints, it is prudent to establish a framework that enables the medical community to accurately interpret and utilize these emerging technological tools.

20.
Med Biol Eng Comput ; 2024 Jun 07.
Article in English | MEDLINE | ID: mdl-38848031

ABSTRACT

Even though artificial intelligence and machine learning have demonstrated remarkable performance in medical image computing, their accountability and transparency must be improved to transfer this success into clinical practice. The reliability of machine learning decisions must be explained and interpreted, especially when supporting medical diagnosis. For this task, the black-box nature of deep learning techniques must somehow be opened up to clarify their promising results. Hence, we aim to investigate the impact of the ResNet-50 deep convolutional design for Barrett's esophagus and adenocarcinoma classification. For such a task, and aiming at proposing a two-step learning technique, the output of each convolutional layer that composes the ResNet-50 architecture was trained and classified to identify the layers that provide the most impact in the architecture. We showed that local information and high-dimensional features are essential to improve the classification for our task. Besides, we observed a significant improvement when the most discriminative layers were given more weight in the training and classification of ResNet-50 for Barrett's esophagus and adenocarcinoma classification, demonstrating that both human knowledge and computational processing may influence the correct learning of such a problem.
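The two-step idea described above, training a classifier on each convolutional layer's output and then emphasizing the most discriminative layers, can be sketched as a simple ranking step; the layer names and accuracy values below are hypothetical, and the real pipeline would obtain them by training and validating a classifier per layer.

```python
def select_layers(layer_accuracies, top_k=3):
    # Rank layers by the validation accuracy of a classifier trained on each
    # layer's output, keeping the most discriminative ones for the second step.
    ranked = sorted(layer_accuracies.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, _ in ranked[:top_k]]
```

For example, with hypothetical per-layer accuracies `{"conv2_x": 0.71, "conv3_x": 0.78, "conv4_x": 0.85, "conv5_x": 0.83}`, the two most discriminative layers would be `conv4_x` and `conv5_x`.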
