Results 1 - 20 of 20,054
1.
J Orthop Surg Res ; 19(1): 324, 2024 May 31.
Article in English | MEDLINE | ID: mdl-38822361

ABSTRACT

BACKGROUND: The patellar height index is important; however, the measurement procedures are time-consuming and prone to significant variability among and within observers. We developed a deep learning-based automatic measurement system for the patellar height and evaluated its performance and generalization ability to accurately measure the patellar height index. METHODS: We developed a dataset containing 3,923 lateral knee X-ray images. Notably, all X-ray images were from three tertiary level A hospitals, and 2,341 cases were included in the analysis after screening. By manually labeling key points, the model was trained using the residual network (ResNet) and high-resolution network (HRNet) human pose estimation architectures to measure the patellar height index. Various data augmentation techniques were used to enhance the robustness of the model. The root mean square error (RMSE), object keypoint similarity (OKS), and percentage of correct keypoints (PCK) metrics were used to evaluate the training results. In addition, we used the intraclass correlation coefficient (ICC) to assess the consistency between manual and automatic measurements. RESULTS: A comparison of different deep learning models showed that the HRNet model performed excellently in the keypoint detection task. In particular, the pose_hrnet_w48 model was outstanding on the RMSE, OKS, and PCK metrics, and the Insall-Salvati index (ISI) automatically calculated by this model was highly consistent with the manual measurements (ICC, 0.809-0.885). This evidence demonstrates the accuracy and generalizability of the deep learning system in practical applications. CONCLUSION: We successfully developed a deep learning-based automatic measurement system for the patellar height. The system demonstrated accuracy comparable to that of experienced radiologists and strong generalizability across different datasets.
It provides an essential tool for the early assessment and treatment of knee diseases and for monitoring and rehabilitation after knee surgery. Due to potential bias in the selection of datasets in this study, different datasets should be examined in the future to optimize the model so that it can be reliably applied in clinical practice. TRIAL REGISTRATION: The study was registered at the Medical Research Registration and Filing Information System (medicalresearch.org.cn) MR-61-23-013065. Date of registration: May 04, 2023 (retrospectively registered).
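The PCK metric used above to evaluate keypoint detection can be sketched in a few lines; the function name and the pixel threshold in the example are illustrative, not the authors' implementation.

```python
def pck(pred, gt, threshold):
    """Percentage of Correct Keypoints: the fraction of predicted keypoints
    whose Euclidean distance to the ground-truth keypoint is within a
    threshold (often normalized by a reference length such as torso size)."""
    correct = 0
    for (px, py), (gx, gy) in zip(pred, gt):
        if ((px - gx) ** 2 + (py - gy) ** 2) ** 0.5 <= threshold:
            correct += 1
    return correct / len(gt)
```

For example, with a 5-pixel tolerance, a prediction 3-4 pixels off still counts as correct while one 10 pixels away does not.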


Subject(s)
Deep Learning , Patella , Humans , Patella/diagnostic imaging , Patella/anatomy & histology , Retrospective Studies , Male , Female , Automation , Radiography/methods , Middle Aged , Adult
2.
Cancer Discov ; 14(6): 906-908, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38826098

ABSTRACT

SUMMARY: Classifying tumor types using machine learning approaches is not always trivial, particularly for challenging cases such as cancers of unknown primary. In this issue of Cancer Discovery, Darmofal and colleagues describe a new tool that uses information from a clinical sequencing panel to diagnose tumor type, and show that the model is particularly robust. See related article by Darmofal et al., p. 1064 (1).


Subject(s)
Deep Learning , Neoplasms , Humans , Neoplasms/genetics , Neoplasms/diagnosis
3.
Transl Vis Sci Technol ; 13(6): 1, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38829624

ABSTRACT

Purpose: Deep learning architectures can automatically learn complex features and patterns associated with glaucomatous optic neuropathy (GON). However, developing robust algorithms requires a large number of data sets. We sought to train an adversarial model for generating high-quality optic disc images from a large, diverse data set and then assessed the performance of models on generated synthetic images for detecting GON. Methods: A total of 17,060 (6874 glaucomatous and 10,186 healthy) fundus images were used to train deep convolutional generative adversarial networks (DCGANs) for synthesizing disc images for both classes. We then trained two models to detect GON, one solely on these synthetic images and another on a mixed data set (synthetic and real clinical images). Both the models were externally validated on a data set not used for training. The multiple classification metrics were evaluated with 95% confidence intervals. Models' decision-making processes were assessed using gradient-weighted class activation mapping (Grad-CAM) techniques. Results: Following receiver operating characteristic curve analysis, an optimal cup-to-disc ratio threshold for detecting GON from the training data was found to be 0.619. DCGANs generated high-quality synthetic disc images for healthy and glaucomatous eyes. When trained on a mixed data set, the model's area under the receiver operating characteristic curve attained 99.85% on internal validation and 86.45% on external validation. Grad-CAM saliency maps were primarily centered on the optic nerve head, indicating a more precise and clinically relevant attention area of the fundus image. Conclusions: Although our model performed well on synthetic data, training on a mixed data set demonstrated better performance and generalization. Integrating synthetic and real clinical images can optimize the performance of a deep learning model in glaucoma detection. 
Translational Relevance: Integrating DCGAN-generated synthetic images with real-world clinical data can improve the performance and generalizability of deep learning models for glaucoma detection in clinical practice.
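An "optimal" ROC threshold such as the cup-to-disc ratio cut-off of 0.619 reported above is commonly found by maximizing Youden's J statistic (sensitivity + specificity - 1); the sketch below assumes that criterion, since the abstract does not state which one was used.

```python
def optimal_threshold(scores, labels):
    """Pick the score cut-off maximizing Youden's J over all candidate
    thresholds, a standard way to choose an ROC operating point."""
    positives = [s for s, y in zip(scores, labels) if y == 1]
    negatives = [s for s, y in zip(scores, labels) if y == 0]
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):          # candidate thresholds
        sens = sum(s >= t for s in positives) / len(positives)
        spec = sum(s < t for s in negatives) / len(negatives)
        j = sens + spec - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t
```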


Subject(s)
Deep Learning , Glaucoma , Optic Disk , Optic Nerve Diseases , ROC Curve , Humans , Optic Disk/diagnostic imaging , Optic Disk/pathology , Optic Nerve Diseases/diagnostic imaging , Optic Nerve Diseases/diagnosis , Glaucoma/diagnostic imaging , Glaucoma/diagnosis , Female , Male , Middle Aged , Algorithms
4.
Sci Rep ; 14(1): 12699, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38830932

ABSTRACT

Medical image segmentation has made a significant contribution towards delivering affordable healthcare by facilitating the automatic identification of anatomical structures and other regions of interest. Although convolutional neural networks have become prominent in the field of medical image segmentation, they suffer from certain limitations. In this study, we present a reliable framework for producing performant outcomes for the segmentation of pathological structures in 2D medical images. Our framework consists of a novel deep learning architecture, called deep multi-level attention dilated residual neural network (MADR-Net), designed to improve the performance of medical image segmentation. MADR-Net uses a U-Net encoder/decoder backbone in combination with multi-level residual blocks and atrous spatial pyramid pooling (ASPP). To improve the segmentation results, channel-spatial attention blocks were added to the skip connections to capture both global and local features, and the bottleneck layer was replaced with an ASPP block. Furthermore, we introduce a hybrid loss function that has an excellent convergence property and enhances the performance of the medical image segmentation task. We extensively validated the proposed MADR-Net on four typical yet challenging medical image segmentation tasks: (1) Left ventricle, left atrium, and myocardial wall segmentation from echocardiogram images in the CAMUS dataset, (2) Skin cancer segmentation from dermoscopy images in the ISIC 2017 dataset, (3) Segmentation of electron microscopy images in the FIB-SEM dataset, and (4) Fluid-attenuated inversion recovery abnormality segmentation from MR images in the LGG segmentation dataset. The proposed algorithm yielded significant results when compared to state-of-the-art architectures such as U-Net, Residual U-Net, and Attention U-Net.
The proposed MADR-Net consistently outperformed the classical U-Net, with relative improvements in Dice coefficient of 5.43% for electron microscopy, 3.43% for dermoscopy, and 3.92% for MRI. The experimental results demonstrate superior performance on single- and multi-class datasets and show that the proposed MADR-Net can be utilized as a baseline for the assessment of cross-dataset segmentation tasks.
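As a rough illustration of the quantities involved, a Dice coefficient and a Dice-plus-cross-entropy hybrid loss can be sketched as below; the equal weighting and exact formulation are assumptions for illustration, not the MADR-Net loss itself.

```python
import math

def dice_coefficient(pred, target, eps=1e-7):
    """Dice coefficient between two flat binary masks (lists of 0/1)."""
    intersection = sum(p * t for p, t in zip(pred, target))
    return (2.0 * intersection + eps) / (sum(pred) + sum(target) + eps)

def hybrid_loss(probs, target, alpha=0.5, eps=1e-7):
    """Illustrative hybrid loss: a weighted sum of (1 - soft Dice) and
    binary cross-entropy, a common pattern for segmentation training."""
    inter = sum(p * t for p, t in zip(probs, target))
    dice = (2.0 * inter + eps) / (sum(probs) + sum(target) + eps)
    bce = -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
               for t, p in zip(target, probs)) / len(probs)
    return alpha * (1 - dice) + (1 - alpha) * bce
```

A near-perfect prediction drives both terms, and hence the combined loss, toward zero.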


Subject(s)
Deep Learning , Image Processing, Computer-Assisted , Neural Networks, Computer , Humans , Image Processing, Computer-Assisted/methods , Algorithms , Magnetic Resonance Imaging/methods
5.
Commun Biol ; 7(1): 679, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38830995

ABSTRACT

Proteins and nucleic-acids are essential components of living organisms that interact in critical cellular processes. Accurate prediction of nucleic acid-binding residues in proteins can contribute to a better understanding of protein function. However, the discrepancy between protein sequence information and obtained structural and functional data renders most current computational models ineffective. Therefore, it is vital to design computational models based on protein sequence information to identify nucleic acid binding sites in proteins. Here, we implement an ensemble deep learning model-based nucleic-acid-binding residues on proteins identification method, called SOFB, which characterizes protein sequences by learning the semantics of biological dynamics contexts, and then develop an ensemble deep learning-based sequence network to learn feature representation and classification by explicitly modeling dynamic semantic information. Among them, the language learning model, which is constructed from natural language to biological language, captures the underlying relationships of protein sequences, and the ensemble deep learning-based sequence network consisting of different convolutional layers together with Bi-LSTM refines various features for optimal performance. Meanwhile, to address the imbalanced issue, we adopt ensemble learning to train multiple models and then incorporate them. Our experimental results on several DNA/RNA nucleic-acid-binding residue datasets demonstrate that our proposed model outperforms other state-of-the-art methods. In addition, we conduct an interpretability analysis of the identified nucleic acid binding residue sequences based on the attention weights of the language learning model, revealing novel insights into the dynamic semantic information that supports the identified nucleic acid binding residues. 
SOFB is available at https://github.com/Encryptional/SOFB and https://figshare.com/articles/online_resource/SOFB_figshare_rar/25499452 .


Subject(s)
Deep Learning , Binding Sites , Nucleic Acids/metabolism , Nucleic Acids/chemistry , Proteins/chemistry , Proteins/metabolism , Proteins/genetics , Protein Binding , Computational Biology/methods
6.
Breast Cancer Res ; 26(1): 90, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38831336

ABSTRACT

BACKGROUND: Nottingham histological grade (NHG) is a well-established prognostic factor in breast cancer histopathology but has a high inter-assessor variability, with many tumours being classified as intermediate grade, NHG2. Here, we evaluate whether DeepGrade, a previously developed model for risk stratification of resected tumour specimens, could be applied to risk-stratify tumour biopsy specimens. METHODS: A total of 11,955,755 tiles from 1169 whole slide images of preoperative biopsies from 896 patients diagnosed with breast cancer in Stockholm, Sweden, were included. DeepGrade, a deep convolutional neural network model, was applied for the prediction of low- and high-risk tumours. It was evaluated not only against clinically assigned grades NHG1 and NHG3 on the biopsy specimen but also against the grades assigned to the corresponding resection specimen, using the area under the receiver operating characteristic curve (AUC). The prognostic value of the DeepGrade model in the biopsy setting was evaluated using time-to-event analysis. RESULTS: Based on preoperative biopsy images, the DeepGrade model predicted resected tumour cases of clinical grades NHG1 and NHG3 with an AUC of 0.908 (95% CI: 0.88; 0.93). Furthermore, out of the 432 resected clinically-assigned NHG2 tumours, 281 (65%) were classified as DeepGrade-low and 151 (35%) as DeepGrade-high. Using a multivariable Cox proportional hazards model, the hazard ratio between DeepGrade low- and high-risk groups was estimated as 2.01 (95% CI: 1.06; 3.79). CONCLUSIONS: DeepGrade predicted tumour grades NHG1 and NHG3 on the resection specimen using only the biopsy specimen. The results demonstrate that the DeepGrade model can provide decision support to identify high-risk tumours based on preoperative biopsies, thus improving early treatment decisions.


Subject(s)
Breast Neoplasms , Deep Learning , Neoplasm Grading , Humans , Female , Breast Neoplasms/pathology , Breast Neoplasms/surgery , Middle Aged , Biopsy , Risk Assessment/methods , Prognosis , Aged , Adult , Sweden/epidemiology , Preoperative Period , Neural Networks, Computer , Breast/pathology , Breast/surgery
7.
Nat Commun ; 15(1): 4690, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38824132

ABSTRACT

Accurate identification of genetic alterations in tumors, such as those in the Fibroblast Growth Factor Receptor (FGFR) genes, is crucial for treatment with targeted therapies; however, molecular testing can delay patient care due to the time and tissue required. Successful development, validation, and deployment of an AI-based, biomarker-detection algorithm could reduce screening cost and accelerate patient recruitment. Here, we develop a deep-learning algorithm using >3000 H&E-stained whole slide images from patients with advanced urothelial cancers, optimized for high sensitivity to avoid ruling out trial-eligible patients. The algorithm is validated on a dataset of 350 patients, achieving an area under the curve of 0.75, specificity of 31.8% at 88.7% sensitivity, and a projected 28.7% reduction in molecular testing. We successfully deploy the system in a non-interventional study comprising 89 global clinical study sites and demonstrate its potential to prioritize/deprioritize molecular testing resources and provide substantial cost savings in drug development and clinical settings.
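The reported sensitivity, specificity, and projected reduction in molecular testing all derive from a single confusion matrix at the chosen operating point; a minimal sketch follows, where the function and variable names are illustrative, and patients scoring below the threshold are the ones deprioritized for testing.

```python
def screening_metrics(scores, labels, threshold):
    """Confusion-matrix metrics for a rule-out screening model.
    Patients with scores below the threshold skip molecular testing."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    # Fraction of all patients screened out = projected testing reduction.
    reduction = (tn + fn) / len(labels)
    return sensitivity, specificity, reduction
```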


Subject(s)
Algorithms , Deep Learning , Humans , Biomarkers, Tumor/metabolism , Biomarkers, Tumor/genetics , Clinical Trials as Topic , Urinary Bladder Neoplasms/pathology , Urinary Bladder Neoplasms/genetics , Urinary Bladder Neoplasms/diagnosis , Male , Female , Patient Selection , Urologic Neoplasms/pathology , Urologic Neoplasms/diagnosis , Urologic Neoplasms/genetics
8.
Sci Rep ; 14(1): 12601, 2024 06 01.
Article in English | MEDLINE | ID: mdl-38824162

ABSTRACT

Data categorization is a top concern in medical data analysis, where it is used to predict and detect illnesses; thus, it is applied in modern healthcare informatics. In modern informatics, machine learning and deep learning models have enjoyed great attention for categorizing medical data and improving illness detection. However, the existing techniques raise fundamental problems, such as high-dimensional feature spaces, computational complexity, and long execution times. This study presents a novel classification model employing metaheuristic methods to maximize the detection of true positives in Chronic Kidney Disease (CKD) diagnosis. The medical data is first extensively pre-processed: the data is cleaned using various mechanisms, including resolution of missing values, data transformation, and normalization procedures. The focus of these processes is to handle missing values and prepare the data for deep analysis. We adopt the Binary Grey Wolf Optimization method, a reliable metaheuristic approach to feature-subset selection, aimed at improving illness prediction accuracy. In the classification step, the model adopts the Extreme Learning Machine, whose hidden nodes are tuned through data optimization, to predict the presence of CKD. The complete classifier evaluation employs established measures, including recall, specificity, kappa, F-score, and accuracy, in addition to the feature selection. Data from the study show that the proposed approach achieves high levels of accuracy, outperforming existing models.
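Of the evaluation measures listed, Cohen's kappa is the least self-explanatory: it corrects raw agreement for the agreement expected by chance. A minimal pure-Python sketch for binary labels:

```python
def cohens_kappa(y_true, y_pred):
    """Cohen's kappa for binary labels: (observed agreement - chance
    agreement) / (1 - chance agreement)."""
    n = len(y_true)
    po = sum(t == p for t, p in zip(y_true, y_pred)) / n   # observed
    p1 = (sum(y_true) / n) * (sum(y_pred) / n)             # both say 1
    p0 = (1 - sum(y_true) / n) * (1 - sum(y_pred) / n)     # both say 0
    pe = p1 + p0                                           # chance
    return (po - pe) / (1 - pe)
```

Kappa is 1 for perfect agreement, 0 for chance-level agreement, and negative for systematic disagreement.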


Subject(s)
Medical Informatics , Renal Insufficiency, Chronic , Humans , Renal Insufficiency, Chronic/diagnosis , Medical Informatics/methods , Machine Learning , Deep Learning , Algorithms , Male , Female , Middle Aged
9.
Sci Rep ; 14(1): 12623, 2024 06 01.
Article in English | MEDLINE | ID: mdl-38824208

ABSTRACT

Crowd flow prediction has been studied for a variety of purposes, ranging from the private sector, such as store location selection according to the characteristics of commercial districts and customer-tailored marketing, to the public sector, such as the design of social infrastructure like transportation networks. Its importance is even greater in light of the spread of contagious diseases such as COVID-19. In many cases, crowd flow can be divided into subgroups by common characteristics such as gender, age, location type, etc. If we use this hierarchical structure of the data effectively, we can improve the prediction accuracy of crowd flow for subgroups. However, existing prediction models do not consider such hierarchical structure. In this study, we propose a deep learning model based on the global-local structure of the crowd flow data, which simultaneously utilizes the overall (global) crowd flow data and the data subdivided by site type (local) to predict the crowd flow of each subgroup. The experimental results show that the proposed model improves the prediction accuracy of each subdivided subgroup by 5.2% (Table 5 Cat #9) to 45.95% (Table 11 Cat #5), depending on the data set. This result comes from a comparison with related works under the same condition of using target-category data to predict each subgroup. In addition, when we refine the global data composition by considering the correlation between subgroups and excluding weakly correlated subgroups, the prediction accuracy is further improved by 5.6-48.65%.
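The refinement step that excludes weakly correlated subgroups can be sketched as a simple Pearson-correlation filter; the cut-off value and function names below are illustrative assumptions, not the paper's code.

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def filter_subgroups(global_flow, subgroups, min_corr=0.3):
    """Keep only subgroup series whose correlation with the global flow
    meets the cut-off, mirroring the refinement step described above."""
    return {name: series for name, series in subgroups.items()
            if pearson(global_flow, series) >= min_corr}
```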


Subject(s)
COVID-19 , Crowding , Deep Learning , Humans , COVID-19/epidemiology , SARS-CoV-2
10.
Sci Rep ; 14(1): 12630, 2024 06 02.
Article in English | MEDLINE | ID: mdl-38824210

ABSTRACT

In this study, we present the development of a fine structural human phantom designed specifically for applications in dentistry. This research focused on assessing the viability of applying medical computer vision techniques to the task of segmenting individual teeth within a phantom. Using a virtual cone-beam computed tomography (CBCT) system, we generated over 170,000 training datasets. These datasets were produced by varying the elemental densities and tooth sizes within the human phantom, as well as varying the X-ray spectrum, noise intensity, and projection cutoff intensity in the virtual CBCT system. The deep-learning (DL) based tooth segmentation model was trained using the generated datasets. The results demonstrate an agreement with manual contouring when applied to clinical CBCT data. Specifically, the Dice similarity coefficient exceeded 0.87, indicating the robust performance of the developed segmentation model even when virtual imaging was used. The present results show the practical utility of virtual imaging techniques in dentistry and highlight the potential of medical computer vision for enhancing precision and efficiency in dental imaging processes.


Subject(s)
Cone-Beam Computed Tomography , Phantoms, Imaging , Tooth , Humans , Tooth/diagnostic imaging , Tooth/anatomy & histology , Cone-Beam Computed Tomography/methods , Dentistry/methods , Image Processing, Computer-Assisted/methods , Deep Learning
11.
Sci Rep ; 14(1): 12615, 2024 06 01.
Article in English | MEDLINE | ID: mdl-38824217

ABSTRACT

Standard clinical practice to assess fetal well-being during labour utilises monitoring of the fetal heart rate (FHR) using cardiotocography. However, visual evaluation of FHR signals can result in subjective interpretations leading to inter- and intra-observer disagreement. Therefore, recent studies have proposed deep-learning-based methods to interpret FHR signals and detect fetal compromise. These methods have typically focused on evaluating fixed-length FHR segments at the conclusion of labour, leaving little time for clinicians to intervene. In this study, we propose a novel FHR evaluation method using an input-length-invariant deep learning model (FHR-LINet) to progressively evaluate FHR as labour progresses and achieve rapid detection of fetal compromise. Using our FHR-LINet model, we obtained approximately a 25% reduction in the time taken to detect fetal compromise compared to the state-of-the-art multimodal convolutional neural network, while achieving 27.5%, 45.0%, 56.5% and 65.0% mean true positive rate at 5%, 10%, 15% and 20% false positive rate, respectively. A diagnostic system based on our approach could potentially enable earlier intervention for fetal compromise and improve clinical outcomes.
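The "true positive rate at a fixed false positive rate" metric quoted above can be computed directly from model scores; the threshold-selection convention in this sketch (lowest threshold whose FPR stays within the target) is an assumption for illustration.

```python
def tpr_at_fpr(scores, labels, target_fpr):
    """True-positive rate at a given false-positive rate: scan candidate
    thresholds and report the best TPR whose FPR does not exceed target."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    best_tpr = 0.0
    for t in sorted(set(scores)):
        fpr = sum(s >= t for s in neg) / len(neg)
        if fpr <= target_fpr:
            tpr = sum(s >= t for s in pos) / len(pos)
            best_tpr = max(best_tpr, tpr)
    return best_tpr
```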


Subject(s)
Cardiotocography , Deep Learning , Heart Rate, Fetal , Heart Rate, Fetal/physiology , Humans , Pregnancy , Female , Cardiotocography/methods , Neural Networks, Computer , Fetal Monitoring/methods , Signal Processing, Computer-Assisted , Fetus
12.
Sci Rep ; 14(1): 12598, 2024 06 01.
Article in English | MEDLINE | ID: mdl-38824219

ABSTRACT

To tackle the difficulty of extracting features from one-dimensional spectral signals using traditional spectral analysis, a metabolomics analysis method is proposed to locate two-dimensional correlated spectral feature bands and combine them with deep learning classification for wine origin traceability. Metabolomics analysis was performed on 180 wine samples from 6 different wine regions using UPLC-Q-TOF-MS. Indole, sulfacetamide, and caffeine were selected as the main differential components. By analyzing the molecular structures of these components and referring to the main functional groups on the infrared spectrum, characteristic band regions with wavelengths in the ranges of 1000-1400 nm and 1500-1800 nm were selected. Two-dimensional correlation spectra (2D-COS) were drawn separately to generate synchronous and asynchronous correlation spectra, and convolutional neural network (CNN) classification models were established to achieve wine origin traceability. The experimental results demonstrate that combining the two segments of two-dimensional characteristic spectra determined by metabolomics screening with convolutional neural networks yields optimal classification results. This validates the effectiveness of using metabolomics screening to determine spectral feature regions in tracing wine origin. This approach effectively removes irrelevant variables while retaining crucial chemical information, enhancing spectral resolution. This integrated approach strengthens the classification model's understanding of the samples, significantly increasing accuracy.
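In Noda's formulation of 2D-COS, the synchronous correlation spectrum is the covariance of the mean-centred spectral intensities at two wavenumbers across the perturbation steps; a minimal sketch is below (the asynchronous spectrum, which requires the Hilbert-Noda transform, is omitted).

```python
def synchronous_spectrum(spectra):
    """Synchronous 2D correlation spectrum: spectra is a list of m dynamic
    spectra, each with n channels; returns the n x n covariance matrix of
    the mean-centred intensities."""
    m = len(spectra)        # number of perturbation steps
    n = len(spectra[0])     # number of spectral channels
    means = [sum(row[j] for row in spectra) / m for j in range(n)]
    centred = [[row[j] - means[j] for j in range(n)] for row in spectra]
    return [[sum(centred[k][i] * centred[k][j] for k in range(m)) / (m - 1)
             for j in range(n)] for i in range(n)]
```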


Subject(s)
Deep Learning , Metabolomics , Wine , Wine/analysis , Metabolomics/methods , Neural Networks, Computer , Chromatography, High Pressure Liquid/methods , Mass Spectrometry/methods
13.
BMC Infect Dis ; 24(1): 551, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38824500

ABSTRACT

BACKGROUND: Leishmaniasis, an illness caused by protozoa, accounts for a substantial number of human fatalities globally, thereby emerging as one of the most fatal parasitic diseases. The conventional methods employed for detecting the Leishmania parasite through microscopy are not only time-consuming but also susceptible to errors. Therefore, the main objective of this study is to develop a model based on deep learning, a subfield of artificial intelligence, that could facilitate automated diagnosis of leishmaniasis. METHODS: In this research, we introduce LeishFuNet, a deep learning framework designed for detecting Leishmania parasites in microscopic images. To enhance the performance of our model through same-domain transfer learning, we initially train four distinct models: VGG19, ResNet50, MobileNetV2, and DenseNet 169 on a dataset related to another infectious disease, COVID-19. These trained models are then utilized as new pre-trained models and fine-tuned on a set of 292 self-collected high-resolution microscopic images, consisting of 138 positive cases and 154 negative cases. The final prediction is generated through the fusion of information analyzed by these pre-trained models. Grad-CAM, an explainable artificial intelligence technique, is implemented to demonstrate the model's interpretability. RESULTS: The final results of utilizing our model for detecting amastigotes in microscopic images are as follows: accuracy of 98.95 ± 1.4%, specificity of 98 ± 2.67%, sensitivity of 100%, precision of 97.91 ± 2.77%, F1-score of 98.92 ± 1.43%, and area under the receiver operating characteristic curve of 99 ± 1.33%. CONCLUSION: The newly devised system is precise, swift, user-friendly, and economical, thus indicating the potential of deep learning as a substitute for the prevailing leishmanial diagnostic techniques.


Subject(s)
Deep Learning , Leishmania , Leishmaniasis , Microscopy , Telemedicine , Humans , Leishmaniasis/parasitology , Leishmaniasis/diagnosis , Leishmania/isolation & purification , Microscopy/methods , COVID-19 , SARS-CoV-2/isolation & purification
14.
Biomed Eng Online ; 23(1): 50, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38824547

ABSTRACT

BACKGROUND: Over 60% of epilepsy patients globally are children, whose early diagnosis and treatment are critical for their development and can substantially reduce the disease's burden on both families and society. Numerous algorithms for automated epilepsy detection from EEGs have been proposed. Yet, the occurrence of epileptic seizures during an EEG exam cannot always be guaranteed in clinical practice. Models that exclusively use seizure EEGs for detection risk artificially enhanced performance metrics. Therefore, there is a pressing need for a universally applicable model that can perform automatic epilepsy detection in a variety of complex real-world scenarios. METHOD: To address this problem, we have devised a novel technique employing a temporal convolutional neural network with self-attention (TCN-SA). Our model comprises two primary components: a TCN for extracting time-variant features from EEG signals, followed by a self-attention (SA) layer that assigns importance to these features. By focusing on key features, our model achieves heightened classification accuracy for epilepsy detection. RESULTS: The efficacy of our model was validated on a pediatric epilepsy dataset we collected and on the Bonn dataset, attaining an accuracy of 95.50% on our dataset and accuracies of 97.37% (A vs. E) and 93.50% (B vs. E) on the Bonn dataset. When compared with other deep learning architectures (temporal convolutional neural network, self-attention network, and standardized convolutional neural network) using the same datasets, our TCN-SA model demonstrated superior performance in the automated detection of epilepsy. CONCLUSION: The proven effectiveness of the TCN-SA approach substantiates its potential as a valuable tool for the automated detection of epilepsy, offering significant benefits in diverse and complex real-world clinical settings.


Subject(s)
Electroencephalography , Epilepsy , Neural Networks, Computer , Epilepsy/diagnosis , Humans , Signal Processing, Computer-Assisted , Automation , Child , Deep Learning , Diagnosis, Computer-Assisted/methods , Time Factors
15.
Cancer Immunol Immunother ; 73(8): 153, 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38833187

ABSTRACT

BACKGROUND: Non-invasive biomarkers for predicting immunotherapy response are urgently needed to prevent both premature cessation of treatment and ineffective extension. This study aimed to construct a non-invasive model for predicting immunotherapy response, based on the integration of deep learning and habitat radiomics, in patients with advanced non-small cell lung cancer (NSCLC). METHODS: Independent patient cohorts from three medical centers were enrolled for training (n = 164) and testing (n = 82). Habitat imaging radiomics features were derived from sub-regions clustered from each individual's tumor by the K-means method. The deep learning features were extracted based on a 3D ResNet algorithm. Pearson correlation coefficient, t-test, and least absolute shrinkage and selection operator regression were used to select features. A support vector machine was applied to implement the deep learning and habitat radiomics models, respectively. Then, a combination model was developed integrating both sources of data. RESULTS: The combination model performed strongly, achieving an area under the receiver operating characteristic curve of 0.865 (95% CI 0.772-0.931). The model significantly discriminated high- and low-risk patients and exhibited a significant benefit in clinical use. CONCLUSION: The integration of deep learning and habitat radiomics contributed to predicting response to immunotherapy in patients with NSCLC. The developed integration model may be used as a potential tool for individual immunotherapy management.


Subject(s)
Carcinoma, Non-Small-Cell Lung , Deep Learning , Immunotherapy , Lung Neoplasms , Humans , Carcinoma, Non-Small-Cell Lung/therapy , Carcinoma, Non-Small-Cell Lung/immunology , Carcinoma, Non-Small-Cell Lung/diagnostic imaging , Carcinoma, Non-Small-Cell Lung/pathology , Lung Neoplasms/therapy , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/immunology , Immunotherapy/methods , Female , Male , Middle Aged , Aged , Prognosis , ROC Curve , Radiomics
16.
J Robot Surg ; 18(1): 237, 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38833204

ABSTRACT

A major obstacle in applying machine learning to medical fields is the disparity between the data distribution of the training images and the data encountered in clinics. This phenomenon can be explained by inconsistent acquisition techniques and large variations across the patient spectrum. The result is poor translation of the trained models to the clinic, which limits their implementation in medical practice. Patient-specific trained networks could provide a potential solution. Although patient-specific approaches are usually infeasible because of the expenses associated with on-the-fly labeling, the use of generative adversarial networks enables this approach. This study proposes a patient-specific approach based on generative adversarial networks. In the presented training pipeline, the user trains a patient-specific segmentation network with extremely limited data, which is supplemented with artificial samples generated by generative adversarial models. This approach is demonstrated on endoscopic video data captured during fetoscopic laser coagulation, a procedure used for treating twin-to-twin transfusion syndrome by ablating the placental blood vessels. Compared to a standard deep learning segmentation approach, the pipeline achieved an intersection-over-union score of 0.60 using only 20 annotated images, versus 100 images for the standard approach. Furthermore, training with 20 annotated images without the pipeline achieves an intersection-over-union score of only 0.30; incorporating the pipeline therefore corresponds to a 100% increase in performance. In summary, a GAN-based pipeline was used to generate artificial data that supplements the real data, enabling patient-specific training of a segmentation network.
We show that artificial images generated using GANs significantly improve performance in vessel segmentation and that training patient-specific models can be a viable solution to bring automated vessel segmentation to the clinic.
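The intersection-over-union score reported above is a standard overlap measure; for flat binary masks it reduces to a few lines.

```python
def iou(pred, target):
    """Intersection over union between two flat binary masks (lists of 0/1)."""
    inter = sum(p & t for p, t in zip(pred, target))
    union = sum(p | t for p, t in zip(pred, target))
    return inter / union if union else 1.0  # two empty masks agree fully
```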


Subject(s)
Placenta , Humans , Pregnancy , Placenta/blood supply , Placenta/diagnostic imaging , Female , Deep Learning , Image Processing, Computer-Assisted/methods , Fetofetal Transfusion/surgery , Fetofetal Transfusion/diagnostic imaging , Machine Learning , Robotic Surgical Procedures/methods , Neural Networks, Computer
17.
Invest Ophthalmol Vis Sci ; 65(6): 6, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38833259

ABSTRACT

Purpose: To develop Choroidalyzer, an open-source, end-to-end pipeline for segmenting the choroid region, vessels, and fovea, and deriving choroidal thickness, area, and vascular index. Methods: We used 5600 OCT B-scans (233 subjects, six systemic disease cohorts, three device types, two manufacturers). To generate region and vessel ground-truths, we used state-of-the-art automatic methods following manual correction of inaccurate segmentations, with foveal positions manually annotated. We trained a U-Net deep learning model to detect the region, vessels, and fovea to calculate choroid thickness, area, and vascular index in a fovea-centered region of interest. We analyzed segmentation agreement (AUC, Dice) and choroid metrics agreement (Pearson, Spearman, mean absolute error [MAE]) in internal and external test sets. We compared Choroidalyzer to two manual graders on a small subset of external test images and examined cases of high error. Results: Choroidalyzer took 0.299 seconds per image on a standard laptop and achieved excellent region segmentation (Dice: internal 0.9789, external 0.9749), very good vessel segmentation performance (Dice: internal 0.8817, external 0.8703), and excellent fovea location prediction (MAE: internal 3.9 pixels, external 3.4 pixels). For thickness, area, and vascular index, Pearson correlations were 0.9754, 0.9815, and 0.8285 (internal)/0.9831, 0.9779, 0.7948 (external), respectively (all P < 0.0001). Choroidalyzer's agreement with graders was comparable to the intergrader agreement across all metrics. Conclusions: Choroidalyzer is an open-source, end-to-end pipeline that accurately segments the choroid and reliably extracts thickness, area, and vascular index. Choroidal vessel segmentation in particular is a difficult and subjective task, and fully automatic methods like Choroidalyzer could provide objectivity and standardization.
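As an illustration of the derived metrics (not the Choroidalyzer implementation), the Dice coefficient and a vascular index can be computed from binary masks; the vascular-index definition below, vessel area divided by total choroid region area, is an assumption based on common usage:

```python
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice coefficient for two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0

def vascular_index(vessel_mask: np.ndarray, region_mask: np.ndarray) -> float:
    """Fraction of the choroid region occupied by vessels (assumed definition)."""
    region = region_mask.astype(bool)
    vessels = np.logical_and(vessel_mask.astype(bool), region)
    return float(vessels.sum()) / region.sum()

region = np.ones((4, 4), dtype=int)   # toy case: whole window is choroid
vessels = np.zeros((4, 4), dtype=int)
vessels[:2, :] = 1                    # vessels cover half the region
print(vascular_index(vessels, region))  # 8 vessel pixels / 16 region pixels = 0.5
```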


Subject(s)
Choroid , Tomography, Optical Coherence , Humans , Choroid/blood supply , Choroid/diagnostic imaging , Tomography, Optical Coherence/methods , Male , Female , Middle Aged , Aged , Deep Learning , Retinal Vessels/diagnostic imaging , Fovea Centralis/diagnostic imaging , Fovea Centralis/blood supply , Adult , Reproducibility of Results
18.
PLoS One ; 19(5): e0298373, 2024.
Article in English | MEDLINE | ID: mdl-38691542

ABSTRACT

Pulse repetition interval modulation (PRIM) is integral to radar identification in modern electronic support measure (ESM) and electronic intelligence (ELINT) systems. Various distortions, including missing pulses, spurious pulses, unintended jitters, and noise from radar antenna scans, often hinder the accurate recognition of PRIM. This research introduces a novel three-stage approach for PRIM recognition, emphasizing the innovative use of PRI sound. A transfer learning-aided deep convolutional neural network (DCNN) is initially used for feature extraction. This is followed by an extreme learning machine (ELM) for real-time PRIM classification. Finally, a gray wolf optimizer (GWO) refines the network's robustness. To evaluate the proposed method, we develop a real experimental dataset consisting of the sounds of six common PRI patterns. We evaluated eight pre-trained DCNN architectures, with VGG16 and ResNet50V2 notably achieving recognition accuracies of 97.53% and 96.92%, respectively. Integrating ELM and GWO further optimized the accuracy rates to 98.80% and 97.58%, respectively. This research advances radar identification by offering an enhanced method for PRIM recognition, emphasizing the potential of PRI sound to address real-world distortions in ESM and ELINT systems.
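The ELM stage is the simplest of the three to sketch: a random, fixed hidden layer followed by a closed-form least-squares solve for the output weights, which is why it suits real-time classification. A minimal generic ELM on toy data (not the paper's implementation, and without the GWO refinement):

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y_onehot, n_hidden=64):
    """ELM training: random fixed projection, then least-squares output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))  # hidden weights, never trained
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y_onehot, rcond=None)  # closed-form solve
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Toy two-class problem: well-separated clusters around (-2,-2) and (+2,+2).
X = np.vstack([rng.normal(-2, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
y = np.repeat([0, 1], 50)
W, b, beta = elm_fit(X, np.eye(2)[y])
acc = (elm_predict(X, W, b, beta) == y).mean()
print(acc)
```

Because only the output weights are fit, training reduces to one linear solve, in contrast to iterative backpropagation.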


Subject(s)
Deep Learning , Neural Networks, Computer , Sound , Radar , Algorithms , Pattern Recognition, Automated/methods
19.
Cereb Cortex ; 34(13): 72-83, 2024 May 02.
Article in English | MEDLINE | ID: mdl-38696605

ABSTRACT

Autism spectrum disorder has been emerging as a growing public health concern. Early diagnosis of autism spectrum disorder is crucial for timely, effective intervention and treatment. However, conventional diagnosis methods based on communication and behavioral patterns are unreliable for children younger than 2 years of age. Given evidence of neurodevelopmental abnormalities in infants with autism spectrum disorder, we resort to a novel deep learning-based method to extract key features from the inherently scarce, class-imbalanced, and heterogeneous structural MR images for early autism diagnosis. Specifically, we propose a Siamese verification framework to extend the scarce data and an unsupervised compressor to alleviate data imbalance by extracting key features. We also propose weight constraints to cope with sample heterogeneity by giving different samples different voting weights during validation, and use path signatures to unravel meaningful developmental features from the two-time-point longitudinal data. We further identify the brain regions on which the model focuses for autism diagnosis. Extensive experiments show that our method performs well under practical scenarios, surpassing existing machine learning methods and providing anatomical insights for early autism diagnosis.
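The core idea of Siamese verification is that two inputs pass through branches with shared weights, so the distance between their embeddings is a meaningful "same class or not" score; pairing samples this way also multiplies the effective training set, which is why it helps with scarce data. A minimal numpy sketch with hypothetical toy vectors (not the paper's network or data):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(16, 4))  # shared embedding weights: the "Siamese" part

def embed(x):
    # Both branches apply the SAME weights, so distances are comparable.
    return np.tanh(x @ W)

def pair_distance(x1, x2):
    """Verification score: a small distance suggests the same class."""
    return float(np.linalg.norm(embed(x1) - embed(x2)))

a = rng.normal(0.0, 0.1, 16)
b = a + rng.normal(0.0, 0.01, 16)  # slightly perturbed copy of a (same "class")
c = rng.normal(3.0, 0.1, 16)       # sample from a very different region
print(pair_distance(a, b) < pair_distance(a, c))  # the matched pair is closer
```

In practice the embedding would be a deep network and the distance fed to a contrastive or verification loss; the shared-weights principle is the same.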


Subject(s)
Autism Spectrum Disorder , Brain , Deep Learning , Early Diagnosis , Humans , Autism Spectrum Disorder/diagnostic imaging , Autism Spectrum Disorder/diagnosis , Infant , Brain/diagnostic imaging , Brain/pathology , Magnetic Resonance Imaging/methods , Child, Preschool , Male , Female , Autistic Disorder/diagnosis , Autistic Disorder/diagnostic imaging , Autistic Disorder/pathology , Unsupervised Machine Learning
20.
Brief Bioinform ; 25(3)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38695119

ABSTRACT

Sequence similarity is of paramount importance in biology, as similar sequences tend to have similar function and share common ancestry. Scoring matrices, such as PAM or BLOSUM, play a crucial role in all bioinformatics algorithms for identifying similarities, but have the drawback that they are fixed, independent of context. We propose a new scoring method for amino acid similarity that remedies this weakness by being contextually dependent. It relies on recent advances in deep learning architectures that employ self-supervised learning to leverage enormous amounts of unlabelled data and generate contextual embeddings, which are vector representations for words. These ideas have been applied to protein sequences, producing embedding vectors for protein residues. We propose the E-score between two residues as the cosine similarity between their embedding vector representations. Thorough testing on a wide variety of reference multiple sequence alignments indicates that the alignments produced using the new E-score method, especially the ProtT5-score, are significantly better than those obtained using BLOSUM matrices. The new method proposes to change the way alignments are computed, with far-reaching implications in all areas of textual data that use sequence similarity. The program to compute alignments based on various E-scores is available as a web server at e-score.csd.uwo.ca. The source code is freely available for download from github.com/lucian-ilie/E-score.
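The E-score itself is simple to state: the cosine similarity of two residues' embedding vectors. A minimal sketch with hypothetical 4-dimensional embeddings (the real method uses protein language-model embeddings such as ProtT5, which are far higher-dimensional):

```python
import numpy as np

def e_score(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two residue embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical embeddings: two residues in similar (hydrophobic) contexts
# and one in a very different (charged) context.
ala = np.array([1.0, 0.2, 0.0, 0.1])
val = np.array([0.9, 0.3, 0.1, 0.1])
asp = np.array([-0.5, 1.0, 0.8, -0.2])
print(e_score(ala, val) > e_score(ala, asp))  # True: similar contexts score higher
```

Unlike a BLOSUM entry, the score varies with each residue's sequence context, because the embeddings themselves are context-dependent.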


Subject(s)
Algorithms , Computational Biology , Sequence Alignment , Sequence Alignment/methods , Computational Biology/methods , Software , Sequence Analysis, Protein/methods , Amino Acid Sequence , Proteins/chemistry , Proteins/genetics , Deep Learning , Databases, Protein