Results 1 - 12 of 12
1.
J Imaging ; 8(5)2022 Apr 22.
Article in English | MEDLINE | ID: mdl-35621885

ABSTRACT

Colorectal cancer (CRC) is a leading cause of mortality worldwide, and preventive screening modalities such as colonoscopy have been shown to noticeably decrease CRC incidence and mortality. Improving colonoscopy quality remains a challenging task due to limiting factors including the training levels of colonoscopists and the variability in polyp sizes, morphologies, and locations. Deep learning methods have led to state-of-the-art systems for the identification of polyps in colonoscopy videos. In this study, we show that deep learning can also be applied to the segmentation of polyps in real time, and that the underlying models can be trained using mostly weakly labeled data, in the form of bounding box annotations that do not contain precise contour information. A novel dataset, Polyp-Box-Seg, of 4070 colonoscopy images with polyps from over 2000 patients is collected, and a subset of 1300 images is manually annotated with segmentation masks. A series of models is trained to evaluate various strategies that utilize bounding box annotations for segmentation tasks. A model trained on the 1300 polyp images with segmentation masks achieves a Dice coefficient of 81.52%, which improves significantly to 85.53% when using a weakly supervised strategy leveraging the bounding box images. The Polyp-Box-Seg dataset, together with a real-time video demonstration of the segmentation system, is publicly available.
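The Dice coefficient used to score the segmentation models measures the overlap between a predicted mask and the ground-truth mask. A minimal sketch, with toy masks that are purely illustrative (not drawn from Polyp-Box-Seg):

```python
def dice_coefficient(pred, target, eps=1e-7):
    """Dice coefficient between two flat binary masks:
    2*|P ∩ T| / (|P| + |T|), ranging from 0 (no overlap) to 1 (identical)."""
    intersection = sum(p and t for p, t in zip(pred, target))
    return 2.0 * intersection / (sum(pred) + sum(target) + eps)

# Toy 1-D masks: the prediction covers 4 pixels, the ground truth 3,
# and they agree on 3 pixels -> Dice = 2*3 / (4+3) ≈ 0.857.
pred   = [0, 1, 1, 1, 1, 0]
target = [0, 1, 1, 1, 0, 0]
print(round(dice_coefficient(pred, target), 3))
```

A Dice coefficient of 81.52% thus means the predicted polyp contours overlap the expert masks substantially more than they miss them.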

2.
Bioinformatics ; 38(7): 2064-2065, 2022 03 28.
Article in English | MEDLINE | ID: mdl-35108364

ABSTRACT

MOTIVATION: Accurately predicting protein secondary structure and relative solvent accessibility is important for the study of protein evolution and structure, and is an early-stage component of typical protein 3D structure prediction pipelines. RESULTS: We present a new, improved version of the SSpro/ACCpro suite of predictors for the prediction of protein secondary structure (in three and eight classes) and relative solvent accessibility. The changes include improved, TensorFlow-trained, deep learning predictors, a richer set of profile features (232 features per residue position) and sequence-only features (71 features per position), a more recent Protein Data Bank (PDB) snapshot for training, better hyperparameter tuning, and improvements to the HOMOLpro module, which leverages structural information from protein segment homologs in the PDB. The new SSpro 6 outperforms the previous version (SSpro 5) by 3-4% in Q3 accuracy and, when used with HOMOLpro, reaches accuracy in the 95-100% range. AVAILABILITY AND IMPLEMENTATION: The predictors' software, data, and web servers are available through the SCRATCH suite of protein structure predictors at http://scratch.proteomics.ics.uci.edu. To maximize compatibility and ease of use, the deep learning predictors are re-implemented as pure Python/NumPy code without a TensorFlow dependency. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
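Q3 accuracy, the metric in which SSpro 6 improves by 3-4%, is simply the fraction of residues assigned the correct one of three secondary structure classes. A minimal sketch with made-up sequences:

```python
def q3_accuracy(predicted, observed):
    """Fraction of residues whose 3-class secondary structure state
    (H = helix, E = strand, C = coil) is predicted correctly."""
    assert len(predicted) == len(observed)
    correct = sum(p == o for p, o in zip(predicted, observed))
    return correct / len(observed)

# Hypothetical 8-residue protein: 7 of 8 states match.
print(q3_accuracy("HHHHECCC", "HHHHECCE"))
```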


Subjects
Deep Learning , Solvents/chemistry , Proteins/chemistry , Protein Secondary Structure , Software
3.
Ophthalmol Glaucoma ; 5(4): 402-412, 2022.
Article in English | MEDLINE | ID: mdl-34798322

ABSTRACT

PURPOSE: Accurate identification of iridocorneal structures on gonioscopy is difficult to master, and errors can lead to grave surgical complications. This study aimed to develop and train convolutional neural networks (CNNs) to accurately identify the trabecular meshwork (TM) in gonioscopic videos in real time for eventual clinical integration. DESIGN: Cross-sectional study. PARTICIPANTS: Adult patients with open angles were identified in academic glaucoma clinics in Taipei, Taiwan, and Irvine, California. METHODS: Neural encoder-decoder CNNs (U-Nets) were trained to predict a curve marking the TM using an expert-annotated data set of 378 gonioscopy images. The model was trained and evaluated with stratified cross-validation grouped by patient to ensure uncorrelated training and testing sets, as well as on a separate test set and 3 intraoperative gonioscopic videos of ab interno trabeculotomy with Trabectome (totaling 90 seconds, at 30 frames per second). We also evaluated our model's performance by comparing its accuracy against that of ophthalmologists. MAIN OUTCOME MEASURES: Successful development of real-time-capable CNNs that accurately predict and mark the TM's position in video frames of gonioscopic views. Models were evaluated against human expert annotations of static images and video data. RESULTS: The best CNN model produced test set predictions with a median deviation of 0.8% of the video frame's height (15.25 µm) from the human experts' annotations. This error is less than the average vertical height of the TM. The worst test frame prediction of this model had an average deviation of 4% of the frame height (76.28 µm), which is still considered a successful prediction. When challenged with unseen images, the CNN model scored more than 2 standard deviations above the mean performance of the surveyed general ophthalmologists.
CONCLUSIONS: Our CNN model can identify the TM in gonioscopy videos in real time with remarkable accuracy, allowing it to be used with a video camera intraoperatively. This model can have applications in surgical training, automated screening, and intraoperative guidance. The data set developed in this study is one of the first publicly available gonioscopy image banks (https://lin.hs.uci.edu/research), which may encourage future investigation of this topic.
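The reported errors are internally consistent: converting a deviation expressed as a fraction of the frame height into micrometres via the scale implied by the abstract's own numbers (0.8% ≈ 15.25 µm, implying a vertical field of view of roughly 1906 µm — an inferred value, not one the study reports) recovers the worst-frame figure:

```python
def deviation_um(frac_of_height, frame_height_um):
    """Convert a TM-curve error given as a fraction of the video
    frame's height into micrometres."""
    return frac_of_height * frame_height_um

# Scale inferred from the median error: 0.8% of frame height = 15.25 µm.
frame_height_um = 15.25 / 0.008
# Worst-frame error: 4% of frame height, matching the reported ~76 µm.
print(round(deviation_um(0.04, frame_height_um), 2))
```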


Subjects
Deep Learning , Trabecular Meshwork , Adult , Cross-Sectional Studies , Gonioscopy , Humans , Intraocular Pressure , Trabecular Meshwork/surgery
4.
Lasers Surg Med ; 53(1): 171-178, 2021 01.
Article in English | MEDLINE | ID: mdl-32960994

ABSTRACT

BACKGROUND AND OBJECTIVES: One of the challenges in developing effective hair loss therapies is the lack of reliable methods to monitor treatment response or alopecia progression. In this study, we propose the use of optical coherence tomography (OCT) and automated deep learning to non-invasively evaluate hair and follicle counts, which may be used to monitor the success of hair growth therapy more accurately and efficiently. STUDY DESIGN/MATERIALS AND METHODS: We collected 70 OCT scans from 14 patients with alopecia and trained a convolutional neural network (CNN) to automatically count all follicles present in the scans. The model is based on a dual approach of both detecting hair follicles and estimating the local hair density, in order to give accurate counts even when two or more adjacent hairs are in close proximity to each other. RESULTS: We evaluate our system on the 70 manually labeled OCT scans, taken at different scalp locations of the 14 patients, with 20 of those redundantly labeled by two human expert OCT operators. When comparing the individual human predictions and considering the exact locations of hair and follicle predictions, we find that the two human raters disagree with each other on approximately 22% of hairs and follicles. Overall, the deep learning (DL) system predicts the number of follicles with an average error rate of 11.8% and the number of hairs with an average error rate of 18.7% on the 70 scans. The OCT system can capture one scalp location in three seconds, and after the scan is processed (which takes half a minute with an unoptimized implementation) the DL model makes all of its predictions in less than a second. CONCLUSION: This approach is well-positioned to become the standard for non-invasive evaluation of hair growth treatment progress in patients, saving significant amounts of time and effort compared with manual evaluation. Lasers Surg. Med. © 2020 Wiley Periodicals, Inc.
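The per-scan error rates quoted above are relative counting errors. A minimal sketch with hypothetical counts (the 53/60 pair below is invented to land near the reported 11.8% follicle error; it is not taken from the study's data):

```python
def count_error_rate(predicted_count, true_count):
    """Relative error of an automated count against a manual reference count."""
    return abs(predicted_count - true_count) / true_count

# Hypothetical scan: the CNN counts 53 follicles where a human counted 60.
print(round(count_error_rate(53, 60), 3))
```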


Subjects
Deep Learning , Scalp , Alopecia/diagnostic imaging , Hair , Hair Follicle/diagnostic imaging , Humans , Scalp/diagnostic imaging , Optical Coherence Tomography
5.
Comput Struct Biotechnol J ; 18: 2281-2289, 2020.
Article in English | MEDLINE | ID: mdl-32994887

ABSTRACT

The use of evolutionary profiles to predict protein secondary structure, as well as other protein structural features, has been standard practice since the 1990s. Using profiles in the input of such predictors, in place of, or in addition to, the sequence itself leads to significantly more accurate predictions. While profiles can enhance structural signals, their role remains somewhat surprising, as proteins do not use profiles when folding in vivo. Furthermore, the same sequence-based redundancy reduction protocols initially derived to train and evaluate sequence-based predictors have been applied to train and evaluate profile-based predictors. This can lead to unfair comparisons, since profiles may facilitate the bleeding of information between training and test sets. Here we use the extensively studied problem of secondary structure prediction to better evaluate the role of profiles and show that: (1) high levels of profile similarity between training and test proteins are observed when standard sequence-based redundancy protocols are used; (2) the gain in accuracy of profile-based predictors over sequence-based predictors relies strongly on these high levels of profile similarity between training and test proteins; and (3) the overall accuracy of a profile-based predictor on a given protein data set is a biased estimate of the actual accuracy of the predictor, and a biased basis for comparing it to other predictors. We show, however, that this bias can be mitigated by implementing a new protocol (EVALpro) which evaluates the accuracy of profile-based predictors as a function of the profile similarity between training and test proteins. Such a protocol not only allows for a fair comparison of predictors on equally hard or easy examples, but also reduces the impact of the choice of similarity cutoff when selecting test proteins.
The EVALpro program is available in the SCRATCH suite (www.scratch.proteomics.ics.uci.edu) and can be downloaded at www.download.igb.uci.edu/#evalpro.
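The idea behind such a protocol can be sketched as binning test proteins by their profile similarity to the training set and reporting accuracy per bin, so predictors are compared on equally hard examples. The data layout below is hypothetical and does not reflect EVALpro's actual interface:

```python
from collections import defaultdict

def accuracy_by_similarity(results, bin_width=0.1):
    """results: (profile_similarity_to_training_set, prediction_correct) pairs.
    Returns mean accuracy per similarity bin instead of one pooled number."""
    bins = defaultdict(list)
    for sim, correct in results:
        bins[int(sim / bin_width)].append(correct)
    return {round(b * bin_width, 2): sum(v) / len(v)
            for b, v in sorted(bins.items())}

# Toy results: accuracy rises with similarity to the training profiles.
toy = [(0.15, True), (0.18, False), (0.55, True), (0.58, True), (0.95, True)]
print(accuracy_by_similarity(toy))
```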

6.
Comput Struct Biotechnol J ; 18: 967-972, 2020.
Article in English | MEDLINE | ID: mdl-32368331

ABSTRACT

Total Shoulder Arthroplasty (TSA) is a type of surgery in which the damaged ball of the shoulder joint is replaced with a prosthesis. Many years later, this prosthesis may need servicing or replacement. In some situations, such as when the patient has moved to another country, the model and the manufacturer of the prosthesis may be unknown to the patient and the primary doctor. Correct identification of the implant's model prior to surgery is required for selecting the correct equipment and procedure. We present a novel way to automatically classify shoulder implants in X-ray images. We employ deep learning models and compare their performance to alternative classifiers, such as random forests and gradient boosting. We find that deep convolutional neural networks significantly outperform the other classifiers, but only when out-of-domain data such as ImageNet is used to pre-train the models. On a data set containing X-ray images of shoulder implants from 4 manufacturers and 16 different models, deep learning identifies the correct manufacturer with an accuracy of approximately 80% in 10-fold cross-validation, while the other classifiers achieve an accuracy of 56% or less. We believe that this approach will be a useful tool in clinical practice and is likely applicable to other kinds of prostheses.
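The 10-fold cross-validation behind the ~80% accuracy figure partitions the images into ten disjoint folds, training on nine and testing on the held-out one. A minimal index-level sketch (no stratification by class or patient, which a real evaluation would add):

```python
def kfold_splits(n_samples, k=10):
    """Yield (train_indices, test_indices) for k-fold cross-validation;
    the last fold absorbs any remainder."""
    idx = list(range(n_samples))
    fold = n_samples // k
    for i in range(k):
        test = idx[i * fold:(i + 1) * fold] if i < k - 1 else idx[(k - 1) * fold:]
        held_out = set(test)
        train = [j for j in idx if j not in held_out]
        yield train, test

# Every sample appears in exactly one test fold.
splits = list(kfold_splits(23, k=10))
print(len(splits), sorted(i for _, test in splits for i in test) == list(range(23)))
```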

7.
Transplant Proc ; 52(1): 246-258, 2020.
Article in English | MEDLINE | ID: mdl-31926745

ABSTRACT

Prediction models of post-liver transplant mortality are crucial so that donor organs are not allocated to recipients with unreasonably high probabilities of mortality. Machine learning algorithms, particularly deep neural networks (DNNs), can often achieve higher predictive performance than conventional models. In this study, we trained a DNN to predict 90-day post-transplant mortality using preoperative variables and compared its performance to that of the Survival Outcomes Following Liver Transplantation (SOFT) and Balance of Risk (BAR) scores, using United Network for Organ Sharing data on adult patients who received a deceased donor liver transplant between 2005 and 2015 (n = 57,544). The DNN was trained using 202 features, and the best DNN's architecture consisted of 5 hidden layers with 110 neurons each. The area under the receiver operating characteristic curve (AUC) of the best DNN model was 0.703 (95% CI: 0.682-0.726), compared with 0.655 (95% CI: 0.633-0.678) for the BAR score and 0.688 (95% CI: 0.667-0.711) for the SOFT score. In conclusion, despite its complexity, the DNN did not achieve a significantly higher discriminative performance than the SOFT score. Future risk models will likely benefit from the inclusion of other data sources, including high-resolution clinical features for which DNNs are particularly apt to outperform conventional statistical methods.
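The AUC values being compared have a direct probabilistic reading: the chance that a randomly chosen recipient who died within 90 days receives a higher risk score than a randomly chosen survivor. A minimal pairwise-comparison sketch with toy scores (illustrative values, not model outputs):

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney formulation: the probability that a
    random positive case outscores a random negative case (ties count half)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy cohort: two deaths (label 1), two survivors (label 0).
print(roc_auc([0, 0, 1, 1], [0.10, 0.40, 0.35, 0.80]))
```

An AUC of 0.703 thus means the DNN ranks a deceased recipient above a surviving one about 70% of the time.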


Subjects
Computer Simulation , Deep Learning , Liver Transplantation/mortality , Adult , Female , Humans , Living Donors , Male , ROC Curve , Registries
8.
IEEE/ACM Trans Comput Biol Bioinform ; 16(3): 1029-1035, 2019.
Article in English | MEDLINE | ID: mdl-29993583

ABSTRACT

Likely drug candidates identified in traditional pre-clinical drug screens often fail in patient trials, increasing the societal burden of drug discovery. A major contributing factor to this phenomenon is the failure of traditional in vitro models of drug response to accurately mimic many of the more complex properties of human biology. We have recently introduced a new microphysiological system for growing vascularized, perfused microtissues that more accurately models human physiology and is suitable for large drug screens. In this work, we develop a machine learning model that can quickly and accurately flag compounds which effectively disrupt vascular networks, from images taken before and after drug application in vitro. The system is based on a convolutional neural network and achieves near-perfect accuracy, while potentially committing no expensive false negatives.


Subjects
Antineoplastic Agents/pharmacology , Deep Learning , Drug Discovery/methods , Image Processing, Computer-Assisted , Neoplasms/drug therapy , Neovascularization, Pathologic/diagnostic imaging , Cell Culture Techniques , Extracellular Matrix/metabolism , Humans , Microscopy , Neoplasms/diagnostic imaging , Neural Networks, Computer , Pattern Recognition, Automated
9.
Gastroenterology ; 155(4): 1069-1078.e8, 2018 10.
Article in English | MEDLINE | ID: mdl-29928897

ABSTRACT

BACKGROUND & AIMS: The benefit of colonoscopy for colorectal cancer prevention depends on the adenoma detection rate (ADR). The ADR should reflect the adenoma prevalence rate, which is estimated to be higher than 50% in the screening-age population. However, the ADR of colonoscopists varies from 7% to 53%. It is estimated that every 1% increase in ADR lowers the risk of interval colorectal cancers by 3%-6%. New strategies are needed to increase the ADR during colonoscopy. We tested the ability of computer-assisted image analysis using convolutional neural networks (CNNs; a deep learning model for image analysis) to improve polyp detection, a surrogate for ADR. METHODS: We designed and trained deep CNNs to detect polyps using a diverse and representative set of 8,641 hand-labeled images from screening colonoscopies collected from more than 2000 patients. We tested the models on 20 colonoscopy videos with a total duration of 5 hours. Expert colonoscopists were asked to identify all polyps in 9 de-identified colonoscopy videos, which were selected from archived video studies, with or without the benefit of the CNN overlay. Their findings were compared with those of the CNN, using CNN-assisted expert review as the reference. RESULTS: When tested on manually labeled images, the CNN identified polyps with an area under the receiver operating characteristic curve of 0.991 and an accuracy of 96.4%. In the analysis of colonoscopy videos in which 28 polyps were removed, 4 expert reviewers identified 8 additional polyps without CNN assistance that had not been removed, and identified an additional 17 polyps with CNN assistance (45 in total). All polyps removed and identified by expert review were detected by the CNN. The CNN had a false-positive rate of 7%. CONCLUSIONS: In a set of 8,641 colonoscopy images containing 4,088 unique polyps, the CNN identified polyps with a cross-validation accuracy of 96.4% and an area under the receiver operating characteristic curve of 0.991. The CNN system detected and localized polyps well within real-time constraints using an ordinary desktop machine with a contemporary graphics processing unit. This system could increase the ADR and decrease interval colorectal cancers, but requires validation in large multicenter trials.


Subjects
Adenomatous Polyps/pathology , Colonic Polyps/pathology , Colonoscopy/methods , Colorectal Neoplasms/pathology , Diagnosis, Computer-Assisted/methods , Early Detection of Cancer/methods , Image Interpretation, Computer-Assisted/methods , Machine Learning , Neural Networks, Computer , Area Under Curve , Feasibility Studies , Humans , Observer Variation , Predictive Value of Tests , Prognosis , ROC Curve , Reproducibility of Results , Video Recording
10.
J Chem Inf Model ; 58(2): 207-211, 2018 02 26.
Article in English | MEDLINE | ID: mdl-29320180

ABSTRACT

Deep learning methods applied to problems in chemoinformatics often require the use of recursive neural networks to handle data with graphical structure and variable size. We present a useful classification of recursive neural network approaches into two classes: the inner and the outer approach. The inner approach uses recursion inside the underlying graph, essentially "crawling" the edges of the graph, while the outer approach uses recursion outside the underlying graph, aggregating information over progressively longer distances in an orthogonal direction. We illustrate the inner and outer approaches on several examples. More importantly, we provide open-source TensorFlow implementations of both approaches (available at www.github.com/Chemoinformatics/InnerOuterRNN and cdb.ics.uci.edu) which can be used in combination with training data to produce efficient models for predicting the physical, chemical, and biological properties of small molecules.
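The "inner" approach can be sketched as recursion along the molecular graph's own edges, each step folding neighbor states into every atom's state. The linear aggregation below is a deliberately simplified stand-in for the paper's trained networks:

```python
def inner_recursion(adjacency, features, steps=2):
    """Toy 'inner' recursion: after each step, a node's state combines its
    own feature with the previous states of its graph neighbours, so
    information crawls along the edges of the graph."""
    state = dict(features)
    for _ in range(steps):
        state = {v: features[v] + sum(state[u] for u in adjacency[v])
                 for v in adjacency}
    return state

# Three-atom path graph 0-1-2 with scalar node features.
adjacency = {0: [1], 1: [0, 2], 2: [1]}
features = {0: 1.0, 1: 2.0, 2: 3.0}
print(inner_recursion(adjacency, features))
```

After two steps, the end atoms' states already reflect information from the far side of the path, which is the point of recursing inside the graph.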


Subjects
Databases, Chemical , Deep Learning , Algorithms , Bayes Theorem , Small Molecule Libraries/chemistry , Software
11.
Nat Methods ; 14(4): 435-442, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28250467

ABSTRACT

Teravoxel volume electron microscopy data sets from neural tissue can now be acquired in weeks, but data analysis requires years of manual labor. We developed the SyConn framework, which uses deep convolutional neural networks and random forest classifiers to infer a richly annotated synaptic connectivity matrix from manual neurite skeleton reconstructions by automatically identifying mitochondria, synapses and their types, axons, dendrites, spines, myelin, somata and cell types. We tested our approach on serial block-face electron microscopy data sets from zebrafish, mouse and zebra finch, and computed the synaptic wiring of songbird basal ganglia. We found that, for example, basal-ganglia cell types with high firing rates in vivo had higher densities of mitochondria and vesicles and that synapse sizes and quantities scaled systematically, depending on the innervated postsynaptic cell types.


Subjects
Image Processing, Computer-Assisted/methods , Microscopy, Electron/methods , Synapses/physiology , Animals , Axons/ultrastructure , Dendrites/ultrastructure , Mice , Neural Networks, Computer , Neurites/ultrastructure , Software , Zebrafish
12.
Neuroimage ; 129: 460-469, 2016 Apr 01.
Article in English | MEDLINE | ID: mdl-26808333

ABSTRACT

Brain extraction from magnetic resonance imaging (MRI) is crucial for many neuroimaging workflows. Current methods demonstrate good results on non-enhanced T1-weighted images but struggle when confronted with other modalities and pathologically altered tissue. In this paper we present a 3D convolutional deep learning architecture to address these shortcomings. In contrast to existing methods, we are not limited to non-enhanced T1w images. When trained appropriately, our approach handles an arbitrary number of modalities, including contrast-enhanced scans. Its applicability to MRI data comprising four channels (non-enhanced and contrast-enhanced T1w, T2w, and FLAIR contrasts) is demonstrated on a challenging clinical data set containing brain tumors (N=53), where our approach significantly outperforms six commonly used tools with a mean Dice score of 95.19. Further, the proposed method at least matches state-of-the-art performance on three publicly available data sets: IBSR, LPBA40, and OASIS, totaling N=135 volumes. On the IBSR (96.32) and LPBA40 (96.96) data sets the convolutional neural network (CNN) obtains the highest average Dice scores, albeit not significantly different from the second-best performing method. On the OASIS data it achieves the second-best Dice score (95.02), with no statistical difference from the best performing tool. On all data sets the method achieves the highest average specificity, whereas its sensitivity is about average. The cut-off threshold used to generate binary masks from the CNN's probability output can be adjusted to increase the sensitivity of the method; this comes at the cost of decreased specificity and has to be decided in an application-specific manner. Using an optimized GPU implementation, predictions can be obtained in less than one minute. The proposed method may prove useful for large-scale studies and clinical trials.
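The sensitivity/specificity trade-off described at the end can be made concrete: lowering the cut-off applied to the CNN's per-voxel brain probabilities converts more voxels to "brain", raising sensitivity while lowering specificity. A toy four-voxel sketch (all values illustrative):

```python
def sens_spec(probs, truth, threshold):
    """Sensitivity and specificity of the binary mask obtained by
    thresholding per-voxel brain probabilities."""
    pred = [p >= threshold for p in probs]
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    tn = sum(1 for p, t in zip(pred, truth) if not p and not t)
    pos = sum(truth)
    neg = len(truth) - pos
    return tp / pos, tn / neg

probs = [0.9, 0.45, 0.4, 0.2]   # CNN brain probabilities
truth = [1, 1, 0, 0]            # 1 = brain voxel, 0 = background
print(sens_spec(probs, truth, 0.5))   # strict cut-off: one brain voxel missed
print(sens_spec(probs, truth, 0.3))   # lower cut-off: all brain found, one false positive
```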


Subjects
Brain Neoplasms/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Neuroimaging/methods , Humans , Image Enhancement/methods , Machine Learning , Magnetic Resonance Imaging/methods , Neural Networks, Computer , Skull