Results 1 - 20 of 36
1.
Clin Otolaryngol ; 48(6): 888-894, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37488094

ABSTRACT

BACKGROUND: Classifying sphenoid pneumatisation is an important but often overlooked task in reporting sinus CT scans. Artificial intelligence (AI) and one of its key methods, convolutional neural networks (CNNs), can create algorithms that learn from data without being programmed with explicit rules and that have shown utility in radiological image classification. OBJECTIVE: To determine if a trained CNN can accurately classify sphenoid sinus pneumatisation on CT sinus imaging. METHODS: Sagittal slices through the natural ostium of the sphenoid sinus were extracted from retrospectively collected bone-window CT scans of the paranasal sinuses for consecutive patients over 6 years. Two blinded otolaryngology residents reviewed each image and classified the sphenoid sinus pneumatisation as either conchal, presellar or sellar. An AI algorithm was developed using the Microsoft Azure Custom Vision deep learning platform to classify the pattern of pneumatisation. RESULTS: Seven hundred and eighty images from 400 patients were used to train the algorithm, which was then tested on a further 118 images from 62 patients. The algorithm achieved an accuracy of 93.2% (95% confidence interval [CI] 87.1-97.0), 87.3% (95% CI 79.9-92.7) and 85.6% (95% CI 78.0-91.4) in correctly identifying conchal, presellar and sellar sphenoid pneumatisation, respectively. The overall weighted accuracy of the CNN was 85.9%. CONCLUSION: The CNN described demonstrated moderately accurate classification of sphenoid pneumatisation subtypes on CT scans. The use of CNN-based assistive tools may enable surgeons to achieve safer operative planning through routine automated reporting, allowing greater resources to be directed towards the identification of pathology.
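The abstract above gives no implementation details. As a rough, hedged illustration of the general approach it describes (fine-tuning a pretrained CNN for a three-class classification task), the following Python/PyTorch sketch replaces a classifier head and trains on folders of CT slices; the folder layout, backbone, class names and hyperparameters are assumptions, not details from the study.

```python
# Hedged sketch: fine-tuning a pretrained CNN for three-class classification
# (conchal / presellar / sellar). Folder layout and hyperparameters are assumed.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

classes = ["conchal", "presellar", "sellar"]  # assumed label names

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Assumes images are arranged as sphenoid_ct/train/<class_name>/*.png
train_set = datasets.ImageFolder("sphenoid_ct/train", transform=preprocess)
loader = DataLoader(train_set, batch_size=16, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(classes))  # replace the classifier head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```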

2.
Ann Otol Rhinol Laryngol ; 132(4): 417-430, 2023 Apr.
Article in English | MEDLINE | ID: mdl-35651308

ABSTRACT

INTRODUCTION: Convolutional neural networks (CNNs) represent a state-of-the-art methodological technique in AI and deep learning, and were specifically created for image classification and computer vision tasks. CNNs have been applied in radiology in a number of different disciplines, mostly outside otolaryngology, potentially due to a lack of familiarity with this technology within the otolaryngology community. CNNs have the potential to revolutionize clinical practice by reducing the time required to perform manual tasks. This literature search aims to present a comprehensive systematic review of the published literature with regard to CNNs and their utility to date in ENT radiology. METHODS: Data were extracted from a variety of databases including PubMed, ProQuest, MEDLINE, Open Knowledge Maps, and Gale OneFile Computer Science. Medical subject headings (MeSH) terms and keywords were used to extract related literature from each database's inception to October 2020. Inclusion criteria were studies in which CNNs were the main intervention and the focus was radiology relevant to ENT. Titles and abstracts were reviewed, followed by the article contents. Once the final list of articles was obtained, their reference lists were also searched to identify further articles. RESULTS: Thirty articles were identified for inclusion in this study. Studies utilizing CNNs in most ENT subspecialties were identified. Studies utilized CNNs for a number of tasks including identification of structures, presence of pathology, and segmentation of tumors for radiotherapy planning. All studies reported a high degree of accuracy of CNNs in performing the chosen task. CONCLUSION: This study provides a better understanding of CNN methodology used in ENT radiology, demonstrating a myriad of potential uses for this exciting technology, including nodule and tumor identification, identification of anatomical variation, and segmentation of tumors. It is anticipated that this field will continue to evolve and that these technologies and methodologies will become more entrenched in our everyday practice.


Subjects
Otolaryngology, Radiology, Humans, Neural Networks (Computer), Radiography
3.
BMC Bioinformatics ; 23(1): 431, 2022 Oct 17.
Article in English | MEDLINE | ID: mdl-36253726

ABSTRACT

BACKGROUND: Predicting morphological changes to anatomical structures from 3D shapes, such as blood vessels or the appearance of the face, is of growing interest to clinicians. Machine learning (ML) has had great success driving predictions in 2D; however, methods suitable for 3D shapes are less established and their use cases remain unclear. OBJECTIVE AND METHODS: This systematic review aims to identify the clinical implementation of 3D shape prediction and ML workflows. Ovid-MEDLINE, Embase, Scopus and Web of Science were searched until 28th March 2022. RESULTS: 13,754 articles were identified, with 12 studies meeting the final inclusion criteria. These studies involved prediction of the face, head, aorta, forearm, and breast, with most aiming to visualize shape changes after surgical interventions. The ML algorithms identified were regressions (67%), artificial neural networks (25%), and principal component analysis (8%). Meta-analysis was not feasible due to the heterogeneity of the outcomes. CONCLUSION: 3D shape prediction is a nascent but growing area of research in medicine. This review revealed the feasibility of predicting 3D shapes clinically using ML, which could play an important role in clinician-patient visualization and communication. However, all studies were early phase, and language and reporting were inconsistent. Future work could develop guidelines for publication and promote open sharing of source code.


Subjects
Human Body, Machine Learning, Algorithms, Humans, Neural Networks (Computer)
4.
IEEE Trans Med Imaging ; 41(11): 3266-3277, 2022 11.
Article in English | MEDLINE | ID: mdl-35679380

ABSTRACT

The identification of melanoma involves an integrated analysis of skin lesion images acquired using clinical and dermoscopy modalities. Dermoscopic images provide a detailed view of the subsurface visual structures that supplement the macroscopic details from clinical images. Visual melanoma diagnosis is commonly based on the 7-point visual category checklist (7PC), which involves identifying specific characteristics of skin lesions. The 7PC contains intrinsic relationships between categories that can aid classification, such as shared features, correlations, and the contributions of categories towards diagnosis. Manual classification is subjective and prone to intra- and interobserver variability. This presents an opportunity for automated methods to aid in diagnostic decision support. Current state-of-the-art methods focus on a single image modality (either clinical or dermoscopy) and ignore information from the other, or do not fully leverage the complementary information from both modalities. Furthermore, there is no method that exploits the 'intercategory' relationships in the 7PC. In this study, we address these issues by proposing a graph-based intercategory and intermodality network (GIIN) with two modules. A graph-based relational module (GRM) leverages intercategorical relations and intermodal relations, and prioritises the visual structure details from dermoscopy by encoding category representations in a graph network. The category embedding learning module (CELM) captures representations that are specialised for each category and support the GRM. We show that our modules are effective at enhancing classification performance using three public datasets (7PC, ISIC 2017, and ISIC 2018), and that our method outperforms state-of-the-art methods at classifying the 7PC categories and diagnosis.
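As a loose illustration of the intercategory idea only (not the authors' GRM/CELM implementation), the sketch below builds per-category embeddings from a shared backbone feature and mixes them with a learnable adjacency before per-category classification; the dimensions and the number of categories are assumptions.

```python
# Hedged sketch of intercategory message passing: each checklist category gets an
# embedding, and a learned adjacency mixes information between categories before
# per-category classification. Generic illustration only, not the published GRM/CELM.
import torch
import torch.nn as nn

class InterCategoryGraphHead(nn.Module):
    def __init__(self, num_categories=8, feat_dim=256, num_classes=3):
        super().__init__()
        # one learnable projection per category builds category-specific embeddings
        self.category_proj = nn.ModuleList(
            [nn.Linear(feat_dim, feat_dim) for _ in range(num_categories)])
        # learnable adjacency capturing intercategory relations
        self.adjacency = nn.Parameter(torch.eye(num_categories))
        self.classifiers = nn.ModuleList(
            [nn.Linear(feat_dim, num_classes) for _ in range(num_categories)])

    def forward(self, fused_features):                  # (batch, feat_dim) from a CNN backbone
        embeddings = torch.stack(
            [proj(fused_features) for proj in self.category_proj], dim=1)
        mixing = torch.softmax(self.adjacency, dim=-1)
        mixed = torch.einsum("ij,bjd->bid", mixing, embeddings)   # one message-passing step
        return [clf(mixed[:, i]) for i, clf in enumerate(self.classifiers)]

outputs = InterCategoryGraphHead()(torch.randn(2, 256))
print([o.shape for o in outputs])   # 8 per-category logit tensors of shape (2, 3)
```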


Subjects
Melanoma, Skin Diseases, Skin Neoplasms, Humans, Dermoscopy/methods, Skin Neoplasms/diagnostic imaging, Melanoma/diagnostic imaging
5.
Clin Otolaryngol ; 47(3): 401-413, 2022 05.
Article in English | MEDLINE | ID: mdl-35253378

ABSTRACT

OBJECTIVES: To summarise the accuracy of artificial intelligence (AI) computer vision algorithms to classify ear disease from otoscopy. DESIGN: Systematic review and meta-analysis. METHODS: Using the PRISMA guidelines, nine online databases were searched for articles that used AI computer vision algorithms developed from various methods (convolutional neural networks, artificial neural networks, support vector machines, decision trees and k-nearest neighbours) to classify otoscopic images. Diagnostic classes of interest were: normal tympanic membrane, acute otitis media (AOM), otitis media with effusion (OME), chronic otitis media (COM) with or without perforation, cholesteatoma and canal obstruction. MAIN OUTCOME MEASURES: Accuracy in correctly classifying otoscopic images compared with otolaryngologists (ground truth). The Quality Assessment of Diagnostic Accuracy Studies Version 2 tool was used to assess the quality of methodology and risk of bias. RESULTS: Thirty-nine articles were included. Algorithms achieved 90.7% (95% CI: 90.1-91.3%) accuracy in differentiating between normal and abnormal otoscopy images in 14 studies. The most common multiclassification algorithm (3 or more diagnostic classes) achieved 97.6% (95% CI: 97.3-97.9%) accuracy in differentiating between normal, AOM and OME in three studies. AI algorithms outperformed human assessors at classifying otoscopy images, achieving 93.4% (95% CI: 90.5-96.4%) versus 73.2% (95% CI: 67.9-78.5%) accuracy in three studies. Convolutional neural networks achieved the highest accuracy compared to other classification methods. CONCLUSION: AI can classify ear disease from otoscopy. A concerted effort is required to establish a comprehensive and reliable otoscopy database for algorithm training. An AI-supported otoscopy system may assist health care workers, trainees and primary care practitioners with less otology experience in identifying ear disease.


Subjects
Ear Diseases, Otitis Media with Effusion, Otitis Media, Artificial Intelligence, Humans, Otitis Media/diagnosis, Otitis Media with Effusion/diagnosis, Otoscopes, Otoscopy/methods
6.
Otol Neurotol ; 43(4): 481-488, 2022 04 01.
Article in English | MEDLINE | ID: mdl-35239622

ABSTRACT

OBJECTIVE: To develop an artificial intelligence image classification algorithm to triage otoscopic images from rural and remote Australian Aboriginal and Torres Strait Islander children. STUDY DESIGN: Retrospective observational study. SETTING: Tertiary referral center. PATIENTS: Rural and remote Aboriginal and Torres Strait Islander children who underwent tele-otology ear health screening in the Northern Territory, Australia between 2010 and 2018. INTERVENTIONS: Otoscopic images were labeled by otolaryngologists to establish the ground truth. Deep and transfer learning methods were used to develop an image classification algorithm. MAIN OUTCOME MEASURES: Accuracy, sensitivity, specificity, positive predictive value, negative predictive value, and area under the curve (AUC) of the resultant algorithm compared with the ground truth. RESULTS: Six thousand five hundred and twenty-seven images were used (5927 images for training and 600 for testing). The algorithm achieved an accuracy of 99.3% for acute otitis media, 96.3% for chronic otitis media, 77.8% for otitis media with effusion (OME), and 98.2% to classify wax/obstructed canal. To differentiate between multiple diagnoses, the algorithm achieved 74.4 to 92.8% accuracy and an AUC of 0.963 to 0.997. The most common incorrect classification pattern was OME misclassified as normal tympanic membranes. CONCLUSIONS: The paucity of access to tertiary otolaryngology care for rural and remote Aboriginal and Torres Strait Islander communities may contribute to an under-identification of ear disease. Computer vision image classification algorithms can accurately classify ear disease from otoscopic images of Indigenous Australian children. In the future, a validated algorithm may integrate with existing telemedicine initiatives to support effective triage and facilitate early treatment and referral.
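The abstract reports accuracy, sensitivity, specificity, predictive values and AUC. As a small, hedged illustration of how such metrics are typically computed for one binary class from model probabilities (not the study's code or data), a scikit-learn sketch:

```python
# Hedged sketch: computing accuracy, sensitivity, specificity, PPV, NPV and AUC
# for one binary class from predicted probabilities. Arrays are placeholders.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                    # ground-truth labels (placeholder)
y_prob = np.array([0.9, 0.2, 0.8, 0.6, 0.3, 0.1, 0.7, 0.4])    # model probabilities (placeholder)
y_pred = (y_prob >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
auc = roc_auc_score(y_true, y_prob)
print(accuracy, sensitivity, specificity, ppv, npv, auc)
```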


Subjects
Ear Diseases, Otitis Media with Effusion, Otitis Media, Algorithms, Artificial Intelligence, Australia, Child, Computers, Ear Diseases/diagnostic imaging, Humans, Native Hawaiian or Other Pacific Islander, Otitis Media/diagnosis, Triage
7.
Sci Rep ; 12(1): 2173, 2022 02 09.
Article in English | MEDLINE | ID: mdl-35140267

ABSTRACT

Radiogenomic relationships (RRs) aim to identify statistically significant correlations between medical image features and molecular characteristics derived from analysing tissue samples. Previous radiogenomics studies mainly relied on a single category of image feature extraction techniques (ETs); these are (i) handcrafted ETs that encompass visual imaging characteristics curated from the knowledge of human experts, and (ii) deep ETs that quantify abstract-level imaging characteristics from large data. Prior studies therefore failed to leverage the complementary information that is accessible by fusing the ETs. In this study, we propose a fused feature signature (FFSig): a selection of image features from handcrafted and deep ETs (e.g., transfer learning and fine-tuning of deep learning models). We evaluated the FFSig's ability to better represent RRs compared to individual ET approaches with two public datasets: the first dataset was used to build the FFSig using 89 patients with non-small cell lung cancer (NSCLC) and comprised gene expression data and CT images of the thorax and upper abdomen for each patient; the second NSCLC dataset, comprising 117 patients with CT images and RNA-Seq data, was used as the validation set. Our results show that our FFSig encoded complementary imaging characteristics of tumours and identified more RRs with a broader range of genes that are related to important biological functions such as tumourigenesis. We suggest that the FFSig has the potential to identify important RRs that may assist cancer diagnosis and treatment in the future.
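As a hedged, simplified illustration of the fused feature signature idea (concatenating handcrafted and deep image features and keeping those most correlated with gene expression), the following NumPy sketch uses random placeholder data; the feature counts, selection rule and cut-off are assumptions, not the paper's method.

```python
# Hedged sketch: concatenate handcrafted and deep image features, then keep the
# features with the strongest correlation to any gene. Data are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_patients = 89
handcrafted = rng.normal(size=(n_patients, 30))    # e.g. shape/intensity/texture statistics
deep = rng.normal(size=(n_patients, 128))          # e.g. features from a pretrained CNN
genes = rng.normal(size=(n_patients, 500))         # gene expression matrix (placeholder)

fused = np.concatenate([handcrafted, deep], axis=1)

# Rank each fused image feature by its strongest absolute correlation with any gene.
best_corr = np.zeros(fused.shape[1])
for j in range(fused.shape[1]):
    corrs = [abs(np.corrcoef(fused[:, j], genes[:, g])[0, 1]) for g in range(genes.shape[1])]
    best_corr[j] = max(corrs)

signature_idx = np.argsort(best_corr)[::-1][:20]   # keep the top-20 fused features
print("selected feature indices:", signature_idx)
```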


Subjects
Non-Small-Cell Lung Carcinoma/diagnostic imaging, Non-Small-Cell Lung Carcinoma/genetics, Imaging Genomics, Lung Neoplasms/diagnostic imaging, Lung Neoplasms/genetics, Deep Learning, Gene Ontology, Humans, RNA-Seq, X-Ray Computed Tomography, Transcriptome
8.
Phys Med Biol ; 66(24)2021 12 07.
Article in English | MEDLINE | ID: mdl-34818637

ABSTRACT

Objective. Positron emission tomography-computed tomography (PET-CT) is regarded as the imaging modality of choice for the management of soft-tissue sarcomas (STSs). Distant metastases (DM) are the leading cause of death in STS patients and early detection is important to effectively manage tumors with surgery, radiotherapy and chemotherapy. In this study, we aim to detect DM early in patients with STS using their PET-CT data. Approach. We derive a new convolutional neural network method for early DM detection. The novelty of our method is the introduction of a constrained hierarchical multi-modality feature learning approach to integrate functional imaging (PET) features with anatomical imaging (CT) features. In addition, we removed the reliance on manual input, e.g. tumor delineation, for extracting imaging features. Main results. Our experimental results on a well-established benchmark PET-CT dataset show that our method achieved the highest accuracy (0.896) and AUC (0.903) scores when compared to the state-of-the-art methods (unpaired Student's t-test p-value < 0.05). Significance. Our method could be an effective and supportive tool to aid physicians in tumor quantification and in identifying image biomarkers for cancer treatment.
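As a loose illustration of multi-modality feature learning (not the published constrained hierarchical method), the sketch below encodes PET and CT slices in separate branches and fuses the feature maps before a binary distant-metastasis prediction; all layer sizes are assumptions.

```python
# Hedged sketch: a two-branch PET/CT network that fuses modality features before a
# binary (DM vs. no DM) prediction. Illustrative only; layer sizes are assumed.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.ReLU(), nn.MaxPool2d(2))

class PetCtFusionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.pet1, self.pet2 = conv_block(1, 16), conv_block(16, 32)
        self.ct1, self.ct2 = conv_block(1, 16), conv_block(16, 32)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(64, 2))      # 2 classes: DM / no DM

    def forward(self, pet, ct):
        p1, c1 = self.pet1(pet), self.ct1(ct)
        p2, c2 = self.pet2(p1), self.ct2(c1)
        fused = torch.cat([p2, c2], dim=1)               # fuse functional + anatomical features
        return self.head(fused)

logits = PetCtFusionNet()(torch.randn(2, 1, 128, 128), torch.randn(2, 1, 128, 128))
print(logits.shape)   # (2, 2)
```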


Subjects
Deep Learning, Sarcoma, Soft Tissue Neoplasms, Humans, Neural Networks (Computer), Positron Emission Tomography Computed Tomography/methods, Sarcoma/diagnostic imaging, Soft Tissue Neoplasms/diagnostic imaging
9.
IEEE J Biomed Health Inform ; 25(9): 3507-3516, 2021 09.
Article in English | MEDLINE | ID: mdl-33591922

ABSTRACT

Multimodal positron emission tomography-computed tomography (PET-CT) is used routinely in the assessment of cancer. PET-CT combines the high sensitivity of PET for tumor detection with the anatomical information from CT. Tumor segmentation is a critical element of PET-CT, but at present the performance of existing automated methods for this challenging task is low. Segmentation tends to be done manually by different imaging experts, which is labor-intensive and prone to errors and inconsistency. Previous automated segmentation methods largely focused on fusing information that is extracted separately from the PET and CT modalities, with the underlying assumption that each modality contains complementary information. However, these methods do not fully exploit the high PET tumor sensitivity that can guide the segmentation. We introduce a deep learning-based framework for multimodal PET-CT segmentation with a multimodal spatial attention module (MSAM). The MSAM automatically learns to emphasize regions (spatial areas) related to tumors and suppress normal regions with physiologically high uptake from the PET input. The resulting spatial attention maps are subsequently employed to target a convolutional neural network (CNN) backbone for segmentation of areas with higher tumor likelihood from the CT image. Our experimental results on two clinical PET-CT datasets of non-small cell lung cancer (NSCLC) and soft tissue sarcoma (STS) validate the effectiveness of our framework in these different cancer types. We show that our MSAM, with a conventional U-Net backbone, surpasses the state-of-the-art lung tumor segmentation approach by a margin of 7.6% in Dice similarity coefficient (DSC).
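A minimal, hedged sketch of the spatial-attention idea described above: a small PET branch predicts a per-pixel attention map that re-weights CT feature maps before a segmentation head. This is a simplified stand-in, not the published MSAM/U-Net code.

```python
# Hedged sketch: PET-derived spatial attention applied to CT features before a
# per-pixel tumor prediction. Simplified illustration only.
import torch
import torch.nn as nn

class SpatialAttentionFusion(nn.Module):
    def __init__(self):
        super().__init__()
        self.pet_attention = nn.Sequential(          # PET -> single-channel attention map
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 1), nn.Sigmoid())
        self.ct_encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.seg_head = nn.Conv2d(16, 1, 1)          # per-pixel tumor logit

    def forward(self, pet, ct):
        attention = self.pet_attention(pet)          # (B, 1, H, W), values in [0, 1]
        ct_feats = self.ct_encoder(ct)
        return self.seg_head(ct_feats * attention)   # emphasize likely-tumor regions

mask_logits = SpatialAttentionFusion()(torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64))
print(mask_logits.shape)   # (1, 1, 64, 64)
```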


Subjects
Non-Small-Cell Lung Carcinoma, Lung Neoplasms, Attention, Humans, Lung Neoplasms/diagnostic imaging, Neural Networks (Computer), Positron Emission Tomography Computed Tomography
10.
IEEE Trans Med Imaging ; 39(7): 2385-2394, 2020 07.
Article in English | MEDLINE | ID: mdl-32012005

ABSTRACT

The accuracy and robustness of image classification with supervised deep learning are dependent on the availability of large-scale labelled training data. In medical imaging, these large labelled datasets are sparse, mainly due to the complexity of manual annotation. Deep convolutional neural networks (CNNs), with transferable knowledge, have been employed as a solution to limited annotated data through: 1) fine-tuning generic knowledge with a relatively smaller amount of labelled medical imaging data, and 2) learning image representations that are invariant to different domains. These approaches, however, are still reliant on labelled medical image data. Our aim is to use a new hierarchical unsupervised feature extractor to reduce reliance on annotated training data. Our unsupervised approach uses a multi-layer zero-bias convolutional auto-encoder that constrains the transformation of generic features from a pre-trained CNN (for natural images) to non-redundant and locally relevant features for the medical image data. We also propose a context-based feature augmentation scheme to improve the discriminative power of the feature representation. We evaluated our approach on 3 public medical image datasets and compared it to other state-of-the-art supervised CNNs. Our unsupervised approach achieved better accuracy when compared to other conventional unsupervised methods and baseline fine-tuned CNNs.
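As a rough illustration of one component mentioned above, the sketch below shows a convolutional auto-encoder whose convolutions carry no bias terms ("zero-bias"), trained with a reconstruction objective; the architecture and sizes are assumptions and the constraint from a pre-trained CNN is omitted.

```python
# Hedged sketch: a bias-free convolutional auto-encoder trained to reconstruct
# images so its encoder can serve as an unsupervised feature extractor.
import torch
import torch.nn as nn

class ZeroBiasConvAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1, bias=False), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1, bias=False), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1, bias=False), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1, bias=False))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ZeroBiasConvAE()
x = torch.randn(4, 1, 64, 64)
loss = nn.functional.mse_loss(model(x), x)   # reconstruction objective
loss.backward()
```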


Subjects
Diagnostic Imaging, Neural Networks (Computer)
11.
IEEE J Transl Eng Health Med ; 7: 1800909, 2019.
Article in English | MEDLINE | ID: mdl-31857918

ABSTRACT

OBJECTIVE: Large-scale retrospective analysis of fetal ultrasound (US) data is important in understanding the cumulative impact of antenatal factors on offspring's health outcomes. Although the benefits are evident, there is a paucity of research into such large-scale studies because they require tedious and expensive manual processing of large data repositories. This study presents an automated framework to facilitate retrospective analysis of large-scale US data repositories. METHOD: Our framework consists of four modules: (1) an image classifier to distinguish Brightness (B)-mode images; (2) a fetal image structure identifier to select US images containing user-defined fetal structures of interest (fSOI); (3) a biometry measurement algorithm to measure the fSOIs in the images; and (4) a visual evaluation module to allow clinicians to validate the outcomes. RESULTS: We demonstrated our framework using the thalamus as the fSOI from a hospital repository of more than 80,000 patients, consisting of 3,816,967 antenatal US files (DICOM objects). Our framework classified 1,869,105 B-mode images, from which 38,786 thalamus images were identified. We selected a random subset of 1290 US files, of which 558 were B-mode (19 containing the thalamus and the rest being other US data), and evaluated our framework's performance. With this evaluation set, B-mode image classification resulted in accuracy, precision, and recall (APR) of 98.67%, 99.75% and 98.57%, respectively. For fSOI identification, APR was 93.12%, 97.76% and 80.78%, respectively. CONCLUSION: We introduced a completely automated approach designed to analyze a large-scale data repository to enable retrospective clinical research.
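A hedged skeleton of the four-module pipeline structure described above; every function body is a placeholder and the names are illustrative only.

```python
# Hedged sketch of the four-module pipeline structure; all module bodies are placeholders.
from dataclasses import dataclass

@dataclass
class Measurement:
    file_id: str
    value_mm: float

def is_b_mode(image) -> bool:
    return True                      # module 1: B-mode image classifier (placeholder)

def contains_structure(image, structure: str) -> bool:
    return False                     # module 2: fetal structure-of-interest identifier (placeholder)

def measure_structure(image) -> float:
    return 0.0                       # module 3: biometry measurement (placeholder)

def process_repository(files, structure: str = "thalamus"):
    results = []
    for file_id, image in files:
        if not is_b_mode(image):
            continue
        if not contains_structure(image, structure):
            continue
        results.append(Measurement(file_id, measure_structure(image)))
    return results                   # module 4: results passed on for visual validation

print(process_repository([("dicom_001", None)]))
```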

12.
Article in English | MEDLINE | ID: mdl-31217099

ABSTRACT

The analysis of multi-modality positron emission tomography and computed tomography (PET-CT) images for computer aided diagnosis applications (e.g., detection and segmentation) requires combining the sensitivity of PET to detect abnormal regions with anatomical localization from CT. Current methods for PET-CT image analysis either process the modalities separately or fuse information from each modality based on knowledge about the image analysis task. These methods generally do not consider the spatially varying visual characteristics that encode different information across the different modalities, which have different priorities at different locations. For example, a high abnormal PET uptake in the lungs is more meaningful for tumor detection than physiological PET uptake in the heart. Our aim is to improve fusion of the complementary information in multi-modality PET-CT with a new supervised convolutional neural network (CNN) that learns to fuse complementary information for multi-modality medical image analysis. Our CNN first encodes modality-specific features and then uses them to derive a spatially varying fusion map that quantifies the relative importance of each modality's features across different spatial locations. These fusion maps are then multiplied with the modality-specific feature maps to obtain a representation of the complementary multi-modality information at different locations, which can then be used for image analysis. We evaluated the ability of our CNN to detect and segment multiple regions (lungs, mediastinum, tumors) with different fusion requirements using a dataset of PET-CT images of lung cancer. We compared our method to baseline techniques for multi-modality image fusion (fused inputs (FS), multi-branch (MB) techniques, and multichannel (MC) techniques) and segmentation. Our findings show that our CNN had a significantly higher foreground detection accuracy (99.29%, p < 0.05) than the fusion baselines (FS: 99.00%, MB: 99.08%, MC: 98.92%) and a significantly higher Dice score (63.85%) than recent PET-CT tumor segmentation methods.
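As a simplified, hedged illustration of a spatially varying fusion map: each modality is encoded separately, a 1x1 convolution predicts a per-pixel weight for each modality, and the softmax-weighted sum gives the fused representation. Channel counts are assumptions, not the published configuration.

```python
# Hedged sketch: spatially varying fusion of PET and CT feature maps via learned
# per-pixel modality weights. Simplified illustration only.
import torch
import torch.nn as nn

class SpatiallyVaryingFusion(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.pet_enc = nn.Sequential(nn.Conv2d(1, channels, 3, padding=1), nn.ReLU())
        self.ct_enc = nn.Sequential(nn.Conv2d(1, channels, 3, padding=1), nn.ReLU())
        self.fusion_map = nn.Conv2d(2 * channels, 2, 1)    # one weight map per modality

    def forward(self, pet, ct):
        p, c = self.pet_enc(pet), self.ct_enc(ct)
        weights = torch.softmax(self.fusion_map(torch.cat([p, c], dim=1)), dim=1)
        # weights[:, :1] and weights[:, 1:] are per-pixel importances of PET and CT
        return weights[:, :1] * p + weights[:, 1:] * c

fused = SpatiallyVaryingFusion()(torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64))
print(fused.shape)   # (1, 16, 64, 64)
```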

13.
Med Image Anal ; 56: 140-151, 2019 08.
Article in English | MEDLINE | ID: mdl-31229759

ABSTRACT

The availability of large-scale annotated image datasets and recent advances in supervised deep learning methods enable the end-to-end derivation of representative image features that can impact a variety of image analysis problems. Such supervised approaches, however, are difficult to implement in the medical domain where large volumes of labelled data are difficult to obtain due to the complexity of manual annotation and inter- and intra-observer variability in label assignment. We propose a new convolutional sparse kernel network (CSKN), which is a hierarchical unsupervised feature learning framework that addresses the challenge of learning representative visual features in medical image analysis domains where there is a lack of annotated training data. Our framework has three contributions: (i) we extend kernel learning to identify and represent invariant features across image sub-patches in an unsupervised manner. (ii) We initialise our kernel learning with a layer-wise pre-training scheme that leverages the sparsity inherent in medical images to extract initial discriminative features. (iii) We adapt a multi-scale spatial pyramid pooling (SPP) framework to capture subtle geometric differences between learned visual features. We evaluated our framework in medical image retrieval and classification on three public datasets. Our results show that our CSKN had better accuracy when compared to other conventional unsupervised methods and comparable accuracy to methods that used state-of-the-art supervised convolutional neural networks (CNNs). Our findings indicate that our unsupervised CSKN provides an opportunity to leverage unannotated big data in medical imaging repositories.
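As a small illustration of the multi-scale spatial pyramid pooling component mentioned in contribution (iii), a standard technique sketched here with assumed grid levels rather than the paper's configuration:

```python
# Hedged sketch: spatial pyramid pooling (SPP) pools feature maps at several grid
# sizes and concatenates them into a fixed-length vector, regardless of input size.
import torch
import torch.nn.functional as F

def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    # feature_map: (batch, channels, H, W)
    pooled = []
    for size in levels:
        p = F.adaptive_max_pool2d(feature_map, output_size=size)
        pooled.append(p.flatten(start_dim=1))     # (batch, channels * size * size)
    return torch.cat(pooled, dim=1)

features = torch.randn(2, 32, 37, 53)             # arbitrary spatial size
vector = spatial_pyramid_pool(features)
print(vector.shape)                               # (2, 32 * (1 + 4 + 16)) = (2, 672)
```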


Subjects
Diagnostic Imaging, Computer-Assisted Image Processing/methods, Supervised Machine Learning, Unsupervised Machine Learning, Humans
14.
Ultrasound Med Biol ; 45(5): 1259-1273, 2019 05.
Article in English | MEDLINE | ID: mdl-30826153

ABSTRACT

Machine learning for ultrasound image analysis and interpretation can be helpful in automated image classification in large-scale retrospective analyses to objectively derive new indicators of abnormal fetal development that are embedded in ultrasound images. Current approaches to automatic classification are limited to the use of either image patches (cropped images) or the global (whole) image. As many fetal organs have similar visual features, cropped images can misclassify certain structures such as the kidneys and abdomen. Also, the whole image does not encode sufficient local information about structures to identify different structures in different locations. Here we propose a method to automatically classify 14 different fetal structures in 2-D fetal ultrasound images by fusing information from both cropped regions of fetal structures and the whole image. Our method trains two feature extractors by fine-tuning pre-trained convolutional neural networks with the whole ultrasound fetal images and the discriminant regions of the fetal structures found in the whole image. The novelty of our method is in integrating the classification decisions made from the global and local features without relying on priors. In addition, our method can use the classification outcome to localize the fetal structures in the image. Our experiments on a data set of 4074 2-D ultrasound images (training: 3109, test: 965) achieved a mean accuracy of 97.05%, mean precision of 76.47% and mean recall of 75.41%. The Cohen κ of 0.72 revealed the highest agreement between the ground truth and the proposed method. The superiority of the proposed method over the other non-fusion-based methods is statistically significant (p < 0.05). We found that our method is capable of predicting images without ultrasound scanner overlays with a mean accuracy of 92%. The proposed method can be leveraged to retrospectively classify any ultrasound images in clinical research.
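A hedged sketch of decision-level fusion of a whole-image classifier and a cropped-region classifier by averaging their softmax outputs; the backbone and the 14-class setup are assumptions, and the paper's prior-free integration scheme may differ.

```python
# Hedged sketch: fuse global (whole-image) and local (cropped-region) classifier
# decisions by averaging softmax outputs. Backbone and inputs are placeholders.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 14
global_net = models.resnet18(weights=None)
local_net = models.resnet18(weights=None)
global_net.fc = nn.Linear(global_net.fc.in_features, num_classes)
local_net.fc = nn.Linear(local_net.fc.in_features, num_classes)

whole_image = torch.randn(1, 3, 224, 224)    # full ultrasound frame (placeholder)
cropped_roi = torch.randn(1, 3, 224, 224)    # discriminant region crop (placeholder)

with torch.no_grad():
    p_global = torch.softmax(global_net(whole_image), dim=1)
    p_local = torch.softmax(local_net(cropped_roi), dim=1)
    fused = (p_global + p_local) / 2          # simple decision-level fusion
    prediction = fused.argmax(dim=1)
print(prediction)
```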


Subjects
Fetal Growth Retardation/diagnostic imaging, Computer-Assisted Image Interpretation/methods, Neural Networks (Computer), Prenatal Ultrasonography/methods, Female, Humans, Machine Learning, Pregnancy, Retrospective Studies
15.
Int J Comput Assist Radiol Surg ; 14(5): 733-744, 2019 May.
Article in English | MEDLINE | ID: mdl-30661169

ABSTRACT

PURPOSE: Our aim was to develop an interactive 3D direct volume rendering (DVR) visualization solution to interpret and analyze complex, serial multi-modality imaging datasets from positron emission tomography-computed tomography (PET-CT). METHODS: Our approach uses: (i) a serial transfer function (TF) optimization to automatically depict particular regions of interest (ROIs) over serial datasets with consistent anatomical structures; (ii) integration of a serial segmentation algorithm to interactively identify and track ROIs on PET; and (iii) parallel graphics processing unit (GPU) implementation for interactive visualization. RESULTS: Our DVR visualization identifies changes in ROIs across serial scans in an automated fashion, with parallel GPU computation enabling interactive visualization. CONCLUSIONS: Our approach provides a rapid 3D visualization of relevant ROIs over multiple scans, and we suggest that it can be used as an adjunct to conventional 2D viewing software from scanner vendors.
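As a minimal, hedged illustration of what a 1D transfer function does in direct volume rendering (mapping voxel intensities to colour and opacity), using arbitrary ramps rather than the paper's optimised serial transfer functions:

```python
# Hedged sketch: a 1D transfer function maps normalised voxel intensities to RGBA
# values via lookup tables, ready for compositing. Ramp values are illustrative only.
import numpy as np

volume = np.random.rand(64, 64, 64)                    # normalised intensities in [0, 1]

bins = 256
intensity_axis = np.linspace(0.0, 1.0, bins)
opacity_lut = np.clip((intensity_axis - 0.3) / 0.4, 0.0, 1.0)   # ramp: hide low intensities
color_lut = np.stack([intensity_axis,                   # simple red-dominant colour ramp
                      0.2 * intensity_axis,
                      1.0 - intensity_axis], axis=1)     # (bins, 3) RGB

indices = np.clip((volume * (bins - 1)).astype(int), 0, bins - 1)
rgba = np.concatenate([color_lut[indices], opacity_lut[indices][..., None]], axis=-1)
print(rgba.shape)   # (64, 64, 64, 4): per-voxel colour and opacity
```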


Subjects
Algorithms, Three-Dimensional Imaging/methods, Lymphoma/diagnosis, Positron Emission Tomography Computed Tomography/methods, Adult, Female, Humans, Male, Middle Aged, Reproducibility of Results, Software, Young Adult
16.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 6513-6516, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31947333

ABSTRACT

Missing data is a frequent occurrence in medical and health datasets. The analysis of datasets with missing data can lead to loss in statistical power or biased results. We address this issue with a novel deep learning technique to impute missing values in health data. Our method builds upon an autoencoder to derive a deep learning architecture that can learn the hidden representations of data even when the data are perturbed by missing values (noise). Our model is constructed with an overcomplete representation and trained with denoising regularization. This allows the latent/hidden layers of our model to effectively extract the relationships between different variables; these relationships are then used to reconstruct missing values. Our contributions include a new loss function designed to avoid local optima, and this helps the model to learn the real distribution of variables in the dataset. We evaluated our method in comparison with other well-established imputation strategies (mean and median imputation, SVD, KNN, matrix factorization and soft impute) on 48,350 Linked Birth/Infant Death Cohort Data records. Our experiments demonstrate that our method achieved lower imputation mean squared error (MSE = 0.00988) compared with other imputation methods (with MSE ranging from 0.02 to 0.08). When assessing imputation quality using the imputed data for prediction tasks, our experiments show that the data imputed by our method yielded better results (F1 = 70.37%) compared with other imputation methods (ranging from 66 to 69%).
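A hedged sketch of an overcomplete denoising autoencoder used for imputation: missing entries are simulated by masking during training, the network reconstructs complete rows, and at inference its outputs fill only the missing positions. Dimensions, corruption rate and loss are assumptions (the paper's custom loss is not reproduced).

```python
# Hedged sketch: overcomplete denoising autoencoder for missing-value imputation.
import torch
import torch.nn as nn

n_features = 20
model = nn.Sequential(                       # overcomplete: hidden layers wider than input
    nn.Linear(n_features, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, n_features))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

complete_rows = torch.randn(256, n_features)                   # training data without missingness
for step in range(200):
    mask = (torch.rand_like(complete_rows) > 0.2).float()      # randomly drop ~20% of entries
    corrupted = complete_rows * mask                           # denoising-style corruption
    loss = ((model(corrupted) - complete_rows) ** 2).mean()    # reconstruct uncorrupted values
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Imputation: zero-fill missing entries, run the model, copy predictions into the gaps only.
row = torch.randn(1, n_features)
missing = torch.zeros(1, n_features, dtype=torch.bool)
missing[0, 3] = True
filled = torch.where(missing, model(row * (~missing).float()), row)
```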


Subjects
Deep Learning, Research Design
17.
IEEE Trans Med Imaging ; 38(6): 1477-1487, 2019 06.
Article in English | MEDLINE | ID: mdl-30530316

ABSTRACT

Automatic event detection in cell videos is essential for monitoring cell populations in biomedicine. Deep learning methods have advantages over traditional approaches for cell event detection due to their ability to capture more discriminative features of cellular processes. Supervised deep learning methods, however, are inherently limited due to the scarcity of annotated data. Unsupervised deep learning methods have shown promise in general (non-cell) videos because they can learn the visual appearance and motion of regularly occurring events. Cell videos, however, can have rapid, irregular changes in cell appearance and motion, such as during cell division and death, which are often the events of most interest. We propose a novel unsupervised two-path input neural network architecture to capture these irregular events with three key elements: 1) a visual encoding path to capture regular spatiotemporal patterns of observed objects with convolutional long short-term memory units; 2) an event detection path to extract information related to irregular events with max-pooling layers; and 3) integration of the hidden states of the two paths to provide a comprehensive representation of the video that is used to simultaneously locate and classify cell events. We evaluated our network in detecting cell division in densely packed stem cells in phase-contrast microscopy videos. Our unsupervised method achieved higher or comparable accuracy to standard and state-of-the-art supervised methods.
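A heavily simplified, hedged sketch of the two-path idea: one path summarises per-frame CNN features with an LSTM (regular appearance and motion), the other max-pools frame features over time (abrupt changes), and the two are concatenated for event classification. It stands in for, but is not, the published ConvLSTM-based architecture.

```python
# Hedged, simplified sketch of a two-path video classifier for cell events.
import torch
import torch.nn as nn

class TwoPathEventNet(nn.Module):
    def __init__(self, num_events=2, feat_dim=32):
        super().__init__()
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(8 * 16, feat_dim))
        self.temporal_path = nn.LSTM(feat_dim, feat_dim, batch_first=True)
        self.classifier = nn.Linear(2 * feat_dim, num_events)

    def forward(self, video):                          # video: (B, T, 1, H, W)
        b, t = video.shape[:2]
        frames = self.frame_encoder(video.flatten(0, 1)).view(b, t, -1)
        _, (hidden, _) = self.temporal_path(frames)    # visual-encoding path
        pooled = frames.max(dim=1).values              # event-detection (pooling) path
        return self.classifier(torch.cat([hidden[-1], pooled], dim=1))

logits = TwoPathEventNet()(torch.randn(2, 8, 1, 32, 32))
print(logits.shape)   # (2, 2)
```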


Subjects
Cytological Techniques/methods, Computer-Assisted Image Processing/methods, Molecular Imaging/methods, Neural Networks (Computer), Automated Pattern Recognition/methods, Animals, Aorta/cytology, Cattle, Cultured Cells, Microscopy, Unsupervised Machine Learning, Video Recording
18.
Annu Int Conf IEEE Eng Med Biol Soc ; 2018: 644-647, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30440479

ABSTRACT

Tumor histopathology is a crucial step in cancer diagnosis which involves visual inspection of imaging data to detect the presence of tumor cells among healthy tissues. This manual process can be time-consuming, error-prone, and influenced by the expertise of the pathologist. Recent deep learning methods for image classification and detection using convolutional neural networks (CNNs) have demonstrated marked improvements in the accuracy of a variety of medical imaging analysis tasks. However, most well-established deep learning methods require large annotated training datasets that are specific to the particular problem domain; such datasets are difficult to acquire for histopathology data where visual characteristics differ between different tissue types, in addition to the need for precise annotations. In this study, we overcome the lack of an annotated training dataset for histopathology images of a particular domain by adapting annotated histopathology images from different domains (tissue types). The data from other tissue types are used to pre-train CNNs into a shared histopathology domain (e.g., stains, cellular structures) such that they can be further tuned/optimized for a specific tissue type. We evaluated our classification method on publicly available datasets of histopathology images; the accuracy and area under the receiver operating characteristic curve (AUC) of our method were higher than those of CNNs trained from scratch on limited data (accuracy: 84.3% vs. 78.3%; AUC: 0.918 vs. 0.867), suggesting that domain adaptation can be a valuable approach to histopathological image classification.
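A hedged sketch of the two-stage idea: pre-train a CNN on annotated histopathology from other tissue types, then fine-tune it on the limited target-tissue data. The backbone, loaders and hyperparameters are assumptions.

```python
# Hedged sketch: two-stage domain adaptation (pre-train on source tissues, fine-tune
# on the target tissue). Loaders are not defined here, so the calls are commented out.
import torch
import torch.nn as nn
from torchvision import models

def train(model, loader, epochs, lr):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            criterion(model(images), labels).backward()
            optimizer.step()

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)   # tumor vs. healthy

# Stage 1: pre-train on annotated histopathology from other tissue types
# (source_loader), learning shared characteristics such as stains and cell structure.
# train(model, source_loader, epochs=10, lr=1e-3)

# Stage 2: fine-tune on the small target-tissue dataset (target_loader),
# typically with a lower learning rate.
# train(model, target_loader, epochs=5, lr=1e-4)
```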


Subjects
Deep Learning, Computer-Assisted Image Interpretation/methods, Neoplasms/classification, Neural Networks (Computer), Area Under Curve, Histological Techniques, Humans, ROC Curve
19.
Annu Int Conf IEEE Eng Med Biol Soc ; 2018: 2591-2594, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30440938

ABSTRACT

Dermoscopic imaging is an established technique to detect, track, and diagnose malignant melanoma, and one of the ways to improve this technique is via computer-aided image segmentation. Image segmentation is an important step towards building computerized detection and classification systems by delineating the area of interest, in our case, the skin lesion, from the background. However, current segmentation techniques are hard pressed to account for color artifacts within dermoscopic images that are often incorrectly detected as part of the lesion. Often there are few annotated examples of these artifacts, which limits training segmentation methods like the fully convolutional network (FCN) due to the skewed dataset. We propose to improve FCN training by augmenting the dataset with synthetic images created in a controlled manner using a generative adversarial network (GAN). Our novelty lies in the use of a color label (CL) to specify the different characteristics (approximate size, location, and shape) of the different regions (skin, lesion, artifacts) in the synthetic images. Our GAN is trained to perform style transfer of real melanoma image characteristics (e.g. texture) onto these color labels, allowing us to generate specific types of images containing artifacts. Our experimental results demonstrate that the synthetic images generated by our technique have a lower mean average error when compared to synthetic images generated using traditional binary labels. As a consequence, we demonstrated improvements in melanoma image segmentation when using synthetic images generated by our technique.
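A hedged sketch of constructing a "color label" image in which distinct colours mark skin, lesion and artifact regions with controllable size, shape and location; a trained label-to-image GAN (not shown) would translate such labels into synthetic dermoscopic images. The colours and shapes are arbitrary choices, not the paper's specification.

```python
# Hedged sketch: build a color-label image with separate colours for skin, lesion,
# and an artifact region. Colours, shapes and sizes are illustrative only.
import numpy as np

def make_color_label(size=256, lesion_radius=60, artifact_radius=15):
    label = np.zeros((size, size, 3), dtype=np.uint8)
    label[...] = (0, 255, 0)                               # green: healthy skin background
    yy, xx = np.mgrid[:size, :size]
    lesion = (yy - size // 2) ** 2 + (xx - size // 2) ** 2 < lesion_radius ** 2
    label[lesion] = (255, 0, 0)                            # red: lesion region
    artifact = (yy - 40) ** 2 + (xx - 200) ** 2 < artifact_radius ** 2
    label[artifact] = (0, 0, 255)                          # blue: colour artifact (e.g. ink marker)
    return label

color_label = make_color_label()
print(color_label.shape)   # (256, 256, 3), to be fed to a label-to-image generator
```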


Subjects
Melanoma, Skin Neoplasms, Algorithms, Color, Dermoscopy, Humans
20.
IEEE Trans Biomed Eng ; 64(9): 2065-2074, 2017 09.
Article in English | MEDLINE | ID: mdl-28600236

ABSTRACT

OBJECTIVE: Segmentation of skin lesions is an important step in the automated computer aided diagnosis of melanoma. However, existing segmentation methods have a tendency to over- or under-segment the lesions and perform poorly when the lesions have fuzzy boundaries, low contrast with the background, inhomogeneous textures, or contain artifacts. Furthermore, the performance of these methods is heavily reliant on the appropriate tuning of a large number of parameters as well as the use of effective preprocessing techniques, such as illumination correction and hair removal. METHODS: We propose to leverage fully convolutional networks (FCNs) to automatically segment the skin lesions. FCNs are a neural network architecture that achieves object detection by hierarchically combining low-level appearance information with high-level semantic information. We address the issue of FCNs producing coarse segmentation boundaries for challenging skin lesions (e.g., those with fuzzy boundaries and/or low difference in the textures between the foreground and the background) through a multistage segmentation approach in which multiple FCNs learn complementary visual characteristics of different skin lesions; early-stage FCNs learn coarse appearance and localization information while late-stage FCNs learn the subtle characteristics of the lesion boundaries. We also introduce a new parallel integration method to combine the complementary information derived from individual segmentation stages to achieve a final segmentation result that has accurate localization and well-defined lesion boundaries, even for the most challenging skin lesions. RESULTS: We achieved an average Dice coefficient of 91.18% on the ISBI 2016 Skin Lesion Challenge dataset and 90.66% on the PH2 dataset. CONCLUSION AND SIGNIFICANCE: Our extensive experimental results on two well-established public benchmark datasets demonstrate that our method is more effective than other state-of-the-art methods for skin lesion segmentation.
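A hedged, simplified sketch of the multistage idea: an early-stage network produces a coarse mask, a late-stage network refines it from the image plus the coarse prediction, and the stage outputs are integrated (here by averaging). This illustrates the concept only, not the published FCN design or its parallel integration method.

```python
# Hedged sketch: two-stage segmentation with integration of the stage outputs.
import torch
import torch.nn as nn

def small_fcn(in_channels):
    return nn.Sequential(
        nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 1))                      # per-pixel lesion logit

class MultiStageSegmenter(nn.Module):
    def __init__(self):
        super().__init__()
        self.stage1 = small_fcn(3)                 # coarse localisation from the RGB image
        self.stage2 = small_fcn(4)                 # refinement from image + coarse mask

    def forward(self, image):
        coarse = torch.sigmoid(self.stage1(image))
        refined = torch.sigmoid(self.stage2(torch.cat([image, coarse], dim=1)))
        return (coarse + refined) / 2              # integrate the stage outputs

mask = MultiStageSegmenter()(torch.randn(1, 3, 128, 128))
print(mask.shape)   # (1, 1, 128, 128)
```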


Subjects
Algorithms, Dermoscopy/methods, Computer-Assisted Image Interpretation/methods, Melanoma/pathology, Automated Pattern Recognition/methods, Skin Neoplasms/pathology, Fuzzy Logic, Humans, Image Enhancement/methods, Melanoma/diagnostic imaging, Neural Networks (Computer), Reproducibility of Results, Sensitivity and Specificity, Skin Neoplasms/diagnostic imaging