Results 1 - 10 of 10
1.
Med Image Anal ; 84: 102680, 2023 02.
Article in English | MEDLINE | ID: mdl-36481607

ABSTRACT

In this work, we report the set-up and results of the Liver Tumor Segmentation Benchmark (LiTS), which was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2017 and the International Conferences on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2017 and 2018. The image dataset is diverse and contains primary and secondary tumors with varied sizes, appearances, and lesion-to-background contrast levels (hyper-/hypo-dense); it was created in collaboration with seven hospitals and research institutions. Seventy-five submitted liver and liver tumor segmentation algorithms were trained on a set of 131 computed tomography (CT) volumes and were tested on 70 unseen test images acquired from different patients. We found that no single algorithm performed best for both liver and liver tumors across the three events. The best liver segmentation algorithm achieved a Dice score of 0.963, whereas, for tumor segmentation, the best algorithms achieved Dice scores of 0.674 (ISBI 2017), 0.702 (MICCAI 2017), and 0.739 (MICCAI 2018). Retrospectively, we performed additional analysis on liver tumor detection and revealed that not all top-performing segmentation algorithms worked well for tumor detection. The best liver tumor detection method achieved a lesion-wise recall of 0.458 (ISBI 2017), 0.515 (MICCAI 2017), and 0.554 (MICCAI 2018), indicating the need for further research. LiTS remains an active benchmark and resource for research, e.g., contributing the liver-related segmentation tasks to http://medicaldecathlon.com/. In addition, both data and online evaluation are accessible via https://competitions.codalab.org/competitions/17094.
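The Dice score quoted in this abstract measures voxel-wise overlap between a predicted mask and the reference mask. A minimal sketch of the computation (illustrative only; the benchmark's official evaluation code is the one linked from the CodaLab page):

```python
import numpy as np

def dice_score(pred, target):
    """Dice coefficient between two binary masks: 2*|A∩B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom

# Example: two overlapping 4x4 masks
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True  # 4 foreground pixels
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True  # 6 foreground pixels
print(dice_score(a, b))  # 2*4 / (4+6) = 0.8
```

A score of 1.0 means the masks coincide exactly; the lesion-wise recall also reported above instead counts, per lesion, whether any sufficiently overlapping prediction exists.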


Subjects
Benchmarking , Liver Neoplasms , Humans , Retrospective Studies , Liver Neoplasms/diagnostic imaging , Liver Neoplasms/pathology , Liver/diagnostic imaging , Liver/pathology , Algorithms , Image Processing, Computer-Assisted/methods
2.
Med Image Anal ; 44: 1-13, 2018 02.
Article in English | MEDLINE | ID: mdl-29169029

ABSTRACT

In this paper, we introduce a simple yet powerful pipeline for medical image segmentation that combines Fully Convolutional Networks (FCNs) with Fully Convolutional Residual Networks (FC-ResNets). We propose and examine a design that takes particular advantage of recent advances in the understanding of both Convolutional Neural Networks and ResNets. Our approach focuses on the importance of a trainable pre-processing step when using FC-ResNets, and we show that a low-capacity FCN model can serve as a pre-processor that normalizes medical input data. In our image segmentation pipeline, we use FCNs to obtain normalized images, which are then iteratively refined by an FC-ResNet to generate a segmentation prediction. As in other fully convolutional approaches, our pipeline can be used off-the-shelf on different image modalities. Using this pipeline, we achieve state-of-the-art performance on the challenging Electron Microscopy benchmark compared to other 2D methods, and we improve segmentation results on CT images of liver lesions compared with standard FCN methods. Moreover, when applying our 2D pipeline to a challenging 3D MRI prostate segmentation challenge, we reach results that are competitive even with 3D methods. These results illustrate the strong potential and versatility of the pipeline, which achieves accurate segmentations on a variety of image modalities and anatomical regions.


Subjects
Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Algorithms , Humans , Imaging, Three-Dimensional , Liver Neoplasms/diagnostic imaging , Lumbar Vertebrae/diagnostic imaging , Magnetic Resonance Imaging , Male , Prostatic Diseases/diagnostic imaging , Tomography, X-Ray Computed
3.
Radiographics ; 37(7): 2113-2131, 2017.
Article in English | MEDLINE | ID: mdl-29131760

ABSTRACT

Deep learning is a class of machine learning methods that are gaining success and attracting interest in many domains, including computer vision, speech recognition, natural language processing, and playing games. Deep learning methods produce a mapping from raw inputs to desired outputs (eg, image classes). Unlike traditional machine learning methods, which require hand-engineered feature extraction from inputs, deep learning methods learn these features directly from data. With the advent of large datasets and increased computing power, these methods can produce models with exceptional performance. These models are multilayer artificial neural networks, loosely inspired by biologic neural systems. Weighted connections between nodes (neurons) in the network are iteratively adjusted based on example pairs of inputs and target outputs by back-propagating a corrective error signal through the network. For computer vision tasks, convolutional neural networks (CNNs) have proven to be effective. Recently, several clinical applications of CNNs have been proposed and studied in radiology for classification, detection, and segmentation tasks. This article reviews the key concepts of deep learning for clinical radiologists, discusses technical requirements, describes emerging applications in clinical radiology, and outlines limitations and future directions in this field. Radiologists should become familiar with the principles and potential applications of deep learning in medical imaging. ©RSNA, 2017.
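The weight-update mechanism the review describes can be illustrated on the smallest possible network: a single weighted connection, adjusted iteratively from example input/target pairs by following the gradient of a corrective error signal. This toy sketch is not from the article; it only shows the idea of iterative error-driven weight adjustment:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal()                      # one weighted "connection", randomly initialized
inputs = np.array([1.0, 2.0, 3.0])
targets = 2.0 * inputs                # true mapping to be learned: y = 2x
lr = 0.05                             # learning rate

for _ in range(200):
    preds = w * inputs                # forward pass
    grad = np.mean(2 * (preds - targets) * inputs)  # d(MSE)/dw, the "error signal"
    w -= lr * grad                    # corrective weight update

print(round(w, 3))  # converges near 2.0
```

Real CNNs apply the same update rule to millions of weights at once, with the error signal propagated backward through many layers.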


Subjects
Image Processing, Computer-Assisted/methods , Learning , Neural Networks, Computer , Radiology Information Systems , Radiology/education , Algorithms , Humans , Machine Learning
4.
J Healthc Eng ; 2017: 4037190, 2017.
Article in English | MEDLINE | ID: mdl-29065595

ABSTRACT

Colorectal cancer (CRC) is the third leading cause of cancer death worldwide. Currently, the standard approach to reducing CRC-related mortality is regular screening in search of polyps, and colonoscopy is the screening tool of choice. The main limitations of this screening procedure are the polyp miss rate and the inability to perform visual assessment of polyp malignancy. These drawbacks can be reduced by designing decision support systems (DSS) that help clinicians in the different stages of the procedure by providing endoluminal scene segmentation. Thus, in this paper, we introduce an extended benchmark of colonoscopy image segmentation, with the aim of establishing a new strong benchmark for colonoscopy image analysis research. The proposed dataset consists of 4 relevant classes for inspecting the endoluminal scene, targeting different clinical needs. Together with the dataset, and taking advantage of advances in the semantic segmentation literature, we provide new baselines by training standard fully convolutional networks (FCNs). We perform a comparative study showing that FCNs significantly outperform prior results in endoluminal scene segmentation, without any further postprocessing, especially with respect to polyp segmentation and localization.
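A common way to score multi-class FCN segmentation baselines like these is mean intersection-over-union across classes. The sketch below is illustrative and makes no claim about this benchmark's exact evaluation protocol:

```python
import numpy as np

def mean_iou(pred, target, n_classes):
    """Mean intersection-over-union across classes for label maps."""
    ious = []
    for c in range(n_classes):
        p, t = (pred == c), (target == c)
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class absent from both maps: skip it
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))

# Tiny example: 2x3 label maps with three classes
pred   = np.array([[0, 0, 1], [2, 1, 1]])
target = np.array([[0, 0, 1], [2, 2, 1]])
print(mean_iou(pred, target, 3))  # (1 + 2/3 + 1/2) / 3 ≈ 0.722
```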


Subjects
Colonoscopy , Colorectal Neoplasms/diagnostic imaging , Image Processing, Computer-Assisted , Algorithms , Benchmarking , Decision Support Techniques , Humans , Neural Networks, Computer
5.
Comput Biol Med ; 79: 163-172, 2016 12 01.
Article in English | MEDLINE | ID: mdl-27810622

ABSTRACT

The interpretation and analysis of wireless capsule endoscopy (WCE) recordings is a complex task that requires sophisticated computer-aided decision (CAD) systems to help physicians with video screening and, ultimately, with diagnosis. Most CAD systems used in capsule endoscopy share a common system design but use very different image and video representations. As a result, each time a new clinical application of WCE appears, a new CAD system has to be designed from scratch, which makes the design of new CAD systems very time consuming. Therefore, in this paper we introduce a system for small intestine motility characterization, based on Deep Convolutional Neural Networks, which circumvents the laborious step of designing specific features for individual motility events. Experimental results show the superiority of the learned features over alternative classifiers constructed using state-of-the-art handcrafted features. In particular, the system reaches a mean classification accuracy of 96% for six intestinal motility events, outperforming the other classifiers by a large margin (a 14% relative performance increase).


Subjects
Capsule Endoscopy/methods , Image Processing, Computer-Assisted/methods , Machine Learning , Algorithms , Gastrointestinal Motility/physiology , Humans
6.
Am J Physiol Gastrointest Liver Physiol ; 309(6): G413-9, 2015 Sep 15.
Article in English | MEDLINE | ID: mdl-26251472

ABSTRACT

We have previously developed an original method to evaluate small bowel motor function based on computer vision analysis of endoluminal images obtained by capsule endoscopy. Our aim was to demonstrate intestinal motor abnormalities in patients with functional bowel disorders by endoluminal vision analysis. Patients with functional bowel disorders (n = 205) and healthy subjects (n = 136) ingested the endoscopic capsule (Pillcam-SB2, Given-Imaging) after an overnight fast, and 45 min after gastric exit of the capsule, a liquid meal (300 ml, 1 kcal/ml) was administered. Endoluminal image analysis was performed by computer vision and machine learning techniques to define the normal range and to identify clusters of abnormal function. After training the algorithm, we used 196 patients and 48 healthy subjects, completely naive, as the test set. In the test set, 51 patients (26%) were detected outside the normal range (P < 0.001 vs. 3 healthy subjects) and clustered into hypo- and hyperdynamic subgroups compared with healthy subjects. Patients with hypodynamic behavior (n = 38) exhibited fewer luminal closure sequences (41 ± 2% of the recording time vs. 61 ± 2%; P < 0.001) and more static sequences (38 ± 3 vs. 20 ± 2%; P < 0.001); in contrast, patients with hyperdynamic behavior (n = 13) had an increased proportion of luminal closure sequences (73 ± 4 vs. 61 ± 2%; P = 0.029) and more high-motion sequences (3 ± 1 vs. 0.5 ± 0.1%; P < 0.001). Applying an original methodology, we have developed a novel classification of functional gut disorders based on objective, physiological criteria of small bowel function.


Subjects
Gastrointestinal Diseases/classification , Gastrointestinal Diseases/pathology , Intestine, Small/pathology , Adolescent , Adult , Aged , Algorithms , Capsule Endoscopy , Eating , Female , Gastrointestinal Diseases/physiopathology , Gastrointestinal Motility , Humans , Image Processing, Computer-Assisted , Intestinal Mucosa/pathology , Intestinal Mucosa/physiopathology , Intestine, Small/physiopathology , Machine Learning , Male , Middle Aged , Reference Values , Stomach/anatomy & histology , Young Adult
7.
Comput Biol Med ; 65: 320-30, 2015 Oct 01.
Article in English | MEDLINE | ID: mdl-25912458

ABSTRACT

Wireless Capsule Endoscopy (WCE) provides a new perspective of the small intestine, since it enables, for the first time, visualization of the entire organ. However, the long visual video analysis time, due to the large amount of data in a single WCE study, has been an important factor impeding the widespread use of the capsule as a tool for detecting intestinal abnormalities. The introduction of WCE therefore opened a new field for the application of computational methods and, in particular, of computer vision. In this paper, we follow the computational approach and offer a new perspective on the small intestine motility problem. Our approach consists of three steps: first, we review a tool for visualizing the motility information contained in WCE video; second, we propose algorithms for characterizing two motility building blocks: a contraction detector and lumen size estimation; finally, we introduce an approach to detect segments of stable motility behavior. Our claims are supported by an evaluation performed on 10 WCE videos, suggesting that our methods ably capture the intestinal motility information.


Subjects
Algorithms , Capsule Endoscopy/methods , Gastrointestinal Motility , Image Processing, Computer-Assisted/methods , Female , Humans , Male
8.
IEEE J Biomed Health Inform ; 18(6): 1831-8, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25375680

ABSTRACT

Intestinal contractions are one of the most important events to diagnose motility pathologies of the small intestine. When visualized by wireless capsule endoscopy (WCE), the sequence of frames that represents a contraction is characterized by a clear wrinkle structure in the central frames that corresponds to the folding of the intestinal wall. In this paper, we present a new method to robustly detect wrinkle frames in full WCE videos by using a new mid-level image descriptor that is based on a centrality measure proposed for graphs. We present an extended validation, carried out in a very large database, that shows that the proposed method achieves state-of-the-art performance for this task.


Subjects
Capsule Endoscopy/methods , Gastrointestinal Motility/physiology , Image Processing, Computer-Assisted/methods , Intestinal Diseases/physiopathology , Algorithms , Humans , Reproducibility of Results
9.
Comput Med Imaging Graph ; 37(1): 72-80, 2013 Jan.
Article in English | MEDLINE | ID: mdl-23098835

ABSTRACT

Wireless Capsule Endoscopy (WCE) technology allows visualization of the whole small intestine tract. Since the capsule moves freely, mainly by means of peristalsis, the data acquired during the study provide rich information about intestinal motility. However, because of (1) the huge number of frames, (2) the complex appearance of the intestinal scene, and (3) intestinal dynamics that hinder visualization of the physiological phenomena of the small intestine, the analysis of WCE data requires computer-aided systems to speed it up. In this paper, we propose an efficient algorithm for building a novel representation of the WCE video data that is optimal for motility analysis and inspection. The algorithm transforms the 3D video data into a 2D longitudinal view by choosing the part of each frame that is most informative from the intestinal motility point of view. This step maximizes the lumen visibility in its longitudinal extension. The task of finding "the best longitudinal view" is defined as a cost function optimization problem whose global minimum is obtained using Dynamic Programming. Validation on both synthetic data and WCE data shows that the adaptive longitudinal view is a good alternative to traditional motility analysis performed by video analysis. The proposed data representation provides a new, holistic insight into small intestine motility, making it easy to define and analyze motility events that are difficult to spot by analyzing WCE video. Moreover, visual inspection of small intestine motility is four times faster than video skimming of the WCE recording.
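The "best longitudinal view" optimization described in this abstract has the classic shape of a shortest-path dynamic program: one column is chosen per frame, with a per-choice cost plus a smoothness penalty on jumps between consecutive frames. The sketch below illustrates that structure only; the cost terms and the penalty are assumptions, not the paper's exact formulation:

```python
import numpy as np

def best_column_path(cost, smooth=1.0):
    """Pick one column per frame from a (frames x columns) cost matrix,
    penalizing jumps between consecutive frames, via dynamic programming.
    Returns the globally minimal path as a list of column indices."""
    cost = np.asarray(cost, dtype=float)
    n_frames, n_cols = cost.shape
    acc = cost.copy()                       # accumulated cost table
    back = np.zeros((n_frames, n_cols), dtype=int)
    cols = np.arange(n_cols)
    jump = smooth * np.abs(cols[:, None] - cols[None, :])  # transition penalty
    for t in range(1, n_frames):
        # total[j, i] = cost of arriving at column j from column i
        total = acc[t - 1][None, :] + jump
        back[t] = np.argmin(total, axis=1)
        acc[t] = cost[t] + total[cols, back[t]]
    # backtrack from the cheapest final column
    path = [int(np.argmin(acc[-1]))]
    for t in range(n_frames - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

cost = np.array([[0, 5, 5],
                 [5, 0, 5],
                 [5, 5, 0]])
print(best_column_path(cost, smooth=0.1))  # [0, 1, 2]
```

The diagonal of cheap cells is followed because the smoothness penalty is small relative to the cost differences; a large `smooth` would instead force a nearly constant column.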


Subjects
Algorithms , Capsule Endoscopy , Gastrointestinal Motility , Image Enhancement/methods , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional , Intestine, Small , Humans , Video Recording
10.
IEEE Trans Inf Technol Biomed ; 16(6): 1341-52, 2012 Nov.
Article in English | MEDLINE | ID: mdl-24218705

ABSTRACT

Wireless capsule endoscopy (WCE) is a device that allows direct visualization of the gastrointestinal tract with minimal discomfort for the patient, but at the price of a large amount of screening time. To reduce this time, several works have proposed automatically removing all frames showing intestinal content. These methods label frames as {intestinal content, clear} without discriminating between types of content (which have different physiological meanings) or the portion of the image covered. In addition, since the presence of intestinal content has been identified as an indicator of intestinal motility, its accurate quantification has potential clinical relevance. In this paper, we present a method for the robust detection and segmentation of intestinal content in WCE images, together with its further discrimination between turbid liquid and bubbles. Our proposal is a twofold system. First, frames presenting intestinal content are detected by a support vector machine classifier using color and textural information. Second, intestinal content frames are segmented into {turbid, bubbles, clear} regions. We show a detailed validation using a large dataset. Our system outperforms previous methods and, for the first time, discriminates turbid liquid from bubbles.
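The first stage described in this abstract feeds color and texture descriptors to a support vector machine. The sketch below shows the kind of simple color-histogram and gradient-texture features such a classifier might consume; the feature set is hypothetical, not the paper's exact descriptor:

```python
import numpy as np

def frame_features(rgb, bins=8):
    """Per-frame color + texture descriptor (illustrative).
    Returns 3*bins normalized color-histogram values plus
    3 texture statistics from the grayscale image."""
    rgb = np.asarray(rgb, dtype=float)
    # color: per-channel intensity histograms, normalized by pixel count
    hists = [np.histogram(rgb[..., c], bins=bins, range=(0, 255))[0]
             for c in range(3)]
    color = np.concatenate(hists) / rgb[..., 0].size
    # texture: mean gradient magnitudes and overall contrast
    gray = rgb.mean(axis=-1)
    gy, gx = np.gradient(gray)
    texture = np.array([np.abs(gx).mean(), np.abs(gy).mean(), gray.std()])
    return np.concatenate([color, texture])

# A flat gray frame has a near-zero texture response
flat = np.full((16, 16, 3), 128.0)
f = frame_features(flat)
print(f.shape)  # (27,) = 3*8 color bins + 3 texture values
```

Descriptors like this, computed per frame, would then be passed to an SVM (e.g., scikit-learn's `SVC`) trained on labeled {intestinal content, clear} examples.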


Subjects
Capsule Endoscopy/methods , Image Processing, Computer-Assisted/methods , Gastrointestinal Contents , Humans , Reproducibility of Results , Support Vector Machines