Results 1 - 19 of 19
1.
Biomed Opt Express ; 8(8): 3627-3642, 2017 Aug 01.
Article in English | MEDLINE | ID: mdl-28856040

ABSTRACT

Optical coherence tomography (OCT) is used for non-invasive diagnosis of diabetic macular edema by assessing the retinal layers. In this paper, we propose a new fully convolutional deep architecture, termed ReLayNet, for end-to-end segmentation of retinal layers and fluid masses in eye OCT scans. ReLayNet uses a contracting path of convolutional blocks (encoders) to learn a hierarchy of contextual features, followed by an expansive path of convolutional blocks (decoders) for semantic segmentation. ReLayNet is trained to optimize a joint loss function comprising weighted logistic regression and Dice overlap loss. The framework is validated on a publicly available benchmark dataset, with comparisons against five state-of-the-art segmentation methods, including two deep learning based approaches, to substantiate its effectiveness.
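The joint loss described above can be sketched as follows. This is a minimal NumPy sketch; the balance weight `lam`, the per-class weighting scheme, and the smoothing constant are illustrative assumptions, not values from the paper:

```python
import numpy as np

def joint_loss(probs, labels, class_weights, lam=0.5):
    """Weighted cross-entropy plus Dice overlap loss.

    probs: (N, C) softmax outputs; labels: (N,) integer class ids;
    class_weights: (C,) per-class weights. `lam` balances the two terms.
    """
    n, c = probs.shape
    onehot = np.eye(c)[labels]                      # (N, C) one-hot targets
    w = class_weights[labels]                       # per-sample weight
    # Weighted cross-entropy on the true-class probabilities
    ce = -np.mean(w * np.log(probs[np.arange(n), labels] + 1e-8))
    # Soft Dice overlap per class, averaged
    inter = np.sum(probs * onehot, axis=0)
    dice = 1.0 - np.mean(
        2 * inter / (np.sum(probs, axis=0) + np.sum(onehot, axis=0) + 1e-8))
    return lam * ce + (1 - lam) * dice
```

Better predictions should drive both terms, and hence the combined loss, toward zero.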

2.
Int J Comput Assist Radiol Surg ; 12(10): 1711-1725, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28391583

ABSTRACT

BACKGROUND: Brainshift is still a major issue in neuronavigation. Incorporating intra-operative ultrasound (iUS) with advanced registration algorithms within the surgical workflow is regarded as a promising approach for a better understanding and management of brainshift. This work is intended to (1) provide three-dimensional (3D) ultrasound reconstructions specifically for brain imaging in order to detect brainshift observed intra-operatively, (2) evaluate a novel iterative intra-operative ultrasound-based deformation correction framework, and (3) validate the performance of the proposed image-registration-based deformation estimation in a clinical environment. METHODS: Eight patients with brain tumors undergoing surgical resection are enrolled in this study. For each patient, a 3D freehand iUS system is employed in combination with an intra-operative navigation (iNav) system, and intra-operative ultrasound data are acquired at three timepoints during surgery. On this foundation, we present a novel resolution-preserving 3D ultrasound reconstruction, as well as a framework to detect brainshift through iterative registration of iUS images. To validate the system, the target registration error (TRE) is evaluated for each patient, and both rigid and elastic registration algorithms are analyzed. RESULTS: The mean TRE based on 3D-iUS improves significantly using the proposed brainshift compensation compared to neuronavigation (iNav) before (2.7 vs. 5.9 mm; [Formula: see text]) and after dural opening (4.2 vs. 6.2 mm, [Formula: see text]), but not after resection (6.7 vs. 7.5 mm; [Formula: see text]). iUS depicts a significant ([Formula: see text]) dynamic spatial brainshift throughout the three timepoints. Accuracy of registration can be improved through rigid and elastic registrations by 29.2 and 33.3%, respectively, after dural opening, and by 5.2 and 0.4%, after resection. 
CONCLUSION: 3D-iUS systems can improve the detection of brainshift and significantly increase the accuracy of the navigation in a real scenario. 3D-iUS can thus be regarded as a robust, reliable, and feasible technology to enhance neuronavigation.
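The target registration error (TRE) used for validation above is, in its standard form, the mean distance between corresponding landmarks after registration; a minimal sketch with the transform passed as a callable (the study's landmark protocol is not reproduced here):

```python
import numpy as np

def target_registration_error(landmarks_fixed, landmarks_moving, transform):
    """Mean Euclidean distance between corresponding landmarks after
    applying `transform`, a callable mapping an (N, 3) point array."""
    mapped = transform(landmarks_moving)
    return float(np.mean(np.linalg.norm(mapped - landmarks_fixed, axis=1)))
```

With the identity transform, a uniform 3 mm offset between landmark sets yields a TRE of exactly 3 mm; a transform that undoes the offset yields 0.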


Subjects
Algorithms, Brain Neoplasms/surgery, Brain/diagnostic imaging, Three-Dimensional Imaging/methods, Neuronavigation/methods, Computer-Assisted Surgery/methods, Ultrasonography/methods, Adult, Aged, Brain/surgery, Brain Neoplasms/diagnostic imaging, Female, Humans, Male, Middle Aged
3.
Artif Intell Med ; 72: 1-11, 2016 09.
Article in English | MEDLINE | ID: mdl-27664504

ABSTRACT

BACKGROUND: In clinical research, the primary interest is often the time until occurrence of an adverse event, i.e., survival analysis. Its application to electronic health records is challenging for two main reasons: (1) patient records comprise high-dimensional feature vectors, and (2) feature vectors mix categorical and real-valued features, which implies varying statistical properties among features. To learn from high-dimensional data, researchers can choose from a wide range of methods in the fields of feature selection and feature extraction. Whereas feature selection is well studied, little work has focused on utilizing feature extraction techniques for survival analysis. RESULTS: We investigate how well feature extraction methods can deal with features having varying statistical properties. In particular, we consider multiview spectral embedding algorithms, which have been developed specifically for these situations. We propose to use random survival forests to accurately determine local neighborhood relations from right-censored survival data. We evaluated 10 combinations of feature extraction methods and 6 survival models, with and without intrinsic feature selection, in the context of survival analysis on 3 clinical datasets. Our results demonstrate that for small sample sizes - less than 500 patients - models with built-in feature selection (Cox model with ℓ1 penalty, random survival forest, and gradient boosted models) outperform feature extraction methods by a median margin of 6.3% in concordance index (inter-quartile range: [-1.2%; 14.6%]). CONCLUSIONS: If the number of samples is insufficient, feature extraction methods are unable to reliably identify the underlying manifold, which makes them of limited use in these situations. For large sample sizes - in our experiments, 2500 samples or more - feature extraction methods perform as well as feature selection methods.
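The concordance index used to compare the models above can be computed for right-censored data with Harrell's pairwise definition; a straightforward sketch, with ties in predicted risk counted as 0.5:

```python
def concordance_index(times, events, risk_scores):
    """Harrell's concordance index for right-censored survival data.

    A pair (i, j) is comparable when subject i has an observed event and
    subject j's time exceeds i's event time; the pair is concordant when
    the earlier-event subject has the higher predicted risk.
    """
    num, den = 0.0, 0
    n = len(times)
    for i in range(n):
        if not events[i]:
            continue  # censored subjects cannot anchor a comparable pair
        for j in range(n):
            if times[j] > times[i]:
                den += 1
                if risk_scores[i] > risk_scores[j]:
                    num += 1
                elif risk_scores[i] == risk_scores[j]:
                    num += 0.5
    return num / den
```

Perfectly ordered risks yield 1.0, perfectly inverted risks 0.0, and random risks about 0.5.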


Subjects
Algorithms, Electronic Health Records, Survival Analysis, Decision Trees, Humans, Medical Informatics, Support Vector Machine
4.
Med Image Anal ; 34: 13-29, 2016 12.
Article in English | MEDLINE | ID: mdl-27338173

ABSTRACT

In this paper, we propose metric Hashing Forests (mHF), a supervised variant of random forests tailored for the task of nearest neighbor retrieval through hashing. This is achieved by training independent hashing trees that parse and encode the feature space such that local class neighborhoods are preserved and encoded with similar compact binary codes. At the level of each internal node, locality preserving projections are employed to project data to a latent subspace, where separability between dissimilar points is enhanced. We then define an oblique split that maximally preserves this separability and facilitates defining local neighborhoods of similar points. By incorporating the inverse-lookup search scheme within the mHF, we can effectively mitigate pairwise neuron similarity comparisons, which allows for scalability to massive databases with little additional time overhead. Exhaustive experimental validations on 22,265 neurons curated from over 120 different archives demonstrate the superior efficacy of mHF in terms of retrieval performance and classification precision in contrast to state-of-the-art hashing and metric learning based methods. We conclude that the proposed method can be utilized effectively for similarity-preserving retrieval and categorization in large neuron databases.
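The inverse-lookup search scheme that avoids pairwise code comparisons can be sketched as a bucket table keyed by binary code; the `radius` parameter and single-bit probing below are illustrative simplifications, not the paper's exact scheme:

```python
from collections import defaultdict

def build_inverse_lookup(codes):
    """Inverse-lookup table: binary code -> list of database item ids.
    At query time only the relevant bucket(s) are touched, avoiding
    Hamming comparisons against every item in the database."""
    table = defaultdict(list)
    for item_id, code in enumerate(codes):
        table[code].append(item_id)
    return table

def query(table, code, n_bits, radius=1):
    """Return ids whose code is within `radius` bit flips of `code`
    (here radius 0 or 1; larger radii would enumerate more flips)."""
    hits = list(table.get(code, []))
    if radius >= 1:
        for b in range(n_bits):
            hits.extend(table.get(code ^ (1 << b), []))
    return hits
```

Lookup cost grows with the number of probed buckets (here 1 + n_bits), not with database size, which is the point of the inverse lookup.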


Subjects
Machine Learning, Neurons/classification, Archives, Factual Databases, Humans, Reproducibility of Results, Sensitivity and Specificity
5.
Neuroinformatics ; 14(4): 369-85, 2016 10.
Article in English | MEDLINE | ID: mdl-27155864

ABSTRACT

The steadily growing amount of digital neuroscientific data demands a reliable, systematic, and computationally effective retrieval algorithm. In this paper, we present Neuron-Miner, a tool for fast and accurate reference-based retrieval within neuron image databases. The proposed algorithm is built upon a hashing (search and retrieval) technique employing multiple unsupervised random trees, collectively called Hashing Forests (HF). The HF are trained to parse the neuromorphological space hierarchically and to preserve the inherent neuron neighborhoods while encoding them with compact binary codewords. We further introduce an inverse-coding formulation within HF to effectively mitigate pairwise neuron similarity comparisons, thus allowing scalability to massive databases with little additional time overhead. The proposed hashing tool achieves a superior approximation of the true neuromorphological neighborhood, with better retrieval and ranking performance than existing generalized hashing methods. This is exhaustively validated by quantifying the results over 31,266 neuron reconstructions from the Neuromorpho.org dataset, curated from 147 different archives. We envisage that finding and ranking similar neurons through reference-based querying via Neuron-Miner will assist neuroscientists in objectively understanding the relationship between neuronal structure and function for applications in comparative anatomy or diagnosis.


Subjects
Brain/cytology, Data Mining, Computer-Assisted Image Processing/methods, Neurons/cytology, Software, Algorithms, Animals, Factual Databases, Humans, Machine Learning
6.
Med Image Anal ; 32: 1-17, 2016 08.
Article in English | MEDLINE | ID: mdl-27035487

ABSTRACT

In this paper, we propose a supervised domain adaptation (DA) framework for adapting decision forests in the presence of distribution shift between the training (source) and testing (target) domains, given few labeled examples. We introduce a novel method for DA through an error-correcting hierarchical transfer relaxation scheme with domain alignment, feature normalization, and leaf posterior reweighting to correct for the distribution shift between the domains. For the first time, we apply DA to the challenging problem of extending in vitro trained forests (source domain) to in vivo applications (target domain). The proof-of-concept is provided for in vivo characterization of atherosclerotic tissues using intravascular ultrasound signals, where the presence of flowing blood is a source of distribution shift between the two domains. This potentially leads to misclassification upon direct deployment of the in vitro trained classifier, thus motivating the need for DA, as obtaining reliable in vivo training labels is often challenging if not infeasible. Exhaustive validations and parameter sensitivity analysis substantiate the reliability of the proposed DA framework and demonstrate improved tissue characterization performance in scenarios where adaptation is conducted with only a few examples. The proposed method can thus be leveraged to reduce annotation costs and improve computational efficiency over conventional retraining approaches.
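Leaf posterior reweighting, one element of the transfer scheme above, might be sketched as blending each leaf's source-domain posterior with the empirical class distribution of the few labeled target examples reaching that leaf. The blending rule and the `alpha` parameter below are assumptions for illustration, not the paper's exact relaxation:

```python
import numpy as np

def reweight_leaf_posteriors(source_post, target_counts, alpha=0.5):
    """Blend source-domain leaf posteriors with target-domain evidence.

    source_post: {leaf_id: (C,) class posterior from source training}.
    target_counts: {leaf_id: (C,) class counts of labeled target examples}.
    Leaves with no target examples keep the source posterior unchanged.
    """
    out = {}
    for leaf, p_src in source_post.items():
        counts = target_counts.get(leaf)
        if counts is None or counts.sum() == 0:
            out[leaf] = p_src
        else:
            p_tgt = counts / counts.sum()
            out[leaf] = (1 - alpha) * p_src + alpha * p_tgt
    return out
```

With few target labels, only the leaves those labels actually reach are corrected, which is what makes this cheaper than retraining the forest.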


Subjects
Coronary Circulation, Heart/diagnostic imaging, Computer-Assisted Image Processing/methods, Supervised Machine Learning, Ultrasonography/methods, Humans, Reproducibility of Results, Sensitivity and Specificity
7.
IEEE J Biomed Health Inform ; 20(2): 606-14, 2016 Mar.
Article in English | MEDLINE | ID: mdl-25700476

ABSTRACT

Intravascular imaging using ultrasound or optical coherence tomography (OCT) is predominantly used to supplement clinical information in interventional cardiology. OCT provides high-resolution images for detailed investigation of atherosclerosis-induced thickening of the lumen wall, which results in arterial blockage and triggers acute coronary events. However, the stochastic uncertainty of speckles limits effective visual investigation over large volumes of pullback data, and clinicians are challenged by their inability to investigate subtle variations in the lumen topology associated with plaque vulnerability and onset of necrosis. This paper presents a lumen segmentation method using an OCT imaging physics-based graph representation of signals and random walks image segmentation approaches. The edge weights in the graph are assigned incorporating OCT signal attenuation physics models. The optical backscattering maximum is tracked along each A-scan of the OCT frame, subsequently refined using global gray-level statistics, and used to initialize seeds for the random walks image segmentation. Accuracy of lumen versus tunica segmentation has been measured on 15 in vitro and 6 in vivo pullbacks, each with 150-200 frames, using 1) Cohen's kappa coefficient (0.9786 ±0.0061) measured with respect to a cardiologist's annotation and 2) divergence of the histograms of the segments computed with Kullback-Leibler (5.17 ±2.39) and Bhattacharyya (0.56 ±0.28) measures. High segmentation accuracy and consistency substantiate the method's ability to reliably segment the lumen across pullbacks in the presence of vulnerability cues and necrotic pools, with deterministic finite time-complexity. More generally, this paper also illustrates the development of methods and frameworks for tissue classification and segmentation that incorporate cues of tissue-energy interaction physics in imaging.
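Tracking the backscattering maximum along each A-scan to initialize random-walker seeds can be sketched as a per-column argmax with a simple gray-level gate; the `min_frac` gate below is a stand-in for the paper's global gray-level refinement, whose exact statistics are not given:

```python
import numpy as np

def lumen_seeds(polar_img, min_frac=0.5):
    """Per-A-scan backscatter-maximum seeds for a polar OCT frame.

    polar_img: (depth, n_ascans) array; each column is one A-scan.
    Returns (row, column) seed positions, keeping only maxima above
    `min_frac` of the frame's global maximum.
    """
    peaks = np.argmax(polar_img, axis=0)                     # depth of max per A-scan
    strong = polar_img[peaks, np.arange(polar_img.shape[1])]  # the max values
    keep = strong >= min_frac * polar_img.max()
    return [(int(peaks[a]), a) for a in range(polar_img.shape[1]) if keep[a]]
```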


Subjects
Coronary Vessels/diagnostic imaging, Computer-Assisted Image Processing/methods, Optical Coherence Tomography/methods, Interventional Ultrasonography/methods, Humans, Radiation Scattering
8.
F1000Res ; 5: 2676, 2016.
Article in English | MEDLINE | ID: mdl-28713544

ABSTRACT

Ensemble methods have been successfully applied in a wide range of scenarios, including survival analysis. However, most ensemble models for survival analysis consist of models that all optimize the same loss function and do not fully utilize the diversity among available models. We propose heterogeneous survival ensembles that combine several survival models, each optimizing a different loss during training. We evaluated our proposed technique in the context of the Prostate Cancer DREAM Challenge, where the objective was to predict survival of patients with metastatic, castrate-resistant prostate cancer from patient records of four phase III clinical trials. Results demonstrate that a diverse set of survival models was preferred over a single model, and that our heterogeneous ensemble of survival models outperformed all competing methods with respect to predicting the exact time of death in the Prostate Cancer DREAM Challenge.
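A heterogeneous ensemble, at its simplest, averages the risk predictions of models trained with different losses; the unweighted mean below is an illustrative aggregation, not necessarily the challenge entry's exact rule:

```python
def heterogeneous_ensemble(predictors, x):
    """Aggregate risk predictions from several survival models.

    predictors: callables (e.g. a Cox model, a random survival forest,
    a boosted model), each mapping a patient record to a risk score.
    """
    preds = [predict(x) for predict in predictors]
    return sum(preds) / len(preds)
```

Diversity helps precisely because the member models make different errors; averaging then cancels part of each model's loss-specific bias.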

9.
BMC Med Inform Decis Mak ; 15: 9, 2015 Feb 14.
Article in English | MEDLINE | ID: mdl-25889930

ABSTRACT

BACKGROUND: Percutaneous coronary intervention (PCI) is the most commonly performed treatment for coronary atherosclerosis. It is associated with a higher incidence of repeat revascularization procedures compared to coronary artery bypass grafting surgery. Recent results indicate that PCI is only cost-effective for a subset of patients. Estimating risks of treatment options would be an effort toward personalized treatment strategy for coronary atherosclerosis. METHODS: In this paper, we propose to model clinical knowledge about the treatment of coronary atherosclerosis to identify patient-subgroup-specific classifiers to predict the risk of adverse events of different treatment options. We constructed one model for each patient subgroup to account for subgroup-specific interpretation and availability of features and hierarchically aggregated these models to cover the entire data. In addition, we deviated from the current clinical workflow only for patients with high probability of benefiting from an alternative treatment, as suggested by this model. Consequently, we devised a two-stage test with optimized negative and positive predictive values as the main indicators of performance. Our analysis was based on 2,377 patients that underwent PCI. Performance was compared with a conventional classification model and the existing clinical practice by estimating effectiveness, safety, and costs for different endpoints (6 month angiographic restenosis, 12 and 36 month hazardous events). RESULTS: Compared to the current clinical practice, the proposed method achieved an estimated reduction in adverse effects by 25.0% (95% CI, 17.8 to 30.2) for hazardous events at 36 months and 31.2% (95% CI, 25.4 to 39.0) for hazardous events at 12 months. Estimated total savings per patient amounted to $693 and $794 at 12 and 36 months, respectively. 
The proposed subgroup-specific method outperformed conventional population-wide regression: the median area under the receiver operating characteristic curve increased from 0.57 to 0.61 for prediction of angiographic restenosis and from 0.76 to 0.85 for prediction of hazardous events. CONCLUSIONS: The results of this study demonstrated the efficacy of deploying bare-metal stents and coronary artery bypass grafting surgery for subsets of patients. This is one effort towards the development of personalized treatment strategies for patients with coronary atherosclerosis that could significantly impact associated treatment costs.
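The negative and positive predictive values used above as the main performance indicators follow directly from a 2x2 confusion table:

```python
def ppv_npv(tp, fp, tn, fn):
    """Positive and negative predictive values from confusion counts.

    PPV = TP / (TP + FP): of patients flagged for an alternative
    treatment, the fraction who truly benefit.
    NPV = TN / (TN + FN): of patients left on the default workflow,
    the fraction for whom that was the right call.
    """
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv
```

A two-stage test of the kind described deviates from the current workflow only when both values are acceptably high.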


Subjects
Atherosclerosis/therapy, Clinical Decision-Making/methods, Coronary Artery Disease/therapy, Clinical Decision Support Systems, Postoperative Complications/prevention & control, Aged, Coronary Artery Bypass/adverse effects, Coronary Artery Bypass/economics, Female, Humans, Male, Middle Aged, Percutaneous Coronary Intervention/adverse effects, Percutaneous Coronary Intervention/economics, Stents/adverse effects, Stents/economics
10.
Med Image Comput Comput Assist Interv ; 17(Pt 2): 627-34, 2014.
Article in English | MEDLINE | ID: mdl-25485432

ABSTRACT

In this paper, we introduce a framework for simulating intravascular ultrasound (IVUS) images and radiofrequency (RF) signals from histology image counterparts. We modeled the wave propagation through the Westervelt equation, which is solved explicitly with a finite-differences scheme in polar coordinates, taking into account attenuation and non-linear effects. Our results demonstrate good correlation of textural and spectral information derived from simulated IVUS data with real data, acquired with a single-element mechanically rotating 40 MHz transducer, as ground truth.


Subjects
Computer-Assisted Image Interpretation/instrumentation, Computer-Assisted Image Interpretation/methods, Microscopy/instrumentation, Microscopy/methods, Imaging Phantoms, Interventional Ultrasonography/instrumentation, Interventional Ultrasonography/methods, Equipment Design, Equipment Failure Analysis, Radio Waves, Reproducibility of Results, Sensitivity and Specificity
11.
Med Image Anal ; 18(1): 103-17, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24184434

ABSTRACT

Intravascular Ultrasound (IVUS) is a predominant imaging modality in interventional cardiology. It provides real-time cross-sectional images of arteries and assists clinicians in inferring the composition of atherosclerotic plaques. These plaques are heterogeneous in nature and comprise fibrous tissue, lipid deposits, and calcifications. Each of these tissues backscatters ultrasonic pulses and is associated with a characteristic intensity in the B-mode IVUS image. However, clinicians are challenged when co-located heterogeneous tissues backscatter mixed signals that appear as non-unique intensity patterns in the B-mode IVUS image. Tissue characterization algorithms have been developed to assist clinicians in identifying such heterogeneous tissues and assessing plaque vulnerability. In this paper, we propose a novel technique, coined Stochastic Driven Histology (SDH), that is able to provide information about co-located heterogeneous tissues. It learns tissue-specific ultrasonic backscattering statistical physics and a signal confidence primal from labeled data to predict heterogeneous tissue composition in plaques. We employ a random forest to learn this primal from sparsely labeled and noisy samples. In clinical deployment, the posterior prediction of the different lesions constituting the plaque is estimated. Folded cross-validation experiments have been performed with 53 plaques, indicating high concurrence with traditional tissue histology. On the wider horizon, this framework enables learning of tissue-energy interaction statistical physics and can be leveraged for promising clinical applications requiring tissue characterization beyond the application demonstrated in this paper.


Subjects
Artificial Intelligence, Coronary Artery Disease/diagnostic imaging, Echocardiography/methods, Image Enhancement/methods, Computer-Assisted Image Interpretation/methods, Automated Pattern Recognition/methods, Interventional Ultrasonography/methods, Algorithms, Statistical Data Interpretation, Humans, Reproducibility of Results, Radiation Scattering, Sensitivity and Specificity
12.
Comput Med Imaging Graph ; 38(2): 104-12, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24035737

ABSTRACT

Coronary artery disease leads to failure of coronary circulation secondary to accumulation of atherosclerotic plaques. In addition to primary imaging of such vascular plaques using coronary angiography or, alternatively, magnetic resonance imaging, intravascular ultrasound (IVUS) is used predominantly for diagnosis and reporting of their vulnerability. Besides plaque burden estimation, necrosis detection is an important aspect of IVUS reporting. Since necrotic regions generally appear hypoechoic, with speckle in these regions resembling true shadows or severe signal-dropout regions, they contribute to variability in diagnosis. This dilemma in the clinical assessment of necrosis imaged with IVUS is addressed in this work. In our approach, the fidelity of the backscattered ultrasonic signal received by the imaging transducer is initially estimated. This is followed by identification of true necrosis using the statistical physics of ultrasonic backscattering. A random forest machine learning framework is used to learn the parameter space defining ultrasonic backscattering distributions related to necrotic regions and to discriminate them from non-necrotic shadows. Evidence of hunting down true necrosis in the shadows of intravascular ultrasound is presented with ex vivo experiments, along with cross-validation against ground truth obtained from histology. Nevertheless, in some rare cases necrosis is marginally over-estimated, primarily on account of non-reliable statistics estimation. This limitation is due to sparse spatial sampling between neighboring scan-lines at locations far from the transducer. In view of this limitation, we suggest considering the geometrical location of detected necrosis together with the estimated signal confidence during clinical decision making.


Subjects
Algorithms, Coronary Artery Disease/pathology, Image Enhancement/methods, Computer-Assisted Image Interpretation/methods, Automated Pattern Recognition/methods, Interventional Ultrasonography/methods, Computer Simulation, Humans, Cardiovascular Models, Necrosis/diagnostic imaging, Reproducibility of Results, Sensitivity and Specificity
13.
Med Image Anal ; 17(2): 236-53, 2013 Feb.
Article in English | MEDLINE | ID: mdl-23313331

ABSTRACT

In this paper, a new segmentation framework with prior knowledge is proposed and applied to the left ventricle in cardiac cine MRI sequences. We introduce a new formulation of the random walks method, coined guided random walks, in which prior knowledge is integrated seamlessly. In contrast to existing approaches that incorporate statistical shape models, our method does not extract any principal model of the shape or appearance of the left ventricle. Instead, segmentation is accompanied by retrieving the closest subject in the database that best guides the segmentation. Using this technique, rare cases can also effectively exploit prior knowledge from the few samples of them in the training set. Such cases are usually disregarded in statistical shape models, as they are outnumbered by frequent cases (the effect of class population). In the worst-case scenario, if there is no matching case in the database to guide the segmentation, the performance of the proposed method falls back to that of conventional random walks, which is shown to be accurate if a sufficient number of seeds is provided. The proposed guided random walks admit a fast solution using sparse linear matrix operations, and the whole framework can be seamlessly implemented in a parallel architecture. The method has been validated on a comprehensive clinical dataset of 3D+t short-axis MR images of 104 subjects from 5 categories (normal, dilated left ventricle, ventricular hypertrophy, recent myocardial infarction, and heart failure). The average segmentation errors were found to be 1.54 mm for the endocardium and 1.48 mm for the epicardium. The method was validated by measuring different algorithmic and physiologic indices and quantified against manual segmentation ground truths provided by a cardiologist.
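The fast sparse solution mentioned above follows Grady's random-walks formulation, in which the probabilities at unseeded nodes come from a single sparse linear solve of L_u x_u = -B x_s; a minimal sketch on a small graph Laplacian:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import spsolve

def random_walker_probs(laplacian, seed_idx, seed_vals):
    """Random-walks segmentation probabilities on a graph.

    laplacian: (n, n) graph Laplacian; seed_idx: indices of seeded nodes;
    seed_vals: their fixed probabilities (1 for object seeds, 0 for
    background seeds). Unseeded nodes solve L_u x_u = -B x_s.
    """
    n = laplacian.shape[0]
    unseeded = np.setdiff1d(np.arange(n), seed_idx)
    L_u = laplacian[np.ix_(unseeded, unseeded)]   # unseeded-unseeded block
    B = laplacian[np.ix_(unseeded, seed_idx)]     # unseeded-seeded block
    x_u = spsolve(csr_matrix(L_u), -B @ seed_vals)
    probs = np.zeros(n)
    probs[seed_idx] = seed_vals
    probs[unseeded] = x_u
    return probs
```

On a 4-node path graph with object and background seeds at the two ends, the solution interpolates linearly, as expected for a resistive chain.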


Subjects
Algorithms, Statistical Data Interpretation, Heart Ventricles/anatomy & histology, Image Enhancement/methods, Computer-Assisted Image Interpretation/methods, Cine Magnetic Resonance Imaging/methods, Automated Pattern Recognition/methods, Humans, Reproducibility of Results, Sensitivity and Specificity
14.
IEEE Trans Biomed Eng ; 59(11): 3039-49, 2012 Nov.
Article in English | MEDLINE | ID: mdl-22907962

ABSTRACT

Intravascular ultrasound (IVUS) is the predominant imaging modality in the field of interventional cardiology that provides real-time cross-sectional images of coronary arteries and the extent of atherosclerosis. Due to the heterogeneity of lesions and the stringent spatial/spectral behavior of tissues, atherosclerotic plaque characterization has always been a challenge and is still an open problem. In this paper, we present a systematic framework spanning in vitro data collection, histology preparation, IVUS-histology registration and matching, and finally a robust texture-derived unsupervised atherosclerotic plaque labeling. We have applied our algorithm to in vitro and in vivo images acquired with a single-element 40 MHz transducer and a 64-element phased-array 20 MHz transducer, respectively. In the former case, we quantified results by local contrasting of the constructed tissue colormaps with corresponding histology images by an independent expert; in the latter case, virtual histology images were utilized for comparison. We tackle one of the main challenges in the field, namely the reliability of tissue labels behind arcs of calcified plaque, and validate the results through a novel random walks framework incorporating the underlying physics of ultrasound imaging. We conclude that the proposed framework is a formidable approach for retrieving imperative information regarding tissues and building a reliable training dataset for supervised classification and its extension to in vivo applications.


Subjects
Histological Techniques/methods, Computer-Assisted Image Processing/methods, Atherosclerotic Plaque/diagnostic imaging, Atherosclerotic Plaque/pathology, Interventional Ultrasonography/methods, Algorithms, Echocardiography, Humans, Myocardium/pathology
15.
IEEE Trans Inf Technol Biomed ; 16(5): 823-34, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22389156

ABSTRACT

Over the past two decades, intravascular ultrasound (IVUS) image segmentation has remained a challenge for researchers while the use of this imaging modality is rapidly growing in catheterization procedures and in research studies. IVUS provides cross-sectional grayscale images of the arterial wall and the extent of atherosclerotic plaques with high spatial resolution in real time. In this paper, we review recently developed image processing methods for the detection of media-adventitia and luminal borders in IVUS images acquired with different transducers operating at frequencies ranging from 20 to 45 MHz. We discuss methodological challenges, lack of diversity in reported datasets, and weaknesses of quantification metrics that make IVUS segmentation still an open problem despite all efforts. In conclusion, we call for a common reference database, validation metrics, and ground-truth definition with which new and existing algorithms could be benchmarked.
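For benchmarking segmentations against a common ground truth, as the review above calls for, two of the usual region-agreement metrics are Dice and Jaccard; a minimal sketch over binary masks:

```python
def dice_jaccard(mask_a, mask_b):
    """Dice and Jaccard agreement between two binary masks (0/1 lists).

    Dice = 2|A∩B| / (|A| + |B|); Jaccard = |A∩B| / |A∪B|.
    """
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    union = sum(1 for a, b in zip(mask_a, mask_b) if a or b)
    dice = 2 * inter / (sum(mask_a) + sum(mask_b))
    jaccard = inter / union
    return dice, jaccard
```

Part of the review's point is that such metrics are only comparable across papers when computed on a shared reference database with an agreed ground-truth definition.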


Subjects
Algorithms, Computer-Assisted Image Processing/methods, Interventional Ultrasonography/methods, Adventitia/diagnostic imaging, Animals, Humans, Mammals, Computer-Assisted Signal Processing, Tunica Media/diagnostic imaging
16.
Article in English | MEDLINE | ID: mdl-21095737

ABSTRACT

We present a new technique to delineate lumen borders in intravascular ultrasound (IVUS) volumes of images acquired with a high-frequency Volcano (Rancho Cordova, CA) 45 MHz transducer. Our technique relies on projection of IVUS sub-volumes onto orthogonal directional brushlet functions. Through selective projection of IVUS sub-volume images and their Fourier transforms, tissue-specific backscattered magnitudes and phases are identified within the brushlet coefficients. We take advantage of these characteristics and construct 2.5-dimensional (2.5-D) magnitude-phase histograms of coefficients in the transformed complex brushlet domain, which contain distinct peaks corresponding to blood and non-blood regions. We exploit these peaks to mask out coefficients that represent blood regions and ultimately detect the luminal border after spatial regularization employing a parametric deformable model. We quantify our results by comparing them to borders manually traced by an expert on 2 datasets containing 108 frames. We show that our approach is well suited for isolating coherent (i.e., plaque) structures from incoherent (i.e., blood) ones in IVUS pullbacks and for detecting the lumen border, a challenging problem particularly in images acquired with high-frequency transducers.


Subjects
Automation, Coronary Vessels/pathology, Computer-Assisted Image Processing/methods, Interventional Ultrasonography/methods, Algorithms, Cardiology/methods, Diagnostic Imaging/methods, Fourier Analysis, Humans, Medical Informatics/methods, Statistical Models, Imaging Phantoms, Radiation Scattering
17.
Article in English | MEDLINE | ID: mdl-19964741

ABSTRACT

The presence of non-coherent blood speckle patterns makes the assessment of lumen size in intravascular ultrasound (IVUS) images a challenging problem, especially for images acquired with recent high frequency transducers. In this paper, we present a robust three-dimensional (3D) feature extraction algorithm based on the expansion of IVUS cross-sectional images and pullback directions onto an orthonormal complex brushlet basis. Several features are selected from the projections of low-frequency 3D brushlet coefficients. These representations are used as inputs to a neural network that is trained to classify blood maps on IVUS images. We evaluated the algorithm performance using repeated randomized experiments on sub-samples to validate the quantification of the blood maps when compared to expert manual tracings of 258 frames collected from three patients. Our results demonstrate that the proposed features extracted in the brushlet domain capture well the non-coherent structures of blood speckle, enabling identification of blood pools and enhancement of the lumen area.


Subjects
Arteries/diagnostic imaging, Blood/diagnostic imaging, Computer-Assisted Image Interpretation/methods, Three-Dimensional Imaging/methods, Automated Pattern Recognition/methods, Interventional Ultrasonography/methods, Algorithms, Humans, Reproducibility of Results, Sensitivity and Specificity
18.
IEEE Trans Inf Technol Biomed ; 12(3): 315-27, 2008 May.
Article in English | MEDLINE | ID: mdl-18693499

ABSTRACT

In vivo plaque characterization is an important research field in interventional cardiology. We will study the realistic challenges to this goal by deploying 40 MHz single-element, mechanically rotating transducers. The intrinsic variability among the transducers' spectral parameters as well as tissue signals will be demonstrated. Subsequently, we will show that global data normalization is not suited for data calibration, due to the aforementioned variations as well as the stringent characteristics of spectral features. We will describe the sensitivity of an existing feature extraction algorithm based on eight spectral signatures (integrated backscatter coefficient, slope, midband-fit (MBF), intercept, and maximum and minimum powers and their relative frequencies) to a number of factors, such as the window size and order of the autoregressive (AR) model. It will be further demonstrated that the variations in the transducer's spectral parameters (i.e., center frequency and bandwidth) cause inconsistencies among extracted features. In this paper, two fundamental questions are addressed: 1) what is the best reliable way to extract the most informative features? and 2) which classification algorithm is the most appropriate for this problem? We will present a full-spectrum analysis as an alternative to the eight-feature approach. For the first time, different classification algorithms, such as k-nearest neighbors (k-NN) and linear Fisher, will be employed and their performances quantified. Finally, we will explore the reliability of the training dataset and the complexity of the recognition algorithm and illustrate that these two aspects can highly impact the accuracy of the end result, which has not been considered until now.
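Three of the eight spectral signatures discussed above (slope, intercept, and midband-fit) come from a linear regression of the calibrated power spectrum over the usable band; a sketch in which the band limits are illustrative, not the study's calibration settings:

```python
import numpy as np

def spectral_features(freqs_mhz, power_db, band=(20, 60)):
    """Slope (dB/MHz), intercept (dB), and midband-fit (dB) of a
    power spectrum, from a first-order fit over the usable band."""
    lo, hi = band
    sel = (freqs_mhz >= lo) & (freqs_mhz <= hi)
    slope, intercept = np.polyfit(freqs_mhz[sel], power_db[sel], 1)
    midband_fit = slope * (lo + hi) / 2 + intercept  # fit value at band center
    return slope, intercept, midband_fit
```

The abstract's sensitivity point follows directly from this sketch: the fitted slope and intercept depend on the band, which in turn depends on each transducer's center frequency and bandwidth.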


Subjects
Algorithms, Artificial Intelligence, Coronary Artery Disease/diagnostic imaging, Elasticity Imaging Techniques/methods, Computer-Assisted Image Interpretation/methods, Automated Pattern Recognition/methods, Interventional Ultrasonography/methods, Factual Databases, Elasticity, Humans, Image Enhancement/methods, Information Storage and Retrieval/methods, Reproducibility of Results, Sensitivity and Specificity
19.
Conf Proc IEEE Eng Med Biol Soc ; 2006: 3074-7, 2006.
Article in English | MEDLINE | ID: mdl-17946545

ABSTRACT

In this paper we present a new automated method for detecting endocardial and epicardial borders in the left (LV) and right (RV) ventricles of the human heart. Our approach relies on morphological operations on both binary and grayscale images. First, the standard power-law transformation is applied to the image. Then, a region of interest (ROI) is selected semi-automatically, followed by automated endocardial and epicardial border extraction based on the selected ROI. To obtain the endocardial contour, the transformed image is thresholded and the maximum-area component, which indicates the cavity, is selected. Finally, edge detection is performed and the papillary muscles (PMs) are excluded via a convex-hull method. The epicardial boundary is delineated through a threshold decomposition opening (TDO) approach along with morphological operations. The algorithm extracts precise myocardial and RV contours. Experimental results from three normal subjects are shown and quantitatively compared with contours manually traced by an expert. It is concluded that the method performs well in both endocardial and epicardial LV contouring as well as RV cavity detection.
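The cavity-selection step (threshold the transformed image, then keep the maximum-area component) can be sketched with standard connected-component labeling; the convex-hull exclusion of papillary muscles and the TDO step are omitted from this sketch:

```python
import numpy as np
from scipy import ndimage

def largest_cavity(image, threshold):
    """Threshold a (power-law transformed) image and keep the largest
    connected component, mirroring the 'maximum area' cavity selection."""
    binary = image > threshold
    labels, n = ndimage.label(binary)              # 4-connected components
    if n == 0:
        return np.zeros_like(binary)
    sizes = ndimage.sum(binary, labels, index=list(range(1, n + 1)))
    return labels == (int(np.argmax(sizes)) + 1)   # mask of the biggest one
```

Smaller bright regions (e.g. speckle or PM fragments surviving the threshold) are discarded automatically because only the largest component is kept.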


Subjects
Heart/anatomy & histology, Magnetic Resonance Imaging/statistics & numerical data, Algorithms, Biomedical Engineering, Endocardium/anatomy & histology, Heart Ventricles/anatomy & histology, Humans, Computer-Assisted Image Interpretation, Pericardium/anatomy & histology