Results 1 - 20 of 191
1.
IEEE Trans Pattern Anal Mach Intell ; 41(1): 176-189, 2019 01.
Article in English | MEDLINE | ID: mdl-29990011

ABSTRACT

Robust and fast detection of anatomical structures is a prerequisite for both diagnostic and interventional medical image analysis. Current solutions for anatomy detection are typically based on machine learning techniques that exploit large annotated image databases in order to learn the appearance of the captured anatomy. These solutions are subject to several limitations, including the use of suboptimal feature engineering techniques and, most importantly, computationally suboptimal search schemes for anatomy detection. To address these issues, we propose a method that follows a new paradigm by reformulating the detection problem as a behavior learning task for an artificial agent. We couple the modeling of the anatomy appearance and the object search in a unified behavioral framework, using the capabilities of deep reinforcement learning and multi-scale image analysis. In other words, an artificial agent is trained not only to distinguish the target anatomical object from the rest of the body but also to find the object by learning and following an optimal navigation path to the target in the imaged volumetric space. We evaluated our approach on 1487 3D-CT volumes from 532 patients, totaling over 500,000 image slices, and show that it significantly outperforms state-of-the-art solutions in detecting several anatomical structures with no failed cases from a clinical acceptance perspective, while also achieving a 20-30 percent higher detection accuracy. Most importantly, we improve the detection speed of the reference methods by 2-3 orders of magnitude, achieving unmatched real-time performance on large 3D-CT scans.
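The search-as-behavior idea above can be sketched with tabular Q-learning on a toy 1-D scan line, where an agent learns move actions that carry it to a target coordinate standing in for an anatomical landmark; the grid size, reward shaping, and hyperparameters below are illustrative choices, not the authors' deep-network setup.

```python
import random

def train_agent(target=7, size=10, episodes=2000, alpha=0.5, gamma=0.9, eps=0.2):
    # Tabular Q-learning: states are positions on a 1-D line, actions move left/right.
    q = {(s, a): 0.0 for s in range(size) for a in (-1, 1)}
    rng = random.Random(0)
    for _ in range(episodes):
        s = rng.randrange(size)
        for _ in range(2 * size):
            # epsilon-greedy action selection
            a = rng.choice((-1, 1)) if rng.random() < eps else max((-1, 1), key=lambda x: q[(s, x)])
            s2 = min(max(s + a, 0), size - 1)
            r = 1.0 if s2 == target else -0.1  # small step penalty shapes a path to the landmark
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, -1)], q[(s2, 1)]) - q[(s, a)])
            s = s2
            if s == target:
                break
    return q

def navigate(q, start, target, size=10, max_steps=20):
    # Greedy rollout of the learned behavior: follow the best action per state.
    s = start
    for _ in range(max_steps):
        if s == target:
            break
        s = min(max(s + max((-1, 1), key=lambda a: q[(s, a)]), 0), size - 1)
    return s
```

In the paper the state is a 3-D image patch and the value function is a deep network; this toy keeps only the navigation-policy structure of the formulation.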

2.
Med Phys ; 46(2): 689-703, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30508253

ABSTRACT

PURPOSE: Benefiting from multi-energy x-ray imaging technology, material decomposition facilitates the characterization of different materials in x-ray imaging. However, the performance of material decomposition is limited by the accuracy of the decomposition model. Due to the presence of nonideal effects in x-ray imaging systems, it is difficult to explicitly build imaging system models for material decomposition. As an alternative, this paper explores the feasibility of using machine learning approaches for material decomposition tasks. METHODS: In this work, we propose a learning-based pipeline to perform material decomposition. In this pipeline, a feature extraction step integrates more informative features, such as neighboring information, to facilitate material decomposition, and a hold-out validation step with continuous interleaved sampling is employed for model evaluation and selection. We demonstrate the material decomposition capability of the proposed pipeline with promising machine learning algorithms, namely artificial neural network (ANN), Random Tree, REPTree, and Random Forest, in both simulation and experimentation. The performance was quantitatively evaluated using a simulated XCAT phantom and an anthropomorphic torso phantom. To evaluate the proposed method, two measurement-based material decomposition methods were used as references for comparison studies. In addition, deep learning-based solutions were also investigated to provide a comprehensive comparison of machine learning solutions for material decomposition. RESULTS: In both the simulation study and the experimental study, the introduced machine learning algorithms are able to train models for the material decomposition tasks. With the application of neighboring information, the performance of each machine learning algorithm improves markedly. Compared to the state-of-the-art method, the performance of the ANN in the simulation study improves by over 24% in the noiseless scenario and over 169% in the noisy scenario, while the performance of the Random Forest improves by over 40% and 165%, respectively. Similarly, the performance of the ANN in the experimental study improves by over 42% in the denoised scenario and over 45% in the original scenario, while the performance of the Random Forest improves by over 33% and 40%, respectively. CONCLUSIONS: The proposed pipeline is able to build generic material decomposition models for different scenarios, and it was validated by quantitative evaluation in both simulation and experimentation. Compared to the reference methods, appropriate features and machine learning algorithms can significantly improve material decomposition performance. The results indicate that performing material decomposition using machine learning methods is feasible and promising, and our study will facilitate future efforts toward clinical applications.
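The "neighboring information" feature step can be sketched as stacking the 3x3 neighborhood of each pixel from both energy channels into one feature vector per pixel; the window size and function names are illustrative assumptions, and the resulting rows would feed a regressor such as a random forest.

```python
import numpy as np

def neighborhood_features(low_kv, high_kv):
    # For each pixel, collect the 3x3 neighborhood from both energy channels.
    feats = []
    for img in (low_kv, high_kv):
        padded = np.pad(img, 1, mode="edge")  # replicate borders so every pixel has 8 neighbors
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                feats.append(padded[1 + dy:1 + dy + img.shape[0],
                                    1 + dx:1 + dx + img.shape[1]])
    # Result: one row of 2 * 9 = 18 features per pixel.
    return np.stack(feats, axis=-1).reshape(-1, 18)
```

Each row could then be paired with the known material thickness at that pixel to train, e.g., a random forest regressor.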


Subjects
Image Processing, Computer-Assisted/methods; Machine Learning; Tomography, X-Ray Computed
3.
Med Image Anal ; 48: 203-213, 2018 08.
Article in English | MEDLINE | ID: mdl-29966940

ABSTRACT

Robust and fast detection of anatomical structures represents an important component of medical image analysis technologies. Current solutions for anatomy detection are based on machine learning, and are generally driven by suboptimal and exhaustive search strategies. In particular, these techniques do not effectively address cases of incomplete data, i.e., scans acquired with a partial field-of-view. We address these challenges by following a new paradigm, which reformulates the detection task as teaching an intelligent artificial agent how to actively search for an anatomical structure. Using the principles of deep reinforcement learning with multi-scale image analysis, artificial agents are taught optimal navigation paths in the scale-space representation of an image, while accounting for structures that are missing from the field-of-view. The spatial coherence of the observed anatomical landmarks is ensured using elements from statistical shape modeling and robust estimation theory. Experiments show that our solution outperforms marginal space deep learning, a powerful deep learning method, at detecting different anatomical structures without any failure. The dataset contains 5043 3D-CT volumes from over 2000 patients, totaling over 2,500,000 image slices. In particular, our solution achieves 0% false-positive and 0% false-negative rates at detecting whether the landmarks are captured in the field-of-view of the scan (excluding all border cases), with an average detection accuracy of 2.78 mm. In terms of runtime, we reduce the detection time of the marginal space deep learning method by 20-30 times to under 40 ms, an unmatched performance for high-resolution incomplete 3D-CT data.


Subjects
Deep Learning; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Tomography, X-Ray Computed/methods; Algorithms; Anatomic Landmarks; Humans
4.
Med Image Anal ; 48: 131-146, 2018 08.
Article in English | MEDLINE | ID: mdl-29913433

ABSTRACT

This paper introduces a universal and structure-preserving regularization term, called the quantile sparse image (QuaSI) prior. The prior is suitable for denoising images from various medical imaging modalities. We demonstrate its effectiveness on volumetric optical coherence tomography (OCT) and computed tomography (CT) data, which show different noise and image characteristics. OCT offers high-resolution scans of the human retina but is inherently impaired by speckle noise. CT, on the other hand, has a lower resolution and shows high-frequency noise. For the purpose of denoising, we propose a variational framework based on the QuaSI prior and a Huber data fidelity model that can handle 3-D and 3-D+t data. Efficient optimization is facilitated through the use of an alternating direction method of multipliers (ADMM) scheme and the linearization of the quantile filter. Experiments on multiple datasets emphasize the excellent performance of the proposed method.
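A minimal sketch of the quantile filter at the heart of the QuaSI prior, here instantiated as a 3x3 median (the 0.5 quantile); the window size and quantile are free parameters, and the prior value is the l1 norm of the residual between the image and its quantile-filtered version.

```python
import numpy as np

def quantile_filter(img, q=0.5, size=3):
    # Slide a size x size window over the image and take the q-quantile per window.
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (size, size))
    flat = windows.reshape(*windows.shape[:2], -1)
    return np.quantile(flat, q, axis=-1)

def quasi_prior(img, q=0.5):
    # Sparsity term of the prior: l1 norm of (image - quantile-filtered image).
    return np.abs(img - quantile_filter(img, q)).sum()
```

In the variational framework, this term is added to the Huber data fidelity and minimized with ADMM after linearizing the (nonlinear) quantile filter.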


Subjects
Algorithms; Radiographic Image Interpretation, Computer-Assisted/methods; Tomography, Optical Coherence/methods; Tomography, X-Ray Computed/methods; Animals; Artifacts; Eye/diagnostic imaging; Eye Diseases/diagnostic imaging; Humans; Signal-To-Noise Ratio; Swine
5.
MAGMA ; 31(3): 399-414, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29372469

ABSTRACT

OBJECTIVE: Our aim was to develop and validate a 3D Cartesian Look-Locker [Formula: see text] mapping technique that achieves high accuracy and whole-liver coverage within a single breath-hold. MATERIALS AND METHODS: The proposed method combines sparse Cartesian sampling based on a spatiotemporally incoherent Poisson pattern and k-space segmentation, dedicated to high-temporal-resolution imaging. This combination allows capturing tissue with short relaxation times with volumetric coverage. A joint reconstruction of the 3D + inversion time (TI) data via compressed sensing exploits the spatiotemporal sparsity and ensures consistent quality for the subsequent multistep [Formula: see text] mapping. Data from the National Institute of Standards and Technology (NIST) phantom and 11 volunteers, along with reference 2D Look-Locker acquisitions, are used for validation. 2D and 3D methods are compared based on [Formula: see text] values in different abdominal tissues at 1.5 and 3 T. RESULTS: [Formula: see text] maps obtained from the proposed 3D method compare favorably with those from the 2D reference and additionally allow for reformatting or volumetric analysis. Excellent agreement is shown in phantom [bias[Formula: see text] < 2%, bias[Formula: see text] < 5% for (120; 2000) ms] and volunteer data (3D and 2D deviation < 4% for liver, muscle, and spleen) for clinically acceptable scan (20 s) and reconstruction times (< 4 min). CONCLUSION: Whole-liver [Formula: see text] mapping with high accuracy and precision is feasible in one breath-hold using spatiotemporally incoherent, sparse 3D Cartesian sampling.
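The multistep mapping rests on the standard Look-Locker correction: the readout measures an apparent relaxation time T1* via the three-parameter model S(TI) = A - B * exp(-TI / T1*), and the true T1 is recovered as T1 = T1* * (B / A - 1). A minimal sketch (the numeric values in the test are illustrative, not from the paper):

```python
import math

def signal(ti, a, b, t1_star):
    # Three-parameter Look-Locker signal model at inversion time ti.
    return a - b * math.exp(-ti / t1_star)

def look_locker_t1(a, b, t1_star):
    # Standard Look-Locker correction from apparent T1* to true T1.
    return t1_star * (b / a - 1.0)
```

In practice, (A, B, T1*) are fitted per voxel from the multi-TI images before applying the correction.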


Subjects
Breath Holding; Image Interpretation, Computer-Assisted; Imaging, Three-Dimensional; Magnetic Resonance Imaging; Abdomen; Adult; Aged; Algorithms; Calibration; Female; Healthy Volunteers; Humans; Image Enhancement; Male; Middle Aged; Models, Statistical; Phantoms, Imaging; Poisson Distribution; Reproducibility of Results; Signal-To-Noise Ratio; Time Factors
6.
IEEE Trans Neural Netw Learn Syst ; 29(5): 2025-2030, 2018 05.
Article in English | MEDLINE | ID: mdl-28362619

ABSTRACT

This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but linear in each subregion. With this kernel, the subregion structure can be trained using all the training data, and local linear classifiers can be established simultaneously. The TL1 kernel adapts well to nonlinearity and is suitable for problems that require different nonlinearities in different areas. Although the TL1 kernel is not positive semidefinite, some classical kernel learning methods are still applicable, which means that the TL1 kernel can be directly used in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pregiven parameter achieves performance similar to or better than that of the radial basis function kernel with the parameter tuned by cross-validation, suggesting that the TL1 kernel is a promising nonlinear kernel for classification tasks.
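The TL1 kernel itself is compact enough to sketch directly: K(x, y) = max(rho - ||x - y||_1, 0), evaluated here for a full Gram matrix; rho is the pregiven parameter mentioned above.

```python
import numpy as np

def tl1_kernel(X, Y, rho):
    # X: (n, d), Y: (m, d) -> (n, m) Gram matrix of the truncated l1-distance kernel.
    dist = np.abs(X[:, None, :] - Y[None, :, :]).sum(axis=-1)  # pairwise l1 distances
    return np.maximum(rho - dist, 0.0)  # hinge truncation: zero beyond distance rho
```

Because the kernel is a hinge of the l1 distance, it is piecewise linear in each input, which is what yields locally linear decision boundaries; it can be plugged into a kernel classifier in place of the usual RBF Gram-matrix evaluation.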

7.
MAGMA ; 31(1): 19-31, 2018 Feb.
Article in English | MEDLINE | ID: mdl-28550650

ABSTRACT

OBJECTIVES: Our objectives were to evaluate a single-breath-hold approach for Cartesian 3-D CINE imaging of the left ventricle with a nearly isotropic resolution of [Formula: see text] and a breath-hold duration of [Formula: see text]19 s against a standard stack of 2-D CINE slices acquired in multiple breath-holds. Validation is performed with data sets from ten healthy volunteers. MATERIALS AND METHODS: A Cartesian sampling pattern based on the spiral phyllotaxis and a compressed sensing reconstruction method are proposed to allow 3-D CINE imaging with high acceleration factors. The fully integrated reconstruction uses multiple graphics processing units to speed up the reconstruction. The 2-D CINE and 3-D CINE are compared based on ventricular function parameters, contrast-to-noise ratio and edge sharpness measurements. RESULTS: Visual comparisons of corresponding short-axis slices of 2-D and 3-D CINE show an excellent match, while 3-D CINE also allows reformatting to other orientations. Ventricular function parameters do not significantly differ from values based on 2-D CINE imaging. Reconstruction times are below 4 min. CONCLUSION: We demonstrate single-breath-hold 3-D CINE imaging in volunteers and three example patient cases, which features fast reconstruction and allows reformatting to arbitrary orientations.
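The sampling idea can be sketched by generating a spiral phyllotaxis pattern on the phase-encoding plane: each sample advances by the golden angle while the radius grows as the square root of the sample index; the normalization to a unit disk is an illustrative choice.

```python
import math

GOLDEN_ANGLE = math.pi * (3.0 - math.sqrt(5.0))  # ~137.5 degrees

def phyllotaxis(n_samples, radius=1.0):
    # Spiral phyllotaxis: sqrt-radius growth gives near-uniform density,
    # golden-angle rotation gives incoherent (non-repeating) coverage.
    pts = []
    for n in range(n_samples):
        r = radius * math.sqrt((n + 0.5) / n_samples)
        theta = n * GOLDEN_ANGLE
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts
```

In the acquisition, such points would be scaled to ky-kz phase-encoding indices; the resulting incoherent undersampling is what makes the compressed sensing reconstruction well-posed.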


Subjects
Cardiac Imaging Techniques/methods; Heart Ventricles/diagnostic imaging; Magnetic Resonance Imaging, Cine/methods; Adult; Aged; Algorithms; Breath Holding; Cardiac Imaging Techniques/statistics & numerical data; Data Compression; Female; Healthy Volunteers; Humans; Image Interpretation, Computer-Assisted/methods; Magnetic Resonance Imaging, Cine/statistics & numerical data; Male; Middle Aged; Signal-To-Noise Ratio; Ventricular Function, Left; Young Adult
8.
IEEE Trans Med Imaging ; 37(1): 47-60, 2018 01.
Article in English | MEDLINE | ID: mdl-28692969

ABSTRACT

In X-ray fluoroscopy, static overlays are used to visualize soft tissue. We propose a system for cardiac and respiratory motion compensation of these overlays. It consists of a 3-D motion model created from real-time magnetic resonance (MR) imaging. Multiple sagittal slices are acquired and retrospectively stacked to consistent 3-D volumes. Slice stacking considers cardiac information derived from the ECG and respiratory information extracted from the images. Additionally, temporal smoothness of the stacking is enhanced. Motion is estimated from the MR volumes using deformable 3-D/3-D registration. The motion model itself is a linear direct correspondence model using the same surrogate signals as slice stacking. In X-ray fluoroscopy, only the surrogate signals need to be extracted to apply the motion model and animate the overlay in real time. For evaluation, points are manually annotated in oblique MR slices and in contrast-enhanced X-ray images. The 2-D Euclidean distance of these points is reduced from 3.85 to 2.75 mm in MR and from 3.0 to 1.8 mm in X-ray compared with the static baseline. Furthermore, the motion-compensated overlays are shown qualitatively as images and videos.
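A linear direct correspondence model of the kind described can be sketched as a least-squares regression from surrogate signals (e.g., cardiac phase and a respiratory signal) to motion parameters; the synthetic surrogate and motion values in the test below are illustrative.

```python
import numpy as np

def fit_motion_model(surrogates, motion):
    # surrogates: (n_frames, n_signals); motion: (n_frames, n_params).
    # Append a constant column so the model includes an offset term.
    S = np.hstack([surrogates, np.ones((surrogates.shape[0], 1))])
    coeffs, *_ = np.linalg.lstsq(S, motion, rcond=None)
    return coeffs

def apply_motion_model(coeffs, surrogate_sample):
    # At fluoroscopy time, only the surrogate sample is needed to predict motion.
    s = np.append(surrogate_sample, 1.0)
    return s @ coeffs
```

This mirrors the workflow in the abstract: the model is fitted offline from the MR-derived motion estimates, then driven in real time by surrogate signals alone to animate the overlay.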


Subjects
Fluoroscopy/methods; Imaging, Three-Dimensional/methods; Magnetic Resonance Imaging/methods; Movement/physiology; Algorithms; Animals; Heart/diagnostic imaging; Humans; Regression Analysis; Respiration; Retrospective Studies; Swine
9.
IEEE Trans Med Imaging ; 36(9): 1939-1954, 2017 09.
Article in English | MEDLINE | ID: mdl-28489534

ABSTRACT

In image-guided interventional procedures, live 2-D X-ray images can be augmented with preoperative 3-D computed tomography or MRI images to provide planning landmarks and enhanced spatial perception. An accurate alignment between the 3-D and 2-D images is a prerequisite for fusion applications. This paper presents a dynamic rigid 2-D/3-D registration framework, which measures the local 3-D-to-2-D misalignment and efficiently constrains the update of both planar and non-planar 3-D rigid transformations using a novel point-to-plane correspondence model. In the simulation evaluation, the proposed method achieved a mean 3-D accuracy of 0.07 mm for the head phantom and 0.05 mm for the thorax phantom using single-view X-ray images. In the evaluation of dynamic motion compensation, our method significantly increases the accuracy compared with the baseline method. The proposed method is also evaluated on a publicly available clinical angiogram data set with "gold-standard" registrations. The proposed method achieved a mean 3-D accuracy below 0.8 mm and a mean 2-D accuracy below 0.3 mm using single-view X-ray images. It outperformed the state-of-the-art methods in both accuracy and robustness in single-view registration. The proposed method is intuitive, generic, and suitable for both initial and dynamic registration scenarios.


Subjects
Magnetic Resonance Imaging; Tomography, X-Ray Computed; Algorithms; Angiography; Imaging, Three-Dimensional; Phantoms, Imaging
10.
IEEE Trans Med Imaging ; 36(4): 865-877, 2017 04.
Article in English | MEDLINE | ID: mdl-27654320

ABSTRACT

Respiratory signals are required for image gating and motion compensation in minimally invasive interventions. In X-ray fluoroscopy, extraction of a respiratory signal can be challenging due to characteristics of interventional imaging, in particular injection of contrast agent and automatic exposure control. We present a novel method for respiratory signal extraction based on dimensionality reduction that can tolerate these events. Images are divided into patches of multiple sizes. Low-dimensional embeddings are generated for each patch using illumination-invariant kernel PCA. Patches with respiratory information are selected automatically by agglomerative clustering. The signals from this respiratory cluster are combined robustly to a single respiratory signal. In the experiments, we evaluate our method on a variety of scenarios. If the diaphragm is visible, we track its superior-inferior motion as ground truth. Our method has a correlation coefficient of more than 91% with the ground truth irrespective of whether or not contrast agent injection or automatic exposure control occur. Additionally, we show that very similar signals are estimated from biplane sequences and from sequences without visible diaphragm. Since all these cases are handled automatically, the method is robust enough to be considered for use in a clinical setting.
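The dimensionality-reduction step can be approximated with plain linear PCA on a patch time series, projecting frames onto the first principal component to obtain a 1-D signal; the paper's illumination-invariant kernel PCA is only loosely mirrored by this sketch.

```python
import numpy as np

def principal_signal(frames):
    # frames: (n_frames, n_pixels). Project each frame onto the first
    # principal component of the centered time series to get a 1-D signal.
    centered = frames - frames.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[0]
```

For a patch dominated by diaphragm-like motion, the recovered signal tracks the breathing waveform up to sign and scale; the sign ambiguity is why robust combination across patches is needed.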


Subjects
Respiration; Algorithms; Fluoroscopy; Humans; Image Processing, Computer-Assisted; Motion; Unsupervised Machine Learning; X-Rays
11.
MAGMA ; 30(2): 189-202, 2017 Apr.
Article in English | MEDLINE | ID: mdl-27822655

ABSTRACT

OBJECTIVES: Our aim was to demonstrate the benefits of using locally low-rank (LLR) regularization for the compressed sensing reconstruction of highly accelerated quantitative water-fat MRI, and to validate fat fraction (FF) and [Formula: see text] relaxation against reference parallel imaging in the abdomen. MATERIALS AND METHODS: Reconstructions using spatial sparsity regularization (SSR) were compared to reconstructions with LLR and the combination of both (LLR+SSR) for up to sevenfold accelerated 3-D bipolar multi-echo GRE imaging. For ten volunteers, the agreement with the reference was assessed in FF and [Formula: see text] maps. RESULTS: LLR regularization showed superior noise and artifact suppression compared to reconstructions using SSR. Remaining residual artifacts were further reduced in combination with SSR. Correlation with the reference was excellent for FF with [Formula: see text] = 0.99 (all methods) and good for [Formula: see text] with [Formula: see text] = [0.93, 0.96, 0.95] for SSR, LLR and LLR+SSR. The linear regression gave slope and bias (%) of (0.99, 0.50), (1.01, 0.19) and (1.01, 0.10), and the hepatic FF/[Formula: see text] standard deviation was 3.5%/12.1 s[Formula: see text], 1.9%/6.4 s[Formula: see text] and 1.8%/6.3 s[Formula: see text] for SSR, LLR and LLR+SSR, indicating the least bias and highest SNR for LLR+SSR. CONCLUSION: A novel reconstruction using both spatial and spectral regularization allows obtaining accurate FF and [Formula: see text] maps for prospectively highly accelerated acquisitions.
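LLR regularization is typically enforced by singular-value soft-thresholding of local Casorati blocks (pixels x echoes); a minimal sketch, with the threshold tau as a free parameter:

```python
import numpy as np

def svt(block, tau):
    # Singular-value soft-thresholding: shrink each singular value by tau
    # and clip at zero, which promotes low rank in the local block.
    u, s, vt = np.linalg.svd(block, full_matrices=False)
    return u @ np.diag(np.maximum(s - tau, 0.0)) @ vt
```

In an iterative reconstruction, this operator would be applied to every local block as the proximal step of the LLR penalty, alternating with data-consistency updates.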


Subjects
Adipose Tissue/diagnostic imaging; Image Processing, Computer-Assisted; Imaging, Three-Dimensional; Magnetic Resonance Imaging/methods; Adipose Tissue/metabolism; Adult; Algorithms; Artifacts; Echo-Planar Imaging; Female; Humans; Image Enhancement; Image Interpretation, Computer-Assisted; Male; Middle Aged; Models, Statistical; Programming Languages; Signal-To-Noise Ratio; Water
12.
Int J Comput Assist Radiol Surg ; 12(1): 77-90, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27495998

ABSTRACT

PURPOSE: During a standard fracture reduction and fixation procedure of the distal radius, only fluoroscopic images are available for planning of the screw placement and monitoring of the drill bit trajectory. Our prototype intra-operative framework integrates planning and drill guidance for a simplified and improved planning transfer. METHODS: Guidance information is extracted using a video camera mounted onto a surgical drill. Real-time feedback of the drill bit position is provided using an augmented view of the planning X-rays. We evaluate the accuracy of the placed screws on plastic bones and on healthy and fractured forearm specimens. We also investigate the difference in accuracy between guided and freehand screw placement. Moreover, the accuracy of the real-time position feedback of the drill bit is evaluated. RESULTS: A total of 166 screws were placed. On 37 plastic bones, the obtained accuracy was [Formula: see text] mm, [Formula: see text] and [Formula: see text] in tip position and orientation (azimuth and elevation), respectively. On the three healthy forearm specimens, the obtained accuracy was [Formula: see text] mm, [Formula: see text] and [Formula: see text]. On the two fractured specimens, we attained [Formula: see text] mm, [Formula: see text] and [Formula: see text]. When screw plans were applied freehand (without our guidance system), the achieved accuracy was [Formula: see text] mm, [Formula: see text], while when they were transferred under guidance, we obtained [Formula: see text] mm, [Formula: see text]. CONCLUSIONS: Our results show that our framework can be expected to increase the accuracy of screw positioning and to improve robustness with respect to freehand placement.


Subjects
Bone Screws; Fracture Fixation, Internal/methods; Radius Fractures/diagnostic imaging; Radius Fractures/surgery; Surgery, Computer-Assisted/methods; Fluoroscopy; Humans; Intraoperative Period; Models, Anatomic; Phantoms, Imaging; Radiography
13.
Retina ; 36 Suppl 1: S93-S101, 2016 Dec.
Article in English | MEDLINE | ID: mdl-28005667

ABSTRACT

PURPOSE: To develop a robust, sensitive, and fully automatic algorithm to quantify diabetes-related capillary dropout using optical coherence tomography (OCT) angiography (OCTA). METHODS: A 1,050-nm wavelength, 400 kHz A-scan rate swept-source optical coherence tomography prototype was used to perform volumetric OCTA imaging over 3 mm × 3 mm fields in normal controls (n = 5), patients with diabetes but without diabetic retinopathy (DR) (n = 7), patients with nonproliferative diabetic retinopathy (NPDR) (n = 9), and patients with proliferative diabetic retinopathy (PDR) (n = 5); for each patient, one eye was imaged. A fully automatic algorithm to quantify intercapillary areas was developed. RESULTS: Of the 26 evaluated eyes, the segmentation was successful in 22 eyes (85%). The mean values of the 10 and 20 largest intercapillary areas, either including or excluding the foveal avascular zone, showed a consistent trend of increasing size from normal control eyes, to eyes of patients with diabetes but without DR, to NPDR eyes, and finally to PDR eyes. CONCLUSION: OCTA-based screening and monitoring of patients with diabetic retinopathy is critically dependent on automated vessel analysis. The algorithm presented was able to automatically extract an intercapillary area-based metric in patients having various stages of diabetic retinopathy. Intercapillary area-based approaches are likely more sensitive to early-stage capillary dropout than vascular density-based methods.
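The intercapillary-area metric can be sketched as connected-component labeling of the non-vessel regions of a binarized vessel map, reporting region sizes in descending order; 4-connectivity and the pure-Python flood fill are illustrative choices, not the paper's exact segmentation pipeline.

```python
def intercapillary_areas(vessel_mask):
    # vessel_mask: 2-D list of bools, True where a vessel was segmented.
    # Label 4-connected non-vessel regions with an iterative flood fill.
    h, w = len(vessel_mask), len(vessel_mask[0])
    seen = [[False] * w for _ in range(h)]
    areas = []
    for i in range(h):
        for j in range(w):
            if vessel_mask[i][j] or seen[i][j]:
                continue
            stack, size = [(i, j)], 0
            seen[i][j] = True
            while stack:
                y, x = stack.pop()
                size += 1
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and not vessel_mask[ny][nx] and not seen[ny][nx]:
                        seen[ny][nx] = True
                        stack.append((ny, nx))
            areas.append(size)
    return sorted(areas, reverse=True)
```

The mean of the N largest returned areas (optionally excluding the foveal avascular zone) would then serve as the dropout metric.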


Subjects
Capillaries/diagnostic imaging; Diabetes Mellitus, Type 1/diagnostic imaging; Diabetes Mellitus, Type 2/diagnostic imaging; Diabetic Retinopathy/diagnostic imaging; Retinal Vessels/diagnostic imaging; Algorithms; Case-Control Studies; Humans; Retrospective Studies; Tomography, Optical Coherence/methods
14.
Retina ; 36 Suppl 1: S118-S126, 2016 Dec.
Article in English | MEDLINE | ID: mdl-28005670

ABSTRACT

PURPOSE: Currently available optical coherence tomography angiography systems provide information about blood flux but only limited information about blood flow speed. The authors develop a method for mapping the previously proposed variable interscan time analysis (VISTA) algorithm into a color display that encodes relative blood flow speed. METHODS: Optical coherence tomography angiography was performed with a 1,050 nm, 400 kHz A-scan rate, swept-source optical coherence tomography system using a protocol of 5 repeated B-scans. Variable interscan time analysis was used to compute the optical coherence tomography angiography signal from B-scan pairs having 1.5-millisecond and 3.0-millisecond interscan times. The resulting VISTA data were then mapped to a color space for display. RESULTS: The authors evaluated the VISTA visualization algorithm in normal eyes (n = 2), nonproliferative diabetic retinopathy eyes (n = 6), proliferative diabetic retinopathy eyes (n = 3), geographic atrophy eyes (n = 4), and exudative age-related macular degeneration eyes (n = 2). All eyes showed blood flow speed variations, and all eyes with pathology showed abnormal blood flow speeds compared with controls. CONCLUSION: The authors developed a novel method for mapping VISTA into a color display, allowing visualization of relative blood flow speeds. The method was found useful, in a small case series, for visualizing blood flow speeds in a variety of ocular diseases and serves as a step toward quantitative optical coherence tomography angiography.


Subjects
Diabetic Retinopathy/physiopathology; Geographic Atrophy/physiopathology; Algorithms; Blood Flow Velocity/physiology; Case-Control Studies; Choroid/blood supply; Computed Tomography Angiography/methods; Diabetic Retinopathy/diagnostic imaging; Geographic Atrophy/diagnostic imaging; Humans; Middle Aged; Multimodal Imaging; Tomography, Optical Coherence/methods
15.
Med Phys ; 43(12): 6455, 2016 Dec.
Article in English | MEDLINE | ID: mdl-27908185

ABSTRACT

PURPOSE: Recent reports indicate that model-based iterative reconstruction methods may improve image quality in computed tomography (CT). One difficulty with these methods is the number of options available to implement them, including the selection of the forward projection model and the penalty term. Currently, the literature is fairly scarce in terms of guidance regarding this selection step, whereas these options impact image quality. Here, the authors investigate the merits of three forward projection models that rely on linear interpolation: the distance-driven method, Joseph's method, and the bilinear method. The authors' selection is motivated by three factors: (1) in CT, linear interpolation is often seen as a suitable trade-off between discretization errors and computational cost, (2) the first two methods are popular with manufacturers, and (3) the third method enables assessing the importance of a key assumption in the other methods. METHODS: One approach to evaluate forward projection models is to inspect their effect on discretized images, as well as the effect of their transpose on data sets, but significance of such studies is unclear since the matrix and its transpose are always jointly used in iterative reconstruction. Another approach is to investigate the models in the context they are used, i.e., together with statistical weights and a penalty term. Unfortunately, this approach requires the selection of a preferred objective function and does not provide clear information on features that are intrinsic to the model. The authors adopted the following two-stage methodology. First, the authors analyze images that progressively include components of the singular value decomposition of the model in a reconstructed image without statistical weights and penalty term. Next, the authors examine the impact of weights and penalty on observed differences. 
RESULTS: Image quality metrics were investigated for 16 different fan-beam imaging scenarios that enabled probing various aspects of all models. The metrics include a surrogate for computational cost, as well as bias, noise, and an estimation task, all at matched resolution. The analysis revealed fundamental differences in terms of both bias and noise. Task-based assessment appears to be required to appreciate the differences in noise; the estimation task the authors selected showed that these differences balance out to yield similar performance. Some scenarios highlighted merits for the distance-driven method in terms of bias but with an increase in computational cost. Three combinations of statistical weights and penalty term showed that the observed differences persist, although a strong edge-preserving penalty can dramatically reduce their magnitude. CONCLUSIONS: In many scenarios, Joseph's method seems to offer an interesting compromise between accuracy and computational effort. The distance-driven method offers the possibility to reduce bias but with an increase in computational cost. The bilinear method indicated that a key assumption in the other two methods is highly robust. Lastly, a strong edge-preserving penalty can act as a compensator for insufficiencies in the forward projection model, bringing all models to similar levels in the most challenging imaging scenarios. Also, the authors find that their evaluation methodology helps in appreciating how the model, statistical weights, and penalty term interact.
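Joseph's method, one of the three models compared, can be sketched in 2-D: march along the ray's dominant axis in unit steps and linearly interpolate between the two neighboring pixels along the other axis. Only the x-dominant case is shown; a complete implementation mirrors it for y-dominant rays, and boundary handling here is a simplification.

```python
import numpy as np

def joseph_projection(img, x0, y0, x1, y1):
    # Line integral through img along the ray (x0, y0) -> (x1, y1),
    # assuming x is the dominant axis (|dx| >= |dy|).
    h, w = img.shape
    dx, dy = x1 - x0, y1 - y0
    if abs(dx) < abs(dy):
        raise NotImplementedError("sketch handles x-dominant rays only")
    step = 1 if dx > 0 else -1
    slope = dy / dx
    total = 0.0
    for x in range(int(round(x0)), int(round(x1)) + step, step):
        y = y0 + (x - x0) * slope
        j = int(np.floor(y))
        f = y - j  # fractional offset for linear interpolation in y
        if 0 <= x < w and 0 <= j and j + 1 < h:
            total += (1.0 - f) * img[j, x] + f * img[j + 1, x]
    return total
```

The distance-driven method would instead integrate overlapping pixel/detector footprints, and the bilinear method would interpolate in both axes at fixed sampling steps.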


Subjects
Image Processing, Computer-Assisted/methods; Models, Theoretical; Tomography, X-Ray Computed; Linear Models; Signal-To-Noise Ratio; Time Factors
16.
J Biomed Opt ; 21(9): 96007, 2016 09 01.
Article in English | MEDLINE | ID: mdl-27637005

ABSTRACT

Retinal image quality assessment (RIQA) is an essential step in automated screening systems to avoid misdiagnosis caused by processing poor-quality retinal images. A no-reference transform-based RIQA algorithm is introduced that assesses images based on five clarity and content quality issues: sharpness, illumination, homogeneity, field definition, and content. Transform-based RIQA algorithms have the advantage of considering retinal structures while being computationally inexpensive. Wavelet-based features are proposed to evaluate the sharpness and overall illumination of the images. A retinal saturation channel is designed and used along with wavelet-based features for homogeneity assessment. The presented sharpness and illumination features are utilized to assure adequate field definition, whereas color information is used to exclude nonretinal images. Several publicly available datasets of varying quality grades are utilized to evaluate the feature sets, resulting in areas under the receiver operating characteristic curve above 0.99 for each of the individual feature sets. The overall quality is assessed by a classifier that uses the collective features as an input vector. The classification results show superior performance of the algorithm in comparison to other methods from the literature. Moreover, the algorithm efficiently and comprehensively addresses various quality issues and is suitable for automatic screening systems.
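A wavelet-style sharpness cue of the kind described can be sketched with one level of a 2-D Haar decomposition, using the ratio of detail-band energy to total energy (sharper images put more energy into the detail bands); treating this single ratio as the sharpness feature is our simplification, not the paper's full feature set.

```python
import numpy as np

def haar_detail_energy_ratio(img):
    # One-level 2-D Haar split into approximation (ll) and three detail bands,
    # computed from non-overlapping 2x2 blocks (odd borders are cropped).
    img = img[:img.shape[0] // 2 * 2, :img.shape[1] // 2 * 2].astype(float)
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0
    details = ((a - b + c - d) ** 2 + (a + b - c - d) ** 2 + (a - b - c + d) ** 2) / 16.0
    total = (ll ** 2).sum() + details.sum()
    return details.sum() / total if total else 0.0
```

A constant (maximally blurred) image scores 0, while high-frequency content drives the ratio up; a classifier would consume this alongside the illumination and homogeneity features.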


Subjects
Image Processing, Computer-Assisted/methods; Image Processing, Computer-Assisted/standards; Retina/diagnostic imaging; Algorithms; Humans; Wavelet Analysis
17.
PLoS One ; 11(8): e0159337, 2016.
Article in English | MEDLINE | ID: mdl-27500636

ABSTRACT

We derive a physically realistic model for generating virtual transillumination white-light microscopy images from epi-fluorescence measurements of thick, unsectioned tissue. We demonstrate this technique by generating virtual transillumination H&E images of unsectioned human breast tissue from epi-fluorescence multiphoton microscopy data. The virtual transillumination algorithm is shown to enable improved contrast and color accuracy compared with previous color mapping methods. Finally, we present an open-source implementation of the algorithm in OpenGL, enabling real-time GPU-based generation of virtual transillumination microscopy images using conventional fluorescence microscopy systems.
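A physically realistic virtual transillumination mapping of this kind can be sketched as Beer-Lambert attenuation: each fluorescence channel (nuclear/hematoxylin-like, cytoplasmic/eosin-like) attenuates the virtual white light according to a per-color absorption spectrum. The coefficients and scale factors below are illustrative assumptions, not the calibrated values from the paper.

```python
import math

# Assumed per-RGB-channel absorption coefficients (illustrative only).
EPS_H = (0.65, 0.70, 0.29)  # hematoxylin-like: absorbs red/green, passes blue
EPS_E = (0.07, 0.99, 0.11)  # eosin-like: absorbs mostly green


def virtual_he(nuclear, cyto, k_h=2.0, k_e=2.0):
    """Map two fluorescence channel values (in [0, 1]) to a virtual
    transillumination RGB pixel via Beer-Lambert attenuation:

        T_c = exp(-(k_h * nuclear * eps_H[c] + k_e * cyto * eps_E[c]))

    Background (no stain signal) transmits fully and appears white.
    """
    return tuple(
        math.exp(-(k_h * nuclear * eh + k_e * cyto * ee))
        for eh, ee in zip(EPS_H, EPS_E)
    )
```

With these assumed spectra, a purely nuclear pixel comes out blue-purple (blue transmits most), mimicking hematoxylin in a brightfield H&E slide.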


Subjects
Breast Neoplasms/diagnostic imaging; Breast/diagnostic imaging; Eosine Yellowish-(YS)/chemistry; Hematoxylin/chemistry; Image Interpretation, Computer-Assisted/methods; Microscopy, Fluorescence/methods; Transillumination/methods; Algorithms; Breast/pathology; Breast Neoplasms/pathology; Female; Humans
18.
Int J Biomed Imaging ; 2016: 2502486, 2016.
Article in English | MEDLINE | ID: mdl-27516772

ABSTRACT

Objective. To demonstrate a novel approach to compensating overexposure artifacts in CT scans of the knees without attaching any supporting appliances to the patient. C-Arm CT systems offer the opportunity to perform weight-bearing knee scans on standing patients to diagnose diseases like osteoarthritis. However, one serious issue is overexposure of the detector in regions close to the patella, which cannot be addressed with common techniques. Methods. A Kinect camera is used to algorithmically remove overexposure artifacts close to the knee surface. Overexposed near-surface knee regions are corrected by extrapolating the absorption values from more reliable projection data. To achieve this, we develop a cross-calibration procedure to transform surface points from Kinect to CT voxel coordinates. Results. Artifacts in both knee phantoms are significantly reduced in the reconstructed data, and a major part of the truncated regions is restored. Conclusion. The results emphasize the feasibility of the proposed approach. The accuracy of the cross-calibration procedure can be increased to further improve correction results. Significance. The correction method can be extended to a multi-Kinect setup for use in real-world scenarios. Using depth cameras does not require prior scans and offers the possibility of a temporally synchronized correction of overexposure artifacts.
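Once the cross-calibration has been estimated, mapping a Kinect surface point into CT voxel coordinates reduces to applying a 4x4 homogeneous transform. A minimal sketch of that application step (the transform `T` itself would come from the cross-calibration procedure, which is not reproduced here):

```python
def apply_transform(T, p):
    """Map a 3-D point p = (x, y, z) through a 4x4 homogeneous transform T,
    e.g. from Kinect camera coordinates into CT voxel coordinates."""
    x, y, z = p
    v = (x, y, z, 1.0)  # homogeneous coordinates
    out = [sum(T[r][c] * v[c] for c in range(4)) for r in range(3)]
    w = sum(T[3][c] * v[c] for c in range(4))  # normalize (1.0 for rigid T)
    return tuple(o / w for o in out)
```

For a rigid cross-calibration, `T` combines a 3x3 rotation block with a translation column, and the homogeneous row is `(0, 0, 0, 1)`.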

19.
Med Image Anal ; 34: 52-64, 2016 12.
Article in English | MEDLINE | ID: mdl-27133269

ABSTRACT

Personalization is the process of fitting a model to patient data, a critical step towards the application of multi-physics computational models in clinical practice. Designing robust personalization algorithms is often a tedious, time-consuming, model- and data-specific process. We propose to use artificial intelligence concepts to learn this task, inspired by how human experts manually perform it. The problem is reformulated in terms of reinforcement learning. In an off-line phase, Vito, our self-taught artificial agent, learns a representative decision process model through exploration of the computational model: it learns how the model behaves under changes of parameters. The agent then automatically learns an optimal strategy for on-line personalization. The algorithm is model-independent; applying it to a new model requires only adjusting a few hyper-parameters of the agent and defining the observations to match. Full knowledge of the model itself is not required. Vito was tested in a synthetic scenario, showing that it could learn how to optimize cost functions generically. Vito was then applied to the inverse problem of cardiac electrophysiology and the personalization of a whole-body circulation model. The obtained results suggested that Vito could achieve equivalent, if not better, goodness of fit than standard methods, while being more robust (up to 11% higher success rates) and converging faster (up to seven times). Our artificial intelligence approach could thus make personalization algorithms generalizable and self-adaptable to any patient and any model.
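The off-line phase described above is, at its core, reinforcement learning over the computational model's behavior. As a generic illustration of the learning loop, here is tabular Q-learning on an abstract environment; the state/action encoding and the environment are illustrative assumptions, not Vito's actual design.

```python
import random


def q_learning(env_step, n_states, n_actions, episodes=200,
               alpha=0.1, gamma=0.95, epsilon=0.1, seed=0):
    """Generic tabular Q-learning.

    env_step(state, action) -> (next_state, reward, done) defines the
    environment. Returns the learned Q-table (n_states x n_actions).
    """
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy exploration.
            if rng.random() < epsilon:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda i: Q[s][i])
            s2, r, done = env_step(s, a)
            # Temporal-difference update toward the bootstrapped target.
            target = r if done else r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q
```

On a toy three-state chain where moving "right" from the middle state yields a reward, the agent learns to prefer the rewarded action in every state.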


Subjects
Artificial Intelligence; Computer Simulation; Precision Medicine/methods; Humans; Physics; Reproducibility of Results; Sensitivity and Specificity
20.
IEEE Trans Med Imaging ; 35(5): 1217-1228, 2016 05.
Article in English | MEDLINE | ID: mdl-27046846

ABSTRACT

Robust and fast solutions for anatomical object detection and segmentation support the entire clinical workflow from diagnosis, patient stratification, therapy planning, and intervention to follow-up. Current state-of-the-art techniques for parsing volumetric medical image data are typically based on machine learning methods that exploit large annotated image databases. Two main challenges need to be addressed: the efficiency of scanning high-dimensional parametric spaces, and the need for representative image features, which require significant manual engineering effort. We propose a pipeline for object detection and segmentation in the context of volumetric image parsing, solving a two-step learning problem: anatomical pose estimation and boundary delineation. For this task we introduce Marginal Space Deep Learning (MSDL), a novel framework exploiting both the strengths of efficient object parametrization in hierarchical marginal spaces and the automated feature design of Deep Learning (DL) network architectures. In the 3D context, the application of deep learning systems is limited by the very high complexity of the parametrization. More specifically, nine parameters are necessary to describe a restricted affine transformation in 3D, resulting in a prohibitive number of scanning hypotheses (billions). The mechanism of marginal space learning provides excellent run-time performance by learning classifiers in clustered, high-probability regions of spaces of gradually increasing dimensionality. To further increase computational efficiency and robustness, our system learns sparse adaptive data sampling patterns that automatically capture the structure of the input. Given the object localization, we propose a DL-based active shape model to estimate the non-rigid object boundary.
Experimental results are presented on the aortic valve in ultrasound using an extensive dataset of 2891 volumes from 869 patients, showing significant improvements of up to 45.2% over the state of the art. To our knowledge, this is the first successful demonstration of the potential of DL for detection and segmentation in full 3D data with parametrized representations.
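The marginal space mechanism described above can be sketched as a cascade: each stage scores partial parameter hypotheses (e.g. position, then position + orientation, then + scale), and only the top candidates are expanded into the next, higher-dimensional space. The scoring functions below are toy stand-ins for the learned classifiers; this is a pruning-idea sketch, not the MSDL implementation.

```python
def marginal_space_search(seed_hyps, stages, keep=3):
    """Search spaces of increasing dimensionality with pruning.

    Each stage is (augment, score): `augment` expands a surviving hypothesis
    into the next marginal space; `score` ranks candidates, and only the
    top-`keep` survive, so the full parameter space is never enumerated.
    """
    survivors = list(seed_hyps)
    for augment, score in stages:
        expanded = [h2 for h in survivors for h2 in augment(h)]
        survivors = sorted(expanded, key=score, reverse=True)[:keep]
    return survivors


# Toy 2-D example: find position x = 4, then scale s = 2.
stages = [
    # Stage 1: position space, scored by distance to the "true" position.
    (lambda p: [(p,)], lambda t: -abs(t[0] - 4)),
    # Stage 2: position + scale space, scored jointly.
    (lambda t: [t + (s,) for s in (1, 2, 3)],
     lambda t: -abs(t[0] - 4) - abs(t[1] - 2)),
]
best = marginal_space_search(range(10), stages, keep=3)
```

Instead of scoring all 10 x 3 = 30 position/scale combinations, only the three best positions (9 combinations) reach the joint stage, which is the source of marginal space learning's run-time advantage.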


Subjects
Image Processing, Computer-Assisted/methods; Machine Learning; Pattern Recognition, Automated/methods; Algorithms; Aortic Valve/diagnostic imaging; Databases, Factual; Echocardiography, Transesophageal; Humans; Neural Networks, Computer