Results 1 - 3 of 3
1.
Med Image Comput Comput Assist Interv; 14226: 413-422, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38737498

ABSTRACT

Mitigating variations in image appearance caused by differences in computed tomography (CT) acquisition and reconstruction parameters is a challenging inverse problem. We present CTFlow, a normalizing-flow-based method for harmonizing CT scans acquired and reconstructed with different doses and kernels to a target scan. Unlike existing state-of-the-art image harmonization approaches that generate only a single output, flow-based methods learn the explicit conditional density and can output the entire spectrum of plausible reconstructions, reflecting the underlying uncertainty of the problem. We demonstrate how normalizing flows reduce variability both in image quality and in the performance of a machine learning algorithm for lung nodule detection. We evaluate CTFlow by 1) comparing it with other techniques on a denoising task using the AAPM-Mayo Clinical Low-Dose CT Grand Challenge dataset, and 2) demonstrating consistency in nodule detection performance across 186 real-world low-dose CT chest scans acquired at our institution. CTFlow performs better on the denoising task in terms of both peak signal-to-noise ratio and perceptual quality metrics, and it produces more consistent predictions across all dose and kernel conditions than generative adversarial network (GAN)-based image harmonization on a lung nodule detection task. The code is available at https://github.com/hsu-lab/ctflow.
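
The abstract does not detail CTFlow's architecture, but the core idea it names, learning an explicit conditional density with an invertible map, can be sketched with a single conditional affine-coupling layer. The following PyTorch sketch is an illustrative assumption, not the paper's method: the patch-based framing, layer sizes, and all names are ours.

```python
# Minimal sketch of a conditional normalizing flow for harmonization,
# assuming a PyTorch setup. One affine coupling layer conditioned on the
# source image; a real model would stack many such layers over full images.
import math
import torch
import torch.nn as nn

class ConditionalCoupling(nn.Module):
    """Affine coupling: half of the target patch is rescaled and shifted
    using parameters predicted from the other half plus the source patch."""
    def __init__(self, dim, cond_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim // 2 + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, dim))          # predicts log-scale and shift
    def forward(self, x, cond):
        x1, x2 = x.chunk(2, dim=-1)
        log_s, t = self.net(torch.cat([x1, cond], dim=-1)).chunk(2, dim=-1)
        log_s = torch.tanh(log_s)            # bound scales for stability
        y2 = x2 * log_s.exp() + t
        return torch.cat([x1, y2], dim=-1), log_s.sum(dim=-1)

# Training maximizes log p(target | source): push target patches through the
# flow under source conditioning and score the result against a unit Gaussian.
dim, cond_dim = 64, 64                       # flattened 8x8 patches (assumption)
flow = ConditionalCoupling(dim, cond_dim)
target = torch.randn(16, dim)                # stand-ins for real CT patch data
source = torch.randn(16, cond_dim)
z, log_det = flow(target, source)
log_pz = -0.5 * (z ** 2).sum(dim=-1) - 0.5 * dim * math.log(2 * math.pi)
loss = -(log_pz + log_det).mean()            # negative conditional log-likelihood
loss.backward()
# Sampling the "spectrum of plausible reconstructions" would invert the flow
# on draws z ~ N(0, I) under the same source conditioning.
```

Because the coupling transform is invertible with a cheap log-determinant, the same network supports both exact likelihood training and sampling, which is what distinguishes this family from single-output harmonization models.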

2.
Opt Express; 29(7): 9878-9896, 2021 Mar 29.
Article in English | MEDLINE | ID: mdl-33820153

ABSTRACT

Creating immersive 3D stereoscopic, autostereoscopic, and light-field experiences is becoming the center point of optical design for future head-mounted displays and light-field displays. However, despite advances in 3D and light-field displays, there is no consensus on the quantized depth levels necessary for such emerging displays in stereoscopic or monocular modalities. Here we start from psychophysical theories and work toward defining and prioritizing the quantized levels of depth that would saturate human depth perception. We propose a general optimization framework that locates the depth levels in a globally optimal way for band-limited displays. While the original problem is computationally intractable, we find a tractable reformulation as maximally covering a region of interest with a selection of hypographs corresponding to monocular depth-of-field profiles. The results indicate that, on average, 1731 stereoscopic and 7 monocular depth levels (distributed optimally from 25 cm to infinity) would saturate visual depth perception, such that adding further depth levels yields negligible improvement. Furthermore, to minimize the monocular error across the entire population, the first three depth levels should be allocated at distances of (148), then (83, 170), then (53, 90, 170) from the face plane. The study further discusses the 3D spatial profile of the quantized stereoscopic and monocular depth levels and provides fundamental guidelines for designing optimal near-eye displays, light-field monitors, and 3D screens.
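
The "maximally covering a region of interest with hypographs" reformulation can be illustrated with a toy greedy selection: each candidate focal level contributes the area under its depth-of-field profile, and levels are added one at a time to maximize the area under the upper envelope. The Gaussian profile, the diopter grid, and the greedy strategy below are our assumptions for illustration; the paper derives its profiles psychophysically and solves for a global optimum.

```python
# Toy sketch: greedily choose depth levels whose depth-of-field profiles
# jointly cover the most area over the depth range of interest.
import numpy as np

depths = np.linspace(0.0, 4.0, 400)       # stimulus depths in diopters (4 D = 25 cm)
step = depths[1] - depths[0]
candidates = np.linspace(0.0, 4.0, 80)    # candidate focal levels

def profile(level, width=0.3):
    """Toy monocular depth-of-field profile: sharpness vs. stimulus depth."""
    return np.exp(-0.5 * ((depths - level) / width) ** 2)

def coverage(levels):
    """Area under the upper envelope of the chosen hypographs."""
    if not levels:
        return 0.0
    envelope = np.max([profile(c) for c in levels], axis=0)
    return float(envelope.sum() * step)

chosen = []
for _ in range(7):                         # 7 monocular levels, per the abstract
    best = max(candidates, key=lambda c: coverage(chosen + [c]))
    chosen.append(best)
print([round(float(c), 2) for c in chosen])
```

Since this coverage objective is monotone and submodular, the greedy heuristic carries the standard (1 - 1/e) approximation guarantee, a common fallback when the exact covering problem is intractable.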


Subject(s)
Depth Perception/physiology; Image Enhancement/instrumentation; Imaging, Three-Dimensional/instrumentation; Vision, Binocular/physiology; Computer-Aided Design; Data Display; Humans; Man-Machine Systems; Models, Theoretical
3.
Article in English | MEDLINE | ID: mdl-32606487

ABSTRACT

We present an interpretable end-to-end computer-aided detection and diagnosis tool for pulmonary nodules on computed tomography (CT) using deep learning-based methods. The proposed network consists of a nodule detector and a nodule malignancy classifier. We used RetinaNet to train the nodule detector on 7,607 slices containing 4,234 nodule annotations and validated it on 2,323 slices containing 1,454 nodule annotations drawn from the LIDC-IDRI dataset. The average precision for the nodule class on the validation set reached 0.24 at an intersection over union (IoU) of 0.5. The trained detector was externally validated on a UCLA dataset. We then used a hierarchical semantic convolutional neural network (HSCNN) to classify each nodule as benign or malignant and to generate semantic (radiologist-interpretable) features (e.g., mean diameter, consistency, margin), training the model on 149 nodule-centered patches from diagnostic CTs in the same UCLA dataset. Using 5-fold cross-validation and data augmentation, the mean AUC and mean accuracy for predicting nodule malignancy on the validation set were 0.89 and 0.74, respectively, while the mean accuracies for predicting nodule mean diameter, consistency, and margin were 0.59, 0.74, and 0.75. We have developed an initial end-to-end pipeline that automatically detects nodules ≥ 5 mm on CT studies and labels the identified nodules with radiologist-interpretable features.
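
The wiring of such a two-stage pipeline, detector boxes feeding nodule-centered patches into a multi-task classifier, can be sketched as below. Both model stages are stubbed out; the function names, the 64x64 patch size, and the dummy outputs are illustrative assumptions, not the paper's implementation.

```python
# Skeleton of a detect-then-classify nodule pipeline: a detector proposes
# boxes on a CT slice, then each nodule-centered patch gets a malignancy
# score plus semantic (radiologist-interpretable) feature labels.
import numpy as np

def detect_nodules(ct_slice):
    """Stand-in for the RetinaNet stage: returns (x, y, w, h, score) boxes."""
    return [(120, 96, 14, 14, 0.91)]      # dummy detection for illustration

def classify_patch(patch):
    """Stand-in for the HSCNN stage: malignancy plus semantic feature heads."""
    return {"malignant": 0.32, "diameter": "5-10 mm",
            "consistency": "solid", "margin": "smooth"}

def run_pipeline(ct_slice, patch_size=64, score_thresh=0.5):
    results = []
    for x, y, w, h, score in detect_nodules(ct_slice):
        if score < score_thresh:
            continue
        cx, cy = x + w // 2, y + h // 2   # crop a nodule-centered patch
        half = patch_size // 2
        patch = ct_slice[max(cy - half, 0):cy + half,
                         max(cx - half, 0):cx + half]
        results.append({"box": (x, y, w, h), "score": score,
                        **classify_patch(patch)})
    return results

print(run_pipeline(np.zeros((512, 512), dtype=np.float32)))
```

Keeping the semantic feature heads alongside the malignancy head is what makes the pipeline interpretable: each prediction arrives with the diameter, consistency, and margin labels a radiologist would use to justify it.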
