Results 1 - 10 of 10
1.
Int J Comput Assist Radiol Surg ; 19(6): 1121-1128, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38598142

ABSTRACT

PURPOSE: The standard of care for prostate cancer (PCa) diagnosis is the histopathological analysis of tissue samples obtained via transrectal ultrasound (TRUS)-guided biopsy. Models built with deep neural networks (DNNs) hold the potential for direct PCa detection from TRUS, which would enable targeted biopsy and thereby improve outcomes. Yet, there are ongoing challenges with training robust models, stemming from issues such as noisy labels, out-of-distribution (OOD) data, and limited labeled data. METHODS: This study presents LensePro, a unified method that not only excels in label efficiency but also demonstrates robustness against label noise and OOD data. LensePro comprises two key stages: first, self-supervised learning to extract high-quality feature representations from abundant unlabeled TRUS data and, second, label-noise-tolerant prototype-based learning to classify the extracted features. RESULTS: Using data from 124 patients who underwent systematic prostate biopsy, LensePro achieves an AUROC, sensitivity, and specificity of 77.9%, 85.9%, and 57.5%, respectively, for detecting PCa in ultrasound. Our model is also effective at detecting OOD data at test time, which is critical for clinical deployment. Ablation studies demonstrate that each component of our method improves PCa detection by addressing one of the three challenges, reinforcing the benefits of a unified approach. CONCLUSION: Through comprehensive experiments, LensePro demonstrates state-of-the-art performance for TRUS-based PCa detection. Although further research is necessary to confirm its clinical applicability, LensePro marks a notable advancement in automated computer-aided systems for detecting prostate cancer in ultrasound.
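
As a rough, hypothetical sketch of the second stage described above (prototype-based classification that tolerates label noise), the snippet below assumes feature vectors have already been extracted by a self-supervised encoder; it estimates one prototype per class while down-weighting samples far from their prototype, then classifies by the nearest prototype. This illustrates the general technique, not the authors' implementation.

```python
import numpy as np

def fit_prototypes(features, labels, n_classes=2, n_iters=5):
    """Estimate one prototype per class from (possibly noisy) labels.

    On each iteration, samples far from their class prototype are
    down-weighted, a simple proxy for tolerance to label noise.
    """
    prototypes = np.stack([features[labels == c].mean(axis=0) for c in range(n_classes)])
    for _ in range(n_iters):
        for c in range(n_classes):
            feats_c = features[labels == c]
            d = np.linalg.norm(feats_c - prototypes[c], axis=1)
            w = np.exp(-d / (d.mean() + 1e-8))[:, None]   # down-weight outliers
            prototypes[c] = (w * feats_c).sum(axis=0) / w.sum()
    return prototypes

def predict_nearest_prototype(features, prototypes):
    """Assign each feature vector to the class of its nearest prototype."""
    dists = np.linalg.norm(features[:, None, :] - prototypes[None, :, :], axis=2)
    return dists.argmin(axis=1)
```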


Subject(s)
Neural Networks, Computer , Prostatic Neoplasms , Humans , Male , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology , Prostatic Neoplasms/diagnosis , Image-Guided Biopsy/methods , Sensitivity and Specificity , Ultrasonography/methods , Deep Learning , Ultrasonography, Interventional/methods
2.
Int J Comput Assist Radiol Surg ; 17(9): 1697-1705, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35881210

ABSTRACT

PURPOSE: Ultrasound is the standard of care for guiding systematic biopsy of the prostate. During the biopsy procedure, up to 12 biopsy cores are randomly sampled from six zones within the prostate, and the histopathology of those cores is used to determine the presence and grade of cancer. Histopathology reports only provide statistical information on the presence of cancer and do not normally contain fine-grained information on the distribution of cancer within each core. This limitation hinders the development of machine learning models that detect cancer in ultrasound so that biopsies can be targeted to highly suspicious prostate regions. METHODS: In this paper, we address this challenge by training with noisy labels derived from histopathology. Noisy labels often result in the model overfitting to the training data, limiting its generalizability. To avoid overfitting, we focus on the generalization of the model's features and present an iterative label refinement algorithm that amends the labels gradually. We simultaneously train two classifiers with the same architecture and automatically stop training at the first sign of overfitting. We then use a confident learning approach to clean the data labels and continue training. This process is applied iteratively to the training data and labels until convergence. RESULTS: We illustrate the performance of the proposed method by classifying prostate cancer using a dataset of ultrasound images from 353 biopsy cores obtained from 90 patients. We achieve an area under the curve, sensitivity, specificity, and accuracy of 0.73, 0.80, 0.63, and 0.69, respectively. CONCLUSION: Our approach provides clinicians with a visualization of regions likely to contain cancerous tissue, enabling more accurate biopsy sampling. The results demonstrate that our proposed method produces superior accuracy compared to state-of-the-art methods.
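
The confident-learning cleaning step could look roughly like the following sketch, which assumes out-of-sample predicted probabilities from the co-trained classifiers are available; samples whose confidence in their given label falls below a per-class threshold are flagged and relabeled. This is a simplified, hypothetical illustration of confident learning, not the paper's exact procedure.

```python
import numpy as np

def confident_learning_clean(pred_probs, noisy_labels):
    """Flag and relabel likely mislabeled samples, confident-learning style.

    pred_probs:   (n_samples, n_classes) out-of-sample predicted probabilities.
    noisy_labels: (n_samples,) integer labels derived from histopathology.
    """
    n_classes = pred_probs.shape[1]
    # Per-class self-confidence threshold: average predicted probability of
    # class c over the samples currently labeled as c.
    thresholds = np.array([pred_probs[noisy_labels == c, c].mean()
                           for c in range(n_classes)])
    given_conf = pred_probs[np.arange(len(noisy_labels)), noisy_labels]
    suspect = given_conf < thresholds[noisy_labels]        # likely label errors
    refined = noisy_labels.copy()
    refined[suspect] = pred_probs[suspect].argmax(axis=1)  # relabel with the model's choice
    return refined, suspect
```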


Subject(s)
Image-Guided Biopsy , Prostatic Neoplasms , Biopsy, Large-Core Needle , Humans , Image-Guided Biopsy/methods , Male , Neural Networks, Computer , Prostate/diagnostic imaging , Prostate/pathology , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology
3.
Int J Comput Assist Radiol Surg ; 17(5): 841-847, 2022 May.
Article in English | MEDLINE | ID: mdl-35344123

ABSTRACT

PURPOSE: Ultrasound-guided biopsy plays a major role in prostate cancer (PCa) detection, yet it is limited by a high rate of false negatives and the low diagnostic yield of current systematic, non-targeted approaches. Machine learning models that accurately identify cancerous tissue in ultrasound would help direct sampling toward regions with higher cancer likelihood. A plausible approach is to use the individual ultrasound signals corresponding to a core as inputs and the histopathology diagnosis of the entire core as the label. However, this introduces a significant amount of label noise into training and degrades classification performance. Previously, we suggested that histopathology-reported cancer involvement can be a reasonable approximation of the label noise. METHODS: Here, we propose an involvement-based label refinement (iLR) method to correct corrupted labels and improve cancer classification. The difference between predicted and true cancer involvement is used to guide the label refinement process. We further incorporate iLR into state-of-the-art methods for learning with noisy labels and predicting cancer involvement. RESULTS: We use 258 biopsy cores from 70 patients and demonstrate that our proposed label refinement method improves the performance of multiple noise-tolerant approaches, achieving a balanced accuracy, correlation coefficient, and mean absolute error of 76.7%, 0.68, and 12.4, respectively. CONCLUSIONS: Our key contribution is to leverage a data-centric method for dealing with noisy labels using histopathology reports, and to improve prostate cancer diagnosis through a hierarchical training process with label refinement.
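
One plausible reading of involvement-based refinement is to keep positive labels only on the most cancer-like signals within a cancerous core, up to the histopathology-reported involvement fraction, and relabel the remainder as benign. The sketch below illustrates that idea; it is an interpretation for illustration, not the authors' code.

```python
import numpy as np

def refine_core_labels(signal_probs, core_involvement):
    """Refine signal-level labels within one cancerous biopsy core.

    signal_probs:     predicted cancer probability for each ultrasound signal.
    core_involvement: histopathology-reported cancer fraction of the core (0..1).

    The most cancer-like signals, up to the reported involvement, keep the
    positive label; the remaining signals are relabeled as benign.
    """
    n = len(signal_probs)
    n_pos = max(1, int(round(core_involvement * n)))
    order = np.argsort(signal_probs)[::-1]   # most cancer-like first
    refined = np.zeros(n, dtype=int)
    refined[order[:n_pos]] = 1
    return refined
```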


Subject(s)
Prostate , Prostatic Neoplasms , Humans , Image-Guided Biopsy/methods , Machine Learning , Male , Prostate/diagnostic imaging , Prostate/pathology , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology , Ultrasonography/methods
4.
Int J Comput Assist Radiol Surg ; 17(1): 121-128, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34783976

ABSTRACT

PURPOSE: Systematic prostate biopsy is widely used for cancer diagnosis. The procedure is blind to the underlying prostate tissue micro-structure; hence, it can lead to a high rate of false negatives. A machine learning model that can reliably identify suspicious cancer regions is therefore highly desirable. However, the models proposed to date do not account for the uncertainty in their outputs or in the data, information that could benefit clinical decision making for targeted biopsy. METHODS: We propose a deep network for improved detection of prostate cancer in systematic biopsy that considers both label and model uncertainty. The architecture of our model is based on U-Net, trained with temporal enhanced ultrasound (TeUS) data. We estimate cancer detection uncertainty using test-time augmentation and test-time dropout. We then use uncertainty metrics to report the cancer probability for high-confidence regions, supporting clinical decision making during the biopsy procedure. RESULTS: Experiments for prostate cancer classification include data from 183 prostate biopsy cores of 41 patients. We achieve an area under the curve, sensitivity, specificity, and balanced accuracy of 0.79, 0.78, 0.71, and 0.75, respectively. CONCLUSION: Our key contribution is to automatically estimate model and label uncertainty towards enabling targeted ultrasound-guided prostate biopsy. We anticipate that such uncertainty information can decrease the number of unnecessary biopsies while increasing cancer yield.
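
Test-time dropout (Monte Carlo dropout) is a standard way to approximate model uncertainty; a minimal PyTorch sketch is shown below, assuming a generic binary-output model. Dropout layers are kept active during inference, several stochastic forward passes are averaged, and the spread across passes serves as an uncertainty estimate. This illustrates the general technique rather than the paper's exact configuration.

```python
import torch

def mc_dropout_predict(model, x, n_passes=20):
    """Monte Carlo (test-time) dropout: repeated stochastic forward passes."""
    model.eval()
    # Re-enable only the dropout layers; other layers stay in eval mode.
    for m in model.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()
    with torch.no_grad():
        probs = torch.stack([torch.sigmoid(model(x)) for _ in range(n_passes)])
    return probs.mean(dim=0), probs.std(dim=0)   # probability estimate, uncertainty
```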


Subject(s)
Prostate , Prostatic Neoplasms , Humans , Image-Guided Biopsy , Magnetic Resonance Imaging , Male , Prostate/diagnostic imaging , Prostatic Neoplasms/diagnostic imaging , Ultrasonography, Interventional , Uncertainty
5.
Int J Comput Assist Radiol Surg ; 15(6): 1023-1031, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32356095

ABSTRACT

PURPOSE: Ultrasound imaging is routinely used in prostate biopsy, which involves obtaining prostate tissue samples using a systematic yet non-targeted approach. This approach is blind to each patient's intraprostatic pathology and, unfortunately, has a high rate of false negatives. METHODS: In this paper, we propose a deep network for improved detection of prostate cancer in systematic biopsy. We address several challenges associated with training such a network: (1) Statistical labels: since a biopsy core's pathology report represents only a statistical distribution of cancer within the core, we use multiple instance learning (MIL) networks to learn from the ultrasound image regions associated with those reports; (2) Limited labels: the number of biopsy cores is limited to at most 12 per patient, so the number of samples available for training a deep network is small. We alleviate this issue by combining Independent Conditional Variational Auto Encoders (ICVAE) with MIL. We train the ICVAE to learn label-invariant features of RF data, which are subsequently used to generate synthetic data for improved training of the MIL network. RESULTS: Our in vivo study includes data from 339 prostate biopsy cores of 70 patients. We achieve an area under the curve, sensitivity, specificity, and balanced accuracy of 0.68, 0.77, 0.55, and 0.66, respectively. CONCLUSION: The proposed approach is generic and can be applied to other scenarios where unlabeled data and noisy training labels are present.
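
For readers unfamiliar with MIL, the toy PyTorch module below shows one common way to pool instance-level features into a core-level ("bag") prediction using attention weighting. The dimensions and architecture are placeholders, and attention pooling is only one of several MIL variants; the paper does not specify this exact design.

```python
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Toy MIL classifier: per-instance features -> attention pooling -> core-level prediction."""
    def __init__(self, in_dim=64, hid_dim=32):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.attn = nn.Sequential(nn.Linear(hid_dim, hid_dim), nn.Tanh(), nn.Linear(hid_dim, 1))
        self.head = nn.Linear(hid_dim, 1)

    def forward(self, bag):                           # bag: (n_instances, in_dim) for one core
        h = self.embed(bag)                           # (n_instances, hid_dim)
        a = torch.softmax(self.attn(h), dim=0)        # attention weights over instances
        z = (a * h).sum(dim=0)                        # weighted bag embedding
        return torch.sigmoid(self.head(z))            # probability the core contains cancer
```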


Subject(s)
Image-Guided Biopsy/methods , Prostate/pathology , Prostatic Neoplasms/pathology , Ultrasonography, Interventional/methods , Feasibility Studies , Humans , Male , Neural Networks, Computer , Prostate/diagnostic imaging , Prostatic Neoplasms/diagnostic imaging , Sensitivity and Specificity
6.
Article in English | MEDLINE | ID: mdl-29505407

ABSTRACT

Temporal enhanced ultrasound (TeUS) is a novel noninvasive imaging paradigm that captures information from a temporal sequence of backscattered US radio frequency data obtained from a fixed tissue location. This technology has been shown to be effective for classification of various in vivo and ex vivo tissue types, including differentiation of prostate cancer from benign tissue. Our previous studies have indicated two primary phenomena that influence TeUS: 1) changes in tissue temperature due to acoustic absorption and 2) micro-vibrations of tissue due to physiological vibration. In this paper, a theoretical formulation for TeUS is first presented. Next, a series of simulations is carried out to investigate micro-vibration as a source of tissue-characterizing information in TeUS. The simulations include finite element modeling of micro-vibration in synthetic phantoms, followed by US image generation during TeUS imaging. The simulations are performed on two media: a sparse array of scatterers, and a medium with pathology-mimicking scatterers that match nuclei distributions extracted from a prostate digital pathology dataset. Statistical analysis of the simulated TeUS data shows its ability to accurately classify tissue types. Our experiments suggest that TeUS can capture microstructural differences in tissues, including scatterer density, as they react to micro-vibrations.
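
To make the micro-vibration mechanism concrete, here is a toy numpy sketch (not the finite element simulation used in the paper) that synthesizes a TeUS-like time series: 1-D scatterers near a fixed depth are displaced by a small sinusoidal physiological vibration, and the coherently summed echo amplitude is recorded frame by frame. All parameter values are arbitrary placeholders.

```python
import numpy as np

def simulate_teus_series(n_scatterers=200, n_frames=100, f0=5e6, c=1540.0,
                         vib_amp=2e-6, vib_freq=2.0, frame_rate=50.0, seed=0):
    """Toy 1-D TeUS signal: coherent echo amplitude from micro-vibrating scatterers."""
    rng = np.random.default_rng(seed)
    depth = 30e-3 + rng.uniform(-1e-3, 1e-3, n_scatterers)   # scatterer depths (m)
    refl = rng.normal(0.0, 1.0, n_scatterers)                # scatterer reflectivities
    series = np.empty(n_frames)
    for i in range(n_frames):
        t = i / frame_rate
        # Small sinusoidal displacement stands in for physiological micro-vibration.
        d = depth + vib_amp * np.sin(2 * np.pi * vib_freq * t + 1e3 * depth)
        phase = 2 * np.pi * f0 * (2.0 * d / c)               # two-way propagation phase
        series[i] = np.sum(refl * np.cos(phase))             # coherent sum at the transducer
    return series
```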


Subject(s)
Image Interpretation, Computer-Assisted/methods , Ultrasonography/methods , Computer Simulation , Databases, Factual , Finite Element Analysis , Humans , Male , Phantoms, Imaging , Prostate/diagnostic imaging , Prostatic Neoplasms/diagnostic imaging
7.
Int J Comput Assist Radiol Surg ; 13(8): 1201-1209, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29589258

ABSTRACT

PURPOSE: We have previously proposed temporal enhanced ultrasound (TeUS) as a new paradigm for tissue characterization. TeUS is based on analyzing a sequence of ultrasound data with deep learning and has been demonstrated to be successful for detection of cancer in ultrasound-guided prostate biopsy. Our aim is to enable the dissemination of this technology to the community for large-scale clinical validation. METHODS: In this paper, we present a unified software framework for near-real-time analysis of an ultrasound data stream using a deep learning solution. The system integrates ultrasound imaging hardware, visualization, and a deep learning back-end to build an accessible, flexible, and robust platform. A client-server approach is used to run computationally expensive algorithms in parallel. We demonstrate the efficacy of the framework using two applications as case studies. First, we show that prostate cancer detection using near-real-time analysis of RF and B-mode TeUS data and deep learning is feasible. Second, we present real-time segmentation of ultrasound prostate data using an integrated deep learning solution. RESULTS: The system is evaluated for cancer detection accuracy on ultrasound data obtained from a large clinical study with 255 biopsy cores from 157 subjects. It is further assessed on an independent dataset of 21 biopsy targets from six subjects. In the first study, we achieve an area under the curve, sensitivity, specificity, and accuracy of 0.94, 0.77, 0.94, and 0.92, respectively, for the detection of prostate cancer. In the second study, we achieve an AUC of 0.85. CONCLUSION: Our results suggest that TeUS-guided biopsy could be effective for the detection of prostate cancer.
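
The client-server decoupling can be illustrated schematically with Python's standard library: an acquisition routine streams frames into a bounded queue while a separate worker runs the (expensive) model, so inference never blocks acquisition. The real framework integrates imaging hardware and a visualization front end; this sketch only shows the queuing pattern, with a placeholder model.

```python
import queue
import threading
import numpy as np

def acquisition_client(frame_q, n_frames=100):
    """Stands in for the ultrasound front end: streams frames to the server."""
    for _ in range(n_frames):
        frame_q.put(np.random.rand(256, 256).astype(np.float32))
    frame_q.put(None)  # sentinel: acquisition finished

def inference_server(frame_q, result_q, model_fn):
    """Consumes frames and runs the (expensive) model without blocking acquisition."""
    while True:
        frame = frame_q.get()
        if frame is None:
            break
        result_q.put(model_fn(frame))

frame_q, result_q = queue.Queue(maxsize=8), queue.Queue()
placeholder_model = lambda f: float(f.mean())   # stand-in for the deep learning back-end
server = threading.Thread(target=inference_server, args=(frame_q, result_q, placeholder_model))
server.start()
acquisition_client(frame_q)
server.join()
print(result_q.qsize(), "frames processed")
```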


Subject(s)
Image-Guided Biopsy/methods , Prostatic Neoplasms/diagnosis , Ultrasonography, Interventional/methods , Algorithms , Biopsy, Large-Core Needle , Computer Systems , Humans , Male , Sensitivity and Specificity
8.
Ultrasound Med Biol ; 42(12): 3043-3049, 2016 Dec.
Article in English | MEDLINE | ID: mdl-27592559

ABSTRACT

Spinal needle injections are guided by fluoroscopy or palpation, resulting in radiation exposure and/or multiple needle re-insertions. Consequently, guiding these procedures with live ultrasound has become more popular, but the images are still challenging to interpret. We introduce a guidance system based on augmentation of ultrasound images with a patient-specific 3-D surface model of the lumbar spine. We assessed the feasibility of the system in a study on 12 patients. The system accurately provided augmentations of the epidural space and the facet joint for all subjects. Following conventional, fluoroscopy-guided needle placement, augmentation accuracy was determined from the electromagnetically tracked final position of the needle. In 9 of 12 cases, the accuracy was considered sufficient for successfully delivering anesthesia. The unsuccessful cases can be attributed to errors in the electromagnetic tracking reference, which could be avoided by a setup that reduces the influence of the metal C-arm.
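
One way to quantify augmentation accuracy of this kind, sketched below under assumed data (surface points of the augmented target and the tracked needle-tip position), is the distance from the tip to the nearest point of the augmented target surface, after mapping the tip into the model's coordinate frame. This is a hypothetical illustration, not the study's evaluation code.

```python
import numpy as np
from scipy.spatial import cKDTree

def to_image_coords(tip_tracker, T_tracker_to_image):
    """Map the tracked needle tip through a 4x4 rigid transform (e.g., from calibration)."""
    return (T_tracker_to_image @ np.append(tip_tracker, 1.0))[:3]

def augmentation_error(target_surface_pts, needle_tip):
    """Distance (in the surface model's units) from the needle tip to the
    nearest point of the augmented target surface."""
    dist, _ = cKDTree(target_surface_pts).query(needle_tip)
    return float(dist)
```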


Subject(s)
Anesthesia, Epidural/methods , Imaging, Three-Dimensional/methods , Ultrasonography, Interventional/methods , Aged , Anesthesia, Epidural/instrumentation , Feasibility Studies , Female , Humans , Lumbar Vertebrae/diagnostic imaging , Male , Reproducibility of Results
9.
Int J Comput Assist Radiol Surg ; 10(9): 1417-1425, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26036968

ABSTRACT

PURPOSE: Facet joint injections of analgesic agents are widely used to treat patients with lower back pain. The current standard of care for guiding the injection is fluoroscopy, which exposes the patient and physician to significant radiation. As an alternative, several ultrasound guidance systems have been proposed, but they have not become the standard of care, mainly because of the difficulty of image interpretation for anesthesiologists unfamiliar with complex spinal sonography. METHODS: We introduce an ultrasound-based navigation system that augments live 2D ultrasound images with a patient-specific statistical model of the spine and relates this information to the position of the tracked injection needle. The model registration accuracy is assessed on ultrasound data from nine subjects who had prior CT images serving as the gold standard for the statistical model. The clinical validity of our method is evaluated on four subjects (of an ongoing in vivo study) who underwent facet joint injections. RESULTS: The statistical model could be registered to the bone structures in the ultrasound volume with an average RMS accuracy of 2.3 ± 0.4 mm. The shape of the individual vertebrae could be estimated from the US volume with an average RMS surface distance error of 1.5 ± 0.4 mm. The facet joints could be identified by the statistical model with an average accuracy of 5.1 ± 1.5 mm. CONCLUSIONS: The results of this initial feasibility assessment suggest that this ultrasound-based system is capable of providing sufficient information to guide facet joint injections. Further clinical studies are warranted.
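
RMS surface distance of the kind reported above is commonly computed as the root mean square of nearest-neighbor distances from the estimated surface to the gold-standard (CT-derived) surface; a minimal sketch, assuming both surfaces are given as point clouds, follows. It illustrates the metric, not the study's specific pipeline.

```python
import numpy as np
from scipy.spatial import cKDTree

def rms_surface_distance(estimated_pts, reference_pts):
    """RMS of nearest-neighbor distances from the estimated vertebra surface
    to the gold-standard (CT-derived) surface points."""
    d, _ = cKDTree(reference_pts).query(estimated_pts)
    return float(np.sqrt(np.mean(d ** 2)))
```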


Subject(s)
Injections, Intra-Articular/methods , Injections, Spinal/methods , Low Back Pain/diagnostic imaging , Low Back Pain/drug therapy , Zygapophyseal Joint/diagnostic imaging , Aged , Algorithms , Equipment Design , Feasibility Studies , Female , Fluoroscopy , Humans , Male , Middle Aged , Models, Statistical , Needles , Reproducibility of Results , Spine , Ultrasonography
10.
IEEE Trans Med Imaging ; 34(2): 652-661, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25350925

ABSTRACT

This work reports the use of ultrasound radio frequency (RF) time series analysis as a method for ultrasound-based classification of malignant breast lesions. The RF time series method is versatile and requires only a few seconds of raw ultrasound data with no need for additional instrumentation. Using RF time series features and a machine learning framework, we generate malignancy maps from the estimated cancer likelihood for decision support in biopsy recommendation. These maps depict the likelihood of malignancy for regions of size 1 mm² within the suspicious lesions. We report an area under the receiver operating characteristic curve of 0.86 (95% confidence interval [CI]: 0.84-0.90) using support vector machines and 0.81 (95% CI: 0.78-0.85) using Random Forests, on 22 subjects with leave-one-subject-out cross-validation. Changing the classification method yielded consistent results, which indicates the robustness of this tissue-typing method. The findings of this report suggest that ultrasound RF time series, along with the developed machine learning framework, can help differentiate malignant from benign breast lesions, thereby reducing the number of unnecessary biopsies after mammography screening.
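
Leave-one-subject-out evaluation of this kind can be expressed compactly with scikit-learn; the sketch below assumes per-region feature vectors, binary labels, and subject identifiers, and reports a pooled AUC over held-out subjects. It illustrates the evaluation protocol only, not the paper's exact feature set or hyperparameters.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

def loso_auc(features, labels, subject_ids):
    """Leave-one-subject-out cross-validated AUC for an RF time series classifier."""
    scores, truths = [], []
    for train_idx, test_idx in LeaveOneGroupOut().split(features, labels, groups=subject_ids):
        clf = SVC(kernel="rbf", probability=True).fit(features[train_idx], labels[train_idx])
        scores.append(clf.predict_proba(features[test_idx])[:, 1])
        truths.append(labels[test_idx])
    return roc_auc_score(np.concatenate(truths), np.concatenate(scores))
```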


Subject(s)
Breast Neoplasms/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Radio Waves , Ultrasonography, Mammary/methods , Female , Humans , Support Vector Machine