Results 1 - 20 of 95
1.
Med Phys ; 2024 May 07.
Article in English | MEDLINE | ID: mdl-38713916

ABSTRACT

BACKGROUND: Disease or injury may change the biomechanical properties of the lungs, which can alter lung function. Image registration can be used to measure lung ventilation and quantify volume change, which can be a useful diagnostic aid. However, lung registration is a challenging problem because of the variation in deformation across the lungs, the sliding motion of the lungs along the ribs, and changes in density. PURPOSE: Landmark correspondences have been used to make deformable image registration robust to large displacements. METHODS: To tackle the challenging task of intra-patient lung computed tomography (CT) registration, we extend the landmark correspondence prediction model DCNN-Match (deep convolutional neural network-Match) by introducing a soft mask loss term to encourage landmark correspondences in specific regions and avoid the use of a mask during inference. To produce realistic deformations for training the landmark correspondence model, we use data-driven synthetic transformations. We study the influence of these learned landmark correspondences on lung CT registration by integrating them into intensity-based registration as a distance-based penalty. RESULTS: Our results on the public thoracic CT dataset COPDgene show that using learned landmark correspondences as a soft constraint can reduce the median registration error from approximately 5.46 to 4.08 mm compared to standard intensity-based registration, in the absence of lung masks. CONCLUSIONS: We show that using landmark correspondences yields minor improvements in local alignment while significantly improving global alignment.
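Registration errors such as those quoted above are typically computed as the target registration error (TRE) over corresponding anatomical landmark pairs. A minimal NumPy sketch of that metric; the function name, toy landmarks, and identity transform are illustrative, not taken from the paper:

```python
import numpy as np

def median_tre(fixed_pts, moving_pts, transform):
    """Median target registration error (mm) over paired landmarks.

    fixed_pts, moving_pts: (N, 3) arrays of corresponding landmark
    coordinates in mm; transform maps a fixed-image point into the
    moving image's coordinate space.
    """
    warped = np.array([transform(p) for p in np.asarray(fixed_pts, float)])
    errors = np.linalg.norm(warped - np.asarray(moving_pts, float), axis=1)
    return float(np.median(errors))

# Toy check: landmarks offset by 3 mm along x, evaluated with the identity
fixed = np.zeros((5, 3))
moving = fixed + np.array([3.0, 0.0, 0.0])
print(median_tre(fixed, moving, lambda p: p))  # 3.0
```

A real evaluation would pass the deformable transform produced by the registration instead of the identity.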

2.
Med Phys ; 51(4): 2367-2377, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38408022

ABSTRACT

BACKGROUND: Deep learning-based unsupervised image registration has recently been proposed, promising fast registration. However, it has yet to be adopted in the online adaptive magnetic resonance imaging-guided radiotherapy (MRgRT) workflow. PURPOSE: In this paper, we design an unsupervised joint rigid and deformable registration framework for contour propagation in MRgRT of prostate cancer. METHODS: Three-dimensional pelvic T2-weighted MRIs of 143 prostate cancer patients undergoing radiotherapy were collected and divided into 110, 13, and 20 patients for training, validation, and testing. We designed a framework using convolutional neural networks (CNNs) for rigid and deformable registration. We selected the deformable registration network architecture among U-Net, MS-D Net, and LapIRN and optimized the training strategy (end-to-end vs. sequential). The framework was compared against an iterative baseline registration. We evaluated registration accuracy (the Dice and Hausdorff distance of the prostate and bladder contours), structural similarity index, and folding percentage to compare the methods. We also evaluated the framework's robustness to rigid and elastic deformations and bias field perturbations. RESULTS: The end-to-end trained framework comprising LapIRN for the deformable component achieved the best median (interquartile range) prostate and bladder Dice of 0.89 (0.85-0.91) and 0.86 (0.80-0.91), respectively. This accuracy was comparable to the iterative baseline registration: prostate and bladder Dice of 0.91 (0.88-0.93) and 0.86 (0.80-0.92). The best models complete rigid and deformable registration in 0.002 (0.0005) and 0.74 (0.43) s (Nvidia Tesla V100-PCIe 32 GB GPU), respectively. We found that the models are robust to translations up to 52 mm, rotations up to 15°, elastic deformations up to 40 mm, and bias fields.
CONCLUSIONS: Our proposed unsupervised, deep learning-based registration framework can perform rigid and deformable registration in less than a second with contour propagation accuracy comparable with iterative registration.
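The Dice scores reported above measure volumetric overlap between propagated and reference contours. A minimal sketch of the metric on binary masks; the function name and toy masks are illustrative:

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two boolean masks (1.0 = perfect agreement)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

m1 = np.zeros((4, 4), bool); m1[:2] = True   # top half of the image
m2 = np.zeros((4, 4), bool); m2[1:3] = True  # middle half, overlapping one row
print(dice(m1, m2))  # 2*4 / (8+8) = 0.5
```

In 3-D the same function applies unchanged to stacked contour masks, which is how organ Dice is usually computed in practice.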


Subject(s)
Deep Learning , Prostatic Neoplasms , Male , Humans , Prostate/diagnostic imaging , Prostate/pathology , Pelvis , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/radiotherapy , Prostatic Neoplasms/pathology , Radiotherapy Planning, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Image Processing, Computer-Assisted/methods , Algorithms
3.
IEEE J Biomed Health Inform ; 28(3): 1161-1172, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37878422

ABSTRACT

We introduce LYSTO, the Lymphocyte Assessment Hackathon, which was held in conjunction with the MICCAI 2019 Conference in Shenzhen (China). The competition required participants to automatically assess the number of lymphocytes, in particular T-cells, in images of colon, breast, and prostate cancer stained with CD3 and CD8 immunohistochemistry. Unlike other challenges set up in medical image analysis, LYSTO participants were given only a few hours to address this problem. In this paper, we describe the goal and the multi-phase organization of the hackathon, the proposed methods, and the on-site results. Additionally, we present post-competition results, showing how the presented methods perform on an independent set of lung cancer slides, which was not part of the initial competition, as well as a comparison of lymphocyte assessment between the presented methods and a panel of pathologists. We show that some of the participants were capable of achieving pathologist-level performance at lymphocyte assessment. After the hackathon, LYSTO was made available as a lightweight plug-and-play benchmark dataset on the grand-challenge website, together with an automatic evaluation platform.


Subject(s)
Benchmarking , Prostatic Neoplasms , Male , Humans , Lymphocytes , Breast , China
4.
NMR Biomed ; 36(12): e5019, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37622473

ABSTRACT

At ultrahigh field strengths, images of the body are hampered by B1-field inhomogeneities. These present themselves as inhomogeneous signal intensity and contrast, which can be regarded as a "bias field" on the ideal image. Current bias field correction methods, such as the N4 algorithm, assume a low-frequency bias field, which is not sufficiently valid for T2w images at 7 T. In this work, we propose a deep learning-based bias field correction method to address this issue for T2w prostate images at 7 T. By combining simulated B1-field distributions of a multi-transmit setup at 7 T with T2w prostate images at 1.5 T, we generated artificial 7 T images for which the homogeneous counterpart was available. Using these paired data, we trained a neural network to correct the bias field. We predicted either a homogeneous image (t-Image neural network) or the bias field (t-Biasf neural network). In addition, we experimented with the single-channel images of the receive array and the corresponding sum of magnitudes of this array as the input image. Testing was carried out on four datasets: the test split of the synthetic training dataset, volunteer and patient images at 7 T, and patient images at 3 T. For the test split, the performance was evaluated using the structural similarity index measure, Wasserstein distance, and root mean squared error. For all other test data, the features Homogeneity and Energy derived from the gray level co-occurrence matrix (GLCM) were used to quantify the improvement. For each test dataset, the proposed method was compared with the current gold standard: the N4 algorithm. Additionally, a questionnaire was filled out by two clinical experts to assess the homogeneity and contrast preservation of the 7 T datasets. All four proposed neural networks were able to substantially reduce the B1-field induced inhomogeneities in T2w 7 T prostate images. By visual inspection, the images clearly look more homogeneous, which is confirmed by the increase in Homogeneity and Energy in the GLCM and by the questionnaire scores from the two clinical experts. Occasionally, changes in contrast within the prostate were observed, although much less for the t-Biasf network than for the t-Image network. Further, results on the 3 T dataset demonstrate that the proposed learning-based approach is on par with the N4 algorithm. The results demonstrate that the trained networks were capable of reducing the B1-field induced inhomogeneities for prostate imaging at 7 T. The quantitative evaluation showed that all proposed learning-based correction techniques outperformed the N4 algorithm. Of the investigated methods, the single-channel t-Biasf neural network proved the most reliable for bias field correction.
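The GLCM-based Homogeneity and Energy features used above can be computed without a library; the sketch below follows the common definitions (the same ones scikit-image's graycoprops uses), with all names and the toy image being illustrative:

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """Homogeneity and Energy of the gray-level co-occurrence matrix.

    img: 2-D array of integer gray levels in [0, levels); (dy, dx) is the
    pixel offset at which co-occurring pairs are counted.
    """
    img = np.asarray(img)
    glcm = np.zeros((levels, levels))
    src = img[: img.shape[0] - dy, : img.shape[1] - dx]
    dst = img[dy:, dx:]
    np.add.at(glcm, (src.ravel(), dst.ravel()), 1.0)  # count gray-level pairs
    glcm /= glcm.sum()                                # normalize to probabilities
    i, j = np.indices(glcm.shape)
    homogeneity = (glcm / (1.0 + (i - j) ** 2)).sum()  # similar pairs weigh more
    energy = np.sqrt((glcm ** 2).sum())                # sqrt of angular 2nd moment
    return homogeneity, energy

# A perfectly uniform image maximizes both features (each equals 1.0)
h, e = glcm_features(np.zeros((16, 16), dtype=int))
print(h, e)  # 1.0 1.0
```

A more homogeneous corrected image concentrates the GLCM mass near its diagonal, which raises both features, matching the trend the abstract reports.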


Subject(s)
Deep Learning , Prostate , Male , Humans , Prostate/diagnostic imaging , Neural Networks, Computer , Algorithms , Image Processing, Computer-Assisted/methods
5.
Eur J Cancer ; 185: 167-177, 2023 05.
Article in English | MEDLINE | ID: mdl-36996627

ABSTRACT

INTRODUCTION: Predicting checkpoint inhibitor treatment outcomes in melanoma is a relevant task, due to the unpredictable and potentially fatal toxicity and high costs for society. However, accurate biomarkers for treatment outcomes are lacking. Radiomics is a technique for quantitatively capturing tumour characteristics on readily available computed tomography (CT) imaging. The purpose of this study was to investigate the added value of radiomics for predicting clinical benefit from checkpoint inhibitors in melanoma in a large, multicentre cohort. METHODS: Patients who received first-line anti-PD1±anti-CTLA4 treatment for advanced cutaneous melanoma were retrospectively identified from nine participating hospitals. For every patient, up to five representative lesions were segmented on baseline CT, and radiomics features were extracted. A machine learning pipeline was trained on the radiomics features to predict clinical benefit, defined as stable disease for more than 6 months or response per RECIST 1.1 criteria. This approach was evaluated using leave-one-centre-out cross-validation and compared to a model based on previously discovered clinical predictors. Lastly, a combination model was built on the radiomics and clinical models. RESULTS: A total of 620 patients were included, of whom 59.2% experienced clinical benefit. The radiomics model achieved an area under the receiver operating characteristic curve (AUROC) of 0.607 [95% CI, 0.562-0.652], lower than that of the clinical model (AUROC=0.646 [95% CI, 0.600-0.692]). The combination model yielded no improvement over the clinical model in terms of discrimination (AUROC=0.636 [95% CI, 0.592-0.680]) or calibration. The output of the radiomics model was significantly correlated with three out of five input variables of the clinical model (p < 0.001). DISCUSSION: The radiomics model achieved a moderate predictive value of clinical benefit, which was statistically significant.
However, a radiomics approach was unable to add value to a simpler clinical model, most likely due to the overlap in predictive information learned by both models. Future research should focus on the application of deep learning, spectral CT-derived radiomics, and a multimodal approach for accurately predicting benefit to checkpoint inhibitor treatment in advanced melanoma.
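The AUROC values reported above can be read through the rank (Mann-Whitney) formulation: the probability that a randomly chosen patient with clinical benefit receives a higher model score than one without. A small NumPy sketch, with function name and toy scores purely illustrative:

```python
import numpy as np

def auroc(scores, labels):
    """AUROC via the Mann-Whitney formulation: the probability that a
    random positive case scores above a random negative case."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()   # positive outranks negative
    ties = (pos[:, None] == neg[None, :]).sum()  # ties count as half
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

print(auroc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]))  # 1.0 (perfect ranking)
print(auroc([0.1, 0.2, 0.8, 0.9], [1, 1, 0, 0]))  # 0.0 (perfectly inverted)
```

On this scale, the study's radiomics AUROC of 0.607 means a benefiting patient outranks a non-benefiting one only about 61% of the time.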


Subject(s)
Melanoma , Skin Neoplasms , Humans , Melanoma/diagnostic imaging , Melanoma/drug therapy , Skin Neoplasms/diagnostic imaging , Skin Neoplasms/drug therapy , Retrospective Studies , Treatment Outcome , Tomography, X-Ray Computed
6.
MAGMA ; 36(1): 79-93, 2023 Feb.
Article in English | MEDLINE | ID: mdl-35904612

ABSTRACT

OBJECTIVES: Diffusion-weighted MRI can assist preoperative planning by reconstructing the trajectory of eloquent fiber pathways, such as the corticospinal tract (CST). However, accurate reconstruction of the full extent of the CST remains challenging with existing tractography methods. We propose a novel tractography algorithm exploiting unused fiber orientations to produce more complete and reliable results. METHODS: Our novel approach, referred to as multi-level fiber tractography (MLFT), reconstructs fiber pathways by progressively considering previously unused fiber orientations at multiple levels of tract propagation. Anatomical priors are used to minimize the number of false-positive pathways. The MLFT method was evaluated on synthetic and in vivo data by reconstructing the CST and comparing against conventional tractography approaches. RESULTS: The radial extent of MLFT reconstructions is comparable to that of probabilistic reconstruction: [Formula: see text] for the left and [Formula: see text] for the right hemisphere according to the Wilcoxon test, while achieving significantly higher topography preservation compared to probabilistic tractography: [Formula: see text]. DISCUSSION: MLFT provides a novel way to reconstruct fiber pathways by adding the capability of including branching pathways in fiber tractography. Thanks to its robustness, feasible reconstruction extent, and topography preservation, our approach may assist in clinical practice as well as in virtual dissection studies.


Subject(s)
Diffusion Tensor Imaging , Image Processing, Computer-Assisted , Diffusion Tensor Imaging/methods , Image Processing, Computer-Assisted/methods , Diffusion Magnetic Resonance Imaging/methods , Algorithms , Pyramidal Tracts/diagnostic imaging
7.
Sci Rep ; 12(1): 15102, 2022 09 06.
Article in English | MEDLINE | ID: mdl-36068311

ABSTRACT

Breast cancer tumor grade is strongly associated with patient survival. In current clinical practice, pathologists assign tumor grade after visual analysis of tissue specimens. However, different studies show significant inter-observer variation in breast cancer grading. Computer-based breast cancer grading methods have been proposed, but they only work on specifically selected tissue areas and/or require labor-intensive annotations to be applied to new datasets. In this study, we trained and evaluated a deep learning-based breast cancer grading model that works on whole-slide histopathology images. The model was developed using whole-slide images from 706 young (< 40 years) invasive breast cancer patients with corresponding tumor grade (low/intermediate vs. high) and its constituents: nuclear grade, tubule formation, and mitotic rate. The performance of the model was evaluated using Cohen's kappa on an independent test set of 686 patients, using annotations by expert pathologists as ground truth. The predicted low/intermediate (n = 327) and high (n = 359) grade groups were used to perform survival analysis. The deep learning system distinguished low/intermediate versus high tumor grade with a Cohen's kappa of 0.59 (80% accuracy) compared to expert pathologists. In subsequent survival analysis, the two groups predicted by the system were found to have significantly different overall survival (OS) and disease/recurrence-free survival (DRFS/RFS) (p < 0.05). Univariate Cox hazard regression analysis showed statistically significant hazard ratios (p < 0.05). After adjusting for clinicopathologic features and stratifying for molecular subtype, the hazard ratios showed a trend but lost statistical significance for all endpoints. In conclusion, we developed a deep learning-based model for automated grading of breast cancer on whole-slide images. The model distinguishes between low/intermediate and high grade tumors and finds a trend in the survival of the two predicted groups.
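Cohen's kappa, the agreement statistic used above, corrects raw accuracy for chance agreement; its quadratic-weighted variant (used for ordinal grades, as in entry 17 below) penalizes disagreements by the squared distance between grades. A compact NumPy sketch, with function name and toy ratings illustrative:

```python
import numpy as np

def quadratic_weighted_kappa(r1, r2, n_classes):
    """Cohen's kappa with quadratic weights for two raters of ordinal grades."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    conf = np.zeros((n_classes, n_classes))
    np.add.at(conf, (r1, r2), 1.0)                 # joint rating histogram
    conf /= conf.sum()
    i, j = np.indices(conf.shape)
    w = ((i - j) ** 2) / (n_classes - 1) ** 2      # quadratic disagreement weights
    expected = np.outer(conf.sum(1), conf.sum(0))  # chance agreement from marginals
    return 1.0 - (w * conf).sum() / (w * expected).sum()

a = [0, 0, 1, 1, 2, 2]
b = [0, 0, 1, 1, 2, 2]
print(quadratic_weighted_kappa(a, b, 3))  # 1.0 for perfect agreement
```

Unweighted kappa (w = 0 on the diagonal, 1 elsewhere) recovers the plain statistic quoted in this abstract; scikit-learn's cohen_kappa_score offers both via its weights parameter.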


Subject(s)
Breast Neoplasms , Deep Learning , Breast Neoplasms/pathology , Female , Humans , Neoplasm Grading , Observer Variation , Pathologists , Survival Analysis
8.
Eur J Cancer ; 175: 60-76, 2022 11.
Article in English | MEDLINE | ID: mdl-36096039

ABSTRACT

BACKGROUND: Checkpoint inhibition has radically improved the outlook for patients with metastatic cancer, but predicting with high certainty who will not respond remains difficult. Imaging-derived biomarkers may be able to provide additional insight into the heterogeneity in tumour response between patients. In this systematic review, we aimed to summarise and qualitatively assess the current evidence on imaging biomarkers that predict response and survival in patients treated with checkpoint inhibitors in all cancer types. METHODS: PubMed and Embase were searched from database inception to 29 November 2021. Articles eligible for inclusion described baseline imaging predictive factors, radiomics, and/or imaging machine learning models for predicting response and survival in patients with any kind of malignancy treated with checkpoint inhibitors. Risk of bias was assessed using the QUIPS and PROBAST tools, and data were extracted. RESULTS: In total, 119 studies including 15,580 patients were selected. Of these studies, 73 investigated simple imaging factors and 45 investigated radiomic features or deep learning models. Predictors of worse survival were (i) higher tumour burden, (ii) presence of liver metastases, (iii) less subcutaneous adipose tissue, (iv) less dense muscle and (v) presence of symptomatic brain metastases. Hazard rate ratios did not exceed 2.00 for any predictor in the larger and higher-quality studies. The added value of baseline fluorodeoxyglucose positron emission tomography parameters in predicting response to treatment was limited. Pilot studies of radioactive drug tracer imaging showed promising results. Reports on radiomics were almost unanimously positive, but numerous methodological concerns exist. CONCLUSIONS: There is well-supported evidence for several imaging biomarkers that can be used in clinical decision making.
Further research, however, is needed into biomarkers that can more accurately identify which patients will not benefit from checkpoint inhibition. Radiomics and radioactive drug labelling appear to be promising approaches for this purpose.


Subject(s)
Brain Neoplasms , Positron-Emission Tomography , Humans , Radiopharmaceuticals
9.
Biomed Opt Express ; 13(5): 2683-2694, 2022 May 01.
Article in English | MEDLINE | ID: mdl-35774322

ABSTRACT

Correct Descemet Membrane Endothelial Keratoplasty (DMEK) graft orientation is imperative for the success of DMEK surgery, but intraoperative evaluation can be challenging. We present a method for automatic evaluation of graft orientation in intraoperative optical coherence tomography (iOCT), exploiting the natural rolling behavior of the graft. The method encompasses a deep learning model for graft segmentation, post-processing to obtain a smooth line representation, and curvature calculations to determine graft orientation. On an independent test set of 100 iOCT frames, the automatic method correctly identified graft orientation in 78 frames and obtained an area under the receiver operating characteristic curve (AUC) of 0.84. When we replaced the automatic segmentation with manual masks, the AUC increased to 0.92, corresponding to an accuracy of 86%. In comparison, two corneal specialists correctly identified graft orientation in 90% and 91% of the iOCT frames.
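The curvature step of such a pipeline can be illustrated with the standard signed-curvature formula for a 2-D curve, kappa = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2): the sign of the mean curvature indicates which way the segmented graft edge curls. A sketch under these assumptions, not the authors' implementation:

```python
import numpy as np

def mean_signed_curvature(x, y):
    """Mean signed curvature of a 2-D curve sampled as (x, y) points.

    A positive mean means the curve bends counter-clockwise along its
    parametrization; the sign could serve as an orientation cue.
    """
    dx, dy = np.gradient(x), np.gradient(y)       # first derivatives
    ddx, ddy = np.gradient(dx), np.gradient(dy)   # second derivatives
    speed = np.maximum((dx ** 2 + dy ** 2) ** 1.5, 1e-12)
    return float(((dx * ddy - dy * ddx) / speed).mean())

# Upper unit semicircle, traversed counter-clockwise: curvature is positive
t = np.linspace(0.0, np.pi, 100)
print(mean_signed_curvature(np.cos(t), np.sin(t)) > 0)  # True
```

Flipping the curve vertically flips the sign, which is the distinction an orientation classifier built on curvature would exploit.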

10.
Article in English | MEDLINE | ID: mdl-35452387

ABSTRACT

Lightweight segmentation models are becoming more popular for fast diagnosis on small, low-cost medical imaging devices. This study focuses on the segmentation of the left ventricle (LV) in cardiac ultrasound (US) images. A new lightweight model, LV network (LVNet), is proposed for segmentation; it requires fewer parameters while delivering improved segmentation performance in terms of Dice score (DS). The proposed model is compared with state-of-the-art methods, such as UNet, MiniNetV2, and the fully convolutional dense dilated network (FCdDN). The proposed model comes with a post-processing pipeline that further enhances the segmentation results. In general, training is done directly using the segmentation mask as the output and the US image as the input of the model. A new strategy for segmentation is also introduced in addition to the direct training method used. Compared with the UNet model, an improvement in DS performance as high as 5% was found for segmentation with papillary (WP) muscles included, and 18.5% when the papillary muscles are excluded. The proposed model requires only 5% of the memory required by a UNet model. LVNet achieves a better trade-off between the number of parameters and segmentation performance than other conventional models. The developed codes are available at https://github.com/navchetanawasthi/Left_Ventricle_Segmentation.


Subject(s)
Heart Ventricles , Image Processing, Computer-Assisted , Echocardiography , Heart Ventricles/diagnostic imaging , Image Processing, Computer-Assisted/methods , Muscles , Ultrasonography
11.
Phys Med Biol ; 67(2)2022 01 19.
Article in English | MEDLINE | ID: mdl-34891142

ABSTRACT

Breathing motion can displace internal organs by up to several cm; as such, it is a primary factor limiting image quality in medical imaging. Motion can also complicate matters when trying to fuse images from different modalities, acquired at different locations and/or on different days. Currently available devices for monitoring breathing motion often do so indirectly, by detecting changes in the outline of the torso rather than the internal motion itself, and these devices are often fixed to floors, ceilings or walls, and thus cannot accompany patients from one location to another. We have developed small ultrasound-based sensors, referred to as 'organ configuration motion' (OCM) sensors, that attach to the skin and provide rich motion-sensitive information. In the present work we tested the ability of OCM sensors to enable respiratory gating during in vivo PET imaging. A motion phantom involving an FDG solution was assembled, and two cancer patients scheduled for a clinical PET/CT exam were recruited for this study. OCM signals were used to help reconstruct phantom and in vivo data into time series of motion-resolved images. As expected, the motion-resolved images captured the underlying motion. In Patient #1, a single large lesion proved to be mostly stationary through the breathing cycle. However, in Patient #2, several small lesions were mobile during breathing, and our proposed new approach captured their breathing-related displacements. In summary, a relatively inexpensive hardware solution was developed here for respiration monitoring. Because the proposed sensors attach to the skin, as opposed to walls or ceilings, they can accompany patients from one procedure to the next, potentially allowing data gathered in different places and at different times to be combined and compared in ways that account for breathing motion.


Subject(s)
Multimodal Imaging , Positron Emission Tomography Computed Tomography , Humans , Motion , Phantoms, Imaging , Positron-Emission Tomography/methods
12.
Healthcare (Basel) ; 11(1)2022 Dec 30.
Article in English | MEDLINE | ID: mdl-36611583

ABSTRACT

Ultrasound (US) imaging is a medical imaging modality that uses the reflection of sound in the range of 2-18 MHz to image internal body structures. In US, the frequency bandwidth (BW) is directly associated with image resolution. BW is a property of the transducer, and more bandwidth comes at a higher cost. Thus, methods that can transform strongly bandlimited ultrasound data into broadband data are essential. In this work, we propose a deep learning (DL) technique to improve the image quality for a given bandwidth by learning features provided by broadband data of the same field of view. To this end, the performance of several DL architectures and conventional state-of-the-art techniques for image quality improvement and artifact removal has been compared on in vitro US datasets. Two training losses have been utilized on three different architectures: a super-resolution convolutional neural network (SRCNN), U-Net, and a residual encoder-decoder network (REDNet) architecture. The models have been trained to transform low-bandwidth image reconstructions into high-bandwidth image reconstructions, to reduce artifacts, and to make the reconstructions visually more attractive. Experiments were performed for 20%, 40%, and 60% fractional bandwidth on the original images and showed improvements as high as 45.5% in RMSE and 3.85 dB in PSNR in datasets with a 20% bandwidth limitation.
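The RMSE and PSNR figures above follow their standard definitions; a minimal NumPy sketch, with the toy images purely illustrative:

```python
import numpy as np

def rmse(ref, est):
    """Root mean squared error between two images."""
    d = np.asarray(ref, float) - np.asarray(est, float)
    return float(np.sqrt(np.mean(d ** 2)))

def psnr(ref, est, max_val=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, max_val]."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(est, float)) ** 2)
    return float(10.0 * np.log10(max_val ** 2 / mse))

ref = np.ones((8, 8))
est = ref - 0.1                  # uniform 0.1 error everywhere
print(round(rmse(ref, est), 6))  # 0.1
print(round(psnr(ref, est)))     # 20 (i.e. 10 * log10(1 / 0.01))
```

Note that PSNR grows as the bandwidth-limited reconstruction approaches the broadband reference, so the reported +3.85 dB corresponds to a sizeable drop in mean squared error.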

13.
Sensors (Basel) ; 21(23)2021 Nov 28.
Article in English | MEDLINE | ID: mdl-34883951

ABSTRACT

Cardiovascular diseases (CVDs) remain the leading cause of death worldwide. Effective management and treatment of CVDs rely heavily on accurate diagnosis of the disease. As the most common imaging technique for clinical diagnosis of CVDs, ultrasound (US) imaging has been intensively explored. Especially with the introduction of deep learning (DL) techniques, US imaging has advanced tremendously in recent years. Photoacoustic imaging (PAI) is one of the most promising new imaging methods in addition to the existing clinical imaging methods. It can characterize different tissue compositions based on optical absorption contrast and can thus assess the functionality of the tissue. This paper reviews major technological developments in both US (combined with deep learning techniques) and PA imaging in the application of diagnosis of CVDs.


Subject(s)
Cardiology , Cardiovascular System , Photoacoustic Techniques , Diagnostic Imaging , Ultrasonography
14.
Front Oncol ; 11: 761169, 2021.
Article in English | MEDLINE | ID: mdl-34970486

ABSTRACT

While the diagnosis of high-grade glioma (HGG) is still associated with a considerably poor prognosis, neurosurgical tumor resection provides an opportunity for prolonged survival and improved quality of life for affected patients. However, successful tumor resection is dependent on proper surgical planning to avoid surgery-induced functional deficits whilst achieving a maximum extent of resection (EOR). With diffusion magnetic resonance imaging (MRI) providing insight into individual white matter neuroanatomy, the challenge remains to disentangle that information as correctly and as completely as possible. In particular, due to a lack of sensitivity and accuracy, the clinical value of widely used diffusion tensor imaging (DTI)-based tractography is increasingly questioned. We evaluated whether the recently developed multi-level fiber tracking (MLFT) technique can improve tractography of the corticospinal tract (CST) in patients with motor-eloquent HGGs. Forty patients with therapy-naïve HGGs (mean age: 62.6 ± 13.4 years, 57.5% males) and preoperative diffusion MRI [repetition time (TR)/echo time (TE): 5000/78 ms, voxel size: 2 × 2 × 2 mm³, one volume at b = 0 s/mm², 32 volumes at b = 1000 s/mm²] underwent reconstruction of the CST of the tumor-affected and unaffected hemispheres using MLFT in addition to deterministic DTI-based and deterministic constrained spherical deconvolution (CSD)-based fiber tractography. The brain stem was used as a seeding region, with a motor cortex mask serving as a target region for MLFT and a region of interest (ROI) for the other two algorithms. Application of the MLFT method substantially improved bundle reconstruction, leading to CST bundles with higher radial extent compared to the two other algorithms (delineation of CST fanning with a wider range; median radial extent for tumor-affected vs. unaffected hemisphere - DTI: 19.46° vs. 18.99°, p=0.8931; CSD: 30.54° vs. 27.63°, p=0.0546; MLFT: 81.17° vs. 74.59°, p=0.0134).
In addition, reconstructions by MLFT and CSD-based tractography nearly completely included the respective bundles derived from DTI-based tractography, which was, however, more favorable for MLFT than for CSD-based tractography (median coverage of the DTI-based CST for affected vs. unaffected hemispheres - CSD: 68.16% vs. 77.59%, p=0.0075; MLFT: 93.09% vs. 95.49%, p=0.0046). Thus, a more complete picture of the CST in patients with motor-eloquent HGGs might be achieved based on routinely acquired diffusion MRI data using MLFT.

15.
Sci Rep ; 11(1): 13976, 2021 07 07.
Article in English | MEDLINE | ID: mdl-34234179

ABSTRACT

Corneal thickness (pachymetry) maps can be used to monitor restoration of corneal endothelial function, for example after Descemet's membrane endothelial keratoplasty (DMEK). Automated delineation of the corneal interfaces in anterior segment optical coherence tomography (AS-OCT) can be challenging for corneas that are irregularly shaped due to pathology, or as a consequence of surgery, leading to incorrect thickness measurements. In this research, deep learning is used to automatically delineate the corneal interfaces and measure corneal thickness with high accuracy in post-DMEK AS-OCT B-scans. Three different deep learning strategies were developed based on 960 B-scans from 50 patients. On an independent test set of 320 B-scans, corneal thickness could be measured with an error of 13.98 to 15.50 µm for the central 9 mm range, which is less than 3% of the average corneal thickness. The accurate thickness measurements were used to construct detailed pachymetry maps. Moreover, follow-up scans could be registered based on anatomical landmarks to obtain differential pachymetry maps. These maps may enable a more comprehensive understanding of the restoration of the endothelial function after DMEK, where thickness often varies throughout different regions of the cornea, and subsequently contribute to a standardized postoperative regime.
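Once the two corneal interfaces have been delineated, the final thickness step reduces to the per-A-scan interface separation scaled by the axial pixel size. A toy sketch of that step only; the function name, indices, and pixel spacing below are illustrative, not the paper's values:

```python
import numpy as np

def corneal_thickness_um(anterior, posterior, pixel_um):
    """Per-A-scan corneal thickness from two delineated interfaces.

    anterior, posterior: depth (row) index of each interface per A-scan;
    pixel_um: axial pixel spacing in micrometers.
    """
    anterior = np.asarray(anterior, float)
    posterior = np.asarray(posterior, float)
    return (posterior - anterior) * pixel_um

# Two A-scans with interfaces 100 and 101 pixels apart at 5 um/pixel
print(corneal_thickness_um([10, 11], [110, 112], pixel_um=5.0))  # [500. 505.]
```

Assembling these per-A-scan values across B-scans yields the pachymetry map; subtracting registered follow-up maps gives the differential maps described above.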


Subject(s)
Corneal Pachymetry , Descemet Membrane/diagnostic imaging , Descemet Membrane/surgery , Tomography, Optical Coherence , Corneal Pachymetry/methods , Descemet Stripping Endothelial Keratoplasty , Humans
16.
Med Image Anal ; 73: 102141, 2021 10.
Article in English | MEDLINE | ID: mdl-34246850

ABSTRACT

Adversarial attacks are considered a potentially serious security threat for machine learning systems. Medical image analysis (MedIA) systems have recently been argued to be vulnerable to adversarial attacks due to strong financial incentives and the associated technological infrastructure. In this paper, we study previously unexplored factors affecting the adversarial attack vulnerability of deep learning MedIA systems in three medical domains: ophthalmology, radiology, and pathology. We focus on adversarial black-box settings, in which the attacker does not have full access to the target model and usually uses another model, commonly referred to as the surrogate model, to craft adversarial examples that are then transferred to the target model. We consider this to be the most realistic scenario for MedIA systems. Firstly, we study the effect of weight initialization (pre-training on ImageNet or random initialization) on the transferability of adversarial attacks from the surrogate model to the target model, i.e., how effective attacks crafted using the surrogate model are on the target model. Secondly, we study the influence of differences in development (training and validation) data between target and surrogate models. We further study the interaction of weight initialization and data differences with differences in model architecture. All experiments were done with a perturbation degree tuned to ensure maximal transferability at minimal visual perceptibility of the attacks. Our experiments show that pre-training may dramatically increase the transferability of adversarial examples, even when the target and surrogate architectures are different: the larger the performance gain from pre-training, the larger the transferability. Differences in the development data between target and surrogate models considerably decrease the performance of the attack; this decrease is further amplified by differences in model architecture.
We believe these factors should be considered when developing security-critical MedIA systems intended for deployment in clinical practice. We recommend avoiding reliance on standard components alone, such as pre-trained architectures and publicly available datasets, avoiding disclosure of design specifications, and using adversarial defense methods. When evaluating the vulnerability of MedIA systems to adversarial attacks, various attack scenarios and target-surrogate differences should be simulated to achieve realistic robustness estimates. The code and all trained models used in our experiments are publicly available.
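The black-box transfer setting described above (craft an adversarial example on a surrogate model, then evaluate it on the target) can be illustrated with the fast gradient sign method (FGSM) on a logistic-regression surrogate, for which the input gradient has a closed form. All names and numbers below are illustrative, not from the paper:

```python
import numpy as np

def fgsm_linear(x, y, w, b, eps):
    """FGSM adversarial example for a logistic-regression surrogate:
    perturb x by eps in the sign direction of the loss gradient."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # sigmoid prediction
    grad = (p - y) * w                      # d(cross-entropy)/dx for a linear model
    return x + eps * np.sign(grad)

w = np.array([1.0, -2.0]); b = 0.0      # surrogate weights
x = np.array([0.5, 0.5]); y = 1.0       # input with true label "positive"
x_adv = fgsm_linear(x, y, w, b, eps=0.1)
print(np.round(x_adv, 6))  # [0.4 0.6] - pushed against the positive logit
```

In a transfer attack, x_adv would then be fed to the (unseen) target model; the abstract's findings concern how often such transferred examples still fool it.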


Subject(s)
Machine Learning , Neural Networks, Computer , Humans
17.
Lab Invest ; 101(4): 525-533, 2021 04.
Article in English | MEDLINE | ID: mdl-33608619

ABSTRACT

Ductal carcinoma in situ (DCIS) is a non-invasive breast cancer that can progress into invasive ductal carcinoma (IDC). Studies suggest DCIS is often overtreated, since a considerable part of DCIS lesions may never progress into IDC. Lower grade lesions have a lower progression speed and risk, possibly allowing treatment de-escalation. However, studies show significant inter-observer variation in DCIS grading. Automated image analysis may provide an objective solution to address the high subjectivity of DCIS grading by pathologists. In this study, we developed and evaluated a deep learning-based DCIS grading system. The system was developed using the consensus DCIS grade of three expert observers on a dataset of 1186 DCIS lesions from 59 patients. The inter-observer agreement, measured by quadratic weighted Cohen's kappa, was used to evaluate the system and compare its performance to that of expert observers. We present an analysis of the lesion-level and patient-level inter-observer agreement on an independent test set of 1001 lesions from 50 patients. The deep learning system (dl) achieved on average slightly higher inter-observer agreement with the three observers (o1, o2 and o3) (κo1,dl = 0.81, κo2,dl = 0.53 and κo3,dl = 0.40) than the observers amongst each other (κo1,o2 = 0.58, κo1,o3 = 0.50 and κo2,o3 = 0.42) at the lesion-level. At the patient-level, the deep learning system achieved agreement with the observers (κo1,dl = 0.77, κo2,dl = 0.75 and κo3,dl = 0.70) similar to that of the observers amongst each other (κo1,o2 = 0.77, κo1,o3 = 0.75 and κo2,o3 = 0.72). The deep learning system better reflected the grading spectrum of DCIS than two of the observers. In conclusion, we developed a deep learning-based DCIS grading system that achieved a performance similar to expert observers. To the best of our knowledge, this is the first automated system for the grading of DCIS that could assist pathologists by providing robust and reproducible second opinions on DCIS grade.
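The agreement metric used throughout this abstract, quadratic weighted Cohen's kappa, penalizes disagreements by the squared distance between the two assigned grades, so confusing adjacent grades costs less than confusing extreme ones. A minimal numpy sketch of the standard definition (grades coded 0..K-1; not the authors' code):

```python
import numpy as np

def quadratic_weighted_kappa(r1, r2, n_classes):
    """Cohen's kappa with quadratic weights between two raters,
    e.g., two DCIS grade assignments over the same lesions."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    # Joint distribution of the two raters' grades.
    observed = np.zeros((n_classes, n_classes))
    for a, b in zip(r1, r2):
        observed[a, b] += 1
    observed /= observed.sum()
    # Expected joint distribution under independent marginals.
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    # Quadratic disagreement weights: (i - j)^2, normalised.
    i, j = np.indices((n_classes, n_classes))
    w = (i - j) ** 2 / (n_classes - 1) ** 2
    return 1.0 - (w * observed).sum() / (w * expected).sum()

# Perfect agreement yields kappa = 1; chance-level agreement yields 0.
print(quadratic_weighted_kappa([0, 1, 2, 2], [0, 1, 2, 2], 3))  # → 1.0
```

Values like κ = 0.58 between observers thus mean well-above-chance but far-from-perfect agreement, which is the subjectivity the system is meant to address.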


Subject(s)
Breast Neoplasms , Carcinoma, Intraductal, Noninfiltrating , Deep Learning , Image Interpretation, Computer-Assisted/methods , Neoplasm Grading/methods , Biopsy , Breast/pathology , Breast Neoplasms/diagnosis , Breast Neoplasms/pathology , Carcinoma, Intraductal, Noninfiltrating/diagnosis , Carcinoma, Intraductal, Noninfiltrating/pathology , Female , Humans , Middle Aged
18.
Med Image Anal ; 68: 101849, 2021 02.
Article in English | MEDLINE | ID: mdl-33197715

ABSTRACT

Rotation-invariance is a desired property of machine-learning models for medical image analysis and in particular for computational pathology applications. We propose a framework to encode the geometric structure of the special Euclidean motion group SE(2) in convolutional networks to yield translation and rotation equivariance via the introduction of SE(2)-group convolution layers. This structure enables models to learn feature representations with a discretized orientation dimension that guarantees that their outputs are invariant under a discrete set of rotations. Conventional approaches for rotation invariance rely mostly on data augmentation, but this does not guarantee the robustness of the output when the input is rotated. Moreover, conventionally trained CNNs may require test-time rotation augmentation to reach their full capability. This study is focused on histopathology image analysis applications for which it is desirable that the arbitrary global orientation information of the imaged tissues is not captured by the machine learning models. The proposed framework is evaluated on three different histopathology image analysis tasks (mitosis detection, nuclei segmentation and tumor detection). We present a comparative analysis for each problem and show that a consistent increase in performance can be achieved when using the proposed framework.
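The core mechanism, lifting a filter to a discrete set of orientations and then pooling over the orientation dimension, can be illustrated for the simplest case: the four 90-degree rotations (the p4 subgroup of SE(2); the paper's framework supports finer discretizations). This is a toy numpy sketch of the idea, not the authors' implementation:

```python
import numpy as np

def correlate2d_valid(img, f):
    """Plain 'valid' cross-correlation, written out for clarity."""
    k = f.shape[0]
    out = np.empty((img.shape[0] - k + 1, img.shape[1] - k + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(img[y:y + k, x:x + k] * f)
    return out

def oriented_response(img, f):
    """Lift one filter to 4 orientations, then max-pool over
    positions and orientations. Rotating the input merely permutes
    the (position, orientation) responses, so the max is invariant."""
    return max(correlate2d_valid(img, np.rot90(f, r)).max() for r in range(4))

rng = np.random.default_rng(1)
img = rng.normal(size=(8, 8))
f = rng.normal(size=(3, 3))

# The pooled output is invariant to 90-degree rotations of the input,
# by construction rather than by learned augmentation.
print(oriented_response(img, f), oriented_response(np.rot90(img), f))
```

A learned filter in an augmentation-only CNN offers no such guarantee; here the invariance holds exactly, for any filter, which is the structural argument of the paper.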


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Humans , Machine Learning
19.
J Med Imaging (Bellingham) ; 7(6): 064003, 2020 Nov.
Article in English | MEDLINE | ID: mdl-33344673

ABSTRACT

Purpose: Convolutional neural network (CNN) methods have been proposed to quantify lesions in medical imaging. Commonly, more than one imaging examination is available for a patient, but the serial information in these images often remains unused. CNN-based methods have the potential to extract valuable information from previously acquired imaging to better quantify lesions on current imaging of the same patient. Approach: A pretrained CNN can be updated with a patient's previously acquired imaging: patient-specific fine-tuning (FT). In this work, we studied the improvement in performance of lesion quantification methods on magnetic resonance images after FT compared to a pretrained base CNN. We applied the method to two tasks: the detection of liver metastases and the segmentation of brain white matter hyperintensities (WMH). Results: The patient-specific fine-tuned CNN has a better performance than the base CNN. For the liver metastases, the median true positive rate increases from 0.67 to 0.85. For the WMH segmentation, the mean Dice similarity coefficient increases from 0.82 to 0.87. Conclusions: We showed that patient-specific FT has the potential to improve the lesion quantification performance of general CNNs by exploiting a patient's previously acquired imaging.
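The patient-specific fine-tuning recipe (train a base model on a population, warm-start from its weights, and continue training on that patient's earlier examination) can be sketched with a logistic model in place of the CNN. All data, dimensions, and effect sizes below are synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(w, X, y, lr=0.1, steps=200):
    """Gradient descent on binary cross-entropy; stands in for
    (fine-)tuning a CNN's weights from a given initialization."""
    for _ in range(steps):
        p = sigmoid(X @ w)
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def loss(w, X, y):
    p = sigmoid(X @ w)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# Base model: trained on a population-level decision rule...
w_pop = rng.normal(size=16)
X_pop = rng.normal(size=(500, 16))
y_pop = (X_pop @ w_pop > 0).astype(float)
w_base = train(np.zeros(16), X_pop, y_pop)

# ...while this patient's appearance follows a slightly shifted rule.
w_patient = w_pop + 0.8 * rng.normal(size=16)
X_prior = rng.normal(size=(100, 16))   # previously acquired imaging
y_prior = (X_prior @ w_patient > 0).astype(float)

# Patient-specific FT: warm-start from w_base, update on the PRIOR scan.
w_ft = train(w_base.copy(), X_prior, y_prior)
print(loss(w_base, X_prior, y_prior), loss(w_ft, X_prior, y_prior))
```

The fine-tuned weights fit the patient's own data better than the generic base model, mirroring the paper's gains on the patient's current imaging.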

20.
Cancer Epidemiol Biomarkers Prev ; 29(11): 2358-2368, 2020 11.
Article in English | MEDLINE | ID: mdl-32917665

ABSTRACT

BACKGROUND: Manual qualitative and quantitative measures of terminal duct lobular unit (TDLU) involution were previously reported to be inversely associated with breast cancer risk. We developed and applied a deep learning method to yield quantitative measures of TDLU involution in normal breast tissue. We assessed the associations of these automated measures with breast cancer risk factors and risk. METHODS: We obtained eight quantitative measures from whole slide images from a benign breast disease (BBD) nested case-control study within the Nurses' Health Studies (287 breast cancer cases and 1,083 controls). Qualitative assessments of TDLU involution were available for 177 cases and 857 controls. The associations between risk factors and quantitative measures among controls were assessed using analysis of covariance adjusting for age. The relationship between each measure and risk was evaluated using unconditional logistic regression, adjusting for the matching factors, BBD subtypes, parity, and menopausal status. Qualitative measures and breast cancer risk were evaluated accounting for matching factors and BBD subtypes. RESULTS: Menopausal status and parity were significantly associated with all eight measures; select TDLU measures were associated with BBD histologic subtype, body mass index, and birth index (P < 0.05). No measure was correlated with body size at ages 5-10 years, age at menarche, age at first birth, or breastfeeding history (P > 0.05). Neither quantitative nor qualitative measures were associated with breast cancer risk. CONCLUSIONS: Among Nurses' Health Studies women diagnosed with BBD, TDLU involution is not a biomarker of subsequent breast cancer. IMPACT: TDLU involution may not impact breast cancer risk as previously thought.
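The abstract's main analysis, unconditional logistic regression relating a quantitative measure to case-control status while adjusting for covariates, can be sketched in numpy. The cohort, effect sizes, and the single "TDLU measure" below are entirely synthetic (the study itself found no association); the sketch only shows how an adjusted odds ratio is obtained:

```python
import numpy as np

rng = np.random.default_rng(3)

def fit_logistic(X, y, lr=0.05, steps=2000):
    """Unconditional logistic regression fit by gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Synthetic case-control data: case status depends on age and on one
# hypothetical binary tissue measure with true log-odds-ratio 0.7.
n = 2000
age = rng.normal(60, 10, n)
measure = rng.integers(0, 2, n).astype(float)
logit = -0.5 + 0.03 * (age - 60) + 0.7 * measure
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

# Design matrix: intercept, standardized age (the adjustment), measure.
X = np.column_stack([np.ones(n), (age - 60) / 10, measure])
w = fit_logistic(X, y)
print("age-adjusted odds ratio for the measure:", np.exp(w[2]))
```

In the study, such odds ratios for the eight TDLU measures were additionally adjusted for matching factors, BBD subtype, parity, and menopausal status, and none differed significantly from 1.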


Subject(s)
Breast Neoplasms/physiopathology , Adult , Female , Humans , Middle Aged , Risk Factors