Results 1 - 10 of 10
1.
NPJ Digit Med; 7(1): 97, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38622284

ABSTRACT

Meniscal injury is a common type of knee injury, accounting for over 50% of all knee injuries. Clinical diagnosis and treatment of meniscal injury rely heavily on magnetic resonance imaging (MRI). However, accurately diagnosing the meniscus from a comprehensive knee MRI is challenging due to its limited and weak signal, which significantly impedes precise grading of meniscal injuries. In this study, a visually interpretable fine-grading (VIFG) diagnosis model was developed to enable intelligent, quantified grading of meniscal injuries. Leveraging a multilevel transfer-learning framework, it extracts comprehensive features and incorporates an attributional attention module to precisely locate the injured positions. Moreover, an attention-enhancing feedback module effectively concentrates on and distinguishes regions with similar grades of injury. The proposed method was validated on the FastMRI_Knee and Xijing_Knee datasets, achieving mean grading accuracies of 0.8631 and 0.8502 and surpassing state-of-the-art grading methods, notably in the error-prone Grade 1 and Grade 2 cases. Additionally, the visually interpretable heatmaps generated by VIFG accurately depict actual or potential meniscus injury areas beyond human visual capability. Building on this, a novel fine-grading criterion was introduced for subtypes of meniscal injury, further classifying Grade 2 into 2a, 2b, and 2c in line with the anatomical knowledge of meniscal blood supply. This provides enhanced injury-specific details, facilitating the development of more precise surgical strategies. The efficacy of the subtype classification was evidenced in 20 arthroscopic cases, underscoring the potential of intelligent-assisted diagnosis and treatment for meniscal injuries.
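To illustrate how such interpretable heatmaps can be produced in principle, the following minimal Python/PyTorch sketch computes a Grad-CAM-style class-activation map for a grading classifier. The backbone, layer choice, five-grade output, and input size are illustrative assumptions, not the published VIFG attributional attention module.

# Hedged sketch: Grad-CAM-style heatmap for a grading classifier.
# The backbone, hooked layer, and 5-grade output are illustrative
# assumptions, not the published VIFG architecture.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(num_classes=5)  # assumed 5-way grade split
model.eval()

feats, grads = {}, {}
layer = model.layer4  # last conv stage; this choice is an assumption
layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 3, 224, 224)  # stand-in for an MRI slice
score = model(x)[0].max()        # logit of the top predicted grade
score.backward()

w = grads["a"].mean(dim=(2, 3), keepdim=True)            # channel weights
cam = F.relu((w * feats["a"]).sum(dim=1, keepdim=True))  # weighted sum
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear",
                    align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # 0-1 heatmap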

2.
IEEE J Biomed Health Inform; 28(5): 3042-3054, 2024 May.
Article in English | MEDLINE | ID: mdl-38376973

ABSTRACT

Accurate fine-grained grading of lumbar intervertebral disc (LIVD) degeneration is essential for diagnosing and designing treatment for high-incidence low back pain. However, grading accuracy is still limited by a lack of fine-grained degenerative detail, mainly because existing grading methods are easily dominated by the salient nucleus pulposus region in the LIVD and overlook the inconspicuous degenerative changes of the surrounding structures. In this study, a novel regional feature recalibration network (RFRecNet) is proposed to achieve accurate and reliable LIVD degeneration grading. A detection transformer (DETR) is first used to detect all LIVDs, which are then input to the proposed RFRecNet for fine-grained grading. To obtain sufficient features from both the salient nucleus pulposus and the surrounding regions, a regional cube-based feature boosting and suppression (RC-FBS) module is designed to adaptively recalibrate feature extraction and utilization across the various regions of the LIVD, and a feature diversification (FD) module is proposed to capture complementary semantic information from multi-scale features for comprehensive fine-grained degeneration grading. Extensive experiments were conducted on a clinically collected dataset consisting of 500 MR scans with a total of 10,225 LIVDs. An average grading accuracy of 90.5%, specificity of 97.5%, sensitivity of 90.8%, and Cohen's kappa coefficient of 0.876 were obtained, indicating that the proposed framework is promising for providing doctors with reliable and consistent fine-grained quantitative evaluation of LIVD degeneration for optimal surgical plan design.
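As a rough illustration of region-wise feature recalibration (boosting or suppressing regional cubes of a feature map), the hedged PyTorch sketch below gates a 2D feature map over a grid of regions. The grid size, gating MLP, and 2D setting are assumptions chosen for brevity, not the published RC-FBS design.

# Hedged sketch: recalibrating features per spatial region, in the
# spirit of boosting/suppressing regional cubes. Not the RC-FBS module.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegionalRecalibration(nn.Module):
    def __init__(self, channels: int, grid: int = 4):
        super().__init__()
        self.grid = grid  # split the map into grid x grid regions
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // 4),
            nn.ReLU(inplace=True),
            nn.Linear(channels // 4, 1),
            nn.Sigmoid(),  # one boost/suppress weight per region
        )

    def forward(self, x):
        b, c, h, w = x.shape
        # average-pool each region to a descriptor: (B, grid*grid, C)
        desc = F.adaptive_avg_pool2d(x, self.grid).flatten(2).transpose(1, 2)
        weights = self.gate(desc).view(b, 1, self.grid, self.grid)
        # upsample region weights back to the feature resolution
        weights = F.interpolate(weights, size=(h, w), mode="nearest")
        return x * weights  # suppress or boost each regional cube

feat = torch.randn(2, 64, 32, 32)
out = RegionalRecalibration(64)(feat)  # same shape, recalibrated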


Subject(s)
Image Interpretation, Computer-Assisted; Intervertebral Disc Degeneration; Lumbar Vertebrae; Magnetic Resonance Imaging; Humans; Intervertebral Disc Degeneration/diagnostic imaging; Lumbar Vertebrae/diagnostic imaging; Magnetic Resonance Imaging/methods; Image Interpretation, Computer-Assisted/methods; Algorithms
3.
Phys Med; 110: 102595, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37178624

ABSTRACT

PURPOSE: Although many deep learning-based abdominal multi-organ segmentation networks have been proposed, the varied intensity distributions and organ shapes in multi-center, multi-phase CT images of patients with various diseases pose new challenges for robust abdominal CT segmentation. To achieve robust and efficient abdominal multi-organ segmentation, a new two-stage method is presented in this study. METHODS: A binary segmentation network is used for coarse localization, followed by a multi-scale attention network for fine segmentation of the liver, kidney, spleen, and pancreas. To constrain the organ shapes produced by the fine segmentation network, an additional network is pre-trained to learn the shape features of organs with serious diseases and then employed to constrain the training of the fine segmentation network. RESULTS: The performance of the presented method was extensively evaluated on the multi-center dataset from the Fast and Low GPU Memory Abdominal oRgan sEgmentation (FLARE) challenge, held in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2021. The Dice similarity coefficient (DSC) and normalized surface Dice (NSD) were calculated to quantitatively evaluate segmentation accuracy and efficiency. An average DSC of 83.7% and NSD of 64.4% were achieved, and our method won second place among more than 90 participating teams. CONCLUSIONS: The evaluation results on the public challenge demonstrate that our method shows promising robustness and efficiency, which may promote the clinical application of automatic abdominal multi-organ segmentation.
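The DSC reported here is a standard overlap metric; a minimal NumPy sketch of its computation on binary masks is shown below (the label handling and toy masks are illustrative).

# Hedged sketch: the Dice Similarity Coefficient used for evaluation.
# Binary masks only; the toy masks below are illustrative.
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray,
                     eps: float = 1e-8) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary organ masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return float(2.0 * inter / (pred.sum() + gt.sum() + eps))

# toy example: two overlapping "organ" masks
a = np.zeros((64, 64), dtype=np.uint8); a[16:48, 16:48] = 1
b = np.zeros((64, 64), dtype=np.uint8); b[24:56, 24:56] = 1
print(f"DSC = {dice_coefficient(a, b):.3f}")  # 0.562 for these masks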


Subject(s)
Algorithms; Neural Networks, Computer; Tomography, X-Ray Computed/methods; Abdomen/diagnostic imaging; Spleen/diagnostic imaging; Image Processing, Computer-Assisted/methods
4.
Phys Med Biol; 66(20), 2021 Oct 05.
Article in English | MEDLINE | ID: mdl-34517352

ABSTRACT

Objective. Ankylosing spondylitis (AS) is a disabling systemic disease that seriously threatens patients' quality of life. Magnetic resonance imaging (MRI) is highly preferred in clinical diagnosis due to its high contrast and tissue resolution. However, because of the uncertain appearance and inhomogeneous intensity of AS lesions in MRI, it is still challenging and time-consuming for doctors to quantify the lesions and determine the grade of the patient's condition. Thus, an automatic AS grading method is presented in this study, which integrates lesion segmentation and grading in a single pipeline. Approach. To handle the large variations in lesion shapes, sizes, and intensity distributions, a lightweight hybrid multi-scale convolutional neural network with reinforcement learning (LHR-Net) is proposed for AS lesion segmentation. Specifically, the proposed LHR-Net is equipped with a newly proposed hybrid multi-scale module, which consists of multiple convolutional layers with different kernel sizes and dilation rates for extracting sufficient multi-scale features. Additionally, a reinforcement learning-based data augmentation module is used to handle subjects with diffuse, fuzzy lesions that are difficult to segment. Furthermore, to resolve the incomplete segmentation results caused by the inhomogeneous intensity distributions of AS lesions in MR images, a voxel constraint strategy is proposed to weight the training voxel labels in the lesion regions. With the accurately segmented AS lesions, automatic AS grading is then performed by a ResNet-50-based classification network. Main results. The performance of the proposed LHR-Net was extensively evaluated on a clinically collected AS MRI dataset comprising 100 subjects. The Dice similarity coefficient (DSC), average surface distance, Hausdorff distance at the 95th percentile (HD95), predicted positive volume, and sensitivity were used to quantitatively evaluate the segmentation results. The average DSC of the proposed LHR-Net reached 0.71 on the test set, outperforming the next-best state-of-the-art segmentation method by 0.04. Significance. With the accurately segmented lesions, 31 of the 38 subjects in the test set were correctly graded, demonstrating that the proposed LHR-Net could provide a potential automatic method for reproducible computer-assisted diagnosis of AS grading.
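The hybrid multi-scale idea of mixing kernel sizes and dilation rates can be sketched as a multi-branch 3D convolution block, as below. The branch count, channel split, and normalization are assumptions, not the published LHR-Net module.

# Hedged sketch: a multi-branch block mixing kernel sizes and dilation
# rates to widen the receptive field. Not the published LHR-Net design.
import torch
import torch.nn as nn

class HybridMultiScaleBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        branch_ch = out_ch // 4
        self.branches = nn.ModuleList([
            nn.Conv3d(in_ch, branch_ch, kernel_size=1),
            nn.Conv3d(in_ch, branch_ch, kernel_size=3, padding=1),
            nn.Conv3d(in_ch, branch_ch, kernel_size=3, padding=2, dilation=2),
            nn.Conv3d(in_ch, branch_ch, kernel_size=3, padding=4, dilation=4),
        ])
        self.fuse = nn.Sequential(
            nn.Conv3d(branch_ch * 4, out_ch, kernel_size=1),
            nn.InstanceNorm3d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        # each branch sees the input at a different effective scale
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

x = torch.randn(1, 16, 32, 64, 64)  # stand-in MRI feature volume
print(HybridMultiScaleBlock(16, 32)(x).shape)  # (1, 32, 32, 64, 64)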


Subject(s)
Spondylitis, Ankylosing; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging; Neural Networks, Computer; Quality of Life; Spondylitis, Ankylosing/diagnostic imaging
5.
Med Phys; 48(8): 4459-4471, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34101198

ABSTRACT

PURPOSE: Missing or discrepant imaging volume is a common challenge in deformable image registration (DIR). To minimize the adverse impact, we train a neural network to synthesize cropped portions of head and neck CTs and then test its use in DIR. METHODS: Using a training dataset of 409 head and neck CTs, we trained a generative adversarial network to take in a cropped 3D image and output an image with synthesized anatomy in the cropped region. The network used a 3D U-Net generator along with Visual Geometry Group (VGG) deep feature losses. To test our technique, for each of the 53 test volumes, we used Elastix to deformably register combinations of a randomly cropped, full, and synthetically full volume to a single cropped, full, and synthetically full target volume. We additionally tested our method's robustness to crop extent by progressively increasing the amount of cropping, synthesizing the missing anatomy with our network, and then performing the same registration combinations. Registration performance was measured using the 95% Hausdorff distance across 16 contours. RESULTS: We successfully trained a network to synthesize missing anatomy in superiorly and inferiorly cropped images. The network can estimate large regions of an incomplete image, far from the cropping boundary. Registration using our estimated full images was not significantly different from registration using the original full images. The average contour matching error for full image registration was 9.9 mm, whereas our method yielded 11.6, 12.1, and 13.6 mm for synthesized-to-full, full-to-synthesized, and synthesized-to-synthesized registrations, respectively. In comparison, registration using the cropped images had errors of 31.7 mm and higher. Plotting the registered image contour error as a function of initial preregistered error shows that our method is robust to registration difficulty. Synthesized-to-full registration was statistically independent of cropping extent up to 18.7 cm of superior cropping. Synthesized-to-synthesized registration was nearly independent, with a change of -0.04 mm in average contour error for every additional millimeter of cropping. CONCLUSIONS: Differences or inadequacies in scan extent are a major cause of DIR inaccuracies. We address this challenge by training a neural network to complete cropped 3D images. We show that with image completion, this source of DIR inaccuracy is eliminated, and the method is robust to varying crop extent.
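The 95% Hausdorff distance used here as the registration metric can be computed from two surface point sets; a minimal sketch using SciPy follows (isotropic units and the toy point sets are assumptions).

# Hedged sketch: 95th-percentile Hausdorff distance between two point
# sets, as used to score contour agreement. Voxel spacing is omitted.
import numpy as np
from scipy.spatial import cKDTree

def hd95(points_a: np.ndarray, points_b: np.ndarray) -> float:
    """Symmetric 95% Hausdorff distance between (N,3) surface points."""
    d_ab, _ = cKDTree(points_b).query(points_a)  # A -> nearest in B
    d_ba, _ = cKDTree(points_a).query(points_b)  # B -> nearest in A
    return float(max(np.percentile(d_ab, 95), np.percentile(d_ba, 95)))

a = np.random.rand(500, 3) * 100
b = a + np.random.randn(500, 3)  # a slightly perturbed copy of a
print(f"HD95 = {hd95(a, b):.2f} (toy units)")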


Subject(s)
Image Processing, Computer-Assisted; Neural Networks, Computer; Algorithms; Head; Humans; Imaging, Three-Dimensional; Neck
6.
Phys Med Biol; 66(3): 035001, 2021 Jan 26.
Article in English | MEDLINE | ID: mdl-33197901

ABSTRACT

Automated male pelvic multi-organ segmentation on CT images is highly desired for applications including radiotherapy planning. To further improve the performance and efficiency of existing automated segmentation methods, in this study we propose a multi-task edge-recalibrated network (MTER-Net), which aims to overcome challenges including blurry boundaries, large inter-patient appearance variations, and low soft-tissue contrast. The proposed MTER-Net is equipped with the following novel components. (a) To exploit the saliency and stability of the femoral heads, we employ a lightweight localization module to locate the target region and efficiently remove the complex background. (b) We add an edge stream to the regular segmentation stream to focus on edge-related information, distinguish organs with blurry boundaries, and thereby boost overall segmentation performance. Between the regular segmentation stream and the edge stream, we introduce an edge recalibration module at each resolution level to connect the intermediate layers and deliver higher-level activations from the regular stream to the edge stream, denoising irrelevant activations. (c) Finally, a 3D Atrous Spatial Pyramid Pooling (ASPP) feature fusion module fuses the multi-scale features of the regular stream with the predictions of the edge stream to form the final segmentation result. The proposed network was evaluated on CT images of 200 prostate cancer patients with manually delineated contours of the bladder, rectum, seminal vesicle, and prostate. Segmentation performance was quantitatively evaluated using three metrics: Dice similarity coefficient (DSC), average surface distance (ASD), and 95% surface distance (95SD). The proposed MTER-Net achieves an average DSC of 86.35%, ASD of 1.09 mm, and 95SD of 3.53 mm over the four organs, outperforming state-of-the-art segmentation networks by a large margin. The per-organ DSC results are 96.49% (bladder), 86.39% (rectum), 76.38% (seminal vesicle), and 86.14% (prostate). In conclusion, we demonstrate that the proposed MTER-Net efficiently attains performance superior to state-of-the-art pelvic organ segmentation methods.
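A 3D ASPP fusion of the kind named here applies parallel atrous convolutions at several dilation rates and merges the results; a hedged PyTorch sketch follows, with dilation rates and channel widths chosen for illustration rather than taken from MTER-Net.

# Hedged sketch: 3D Atrous Spatial Pyramid Pooling (ASPP) fusion.
# Rates and widths are illustrative, not the MTER-Net configuration.
import torch
import torch.nn as nn

class ASPP3D(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv3d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        ])
        self.project = nn.Conv3d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        # parallel atrous branches capture context at several scales
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

x = torch.randn(1, 32, 16, 32, 32)
print(ASPP3D(32, 32)(x).shape)  # torch.Size([1, 32, 16, 32, 32])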


Subject(s)
Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Pelvis/diagnostic imaging; Tomography, X-Ray Computed; Humans; Male
7.
Phys Med Biol; 65(13): 135011, 2020 Jul 13.
Article in English | MEDLINE | ID: mdl-32657281

ABSTRACT

Automated multi-organ segmentation on abdominal CT images may replace or complement manual segmentation for clinical applications including image-guided radiation therapy. However, auto-segmentation accuracy is challenged by low image contrast and large spatial and inter-patient anatomical variations. In this study, we propose an end-to-end segmentation network, termed self-paced DenseNet, for improved multi-organ segmentation, especially for difficult-to-segment organs. Specifically, a learning-based attention mechanism and dense connection blocks are seamlessly integrated into the proposed self-paced DenseNet to improve the learning capability and efficiency of the backbone network. To focus heavily on organs showing low soft-tissue contrast and motion artifacts, a boundary condition is used to constrain the network optimization. Additionally, to ease the large discrepancies in learning pace among individual organs, a task-wise self-paced-learning strategy is employed to adaptively control the learning pace of each organ. The proposed self-paced DenseNet was trained and evaluated on a public abdominal CT dataset consisting of 90 subjects with manually labeled ground truths for eight organs (spleen, left kidney, esophagus, gallbladder, stomach, liver, pancreas, and duodenum). For quantitative evaluation, the Dice similarity coefficient (DSC) and average surface distance (ASD) were calculated. An average DSC of 84.46% and ASD of 1.82 mm were achieved over the eight organs, outperforming state-of-the-art segmentation methods by 2.96% in DSC under the same experimental configuration. Moreover, the proposed method shows notable improvements on the duodenum and gallbladder, obtaining average DSCs of 69.26% and 80.94% and ASDs of 2.14 mm and 2.24 mm, respectively. These results are markedly superior to the averages of 63.12% and 76.35% DSC and 3.87 mm and 4.33 mm ASD obtained with the vanilla DenseNet for the two organs. We demonstrated the effectiveness of the proposed self-paced DenseNet for automatically segmenting abdominal organs with low boundary conspicuity. The self-paced DenseNet achieved consistently superior segmentation performance on eight abdominal organs of varying segmentation difficulty, and its computational efficiency (<2 s/CT) makes it well suited for online applications.
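For intuition, the textbook self-paced learning rule applied task-wise assigns each organ a weight based on whether its current loss falls below a growing pace parameter λ; the sketch below shows that classic binary rule, which may well differ from the paper's exact strategy.

# Hedged sketch: classic self-paced learning (SPL) applied task-wise,
# one weight per organ. The binary rule and λ schedule are the textbook
# SPL formulation, not necessarily the published strategy.
import torch

def spl_weights(per_organ_losses: torch.Tensor, lam: float) -> torch.Tensor:
    """Include ("pace in") organs whose current loss is below λ."""
    return (per_organ_losses.detach() < lam).float()

# toy per-organ Dice losses for the eight abdominal organs
losses = torch.tensor([0.10, 0.15, 0.40, 0.35, 0.20, 0.08, 0.30, 0.45])
for epoch, lam in enumerate([0.2, 0.35, 0.5]):  # growing pace parameter
    w = spl_weights(losses, lam)
    print(f"epoch {epoch}: lam={lam}, active organs={int(w.sum())}, "
          f"loss={float((w * losses).sum()):.3f}")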


Subject(s)
Abdomen/diagnostic imaging; Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed; Algorithms; Artifacts; Automation; Humans
8.
Phys Med Biol; 65(24): 245034, 2020 Dec 11.
Article in English | MEDLINE | ID: mdl-32097892

ABSTRACT

Accurate segmentation of organs at risk (OARs) is necessary for adaptive head and neck (H&N) cancer treatment planning, but manual delineation is tedious, slow, and inconsistent. A self-channel-and-spatial-attention neural network (SCSA-Net) is developed for H&N OAR segmentation on CT images. To simultaneously ease training and improve segmentation performance, the proposed SCSA-Net exploits the self-attention ability of the network: spatial and channel-wise attention learning mechanisms are both employed to adaptively emphasize meaningful features and weaken irrelevant features. The proposed network was first evaluated on a public dataset of 48 patients, and then on a separate serial CT dataset of ten patients who received weekly diagnostic fan-beam CT scans. On the second dataset, the accuracy of using SCSA-Net to track parotid and submandibular gland volume changes during radiotherapy was quantified. The Dice similarity coefficient (DSC), positive predictive value (PPV), sensitivity (SEN), average surface distance (ASD), and 95% maximum surface distance (95SD) were calculated for the brainstem, optic chiasm, optic nerves, mandible, parotid glands, and submandibular glands. The proposed SCSA-Net consistently outperforms state-of-the-art methods on the public dataset. Specifically, compared with Res-Net and SE-Net (which is built from residual blocks equipped with squeeze-and-excitation blocks), SCSA-Net improves the DSC of the optic nerves and submandibular glands by 0.06 and 0.03 over Res-Net and by 0.05 and 0.04 over SE-Net, respectively. Moreover, the proposed method achieves statistically significant DSC improvements on all nine OARs over Res-Net and on eight of the nine OARs over SE-Net. The trained network achieved good segmentation results on the serial dataset, which were further improved by fine-tuning the model on the simulation CT images. For the parotid and submandibular glands, the volume changes of individual patients are highly consistent between automated and manual segmentation (Pearson's correlation 0.97-0.99). The proposed SCSA-Net is computationally efficient, performing segmentation in approximately 2 s per CT.
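Concurrent channel and spatial attention of the kind described can be sketched with the well-known scSE pattern: a squeeze-and-excitation channel gate combined with a per-voxel spatial gate. The block below is that generic pattern, not the published SCSA-Net block.

# Hedged sketch: concurrent channel and spatial attention (scSE-style).
# The published SCSA-Net block may differ in detail.
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, ch: int, reduction: int = 8):
        super().__init__()
        self.channel = nn.Sequential(   # squeeze-and-excitation gate
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv3d(ch // reduction, ch, 1), nn.Sigmoid(),
        )
        self.spatial = nn.Sequential(   # per-voxel gate
            nn.Conv3d(ch, 1, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        # emphasize informative channels and locations, then combine
        return torch.max(x * self.channel(x), x * self.spatial(x))

x = torch.randn(1, 32, 8, 32, 32)
print(ChannelSpatialAttention(32)(x).shape)  # shape is unchanged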


Subject(s)
Head and Neck Neoplasms/pathology; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Organs at Risk/radiation effects; Tomography, X-Ray Computed/methods; Head and Neck Neoplasms/diagnostic imaging; Humans
9.
Med Phys; 46(6): 2669-2682, 2019 Jun.
Article in English | MEDLINE | ID: mdl-31002188

ABSTRACT

PURPOSE: Image-guided radiotherapy provides images not only for patient positioning but also for online adaptive radiotherapy. Accurate delineation of organs-at-risk (OARs) on head and neck (H&N) CT and MR images is valuable for both initial and adaptive treatment planning, but manual contouring is laborious and inconsistent. A novel method based on a generative adversarial network with shape constraint (SC-GAN) is developed for fully automated H&N OAR segmentation on CT and low-field MRI. METHODS AND MATERIALS: A deeply supervised fully convolutional DenseNet is employed as the segmentation network for voxel-wise prediction. A convolutional neural network (CNN)-based discriminator network is then used to correct predicted errors and image-level inconsistency between the prediction and the ground truth. An additional shape representation loss between the prediction and the ground truth in the latent shape space is integrated into the segmentation and adversarial loss functions to reduce false positives and constrain the predicted shapes. The proposed segmentation method was first benchmarked on a public H&N CT database of 32 patients, and then on 25 0.35 T MR image sets obtained from an MR-guided radiotherapy system. The OARs include the brainstem, optic chiasm, larynx (MR only), mandible, pharynx (MR only), parotid glands (left and right), optic nerves (left and right), and submandibular glands (left and right, CT only). The performance of the proposed SC-GAN was compared with GAN alone and with GAN with the shape constraint (SC) but without the DenseNet (SC-GAN-ResNet) to quantify the contributions of the shape constraint and the DenseNet to deep neural network segmentation. RESULTS: The proposed SC-GAN slightly but consistently improves segmentation accuracy on the benchmark H&N CT images compared with our previous deep segmentation network, which outperformed other published methods on the same or similar CT H&N datasets. On the low-field MR dataset, the following average Dice indices were obtained with the improved SC-GAN: 0.916 (brainstem), 0.589 (optic chiasm), 0.816 (mandible), 0.703 (optic nerves), 0.799 (larynx), 0.706 (pharynx), and 0.845 (parotid glands). The average surface distances ranged from 0.68 mm (brainstem) to 1.70 mm (larynx). The 95% surface distance ranged from 1.48 mm (left optic nerve) to 3.92 mm (larynx). Compared with CT, by the 95% surface distance evaluation, automated segmentation accuracy is higher on MR for the brainstem, optic chiasm, optic nerves, and parotids, and lower for the mandible. SC-GAN outperforms SC-GAN-ResNet, which in turn is more accurate than GAN alone on both the CT and MR datasets. The segmentation time for one patient is 14 seconds on a single GPU. CONCLUSION: The performance of our previous shape-constrained fully convolutional network for H&N segmentation is further improved by incorporating GAN and DenseNet. With the novel segmentation method, we show that low-field MR images acquired on an MR-guided radiotherapy system can support accurate and fully automated segmentation of both bony and soft-tissue OARs for adaptive radiotherapy.
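The shape representation loss described here compares the prediction and the ground truth in the latent space of a pre-trained shape encoder and adds the result to the segmentation and adversarial terms. The sketch below shows this composition; the encoder architecture and loss weights are illustrative assumptions, not the SC-GAN values.

# Hedged sketch: a shape-representation loss computed in the latent
# space of a frozen, pre-trained encoder, combined with segmentation
# and adversarial terms. Architecture and weights are assumptions.
import torch
import torch.nn as nn

shape_encoder = nn.Sequential(  # stand-in for a pre-trained encoder
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
for p in shape_encoder.parameters():
    p.requires_grad_(False)  # frozen: it only defines the shape space

def total_loss(pred, gt, seg_loss, adv_loss, w_shape=0.1, w_adv=0.01):
    shape_loss = nn.functional.mse_loss(shape_encoder(pred),
                                        shape_encoder(gt))
    return seg_loss + w_adv * adv_loss + w_shape * shape_loss

pred = torch.rand(2, 1, 64, 64)  # predicted OAR probability map
gt = (torch.rand(2, 1, 64, 64) > 0.5).float()
print(total_loss(pred, gt, seg_loss=torch.tensor(0.3),
                 adv_loss=torch.tensor(0.7)))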


Subject(s)
Head and Neck Neoplasms/diagnostic imaging; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging; Neural Networks, Computer; Tomography, X-Ray Computed; Head and Neck Neoplasms/radiotherapy; Humans; Organs at Risk/diagnostic imaging; Organs at Risk/radiation effects; Radiotherapy, Image-Guided/adverse effects
10.
Med Phys; 45(10): 4558-4567, 2018 Oct.
Article in English | MEDLINE | ID: mdl-30136285

ABSTRACT

PURPOSE: Intensity-modulated radiation therapy (IMRT) is commonly employed to treat head and neck (H&N) cancer with uniform tumor dose and conformal critical-organ sparing. Accurate delineation of organs-at-risk (OARs) on H&N CT images is thus essential to treatment quality. Manual contouring used in current clinical practice is tedious, time-consuming, and can produce inconsistent results. Existing automated segmentation methods are challenged by substantial inter-patient anatomical variation and low CT soft-tissue contrast. To overcome these challenges, we developed a novel automated H&N OAR segmentation method that combines a fully convolutional neural network (FCNN) with a shape representation model (SRM). METHODS: Based on manually segmented H&N CT, the SRM and FCNN were trained in two steps: (a) the SRM learned the latent shape representation of H&N OARs from the training dataset; (b) the pre-trained SRM, with fixed parameters, was used to constrain FCNN training. The combined segmentation network was then used to delineate nine OARs, including the brainstem, optic chiasm, mandible, optic nerves, parotids, and submandibular glands, on unseen H&N CT images. Twenty-two and ten H&N CT scans provided by the Public Domain Database for Computational Anatomy (PDDCA) were used for training and validation, respectively. The Dice similarity coefficient (DSC), positive predictive value (PPV), sensitivity (SEN), average surface distance (ASD), and 95% maximum surface distance (95%SD) were calculated to quantitatively evaluate segmentation accuracy. The proposed method was compared with an active appearance model that won the 2015 MICCAI H&N Segmentation Grand Challenge on the same dataset, as well as with an atlas method and a deep learning method based on different patient datasets. RESULTS: Average DSCs of 0.870 (brainstem), 0.583 (optic chiasm), 0.937 (mandible), 0.653 (left optic nerve), 0.689 (right optic nerve), 0.835 (left parotid), 0.832 (right parotid), 0.755 (left submandibular), and 0.813 (right submandibular) were achieved. The segmentation results are consistently superior to those of atlas- and statistical-shape-based methods as well as a patch-wise convolutional neural network method. Once the networks are trained offline, the average time to segment all nine OARs for an unseen CT scan is 9.5 s. CONCLUSION: Experiments on clinical H&N datasets demonstrated the effectiveness of the proposed deep neural network method for multi-organ segmentation on volumetric CT scans. Segmentation accuracy and robustness were further increased by incorporating shape priors via the SRM. The proposed method showed competitive performance and segmented multiple organs in less time than state-of-the-art methods.
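PPV and sensitivity, reported here alongside DSC, reduce to simple voxel counts on binary masks; a minimal NumPy sketch follows (the toy masks are illustrative).

# Hedged sketch: voxel-wise PPV and sensitivity on binary masks.
# The toy masks below are illustrative, not study data.
import numpy as np

def ppv_sen(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8):
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    ppv = tp / (pred.sum() + eps)  # TP / (TP + FP)
    sen = tp / (gt.sum() + eps)    # TP / (TP + FN)
    return float(ppv), float(sen)

pred = np.zeros((32, 32, 32), dtype=np.uint8); pred[8:24, 8:24, 8:24] = 1
gt = np.zeros((32, 32, 32), dtype=np.uint8); gt[10:26, 10:26, 10:26] = 1
print("PPV=%.3f  SEN=%.3f" % ppv_sen(pred, gt))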


Subject(s)
Head and Neck Neoplasms/diagnostic imaging; Head and Neck Neoplasms/radiotherapy; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Tomography, X-Ray Computed; Automation; Humans; Models, Theoretical; Organs at Risk/radiation effects