2.
Phys Eng Sci Med ; 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38656437

ABSTRACT

Cervical cancer is a common cancer in women globally, with treatment usually involving radiation therapy (RT). Accurate segmentation of the tumour site and organs-at-risk (OARs) could help reduce treatment side effects and improve treatment planning efficiency. Cervical cancer Magnetic Resonance Imaging (MRI) segmentation is challenging due to the limited amount of training data available and large inter- and intra-patient shape variation of OARs. The proposed Masked-Net consists of a masked encoder within the 3D U-Net to account for the large shape variation within the dataset, with additional dilated layers added to improve segmentation performance. A new loss function was introduced to incorporate a bounding box loss during training with the proposed Masked-Net. Transfer learning from male pelvis MRI data with a similar field of view was also included. The approaches were compared to the 3D U-Net, which is widely used in MRI image segmentation. The data consisted of 52 volumes obtained from 23 patients with stage IB to IVB cervical cancer across a maximum of 7 weeks of RT, with manually contoured labels including the bladder, cervix, gross tumour volume, uterus and rectum. The model was trained and tested with 5-fold cross-validation. Outcomes were evaluated using the Dice Similarity Coefficient (DSC), the Hausdorff Distance (HD) and the Mean Surface Distance (MSD). The proposed method accounted for the small dataset and large variations in OAR shape and tumour size, with an average DSC, HD and MSD across all anatomical structures of 0.790, 30.19 mm and 3.15 mm, respectively.
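The Dice Similarity Coefficient used to evaluate segmentations such as these is straightforward to compute; a minimal numpy sketch, with toy 2D masks standing in for 3D label volumes:

```python
import numpy as np

def dice(pred, truth):
    """Dice Similarity Coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    return 2.0 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum())

# Toy 8x8 masks: 16 voxels each, 9 overlapping.
a = np.zeros((8, 8), dtype=int); a[2:6, 2:6] = 1
b = np.zeros((8, 8), dtype=int); b[3:7, 3:7] = 1
print(dice(a, b))  # 2*9 / (16 + 16) = 0.5625
```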

3.
Med Image Anal ; 93: 103089, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38246088

ABSTRACT

In medical image analysis, automated segmentation of multi-component anatomical entities, with the possible presence of variable anomalies or pathologies, is a challenging task. In this work, we develop a multi-step approach using U-Net-based models to initially detect anomalies (bone marrow lesions, bone cysts) in the distal femur, proximal tibia and patella from 3D magnetic resonance (MR) images in individuals with varying grades of knee osteoarthritis. Subsequently, the extracted data are used for downstream tasks involving semantic segmentation of individual bone and cartilage volumes as well as bone anomalies. For anomaly detection, U-Net-based models were developed to reconstruct bone volume profiles of the femur and tibia in images via inpainting, so that anomalous bone regions could be replaced with close-to-normal appearances. The reconstruction error was used to detect bone anomalies. An anomaly-aware segmentation network, which was compared to anomaly-naïve segmentation networks, was used to provide a final automated segmentation of the individual femoral, tibial and patellar bone and cartilage volumes from the knee MR images, which contain a spectrum of bone anomalies. The anomaly-aware segmentation approach provided up to a 58% reduction in Hausdorff distances for bone segmentations compared to the results from anomaly-naïve segmentation networks. In addition, the anomaly-aware networks were able to detect bone anomalies in the MR images with greater sensitivity and specificity (area under the receiver operating characteristic curve [AUC] up to 0.896) compared to anomaly-naïve segmentation networks (AUC up to 0.874).
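The reconstruction-error criterion reduces to thresholding the residual between the observed image and its inpainted reconstruction; a deliberately simplified numpy sketch (the actual reconstructions come from the trained U-Net inpainting models, and the threshold here is arbitrary):

```python
import numpy as np

def anomaly_map(image, reconstruction, threshold):
    """Flag voxels where the 'normal-appearance' reconstruction
    differs strongly from the observed image (a large residual
    suggests a possible lesion or cyst)."""
    residual = np.abs(image - reconstruction)
    return residual > threshold

rng = np.random.default_rng(0)
recon = rng.normal(1.0, 0.05, size=(16, 16))   # stands in for inpainted bone
img = recon.copy()
img[4:6, 4:6] += 1.0                           # simulated bright anomaly
mask = anomaly_map(img, recon, threshold=0.5)
print(int(mask.sum()))  # 4 anomalous pixels detected
```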


Subject(s)
Knee Joint , Osteoarthritis, Knee , Humans , Knee Joint/diagnostic imaging , Cartilage , Osteoarthritis, Knee/diagnostic imaging , Tibia/diagnostic imaging , Patella
4.
J Orthop Res ; 42(2): 385-394, 2024 02.
Article in English | MEDLINE | ID: mdl-37525546

ABSTRACT

Cam femoroacetabular impingement (FAI) syndrome is associated with hip osteoarthritis (OA) development. Hip shape features, derived from statistical shape modeling (SSM), are predictive for OA incidence, progression, and arthroplasty. Currently, no three-dimensional (3D) SSM studies have investigated whether there are cam shape differences between male and female patients, which may be of potential clinical relevance for FAI syndrome assessments. This study analyzed sex-specific cam location and shape in FAI syndrome patients from clinical magnetic resonance examinations (M:F 56:41, age: 16-63 years) using 3D focused shape modeling-based segmentation (CamMorph) and partial least squares regression to obtain shape features (latent variables [LVs]) of cam morphology. Two-way analysis of variance tests were used to assess cam LV data for sex and cam volume severity differences. There was no significant interaction between sex and cam volume severity for the LV data. A sex main effect was significant for LV 1 (cam size) and LV 2 (cam location) with medium to large effect sizes (p < 0.001, d > 0.75). Mean results revealed males presented with a superior-focused cam, whereas females presented with an anterior-focused cam. When stratified by cam volume, cam morphologies were located superiorly in male and anteriorly in female FAI syndrome patients with negligible, mild, or moderate cam volumes. Both male and female FAI syndrome patients with major cam volumes had a global cam distribution. In conclusion, sex-specific cam location differences are present in FAI syndrome patients with negligible, mild, and moderate cam volumes, whereas major cam volumes were globally distributed in both male and female patients.


Subject(s)
Femoracetabular Impingement , Osteoarthritis, Hip , Humans , Male , Female , Adolescent , Young Adult , Adult , Middle Aged , Femoracetabular Impingement/surgery , Magnetic Resonance Imaging , Imaging, Three-Dimensional/methods , Hip Joint/pathology
5.
Magn Reson Imaging ; 105: 17-28, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37839621

ABSTRACT

Sparse reconstruction is an important aspect of MRI, helping to reduce acquisition time and improve spatial-temporal resolution. Popular methods are based mostly on compressed sensing (CS), which relies on the random sampling of k-space to produce incoherent (noise-like) artefacts. Due to hardware constraints, 1D Cartesian phase-encode under-sampling schemes are popular for 2D CS-MRI. However, 1D under-sampling limits 2D incoherence between measurements, yielding structured aliasing artefacts (ghosts) that may be difficult to remove under a 2D sparsity model. Reconstruction algorithms typically deploy direction-insensitive 2D regularisation for these direction-associated artefacts. Recognising that phase-encode artefacts can be separated into contiguous 1D signals, we develop two decoupling techniques that enable explicit 1D regularisation and leverage the excellent 1D incoherence characteristics. We also derive a combined 1D + 2D reconstruction technique that takes advantage of spatial relationships within the image. Experiments conducted on retrospectively under-sampled brain and knee data demonstrate that combining the proposed 1D AliasNet modules with existing 2D deep learning (DL) recovery techniques leads to an improvement in image quality. We also find that AliasNet enables superior scaling of performance compared to increasing the size of the original 2D network layers. AliasNet therefore improves the regularisation of aliasing artefacts arising from phase-encode under-sampling by tailoring the network architecture to account for their expected appearance. The proposed 1D + 2D approach is compatible with any existing 2D DL recovery technique deployed for this application.
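The 1D Cartesian phase-encode under-sampling discussed above keeps the readout direction fully sampled and retains only a subset of phase-encode lines, which is why the resulting ghosts are coherent along one direction; a minimal numpy sketch of such a sampling pattern (purely illustrative, not the paper's scheme):

```python
import numpy as np

def undersample_phase_encode(kspace, accel, rng):
    """Keep a random subset of phase-encode lines (rows);
    the readout direction (columns) stays fully sampled."""
    n_pe = kspace.shape[0]
    mask = np.zeros(n_pe, dtype=bool)
    mask[rng.choice(n_pe, size=n_pe // accel, replace=False)] = True
    out = np.zeros_like(kspace)
    out[mask] = kspace[mask]
    return out, mask

rng = np.random.default_rng(1)
kspace = np.fft.fft2(np.ones((64, 64)))
under, mask = undersample_phase_encode(kspace, accel=4, rng=rng)
zero_filled = np.abs(np.fft.ifft2(under))   # exhibits phase-encode ghosting
print(int(mask.sum()))  # 16 of 64 lines kept (4x acceleration)
```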


Subject(s)
Artifacts , Image Processing, Computer-Assisted , Image Processing, Computer-Assisted/methods , Retrospective Studies , Magnetic Resonance Imaging/methods , Algorithms
6.
Neuroimage ; 277: 120267, 2023 08 15.
Article in English | MEDLINE | ID: mdl-37422279

ABSTRACT

Accurate medical classification requires large amounts of multi-modal data and, in many cases, different feature types. Previous studies have shown promising results when using multi-modal data, outperforming single-modality models when classifying diseases such as Alzheimer's Disease (AD). However, those models are usually not flexible enough to handle missing modalities. Currently, the most common workaround is discarding samples with missing modalities, which leads to considerable data under-utilisation. Given that labelled medical images are already scarce, the performance of data-driven methods like deep learning can be severely hampered. Therefore, a multi-modal method that can handle missing data in various clinical settings is highly desirable. In this paper, we present the Multi-Modal Mixing Transformer (3MT), a disease classification transformer that not only leverages multi-modal data but also handles missing data scenarios. In this work, we test 3MT for AD and cognitively normal (CN) classification and for predicting the conversion of mild cognitive impairment (MCI) to progressive MCI (pMCI) or stable MCI (sMCI) using clinical and neuroimaging data. The model uses a novel Cascaded Modality Transformers architecture with cross-attention to incorporate multi-modal information for more informed predictions. We propose a novel modality dropout mechanism to ensure an unprecedented level of modality independence and robustness in handling missing data scenarios. The result is a versatile network that enables the mixing of arbitrary numbers of modalities with different feature types and also ensures full data utilization in missing data scenarios. The model is trained and evaluated on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset with state-of-the-art performance, and further evaluated on the Australian Imaging, Biomarker & Lifestyle Flagship Study of Ageing (AIBL) dataset with missing data.
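A modality dropout mechanism of the kind described can be sketched as randomly zeroing whole modalities during training while guaranteeing at least one survives; this is an illustrative reading, not the exact 3MT implementation:

```python
import numpy as np

def modality_dropout(modalities, drop_prob, rng):
    """Randomly drop whole modalities (zero their features) so the
    network learns to predict from any surviving subset."""
    keep = rng.random(len(modalities)) >= drop_prob
    if not keep.any():                       # never drop everything
        keep[rng.integers(len(modalities))] = True
    return [m if k else np.zeros_like(m) for m, k in zip(modalities, keep)], keep

rng = np.random.default_rng(42)
imaging = np.ones(8)        # e.g. neuroimaging-derived features
clinical = np.ones(4)       # e.g. clinical scores
dropped, keep = modality_dropout([imaging, clinical], drop_prob=0.5, rng=rng)
```

At inference, a genuinely missing modality can then be handled the same way its absence was simulated in training: its slot is simply zeroed, so training and missing-data testing see the same input distribution.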


Subject(s)
Alzheimer Disease , Cognitive Dysfunction , Humans , Magnetic Resonance Imaging/methods , Alzheimer Disease/diagnostic imaging , Australia , Neuroimaging/methods , Biomarkers , Cognitive Dysfunction/diagnostic imaging
7.
BMJ Open ; 13(4): e067740, 2023 04 24.
Article in English | MEDLINE | ID: mdl-37094888

ABSTRACT

INTRODUCTION: Traumatic brain injury (TBI) is a heterogeneous condition with a broad spectrum of injury severity, pathophysiological processes and variable outcomes. For moderate-to-severe TBI survivors, recovery is often protracted and outcomes can range from total dependence to full recovery. Despite advances in medical treatment options, prognosis remains largely unchanged. The objective of this study is to develop a machine learning predictive model for neurological outcomes at 6 months in patients with moderate-to-severe TBI, incorporating longitudinal clinical, multimodal neuroimaging and blood biomarker predictor variables. METHODS AND ANALYSIS: A prospective, observational, cohort study will enrol 300 patients with moderate-to-severe TBI from seven Australian hospitals over 3 years. Candidate predictors, including demographic and general health variables and longitudinal clinical, neuroimaging (CT and MRI), blood biomarker and patient-reported outcome measures, will be collected at multiple time points within the acute phase of injury. The predictor variables will populate novel machine learning models to predict the Glasgow Outcome Scale Extended 6 months after injury. The study will also expand on current prognostic models by including novel blood biomarkers (circulating cell-free DNA) and the results of quantitative neuroimaging, such as Quantitative Susceptibility Mapping and Dynamic Contrast-Enhanced MRI, as predictor variables. ETHICS AND DISSEMINATION: Ethical approval has been obtained from the Royal Brisbane and Women's Hospital Human Research Ethics Committee, Queensland. Participants or their substitute decision-maker/s will receive oral and written information about the study before providing written informed consent. Study findings will be disseminated through peer-reviewed publications and presented at national and international conferences and clinical networks. TRIAL REGISTRATION NUMBER: ACTRN12620001360909.


Subject(s)
Brain Injuries, Traumatic , Female , Humans , Australia , Biomarkers , Brain Injuries, Traumatic/therapy , Cohort Studies , Multicenter Studies as Topic , Observational Studies as Topic , Prospective Studies
8.
Quant Imaging Med Surg ; 12(10): 4924-4941, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36185062

ABSTRACT

Background: Femoroacetabular impingement (FAI) cam morphology is routinely assessed using manual measurements of two-dimensional (2D) alpha angles, which are prone to high rater variability and do not provide direct three-dimensional (3D) data on these osseous formations. We present CamMorph, a fully automated 3D pipeline for segmentation, statistical shape assessment and measurement of cam volume, surface area and height from clinical magnetic resonance (MR) images of the hip in FAI patients. Methods: The novel CamMorph pipeline involves two components: (I) accurate proximal femur segmentation generated by combining the 3D U-Net, to identify both global (region) and local (edge) features in clinical MR images, with focused shape modelling to generate a 3D anatomical model for creating patient-specific proximal femur models; (II) patient-specific anatomical information from 3D focused shape modelling to simulate 'healthy' femoral bone models, with cam-affected region constraints applied to the anterosuperior femoral head-neck region to quantify cam morphology in FAI patients. The CamMorph pipeline, which generates patient-specific data within 5 min, was used to analyse multi-site clinical MR images of the hip to measure and assess cam morphology in male (n=56) and female (n=41) FAI patients. Results: There was excellent agreement between manual and CamMorph segmentations of the proximal femur, as demonstrated by the mean Dice similarity index (DSI; 0.964±0.006), 95% Hausdorff distance (HD; 2.123±0.876 mm) and average surface distance (ASD; 0.539±0.189 mm) values. Compared to female FAI patients, male patients had a significantly larger median cam volume (969.22 vs. 272.97 mm³, U=240.0, P<0.001), mean surface area [657.36 vs. 306.93 mm², t(95)=8.79, P<0.001], median maximum-height (3.66 vs. 2.15 mm, U=407.0, P<0.001) and median average-height (1.70 vs. 0.86 mm, U=380.0, P<0.001).
Conclusions: The fully automated 3D CamMorph pipeline developed in the present study successfully segmented and measured cam morphology from clinical MR images of the hip in male and female patients with differing FAI severity and pathoanatomical characteristics.

9.
Med Image Anal ; 82: 102562, 2022 11.
Article in English | MEDLINE | ID: mdl-36049450

ABSTRACT

Direct automatic segmentation of objects in 3D medical imaging, such as magnetic resonance (MR) imaging, is challenging as it often involves accurately identifying multiple individual structures with complex geometries within a large volume under investigation. Most deep learning approaches address these challenges by enhancing their learning capability through a substantial increase in trainable parameters within their models. An increased model complexity will incur high computational costs and large memory requirements unsuitable for real-time implementation on standard clinical workstations, as clinical imaging systems typically have low-end computer hardware with limited memory and CPU resources only. This paper presents a compact convolutional neural network (CAN3D) designed specifically for clinical workstations, which allows the segmentation of large 3D MR images in real time. The proposed CAN3D has a small memory footprint, reducing the number of model parameters and the computer memory required for state-of-the-art performance, and maintains data integrity by directly processing large full-size 3D image input volumes with no patches required. The proposed architecture significantly reduces computational costs, especially for inference using the CPU. We also develop a novel loss function with extra shape constraints to improve segmentation accuracy for imbalanced classes in 3D MR images. Compared to state-of-the-art approaches (U-Net3D, improved U-Net3D and V-Net), CAN3D reduced the number of parameters by up to two orders of magnitude and achieved much faster inference, up to 5 times faster when predicting with a standard commercial CPU (instead of GPU).
For the open-access OAI-ZIB knee MR dataset, in comparison with manual segmentation, CAN3D achieved Dice coefficients of 0.87 ± 0.02 and 0.85 ± 0.04, with mean surface distance errors of 0.36 ± 0.32 mm and 0.29 ± 0.10 mm, for imbalanced classes such as the femoral and tibial cartilage volumes respectively, when training volume-wise under only 12 GB of video memory. Similarly, CAN3D demonstrated high accuracy and efficiency on a pelvis 3D MR imaging dataset for prostate cancer consisting of 211 examinations with expert manual semantic labels (bladder, body, bone, rectum, prostate), now released publicly for scientific use as part of this work.
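Losses for imbalanced 3D segmentation are often built on a per-class soft Dice, which keeps thin structures like cartilage from being swamped by background; a generic numpy sketch (the paper's actual loss adds shape constraints not reproduced here):

```python
import numpy as np

def soft_dice_loss(probs, onehot, eps=1e-6):
    """Mean per-class soft Dice loss: averaging over classes weights
    a thin cartilage sheet equally with the huge background class."""
    axes = tuple(range(probs.ndim - 1))          # sum over spatial dims
    inter = (probs * onehot).sum(axis=axes)
    denom = probs.sum(axis=axes) + onehot.sum(axis=axes)
    dice_per_class = (2.0 * inter + eps) / (denom + eps)
    return 1.0 - dice_per_class.mean()

# A perfect prediction drives the loss to zero.
labels = np.random.default_rng(0).integers(0, 3, size=(10, 10))
onehot = np.eye(3)[labels]
print(soft_dice_loss(onehot, onehot))  # 0.0
```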


Subject(s)
Image Processing, Computer-Assisted , Imaging, Three-Dimensional , Humans , Male , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Neural Networks, Computer , Magnetic Resonance Imaging/methods , Prostate
10.
Arthrosc Sports Med Rehabil ; 4(4): e1353-e1362, 2022 Aug.
Article in English | MEDLINE | ID: mdl-36033193

ABSTRACT

Purpose: To obtain automated measurements of cam volume, surface area, and height from baseline (preintervention) and 12-month magnetic resonance (MR) images acquired from male and female patients allocated to physiotherapy (PT) or arthroscopic surgery (AS) management for femoroacetabular impingement (FAI) in the Australian FASHIoN trial. Methods: An automated segmentation pipeline (CamMorph) was used to obtain cam morphology data from three-dimensional (3D) MR hip examinations in FAI patients classified with mild, moderate, or major cam volumes. Pairwise comparisons between baseline and 12-month cam volume, surface area, and height data were performed within the PT and AS patient groups using paired t-tests or Wilcoxon signed-rank tests. Results: A total of 43 patients were included, with 15 PT patients (9 males, 6 females) and 28 AS patients (18 males, 10 females) for premanagement and postmanagement cam morphology assessments. Within the PT male and female patient groups, there were no significant differences between baseline and 12-month mean cam volume (male: 1269 vs 1288 mm³, t(16) = -0.39; female: 545 vs 550 mm³, t(10) = -0.78), surface area (male: 1525 vs 1491 mm², t(16) = 0.92; female: 885 vs 925 mm², t(10) = -0.78), maximum height (male: 4.36 vs 4.32 mm, t(16) = 0.34; female: 3.05 vs 2.96 mm, t(10) = 1.05) and average height (male: 2.18 vs 2.18 mm, t(16) = 0.22; female: 1.4 vs 1.43 mm, t(10) = -0.38). In contrast, within the AS male and female patient groups, there were significant differences between baseline and 12-month cam volume (male: 1343 vs 718 mm³, W = 0.0; female: 499 vs 240 mm³, t(18) = 2.89), surface area (male: 1520 vs 1031 mm², t(34) = 6.48; female: 782 vs 483 mm², t(18) = 3.02), maximum height (male: 4.3 vs 3.42 mm, W = 13.5; female: 2.85 vs 2.24 mm, t(18) = 3.04) and average height (male: 2.17 vs 1.52 mm, W = 3.0; female: 1.4 vs 0.94 mm, W = 3.0).
In AS patients, 3D bone models provided good visualization of cam bone mass removal post-ostectomy. Conclusions: Automated measurement of cam morphology from baseline (preintervention) and 12-month MR images demonstrated that the cam volume, surface area, maximum height, and average height were significantly smaller in AS patients following ostectomy, whereas there were no significant differences in these cam measures in PT patients from the Australian FASHIoN study. Level of Evidence: Level II, cohort study.
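The paired t-tests used above compare each patient's baseline measure with their own 12-month measure; a numpy sketch of the statistic with invented cam-volume numbers (not the trial's data):

```python
import numpy as np

def paired_t(before, after):
    """Paired t statistic: mean within-patient difference divided
    by its standard error."""
    d = np.asarray(before, float) - np.asarray(after, float)
    return d.mean() / (d.std(ddof=1) / np.sqrt(d.size))

# Illustrative cam volumes (mm³) before and after ostectomy.
pre  = np.array([1300.0, 1250.0, 1400.0, 1350.0])
post = np.array([ 700.0,  650.0,  760.0,  720.0])
t = paired_t(pre, post)
print(round(t, 2))  # 59.91: a large, consistent reduction
```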

11.
J Am Coll Radiol ; 19(6): 769-778, 2022 06.
Article in English | MEDLINE | ID: mdl-35381190

ABSTRACT

PURPOSE: Only 10% of CT scans unveil positive findings in mild traumatic brain injury, raising concerns about its overuse in this population. A number of clinical rules have been developed to address this issue, but they still suffer limitations in their specificity. Machine learning models have been applied in limited studies to mimic clinical rules; however, further improvement in terms of balanced sensitivity and specificity is still needed. In this work, the authors applied a deep artificial neural network (DANN) model and an instance hardness threshold algorithm to reproduce the Pediatric Emergency Care Applied Research Network (PECARN) clinical rule in a pediatric population collected as a part of the PECARN study between 2004 and 2006. METHODS: The DANN model was applied using 14,983 patients younger than 18 years with Glasgow Coma Scale scores ≥ 14 who had head CT reports. The clinical features of the PECARN rules, PECARN-A (group A, age < 2 years) and PECARN-B (group B, age ≥ 2 years), were used to directly evaluate the model. The average accuracy, sensitivity, precision, and specificity were calculated by comparing the model's prediction outcome to that reported by the PECARN investigators. The instance hardness threshold and DANN model were applied to predict the need for CT in pediatric patients using 5-fold cross-validation. RESULTS: In the first phase, the DANN model resulted in 98.6% sensitivity and 99.7% specificity for predicting the need for CT using the predictors of the two PECARN clinical rules combined to train the model. In the second phase, the DANN model was superior to both the PECARN-A and PECARN-B rules using the predictors for each age group separately to train the model. Compared with the clinical rule, for group A, the model achieved an average sensitivity of 93.7% (versus 100%) and specificity of 97.5% (versus 53.6%); for group B, the average sensitivity of the model was 99.2% versus 98.6%, and the specificity was 98.8% versus 58.2%.
CONCLUSIONS: In this study, a DANN model achieved comparable sensitivity and outstanding specificity for replicating the PECARN clinical rule and predicting the need for CT in pediatric patients after mild traumatic brain injury compared with the original statistically derived clinical rule.
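The sensitivity/specificity trade-off reported above comes from simple confusion-matrix counts against the CT-need outcome; a minimal numpy sketch with illustrative labels:

```python
import numpy as np

def sens_spec(pred, truth):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fn = np.sum(~pred & truth)
    fp = np.sum(pred & ~truth)
    return tp / (tp + fn), tn / (tn + fp)

truth = np.array([1, 1, 1, 0, 0, 0, 0, 0])   # 1 = CT actually needed
pred  = np.array([1, 1, 0, 0, 0, 0, 1, 0])   # model's decision
sens, spec = sens_spec(pred, truth)
print(sens, spec)  # 0.666..., 0.8
```

A rule tuned for near-perfect sensitivity (missing no injuries) typically pays for it in specificity, which is the imbalance the DANN model narrows.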


Subject(s)
Brain Concussion , Craniocerebral Trauma , Emergency Medical Services , Child , Child, Preschool , Craniocerebral Trauma/epidemiology , Decision Support Techniques , Emergency Service, Hospital , Humans , Neural Networks, Computer , Tomography, X-Ray Computed
12.
J Med Imaging Radiat Oncol ; 65(5): 564-577, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34254448

ABSTRACT

Magnetic resonance (MR) imaging visualises soft tissue contrast in exquisite detail without harmful ionising radiation. In this work, we provide a state-of-the-art review on the use of deep learning in MR image reconstruction from different image acquisition types involving compressed sensing techniques, parallel image acquisition and multi-contrast imaging. Publications with deep learning-based image reconstruction for MR imaging were identified from the literature (PubMed and Google Scholar), and a comprehensive description of each of the works is provided. A detailed comparison highlighting the differences, the data used and the performance of each of these works was also made. A discussion of the potential use cases for each of these methods is provided. The sparse image reconstruction methods were found to be the most popular in using deep learning for improved performance, accelerating acquisitions by around 4-8 times. Multi-contrast image reconstruction methods rely on at least one pre-acquired image, but can achieve 16-fold, and even up to 32- to 50-fold, acceleration depending on the set-up. Parallel imaging provides frameworks that can be integrated into many of these methods for additional speed-up potential. The successful use of compressed sensing techniques and multi-contrast imaging with deep learning and parallel acquisition methods could yield significant MR acquisition speed-ups within clinical routines in the near future.


Subject(s)
Deep Learning , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging
13.
Comput Biol Med ; 135: 104614, 2021 08.
Article in English | MEDLINE | ID: mdl-34229143

ABSTRACT

Head computed tomography (CT) is the gold standard in emergency departments (EDs) to evaluate mild traumatic brain injury (mTBI) patients, especially for paediatrics. Data-driven models for successfully classifying head CT scans that have mTBI would be valuable in terms of timeliness and cost-effectiveness for TBI diagnosis. This study applied two different machine learning (ML) models to diagnose mTBI in a paediatric population collected as part of the Pediatric Emergency Care Applied Research Network (PECARN) study between 2004 and 2006. The models were developed using 15,271 patients under the age of 18 years who had mTBI and a head CT report. In the conventional model, a random forest (RF) ranked the features to reduce data dimensionality, and the top-ranked features were used to train a shallow artificial neural network (ANN) model. In the second model, a deep ANN was applied to classify positive and negative mTBI patients using the entirety of the features available. The dataset was divided into two subsets: 80% for training and 20% for testing, using five-fold cross-validation. Accuracy, sensitivity, precision, and specificity were calculated by comparing the model's prediction outcome to the actual diagnosis for each patient. RF ranked ten clinical demographic features and twelve CT findings; the hybrid RF-ANN model achieved an average specificity of 99.96%, sensitivity of 95.98%, precision of 99.25%, and accuracy of 99.74% in distinguishing positive from negative mTBI subjects. The deep ANN proved its ability to carry out the task efficiently with an average specificity of 99.9%, sensitivity of 99.2%, precision of 99.9%, and accuracy of 99.9%. The performance of the two proposed models demonstrates the feasibility of using ANNs to diagnose mTBI in a paediatric population.
This is the first study to investigate a deep ANN in a paediatric cohort with mTBI using clinical and non-imaging data, and to diagnose mTBI with balanced sensitivity and specificity using shallow and deep ML models. This method, if validated, would have the potential to reduce the burden of TBI evaluation in EDs and aid clinicians in the decision-making process.
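The 80/20 evaluation with five-fold cross-validation mentioned above amounts to rotating a held-out fifth of the data; a minimal numpy sketch of the index logic (not the authors' pipeline):

```python
import numpy as np

def five_fold_indices(n, rng):
    """Yield (train_idx, test_idx) pairs: each fold holds out 20%
    of the samples for testing and trains on the remaining 80%."""
    folds = np.array_split(rng.permutation(n), 5)
    for i in range(5):
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        yield train, folds[i]

splits = list(five_fold_indices(100, np.random.default_rng(0)))
print(len(splits), len(splits[0][0]), len(splits[0][1]))  # 5 80 20
```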


Subject(s)
Brain Concussion , Pediatrics , Adolescent , Child , Humans , Machine Learning , Neural Networks, Computer , Sensitivity and Specificity
14.
Magn Reson Imaging ; 82: 42-54, 2021 10.
Article in English | MEDLINE | ID: mdl-34147595

ABSTRACT

BACKGROUND: Magnetic resonance (MR) T2 and T2* mapping sequences allow in vivo quantification of biochemical characteristics within joint cartilage of relevance to clinical assessment of conditions such as hip osteoarthritis (OA). PURPOSE: To evaluate the immediate reliability of automated T2 and T2* mapping analysis from MR examinations of hip joint cartilage, using a bone and cartilage segmentation pipeline based around focused shape modelling. STUDY TYPE: Technical validation. SUBJECTS: 17 asymptomatic volunteers (M:F 7:10, aged 22-47 years, mass 50-90 kg, height 163-189 cm) underwent unilateral hip joint MR examinations. The immediate reliability of automated cartilage T2 and T2* analysis was evaluated in 9 subjects (M:F 4:5) for each sequence. FIELD STRENGTH/SEQUENCE: A 3 T MR system with a body matrix flex-coil was used to acquire images with the following sequences: a T2-weighted 3D true Fast Imaging with Steady-State Precession sequence (water excitation; repetition time (TR): 10.18 ms; echo time (TE): 4.3 ms; voxel size (VS): 0.625 × 0.625 × 0.65 mm; field of view (FOV): 160 mm; flip angle (FA): 30 degrees; pixel bandwidth (PB): 140 Hz/pixel); a multi-echo spin echo (MESE) T2 mapping sequence (TR/TE: 2080/18-90 ms (5 echoes); VS: 4 × 0.78 × 0.78 mm; FOV: 200 mm; FA: 180 degrees; PB: 230 Hz/pixel); and a MESE T2* mapping sequence (TR/TE: 873/3.82-19.1 ms (5 echoes); VS: 3 × 0.625 × 0.625 mm; FOV: 160 mm; FA: 25 degrees; PB: 250 Hz/pixel). ASSESSMENT: Automated cartilage segmentation and quantitative analysis provided T2 and T2* data from test-retest MR examinations to assess immediate reliability. STATISTICAL TESTS: Coefficients of variation (CV) and intraclass correlations (ICC(2,1)) were used to analyse automated T2 and T2* mapping reliability, focusing on the clinically important superior cartilage regions of the hip joint.
RESULTS: Comparisons between test-retest T2 (and T2*) data revealed mean CVs of 3.385% (1.25%), mean ICC(2,1) values of 0.871 (0.984) and median mean differences of -1.139 ms (+0.195 ms). CONCLUSION: The T2 and T2* times from automated analyses of hip cartilage from test-retest MR examinations had high (T2) and excellent (T2*) immediate reliability.


Subject(s)
Cartilage, Articular , Magnetic Resonance Imaging , Cartilage, Articular/diagnostic imaging , Hip Joint/diagnostic imaging , Humans , Magnetic Resonance Spectroscopy , Reproducibility of Results
15.
Magn Reson Imaging ; 77: 159-168, 2021 04.
Article in English | MEDLINE | ID: mdl-33400936

ABSTRACT

Multi-contrast (MC) Magnetic Resonance Imaging (MRI) of the same patient usually requires long scanning times, despite the images sharing redundant information. In this work, we propose a new iterative network that utilizes the sharable information among MC images for MRI acceleration. The proposed network has reinforced data fidelity control and anatomy guidance through an iterative gradient descent optimization procedure, leading to reduced uncertainties and improved reconstruction results. Through a convolutional network, the new method incorporates a learnable regularization unit that is capable of extracting, fusing, and mapping shareable information among different contrasts. Specifically, a dilated inception block is proposed to promote multi-scale feature extraction and increase the receptive field diversity for contextual information incorporation. Lastly, an optimal MC information feeding protocol is built through the design of a complementary feature extractor block. Comprehensive experiments demonstrated the superiority of the proposed network, both qualitatively and quantitatively.
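Data-fidelity control via gradient descent is, at its core, the classic step x ← x − α·Aᴴ(Ax − y) on a k-space consistency term; a single-contrast numpy sketch with an FFT sampling operator (the network described above wraps steps like this with learned regularization):

```python
import numpy as np

def data_fidelity_step(x, y, mask, alpha):
    """One gradient step on ||M F x - y||^2, with F the 2D FFT and
    M a binary k-space sampling mask."""
    residual = np.fft.fft2(x) * mask - y
    return x - alpha * np.fft.ifft2(residual * mask)

image = np.outer(np.hanning(32), np.hanning(32))
mask = np.random.default_rng(0).random((32, 32)) < 0.5
y = np.fft.fft2(image) * mask                  # measured k-space
x = np.zeros_like(image, dtype=complex)
err0 = np.linalg.norm(y)                       # residual at x = 0
for _ in range(10):
    x = data_fidelity_step(x, y, mask, alpha=0.9)
err = np.linalg.norm(np.fft.fft2(x) * mask - y)
print(err < err0)  # True: each step shrinks the k-space residual
```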


Subject(s)
Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging , Neural Networks, Computer , Contrast Media , Humans
16.
J Biomech ; 115: 110163, 2021 01 22.
Article in English | MEDLINE | ID: mdl-33338974

ABSTRACT

Finite element analysis (FEA) provides a powerful approach for estimating the in-vivo loading characteristics of the hip joint during various locomotory and functional activities. However, time-consuming procedures, such as the generation of high-quality FE meshes and the setup of FE simulations, typically make the method impractical for rapid applications that could be used in clinical routine. Alternatively, discrete element analysis (DEA) has been developed to quantify mechanical conditions of the hip joint in a fraction of the time required by FEA. Although DEA has proven effective in the estimation of contact stresses and areas in various complex applications, its ability to evaluate contact mechanics of the hip joint during gait-cycle loading using data from several individuals has not yet been well characterised. The objective of this work was to compare DEA modelling against well-established FEA for analysing contact mechanics of the hip joint during walking gait. Subject-specific models were generated from magnetic resonance images of the hip joints in five asymptomatic subjects. The DEA and FEA models were then simulated for 13 loading time-points extracted from a full gait cycle. Computationally, DEA was substantially more efficient than FEA (simulation times of seconds vs. hours). The DEA and FEA methods had similar predictions for contact pressure distribution in the hip joint during normal walking. In all 13 simulated loading time-points across the five subjects, the maximum difference in average contact pressures between DEA and FEA was within ±0.06 MPa. Furthermore, the difference in contact area ratio computed using DEA and FEA was less than ±6%.
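DEA owes its speed to treating the cartilage layer as a bed of independent compressive springs, so contact pressure follows directly from local surface penetration with no mesh solve; a toy linear-spring sketch (the stiffness and penetration values are invented for illustration, and real DEA derives stiffness from cartilage material properties):

```python
import numpy as np

def dea_contact_pressure(penetration, k):
    """Linear-spring DEA: pressure = k * penetration where the
    surfaces overlap, zero where there is no contact."""
    return k * np.clip(penetration, 0.0, None)

# Penetration (mm) at a toy grid of cartilage surface points.
pen = np.array([[-0.10, 0.00, 0.05],
                [ 0.10, 0.20, 0.00]])
p = dea_contact_pressure(pen, k=10.0)        # k in MPa/mm, illustrative
contact_area_ratio = (p > 0).mean()
print(p.max(), contact_area_ratio)  # 2.0 0.5
```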


Subject(s)
Hip Joint , Walking , Biomechanical Phenomena , Computer Simulation , Finite Element Analysis , Gait , Humans
17.
Med Phys ; 47(10): 4939-4948, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32745260

ABSTRACT

PURPOSE: High-resolution three-dimensional (3D) magnetic resonance (MR) images are well suited for automated cartilage segmentation in the human knee joint. However, volumetric scans such as 3D Double-Echo Steady-State (DESS) images are not routinely acquired in clinical practice, which limits opportunities for reliable cartilage segmentation using (fully) automated algorithms. In this work, a method for generating synthetic 3D MR (syn3D-DESS) images with better contrast and higher spatial resolution from routine, low-resolution, two-dimensional (2D) Turbo-Spin Echo (TSE) clinical knee scans is proposed. METHODS: A UNet convolutional neural network is employed for synthesizing enhanced artificial MR images suitable for automated knee cartilage segmentation. Training of the model was performed on a large, publicly available dataset from the OAI, consisting of 578 MR examinations of knee joints from 102 healthy individuals and patients with knee osteoarthritis. RESULTS: The generated synthetic images have higher spatial resolution and better tissue contrast than the original 2D TSE images, which allows high-quality automated 3D segmentation of the cartilage. The proposed approach was evaluated on a separate set of MR images from 88 subjects with manual cartilage segmentations. It provided a significant improvement in automated segmentation of knee cartilage when using the syn3D-DESS images compared to the original 2D TSE images. CONCLUSION: The proposed method can successfully synthesize 3D DESS images from 2D TSE images, providing images suitable for automated cartilage segmentation.
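Segmentation quality in studies like this is typically scored against the manual reference with the Dice similarity coefficient. A minimal sketch of that metric on toy binary masks (the masks here are invented for illustration):

```python
import numpy as np

def dice(a, b, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + eps)

# Toy automated vs. manual masks on a 2x3 grid.
auto = np.array([[0, 1, 1], [0, 1, 0]])
manual = np.array([[0, 1, 0], [0, 1, 1]])
score = dice(auto, manual)
```

In practice the same formula is applied per cartilage compartment over full 3D volumes rather than a toy grid.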


Subject(s)
Cartilage, Articular , Osteoarthritis, Knee , Cartilage, Articular/diagnostic imaging , Humans , Imaging, Three-Dimensional , Knee/diagnostic imaging , Knee Joint/diagnostic imaging , Magnetic Resonance Imaging , Osteoarthritis, Knee/diagnostic imaging
18.
Med Phys ; 47(9): 4303-4315, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32648965

ABSTRACT

PURPOSE: Combining high-resolution magnetic resonance imaging (MRI) with a linear accelerator (Linac) as a single MRI-Linac system provides the capability to monitor intra-fractional motion and anatomical changes during radiotherapy, which facilitates more accurate delivery of radiation dose to the tumor and less exposure of healthy tissue. The gradient nonlinearity (GNL)-induced distortions in MRI, however, hinder the implementation of the MRI-Linac system in image-guided radiotherapy, where highly accurate geometry and anatomy of the target tumor are indispensable. METHODS: To correct the geometric distortions in MR images, in particular for the 1 Tesla (T) MRI-Linac system, a deep fully connected neural network was proposed to automatically learn the intricate relationship between the undistorted (theoretical) and distorted (real) space. A dataset of spatial samples acquired by phantom measurement, covering regions both inside and outside the working diameter of spherical volume (DSV), was utilized for training the neural network, which offers the ability to describe subtle deviations of the GNL field within the entire region of interest (ROI). RESULTS: The performance of the proposed method was evaluated on MR images of a three-dimensional (3D) phantom and of the pelvic region of an adult volunteer, both scanned in the 1T MRI-Linac system. The experimental results showed that the severe geometric distortions within the entire ROI were successfully corrected, with an error smaller than the pixel size. The presented network is also highly efficient, achieving a significant improvement in computational efficiency compared to existing methods. CONCLUSIONS: This study demonstrated the feasibility of the presented deep neural network for characterizing the GNL field deviations in the 1T MRI-Linac system, which shows promise in facilitating routine implementation of the MRI-Linac system in real-time MRI-guided radiotherapy.
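The core idea is learning an inverse mapping from measured (distorted) to true coordinates using phantom control points. The paper trains a deep fully connected network; in this sketch a least-squares cubic polynomial stands in for that learned mapping on a synthetic 1D distortion field, so all values below are illustrative assumptions.

```python
import numpy as np

# Synthetic 1D GNL warp: measured position = true position + cubic term.
true_pos = np.linspace(-0.25, 0.25, 50)     # metres from isocentre
distorted = true_pos + 0.2 * true_pos**3    # hypothetical distortion

# Fit the inverse (distorted -> true) mapping from control points,
# playing the role of the trained network.
coeffs = np.polyfit(distorted, true_pos, deg=3)

def correct(measured):
    """Map measured (distorted) positions back to true space."""
    return np.polyval(coeffs, measured)

residual = np.max(np.abs(correct(distorted) - true_pos))
```

A real GNL field is 3D and far less smooth near the DSV boundary, which is what motivates the higher-capacity neural network in the paper.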


Subject(s)
Magnetic Resonance Imaging , Radiotherapy, Image-Guided , Humans , Neural Networks, Computer , Particle Accelerators , Phantoms, Imaging
19.
Biomed Phys Eng Express ; 6(6)2020 11 04.
Article in English | MEDLINE | ID: mdl-35045404

ABSTRACT

Previous studies on computer aided detection/diagnosis (CAD) in 4D breast magnetic resonance imaging (MRI) usually regard lesion detection, segmentation and characterization as separate tasks, and typically require users to manually select 2D MRI slices or regions of interest as the input. In this work, we present a breast MRI CAD system that can handle 4D multimodal breast MRI data and integrate lesion detection, segmentation and characterization with no user intervention. The proposed CAD system consists of three major stages: region candidate generation, feature extraction and region candidate classification. Breast lesions are first extracted as region candidates using the novel 3D multiscale morphological sifting (MMS). The 3D MMS, which uses linear structuring elements to extract lesion-like patterns, can segment lesions from breast images accurately and efficiently. Analytical features are then extracted from all available 4D multimodal breast MRI sequences, including T1-, T2-weighted and DCE sequences, to represent the signal intensity, texture, morphological and enhancement kinetic characteristics of the region candidates. The region candidates are finally classified as lesion or normal tissue by random under-sampling boosting (RUSBoost), and as malignant or benign lesions by a random forest. Evaluated on a breast MRI dataset containing a total of 117 cases with 141 biopsy-proven lesions (95 malignant and 46 benign), the proposed system achieves a true positive rate (TPR) of 0.90 at 3.19 false positives per patient (FPP) for lesion detection, and a TPR of 0.91 at an FPP of 2.95 for identifying malignant lesions, without any user intervention. The average Dice similarity index (DSI) is 0.72 ± 0.15 for lesion segmentation. Compared with previously proposed lesion detection, detection-segmentation and detection-characterization systems evaluated on the same breast MRI dataset, the proposed CAD system achieves favourable performance in breast lesion detection and characterization.
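The detection operating point reported above (TPR at false positives per patient) is computed from simple counts. A sketch of that bookkeeping, using hypothetical detection and false-positive counts chosen only to be consistent with the reported 0.90 TPR at 3.19 FPP over 141 lesions and 117 cases:

```python
def detection_metrics(n_true_lesions, n_detected, n_false_pos, n_patients):
    """Free-response style summary: true positive rate over all lesions
    and false positives per patient (FPP)."""
    tpr = n_detected / n_true_lesions
    fpp = n_false_pos / n_patients
    return tpr, fpp

# Hypothetical counts consistent with the abstract's operating point:
# 127 of 141 lesions detected, 373 false positives over 117 patients.
tpr, fpp = detection_metrics(141, 127, 373, 117)
```

Sweeping a detection threshold and recomputing these two numbers at each setting traces out the free-response ROC curve used to compare such systems.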


Subject(s)
Breast , Magnetic Resonance Imaging , Breast/diagnostic imaging , Diagnosis, Computer-Assisted/methods , Humans , Magnetic Resonance Imaging/methods , Multimodal Imaging
20.
Comput Methods Programs Biomed ; 164: 193-205, 2018 Oct.
Article in English | MEDLINE | ID: mdl-30195427

ABSTRACT

Biomedical imaging analysis typically comprises a variety of complex tasks requiring sophisticated algorithms and the visualisation of high-dimensional data. The successful integration and deployment of the enabling software to clinical (research) partners, for rigorous evaluation and testing, is a crucial step in facilitating the adoption of research innovations within medical settings. In this paper, we introduce the Simple Medical Imaging Library Interface (SMILI), an object-oriented open-source framework with a compact suite of objects geared towards rapid, cross-platform biomedical imaging application development and deployment. SMILI supports the development of both command-line (shell and Python scripting) and graphical applications utilising the same set of processing algorithms. It provides a substantial subset of the features of more complex packages, yet it is small enough to ship with clinical applications with limited overhead and has a license suitable for commercial use. After describing where SMILI fits within the existing biomedical imaging software ecosystem, by comparing it to other state-of-the-art offerings, we demonstrate its capabilities in creating a clinical application for the manual measurement of cam-type lesions of the femoral head-neck region for the investigation of femoro-acetabular impingement (FAI) from three-dimensional (3D) magnetic resonance (MR) images of the hip. This application proved convenient for radiological analyses and yielded high intra-observer (ICC=0.97) and inter-observer (ICC=0.95) reliabilities for measurement of α-angles of the femoral head-neck region. We believe that SMILI is particularly well suited for prototyping biomedical imaging applications requiring user interaction and/or visualisation of 3D mesh, scalar, vector or tensor data.
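The α-angle measured by the application above is a geometric quantity: the angle at the femoral head centre between the neck axis and the point where the head first departs from sphericity. A minimal sketch of that computation from landmark coordinates; the function name and the 2D landmark positions are illustrative, not SMILI's API:

```python
import numpy as np

def alpha_angle(head_centre, neck_point, asphericity_point):
    """Alpha-angle: angle at the femoral head centre between the neck
    axis and the point where the head departs from sphericity."""
    v1 = np.asarray(neck_point) - np.asarray(head_centre)
    v2 = np.asarray(asphericity_point) - np.asarray(head_centre)
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Hypothetical landmarks on a radial slice through the head-neck axis.
angle = alpha_angle([0, 0], [2, 0], [1, 1])
```

In the clinical tool the landmarks are picked interactively on radial reformats of the 3D MR volume, with the ICC values above quantifying how reproducible that picking is.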


Subject(s)
Algorithms , Image Processing, Computer-Assisted/methods , Computer Graphics , Hip Joint/diagnostic imaging , Humans , Image Interpretation, Computer-Assisted/methods , Image Interpretation, Computer-Assisted/statistics & numerical data , Image Processing, Computer-Assisted/statistics & numerical data , Imaging, Three-Dimensional/methods , Imaging, Three-Dimensional/statistics & numerical data , Libraries, Digital , Magnetic Resonance Imaging/methods , Magnetic Resonance Imaging/statistics & numerical data , Software , User-Computer Interface