1.
Front Oncol ; 14: 1423774, 2024.
Article in English | MEDLINE | ID: mdl-38966060

ABSTRACT

Purpose: Addressing the challenges of unclear tumor boundaries and the confusion between cysts and tumors in liver tumor segmentation, this study aims to develop an auto-segmentation method that combines a Gaussian filter with the nnU-Net architecture to effectively distinguish between tumors and cysts, enhancing the accuracy of liver tumor auto-segmentation. Methods: First, 130 cases from the liver tumor segmentation challenge 2017 (LiTS2017) were used for training and validating the nnU-Net-based auto-segmentation model. Then, 14 cases from the 3D-IRCADb dataset and 25 liver cancer cases retrospectively collected at our hospital were used for testing. The dice similarity coefficient (DSC) was used to evaluate the accuracy of the auto-segmentation model by comparison with manual contours. Results: The nnU-Net achieved an average DSC of 0.86 on the validation set (20 LiTS cases) and 0.82 on the public testing set (14 3D-IRCADb cases). On the clinical testing set, the standalone nnU-Net model achieved an average DSC of 0.75, which increased to 0.81 after post-processing with the Gaussian filter (P<0.05), demonstrating its effectiveness in mitigating the influence of liver cysts on liver tumor segmentation. Conclusion: The experiments show that the Gaussian filter improves the accuracy of liver tumor segmentation in the clinic.
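The post-processing step this abstract describes, smoothing a segmentation map with a Gaussian filter before re-thresholding, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the function name, sigma, and threshold are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_mask(prob_map, sigma=1.5, threshold=0.5):
    """Apply a Gaussian filter to a soft segmentation map, then re-threshold.

    Smoothing suppresses small, high-frequency responses (e.g. cyst-like
    speckle) while preserving larger, coherent tumor regions.
    """
    smoothed = gaussian_filter(prob_map.astype(float), sigma=sigma)
    return (smoothed >= threshold).astype(np.uint8)
```

With these illustrative parameters, an isolated single-pixel response is filtered out while an 8x8 tumor-sized blob survives the re-threshold.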

2.
Data Brief ; 55: 110569, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38966660

ABSTRACT

The dataset contains RGB, depth, and segmentation images of the scenes, along with camera-pose information that can be used to build a full 3D model of each scene and to develop methods that reconstruct objects from a single RGB-D camera view. Data were collected in a custom simulator that loads random graspable objects and random tables from the ShapeNet dataset. The graspable object is placed above the table in a random position, and the scene is then simulated with the PhysX engine to ensure that it is physically plausible. The simulator captures an image of the scene from a random pose and then takes a second image from the camera pose on the opposite side of the scene. A second subset was created using a Kinect Azure and a set of real objects placed on an ArUco board that was used to estimate the camera pose.

3.
Med Image Anal ; 97: 103253, 2024 Jun 27.
Article in English | MEDLINE | ID: mdl-38968907

ABSTRACT

Airway-related quantitative imaging biomarkers are crucial for examination, diagnosis, and prognosis in pulmonary diseases. However, the manual delineation of airway structures remains prohibitively time-consuming. While significant efforts have been made towards enhancing automatic airway modelling, current publicly available datasets predominantly concentrate on lung diseases with moderate morphological variations. The intricate honeycombing patterns present in the lung tissues of fibrotic lung disease patients exacerbate the challenges, often leading to various prediction errors. To address this issue, the 'Airway-Informed Quantitative CT Imaging Biomarker for Fibrotic Lung Disease 2023' (AIIB23) competition was organized in conjunction with the official 2023 International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI). The airway structures were meticulously annotated by three experienced radiologists. Competitors were encouraged to develop automatic airway segmentation models with high robustness and generalization ability, and then to explore the quantitative imaging biomarker (QIB) most correlated with mortality. A training set of 120 high-resolution computerised tomography (HRCT) scans was publicly released with expert annotations and mortality status. The online validation set incorporated 52 HRCT scans from patients with fibrotic lung disease, and the offline test set included 140 cases from fibrosis and COVID-19 patients. The results showed that the capacity to extract airway trees from patients with fibrotic lung disease could be enhanced by introducing a voxel-wise weighted general union loss and a continuity loss. In addition to competitive image biomarkers for mortality prediction, a strong airway-derived biomarker (hazard ratio > 1.5, p < 0.0001) was revealed for survival prognostication compared with existing clinical measurements, clinician assessment and AI-based biomarkers.

4.
Cancer Imaging ; 24(1): 83, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38956718

ABSTRACT

BACKGROUND: 3D reconstruction of Wilms' tumor provides several advantages but is not systematically performed because manual segmentation is extremely time-consuming. The objective of our study was to develop an artificial intelligence tool to automate the segmentation of tumors and kidneys in children. METHODS: Manual segmentation was carried out by two experts on 14 CT scans. Segmentation of the Wilms' tumor and the neoplastic kidney was then performed automatically using the CNN U-Net and the same CNN U-Net trained according to the OV2ASSION method. The time saving for the expert was estimated as a function of the number of sections segmented automatically. RESULTS: When segmentations were performed manually by two experts, the inter-individual variability yielded a Dice index of 0.95 for the tumor and 0.87 for the kidney. Fully automatic segmentation with the CNN U-Net yielded a poor Dice index of 0.69 for the Wilms' tumor and 0.27 for the kidney. With the OV2ASSION method, the Dice index varied with the number of manually segmented sections: for the Wilms' tumor and the neoplastic kidney respectively, it ranged from 0.97 and 0.94 for a gap of 1 (2 out of 3 sections segmented manually) to 0.94 and 0.86 for a gap of 10 (1 out of 6 sections segmented manually). CONCLUSION: Fully automated segmentation remains a challenge in the field of medical image processing. Although it is possible to use existing neural networks such as U-Net, we found that the results were not satisfactory for segmentation of neoplastic kidneys or Wilms' tumors in children. We developed an innovative CNN U-Net training method that makes it possible to segment the kidney and its tumor with the same precision as an expert while reducing the expert's intervention time by 80%.


Subject(s)
Artificial Intelligence , Kidney Neoplasms , Tomography, X-Ray Computed , Wilms Tumor , Wilms Tumor/diagnostic imaging , Wilms Tumor/pathology , Humans , Kidney Neoplasms/diagnostic imaging , Kidney Neoplasms/pathology , Tomography, X-Ray Computed/methods , Child , Imaging, Three-Dimensional/methods , Child, Preschool , Neural Networks, Computer , Male , Female , Automation
5.
Article in English | MEDLINE | ID: mdl-38957182

ABSTRACT

Organ segmentation is a fundamental requirement in medical image analysis, and many segmentation methods have been proposed over the past six decades. A unique feature of medical images is the anatomical information hidden within the image itself. To bring natural intelligence (NI), in the form of anatomical information accumulated over centuries, effectively into deep learning (DL) AI methods, we recently introduced the idea of hybrid intelligence (HI) that combines NI and AI, and a system based on HI to perform medical image segmentation. This HI system has shown remarkable robustness to image artifacts, pathology, deformations, etc., in segmenting organs in the Thorax body region in a multicenter clinical study. The HI system utilizes an anatomy modeling strategy to encode NI and to identify a rough container region in the shape of each object via a non-DL-based approach, so that DL training and execution are applied only to the fuzzy container region. In this paper, we introduce several advances in modeling the NI component that make it substantially more efficient computationally while remaining well integrated with the DL portion (AI component) of the system. We demonstrate a 9- to 40-fold computational improvement in the auto-segmentation task for radiation therapy (RT) planning in clinical studies from 4 different RT centers, while retaining the state-of-the-art accuracy of the previous system in segmenting 11 objects in the Thorax body region.

6.
Article in English | MEDLINE | ID: mdl-38957573

ABSTRACT

Medical image auto-segmentation techniques are basic and critical for numerous image-based analysis applications that play an important role in developing advanced and personalized medicine. Compared with manual segmentation, auto-segmentation is expected to contribute to a more efficient clinical routine and workflow by requiring fewer human interventions or revisions of auto-segmentations. However, current auto-segmentation methods are usually developed with the help of popular segmentation metrics that do not directly consider human correction behavior. The Dice Coefficient (DC) focuses on the correctly segmented areas, while the Hausdorff Distance (HD) measures only the maximal distance between the auto-segmentation boundary and the ground truth boundary. Boundary-length-based metrics such as surface DC (surDC) and Added Path Length (APL) try to distinguish correctly predicted boundary pixels from incorrect ones. It is uncertain whether these metrics can reliably indicate the manual mending effort required in segmentation applications. Therefore, in this paper, the potential of the above four metrics, as well as a novel metric called the Mendability Index (MI), to predict human correction effort is studied with linear and support vector regression models. 265 3D computed tomography (CT) samples for 3 objects of interest from 3 institutions, with corresponding auto-segmentations and ground truth segmentations, are utilized to train and test the prediction models. Five-fold cross-validation experiments demonstrate that meaningful human effort prediction can be achieved using segmentation metrics, with prediction errors varying across objects. The improved variant of MI, called MIhd, generally shows the best prediction performance, suggesting its potential to reliably indicate the clinical value of auto-segmentations.
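Two of the metrics named above have simple closed forms. A minimal NumPy sketch of the Dice Coefficient and a symmetric Hausdorff Distance for binary masks, assuming Euclidean distance between foreground pixel coordinates (a brute-force illustration, not the paper's code):

```python
import numpy as np

def dice_coefficient(pred, gt, eps=1e-8):
    """Overlap-based metric: 2 * |A intersect B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

def hausdorff_distance(pred, gt):
    """Boundary-based metric: the largest distance from a foreground
    pixel in one mask to the nearest foreground pixel in the other."""
    p = np.argwhere(pred).astype(float)
    g = np.argwhere(gt).astype(float)
    d = np.linalg.norm(p[:, None, :] - g[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

The pairwise-distance matrix makes this O(|A|*|B|) in memory; production implementations typically use distance transforms instead.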

7.
Article in English | MEDLINE | ID: mdl-38957740

ABSTRACT

Organ segmentation is a crucial task in various medical imaging applications. Many deep learning models have been developed for it, but they are slow and require substantial computational resources. To address this problem, attention mechanisms are used to locate important objects of interest within medical images, allowing the model to segment them accurately even in the presence of noise or artifacts. By paying attention to specific anatomical regions, the model becomes better at segmentation. Medical images carry unique features in the form of anatomical information, which distinguishes them from natural images. Unfortunately, most deep learning methods either ignore this information or do not use it effectively and explicitly. Combining natural intelligence with artificial intelligence, known as hybrid intelligence, has shown promising results in medical image segmentation, making models more robust and able to perform well in challenging situations. In this paper, we propose several methods and models to find attention regions in medical images for deep learning-based segmentation via non-deep-learning methods. We developed these models and trained them using hybrid intelligence concepts. To evaluate their performance, we tested the models on held-out test data and analyzed metrics including the false-negative quotient and the false-positive quotient. Our findings demonstrate that object shape and layout variations can be explicitly learned to create computational models suitable for each anatomic object. This work opens new possibilities for advancements in medical image segmentation and analysis.

8.
Data Brief ; 54: 110253, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38962191

ABSTRACT

The claustrum has a unique thin sheet-like structure that makes it hard to identify in typical anatomical MRI scans. Attempts have been made to identify the claustrum in anatomical images either with automatic segmentation techniques or using atlas-based approaches. However, the resulting labels fail to include the ventral portion of the claustrum, which consists of fragmented grey matter referred to as "puddles". The current dataset is a high-resolution label of the whole claustrum defined manually on an ultra-high-resolution postmortem MRI image of one individual. Manual labelling was performed by four independent research trainees: two labelled the left claustrum and two labelled the right claustrum. For each hemisphere, we created a union of the two labels and assessed label correspondence using Dice coefficients. We provide size measurements of the labels in MNI space by calculating the oriented bounding-box size. These data are the first manual claustrum segmentation labels to include both the dorsal and ventral claustrum regions at such a high resolution in standard space. The label can be used to approximate the claustrum location in typical in vivo MRI scans of healthy individuals.

9.
Front Oncol ; 14: 1396887, 2024.
Article in English | MEDLINE | ID: mdl-38962265

ABSTRACT

Pathological images are considered the gold standard for clinical diagnosis and cancer grading. Automatic segmentation of pathological images is a fundamental and crucial step in constructing powerful computer-aided diagnostic systems. Medical microscopic hyperspectral pathological images provide additional spectral information that further distinguishes the different chemical components of biological tissues, offering new insights for accurate segmentation of pathological images. However, hyperspectral pathological images have higher resolution and cover a larger area, so their annotation requires more time and clinical experience. The lack of precise annotations limits progress in pathological image segmentation research. In this paper, we propose a novel semi-supervised segmentation method for microscopic hyperspectral pathological images based on multi-consistency learning (MCL-Net), which combines consistency regularization with pseudo-labeling techniques. The MCL-Net architecture employs a shared encoder and multiple independent decoders. We introduce a Soft-Hard pseudo-label generation strategy in MCL-Net to generate pseudo-labels that are closer to the real labels of pathological images. Furthermore, we propose a multi-consistency learning strategy that treats the pseudo-labels generated by the Soft-Hard process as real labels and promotes consistency between the predictions of different decoders, enabling the model to learn more sample features. Extensive experiments demonstrate the effectiveness of the proposed method, providing new insights for the segmentation of microscopic hyperspectral tissue pathology images.
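The abstract does not spell out the Soft-Hard strategy. One plausible reading, binarizing only the pixels the model is confident about while keeping soft probabilities elsewhere, can be sketched as follows; both the interpretation and the thresholds are assumptions, not taken from the paper:

```python
import numpy as np

def soft_hard_pseudo_label(prob, hi=0.9, lo=0.1):
    """Hypothetical 'Soft-Hard' pseudo-label: pixels with confident
    predictions become hard 0/1 labels; uncertain pixels keep their
    soft probability so they contribute a weaker training signal."""
    label = prob.astype(float).copy()
    label[prob >= hi] = 1.0   # confidently foreground -> hard 1
    label[prob <= lo] = 0.0   # confidently background -> hard 0
    return label              # everything else stays soft
```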

10.
Front Med (Lausanne) ; 11: 1372091, 2024.
Article in English | MEDLINE | ID: mdl-38962734

ABSTRACT

Introduction: Microaneurysms serve as early signs of diabetic retinopathy, and their accurate detection is critical for effective treatment. Because of their low contrast and similarity to retinal vessels, distinguishing microaneurysms from background noise and retinal vessels in fluorescein fundus angiography (FFA) images poses a significant challenge. Methods: We present a model for the automatic detection of microaneurysms. FFA images were pre-processed using Top-hat transformation, gray-stretching, and Gaussian filter techniques to eliminate noise. Candidate microaneurysms were coarsely segmented using an improved matched-filter algorithm, and real microaneurysms were then segmented using a morphological strategy. To evaluate segmentation performance, our proposed model was compared against other models, including Otsu's method, Region Growing, Global Threshold, Matched Filter, Fuzzy c-means, and K-means, using both self-constructed and publicly available datasets. Performance metrics such as accuracy, sensitivity, specificity, positive predictive value, and intersection-over-union were calculated. Results: The proposed model outperforms the other models on accuracy, sensitivity, specificity, positive predictive value, and intersection-over-union, and its segmentation results closely align with the benchmark standard. It demonstrates significant advantages for microaneurysm segmentation in FFA images. Conclusion: The proposed model offers a robust and accurate approach to microaneurysm detection, outperforming existing methods and showing potential for clinical application in the diagnosis and effective treatment of diabetic retinopathy.
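The pre-processing chain named in the Methods (Top-hat transformation, gray-stretching, Gaussian filtering) can be sketched with SciPy. The structuring-element size and sigma below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np
from scipy.ndimage import white_tophat, gaussian_filter

def preprocess_ffa(img, tophat_size=9, sigma=1.0):
    """Enhance small bright structures (candidate microaneurysms),
    stretch the gray range to [0, 1], then denoise."""
    img = img.astype(float)
    enhanced = white_tophat(img, size=tophat_size)    # remove smooth background
    lo, hi = enhanced.min(), enhanced.max()
    stretched = (enhanced - lo) / (hi - lo + 1e-8)    # linear gray-stretching
    return gaussian_filter(stretched, sigma=sigma)    # Gaussian denoising
```

On a flat background with one small bright dot, the top-hat step suppresses the background while the dot survives as the brightest point of the output.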

11.
Clin Imaging ; 113: 110231, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38964173

ABSTRACT

PURPOSE: Qualitative findings in Crohn's disease (CD) can be challenging to reliably report and quantify. We evaluated machine learning methodologies to both standardize the detection of common qualitative findings of ileal CD and determine finding spatial localization on CT enterography (CTE). MATERIALS AND METHODS: Subjects with ileal CD and a CTE from a single center retrospective study between 2016 and 2021 were included. 165 CTEs were reviewed by two fellowship-trained abdominal radiologists for the presence and spatial distribution of five qualitative CD findings: mural enhancement, mural stratification, stenosis, wall thickening, and mesenteric fat stranding. A Random Forest (RF) ensemble model using automatically extracted specialist-directed bowel features and an unbiased convolutional neural network (CNN) were developed to predict the presence of qualitative findings. Model performance was assessed using area under the curve (AUC), sensitivity, specificity, accuracy, and kappa agreement statistics. RESULTS: In 165 subjects with 29,895 individual qualitative finding assessments, agreement between radiologists for localization was good to very good (κ = 0.66 to 0.73), except for mesenteric fat stranding (κ = 0.47). RF prediction models had excellent performance, with an overall AUC, sensitivity, specificity of 0.91, 0.81 and 0.85, respectively. RF model and radiologist agreement for localization of CD findings approximated agreement between radiologists (κ = 0.67 to 0.76). Unbiased CNN models without benefit of disease knowledge had very similar performance to RF models which used specialist-defined imaging features. CONCLUSION: Machine learning techniques for CTE image analysis can identify the presence, location, and distribution of qualitative CD findings with similar performance to experienced radiologists.

12.
Comput Biol Med ; 179: 108819, 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38964245

ABSTRACT

Automatic skin lesion segmentation is an efficient aid to the early diagnosis of skin cancer, minimizing the missed-detection rate so that early skin cancer can be treated in time. However, significant variations in the texture, size, shape, and position of lesions, together with obscure boundaries in dermoscopy images, make it extremely challenging to locate and segment lesions accurately. To address these challenges, we propose a novel framework named TG-Net, which exploits textual diagnostic information to guide the segmentation of dermoscopic images. Specifically, TG-Net adopts a dual-stream encoder-decoder architecture. The dual-stream encoder comprises Res2Net for extracting image features and our proposed text attention (TA) block for extracting textual features. Through hierarchical guidance, textual features are embedded into the image feature extraction process. Additionally, we devise a multi-level fusion (MLF) module to merge higher-level features and generate a global feature map that guides subsequent steps. In the decoding stage, local features and the global feature map are fed to three multi-scale reverse attention (MSRA) modules to produce the final segmentation results. We conduct extensive experiments on three publicly accessible datasets, namely ISIC 2017, HAM10000, and PH2. The results demonstrate that TG-Net outperforms state-of-the-art methods, validating the reliability of our method. Source code is available at https://github.com/ukeLin/TG-Net.

13.
Comput Biol Med ; 179: 108743, 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38964246

ABSTRACT

Abdominal tumor segmentation is a crucial yet challenging step in the screening and diagnosis of tumors. While 3D segmentation models provide powerful performance, they demand substantial computational resources. Additionally, tumors often occupy only a small portion of the 3D volume, leading to imbalanced data and potentially overlooked information. Conversely, 2D segmentation models have a lightweight structure but disregard inter-slice correlation, risking missed tumors in edge slices. To address these challenges, this paper proposes a novel Position-Aware and Key Slice Feature Sharing 2D tumor segmentation model (PAKS-Net). Leveraging the Swin Transformer, we model the global features within each slice, facilitating the extraction of essential information. Furthermore, we introduce a Position-Aware module to capture the spatial relationship between tumors and their corresponding organs, mitigating noise and interference from surrounding organ tissues. To improve edge-slice segmentation accuracy, we employ key slices to assist the segmentation of other slices and so prioritize tumor regions. Through extensive experiments on three abdominal tumor segmentation CT datasets and a lung tumor segmentation CT dataset, PAKS-Net demonstrates superior performance, reaching tumor DSCs of 0.893, 0.769, 0.598 and 0.738 on the KiTS19, LiTS17, pancreas and LOTUS datasets respectively, surpassing 3D segmentation models while remaining computationally efficient with fewer parameters.

14.
Article in English | MEDLINE | ID: mdl-38965165

ABSTRACT

PURPOSE: Cardiac perfusion MRI is vital for disease diagnosis, treatment planning, and risk stratification, with anomalies serving as markers of underlying ischemic pathologies. AI-assisted methods and tools enable accurate and efficient left ventricular (LV) myocardium segmentation on all DCE-MRI timeframes, offering a solution to the challenges posed by the multidimensional nature of the data. This study aims to develop and assess an automated method for LV myocardial segmentation on DCE-MRI data from a local hospital. METHODS: The study uses retrospective DCE-MRI data from 55 subjects acquired at the local hospital on a 1.5 T MRI scanner. The dataset included subjects with and without cardiac abnormalities. The timepoint for the reference frame (post-contrast LV myocardium) was identified using the standard deviation across the temporal sequences. Iterative image registration of the other temporal images to this reference image was performed using Maxwell's demons algorithm. The registered stack was fed to a model built on the U-Net framework to predict the LV myocardium at all timeframes of the DCE-MRI. RESULTS: The mean and standard deviation of the dice similarity coefficient (DSC) for myocardial segmentation is 0.78 ± 0.04 for the pre-trained network Net_cine and 0.78 ± 0.03 for the fine-tuned network Net_dyn, which predicts masks on all timeframes individually. The DSC for Net_dyn ranged from 0.71 to 0.93, and the average DSC on the reference frame was 0.82 ± 0.06. CONCLUSION: The study proposed a fast and fully automated AI-assisted method to segment the LV myocardium on all timeframes of DCE-MRI data. The method is robust, its performance is independent of the intra-temporal sequence registration, and it can easily accommodate timeframes with potential registration errors.
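The abstract's criterion for selecting the reference frame, a standard-deviation measure across the temporal sequence, could be read several ways. One plausible sketch (the exact criterion in the paper may differ) selects the frame that deviates most from the voxel-wise temporal mean:

```python
import numpy as np

def pick_reference_frame(frames):
    """frames: array of shape (T, H, W). Returns the index of the frame
    that deviates most from the voxel-wise temporal mean; in a DCE series
    this tends to be a strongly enhanced post-contrast frame."""
    mean_img = frames.mean(axis=0)                        # temporal mean image
    deviation = np.abs(frames - mean_img).mean(axis=(1, 2))  # per-frame score
    return int(np.argmax(deviation))
```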

15.
Article in English | MEDLINE | ID: mdl-38965166

ABSTRACT

PURPOSE: Recently, transformer models have become the state of the art in various medical image segmentation tasks and challenges, outperforming most conventional deep learning approaches. Picking up on that trend, this study applies various transformer models to the highly challenging task of colorectal cancer (CRC) segmentation in CT imaging and assesses how they hold up against the current state-of-the-art convolutional neural network (CNN), the nnUnet. Furthermore, we investigate the impact of network size on the resulting accuracies, since transformer models tend to be significantly larger than conventional network architectures. METHODS: Six different transformer models, with specific architectural advancements and network sizes, were implemented alongside the aforementioned nnUnet and applied to the CRC segmentation task of the medical segmentation decathlon. RESULTS: The best results were achieved with the Swin-UNETR, D-Former, and VT-Unet, all transformer models, with Dice similarity coefficients (DSC) of 0.60, 0.59 and 0.59, respectively. The current state-of-the-art CNN, the nnUnet, could therefore be outperformed by transformer architectures on this task. Furthermore, a comparison with the inter-observer variability (IOV) of approximately 0.64 DSC indicates almost expert-level accuracy. The comparatively low IOV emphasizes the complexity and challenge of CRC segmentation and indicates limits on the achievable segmentation accuracy. CONCLUSION: This study shows that transformer models continue their upward trend in producing state-of-the-art results, here for the challenging task of CRC segmentation. However, with ever smaller gains in total accuracy, as demonstrated by the on-par performance of multiple network variants in this study, other advantages such as efficiency, low computational demands, or ease of adaptation to new tasks become more relevant.

16.
Mar Pollut Bull ; 205: 116644, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38959569

ABSTRACT

The cleanup of marine debris is an urgent problem in marine environmental protection, and AUVs with visual recognition technology have gradually become a central research focus. However, existing recognition algorithms have slow inference speeds and high computational overhead, and they are affected by blurred images and interfering information. To solve these problems, a real-time semantic segmentation network called WaterBiSeg-Net is proposed. First, we propose the Multi-scale Information Enhancement Module to mitigate the impact of low-definition and blurred images. Then, the Gated Aggregation Layer is proposed to suppress interference from background information. In addition, we propose a method that extracts boundary information directly. Finally, extensive experiments on the SUIM and TrashCan datasets show that WaterBiSeg-Net can better accomplish the task of marine debris segmentation and provide accurate segmentation results to AUVs in real time. This research offers AUVs a low-computational-cost, real-time solution for identifying marine debris.

17.
Neural Netw ; 178: 106489, 2024 Jun 22.
Article in English | MEDLINE | ID: mdl-38959598

ABSTRACT

Medical image segmentation is crucial for understanding anatomical or pathological changes, playing a key role in computer-aided diagnosis and advancing intelligent healthcare. Currently, important issues in medical image segmentation need to be addressed, particularly the problem of segmenting blurry edge regions and the generalizability of segmentation models. Therefore, this study focuses on different medical image segmentation tasks and the issue of blurriness. By addressing these tasks, the study significantly improves diagnostic efficiency and accuracy, contributing to the overall enhancement of healthcare outcomes. To optimize segmentation performance and leverage feature information, we propose a Neighborhood Fuzzy c-Means Multiscale Pyramid Hybrid Attention Unet (NFMPAtt-Unet) model. NFMPAtt-Unet comprises three core components: the Multiscale Dynamic Weight Feature Pyramid module (MDWFP), the Hybrid Weighted Attention mechanism (HWA), and the Neighborhood Rough Set-based Fuzzy c-Means Feature Extraction module (NFCMFE). The MDWFP dynamically adjusts weights across multiple scales, improving feature information capture. The HWA enhances the network's ability to capture and utilize crucial features, while the NFCMFE, grounded in neighborhood rough set concepts, aids in fuzzy C-means feature extraction, addressing complex structures and uncertainties in medical images, thereby enhancing adaptability. Experimental results demonstrate that NFMPAtt-Unet outperforms state-of-the-art models, highlighting its efficacy in medical image segmentation.

18.
Phys Med Biol ; 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38959909

ABSTRACT

OBJECTIVE: Head and neck (H&N) cancers are among the most prevalent types of cancer worldwide, and [18F]F-FDG PET/CT is widely used for H&N cancer management. Recently, the diffusion model has demonstrated remarkable performance in various image-generation tasks. In this work, we propose a 3D diffusion model that accurately performs H&N tumor segmentation from 3D PET and CT volumes. Approach. The 3D diffusion model was developed considering the 3D nature of the acquired PET and CT images. During the reverse process, the model uses a 3D U-Net structure and takes the concatenation of the 3D PET, CT, and Gaussian noise volumes as network input to generate the tumor mask. Experiments on the HECKTOR challenge dataset were conducted to evaluate the effectiveness of the proposed diffusion model, with several state-of-the-art techniques based on U-Net and Transformer structures adopted as reference methods. The benefits of employing both PET and CT as network input, as well as of extending the diffusion model from 2D to 3D, were investigated using various quantitative metrics and the generated uncertainty maps. Main results. The proposed 3D diffusion model generated more accurate segmentation results than the other methods (mean Dice of 0.739 versus less than 0.726 for the others). Compared to its 2D counterpart, the proposed 3D model yielded superior results (mean Dice of 0.739 versus 0.669). Our experiments also highlighted the advantage of dual-modality PET and CT data over single-modality data for H&N tumor segmentation (single-modality mean Dice below 0.570). Significance. This work demonstrated the effectiveness of the proposed 3D diffusion model in generating more accurate H&N tumor segmentation masks than the reference methods.

19.
MAGMA ; 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38960988

ABSTRACT

OBJECTIVE: To highlight progress and opportunities in measuring kidney size with MRI, and to inspire research into resolving the remaining methodological gaps and unanswered questions relating to kidney size assessment. MATERIALS AND METHODS: This work is not a comprehensive review of the literature but highlights valuable recent developments in MRI of kidney size. RESULTS: The links between renal (patho)physiology and kidney size are outlined. Common methodological approaches for MRI of kidney size are reviewed. Techniques tailored for renal segmentation and quantification of kidney size are discussed. Frontier applications of kidney size monitoring in preclinical models and human studies are reviewed. Future directions of MRI of kidney size are explored. CONCLUSION: MRI of kidney size matters. It will facilitate a growing range of (pre)clinical applications and provide a springboard for new insights into renal (patho)physiology. As kidney size can easily be obtained from already established renal MRI protocols without additional scans, this measurement should always accompany diagnostic MRI exams. Reconciling global kidney size changes with alterations in the size of specific renal layers is an important topic for further research. Acute kidney size measurements alone cannot distinguish between changes induced by alterations in the blood or the tubular volume fractions; this distinction requires further research into cartography of the renal blood and tubular volumes.

20.
Int J Numer Method Biomed Eng ; : e3843, 2024 Jul 04.
Article in English | MEDLINE | ID: mdl-38963037

ABSTRACT

Infrared thermography is gaining relevance in breast cancer assessment. For this purpose, breast segmentation in thermograms is an important task for performing automatic image analysis and detecting possible temperature changes that indicate the presence of malignancy. However, it is not a simple task: the breast limit borders, especially the top borders, often have low contrast, making it difficult to isolate the breast area. Several algorithms have been proposed for breast segmentation, but they depend strongly on the contrast at the lower breast borders and on filtering algorithms to remove false edges. This work takes advantage of the distinctive inframammary shape to simplify the definition of the lower breast border regardless of the contrast level, which in turn provides a strong anatomical reference to support the definition of the poorly marked upper boundary of the breasts, one of the major challenges in the literature. To demonstrate the viability of the proposed technique for automatic breast segmentation, we applied it to a database of 180 thermograms and compared the results with those reported in the literature. Our approach achieved high performance, with an Intersection over Union of 0.934, even higher than that reported for artificial-intelligence algorithms. The performance is invariant to breast size and to the thermal contrast of the images.
