Results 1 - 20 of 274
1.
Diagnostics (Basel) ; 14(11)2024 May 28.
Article in English | MEDLINE | ID: mdl-38893643

ABSTRACT

The evaluation of mammographic breast density, a critical indicator of breast cancer risk, is traditionally performed by radiologists via visual inspection of mammography images, utilizing the Breast Imaging-Reporting and Data System (BI-RADS) breast density categories. However, this method is subject to substantial interobserver variability, leading to inconsistencies and potential inaccuracies in density assessment and subsequent risk estimations. To address this, we present a deep learning-based automatic detection algorithm (DLAD) designed for the automated evaluation of breast density. Our multicentric, multi-reader study leverages a diverse dataset of 122 full-field digital mammography studies (488 images in CC and MLO projections) sourced from three institutions. Two experienced radiologists conducted a retrospective analysis, establishing a ground truth for 72 mammography studies (BI-RADS class A: 18, BI-RADS class B: 43, BI-RADS class C: 7, BI-RADS class D: 4). The efficacy of the DLAD was then compared to the performance of five independent radiologists with varying levels of experience. The DLAD showed robust performance, achieving an accuracy of 0.819 (95% CI: 0.736-0.903), along with an F1 score of 0.798 (0.594-0.905), precision of 0.806 (0.596-0.896), recall of 0.830 (0.650-0.946), and a Cohen's kappa (κ) of 0.708 (0.562-0.841), matching and in four cases exceeding the performance of the individual radiologists. Statistical analysis revealed no significant difference in accuracy between the DLAD and the radiologists, underscoring the model's competitive diagnostic alignment with professional radiologist assessments. These results demonstrate that the deep learning-based automatic detection algorithm can enhance the accuracy and consistency of breast density assessments, offering a reliable tool for improving breast cancer screening outcomes.
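The agreement metrics reported in this entry (accuracy, F1, Cohen's kappa) can all be derived from a single confusion matrix. A minimal numpy sketch of that computation (illustrative only, not the authors' code; the function name is ours):

```python
import numpy as np

def classification_metrics(y_true, y_pred, n_classes):
    """Accuracy, macro-averaged F1, and Cohen's kappa from label arrays."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    # Confusion matrix: rows = true class, columns = predicted class.
    cm = np.zeros((n_classes, n_classes), dtype=float)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    n = cm.sum()
    accuracy = np.trace(cm) / n
    # Per-class precision/recall -> macro F1 (classes absent from the data give 0).
    tp = np.diag(cm)
    col, row = cm.sum(axis=0), cm.sum(axis=1)
    precision = np.divide(tp, col, out=np.zeros(n_classes), where=col > 0)
    recall = np.divide(tp, row, out=np.zeros(n_classes), where=row > 0)
    f1 = np.divide(2 * precision * recall, precision + recall,
                   out=np.zeros(n_classes), where=(precision + recall) > 0)
    macro_f1 = f1.mean()
    # Cohen's kappa: observed agreement corrected for chance agreement.
    p_o = accuracy
    p_e = (col * row).sum() / n ** 2
    kappa = (p_o - p_e) / (1 - p_e)
    return accuracy, macro_f1, kappa
```

In practice these values are usually taken from a metrics library (e.g. scikit-learn); the sketch just makes the definitions behind the reported numbers explicit.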

2.
Med Biol Eng Comput ; 2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38898202

ABSTRACT

Medical image segmentation commonly involves diverse tissue types and structures, including tasks such as blood vessel segmentation and nerve fiber bundle segmentation. Enhancing the continuity of segmentation outcomes represents a pivotal challenge in medical image segmentation, driven by the demands of clinical applications such as disease localization and quantification. In this study, a novel segmentation model is specifically designed for retinal vessel segmentation, leveraging vessel orientation information, boundary constraints, and continuity constraints to improve segmentation accuracy. To achieve this, we cascade U-Net with a long short-term memory network (LSTM). U-Net is characterized by a small number of parameters and high segmentation efficiency, while LSTM offers a parameter-sharing capability. Additionally, we introduce an orientation information enhancement module inserted into the model's bottom layer to obtain feature maps containing orientation information through an orientation convolution operator. Furthermore, we design a new hybrid loss function that consists of connectivity loss, boundary loss, and cross-entropy loss. Experimental results demonstrate that the model achieves excellent segmentation outcomes across three widely recognized retinal vessel segmentation datasets: CHASE_DB1, DRIVE, and ARIA.

3.
Comput Med Imaging Graph ; 116: 102410, 2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38905961

ABSTRACT

Trabecular bone analysis plays a crucial role in understanding bone health and disease, with applications like osteoporosis diagnosis. This paper presents a comprehensive study on 3D trabecular computed tomography (CT) image restoration, addressing significant challenges in this domain. The research introduces a backbone model, Cascade-SwinUNETR, for single-view 3D CT image restoration. This model leverages deep layer aggregation with supervision and the capabilities of the Swin-Transformer to excel in feature extraction. Additionally, the study introduces DVSR3D, a dual-view restoration model that achieves good performance through deep feature fusion with attention mechanisms and autoencoders. Furthermore, an Unsupervised Domain Adaptation (UDA) method is introduced, allowing models to adapt to input data distributions without additional labels, holding significant potential for real-world medical applications and eliminating the need for invasive data collection procedures. The study also includes the curation of a new dual-view dataset for CT image restoration, addressing the scarcity of real human bone data in Micro-CT. Finally, the dual-view approach is validated through downstream medical bone microstructure measurements. Our contributions open several paths for trabecular bone analysis, promising improved clinical outcomes in bone health assessment and diagnosis.

4.
Technol Health Care ; 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38875055

ABSTRACT

BACKGROUND: The incidence of kidney tumors is progressively increasing each year. The precision of segmentation for kidney tumors is crucial for diagnosis and treatment. OBJECTIVE: To enhance accuracy and reduce manual involvement, we propose a deep learning-based method for the automatic segmentation of kidneys and kidney tumors in CT images. METHODS: The proposed method comprises two parts: object detection and segmentation. We first use a model to detect the position of the kidney, then narrow the segmentation range, and finally use an attentional recurrent residual convolutional network for segmentation. RESULTS: Our model achieved a kidney Dice score of 0.951 and a tumor Dice score of 0.895 on the KiTS19 dataset. Experimental results show that our model significantly improves the accuracy of kidney and kidney tumor segmentation and outperforms other advanced methods. CONCLUSION: The proposed method provides an efficient and automatic solution for accurately segmenting kidneys and renal tumors on CT images. Additionally, this study can assist radiologists in assessing patients' conditions and making informed treatment decisions.
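The Dice score used here to evaluate the kidney and tumor masks measures the overlap between a predicted and a reference binary mask: 2·|A∩B| / (|A|+|B|). A small numpy sketch (not the paper's implementation; the epsilon guard is our convention):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks.

    Computes 2*|A∩B| / (|A| + |B|); eps guards against two empty masks.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

The same formula applies unchanged to 3D volumes, since the masks are flattened by the sums.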

5.
J Xray Sci Technol ; 2024 May 31.
Article in English | MEDLINE | ID: mdl-38848160

ABSTRACT

BACKGROUND: The rapid development of deep learning techniques has greatly improved the performance of medical image segmentation, and segmentation networks based on convolutional neural networks (CNNs) and Transformers have been widely used in this field. However, due to the restricted receptive field of the convolution operation and the self-attention mechanism's limited ability to extract local fine detail, current networks with a purely convolutional or purely Transformer backbone still perform poorly in medical image segmentation. METHODS: In this paper, we propose FDB-Net (Fusion Double Branch Network), a double-branch medical image segmentation network combining a CNN and a Transformer. Using a CNN containing gnConv blocks and a Transformer containing Varied-Size Window Attention (VWA) blocks as the feature extraction backbone, the dual-path encoder gives the network a global receptive field while preserving access to local detail features of the target. We also propose a new feature fusion module, Deep Feature Fusion (DFF), which fuses features from the two structurally different encoders during encoding, ensuring the effective integration of global and local image information. CONCLUSION: Our model achieves advanced results on all three typical medical image segmentation tasks, fully validating the effectiveness of FDB-Net.

6.
Med Biol Eng Comput ; 2024 May 28.
Article in English | MEDLINE | ID: mdl-38802608

ABSTRACT

Three-dimensional vessel model reconstruction from patient-specific magnetic resonance angiography (MRA) images often requires manual maneuvers. This study aimed to establish a deep learning (DL)-based method for vessel model reconstruction. Time-of-flight MRA of 40 patients with internal carotid artery aneurysms was prepared, and three-dimensional vessel models were constructed using the threshold and region-growing method. Using those datasets, supervised deep learning with a 2D U-Net was performed to reconstruct 3D vessel models. The accuracy of the DL-based vessel segmentations was assessed using 20 MRA images outside the training dataset. The Dice coefficient was used as the indicator of model accuracy, and blood flow simulation was performed using the DL-based vessel model. The created DL model successfully reconstructed a three-dimensional model in all 60 cases. The Dice coefficient in the test dataset was 0.859. Of note, the DL-generated model proved its efficacy even for large aneurysms (> 10 mm in diameter). The reconstructed models were suitable for blood flow simulation to assist clinical decision-making. Our DL-based method successfully reconstructed three-dimensional vessel models with moderate accuracy. Future studies are warranted to demonstrate that DL-based technology can promote medical image processing.
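The threshold and region-growing step used here to build the ground-truth vessel models grows a region outward from a seed voxel, absorbing connected neighbours whose intensity is close enough to the seed. A simplified 2D toy version (the study works in 3D with its own threshold criteria; this sketch and its parameters are ours):

```python
import numpy as np
from collections import deque

def region_grow(image, seed, threshold):
    """Grow a binary region from `seed`, adding 4-connected neighbours whose
    intensity lies within `threshold` of the seed intensity (2D toy version
    of the threshold/region-growing construction of vessel ground truth)."""
    image = np.asarray(image, dtype=float)
    seed_val = image[seed]
    mask = np.zeros(image.shape, dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]
                    and not mask[nr, nc]
                    and abs(image[nr, nc] - seed_val) <= threshold):
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask
```

Extending to 3D only changes the neighbour offsets (6- or 26-connectivity over voxels).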

7.
Biomed Phys Eng Express ; 10(4)2024 May 31.
Article in English | MEDLINE | ID: mdl-38781934

ABSTRACT

Congenital heart defects (CHD) are one of the serious problems that arise during pregnancy. Early CHD detection reduces death rates and morbidity but is hampered by the relatively low detection rates (i.e., 60%) of current screening technology. The detection rate could be increased by supplementing ultrasound imaging with fetal ultrasound image evaluation (FUSI) using deep learning techniques. As a result, non-invasive fetal ultrasound imaging has clear potential in the diagnosis of CHD and should be considered in addition to fetal echocardiography. This review paper highlights cutting-edge technologies for detecting CHD using ultrasound images, covering pre-processing, localization, segmentation, and classification. Existing pre-processing techniques include spatial-domain filters, non-linear mean filters, transform-domain filters, and denoising methods based on Convolutional Neural Networks (CNNs); segmentation techniques include thresholding-based methods, region-growing-based methods, edge detection techniques, Artificial Neural Network (ANN)-based methods, and other non-deep-learning and deep learning approaches. The paper also suggests future research directions for improving current methodologies.


Subjects
Deep Learning , Congenital Heart Defects , Neural Networks (Computer) , Prenatal Ultrasonography , Humans , Congenital Heart Defects/diagnostic imaging , Prenatal Ultrasonography/methods , Pregnancy , Female , Computer-Assisted Image Processing/methods , Echocardiography/methods , Algorithms , Fetal Heart/diagnostic imaging , Fetus/diagnostic imaging
8.
Technol Health Care ; 32(S1): 403-413, 2024.
Article in English | MEDLINE | ID: mdl-38759064

ABSTRACT

BACKGROUND: Cardiovascular diseases are the top cause of death in China. Manual segmentation of cardiovascular images is prone to errors, demanding an automated, rapid, and precise solution for clinical diagnosis. OBJECTIVE: To apply deep learning to automatic cardiovascular image segmentation, efficiently identifying pixel regions of interest to support auxiliary diagnosis and research in cardiovascular diseases. METHODS: In our study, we introduce innovative Region Weighted Fusion (RWF) and Shape Feature Refinement (SFR) modules, utilizing polarized self-attention for significant performance improvement in multiscale feature integration and shape fine-tuning. The RWF module includes reshaping, weight computation, and feature fusion, enhancing high-resolution attention computation and reducing information loss. Model optimization through loss functions offers a more reliable solution for cardiovascular medical image processing. RESULTS: Our method excels in segmentation accuracy, emphasizing the vital role of the RWF module. It demonstrates outstanding performance in cardiovascular image segmentation, potentially raising clinical practice standards. CONCLUSIONS: Our method ensures reliable medical image processing, guiding cardiovascular segmentation for future advancements in practical healthcare and contributing scientifically to enhanced disease diagnosis and treatment.


Subjects
Cardiovascular Diseases , Deep Learning , Humans , Cardiovascular Diseases/diagnostic imaging , Computer-Assisted Image Processing/methods , China , Algorithms
9.
Bioengineering (Basel) ; 11(4)2024 Apr 19.
Article in English | MEDLINE | ID: mdl-38671820

ABSTRACT

BACKGROUND AND OBJECTIVE: Locally advanced rectal cancer (LARC) poses significant treatment challenges due to its location and high recurrence rates. Accurate early detection is vital for treatment planning. With magnetic resonance imaging (MRI) being resource-intensive, this study explores using artificial intelligence (AI) to interpret computed tomography (CT) scans as an alternative, providing a quicker, more accessible diagnostic tool for LARC. METHODS: In this retrospective study, CT images of 1070 T3-4 rectal cancer patients from 2010 to 2022 were analyzed. AI models, trained on 739 cases, were validated using two test sets of 134 and 197 cases. Using techniques such as nonlocal mean filtering, dynamic histogram equalization, and the EfficientNetB0 algorithm, we identified images featuring characteristics of a positive circumferential resection margin (CRM) for the diagnosis of LARC. In the second stage, both hard and soft voting systems were used to ascertain the LARC status of cases, with the novel soft voting system improving case identification accuracy. The local recurrence rates and overall survival of the cases predicted by our model were assessed to underscore its clinical value. RESULTS: The AI model exhibited high accuracy in identifying CRM-positive images, achieving an area under the curve (AUC) of 0.89 in the first test set and 0.86 in the second. In a patient-based analysis, the model reached AUCs of 0.84 and 0.79 using a hard voting system and AUCs of 0.93 and 0.88, respectively, using a soft voting system. Notably, AI-identified LARC cases exhibited a significantly higher five-year local recurrence rate and displayed a trend towards increased mortality across various thresholds. Furthermore, the model's capability to predict adverse clinical outcomes was superior to that of traditional assessments.
CONCLUSION: AI can precisely identify CRM-positive LARC cases from CT images, signaling an increased local recurrence and mortality rate. Our study presents a swifter and more reliable method for detecting LARC compared to traditional CT or MRI techniques.
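The hard- versus soft-voting aggregation described in this entry can be sketched in a few lines: hard voting takes a majority over per-image binary decisions, while soft voting thresholds the mean per-image probability. The thresholds and function names below are our assumptions, not the study's code:

```python
import numpy as np

def hard_vote(image_probs, threshold=0.5):
    """Patient-level call: majority vote over per-image binary decisions."""
    votes = np.asarray(image_probs) >= threshold
    return bool(votes.mean() >= 0.5)

def soft_vote(image_probs, threshold=0.5):
    """Patient-level call: threshold the mean per-image probability."""
    return bool(np.mean(image_probs) >= threshold)
```

The two rules can disagree: for per-image probabilities [0.9, 0.4, 0.4], hard voting returns negative (one vote in three) while soft voting returns positive (mean 0.57), which illustrates why the two schemes can yield different patient-level AUCs.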

10.
Biomed Mater ; 19(3)2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38626778

ABSTRACT

Accurate segmentation of the coronary artery tree and personalized 3D printing from medical images are essential for CAD diagnosis and treatment. The current literature on 3D printing relies solely on generic models created with different software or on 3D coronary artery models manually segmented from medical images. Moreover, few studies have examined the bioprintability of 3D models generated by artificial intelligence (AI) segmentation for complex and branched structures. In this study, deep learning algorithms with transfer learning were employed for accurate segmentation of the coronary artery tree from medical images to generate printable segmentations. We propose a combination of deep learning and 3D printing that accurately segments and prints complex vascular patterns in coronary arteries. We then 3D-printed the AI-generated coronary artery segmentation to fabricate a bifurcated hollow vascular structure. Our results indicate improved segmentation performance with the aid of transfer learning, with a Dice overlap score of 0.86 on a test set of 10 coronary tomography angiography images. Bifurcated regions from the 3D models were then printed into a Pluronic F-127 support bath using an alginate + glucomannan hydrogel. We successfully fabricated the bifurcated coronary artery structures with high length and wall-thickness accuracy; however, the outer diameters of the vessels and the length of the bifurcation point differed from the 3D models. The extrusion of unnecessary material, primarily observed when the nozzle moves from the left to the right vessel during 3D printing, can be mitigated by adjusting the nozzle speed. Moreover, shape accuracy could be further improved by designing a multi-axis printhead that can change the printing angle in three dimensions. Thus, this study demonstrates the potential of AI-segmented 3D models in the 3D printing of coronary artery structures which, when further improved, can be used for the fabrication of patient-specific vascular implants.


Subjects
Algorithms , Artificial Intelligence , Coronary Vessels , Three-Dimensional Printing , Humans , Coronary Vessels/diagnostic imaging , Deep Learning , Three-Dimensional Imaging/methods , Coronary Angiography/methods , Alginates/chemistry , Computed Tomography Angiography/methods , Software
11.
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi ; 41(2): 213-219, 2024 Apr 25.
Article in Chinese | MEDLINE | ID: mdl-38686400

ABSTRACT

Medical image registration plays an important role in medical diagnosis and treatment planning. However, current registration methods based on deep learning still face challenges such as an insufficient ability to extract global information, large numbers of network parameters, and slow inference speed. Therefore, this paper proposes a new model, LCU-Net, which uses parallel lightweight convolution to improve global information extraction. The problems of large parameter counts and slow inference speed are addressed through multi-scale fusion. Experimental results showed that the Dice coefficient of LCU-Net reached 0.823, the Hausdorff distance was 1.258, and the number of network parameters was reduced by about one quarter compared with that before multi-scale fusion. The proposed algorithm shows remarkable advantages in medical image registration tasks: it not only surpasses the existing comparison algorithms in performance but also has excellent generalization performance and wide application prospects.
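The Hausdorff distance reported here alongside the Dice coefficient measures the worst-case disagreement between two point sets (e.g. segmentation boundaries): the largest nearest-neighbour distance, taken in both directions. A brute-force numpy sketch (illustrative; the exact variant used by the paper, such as a 95th-percentile version, is not specified):

```python
import numpy as np

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between two point sets of shape (N, D)
    and (M, D): the max over both directions of the largest
    nearest-neighbour Euclidean distance."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    # Pairwise Euclidean distances, shape (N, M).
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

For large boundaries a KD-tree nearest-neighbour query replaces the O(N·M) distance matrix.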


Subjects
Algorithms , Brain , Computer-Assisted Image Processing , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Brain/diagnostic imaging , Computer-Assisted Image Processing/methods , Neural Networks (Computer) , Deep Learning
12.
J Imaging Inform Med ; 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38622385

ABSTRACT

Convolutional neural networks (CNNs) have been used for a wide variety of deep learning applications, especially in computer vision. For medical image processing, researchers have identified certain challenges associated with CNNs: the generation of less informative features, limitations in capturing both high- and low-frequency information within feature maps, and the computational cost incurred when enlarging receptive fields by deepening the network. Transformers have emerged as an approach aiming to address these specific limitations of CNNs in the context of medical image analysis. Preservation of all spatial details of medical images is necessary to ensure accurate patient diagnosis. Hence, this research introduces a pure Vision Transformer (ViT) denoising network for medical image processing, specifically for low-dose computed tomography (LDCT) image denoising. The proposed model follows a U-Net framework containing ViT modules with an integrated Noise2Neighbor (N2N) interpolation operation. Five different datasets containing LDCT and normal-dose CT (NDCT) image pairs were used to carry out this experiment. To test the efficacy of the proposed model, quantitative and visual results were compared among CNN-based (BM3D, RED-CNN, DRL-E-MP), hybrid CNN-ViT-based (TED-Net), and the proposed pure ViT-based denoising models. The findings show an increase of about 15-20% in SSIM and PSNR when using self-attention transformers compared with a typical pure CNN. Visual results also showed improvements, especially in the rendering of fine structural details of CT images.
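PSNR, one of the two quality metrics compared in this entry, is a direct function of the mean squared error between the denoised and reference images. A minimal sketch of its definition (SSIM involves local windowed statistics and is omitted here):

```python
import numpy as np

def psnr(reference, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE).

    `data_range` is the maximum possible pixel value (1.0 for normalized
    images, 255 for 8-bit); identical images give infinity.
    """
    reference = np.asarray(reference, dtype=float)
    test = np.asarray(test, dtype=float)
    mse = np.mean((reference - test) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)
```

Because PSNR is logarithmic in MSE, the 15-20% improvements quoted above correspond to substantially lower pixel-wise error.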

13.
Healthc Technol Lett ; 11(2-3): 126-136, 2024.
Article in English | MEDLINE | ID: mdl-38638491

ABSTRACT

The task of segmentation is integral to computer-aided surgery systems. Given the privacy concerns associated with medical data, collecting a large amount of annotated data for training is challenging. Unsupervised learning techniques, such as contrastive learning, have shown powerful capabilities in learning image-level representations from unlabelled data. This study leverages classification labels to enhance the accuracy of a segmentation model trained on limited annotated data. The method uses a multi-scale projection head to extract image features at various scales. The partitioning method for positive sample pairs is then improved so that contrastive learning on the extracted features at each scale effectively represents the differences between positive and negative samples. Furthermore, the model is trained simultaneously with both segmentation and classification labels. This enables the model to extract features more effectively from each segmentation target class and further accelerates convergence. The method was validated using the publicly available CholecSeg8k dataset for comprehensive abdominal cavity surgical segmentation. Compared to selected existing methods, the proposed approach significantly enhances segmentation performance, even with a small labelled subset (1-10%) of the dataset, showcasing a superior intersection over union (IoU) score.

14.
Healthc Technol Lett ; 11(2-3): 189-195, 2024.
Article in English | MEDLINE | ID: mdl-38638495

ABSTRACT

An important part of surgical training in ophthalmology is understanding how to proficiently perform cataract surgery. Operating skill in cataract surgery is typically assessed by real-time or video-based expert review using a rating scale, which is time-consuming, subjective, and labour-intensive. A typical trainee graduates with over 100 complete surgeries, each of which requires review by the surgical educators. Due to the consistently repetitive nature of this task, it lends itself well to machine learning-based evaluation. Recent studies utilize deep learning models trained on tool motion trajectories obtained using additional equipment or robotic systems; however, the process of extracting frames from videos for tool recognition and phase recognition followed by skill assessment is laborious. This project proposes a deep learning model for skill evaluation using raw surgery videos that is cost-effective and end-to-end trainable. An advanced ensemble of convolutional neural network models is leveraged to model technical skills in cataract surgeries and is evaluated using a large dataset comprising almost 200 surgical trials. The highest accuracy, 0.8494, is observed on the phacoemulsification step data. Our model yielded an average accuracy of 0.8200 and an average AUC score of 0.8800 across all four phase datasets of cataract surgery, demonstrating its robustness across different data. The proposed ensemble model with 2D and 3D convolutional neural networks demonstrated promising results without using tool motion trajectories to evaluate surgical expertise.

15.
Healthc Technol Lett ; 11(2-3): 157-166, 2024.
Article in English | MEDLINE | ID: mdl-38638498

ABSTRACT

This study focuses on enhancing the inference speed of laparoscopic tool detection on embedded devices. Laparoscopy, a minimally invasive surgery technique, markedly reduces patient recovery times and postoperative complications. Real-time laparoscopic tool detection assists laparoscopy by providing information for surgical navigation, and its implementation on embedded devices is gaining interest due to the portability, network independence, and scalability of the devices. However, embedded devices often face computational resource limitations that can hinder inference speed. To mitigate this concern, the work introduces a two-fold modification to the YOLOv7 model: the number of feature channels is halved and RepBlock is integrated, yielding the YOLOv7-RepFPN model. This configuration leads to a significant reduction in computational complexity. Additionally, the focal EIoU (efficient intersection over union) loss function is employed for bounding box regression. Experimental results on an embedded device demonstrate that for frame-by-frame laparoscopic tool detection, the proposed YOLOv7-RepFPN achieved an mAP of 88.2% (with IoU set to 0.5) on a custom dataset based on EndoVis17, and an inference speed of 62.9 FPS. Contrasting with the original YOLOv7, which garnered an 89.3% mAP and 41.8 FPS under identical conditions, the methodology enhances the speed by 21.1 FPS while maintaining comparable detection accuracy. This emphasizes the effectiveness of the approach.
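The mAP figure above is computed "with IoU set to 0.5", i.e. a predicted box counts as a true positive only if its intersection over union with a ground-truth box reaches 0.5. IoU for axis-aligned boxes can be sketched as follows (the (x1, y1, x2, y2) corner convention is our assumption):

```python
def box_iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

mAP then averages precision over recall levels per class, with this IoU test deciding which detections match ground truth.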

16.
Healthc Technol Lett ; 11(2-3): 48-58, 2024.
Article in English | MEDLINE | ID: mdl-38638504

ABSTRACT

Real-time detection of surgical tools in laparoscopic data plays a vital role in understanding surgical procedures, evaluating the performance of trainees, facilitating learning, and ultimately supporting the autonomy of robotic systems. Existing detection methods for surgical data struggle to deliver both high processing speed and high prediction accuracy. Most methods rely on anchors or region proposals, limiting their adaptability to variations in tool appearance and leading to sub-optimal detection results. Moreover, non-anchor-based detectors have been only partially explored for this problem, without remarkable results. Here, an anchor-free architecture based on a transformer that allows real-time tool detection is introduced. The proposal is to utilize multi-scale features within the feature extraction layer and in the transformer-based detection architecture through positional encoding, which can refine and capture context-aware and structural information of different-sized tools. Furthermore, a supervised contrastive loss is introduced to optimize representations of object embeddings, resulting in improved feed-forward network performance for classifying localized bounding boxes. The strategy demonstrates superiority over state-of-the-art (SOTA) methods: compared to the most accurate existing SOTA method (DSSS), the approach improves mAP by nearly 4% and reduces the inference time by 113%. It also showed a 7% higher mAP than the baseline model.

17.
BMC Ophthalmol ; 24(1): 98, 2024 Mar 04.
Article in English | MEDLINE | ID: mdl-38438876

ABSTRACT

Image segmentation is a fundamental task in deep learning that analyses the content of images for further processing. However, for supervised segmentation methods, collecting pixel-level labels is very time-consuming and labour-intensive. In the medical image processing area, for optic disc and cup segmentation, we consider two challenging problems that remain unsolved. One is how to design an efficient network that captures the global context of the medical image and executes quickly in real applications. The other is how to train a deep segmentation network using few training data, owing to medical privacy issues. In this paper, to address these issues, we first design a novel attention-aware segmentation model equipped with a multi-scale attention module in a pyramid-structured encoder-decoder network, which can efficiently learn the global semantics and long-range dependencies of the input images. We also inject the prior knowledge that the optic cup lies inside the optic disc through a novel loss function. We then propose a self-supervised contrastive learning method for optic disc and cup segmentation. The unsupervised feature representation is learned by matching an encoded query to a dictionary of encoded keys using a contrastive technique. Fine-tuning the pre-trained model using the proposed loss function helps achieve good performance on the task. To validate the effectiveness of the proposed method, extensive systematic evaluations on different public challenging optic disc and cup benchmarks, including the DRISHTI-GS and REFUGE datasets, demonstrate its superiority: it achieves new state-of-the-art F1 scores approaching 0.9801 and 0.9087, respectively, with Dice coefficients of 0.9657 for the disc and 0.8976 for the cup. The code will be made publicly available.
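Matching an encoded query to a dictionary of encoded keys with a contrastive technique, as described above, is commonly formulated as the InfoNCE loss: cross-entropy of the query's similarity to its positive key among negatives. A numpy sketch under that assumption (the paper's exact formulation may differ; names and the temperature value are ours):

```python
import numpy as np

def info_nce_loss(query, positive_key, negative_keys, temperature=0.07):
    """InfoNCE: negative log-probability that the query matches its positive
    key among the negatives, using temperature-scaled cosine similarity."""
    def normalize(v):
        v = np.asarray(v, dtype=float)
        return v / np.linalg.norm(v, axis=-1, keepdims=True)

    q = normalize(query)
    # Positive key goes first, followed by the negatives.
    keys = normalize(np.vstack([positive_key, negative_keys]))
    logits = keys @ q / temperature   # similarity of the query to each key
    logits -= logits.max()            # numerical stability before softmax
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])          # positive key is at index 0
```

Minimizing this loss pulls the query toward its positive key and pushes it away from the negatives in the embedding space.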


Subjects
Optic Disk , Humans , Optic Disk/diagnostic imaging , Awareness , Benchmarking , Computer-Assisted Image Processing , Attention
18.
Sci Rep ; 14(1): 5068, 2024 03 01.
Article in English | MEDLINE | ID: mdl-38429362

ABSTRACT

Using deep learning technology to segment oral CBCT images for clinical diagnosis and treatment is one of the important research directions in clinical dentistry. However, blurred contours and scale differences limit the segmentation accuracy of current methods at the crown edge and root regions, making these difficult-to-segment samples in the oral CBCT segmentation task. To address these problems, this work proposes a Difficult-to-Segment Focus Network (DSFNet) for segmenting oral CBCT images. The network utilizes a Feature Capturing Module (FCM) to efficiently capture local and long-range features, enhancing feature extraction performance. Additionally, a Multi-Scale Feature Fusion Module (MFFM) is employed to merge multi-scale feature information. To further raise the loss weight of difficult-to-segment samples, a hybrid loss function combining Focal Loss and Dice Loss is proposed. With this hybrid loss function, DSFNet achieves a 91.85% Dice Similarity Coefficient (DSC) and a 0.216 mm Average Symmetric Surface Distance (ASSD) in oral CBCT segmentation tasks. Experimental results show that the proposed method is superior to current dental CBCT image segmentation techniques and has real-world applicability.
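A hybrid of Focal Loss and Dice Loss of the kind proposed here can be written as a weighted sum: the focal term down-weights easy pixels so hard ones dominate, while the Dice term optimizes region overlap directly. A simplified binary numpy sketch (the weighting scheme and hyperparameters are our assumptions, not the paper's):

```python
import numpy as np

def focal_dice_loss(probs, target, gamma=2.0, alpha=0.5, eps=1e-7):
    """Weighted sum of binary focal loss and soft Dice loss.

    `alpha` balances the two terms; a simplified illustration of the
    Focal + Dice hybrid, not the paper's exact formulation.
    """
    probs = np.clip(np.asarray(probs, dtype=float), eps, 1 - eps)
    target = np.asarray(target, dtype=float)
    # Focal loss: (1 - p_t)^gamma down-weights well-classified pixels.
    p_t = np.where(target == 1, probs, 1 - probs)
    focal = np.mean(-((1 - p_t) ** gamma) * np.log(p_t))
    # Soft Dice loss computed on probabilities rather than hard masks.
    dice = 1 - (2 * (probs * target).sum() + eps) / (probs.sum() + target.sum() + eps)
    return alpha * focal + (1 - alpha) * dice
```

In a training loop the same expression would be written with autodiff tensors (e.g. PyTorch) so gradients flow through both terms.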


Subjects
Spiral Cone-Beam Computed Tomography , Technology , Computer-Assisted Image Processing
19.
Eur Radiol ; 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38536464

ABSTRACT

BACKGROUND: Accurate mortality risk quantification is crucial for the management of hepatocellular carcinoma (HCC); however, most scoring systems are subjective. PURPOSE: To develop and independently validate a machine learning mortality risk quantification method for HCC patients using standard-of-care clinical data and liver radiomics on baseline magnetic resonance imaging (MRI). METHODS: This retrospective study included all patients with multiphasic contrast-enhanced MRI at the time of diagnosis treated at our institution. Patients were censored at their last date of follow-up, end of observation, or liver transplantation. The data were randomly sampled into independent cohorts, with 85% for development and 15% for independent validation. An automated liver segmentation framework was adopted for radiomic feature extraction. A random survival forest combined clinical and radiomic variables to predict overall survival (OS), and performance was evaluated using Harrell's C-index. RESULTS: A total of 555 treatment-naïve HCC patients (mean age, 63.8 years ± 8.9 [standard deviation]; 118 women) with MRI at the time of diagnosis were included, of whom 287 (51.7%) died after a median time of 14.40 (interquartile range, 22.23) months; the median follow-up was 32.47 (interquartile range, 61.5) months. The developed risk prediction framework required 1.11 min on average and yielded C-indices of 0.8503 and 0.8234 in the development and independent validation cohorts, respectively, outperforming conventional clinical staging systems. Predicted risk scores were significantly associated with OS (p < .00001 in both cohorts). CONCLUSIONS: Machine learning reliably, rapidly, and reproducibly predicts mortality risk in patients with hepatocellular carcinoma from data routinely acquired in clinical practice.
CLINICAL RELEVANCE STATEMENT: Precision mortality risk prediction using routinely available standard-of-care clinical data and automated MRI radiomic features could enable personalized follow-up strategies, guide management decisions, and improve clinical workflow efficiency in tumor boards. KEY POINTS: • Machine learning enables hepatocellular carcinoma mortality risk prediction using standard-of-care clinical data and automated radiomic features from multiphasic contrast-enhanced MRI. • Automated mortality risk prediction achieved state-of-the-art performance for mortality risk quantification and outperformed conventional clinical staging systems. • Patients were stratified into low, intermediate, and high-risk groups with significantly different survival times, generalizable to an independent evaluation cohort.
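The study above evaluates its random survival forest with Harrell's C-index. As an illustration only (not the authors' code), the following minimal sketch computes that metric for right-censored data: among all comparable patient pairs (the earlier time must be an observed event, not a censoring), it counts the fraction where the patient with the higher predicted risk score dies first, with ties in risk credited 0.5.

```python
def harrell_c_index(times, events, risk_scores):
    """Harrell's concordance index for right-censored survival data.

    times       : observed time (event or censoring) per patient
    events      : 1 if the time is a death, 0 if censored
    risk_scores : model-predicted risk (higher = worse prognosis)
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        if not events[i]:
            continue  # a pair is comparable only if the earlier time is an event
        for j in range(n):
            if times[j] > times[i]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0   # shorter survival, higher risk: concordant
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5   # tied risks get half credit
    return concordant / comparable
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect risk ordering, so the reported 0.8234 on the validation cohort means roughly 82% of comparable patient pairs were ranked correctly.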

20.
Med Biol Eng Comput ; 62(7): 1991-2004, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38429443

ABSTRACT

Detection of suspicious pulmonary nodules from lung CT scans is a crucial task in computer-aided diagnosis (CAD) systems. In recent years, various deep learning-based approaches have been proposed and demonstrated significant potential for addressing this task. However, existing deep convolutional neural networks exhibit limited long-range dependency capabilities and neglect crucial contextual information, resulting in reduced performance in detecting small nodules in CT scans. In this work, we propose a novel end-to-end framework called LGDNet for the detection of suspicious pulmonary nodules in lung CT scans by fusing local features and global representations. To overcome the limited long-range dependency capabilities inherent in convolutional operations, a dual-branch module is designed to integrate the convolutional neural network (CNN) branch that extracts local features with the transformer branch that captures global representations. To further address the misalignment between local features and global representations, an attention gate module is proposed in the up-sampling stage to selectively combine misaligned semantic data from both branches, resulting in more accurate detection of small nodules. Our experiments on the large-scale LIDC dataset demonstrate that the proposed LGDNet with the dual-branch module and attention gate module significantly improves nodule detection sensitivity, achieving a final competition performance metric (CPM) score of 89.49% and outperforming state-of-the-art nodule detection methods, which indicates its potential for clinical applications in the early diagnosis of lung disease.
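The attention gate described above selectively reweights CNN features using the transformer branch as a gating signal. The paper does not publish its implementation here; the NumPy sketch below illustrates the general additive attention-gate pattern (project both branches into a shared space, combine, and squash to a [0, 1] gate that scales the local features). All weight names (`w_l`, `w_g`, `w_psi`) are hypothetical placeholders for learned parameters.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(local_feat, global_feat, w_l, w_g, w_psi):
    """Additive attention gate fusing two feature branches.

    local_feat  : (n, d) features from the CNN branch
    global_feat : (n, d) features from the transformer branch
    w_l, w_g    : (d, k) learned projections into a shared space
    w_psi       : (k, 1) learned projection producing one gate per position
    """
    # Project both branches, combine additively, apply ReLU
    combined = np.maximum(local_feat @ w_l + global_feat @ w_g, 0.0)
    # Per-position attention coefficient in (0, 1)
    gate = sigmoid(combined @ w_psi)
    # Gate suppresses local features where the branches disagree
    return local_feat * gate
```

Because the gate lies strictly in (0, 1), the fused output never amplifies a local feature; it only attenuates positions the gating signal deems irrelevant.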


Subjects
Lung Neoplasms , Neural Networks, Computer , Tomography, X-Ray Computed , Humans , Tomography, X-Ray Computed/methods , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/diagnosis , Deep Learning , Diagnosis, Computer-Assisted/methods , Solitary Pulmonary Nodule/diagnostic imaging , Algorithms , Multiple Pulmonary Nodules/diagnostic imaging , Lung/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods