Results 1-20 of 176
1.
Radiologie (Heidelb) ; 2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39020050

ABSTRACT

BACKGROUND: A recent innovation in computed tomography (CT) imaging has been the introduction of photon-counting detector CT (PCD-CT) systems, which are able to register the number and the energy level of incoming X-ray photons and have smaller detector elements compared with conventional CT scanners that operate with energy-integrating detectors (EID-CT). OBJECTIVES: The study aimed to evaluate the potential benefits of a novel, non-CE-certified PCD-CT in detecting myeloma-associated osteolytic bone lesions (OL) compared with a state-of-the-art EID-CT. MATERIALS AND METHODS: Nine patients with multiple myeloma stage III (according to Durie and Salmon) underwent magnetic resonance imaging (MRI), EID-CT, and PCD-CT of the lower lumbar spine and pelvis. The PCD-CT and EID-CT images of all myeloma lesions that were visible in clinical MRI scans were reviewed by three radiologists for corresponding OL. Additionally, the visualization of destruction of cancellous or cortical bone, and of trabecular structures, was compared between PCD-CT and EID-CT. RESULTS: Readers detected 21% more OL in PCD-CT than in EID-CT images (138 vs. 109; p < 0.0001). The sensitivity advantage of PCD-CT in lesion detection increased with decreasing lesion size. The visualization quality of cancellous and cortical destruction, as well as of trabecular structures, was rated higher by all three readers in PCD-CT images (mean image-quality improvements for PCD-CT over EID-CT were +0.45 for cancellous and +0.13 for cortical destruction). CONCLUSIONS: For myeloma-associated OL, PCD-CT demonstrated significantly higher sensitivity, especially for small lesions. Visualization of bone tissue and lesions was rated significantly better in PCD-CT than in EID-CT. This implies that PCD-CT scanners could potentially be used for the early detection of myeloma-associated bone lesions.

2.
Med Image Anal ; 96: 103192, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38810516

ABSTRACT

Methods to detect malignant lesions from screening mammograms are usually trained with fully annotated datasets, where images are labelled with the localisation and classification of cancerous lesions. However, real-world screening mammogram datasets commonly have a subset that is fully annotated and another subset that is weakly annotated with just the global classification (i.e., without lesion localisation). Given the large size of such datasets, researchers usually face a dilemma with the weakly annotated subset: to not use it or to fully annotate it. The first option will reduce detection accuracy because it does not use the whole dataset, and the second option is too expensive given that the annotation needs to be done by expert radiologists. In this paper, we propose a middle-ground solution for the dilemma, which is to formulate the training as a weakly- and semi-supervised learning problem that we refer to as malignant breast lesion detection with incomplete annotations. To address this problem, our new method comprises two stages, namely: (1) pre-training a multi-view mammogram classifier with weak supervision from the whole dataset, and (2) extending the trained classifier to become a multi-view detector that is trained with semi-supervised student-teacher learning, where the training set contains fully and weakly-annotated mammograms. We provide extensive detection results on two real-world screening mammogram datasets containing incomplete annotations and show that our proposed approach achieves state-of-the-art results in the detection of malignant breast lesions with incomplete annotations.
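The student-teacher stage described above can be illustrated with a minimal sketch: a trained teacher scores candidate boxes on a weakly annotated image, and only confident detections consistent with the image-level label become pseudo-labels for the student. The threshold, data structures, and function name below are illustrative assumptions, not the paper's implementation.

```python
# Sketch of pseudo-label selection for semi-supervised student-teacher
# training on weakly annotated images (illustrative assumptions only).

def select_pseudo_labels(teacher_detections, image_label, score_thresh=0.9):
    """Keep confident teacher boxes, but only on images whose weak
    (global) label says a lesion is present."""
    if image_label == 0:          # weakly labelled as negative: no boxes
        return []
    return [d for d in teacher_detections if d["score"] >= score_thresh]

# Example: teacher output on one weakly annotated positive mammogram.
teacher_out = [
    {"box": (10, 20, 40, 60), "score": 0.95},
    {"box": (100, 110, 130, 150), "score": 0.40},
]
pseudo = select_pseudo_labels(teacher_out, image_label=1)
```

The student is then trained on fully annotated images with their true boxes and on weakly annotated images with these filtered pseudo-boxes.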


Subjects
Breast Neoplasms; Mammography; Radiographic Image Interpretation, Computer-Assisted; Humans; Breast Neoplasms/diagnostic imaging; Mammography/methods; Female; Radiographic Image Interpretation, Computer-Assisted/methods; Algorithms; Supervised Machine Learning
3.
Scand J Gastroenterol ; : 1-11, 2024 May 22.
Article in English | MEDLINE | ID: mdl-38775234

ABSTRACT

BACKGROUND: The adenoma detection rate (ADR) is higher after a positive fecal immunochemical test (FIT) than in direct screening colonoscopy. OBJECTIVE: This meta-analysis evaluated how the ADR and the rates of advanced adenoma detection (AADR), colorectal cancer detection (CDR), and sessile serrated lesion detection (SSLDR) are affected by different FIT positivity thresholds. METHODS: We searched the MEDLINE, EMBASE, CINAHL, and EBM Reviews databases for studies reporting ADR, AADR, CDR, and SSLDR according to different FIT cut-off values in asymptomatic average-risk individuals aged 50-74 years. Data were stratified according to sex, age, time to colonoscopy, publication year, continent, and FIT kit type. Study quality, heterogeneity, and publication bias were assessed. RESULTS: Overall, 4280 articles were retrieved and 58 studies were included (277,661 FIT-positive colonoscopies; mean cecal intubation 96.3%; mean age 60.8 years; male 52.1%). The mean ADR was 56.1% (95% CI 53.4-58.7%), while the mean AADR, CDR, and SSLDR were 27.2% (95% CI 24.4-30.1%), 5.3% (95% CI 4.7-6.0%), and 3.0% (95% CI 1.7-4.6%), respectively. For each 20 µg Hb/g increase in FIT cut-off level, the ADR increased by 1.54% (95% CI 0.52-2.56%, p < 0.01), the AADR by 3.90% (95% CI 2.76-5.05%, p < 0.01), and the CDR by 1.46% (95% CI 0.66-2.24%, p < 0.01). Many detection rates were greater among males and Europeans. CONCLUSIONS: ADRs in FIT-positive colonoscopies are influenced by the adopted FIT positivity threshold, and, importantly, the identified targets proved to be higher than most current societal recommendations.
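The pooled per-cutoff slopes reported above lend themselves to a quick back-of-envelope calculation: multiply the threshold change, in 20 µg Hb/g steps, by the per-step slope. This is a simplified sketch that assumes the fitted relationship stays linear across the range; the function name is illustrative.

```python
# Percentage-point change in a detection rate implied by the pooled
# linear estimates (slopes per 20 µg Hb/g step, from the abstract).
SLOPE_PER_20UG = {"ADR": 1.54, "AADR": 3.90, "CDR": 1.46}

def expected_change(metric, cutoff_from, cutoff_to):
    """Linear extrapolation; assumes the fitted trend holds."""
    return (cutoff_to - cutoff_from) / 20.0 * SLOPE_PER_20UG[metric]

# Raising the FIT threshold from 20 to 80 µg Hb/g:
adr_gain = expected_change("ADR", 20, 80)   # 3 steps * 1.54 = 4.62 points
```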

4.
J Imaging Inform Med ; 2024 May 13.
Article in English | MEDLINE | ID: mdl-38740661

ABSTRACT

Accurate treatment outcome assessment is crucial in clinical trials. However, because image reading is subjective, discrepancies exist among radiologists. The situation is common in liver cancer owing to the complexity of abdominal scans and the heterogeneity of radiological imaging manifestations across liver subtypes. We therefore developed a deep learning-based detect-then-track pipeline that automatically identifies liver lesions in 3D CT scans and then longitudinally tracks target lesions, thereby providing RECIST treatment outcome evaluations in liver cancer. We constructed and validated the pipeline on 173 multi-national patients (344 venous-phase CT scans) consisting of a public dataset and two in-house cohorts from 28 centers. The proposed pipeline achieved a mean average precision for lesion detection of 0.806 and 0.726 on the validation and test sets, respectively. The model's diameter-measurement reliability and consistency were significantly higher than those of clinicians (p = 1.6 × 10⁻⁴). The pipeline tracked lesions with accuracies of 85.7% and 90.8% and yielded RECIST accuracies of 82.1% and 81.4% on the validation and test sets. Our proposed pipeline can provide precise and convenient RECIST outcome assessments and has the potential to aid clinicians in making more efficient therapeutic decisions.
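For readers unfamiliar with RECIST, the response categories such a pipeline ultimately assigns are computable from sums of target-lesion diameters. The sketch below encodes the standard RECIST 1.1 cut-offs (at least 30% shrinkage from baseline for partial response; at least 20% and 5 mm growth from nadir for progression) but omits new-lesion and lymph-node rules; it is not the authors' code.

```python
# Simplified RECIST 1.1 response from sums of target-lesion
# diameters (mm).  Omits new-lesion and lymph-node special rules.

def recist_response(baseline_sum, nadir_sum, current_sum):
    if current_sum == 0:
        return "CR"   # complete response: all target lesions gone
    if current_sum >= 1.2 * nadir_sum and current_sum - nadir_sum >= 5:
        return "PD"   # progressive disease takes precedence
    if current_sum <= 0.7 * baseline_sum:
        return "PR"   # partial response: >= 30% decrease from baseline
    return "SD"       # stable disease

resp = recist_response(100, 60, 55)   # 45% below baseline
```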

5.
Med Image Anal ; 95: 103145, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38615432

ABSTRACT

In recent years, deep learning (DL) has shown great potential in the field of dermatological image analysis. However, existing datasets in this domain have significant limitations, including a small number of image samples, limited disease conditions, insufficient annotations, and non-standardized image acquisitions. To address these shortcomings, we propose a novel framework called DermSynth3D. DermSynth3D blends skin disease patterns onto 3D textured meshes of human subjects using a differentiable renderer and generates 2D images from various camera viewpoints under chosen lighting conditions in diverse background scenes. Our method adheres to top-down rules that constrain the blending and rendering process to create 2D images with skin conditions that mimic in-the-wild acquisitions, ensuring more meaningful results. The framework generates photo-realistic 2D dermatological images and the corresponding dense annotations for semantic segmentation of the skin, skin conditions, body parts, bounding boxes around lesions, depth maps, and other 3D scene parameters, such as camera position and lighting conditions. DermSynth3D allows for the creation of custom datasets for various dermatology tasks. We demonstrate the effectiveness of data generated using DermSynth3D by training DL models on synthetic data and evaluating them on various dermatology tasks using real 2D dermatological images. We make our code publicly available at https://github.com/sfu-mial/DermSynth3D.


Subjects
Skin Diseases; Humans; Skin Diseases/diagnostic imaging; Imaging, Three-Dimensional/methods; Deep Learning; Image Interpretation, Computer-Assisted/methods
6.
Phys Med Biol ; 69(10)2024 May 01.
Article in English | MEDLINE | ID: mdl-38588676

ABSTRACT

Background. Pancreatic cancer is one of the most malignant tumours, with a poor prognosis and nearly identically high mortality and morbidity, mainly because of the difficulty of early diagnosis and timely treatment at localized stages. Objective. To develop a noncontrast CT (NCCT)-based pancreatic lesion detection model that could serve as an intelligent tool for diagnosing pancreatic cancer early, overcoming the challenges associated with the low contrast intensities and complex anatomical structures present in NCCT images. Approach. We design a multiscale and multiperception (MSMP) feature learning network with ResNet50 coupled with a feature pyramid network as the backbone for strengthening feature expressions. We added multiscale atrous convolutions to expand different receptive fields, contextual attention to perceive contextual information, and channel and spatial attention to focus on important channels and spatial regions, respectively. The MSMP network then acts as a feature extractor for an NCCT-based pancreatic lesion detection model whose input is image patches covering the pancreas; Faster R-CNN is employed as the detection method for accurately detecting pancreatic lesions. Main results. Using the new MSMP network as a feature extractor, our model outperforms conventional object detection algorithms in terms of recall (75.40% and 90.95%), precision (40.84% and 68.21%), F1 score (52.98% and 77.96%), F2 score (64.48% and 85.26%), and AP50 (53.53% and 70.14%) at the image and patient levels, respectively. Significance. The good performance of our new model implies that MSMP can mine NCCT imaging features well for detecting pancreatic lesions against complex backgrounds. The proposed detection model is expected to be developed further into an intelligent method for the early detection of pancreatic cancer.
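The receptive-field expansion that atrous (dilated) convolutions provide follows a simple formula: a k × k kernel with dilation rate d spans the area of an effective kernel of size k + (k − 1)(d − 1), with no extra parameters. A small sketch:

```python
def effective_kernel_size(k, dilation):
    """Span covered by a k x k kernel with the given dilation rate."""
    return k + (k - 1) * (dilation - 1)

# A 3x3 kernel at increasing dilation rates sees progressively
# larger context at constant parameter count:
spans = [effective_kernel_size(3, d) for d in (1, 2, 4, 8)]  # 3, 5, 9, 17
```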


Subjects
Pancreatic Neoplasms; Tomography, X-Ray Computed; Humans; Pancreatic Neoplasms/diagnostic imaging; Image Processing, Computer-Assisted/methods; Machine Learning
7.
Cancers (Basel) ; 16(8)2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38672606

ABSTRACT

This study aimed to develop a rapid, 1 mm³ isotropic resolution, whole-brain MRI technique for automatic lesion segmentation and multi-parametric mapping without using contrast by continuously applying balanced steady-state free precession with inversion pulses throughout incomplete inversion recovery in a single 6 min scan. Modified k-means clustering was performed for automatic brain tissue and lesion segmentation using distinct signal evolutions that contained mixed T1/T2/magnetization transfer properties. Multi-compartment modeling was used to derive quantitative multi-parametric maps for tissue characterization. Fourteen patients with contrast-enhancing gliomas were scanned with this sequence prior to the injection of a contrast agent, and their segmented lesions were compared to conventionally defined manual segmentations of T2-hyperintense and contrast-enhancing lesions. Simultaneous T1, T2, and macromolecular proton fraction maps were generated and compared to conventional 2D T1 and T2 mapping and myelination water fraction mapping acquired with MAGiC. The lesion volumes defined with the new method were comparable to the manual segmentations (r = 0.70, p < 0.01; t-test p > 0.05). The T1, T2, and macromolecular proton fraction mapping values of the whole brain were comparable to the reference values and could distinguish different brain tissues and lesion types (p < 0.05), including infiltrating tumor regions within the T2-lesion. Highly efficient, whole-brain, multi-contrast imaging facilitated automatic lesion segmentation and quantitative multi-parametric mapping without contrast, highlighting its potential value in the clinic when gadolinium is contraindicated.

8.
J Imaging Inform Med ; 2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38587768

ABSTRACT

Capsule endoscopy (CE) is non-invasive and painless during gastrointestinal examination. However, capsule endoscopy increases the image-reviewing workload for clinicians, making it prone to missed and incorrect diagnoses. Current research has primarily concentrated on binary classifiers, multi-class classifiers targeting fewer than four abnormality types, detectors restricted to a specific segment of the digestive tract, and segmenters for a single type of anomaly. Owing to intra-class variation, creating a unified scheme for detecting multiple gastrointestinal diseases is particularly challenging. The cascade neural network designed in this study, Cascade-EC, can automatically identify and localize four types of gastrointestinal lesions in CE images: angiectasis, bleeding, erosion, and polyp. Cascade-EC consists of EfficientNet for image classification and CA_stm_Retinanet for lesion detection and localization. As the first layer of Cascade-EC, the EfficientNet network classifies CE images. CA_stm_Retinanet, as the second layer, performs the target detection and localization task on the classified images. CA_stm_Retinanet adopts the general architecture of RetinaNet. Its feature extraction module is the CA_stm_Backbone, a stack of CA_stm Blocks. The CA_stm Block adopts the split-transform-merge strategy and introduces coordinate attention. The dataset in this study is from Shanghai East Hospital, collected with PillCam SB3 and AnKon capsule endoscopes, and contains a total of 7936 images of 317 patients from 2017 to 2021. On the testing set, the average precision of Cascade-EC in the multi-lesion classification task was 94.55%, the average recall was 90.60%, and the average F1 score was 92.26%. The mean mAP@0.5 of Cascade-EC for detecting the four types of diseases was 85.88%. The experimental results show that, compared with a single target detection network, Cascade-EC performs better and can effectively assist clinicians in classifying and detecting multiple lesions in CE images.

9.
Artif Intell Med ; 150: 102842, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38553147

ABSTRACT

This paper introduces a novel one-stage end-to-end detector specifically designed to detect small lesions in medical images. Precise localization of small lesions presents challenges due to their appearance and the diverse contextual backgrounds in which they are found. To address this, our approach introduces a new type of pixel-based anchor that dynamically moves towards the targeted lesion for detection. We refer to this new architecture as GravityNet, and the novel anchors as gravity points since they appear to be "attracted" by the lesions. We conducted experiments on two well-established medical problems involving small lesions to evaluate the performance of the proposed approach: microcalcifications detection in digital mammograms and microaneurysms detection in digital fundus images. Our method demonstrates promising results in effectively detecting small lesions in these medical imaging tasks.


Subjects
Mammography; Mammography/methods; Fundus Oculi
10.
Eur J Cancer ; 202: 114026, 2024 May.
Article in English | MEDLINE | ID: mdl-38547776

ABSTRACT

IMPORTANCE: Total body photography for skin cancer screening is a well-established tool allowing documentation and follow-up of the entire skin surface. Artificial intelligence-based systems are increasingly applied for automated lesion detection and diagnosis. DESIGN AND PATIENTS: In this prospective observational international multicentre study, experienced dermatologists performed skin cancer screenings and identified clinically relevant melanocytic lesions (CRML, requiring biopsy or observation). Additionally, patients received 2D automated total body mapping (ATBM) with automated lesion detection (ATBM master, Fotofinder Systems GmbH). The primary endpoint was the percentage of CRML detected by the bodyscan software. Secondary endpoints included the percentage of correctly identified "new" and "changed" lesions during follow-up examinations. RESULTS: At baseline, dermatologists identified 1075 CRML in 236 patients, and 999 CRML (92.9%) were also detected by the automated software. During follow-up examinations dermatologists identified 334 CRML in 55 patients, with 323 (96.7%) also being detected by ATBM with automated lesion detection. Moreover, all new (n = 13) and changed (n = 24) CRML during follow-up were detected by the software. The average time requirement per baseline examination was 14.1 min (95% CI [12.8-15.5]). Subgroup analysis of undetected lesions revealed either technical reasons (e.g. coverage by clothing or hair) or lesion-specific reasons (e.g. hypopigmentation, palmoplantar sites). CONCLUSIONS: ATBM with lesion detection software correctly detected the vast majority of CRML, as well as new or changed CRML during follow-up examinations, in a favourable amount of time. Our prospective international study underlines that automated lesion detection in TBP images is feasible, which is of relevance for developing AI-based skin cancer screenings.


Subjects
Melanoma; Skin Neoplasms; Humans; Melanoma/pathology; Artificial Intelligence; Prospective Studies; Clinical Relevance; Skin Neoplasms/diagnostic imaging; Skin Neoplasms/pathology; Algorithms
11.
EJNMMI Phys ; 11(1): 17, 2024 Feb 15.
Article in English | MEDLINE | ID: mdl-38358541

ABSTRACT

BACKGROUND: Conventional PET/CT imaging reconstruction is typically performed with a voxel size of 3.0-4.0 mm in all three axes. It is hypothesized that smaller voxel sizes could improve the accuracy of small lesion detection. This study aims to explore the advantages of and conditions for small-voxel imaging in clinical application. METHODS: Both a NEMA IQ phantom and 30 patients with an injected dose of 3.7 MBq/kg were scanned using a total-body PET/CT (uEXPLORER). Images were reconstructed using matrices of 192 × 192, 512 × 512, and 1024 × 1024 with scanning durations of 3 min, 5 min, 8 min, and 10 min. RESULTS: In the phantom study, the contrast recovery coefficient reached its maximum in the 512 × 512 matrix group, and background variability increased as voxel size decreased. In the clinical study, SUVmax, SD, and TLR increased, while SNR decreased, as the voxel size decreased. When the scanning duration increased, SNR increased, while SUVmax, SD, and TLR decreased. SUVmean was less sensitive to changes in imaging matrix and scanning duration. The mean subjective scores for all 512 × 512 groups and for the 1024 × 1024 groups with scanning durations ≥ 8 min were over three points. One false-positive lesion was found in each of the 512 × 512 group at 3 min and the 1024 × 1024 groups at 3 min and 5 min. Meanwhile, the numbers of false-negative lesions found in the 192 × 192 groups at 3 min and 5 min, the 512 × 512 group at 3 min, and the 1024 × 1024 groups at 3 min and 5 min were 5, 4, 1, 4, and 1, respectively. Reconstruction time and storage space increased significantly as the imaging matrix increased. CONCLUSIONS: PET/CT imaging with smaller voxels can improve the SUVmax and TLR of lesions, which is advantageous for the diagnosis of small or hypometabolic lesions given sufficient counts. With an 18F-FDG injection dose of 3.7 MBq/kg, uEXPLORER PET/CT imaging using a 512 × 512 matrix at 5 min or a 1024 × 1024 matrix at 8 min can meet the image requirements for clinical use.
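The matrix sizes above translate directly into in-plane voxel sizes via voxel = FOV / matrix. The sketch below uses an assumed 600 mm reconstruction field of view purely for illustration; the study's actual FOV is not stated in the abstract.

```python
# In-plane voxel size from reconstruction matrix: voxel = FOV / matrix.
# The 600 mm FOV is an assumed illustrative value.

def voxel_size_mm(fov_mm, matrix):
    return fov_mm / matrix

sizes = {m: round(voxel_size_mm(600, m), 3) for m in (192, 512, 1024)}
# 192 -> 3.125 mm, 512 -> ~1.172 mm, 1024 -> ~0.586 mm
```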

12.
Cancer Imaging ; 24(1): 30, 2024 Feb 29.
Article in English | MEDLINE | ID: mdl-38424612

ABSTRACT

BACKGROUND: Prostate-specific membrane antigen (PSMA) PET/CT imaging is widely used for quantitative image analysis, especially in radioligand therapy (RLT) for metastatic castration-resistant prostate cancer (mCRPC). Unknown features influencing PSMA biodistribution can be explored by analyzing segmented organs at risk (OAR) and lesions. Manual segmentation is time-consuming and labor-intensive, so automated segmentation methods are desirable. Training deep-learning segmentation models is challenging due to the scarcity of high-quality annotated images. To address this, we developed shifted windows UNEt TRansformers (Swin UNETR) for fully automated segmentation. Within a self-supervised framework, the model's encoder was pre-trained on unlabeled data, and the entire model, including its decoder, was then fine-tuned using labeled data. METHODS: In this work, 752 whole-body [68Ga]Ga-PSMA-11 PET/CT images were collected from two centers. For self-supervised model pre-training, 652 unlabeled images were employed; the remaining 100 images were manually labeled for supervised training. In the supervised training phase, 5-fold cross-validation was used with 64 images for model training and 16 for validation, all from one center. For testing, 20 hold-out images, evenly distributed between the two centers, were used. Image segmentation and quantification metrics were evaluated on the test set against ground-truth segmentations produced by a nuclear medicine physician. RESULTS: The model generates high-quality OAR and lesion segmentations in lesion-positive cases, including mCRPC. The results show that self-supervised pre-training significantly improved the average Dice similarity coefficient (DSC) for all classes by about 3%. Compared to nnU-Net, a well-established model in medical image segmentation, our approach achieved a 5% higher DSC. This improvement is attributed to the combined use of self-supervised pre-training and supervised fine-tuning, specifically when applied to PET/CT input. Our best model had the lowest DSC for lesions, at 0.68, and the highest for the liver, at 0.95. CONCLUSIONS: We developed a state-of-the-art neural network using self-supervised pre-training on whole-body [68Ga]Ga-PSMA-11 PET/CT images, followed by fine-tuning on a limited set of annotated images. The model generates high-quality OAR and lesion segmentations for PSMA image analysis. This generalizable model holds potential for various clinical applications, including enhanced RLT and patient-specific internal dosimetry.


Subjects
Positron Emission Tomography Computed Tomography; Prostatic Neoplasms, Castration-Resistant; Male; Humans; Positron Emission Tomography Computed Tomography/methods; Gallium Radioisotopes; Organs at Risk; Tissue Distribution; Supervised Machine Learning; Image Processing, Computer-Assisted/methods
13.
Eur J Nucl Med Mol Imaging ; 51(4): 1173-1184, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38049657

ABSTRACT

PURPOSE: The automatic segmentation and detection of prostate cancer (PC) lesions throughout the body are extremely challenging due to the lesions' complexity and variability in appearance, shape, and location. In this study, we investigated the performance of a three-dimensional (3D) convolutional neural network (CNN) in automatically characterizing metastatic lesions throughout the body in a dataset of PC patients with recurrence after radical prostatectomy. METHODS: We retrospectively collected [68Ga]Ga-PSMA-11 PET/CT images from 116 patients with metastatic PC at two centers: center 1 provided the data for fivefold cross-validation (n = 78) and internal testing (n = 19), and center 2 provided the data for external testing (n = 19). PET and CT data were jointly input into a 3D U-Net to achieve whole-body segmentation and detection of PC lesions. Performance in both the segmentation and the detection of lesions throughout the body was evaluated using established metrics: the Dice similarity coefficient (DSC) for segmentation and the recall, precision, and F1-score for detection. The correlation and consistency between tumor burdens (PSMA-TV and TL-PSMA) calculated from the automatic segmentation and the artificial ground truth were assessed by linear regression and Bland–Altman plots. RESULTS: On the internal test set, the DSC, precision, recall, and F1-score were 0.631, 0.961, 0.721, and 0.824, respectively. On the external test set, the corresponding values were 0.596, 0.888, 0.792, and 0.837. Our approach outperformed previous studies in segmenting and detecting metastatic lesions throughout the body. Tumor burden indicators derived from deep learning and from the ground truth showed strong correlation (R² ≥ 0.991, all P < 0.05) and consistency. CONCLUSION: Our 3D CNN accurately characterizes whole-body tumors in relapsed PC patients, and its results are highly consistent with those of manual contouring. This automatic method is expected to improve work efficiency and aid in the assessment of tumor burden.
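The segmentation and detection metrics used here are standard and easy to state precisely: the Dice similarity coefficient is 2|A∩B| / (|A| + |B|), and the F1-score is the harmonic mean of precision and recall. A minimal reference implementation on flat binary masks and detection counts:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient of two flat binary masks."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    return 2 * inter / (sum(mask_a) + sum(mask_b))

def f1_score(tp, fp, fn):
    """Harmonic mean of detection precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

d = dice([1, 1, 0, 0], [1, 0, 0, 0])   # 2*1 / (2+1), about 0.667
```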


Subjects
Deep Learning; Prostatic Neoplasms; Male; Humans; Gallium Radioisotopes; Positron Emission Tomography Computed Tomography/methods; Gallium Isotopes; Retrospective Studies; Neoplasm Recurrence, Local/diagnostic imaging; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/surgery; Prostatic Neoplasms/pathology; Prostatectomy; Edetic Acid
14.
Comput Biol Med ; 168: 107833, 2024 01.
Article in English | MEDLINE | ID: mdl-38071840

ABSTRACT

Skin cancer, encompassing various forms such as melanoma and basal cell carcinoma, remains a significant global health concern, often proving fatal if not diagnosed and treated in its early stages. The challenge of accurately diagnosing skin cancer, particularly melanoma, persists even for experienced dermatologists due to the intricate and unpredictable nature of its symptoms. To address the need for more accurate and efficient skin cancer detection, a novel Golden Hawk Optimization-based Distributed Capsule Neural Network (GHO-DCaNN) is proposed. This technique leverages advanced computational methods to improve the reliability and precision of skin cancer diagnosis. An optimized clustering-based segmentation approach is introduced, integrating the Sewer Shad Fly Optimization (SSFO), which combines elements of both mayfly and moth flame optimization; this integration enhances the accuracy of lesion boundary delineation and feature extraction. The core of the innovation lies in the optimized distributed capsule neural network, which is trained using the hybrid GHO. This optimizer, inspired by the behaviors of the golden eagle and the fire hawk, ensures the effectiveness of epidermal lesion detection, pushing the boundaries of skin cancer diagnosis methods. In terms of specificity, sensitivity, and accuracy, the method achieves 97.53%, 99.05%, and 98.83% with 90% of the data used for training, and 97.83%, 99.50%, and 99.06% with 10-fold cross-validation, respectively.


Subjects
Ephemeroptera; Melanoma; Skin Neoplasms; Animals; Melanoma/diagnosis; Reproducibility of Results; Skin Neoplasms/diagnosis; Skin Neoplasms/pathology; Neural Networks, Computer; Epidermis; Dermoscopy/methods
15.
Microsc Res Tech ; 87(1): 78-94, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37681440

ABSTRACT

Diabetic retinopathy (DR) is a prevalent cause of global visual impairment, contributing to approximately 4.8% of blindness cases worldwide as reported by the World Health Organization (WHO). The condition is characterized by pathological abnormalities in the retinal layer, including microaneurysms, vitreous hemorrhages, and exudates. Microscopic analysis of retinal images is crucial in diagnosing and treating DR. This article proposes a novel method for early DR screening using segmentation and unsupervised learning techniques. The approach integrates a neural network energy-based model into the Fuzzy C-Means (FCM) algorithm to enhance the convergence criteria, aiming to improve the accuracy and efficiency of automated DR screening tools. The evaluation uses a primary dataset from the Shiva Netralaya Centre together with the IDRiD and DIARETDB1 datasets. The performance of the proposed method is compared against the FCM, EFCM, FLICM, and M-FLICM techniques, using metrics such as accuracy under noiseless and noisy conditions and average execution time. The results show promising performance on both the primary and secondary datasets, achieving accuracy rates of 99.03% on noiseless images and 93.13% on noisy images, with an average execution time of 16.1 s. The proposed method holds significant potential in medical image analysis and could pave the way for future advancements in automated DR diagnosis and management. RESEARCH HIGHLIGHTS: A novel approach is proposed in the article, integrating a neural network energy-based model into the FCM algorithm to enhance the convergence criteria and the accuracy of automated DR screening tools. By leveraging the microscopic characteristics of retinal images, the proposed method significantly improves the accuracy of lesion segmentation, facilitating early detection and monitoring of DR.
The evaluation of the method's performance includes primary datasets from reputable sources such as the Shiva Netralaya Centre, IDRiD, and DIARETDB1, demonstrating its effectiveness in comparison to other techniques (FCM, EFCM, FLICM, and M-FLICM) in terms of accuracy in both noiseless and noisy conditions. It achieves impressive accuracy rates of 99.03% in noiseless conditions and 93.13% in noisy images, with an average execution time of 16.1 s.
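The baseline the authors modify, classical fuzzy C-means, alternates two closed-form updates: memberships u_ik = 1 / Σ_j (d_ik / d_jk)^(2/(m−1)) and centres c_i = Σ_k u_ik^m x_k / Σ_k u_ik^m. A minimal sketch on 1-D intensities (plain FCM only, not the proposed energy-based variant):

```python
# Classical fuzzy C-means on 1-D intensities; illustrative sketch.
def fcm(xs, centers, m=2.0, iters=50):
    for _ in range(iters):
        # membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = []
        for c in centers:
            row = []
            for x in xs:
                d_i = abs(x - c) or 1e-12
                row.append(1.0 / sum(
                    (d_i / (abs(x - cj) or 1e-12)) ** (2 / (m - 1))
                    for cj in centers))
            u.append(row)
        # centre update: c_i = sum_k u_ik^m x_k / sum_k u_ik^m
        centers = [sum(w ** m * x for w, x in zip(row, xs)) /
                   sum(w ** m for w in row) for row in u]
    return centers

# Two well-separated intensity clusters:
cs = sorted(fcm([0.0, 0.1, 0.2, 0.8, 0.9, 1.0], [0.3, 0.7]))
```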


Subjects
Diabetes Mellitus; Diabetic Retinopathy; Humans; Diabetic Retinopathy/diagnostic imaging; Diabetic Retinopathy/pathology; Image Interpretation, Computer-Assisted/methods; Algorithms; Retina/diagnostic imaging; Retina/pathology; Cluster Analysis
16.
Bioengineering (Basel) ; 10(12)2023 Dec 15.
Article in English | MEDLINE | ID: mdl-38136020

ABSTRACT

The early identification and treatment of various dermatological conditions depend on the detection of skin lesions. Owing to advancements in computer-aided diagnosis and machine learning, learning-based skin lesion analysis methods have recently attracted much interest. Employing the concept of transfer learning, this research proposes a deep convolutional neural network (CNN)-based multistage, multiclass framework to categorize seven types of skin lesions. In the first stage, a CNN model was developed to classify skin lesion images into two classes, benign and malignant. In the second stage, the model was used with transfer learning to further categorize benign lesions into five subcategories (melanocytic nevus, actinic keratosis, benign keratosis, dermatofibroma, and vascular) and malignant lesions into two subcategories (melanoma and basal cell carcinoma). The frozen weights of the CNN, trained on correlated images, benefited the transfer learning step, which used the same type of images for the subclassification of the benign and malignant classes. The proposed multistage, multiclass technique achieved a classification accuracy of up to 93.4% on the online ISIC2018 skin lesion dataset for benign versus malignant identification. Furthermore, a high accuracy of 96.2% was achieved for the subclassification of both classes. Sensitivity, specificity, precision, and F1-score metrics further validated the effectiveness of the proposed framework. Compared with existing CNN models described in the literature, the proposed approach took less time to train and had a higher classification rate.

17.
Diagnostics (Basel) ; 13(21)2023 Oct 24.
Article in English | MEDLINE | ID: mdl-37958190

ABSTRACT

We performed a systematic evaluation of the diagnostic performance of LAFOV PET/CT with increasing acquisition time. The first 100 oncologic adult patients referred for 3 MBq/kg 2-[18F]fluoro-2-deoxy-D-glucose PET/CT on the Siemens Biograph Vision Quadra were included. A standard imaging protocol of 10 min was used and scans were reconstructed at 30 s, 60 s, 90 s, 180 s, 300 s, and 600 s. Paired comparisons of quantitative image noise, qualitative image quality, lesion detection, and lesion classification were performed. Image noise (n = 50, 34 women) was acceptable according to the current standard of care (coefficient of variation CVref < 0.15) after 90 s and improved significantly with increasing acquisition time (PB < 0.001). The same was seen in observer rankings (PB < 0.001). Lesion detection (n = 100, 74 women) improved significantly from 30 s to 90 s (PB < 0.001), 90 s to 180 s (PB = 0.001), and 90 s to 300 s (PB = 0.002), while lesion classification improved from 90 s to 180 s (PB < 0.001), 180 s to 300 s (PB = 0.021), and 90 s to 300 s (PB < 0.001). We observed improved image quality, lesion detection, and lesion classification with increasing acquisition time while maintaining a total scan time of less than 5 min, which demonstrates a potential clinical benefit. Based on these results we recommend a standard imaging acquisition protocol for LAFOV PET/CT of minimum 180 s to maximum 300 s after injection of 3 MBq/kg 2-[18F]fluoro-2-deoxy-D-glucose.
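The noise-acceptance criterion above can be computed directly. A minimal sketch: the 0.15 threshold comes from the abstract, while the ROI values and the assumption that the coefficient of variation is taken as standard deviation over mean of sampled voxel values are illustrative.

```python
import statistics

def coefficient_of_variation(values):
    """CoV = standard deviation / mean of the sampled voxel values."""
    return statistics.pstdev(values) / statistics.fmean(values)

def noise_acceptable(values, threshold=0.15):
    """Image noise meets the stated standard of care when CoV < threshold."""
    return coefficient_of_variation(values) < threshold

# Example: background ROI values (arbitrary units, invented for illustration)
roi = [2.0, 2.1, 1.9, 2.05, 1.95, 2.0]
```

Applied per reconstruction (30 s, 60 s, ...), such a check identifies the shortest acquisition time at which noise becomes acceptable, which is how the 90 s finding above can be read.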

18.
Cureus ; 15(9): e45587, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37868395

ABSTRACT

Radiology has been a pioneer in the healthcare industry's digital transformation, incorporating digital imaging systems like picture archiving and communication system (PACS) and teleradiology over the past thirty years. This shift has reshaped radiology services, positioning the field at a crucial junction for potential evolution into an integrated diagnostic service through artificial intelligence and machine learning. These technologies offer advanced tools for radiology's transformation. The radiology community has advanced computer-aided diagnosis (CAD) tools using machine learning techniques, notably deep learning convolutional neural networks (CNNs), for medical image pattern recognition. However, the integration of CAD tools into clinical practice has been hindered by challenges in workflow integration, unclear business models, and limited clinical benefits, despite development dating back to the 1990s. This comprehensive review focuses on detecting chest-related diseases through techniques like chest X-rays (CXRs), magnetic resonance imaging (MRI), nuclear medicine, and computed tomography (CT) scans. It examines the utilization of computer-aided programs by researchers for disease detection, addressing key areas: the role of computer-aided programs in disease detection advancement, recent developments in MRI, CXR, radioactive tracers, and CT scans for chest disease identification, research gaps for more effective development, and the incorporation of machine learning programs into diagnostic tools.

19.
BMC Bioinformatics ; 24(1): 401, 2023 Oct 26.
Article in English | MEDLINE | ID: mdl-37884877

ABSTRACT

BACKGROUND: Recent advancements in computing power and state-of-the-art algorithms have enabled more accessible and accurate diagnosis of numerous diseases. In addition, the development of de novo areas in imaging science, such as radiomics and radiogenomics, has been helping to personalize healthcare and stratify patients better. These techniques associate imaging phenotypes with related disease genes. Various imaging modalities have been used for years to diagnose breast cancer. Nonetheless, digital breast tomosynthesis (DBT), a state-of-the-art technique, has produced comparatively promising results. DBT, a 3D mammography technique, is rapidly replacing conventional 2D mammography. This technological advancement is key to developing AI algorithms for accurately interpreting medical images. OBJECTIVE AND METHODS: This paper presents a comprehensive review of deep learning (DL), radiomics, and radiogenomics in breast image analysis. The review focuses on DBT, its extracted synthetic mammography (SM), and full-field digital mammography (FFDM). Furthermore, this survey provides systematic knowledge about DL, radiomics, and radiogenomics for beginners and advanced-level researchers. RESULTS: A total of 500 articles were identified, of which 30 studies met the inclusion criteria. Parallel benchmarking of radiomics, radiogenomics, and DL models applied to DBT images could give clinicians and researchers alike greater awareness as they consider clinical deployment or development of new models. This review provides a comprehensive guide to the current state of early breast cancer detection using DBT images. CONCLUSION: Using this survey, investigators with various backgrounds can easily pursue interdisciplinary science and new DL, radiomics, and radiogenomics directions for DBT.


Subjects
Breast Neoplasms , Deep Learning , Humans , Female , Radiographic Image Enhancement/methods , Breast/diagnostic imaging , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/genetics , Mammography/methods
20.
Eur J Radiol ; 168: 111121, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37806195

ABSTRACT

PURPOSE: To assess whether image quality differences between SECT (single-energy CT) and DECT (dual-energy CT 70 keV) with equivalent radiation doses result in altered detection and characterization accuracy of liver metastases when using deep learning image reconstruction (DLIR), and whether DECT spectral curve usage improves accuracy of indeterminate lesion characterization. METHODS: In this prospective Health Insurance Portability and Accountability Act-compliant study (March through August 2022), adult men and non-pregnant adult women with biopsy-proven colorectal cancer and liver metastases underwent SECT (120 kVp) and a DECT (70 keV) portovenous abdominal CT scan using DLIR in the same breath-hold (Revolution CT ES; GE Healthcare). Participants were excluded if consent could not be obtained, if there were nonequivalent radiation doses between the two scans, or if the examination was cancelled/rescheduled. Three radiologists independently performed lesion detection and characterization during two separate sessions (SECT DLIRmedium and DECT DLIRhigh) as well as reported lesion confidence and overall image quality. Hounsfield units were measured. Spectral HU curves were provided for any lesions rated as indeterminate. McNemar's test was used to test the marginal homogeneity in terms of diagnostic sensitivity, accuracy and lesion detection. A generalized estimating equation method was used for categorical outcomes. RESULTS: 30 participants (mean age, 58 years ± 11, 21 men) were evaluated. Mean CTDIvol was 34 mGy for both scans. 141 lesions (124 metastases, 17 benign) with a mean size of 0.8 cm ± 0.3 cm were identified. High scores for image quality (scores of 4 or 5) were not significantly different between DECT (N = 71 out of 90 total scores from the three readers) and SECT (N = 62) (OR, 2.01; 95% CI:0.89, 4.57; P = 0.093). Equivalent image noise to SECT DLIRmed (HU SD 10 ± 2) was obtained with DECT DLIRhigh (HU SD 10 ± 3) (P = 1). 
There was no significant difference in lesion detection between DECT and SECT (140/141 lesions) (99.3%; 95% CI: 96.1%, 100%). The mean lesion confidence scores by each reader were 4.2 ± 1.3, 3.9 ± 1.0, and 4.8 ± 0.8 for SECT and 4.1 ± 1.4, 4.0 ± 1.0, and 4.7 ± 0.8 for DECT (odds ratio [OR], 0.83; 95% CI: 0.62, 1.11; P = 0.21). Small lesion (≤5 mm) characterization accuracy on SECT and DECT was 89.1% (95% CI: 76.4%, 96.4%; 41/46) and 84.8% (71.1%, 93.7%; 39/46), respectively (P = 0.41). Use of spectral HU lesion curves resulted in 34 correct changes in characterizations and no mischaracterizations. CONCLUSION: DECT required a higher strength of DLIR to obtain equivalent noise compared to SECT DLIR. At equivalent radiation doses and image noise, there was no significant difference in subjective image quality or observer lesion performance between DECT (70 keV) and SECT. However, DECT spectral HU curves of indeterminate lesions improved characterization.
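McNemar's test, used above to compare paired sensitivity and detection between the two scans, depends only on the discordant pairs. A minimal sketch with invented counts (not the study's data), using the chi-square approximation with continuity correction:

```python
import math

def mcnemar_chi2(b, c):
    """McNemar's statistic with continuity correction.

    b = lesions detected only by method A, c = only by method B; the
    concordant pairs drop out. Under H0 the statistic is ~ chi-square(1).
    """
    if b + c == 0:
        return 0.0
    return (abs(b - c) - 1) ** 2 / (b + c)

def chi2_p_value_1df(x):
    """Upper-tail p-value for a chi-square statistic with 1 df.

    Uses the identity P(X > x) = erfc(sqrt(x/2)) for X = Z^2, Z ~ N(0,1).
    """
    return math.erfc(math.sqrt(x / 2.0))

# Illustrative discordant counts only:
stat = mcnemar_chi2(b=12, c=3)
p = chi2_p_value_1df(stat)
```

With 140 of 141 lesions detected by both modalities, the discordant counts are tiny, which is why the study finds no significant detection difference; an exact binomial version of the test is usually preferred at such small counts.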


Subjects
Deep Learning , Liver Neoplasms , Male , Adult , Humans , Female , Middle Aged , Tomography, X-Ray Computed/methods , Liver Neoplasms/diagnostic imaging , Abdomen , Prospective Studies , Radiation Doses