Results 1 - 20 of 134
1.
IEEE Winter Conf Appl Comput Vis ; 2024: 1989-1998, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38978834

ABSTRACT

Domain generalization (DG) approaches aim to extract domain-invariant features that lead to more robust deep learning models. In this regard, style augmentation is a strong DG method that exploits instance-specific feature statistics, which carry informative style characteristics, to synthesize novel domains. Although it is among the state-of-the-art methods, prior work on style augmentation has either disregarded the interdependence among distinct feature channels or constrained style augmentation to linear interpolation. To address these gaps, we introduce a novel augmentation approach, Correlated Style Uncertainty (CSU), which surpasses the limitations of linear interpolation in style-statistic space while preserving vital correlation information. The method's efficacy is established through extensive experiments on diverse cross-domain computer vision and medical imaging classification tasks (the PACS, Office-Home, and Camelyon17 datasets) and the Duke-Market1501 instance retrieval task, showing a remarkable improvement margin over existing state-of-the-art techniques. The source code is available at https://github.com/freshman97/CSU.
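
As a rough illustration of the idea (not the paper's exact CSU formulation, which is in the linked repository), the sketch below perturbs instance-wise channel statistics with noise whose covariance is estimated across the batch, so that correlations between feature channels are not discarded; all names and the strength parameter are assumptions.

```python
import torch

def correlated_style_perturbation(x, eps=1e-6, strength=1.0):
    """Sketch: perturb instance-wise style statistics (channel mean/std) with noise
    whose covariance is estimated across the batch, so cross-channel correlations
    are preserved. Illustrative only, not the published CSU implementation."""
    n, c, h, w = x.shape
    mu = x.mean(dim=(2, 3))                      # (N, C) channel means
    sig = x.std(dim=(2, 3)) + eps                # (N, C) channel stds

    def correlated_noise(stats):
        centered = stats - stats.mean(dim=0, keepdim=True)
        cov = centered.T @ centered / max(n - 1, 1) + eps * torch.eye(c, device=x.device)
        chol = torch.linalg.cholesky(cov)        # (C, C) factor of the style covariance
        return torch.randn(n, c, device=x.device) @ chol.T

    new_mu = mu + strength * correlated_noise(mu)
    new_sig = (sig + strength * correlated_noise(sig)).clamp_min(eps)

    # re-normalize the features and re-style them with the perturbed statistics
    x_norm = (x - mu[..., None, None]) / sig[..., None, None]
    return x_norm * new_sig[..., None, None] + new_mu[..., None, None]
```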

2.
ArXiv ; 2024 May 27.
Article in English | MEDLINE | ID: mdl-38855539

ABSTRACT

Knowledge distillation (KD) has demonstrated remarkable success across various domains, but its application to medical imaging tasks, such as kidney and liver tumor segmentation, has encountered challenges. Many existing KD methods are not specifically tailored for these tasks. Moreover, prevalent KD methods often lack a careful consideration of 'what' and 'from where' to distill knowledge from the teacher to the student. This oversight may lead to issues like the accumulation of training bias within shallower student layers, potentially compromising the effectiveness of KD. To address these challenges, we propose Hierarchical Layer-selective Feedback Distillation (HLFD). HLFD strategically distills knowledge from a combination of middle layers to earlier layers and transfers final layer knowledge to intermediate layers at both the feature and pixel levels. This design allows the model to learn higher-quality representations from earlier layers, resulting in a robust and compact student model. Extensive quantitative evaluations reveal that HLFD outperforms existing methods by a significant margin. For example, in the kidney segmentation task, HLFD surpasses the student model (without KD) by over 10%, significantly improving its focus on tumor-specific features. From a qualitative standpoint, the student model trained using HLFD excels at suppressing irrelevant information and can focus sharply on tumor-specific details, which opens a new pathway for more efficient and accurate diagnostic tools. Code is available here.
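
The abstract states that deeper-layer knowledge is distilled into shallower student layers at both the feature and pixel levels, but not the exact pairing; below is a minimal sketch of a layer-selective feature-distillation loss in PyTorch, with the layer pairing, 1x1 projections, and interpolation treated as illustrative assumptions rather than the paper's HLFD design.

```python
import torch.nn as nn
import torch.nn.functional as F

class LayerSelectiveDistillLoss(nn.Module):
    """Sketch: match selected (deeper teacher -> shallower student) feature pairs.
    The pairing scheme and the 1x1 projections are illustrative assumptions."""
    def __init__(self, student_channels, teacher_channels):
        super().__init__()
        # one 1x1 conv per pair to align channel dimensions
        self.proj = nn.ModuleList(
            nn.Conv2d(sc, tc, kernel_size=1)
            for sc, tc in zip(student_channels, teacher_channels)
        )

    def forward(self, student_feats, teacher_feats):
        loss = 0.0
        for proj, s, t in zip(self.proj, student_feats, teacher_feats):
            s = proj(s)
            # resize the student map to the teacher's spatial size before matching
            s = F.interpolate(s, size=t.shape[-2:], mode="bilinear", align_corners=False)
            loss = loss + F.mse_loss(s, t.detach())
        return loss / len(self.proj)
```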

3.
Rev Invest Clin ; 76(2): 065-079, 2024 02 15.
Article in English | MEDLINE | ID: mdl-38359843

ABSTRACT

Background: The pan-immuno-inflammation value (PIV) is a new, comprehensive index that reflects both the immune response and systemic inflammation in the body. Objective: The aim of this study was to investigate the prognostic relevance of PIV for predicting in-hospital mortality in patients with acute pulmonary embolism (PE) and to compare it with the well-known PE severity index (PESI), a risk score commonly used for short-term mortality prediction in these patients. Methods: In total, 373 patients with acute PE diagnosed by contrast-enhanced computed tomography were included. Each patient underwent a detailed cardiac evaluation, and PESI and PIV were calculated. Results: In total, 60 patients died during their hospital stay. Multivariable logistic regression analysis revealed that baseline heart rate, N-terminal pro-B-type natriuretic peptide, lactate dehydrogenase, PIV, and PESI were independent risk factors for in-hospital mortality in acute PE patients. When compared with PESI, PIV was non-inferior in predicting survival status in patients with acute PE. Conclusion: PIV was statistically significant in predicting in-hospital mortality in acute PE patients and was non-inferior to PESI.
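
The abstract does not state the PIV formula; it is commonly reported as (neutrophils × platelets × monocytes) / lymphocytes from a complete blood count, so the snippet below is a sketch under that assumption only.

```python
def pan_immune_inflammation_value(neutrophils, platelets, monocytes, lymphocytes):
    """Sketch of the commonly reported PIV definition (counts in 10^3/uL):
    PIV = neutrophils * platelets * monocytes / lymphocytes.
    The exact definition used by this study is not given in the abstract."""
    if lymphocytes <= 0:
        raise ValueError("lymphocyte count must be positive")
    return neutrophils * platelets * monocytes / lymphocytes

# hypothetical example: neutrophils 6.1, platelets 240, monocytes 0.7, lymphocytes 1.2
print(pan_immune_inflammation_value(6.1, 240, 0.7, 1.2))  # ~854
```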


Subjects
Hospital Mortality; Inflammation; Pulmonary Embolism; Severity of Illness Index; Humans; Pulmonary Embolism/mortality; Male; Female; Aged; Middle Aged; Acute Disease; Prognosis; Risk Factors; Tomography, X-Ray Computed; Aged, 80 and over; Natriuretic Peptide, Brain/blood; Peptide Fragments/blood; L-Lactate Dehydrogenase/blood; Biomarkers; Predictive Value of Tests; Logistic Models
4.
Acad Radiol ; 31(6): 2424-2433, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38262813

ABSTRACT

RATIONALE AND OBJECTIVES: Efficiently detecting and characterizing metastatic bone lesions on staging CT is crucial for prostate cancer (PCa) care. However, it demands significant expert time and additional imaging such as PET/CT. We aimed to develop an ensemble of two automated deep learning AI models for 1) bone lesion detection and segmentation and 2) benign vs. metastatic lesion classification on staging CTs and to compare its performance with radiologists. MATERIALS AND METHODS: This retrospective study developed two AI models using 297 staging CT scans (81 metastatic) with 4601 benign and 1911 metastatic lesions in PCa patients. Metastases were validated by follow-up scans, bone biopsy, or PET/CT. Segmentation AI (3DAISeg) was developed using the lesion contours delineated by a radiologist. 3DAISeg performance was evaluated with the Dice similarity coefficient, and classification AI (3DAIClass) performance on AI and radiologist contours was assessed with F1-score and accuracy. Training/validation/testing data partitions of 70:15:15 were used. A multi-reader study was performed with two junior and two senior radiologists within a subset of the testing dataset (n = 36). RESULTS: In 45 unseen staging CT scans (12 metastatic PCa) with 669 benign and 364 metastatic lesions, 3DAISeg detected 73.1% of metastatic (266/364) and 72.4% of benign lesions (484/669). Each scan averaged 12 extra segmentations (range: 1-31). All metastatic scans had at least one detected metastatic lesion, achieving 100% patient-level detection. The mean Dice score for 3DAISeg was 0.53 (median: 0.59, range: 0-0.87). The F1 for 3DAIClass was 94.8% (radiologist contours) and 92.4% (3DAISeg contours), with a median false-positive count of 0 (range: 0-3). Using radiologist contours, 3DAIClass had PPV and NPV rates comparable to junior and senior radiologists: PPV (semi-automated approach AI 40.0% vs. Juniors 32.0% vs. Seniors 50.0%) and NPV (AI 96.2% vs. Juniors 95.7% vs. Seniors 91.9%). When using 3DAISeg, 3DAIClass mimicked junior radiologists in PPV (pure-AI 20.0% vs. Juniors 32.0% vs. Seniors 50.0%) but surpassed seniors in NPV (pure-AI 93.8% vs. Juniors 95.7% vs. Seniors 91.9%). CONCLUSION: Our lesion detection and classification AI model performs on par with junior and senior radiologists in discerning benign and metastatic lesions on staging CTs obtained for PCa.
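
For readers reproducing the evaluation, the reported figures combine a segmentation overlap metric (Dice) with lesion-level classification metrics (F1, PPV, NPV); a minimal sketch of those computations follows, with the example counts purely hypothetical.

```python
def dice(pred_voxels, gt_voxels):
    """Dice similarity coefficient for binary masks given as sets of voxel indices."""
    inter = len(pred_voxels & gt_voxels)
    return 2 * inter / (len(pred_voxels) + len(gt_voxels)) if (pred_voxels or gt_voxels) else 1.0

def classification_metrics(tp, fp, tn, fn):
    """PPV, NPV, and F1 from lesion-level confusion counts (illustrative)."""
    ppv = tp / (tp + fp) if tp + fp else 0.0
    npv = tn / (tn + fn) if tn + fn else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * ppv * recall / (ppv + recall) if ppv + recall else 0.0
    return {"PPV": ppv, "NPV": npv, "F1": f1}

print(classification_metrics(tp=40, fp=10, tn=300, fn=5))  # hypothetical counts
```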


Subjects
Bone Neoplasms; Deep Learning; Neoplasm Staging; Prostatic Neoplasms; Tomography, X-Ray Computed; Humans; Male; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/pathology; Bone Neoplasms/diagnostic imaging; Bone Neoplasms/secondary; Retrospective Studies; Tomography, X-Ray Computed/methods; Aged; Middle Aged; Radiographic Image Interpretation, Computer-Assisted/methods
5.
IEEE J Biomed Health Inform ; 28(3): 1273-1284, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38051612

ABSTRACT

Monitoring of prevalent airborne diseases such as COVID-19 characteristically involves respiratory assessments. While auscultation is a mainstream method for preliminary screening of disease symptoms, its utility is hampered by the need for dedicated hospital visits. Remote monitoring based on recordings of respiratory sounds on portable devices is a promising alternative, which can assist in early assessment of COVID-19 that primarily affects the lower respiratory tract. In this study, we introduce a novel deep learning approach to distinguish patients with COVID-19 from healthy controls given audio recordings of cough or breathing sounds. The proposed approach leverages a novel hierarchical spectrogram transformer (HST) on spectrogram representations of respiratory sounds. HST embodies self-attention mechanisms over local windows in spectrograms, and window size is progressively grown over model stages to capture local to global context. HST is compared against state-of-the-art conventional and deep-learning baselines. Demonstrations on crowd-sourced multi-national datasets indicate that HST outperforms competing methods, achieving over 90% area under the receiver operating characteristic curve (AUC) in detecting COVID-19 cases.
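
A minimal sketch of the front end implied by the abstract: convert a respiratory recording into a log-mel spectrogram and partition it into local windows whose size grows across stages. torchaudio's MelSpectrogram is used for the transform; the window sizes and STFT parameters are illustrative assumptions, not HST's published configuration.

```python
import torch
import torchaudio

def log_mel_spectrogram(waveform, sample_rate=16000, n_mels=64):
    """Waveform (1, T) -> log-mel spectrogram (1, n_mels, frames)."""
    mel = torchaudio.transforms.MelSpectrogram(
        sample_rate=sample_rate, n_fft=1024, hop_length=256, n_mels=n_mels
    )(waveform)
    return torch.log(mel + 1e-6)

def partition_windows(spec, window=8):
    """Split a (1, F, T) spectrogram into non-overlapping window x window patches,
    the unit over which local self-attention would run at one model stage."""
    _, f, t = spec.shape
    f, t = f - f % window, t - t % window          # drop ragged edges (sketch)
    spec = spec[:, :f, :t]
    return spec.unfold(1, window, window).unfold(2, window, window)  # (1, F/w, T/w, w, w)

# stages with progressively larger windows, as described in the abstract (sizes illustrative):
# for w in (4, 8, 16): windows = partition_windows(spec, w)
```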


Subjects
COVID-19; Respiratory Sounds; Humans; Respiratory Sounds/diagnosis; COVID-19/diagnosis; Auscultation; Cough; Electric Power Supplies
6.
Article in English | MEDLINE | ID: mdl-38082949

ABSTRACT

Accurate segmentation of organs-at-risk (OARs) is a prerequisite for optimizing radiation therapy planning. Existing deep learning-based multi-scale fusion architectures have demonstrated a tremendous capacity for 2D medical image segmentation; the key to their success is aggregating global context while maintaining high-resolution representations. However, when translated to 3D segmentation problems, such architectures may underperform because of their heavy computational overhead and substantial data requirements. To address this issue, we propose a new OAR segmentation framework, OARFocalFuseNet, which fuses multi-scale features and employs focal modulation to capture global-local context across multiple scales. Each resolution stream is enriched with features from other resolution scales, and multi-scale information is aggregated to model diverse contextual ranges, further boosting the feature representations. Comprehensive comparisons on OAR segmentation as well as multi-organ segmentation show that OARFocalFuseNet outperforms recent state-of-the-art methods on the publicly available OpenKBP dataset and the Synapse multi-organ segmentation dataset. Both proposed methods (3D-MSF and OARFocalFuseNet) showed promising performance on standard evaluation metrics. Our best-performing method, OARFocalFuseNet, obtained a Dice coefficient of 0.7995 and a Hausdorff distance of 5.1435 on the OpenKBP dataset, and a Dice coefficient of 0.8137 on the Synapse multi-organ segmentation dataset. Our code is available at https://github.com/NoviceMAn-prog/OARFocalFuse.
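
Focal modulation is the central mechanism named here; the block below is a simplified, single-scale sketch of a focal-modulation layer (stacked depthwise convolutions build progressively larger contexts, gates aggregate them, and the result multiplicatively modulates a query projection). It is not the paper's OARFocalFuseNet, and all dimensions are illustrative.

```python
import torch
import torch.nn as nn

class FocalModulation2D(nn.Module):
    """Simplified focal modulation (illustrative): hierarchical depthwise-conv
    contexts are gated, summed, and multiplied into a query projection."""
    def __init__(self, dim, levels=3, kernel=3):
        super().__init__()
        self.levels = levels
        self.f = nn.Conv2d(dim, 2 * dim + (levels + 1), kernel_size=1)  # query, context, gates
        self.ctx_convs = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(dim, dim, kernel_size=kernel + 2 * l,
                          padding=(kernel + 2 * l) // 2, groups=dim),
                nn.GELU(),
            )
            for l in range(levels)
        )
        self.proj = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x):                      # x: (N, C, H, W)
        c = x.shape[1]
        q, ctx, gates = torch.split(self.f(x), [c, c, self.levels + 1], dim=1)
        agg = 0
        for l, conv in enumerate(self.ctx_convs):
            ctx = conv(ctx)                    # progressively larger receptive field
            agg = agg + ctx * gates[:, l:l + 1]
        agg = agg + ctx.mean(dim=(2, 3), keepdim=True) * gates[:, self.levels:]  # global context
        return self.proj(q * agg)              # modulate the query projection
```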


Subjects
Organs at Risk; Tomography, X-Ray Computed; Tomography, X-Ray Computed/methods; Radiotherapy Planning, Computer-Assisted/methods
7.
Article in English | MEDLINE | ID: mdl-38083589

ABSTRACT

Colorectal cancer (CRC) is one of the most common causes of cancer and cancer-related mortality worldwide. Timely colon cancer screening is key to early detection. Colonoscopy is the primary modality used to diagnose colon cancer; however, the miss rate of polyps, adenomas, and advanced adenomas remains significantly high. Early detection of polyps at the precancerous stage can help reduce the mortality rate and the economic burden associated with colorectal cancer. A deep learning-based computer-aided diagnosis (CADx) system may help gastroenterologists identify polyps that would otherwise be missed, thereby improving the polyp detection rate. Additionally, a CADx system could prove to be a cost-effective tool for long-term colorectal cancer prevention. In this study, we propose a deep learning-based architecture for automatic polyp segmentation called Transformer ResU-Net (TransResU-Net). The architecture is built upon residual blocks with ResNet-50 as the backbone and takes advantage of the transformer self-attention mechanism as well as dilated convolutions. Experimental results on two publicly available polyp segmentation benchmark datasets showed that TransResU-Net obtained a highly promising Dice score at real-time speed. Given its strong performance, we conclude that TransResU-Net could be a strong baseline for building real-time polyp detection systems for the early diagnosis, treatment, and prevention of colorectal cancer. The source code of TransResU-Net is publicly available at https://github.com/nikhilroxtomar/TransResUNet.
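
As an illustrative sketch of the ingredients named in the abstract (residual dilated convolutions plus transformer self-attention), the block below combines both in a bottleneck layer; the exact TransResU-Net design lives in the linked repository, so treat this only as a rough approximation.

```python
import torch
import torch.nn as nn

class DilatedAttentionBottleneck(nn.Module):
    """Illustrative bottleneck: dilated residual convs + self-attention over HxW tokens."""
    def __init__(self, channels, heads=4, dilations=(1, 2, 4)):
        super().__init__()
        self.convs = nn.Sequential(*[
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=d, dilation=d),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):                     # x: (N, C, H, W)
        x = x + self.convs(x)                 # dilated residual path
        n, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2) # (N, H*W, C)
        q = self.norm(tokens)
        tokens = tokens + self.attn(q, q, q)[0]
        return tokens.transpose(1, 2).reshape(n, c, h, w)
```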


Subjects
Adenoma; Colonic Neoplasms; Colonic Polyps; Colorectal Neoplasms; Humans; Colorectal Neoplasms/diagnosis; Early Detection of Cancer; Colonic Polyps/diagnostic imaging; Colonic Neoplasms/diagnostic imaging; Adenoma/diagnostic imaging
8.
ArXiv ; 2023 Oct 02.
Article in English | MEDLINE | ID: mdl-38106459

ABSTRACT

Pediatric brain and spinal cancers remain the leading cause of cancer-related death in children. Advancements in clinical decision support in pediatric neuro-oncology utilizing the wealth of radiology imaging data collected through standard care, however, have significantly lagged behind other domains. Such data are ripe for use with predictive analytics such as artificial intelligence (AI) methods, which require large datasets. To address this unmet need, we provide a multi-institutional, large-scale pediatric dataset of 23,101 multi-parametric MRI exams acquired through routine care for 1,526 brain tumor patients, as part of the Children's Brain Tumor Network. This includes longitudinal MRIs across various cancer diagnoses, with associated patient-level clinical information, digital pathology slides, and tissue genotype and omics data. To facilitate downstream analysis, treatment-naïve images for 370 subjects were processed and released through the NCI Childhood Cancer Data Initiative via the Cancer Data Service. Through ongoing efforts to continuously build these imaging repositories, we aim to accelerate discovery and translational AI models with real-world data and ultimately empower precision medicine for children.

9.
NPJ Digit Med ; 6(1): 220, 2023 Nov 27.
Article in English | MEDLINE | ID: mdl-38012349

ABSTRACT

Machine learning and deep learning are two subsets of artificial intelligence that involve teaching computers to learn and make decisions from data. Most recent developments in artificial intelligence come from deep learning, which has proven revolutionary in almost all fields, from computer vision to the health sciences, and has significantly changed conventional clinical practice. Although some sub-fields of medicine, such as pediatrics, have been relatively slow to receive the benefits of deep learning, related research in pediatrics has now accumulated to a significant level. Hence, in this paper, we review recently developed machine learning and deep learning-based solutions for neonatology applications. Following the PRISMA 2020 guidelines, we systematically evaluate the roles of both classical machine learning and deep learning in neonatology, define the methodologies, including algorithmic developments, and describe the remaining challenges in the assessment of neonatal diseases. To date, the primary areas of focus for AI applications in neonatology have included survival analysis, neuroimaging, analysis of vital parameters and biosignals, and diagnosis of retinopathy of prematurity. We categorically summarize 106 research articles from 1996 to 2022 and discuss their respective strengths and weaknesses to make this systematic review as comprehensive as possible. We also discuss possible directions for new AI models and the future of neonatology with the rising power of AI, suggesting roadmaps for the integration of AI into neonatal intensive care units.

10.
J Infect Dis ; 228(Suppl 4): S322-S336, 2023 10 03.
Article in English | MEDLINE | ID: mdl-37788501

ABSTRACT

The mass production of the graphics processing unit and the coronavirus disease 2019 (COVID-19) pandemic have provided the means and the motivation, respectively, for rapid developments in artificial intelligence (AI) and medical imaging techniques. This has led to new opportunities to improve patient care but also new challenges that must be overcome before these techniques are put into practice. In particular, early AI models reported high performances but failed to perform as well on new data. However, these mistakes motivated further innovation focused on developing models that were not only accurate but also stable and generalizable to new data. The recent developments in AI in response to the COVID-19 pandemic will reap future dividends by facilitating, expediting, and informing other medical AI applications and educating the broad academic audience on the topic. Furthermore, AI research on imaging animal models of infectious diseases offers a unique problem space that can fill in evidence gaps that exist in clinical infectious disease research. Here, we aim to provide a focused assessment of the AI techniques leveraged in the infectious disease imaging research space, highlight the unique challenges, and discuss burgeoning solutions.


Subjects
COVID-19; Communicable Diseases; Humans; Artificial Intelligence; Pandemics; Diagnostic Imaging/methods; Communicable Diseases/diagnostic imaging
11.
Front Radiol ; 3: 1175473, 2023.
Article in English | MEDLINE | ID: mdl-37810757

ABSTRACT

Purpose: The goal of this work is to explore the best optimizers for deep learning in the context of medical image segmentation and to provide guidance on how to design segmentation networks with effective optimization strategies. Approach: Most successful deep learning networks are trained using two types of stochastic gradient descent (SGD) algorithms: adaptive learning and accelerated schemes. Adaptive learning helps with fast convergence by starting with a larger learning rate (LR) and gradually decreasing it. Within the accelerated schemes, momentum optimizers are particularly effective at quickly optimizing neural networks. By revealing the potential interplay between these two types of algorithms (LR and momentum optimizers, or momentum rate (MR) in short), we explore the two variants of SGD in a single setting. We suggest cyclic learning as the base optimizer and integrate optimal values of the learning rate and momentum rate. The new optimization function proposed in this work is based on the Nesterov accelerated gradient optimizer, which is computationally more efficient and has better generalization capabilities than other adaptive optimizers. Results: We investigated the relationship between LR and MR on the important problem of medical image segmentation of cardiac structures from MRI and CT scans. We conducted experiments using the cardiac imaging dataset from the ACDC challenge of MICCAI 2017 with four different architectures shown to be successful for cardiac image segmentation. Our comprehensive evaluations demonstrated that the proposed optimizer achieved better results (over a 2% improvement in the Dice metric) than other optimizers in the deep learning literature, with similar or lower computational cost, in both single- and multi-object segmentation settings. Conclusions: We hypothesized that the combination of accelerated and adaptive optimization methods can have a drastic effect on medical image segmentation performance. To this end, we proposed a new cyclic optimization method (Cyclic Learning/Momentum Rate) to address the efficiency and accuracy problems in deep learning-based medical image segmentation. The proposed strategy yielded better generalization than adaptive optimizers.
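
PyTorch already exposes the two ingredients discussed here, SGD with Nesterov momentum and a cyclic scheduler that cycles the learning rate and, inversely, the momentum; the sketch below wires them together. The ranges and step sizes are illustrative, not the paper's tuned Cyclic Learning/Momentum Rate values.

```python
import torch
from torch import nn, optim

model = nn.Conv2d(1, 8, 3)  # placeholder network
optimizer = optim.SGD(model.parameters(), lr=1e-3, momentum=0.9, nesterov=True)

# Cycle the LR between base_lr and max_lr; with cycle_momentum=True the momentum is
# cycled in the opposite direction between base_momentum and max_momentum.
scheduler = optim.lr_scheduler.CyclicLR(
    optimizer,
    base_lr=1e-4, max_lr=1e-2,
    step_size_up=500, mode="triangular",
    cycle_momentum=True, base_momentum=0.85, max_momentum=0.95,
)

for step in range(2000):            # training-loop skeleton with dummy data
    optimizer.zero_grad()
    loss = model(torch.randn(2, 1, 32, 32)).mean()
    loss.backward()
    optimizer.step()
    scheduler.step()                # advance the LR/momentum cycle every batch
```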

12.
Curr Opin Gastroenterol ; 39(5): 436-447, 2023 09 01.
Article in English | MEDLINE | ID: mdl-37523001

ABSTRACT

PURPOSE OF REVIEW: Early and accurate diagnosis of pancreatic cancer is crucial for improving patient outcomes, and artificial intelligence (AI) algorithms have the potential to play a vital role in computer-aided diagnosis of pancreatic cancer. In this review, we aim to provide the latest and relevant advances in AI, specifically deep learning (DL) and radiomics approaches, for pancreatic cancer diagnosis using cross-sectional imaging examinations such as computed tomography (CT) and magnetic resonance imaging (MRI). RECENT FINDINGS: This review highlights the recent developments in DL techniques applied to medical imaging, including convolutional neural networks (CNNs), transformer-based models, and novel deep learning architectures that focus on multitype pancreatic lesions, multiorgan and multitumor segmentation, as well as incorporating auxiliary information. We also discuss advancements in radiomics, such as improved imaging feature extraction, optimized machine learning classifiers and integration with clinical data. Furthermore, we explore implementing AI-based clinical decision support systems for pancreatic cancer diagnosis using medical imaging in practical settings. SUMMARY: Deep learning and radiomics with medical imaging have demonstrated strong potential to improve diagnostic accuracy of pancreatic cancer, facilitate personalized treatment planning, and identify prognostic and predictive biomarkers. However, challenges remain in translating research findings into clinical practice. More studies are required focusing on refining these methods, addressing significant limitations, and developing integrative approaches for data analysis to further advance the field of pancreatic cancer diagnosis.


Subjects
Deep Learning; Pancreatic Neoplasms; Humans; Artificial Intelligence; Pancreas; Pancreatic Neoplasms/diagnostic imaging; Tomography, X-Ray Computed
13.
Neuroimaging Clin N Am ; 33(2): 279-297, 2023 May.
Article in English | MEDLINE | ID: mdl-36965946

ABSTRACT

Advanced imaging techniques are needed to assist in providing a prognosis for patients with traumatic brain injury (TBI), particularly mild TBI (mTBI). Diffusion tensor imaging (DTI) is one promising advanced imaging technique, but it has shown variable results in patients with TBI and is not without limitations, especially when considering individual patients. Efforts to resolve these limitations are being explored and include developing advanced diffusion techniques, creating a normative database, improving study design, and testing machine learning algorithms. This article reviews the fundamentals of DTI and provides an overview of the current state of its utility in evaluation and prognosis for patients with TBI.


Subjects
Brain Concussion; Brain Injuries, Traumatic; Humans; Diffusion Tensor Imaging/methods; Brain Injuries, Traumatic/diagnostic imaging; Diffusion Magnetic Resonance Imaging; Prognosis; Brain/diagnostic imaging
14.
J Med Imaging (Bellingham) ; 10(2): 024002, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36891503

ABSTRACT

Purpose: We perform anatomical landmarking for craniomaxillofacial (CMF) bones without explicitly segmenting them. Toward this, we propose a simple yet efficient deep network architecture, called relational reasoning network (RRN), to accurately learn the local and global relations among the landmarks in CMF bones, specifically the mandible, maxilla, and nasal bones. Approach: The proposed RRN works in an end-to-end manner, utilizing learned relations of the landmarks based on dense-block units. Given a few landmarks as input, RRN treats the landmarking process as a data imputation problem in which the landmarks to be predicted are considered missing. Results: We applied RRN to cone-beam computed tomography scans obtained from 250 patients. With a fourfold cross-validation technique, we obtained an average root mean squared error of < 2 mm per landmark. Our proposed RRN has revealed unique relationships among the landmarks that help us infer the informativeness of the landmark points. The proposed system identifies missing landmark locations accurately even when severe pathology or deformations are present in the bones. Conclusions: Accurately identifying anatomical landmarks is a crucial step in deformation analysis and surgical planning for CMF surgeries. Achieving this goal without the need for explicit bone segmentation addresses a major limitation of segmentation-based approaches, where segmentation failure (as is often the case in bones with severe pathology or deformation) can easily lead to incorrect landmarking. To the best of our knowledge, this is the first algorithm of its kind to find anatomical relations among objects using deep learning.
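
The headline figure is an average per-landmark root mean squared error under fourfold cross-validation; a small NumPy sketch of that metric for predicted versus ground-truth 3D landmark coordinates (with hypothetical data) is shown below.

```python
import numpy as np

def per_landmark_rmse(pred, gt):
    """pred, gt: arrays of shape (num_cases, num_landmarks, 3) in millimetres.
    Returns the RMSE of the Euclidean error for each landmark."""
    err = np.linalg.norm(pred - gt, axis=-1)          # (cases, landmarks)
    return np.sqrt((err ** 2).mean(axis=0))           # (landmarks,)

# hypothetical example: 10 cases, 5 landmarks, ~1 mm noise on the predictions
rng = np.random.default_rng(0)
gt = rng.uniform(0, 100, size=(10, 5, 3))
pred = gt + rng.normal(0, 1.0, size=gt.shape)
print(per_landmark_rmse(pred, gt).mean())             # average RMSE per landmark (mm)
```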

15.
Med Biol Eng Comput ; 61(1): 285-295, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36414816

ABSTRACT

Renal scintigraphy is one of the techniques for obtaining unique and reliable information in medicine, and a key step for quantitative renal scintigraphy is segmentation of the kidneys. Here, an automatic segmentation framework is proposed for computer-aided renal scintigraphy. A multi-step approach extracts the kidney boundary in dynamic renal scintigraphic images through two key stages: localization and segmentation. First, the ROI of each kidney is estimated automatically using Otsu's thresholding, anatomical constraints, and integral projection. The kidney ROIs are then used as initial contours to create the final kidney contours with geometric active contours; for this segmentation step, an improved variational level set based on the Mumford-Shah formulation is utilized. Thirty data sets acquired with an e.cam gamma camera system (Siemens) were used to assess the proposed method, and its performance was evaluated against manually outlined borders using several measures. The proposed segmentation method successfully extracted the kidney boundary in renal scintigraphic images, achieving a sensitivity of 95.15% and a specificity of 95.33%; the area under the curve in the ROC analysis was 0.974. The technique correctly segmented the kidneys in all data sets and was also successful on noisy, low-resolution images and challenging cases with closely interfering activity, such as that of the liver and spleen.
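
The pipeline combines Otsu thresholding for kidney localization with a Mumford-Shah-type variational level set for the final contour. scikit-image offers off-the-shelf counterparts of both steps, so a minimal sketch under that assumption looks as follows; it is not the authors' implementation.

```python
from skimage import filters, measure, segmentation

def segment_kidneys(frame):
    """frame: 2D summed renal scintigraphy image (illustrative pipeline only)."""
    # 1) coarse localization: Otsu threshold, then keep the two largest components as kidney ROIs
    mask = frame > filters.threshold_otsu(frame)
    labels = measure.label(mask)
    regions = sorted(measure.regionprops(labels), key=lambda r: r.area, reverse=True)[:2]

    results = []
    for region in regions:
        minr, minc, maxr, maxc = region.bbox
        roi = frame[minr:maxr, minc:maxc]
        # 2) refinement: morphological Chan-Vese, a Mumford-Shah-type level set
        ls = segmentation.morphological_chan_vese(roi, 100, init_level_set="checkerboard")
        results.append(((minr, minc), ls.astype(bool)))   # ROI offset + kidney mask
    return results
```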


Subjects
Algorithms; Kidney; Kidney/diagnostic imaging; Abdomen; Liver; Computers; Image Processing, Computer-Assisted/methods
16.
Mach Learn Med Imaging ; 14349: 134-143, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38274402

ABSTRACT

Intraductal papillary mucinous neoplasm (IPMN) cysts are pre-malignant pancreatic lesions that can progress to pancreatic cancer, so detecting them and stratifying their risk level is of utmost importance for effective treatment planning and disease control. However, this is a highly challenging task because of the diverse and irregular shape, texture, and size of IPMN cysts as well as of the pancreas itself. In this study, we propose a novel computer-aided diagnosis pipeline for IPMN risk classification from multi-contrast MRI scans. The proposed analysis framework includes an efficient volumetric self-adapting segmentation strategy for pancreas delineation, followed by a newly designed deep learning-based classification scheme combined with a radiomics-based predictive approach. In a series of rigorous experiments on multi-center data sets (246 multi-contrast MRI scans from five centers), the proposed decision-fusion model achieved superior performance to the state of the art (SOTA) in this field. Our ablation studies demonstrate the significance of both the radiomics and deep learning modules for achieving the new SOTA performance compared to international guidelines and published studies (81.9% vs. 61.3% accuracy). Our findings have important implications for clinical decision-making. The code is available upon publication.
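
The abstract describes decision fusion of a deep learning classifier with a radiomics-based predictor; one common way to realize such fusion is a weighted average of the two branches' class probabilities, sketched below. The weighting and the three-class risk example are assumptions, not the paper's exact scheme.

```python
import numpy as np

def fuse_decisions(p_deep, p_radiomics, w_deep=0.6):
    """p_deep, p_radiomics: arrays of shape (n_cases, n_risk_classes) holding predicted
    probabilities from the two branches. Returns fused probabilities and class labels."""
    p = w_deep * np.asarray(p_deep) + (1.0 - w_deep) * np.asarray(p_radiomics)
    p /= p.sum(axis=1, keepdims=True)           # renormalize after weighting
    return p, p.argmax(axis=1)

# hypothetical 3-class risk example (low / intermediate / high)
p_fused, labels = fuse_decisions([[0.1, 0.3, 0.6]], [[0.2, 0.5, 0.3]])
print(p_fused, labels)
```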

17.
IEEE Int Conf Comput Vis Workshops ; 2023: 2646-2655, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38298808

ABSTRACT

Accurate medical image segmentation is of utmost importance for enabling automated clinical decision procedures. However, prevailing supervised deep learning approaches for medical image segmentation encounter significant challenges due to their heavy dependence on extensive labeled training data. To tackle this issue, we propose a novel self-supervised algorithm, S3-Net, which integrates a robust framework based on the proposed Inception Large Kernel Attention (I-LKA) modules. This architectural enhancement makes it possible to comprehensively capture contextual information while preserving local intricacies, thereby enabling precise semantic segmentation. Furthermore, considering that lesions in medical images often exhibit deformations, we leverage deformable convolution as an integral component to effectively capture and delineate lesion deformations for superior object boundary definition. Additionally, our self-supervised strategy emphasizes the acquisition of invariance to affine transformations, which are commonly encountered in medical scenarios. This emphasis on robustness with respect to geometric distortions significantly enhances the model's ability to accurately model and handle such distortions. To enforce spatial consistency and promote the grouping of spatially connected image pixels with similar feature representations, we introduce a spatial consistency loss term. This aids the network in effectively capturing the relationships among neighboring pixels and enhancing the overall segmentation quality. The S3-Net approach iteratively learns pixel-level feature representations for image content clustering in an end-to-end manner. Our experimental results on skin lesion and lung organ segmentation tasks show the superior performance of our method compared to the SOTA approaches.
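
The affine-invariance objective can be made concrete as an equivariance consistency loss: segment-then-warp should agree with warp-then-segment. A minimal PyTorch sketch follows, with the network and the affine matrices treated as placeholders rather than S3-Net's actual loss.

```python
import torch.nn.functional as F

def affine_consistency_loss(model, x, theta):
    """x: (N, C, H, W) images; theta: (N, 2, 3) affine matrices.
    Penalizes disagreement between 'predict then warp' and 'warp then predict'."""
    grid = F.affine_grid(theta, x.shape, align_corners=False)
    x_warped = F.grid_sample(x, grid, align_corners=False)

    feat = model(x)                                        # (N, K, H, W) pixel-wise features
    feat_warped_after = F.grid_sample(feat, grid, align_corners=False)
    feat_of_warped = model(x_warped)
    return F.mse_loss(feat_of_warped, feat_warped_after)
```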

18.
Med Image Comput Comput Assist Interv ; 14222: 736-746, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38299070

ABSTRACT

Vision Transformer (ViT) models have demonstrated breakthroughs in a wide range of computer vision tasks. However, compared to Convolutional Neural Network (CNN) models, ViT models have been observed to struggle to capture the high-frequency components of images, which can limit their ability to detect local textures and edge information. Because abnormalities in human tissue, such as tumors and lesions, vary greatly in structure, texture, and shape, high-frequency information such as texture is crucial for effective semantic segmentation. To address this limitation of ViT models, we propose a new technique, Laplacian-Former, that enhances the self-attention map by adaptively re-calibrating the frequency information in a Laplacian pyramid. More specifically, our proposed method utilizes a dual attention mechanism combining efficient attention and frequency attention; the efficient attention mechanism reduces the complexity of self-attention to linear while producing the same output, selectively intensifying the contribution of shape and texture features. Furthermore, we introduce a novel efficient enhancement multi-scale bridge that effectively transfers spatial information from the encoder to the decoder while preserving the fundamental features. We demonstrate the efficacy of Laplacian-Former on multi-organ and skin lesion segmentation tasks, with Dice score improvements of +1.87% and +0.76%, respectively, compared to SOTA approaches. Our implementation is publicly available on GitHub.
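
The key ingredient named here is a Laplacian pyramid used to re-weight frequency bands. The sketch below builds such a pyramid for a feature map with plain PyTorch ops (downsample, upsample, subtract); the band re-calibration shown in the trailing comment is an illustrative assumption, independent of the paper's attention design.

```python
import torch.nn.functional as F

def laplacian_pyramid(x, levels=3):
    """x: (N, C, H, W). Returns a list of band-pass maps plus the low-pass residual.
    Each band isolates the detail lost between successive downsamplings."""
    bands, current = [], x
    for _ in range(levels):
        down = F.avg_pool2d(current, kernel_size=2)
        up = F.interpolate(down, size=current.shape[-2:], mode="bilinear", align_corners=False)
        bands.append(current - up)       # high-frequency band at this scale
        current = down
    bands.append(current)                # low-pass residual
    return bands

# a re-calibration step could then scale each band with learned weights before summing, e.g.:
# recalibrated = sum(F.interpolate(w * b, size=x.shape[-2:], mode="bilinear", align_corners=False)
#                    for w, b in zip(weights, bands))
```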

19.
Osteoarthr Imaging ; 3(1)2023 Mar.
Article in English | MEDLINE | ID: mdl-39036792

ABSTRACT

Objective: To evaluate whether the deep learning (DL) segmentation methods from the six teams that participated in the IWOAI 2019 Knee Cartilage Segmentation Challenge are appropriate for quantifying cartilage loss in longitudinal clinical trials. Design: We included 556 subjects from the Osteoarthritis Initiative study with manually read cartilage volume scores for the baseline and 1-year visits. The teams used their methods originally trained for the IWOAI 2019 challenge to segment the 1130 knee MRIs. These scans were anonymized and the teams were blinded to any subject or visit identifiers. Two teams also submitted updated methods. The resulting 9,040 segmentations are available online. The segmentations included tibial, femoral, and patellar compartments. In post-processing, we extracted medial and lateral tibial compartments and geometrically defined central medial and lateral femoral sub-compartments. The primary study outcome was the sensitivity to measure cartilage loss as defined by the standardized response mean (SRM). Results: For the tibial compartments, several of the DL segmentation methods had SRMs similar to the gold-standard manual method. The highest DL SRM was for the lateral tibial compartment at 0.38 (the gold standard had 0.34). For the femoral compartments, the gold standard had higher SRMs than the automatic methods, at 0.31/0.30 for the medial/lateral compartments. Conclusion: The lower SRMs for the DL methods in the femoral compartments, at 0.2, were possibly due to the simple sub-compartment extraction done during post-processing. The study demonstrated that state-of-the-art DL segmentation methods may be used in standardized longitudinal single-scanner clinical trials for well-defined cartilage compartments.
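
The primary outcome, the standardized response mean, is the mean of the one-year change divided by the standard deviation of that change; a one-function sketch with hypothetical cartilage volumes:

```python
import numpy as np

def standardized_response_mean(baseline, followup):
    """SRM = mean(change) / std(change), computed on per-subject cartilage volumes."""
    change = np.asarray(followup) - np.asarray(baseline)
    return change.mean() / change.std(ddof=1)

# hypothetical volumes (mm^3) for five subjects at baseline and one year
print(standardized_response_mean([1500, 1620, 1480, 1550, 1600],
                                 [1470, 1600, 1450, 1530, 1590]))
```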

20.
Sensors (Basel) ; 22(23)2022 Dec 06.
Article in English | MEDLINE | ID: mdl-36502261

ABSTRACT

Condition assessment of civil engineering structures has been an active research area due to growing concerns over the safety of aged as well as new civil structures. Utilization of emerging immersive visualization technologies such as Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) in the architectural, engineering, and construction (AEC) industry has demonstrated that these visualization tools can be paradigm-shifting. Extended Reality (XR), an umbrella term for VR, AR, and MR technologies, has found many diverse use cases in the AEC industry. Despite this exciting trend, there is no review study on the usage of XR technologies for the condition assessment of civil structures. Thus, the present paper aims to fill this gap by presenting a literature review encompassing the utilization of XR technologies for the condition assessment of civil structures. This study aims to provide essential information and guidelines for practitioners and researchers on using XR technologies to maintain the integrity and safety of civil structures.


Subjects
Augmented Reality; Virtual Reality; Engineering; Technology