Results 1 - 8 of 8
1.
Blood ; 141(17): 2100-2113, 2023 04 27.
Article in English | MEDLINE | ID: mdl-36542832

ABSTRACT

The choice to postpone treatment while awaiting genetic testing can result in significant delay in definitive therapies in patients with severe pancytopenia. Conversely, the misdiagnosis of inherited bone marrow failure (BMF) can expose patients to ineffectual and expensive therapies, toxic transplant conditioning regimens, and inappropriate use of an affected family member as a stem cell donor. To predict the likelihood of patients having acquired or inherited BMF, we developed a 2-step data-driven machine-learning model using 25 clinical and laboratory variables typically recorded at the initial clinical encounter. For model development, patients were labeled as having acquired or inherited BMF depending on their genomic data. Data sets were unbiasedly clustered, and an ensemble model was trained with cases from the largest cluster of a training cohort (n = 359) and validated with an independent cohort (n = 127). Cluster A, the largest group, was mostly immune or inherited aplastic anemia, whereas cluster B comprised underrepresented BMF phenotypes and was not included in the next step of data modeling because of a small sample size. The ensemble cluster A-specific model was accurate (89%) to predict BMF etiology, correctly predicting inherited and likely immune BMF in 79% and 92% of cases, respectively. Our model represents a practical guide for BMF diagnosis and highlights the importance of clinical and laboratory variables in the initial evaluation, particularly telomere length. Our tool can be potentially used by general hematologists and health care providers not specialized in BMF, and in under-resourced centers, to prioritize patients for genetic testing or for expeditious treatment.
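The two-step design described above, unsupervised clustering of patients followed by an ensemble classifier trained only on the largest cluster, can be sketched roughly as follows. This is an illustrative stand-in, not the authors' implementation: the toy k-means and majority-vote helpers are hypothetical.

```python
import numpy as np

def largest_cluster_mask(X, k=2, iters=50):
    """Toy k-means (centers seeded from the first k rows);
    returns a boolean mask selecting the largest cluster."""
    centers = X[:k].astype(float).copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    counts = np.bincount(labels, minlength=k)
    return labels == counts.argmax()

def ensemble_predict(models, X):
    """Majority vote over base classifiers, each a callable X -> {0,1} array."""
    votes = np.stack([m(X) for m in models])
    return (votes.mean(axis=0) >= 0.5).astype(int)
```

In the paper's setup, the classifier would be trained on the cases selected by the cluster mask (here, cluster A) and validated on an independent cohort.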


Subjects
Aplastic Anemia, Bone Marrow Diseases, Pancytopenia, Humans, Bone Marrow Diseases/diagnosis, Bone Marrow Diseases/genetics, Bone Marrow Diseases/therapy, Differential Diagnosis, Aplastic Anemia/diagnosis, Aplastic Anemia/genetics, Aplastic Anemia/therapy, Bone Marrow Failure Disorders/diagnosis, Pancytopenia/diagnosis
2.
Hepatol Commun ; 6(10): 2901-2913, 2022 10.
Article in English | MEDLINE | ID: mdl-35852311

ABSTRACT

Hepatocellular carcinoma (HCC) can be potentially discovered from abdominal computed tomography (CT) studies under varied clinical scenarios (e.g., fully dynamic contrast-enhanced [DCE] studies, noncontrast [NC] plus venous phase [VP] abdominal studies, or NC-only studies). Each scenario presents its own clinical challenges that could benefit from computer-aided detection (CADe) tools. We investigate whether a single CADe model can be made flexible enough to handle different contrast protocols and whether this flexibility imparts performance gains. We developed a flexible three-dimensional deep algorithm, called heterophase volumetric detection (HPVD), that can accept any combination of contrast-phase inputs with adjustable sensitivity depending on the clinical purpose. We trained HPVD on 771 DCE CT scans to detect HCCs and evaluated it on 164 positives and 206 controls. We compared performance against six clinical readers, including two radiologists, two hepatopancreaticobiliary surgeons, and two hepatologists. The area under the curve of the localization receiver operating characteristic for NC-only, NC plus VP, and full DCE CT yielded 0.71 (95% confidence interval [CI], 0.64-0.77), 0.81 (95% CI, 0.75-0.87), and 0.89 (95% CI, 0.84-0.93), respectively. At a high-sensitivity operating point of 80% on DCE CT, HPVD achieved 97% specificity, which is comparable to measured physician performance. We also demonstrated performance improvements over more typical and less flexible nonheterophase detectors. Conclusion: A single deep-learning algorithm can be effectively applied to diverse HCC detection clinical scenarios, indicating that HPVD could serve as a useful clinical aid for at-risk and opportunistic HCC surveillance.
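HPVD's defining property is accepting any combination of contrast-phase inputs. A minimal sketch of that idea, fusing whichever phase feature maps are present and skipping missing ones, might look like the following; the fusion-by-averaging choice and the function name are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

def heterophase_features(phase_maps):
    """Fuse whatever contrast phases are available. Missing phases are
    passed as None and simply skipped, so one model can handle NC-only,
    NC+VP, or full DCE inputs."""
    present = [m for m in phase_maps.values() if m is not None]
    if not present:
        raise ValueError("at least one contrast phase is required")
    return np.mean(present, axis=0)
```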


Subjects
Hepatocellular Carcinoma, Liver Neoplasms, Algorithms, Hepatocellular Carcinoma/diagnosis, Contrast Media, Humans, Liver Neoplasms/diagnosis, X-Ray Computed Tomography/methods
3.
IEEE Trans Med Imaging ; 41(10): 2658-2669, 2022 10.
Article in English | MEDLINE | ID: mdl-35442886

ABSTRACT

Radiological images such as computed tomography (CT) and X-rays render anatomy with intrinsic structures. Being able to reliably locate the same anatomical structure across varying images is a fundamental task in medical image analysis. In principle it is possible to use landmark detection or semantic segmentation for this task, but to work well these require large numbers of labeled data for each anatomical structure and sub-structure of interest. A more universal approach would learn the intrinsic structure from unlabeled images. We introduce such an approach, called Self-supervised Anatomical eMbedding (SAM). SAM generates semantic embeddings for each image pixel that describes its anatomical location or body part. To produce such embeddings, we propose a pixel-level contrastive learning framework. A coarse-to-fine strategy ensures both global and local anatomical information are encoded. Negative sample selection strategies are designed to enhance the embedding's discriminability. Using SAM, one can label any point of interest on a template image and then locate the same body part in other images by simple nearest neighbor searching. We demonstrate the effectiveness of SAM in multiple tasks with 2D and 3D image modalities. On a chest CT dataset with 19 landmarks, SAM outperforms widely-used registration algorithms while only taking 0.23 seconds for inference. On two X-ray datasets, SAM, with only one labeled template image, surpasses supervised methods trained on 50 labeled images. We also apply SAM on whole-body follow-up lesion matching in CT and obtain an accuracy of 91%. SAM can also be applied for improving image registration and initializing CNN weights.
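SAM's inference step, labeling one point on a template image and then nearest-neighbor searching in embedding space, can be sketched as follows. This is illustrative only; the abstract does not specify the matching metric, so the cosine-similarity choice here is an assumption.

```python
import numpy as np

def locate_point(template_emb, template_xy, query_emb):
    """Given per-pixel embeddings of shape (H, W, C), find the pixel in
    the query image whose embedding is the nearest neighbor (by cosine
    similarity) of the labeled template point."""
    h, w, c = query_emb.shape
    target = template_emb[template_xy]            # (C,) embedding of labeled point
    flat = query_emb.reshape(-1, c)
    sims = flat @ target / (np.linalg.norm(flat, axis=1) * np.linalg.norm(target) + 1e-8)
    idx = sims.argmax()
    return divmod(idx, w)                         # (row, col) of best match
```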


Subjects
Three-Dimensional Imaging, X-Ray Computed Tomography, Algorithms, Computer-Assisted Image Processing/methods, Three-Dimensional Imaging/methods, Radiography, Supervised Machine Learning, X-Ray Computed Tomography/methods
4.
Med Image Anal ; 77: 102345, 2022 04.
Article in English | MEDLINE | ID: mdl-35051899

ABSTRACT

Accurate and reliable detection of abnormal lymph nodes in magnetic resonance (MR) images is very helpful for the diagnosis and treatment of numerous diseases. However, it is still a challenging task due to similar appearances between abnormal lymph nodes and other tissues. In this paper, we propose a novel network based on an improved Mask R-CNN framework for the detection of abnormal lymph nodes in MR images. Instead of laboriously collecting large-scale pixel-wise annotated training data, pseudo masks generated from RECIST bookmarks on hand are utilized as the supervision. Different from the standard Mask R-CNN architecture, there are two main innovations in our proposed network: 1) global-local attention which encodes the global and local scale context for detection and utilizes the channel attention mechanism to extract more discriminative features and 2) multi-task uncertainty loss which adaptively weights multiple objective loss functions based on the uncertainty of each task to automatically search the optimal solution. For the experiments, we built a new abnormal lymph node dataset with 821 RECIST bookmarks of 41 different types of abnormal abdominal lymph nodes from 584 different patients. The experimental results showed the superior performance of our algorithm over compared state-of-the-art approaches.
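Multi-task uncertainty losses of the kind described above are commonly formulated with learnable homoscedastic-uncertainty terms, as in Kendall et al.'s multi-task weighting; whether this paper uses exactly that form is an assumption. A minimal scalar sketch:

```python
import math

def uncertainty_weighted_loss(losses, log_vars):
    """Uncertainty-based multi-task weighting: each task loss L_i is
    scaled by exp(-s_i) and regularized by s_i, where s_i = log(sigma_i^2)
    is a learnable per-task parameter. High-uncertainty tasks are
    automatically down-weighted."""
    total = 0.0
    for loss, s in zip(losses, log_vars):
        total += math.exp(-s) * loss + s
    return total
```

In training, the `log_vars` would be optimized jointly with the network weights rather than held fixed as in this sketch.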


Subjects
Lymph Nodes, Magnetic Resonance Imaging, Algorithms, Humans, Lymph Nodes/diagnostic imaging, Magnetic Resonance Imaging/methods, Uncertainty
5.
IEEE Trans Med Imaging ; 40(10): 2759-2770, 2021 10.
Article in English | MEDLINE | ID: mdl-33370236

ABSTRACT

Large-scale datasets with high-quality labels are desired for training accurate deep learning models. However, due to the annotation cost, datasets in medical imaging are often either partially-labeled or small. For example, DeepLesion is such a large-scale CT image dataset with lesions of various types, but it also has many unlabeled lesions (missing annotations). When training a lesion detector on a partially-labeled dataset, the missing annotations will generate incorrect negative signals and degrade the performance. Besides DeepLesion, there are several small single-type datasets, such as LUNA for lung nodules and LiTS for liver tumors. These datasets have heterogeneous label scopes, i.e., different lesion types are labeled in different datasets with other types ignored. In this work, we aim to develop a universal lesion detection algorithm to detect a variety of lesions. The problem of heterogeneous and partial labels is tackled. First, we build a simple yet effective lesion detection framework named Lesion ENSemble (LENS). LENS can efficiently learn from multiple heterogeneous lesion datasets in a multi-task fashion and leverage their synergy by proposal fusion. Next, we propose strategies to mine missing annotations from partially-labeled datasets by exploiting clinical prior knowledge and cross-dataset knowledge transfer. Finally, we train our framework on four public lesion datasets and evaluate it on 800 manually-labeled sub-volumes in DeepLesion. Our method brings a relative improvement of 49% compared to the current state-of-the-art approach in the metric of average sensitivity. We have publicly released our manual 3D annotations of DeepLesion online at https://github.com/viggin/DeepLesion_manual_test_set.
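LENS's proposal fusion, pooling detections from several dataset-specific heads and suppressing duplicates, can be sketched with a greedy non-maximum suppression over the merged proposals. This is illustrative; the paper's actual fusion rule may differ.

```python
import numpy as np

def fuse_proposals(proposal_sets, iou_thresh=0.5):
    """Merge detection proposals from several heads, then suppress
    duplicates with greedy NMS. Each proposal is (x1, y1, x2, y2, score)."""
    boxes = np.array([p for ps in proposal_sets for p in ps], dtype=float)
    order = boxes[:, 4].argsort()[::-1]           # highest score first
    keep = []
    while order.size:
        i = order[0]
        keep.append(boxes[i])
        if order.size == 1:
            break
        rest = boxes[order[1:]]
        xx1 = np.maximum(boxes[i, 0], rest[:, 0])
        yy1 = np.maximum(boxes[i, 1], rest[:, 1])
        xx2 = np.minimum(boxes[i, 2], rest[:, 2])
        yy2 = np.minimum(boxes[i, 3], rest[:, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (rest[:, 2] - rest[:, 0]) * (rest[:, 3] - rest[:, 1])
        iou = inter / (area_i + areas - inter + 1e-8)
        order = order[1:][iou < iou_thresh]       # drop overlapping boxes
    return keep
```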


Subjects
Algorithms, X-Ray Computed Tomography, Radiography
6.
Med Image Anal ; 67: 101839, 2021 01.
Article in English | MEDLINE | ID: mdl-33080508

ABSTRACT

The interpretation of medical images is a complex cognition procedure requiring cautious observation, precise understanding/parsing of the normal body anatomies, and combining knowledge of physiology and pathology. Interpreting chest X-ray (CXR) images is challenging since the 2D CXR images show the superimposition on internal organs/tissues with low resolution and poor boundaries. Unlike previous CXR computer-aided diagnosis works that focused on disease diagnosis/classification, we firstly propose a deep disentangled generative model (DGM) simultaneously generating abnormal disease residue maps and "radiorealistic" normal CXR images from an input abnormal CXR image. The intuition of our method is based on the assumption that disease regions usually superimpose upon or replace the pixels of normal tissues in an abnormal CXR. Thus, disease regions can be disentangled or decomposed from the abnormal CXR by comparing it with a generated patient-specific normal CXR. DGM consists of three encoder-decoder architecture branches: one for radiorealistic normal CXR image synthesis using adversarial learning, one for disease separation by generating a residue map to delineate the underlying abnormal region, and the other one for facilitating the training process and enhancing the model's robustness on noisy data. A self-reconstruction loss is adopted in the first two branches to enforce the generated normal CXR image to preserve similar visual structures as the original CXR. We evaluated our model on a large-scale chest X-ray dataset. The results show that our model can generate disease residue/saliency maps (coherent with radiologist annotations) along with radiorealistic and patient specific normal CXR images. The disease residue/saliency map can be used by radiologists to improve the CXR reading efficiency in clinical practice. The synthesized normal CXR can be used for data augmentation and normal control of personalized longitudinal disease study. 
Furthermore, DGM quantitatively boosts the diagnosis performance on several important clinical applications, including normal/abnormal CXR classification, and lung opacity classification/detection.
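The core disentanglement assumption, that disease regions superimpose on the pixels of normal tissue, reduces conceptually to a per-pixel difference between the abnormal image and its synthesized patient-specific normal counterpart. The sketch below is conceptual only: in DGM the residue map is produced by a learned branch, not by a raw subtraction, and the clipping choice is an assumption.

```python
import numpy as np

def disease_residue(abnormal, generated_normal):
    """Conceptual residue map: per-pixel difference between the abnormal
    image and its synthesized normal counterpart, clipped to keep only
    added opacity (values in [0, 1])."""
    return np.clip(abnormal - generated_normal, 0.0, 1.0)
```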


Subjects
Computer-Assisted Diagnosis, Thorax, Humans, Learning, Radiography, X-Rays
7.
NPJ Digit Med ; 3: 70, 2020.
Article in English | MEDLINE | ID: mdl-32435698

ABSTRACT

As one of the most ubiquitous diagnostic imaging tests in medical practice, chest radiography requires timely reporting of potential findings and diagnosis of diseases in the images. Automated, fast, and reliable detection of diseases based on chest radiography is a critical step in radiology workflow. In this work, we developed and evaluated various deep convolutional neural networks (CNN) for differentiating between normal and abnormal frontal chest radiographs, in order to help alert radiologists and clinicians of potential abnormal findings as a means of work list triaging and reporting prioritization. A CNN-based model achieved an AUC of 0.9824 ± 0.0043 (with an accuracy of 94.64 ± 0.45%, a sensitivity of 96.50 ± 0.36% and a specificity of 92.86 ± 0.48%) for normal versus abnormal chest radiograph classification. The CNN model obtained an AUC of 0.9804 ± 0.0032 (with an accuracy of 94.71 ± 0.32%, a sensitivity of 92.20 ± 0.34% and a specificity of 96.34 ± 0.31%) for normal versus lung opacity classification. Classification performance on the external dataset showed that the CNN model is likely to be highly generalizable, with an AUC of 0.9444 ± 0.0029. The CNN model pre-trained on cohorts of adult patients and fine-tuned on pediatric patients achieved an AUC of 0.9851 ± 0.0046 for normal versus pneumonia classification. Pretraining with natural images demonstrates benefit for a moderate-sized training image set of about 8500 images. The remarkable performance in diagnostic accuracy observed in this study shows that deep CNNs can accurately and effectively differentiate normal and abnormal chest radiographs, thereby providing potential benefits to radiology workflow and patient care.
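The sensitivity, specificity, and accuracy figures reported above are standard confusion-matrix quantities; for reference, they can be computed from binary labels and predictions as follows.

```python
def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity, and accuracy from binary labels (1 =
    abnormal/positive, 0 = normal/negative) and binary predictions."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    accuracy = (tp + tn) / len(y_true)
    return sensitivity, specificity, accuracy
```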

8.
IEEE Trans Image Process ; 26(3): 1509-1520, 2017 03.
Article in English | MEDLINE | ID: mdl-28113342

ABSTRACT

Scene text detection and segmentation are two important and challenging research problems in the field of computer vision. This paper proposes a novel method for scene text detection and segmentation based on cascaded convolution neural networks (CNNs). In this method, a CNN based text-aware candidate text region (CTR) extraction model (named detection network, DNet) is designed and trained using both the edges and the whole regions of text, with which coarse CTRs are detected. A CNN based CTR refinement model (named segmentation network, SNet) is then constructed to precisely segment the coarse CTRs into text to get the refined CTRs. With DNet and SNet, much fewer CTRs are extracted than with traditional approaches while more true text regions are kept. The refined CTRs are finally classified using a CNN based CTR classification model (named classification network, CNet) to get the final text regions. All of these CNN based models are modified from VGGNet-16. Extensive experiments on three benchmark datasets demonstrate that the proposed method achieves state-of-the-art performance and greatly outperforms other scene text detection and segmentation approaches.
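The three-stage cascade (DNet, then SNet, then CNet) amounts to composing three models, passing surviving candidate text regions (CTRs) forward at each step. A minimal control-flow sketch, with hypothetical stage interfaces:

```python
def cascade(dnet, snet, cnet, image):
    """Three-stage cascade: detect coarse candidate text regions,
    refine each by segmentation, then classify to keep true text.
    Stage signatures are assumptions for illustration."""
    coarse = dnet(image)                       # coarse CTRs
    refined = [snet(image, r) for r in coarse] # segmentation-refined CTRs
    return [r for r in refined if cnet(image, r)]  # final text regions
```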
