1.
Article in English | MEDLINE | ID: mdl-38648141

ABSTRACT

Accurate recognition of fetal anatomical structures is a pivotal task in ultrasound (US) image analysis. Sonographers naturally apply anatomical knowledge and clinical expertise when recognizing key anatomical structures in complex US images. Mainstream object detection approaches, however, usually treat the recognition of each structure separately, overlooking the anatomical correlations between different structures in fetal US planes. In this work, we propose a Fetal Anatomy Reasoning Network (FARN) that incorporates two forms of relationship: a global context semantic block summarized from visual similarity and a local topology relationship block depicting pairwise structural constraints. Specifically, the Adaptive Relation Graph Reasoning (ARGR) module treats anatomical structures as nodes and models the two kinds of relationships between them as edges. Constructing the relationship graph in a data-driven way enhances the model's flexibility, allowing it to adapt to varied data samples without predefined additional constraints. The feature representation is further enhanced by aggregating the outputs of the ARGR module. Comprehensive experiments demonstrate that FARN achieves promising performance in detecting 37 anatomical structures across key US planes in tertiary obstetric screening. FARN effectively exploits key relationships to improve detection performance, is robust to small, similar, and indistinct structures, and avoids detection errors that deviate from anatomical norms. Overall, our study serves as a resource for developing efficient and concise approaches to modeling inter-anatomy relationships.
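The core idea of the ARGR module — nodes for detected structures, edges built adaptively from the data rather than from a fixed anatomy template — can be illustrated with a minimal numpy sketch. This is not the authors' implementation; the function name, the cosine-similarity adjacency, and the single residual message-passing step are illustrative assumptions about what "adaptive relation graph reasoning" could look like in its simplest form.

```python
import numpy as np

def adaptive_graph_reasoning(node_feats: np.ndarray) -> np.ndarray:
    """One round of graph reasoning over detected anatomical structures.

    node_feats: (N, D) array, one feature vector per detected structure.
    Returns refined (N, D) features. The adjacency is derived from
    pairwise feature similarity, so the graph adapts to each sample
    instead of relying on predefined anatomical constraints.
    (Hypothetical sketch -- not the FARN/ARGR code.)
    """
    # Data-driven adjacency: cosine similarity between node features.
    norms = np.linalg.norm(node_feats, axis=1, keepdims=True) + 1e-8
    normed = node_feats / norms
    sim = normed @ normed.T                      # (N, N)

    # Row-wise softmax turns similarities into edge weights.
    sim = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    weights = np.exp(sim)
    weights /= weights.sum(axis=1, keepdims=True)

    # Message passing: each node aggregates its neighbours' features;
    # a residual connection preserves the original representation.
    return node_feats + weights @ node_feats
```

In a full detector, the refined node features would be fed back into the per-structure classification and box-regression heads, which is where relational context can suppress anatomically implausible detections.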

2.
Article in English | MEDLINE | ID: mdl-37930929

ABSTRACT

Biometric parameter measurement is a powerful tool for evaluating a fetus's gestational age, growth pattern, and abnormalities in 2D ultrasound. However, measuring fetal biometric parameters automatically remains challenging due to confounding factors, limited foreground-background contrast, the variety of fetal anatomy shapes at different gestational ages, and blurry anatomical boundaries in ultrasound images. The performance of a standard CNN architecture on these tasks is limited by its restricted receptive field. We propose a novel hybrid Transformer framework, TransFSM, to address fetal multi-anatomy segmentation and biometric measurement. Unlike the vanilla Transformer, which operates on a single-scale input, TransFSM has a deformable self-attention mechanism that effectively processes multi-scale information to segment fetal anatomy with irregular shapes and different sizes. We devised a BAD to capture more intrinsic local details using boundary-wise prior knowledge, compensating for the Transformer's weakness in extracting local features. In addition, an auxiliary Transformer segmentation head is designed to improve mask prediction by learning the semantic correspondence within the same pixel category and the feature discriminability among different pixel categories. Extensive experiments were conducted on clinical cases and benchmark datasets for the anatomy segmentation and biometric measurement tasks. The results indicate that our method achieves state-of-the-art performance on seven evaluation metrics compared with CNN-based, Transformer-based, and hybrid approaches. Through knowledge distillation, the proposed TransFSM yields a more compact and efficient model with high deployment potential in resource-constrained scenarios. Our study serves as a unified framework for biometric estimation across multiple anatomical regions to monitor fetal growth in clinical practice.
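The abstract's compression step relies on knowledge distillation: a compact student network is trained both on ground-truth labels and on the temperature-softened predictions of the full TransFSM teacher. The following numpy sketch shows the standard distillation objective in its generic per-pixel form; the function names, the temperature, and the blending weight are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def softmax(z: np.ndarray, t: float = 1.0) -> np.ndarray:
    """Temperature-scaled softmax over the last axis."""
    z = z / t
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels,
                      t: float = 2.0, alpha: float = 0.5) -> float:
    """Generic segmentation distillation objective (illustrative sketch).

    Blends hard cross-entropy against ground-truth labels with a soft
    KL term pulling the compact student toward the teacher's
    temperature-smoothed class distribution.

    student_logits, teacher_logits: (P, C) logits for P pixels, C classes.
    labels: (P,) integer ground-truth class per pixel.
    """
    # Hard term: ordinary cross-entropy on the true labels.
    p_student = softmax(student_logits)
    hard = -np.log(p_student[np.arange(len(labels)), labels] + 1e-8).mean()

    # Soft term: KL(teacher || student) at temperature t, scaled by t^2
    # as is conventional so gradients keep a comparable magnitude.
    q_t = softmax(teacher_logits, t)
    q_s = softmax(student_logits, t)
    soft = (q_t * (np.log(q_t + 1e-8) - np.log(q_s + 1e-8))).sum(-1).mean() * t * t

    return alpha * hard + (1 - alpha) * soft
```

When student and teacher logits agree, the soft term vanishes and only the supervised cross-entropy remains, which is what makes the objective a strict generalization of ordinary training.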
