Results 1 - 20 of 39
1.
Eur J Radiol; 181: 111736, 2024 Sep 14.
Article in English | MEDLINE | ID: mdl-39307069

ABSTRACT

PURPOSE: Compared to conventional energy-integrating detector CT, photon-counting CT (PCCT) offers increased spatial resolution. The pancreas is an anatomically complex organ, and the increased resolution of PCCT challenges radiologists' knowledge of pancreatic anatomy. The purpose of this review was to describe the detailed macroscopic and microscopic anatomy of the pancreas in the context of current and future PCCT. METHOD: This review is based on a literature review of all parts of pancreatic anatomy and a retrospective imaging review of PCCT scans from 20 consecutively included patients without pancreatic pathology (mean age 61.8 years, 11 female), scanned in the workup of pancreatic cancer with a contrast-enhanced multiphase protocol. Two radiologists assessed the visibility of the main and accessory pancreatic ducts, side ducts, ampulla, major papilla, minor papilla, pancreatic arteries and veins, regional lymph nodes, coeliac ganglia, and coeliac plexus. RESULTS: The macroscopic anatomy of the pancreas was consistently visualized with PCCT. Visualization of the detailed anatomy of the ductal system (including side ducts), papillae, arteries, veins, lymph nodes, and innervation was possible in 90% or more of patients, with moderate to good interreader agreement. CONCLUSION: PCCT consistently visualizes small anatomical structures of the pancreas that were previously unseen or only inconsistently seen. Increased knowledge of pancreatic anatomy could be important in imaging of pancreatic cancer and other pancreatic diseases.

2.
BMC Oral Health; 24(1): 772, 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-38987714

ABSTRACT

Integrating artificial intelligence (AI) into medical and dental applications can be challenging due to clinicians' distrust of computer predictions and the potential risks associated with erroneous outputs. We introduce the idea of using AI to trigger second opinions in cases where there is a disagreement between the clinician and the algorithm. By keeping the AI prediction hidden throughout the diagnostic process, we minimize the risks associated with distrust and erroneous predictions, relying solely on human predictions. The experiment involved 3 experienced dentists, 25 dental students, and 290 patients treated for advanced caries across 6 centers. We developed an AI model to predict pulp status following advanced caries treatment. Clinicians were asked to perform the same prediction without the assistance of the AI model. The second opinion framework was tested in a 1000-trial simulation. The average F1-score of the clinicians increased significantly from 0.586 to 0.645.
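
The disagreement trigger described above lends itself to a compact simulation. Below is a minimal, hypothetical Python sketch (synthetic labels and assumed rater accuracies, not the authors' simulation code) of how a hidden AI prediction can be used solely to decide when a second human opinion is requested:

```python
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n_cases = 290                         # cohort size from the abstract
y_true = rng.integers(0, 2, n_cases)  # ground-truth pulp status

def rater(y, accuracy):
    """Simulate a binary rater that matches the truth with prob. `accuracy`."""
    flip = rng.random(y.size) > accuracy
    return np.where(flip, 1 - y, y)

ai = rater(y_true, 0.80)       # AI prediction, kept hidden from everyone
first = rater(y_true, 0.70)    # first clinician's call
second = rater(y_true, 0.70)   # independent second opinion

# Where the hidden AI disagrees with the clinician, defer to a second
# human opinion; the AI output itself is never shown or used directly.
final = np.where(ai != first, second, first)

print("F1, single opinion:       ", round(f1_score(y_true, first), 3))
print("F1, triggered 2nd opinion:", round(f1_score(y_true, final), 3))
```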


Subjects
Artificial Intelligence; Dental Caries; Humans; Dental Caries/therapy; Referral and Consultation; Patient Care Planning; Algorithms
3.
Med Phys; 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-39031886

ABSTRACT

BACKGROUND: The pancreas is a complex abdominal organ with many anatomical variations, and therefore automated pancreas segmentation from medical images is a challenging application. PURPOSE: In this paper, we present a framework for segmenting individual pancreatic subregions and the pancreatic duct from three-dimensional (3D) computed tomography (CT) images. METHODS: A multiagent reinforcement learning (RL) network was used to detect landmarks of the head, neck, body, and tail of the pancreas, and landmarks along the pancreatic duct in a selected target CT image. Using the landmark detection results, an atlas of pancreases was nonrigidly registered to the target image, resulting in anatomical probability maps for the pancreatic subregions and duct. The probability maps were augmented with multilabel 3D U-Net architectures to obtain the final segmentation results. RESULTS: To evaluate the performance of our proposed framework, we computed the Dice similarity coefficient (DSC) between the predicted and ground truth manual segmentations on a database of 82 CT images with manually segmented pancreatic subregions and 37 CT images with manually segmented pancreatic ducts. For the four pancreatic subregions, the mean DSC improved from 0.38, 0.44, and 0.39 with standard 3D U-Net, Attention U-Net, and shifted windowing (Swin) U-Net architectures, to 0.51, 0.47, and 0.49, respectively, when utilizing the proposed RL-based framework. For the pancreatic duct, the RL-based framework achieved a mean DSC of 0.70, significantly outperforming the standard approaches and existing methods on different datasets. CONCLUSIONS: The resulting accuracy of the proposed RL-based segmentation framework demonstrates an improvement over segmentation with standard U-Net architectures.
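
For reference, the Dice similarity coefficient used for evaluation above can be computed from two binary masks as in this minimal sketch (a generic implementation, not the authors' code):

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary (boolean) masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0                      # both masks empty: perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom
```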

4.
Radiother Oncol; 198: 110410, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38917883

ABSTRACT

BACKGROUND AND PURPOSE: To promote the development of auto-segmentation methods for head and neck (HaN) radiation treatment (RT) planning that exploit the information of computed tomography (CT) and magnetic resonance (MR) imaging modalities, we organized HaN-Seg: The Head and Neck Organ-at-Risk CT and MR Segmentation Challenge. MATERIALS AND METHODS: The challenge task was to automatically segment 30 organs-at-risk (OARs) of the HaN region in 14 withheld test cases, given the availability of 42 publicly available training cases. Each case consisted of one contrast-enhanced CT and one T1-weighted MR image of the HaN region of the same patient, with up to 30 corresponding reference OAR delineation masks. Performance was evaluated in terms of the Dice similarity coefficient (DSC) and the 95th-percentile Hausdorff distance (HD95), and statistical ranking was applied for each metric by pairwise comparison of the submitted methods using the Wilcoxon signed-rank test. RESULTS: While 23 teams registered for the challenge, only seven submitted their methods for the final phase. The top-performing team achieved a DSC of 76.9% and an HD95 of 3.5 mm. All participating teams utilized architectures based on U-Net, with the winning team leveraging rigid MR to CT registration combined with network entry-level concatenation of both modalities. CONCLUSION: This challenge simulated a real-world clinical scenario by providing non-registered MR and CT images with varying fields-of-view and voxel sizes. Remarkably, the top-performing teams achieved segmentation performance surpassing the inter-observer agreement on the same dataset. These results set a benchmark for future research on this publicly available dataset and on paired multi-modal image segmentation in general.
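
The pairwise statistical ranking can be illustrated with a short sketch. The win-counting aggregation below is one plausible scheme built on SciPy's Wilcoxon signed-rank test, not necessarily the challenge's exact ranking protocol:

```python
import numpy as np
from scipy.stats import wilcoxon

def rank_methods(dsc_per_method: dict, alpha: float = 0.05) -> list:
    """Count, for each method, how many rivals it significantly outperforms.

    dsc_per_method maps a method name to its per-case DSC array; all
    arrays are paired (same test cases in the same order).
    """
    wins = {m: 0 for m in dsc_per_method}
    methods = list(dsc_per_method)
    for i, a in enumerate(methods):
        for b in methods[i + 1:]:
            diff = dsc_per_method[a] - dsc_per_method[b]
            if not np.any(diff):
                continue                      # identical scores: no winner
            if wilcoxon(diff).pvalue < alpha:
                wins[a if diff.mean() > 0 else b] += 1
    return sorted(wins.items(), key=lambda kv: -kv[1])
```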


Subjects
Head and Neck Neoplasms; Magnetic Resonance Imaging; Organs at Risk; Radiotherapy Planning, Computer-Assisted; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; Magnetic Resonance Imaging/methods; Head and Neck Neoplasms/diagnostic imaging; Head and Neck Neoplasms/radiotherapy; Organs at Risk/radiation effects; Radiotherapy Planning, Computer-Assisted/methods
5.
IEEE J Biomed Health Inform; 28(6): 3597-3612, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38421842

ABSTRACT

Machine learning (ML) has revolutionized medical image-based diagnostics. In this review, we cover a rapidly emerging field that could be significantly impacted by ML: eye tracking in medical imaging. The review investigates the clinical, algorithmic, and hardware properties of the existing studies. In particular, it evaluates 1) the type of eye-tracking equipment used and how the equipment aligns with the study aims; 2) the software required to record and process eye-tracking data, which often requires user interface development as well as controller command and voice recording; 3) the ML methodology utilized, depending on the anatomy of interest, gaze data representation, and target clinical application. The review concludes with a summary of recommendations for future studies and confirms that the inclusion of gaze data broadens the applicability of ML in radiology from computer-aided diagnosis (CAD) to gaze-based image annotation, physicians' error detection, fatigue recognition, and other areas of potentially high research and clinical impact.


Subjects
Eye-Tracking Technology; Machine Learning; Humans; Diagnostic Imaging/methods; Algorithms; Eye Movements/physiology; Image Processing, Computer-Assisted/methods
6.
Med Phys; 51(3): 2175-2186, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38230752

ABSTRACT

BACKGROUND: Accurate and consistent contouring of organs-at-risk (OARs) from medical images is a key step of radiotherapy (RT) cancer treatment planning. Most contouring approaches rely on computed tomography (CT) images, but the integration of the complementary magnetic resonance (MR) modality is highly recommended, especially from the perspective of OAR contouring, synthetic CT and MR image generation for MR-only RT, and MR-guided RT. Although MR has been recognized as valuable for contouring OARs in the head and neck (HaN) region, the accuracy and consistency of the resulting contours have not yet been objectively evaluated. PURPOSE: To analyze the interobserver and intermodality variability in contouring OARs in the HaN region, performed by observers with different levels of experience from CT and MR images of the same patients. METHODS: In the final cohort of 27 CT and MR images of the same patients, contours of up to 31 OARs were obtained by a radiation oncology resident (junior observer, JO) and a board-certified radiation oncologist (senior observer, SO). The resulting contours were evaluated in terms of interobserver variability, characterized as the agreement among different observers (JO and SO) when contouring OARs in a selected modality (CT or MR), and intermodality variability, characterized as the agreement among different modalities (CT and MR) when OARs were contoured by a selected observer (JO or SO), both in terms of the Dice coefficient (DC) and the 95th-percentile Hausdorff distance (HD95). RESULTS: The mean (± standard deviation) interobserver variability was 69.0 ± 20.2% and 5.1 ± 4.1 mm, while the mean intermodality variability was 61.6 ± 19.0% and 6.1 ± 4.3 mm in terms of DC and HD95, respectively, across all OARs. Statistically significant differences were found only for specific OARs. The performed MR to CT image registration resulted in a mean target registration error of 1.7 ± 0.5 mm, which was considered valid for the analysis of intermodality variability. CONCLUSIONS: The contouring variability was, in general, similar for both image modalities, and experience did not considerably affect the contouring performance. However, the results indicate that an OAR that is difficult to contour remains difficult regardless of whether it is contoured in the CT or MR image, and that observer experience may be an important factor for OARs that are deemed difficult to contour. Several of the differences in the resulting variability can also be attributed to adherence to guidelines, especially for OARs with poor visibility or without distinctive boundaries in either CT or MR images. Although considerable contouring differences were observed for specific OARs, it can be concluded that almost all OARs can be contoured with a similar degree of variability in either the CT or MR modality, which works in favor of MR images from the perspective of MR-only and MR-guided RT.
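
For reference, the HD95 metric used above can be sketched with distance transforms as follows. This simplified version compares full masks rather than extracted surfaces, so its values can differ slightly from surface-based implementations:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def hd95(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th-percentile Hausdorff distance between two non-empty boolean masks.

    spacing is the voxel size in mm along each axis.
    """
    dist_to_b = distance_transform_edt(~b, sampling=spacing)  # mm to nearest b voxel
    dist_to_a = distance_transform_edt(~a, sampling=spacing)  # mm to nearest a voxel
    return max(np.percentile(dist_to_b[a], 95),
               np.percentile(dist_to_a[b], 95))
```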


Subjects
Head and Neck Neoplasms; Radiotherapy Planning, Computer-Assisted; Humans; Radiotherapy Planning, Computer-Assisted/methods; Neck; Tomography, X-Ray Computed; Magnetic Resonance Imaging; Head; Organs at Risk; Observer Variation; Head and Neck Neoplasms/diagnostic imaging; Head and Neck Neoplasms/radiotherapy
7.
J Dent; 138: 104732, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37778496

ABSTRACT

OBJECTIVES: The objective was to examine the effect of giving Artificial Intelligence (AI)-based radiographic information versus standard radiographic and clinical information to dental students on their pulp exposure prediction ability. METHODS: 292 preoperative bitewing radiographs from previously treated patients were used. A multi-path neural network was implemented. The first path was a convolutional neural network (CNN) based on the ResNet-50 architecture. The second path was a neural network trained on the distance between the pulp and the lesion extracted from X-ray segmentations. Both paths merged and were followed by fully connected layers that predicted the probability of pulp exposure. A trial concerning the prediction of pulp exposure based on radiographic input and information on age and pain was conducted, involving 25 dental students. The displayed data were divided into 4 groups (G): GX-ray, GX-ray+clinical data, GX-ray+AI, GX-ray+clinical data+AI. RESULTS: The results showed that AI surpassed the performance of students in all groups with an F1-score of 0.71 (P < 0.001). The students' F1-score in GX-ray+AI and GX-ray+clinical data+AI with model prediction (0.61 and 0.61, respectively) was slightly higher than the F1-score in GX-ray and GX-ray+clinical data (0.58 and 0.59, respectively), with borderline statistical significance (P = 0.054). CONCLUSIONS: Although the AI model performed much better than all groups, the participants, when given the AI prediction, benefited only slightly. AI technology seems promising, but more explainable AI predictions, along with a 'learning curve', are warranted.
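
A hedged PyTorch sketch of the two-path architecture described above follows: a ResNet-50 image branch and a small branch for the pulp-lesion distance feature, merged into fully connected layers. Layer sizes are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class PulpExposureNet(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet50(weights=None)
        backbone.fc = nn.Identity()           # expose the 2048-d embedding
        self.image_path = backbone            # path 1: bitewing radiograph
        self.dist_path = nn.Sequential(       # path 2: pulp-lesion distance
            nn.Linear(1, 16), nn.ReLU())
        self.head = nn.Sequential(            # merged fully connected layers
            nn.Linear(2048 + 16, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, xray, distance):
        merged = torch.cat([self.image_path(xray), self.dist_path(distance)], dim=1)
        return torch.sigmoid(self.head(merged))   # probability of pulp exposure

model = PulpExposureNet()
p = model(torch.randn(2, 3, 224, 224), torch.randn(2, 1))  # -> shape (2, 1)
```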


Subjects
Deep Learning; Dental Caries; Humans; Artificial Intelligence; Dental Caries Susceptibility; Neural Networks, Computer; Dental Caries/diagnostic imaging; Dental Caries/therapy
8.
J Digit Imaging; 36(3): 767-775, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36622464

ABSTRACT

The workload of some radiologists increased dramatically in the last several years, which resulted in a potentially reduced quality of diagnosis. It has been demonstrated that the diagnostic accuracy of radiologists significantly decreases at the end of work shifts. This study aims to investigate how radiologists cover chest X-rays with their gaze in the presence of different chest abnormalities and high workload. We designed a randomized experiment to quantitatively assess how radiologists' image reading patterns change with radiological workload. Four radiologists read chest X-rays on a radiological workstation equipped with an eye-tracker. The lung fields on the X-rays were automatically segmented with a U-Net neural network, allowing the lung coverage with radiologists' gaze to be measured. The images were randomly split so that each image was shown at a different time to a different radiologist. Regression models were fit to the gaze data to calculate the trends in lung coverage for individual radiologists and chest abnormalities. For the study, a database of 400 chest X-rays with reference diagnoses was assembled. The average lung coverage with gaze ranged from 55 to 65% per radiologist. For every 100 X-rays read, the lung coverage decreased by 1.3 to 7.6% for the different radiologists. The coverage reduction trends were consistent across abnormalities, ranging from 3.4% per 100 X-rays for cardiomegaly to 4.1% per 100 X-rays for atelectasis. The more images radiologists read, the smaller the part of the lung fields they cover with their gaze. This pattern is very stable across all abnormality types and is not affected by the exact order in which the abnormalities are viewed by radiologists. The proposed randomized experiment captured and quantified consistent changes in X-ray reading for different lung abnormalities that occur due to high workload.
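
The per-radiologist trend estimation described above can be illustrated with a simple linear fit; the data below is synthetic, and the slope magnitude is chosen to mimic the reported range:

```python
import numpy as np

rng = np.random.default_rng(1)
reads = np.arange(100)                 # position of each X-ray in the shift
coverage = 0.62 - 0.0004 * reads + rng.normal(0, 0.03, reads.size)

slope, intercept = np.polyfit(reads, coverage, deg=1)
# slope is the coverage change per X-ray; scale to "per 100 X-rays"
print(f"lung coverage change per 100 X-rays: {100 * slope:+.1%} points")
```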


Subjects
Radiologists; Radiology; Humans; X-Rays; Radiography; Lung/diagnostic imaging
9.
Sci Rep; 13(1): 1135, 2023 Jan 20.
Article in English | MEDLINE | ID: mdl-36670118

ABSTRACT

In 2020, an experiment testing AI solutions for lung X-ray analysis on a multi-hospital network was conducted. The multi-hospital network linked 178 Moscow state healthcare centers, where all chest X-rays from the network were redirected to a research facility, analyzed with AI, and returned to the centers. The experiment was formulated as a public competition with monetary awards for participating industrial and research teams. The task was to perform binary detection of abnormalities from chest X-rays. For objective real-life evaluation, no training X-rays were provided to the participants. This paper presents one of the top-performing AI frameworks from this experiment. First, the framework used two EfficientNets, histograms of gradients, Haar feature ensembles, and local binary patterns to recognize whether an input image represents an acceptable lung X-ray sample, meaning the X-ray is not grayscale-inverted, is a frontal chest X-ray, and completely captures both lung fields. Second, the framework extracted the region with the lung fields and passed it to a multi-head DenseNet, where the heads recognized the patient's gender, age, and the potential presence of abnormalities, and generated a heatmap with the abnormality regions highlighted. During one month of the experiment, from 11.23.2020 to 12.25.2020, 17,888 cases were analyzed by the framework, with 11,902 cases having radiological reports with reference diagnoses that were unequivocally parsed by the experiment organizers. The performance measured in terms of the area under the receiver operating characteristic curve (AUC) was 0.77. The AUC for individual diseases ranged from 0.55 for herniation to 0.90 for pneumothorax.
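
A hedged sketch of the multi-head design described in the second stage: a shared DenseNet-121 trunk with separate heads for gender, age, and abnormality presence, written in PyTorch. Head sizes and activations are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import densenet121

class MultiHeadChestNet(nn.Module):
    def __init__(self):
        super().__init__()
        body = densenet121(weights=None)
        self.trunk = body.features                 # shared convolutional trunk
        self.pool = nn.AdaptiveAvgPool2d(1)
        width = body.classifier.in_features        # 1024 for DenseNet-121
        self.gender_head = nn.Linear(width, 1)
        self.age_head = nn.Linear(width, 1)
        self.abnormality_head = nn.Linear(width, 1)

    def forward(self, x):
        h = self.pool(torch.relu(self.trunk(x))).flatten(1)
        return (torch.sigmoid(self.gender_head(h)),       # P(male)
                self.age_head(h),                          # age regression
                torch.sigmoid(self.abnormality_head(h)))   # P(abnormal)
```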


Subjects
Pneumothorax; Radiography, Thoracic; Humans; Radiography, Thoracic/methods; Lung/diagnostic imaging; Thorax; Artificial Intelligence
10.
Med Phys; 50(3): 1917-1927, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36594372

ABSTRACT

PURPOSE: For cancer of the head and neck (HaN), radiotherapy (RT) represents an important treatment modality. Segmentation of organs-at-risk (OARs) is the starting point of RT planning; however, existing approaches are focused on either computed tomography (CT) or magnetic resonance (MR) images, while multimodal segmentation has not yet been thoroughly explored. We present a dataset of CT and MR images of the same patients with curated reference HaN OAR segmentations for an objective evaluation of segmentation methods. ACQUISITION AND VALIDATION METHODS: The cohort consists of HaN images of 56 patients that underwent both CT and T1-weighted MR imaging for image-guided RT. For each patient, reference segmentations of up to 30 OARs were obtained by experts performing manual pixel-wise image annotation. By maintaining the distribution of patient age and gender, and the annotation type, the patients were randomly split into training Set 1 (42 cases, or 75%) and test Set 2 (14 cases, or 25%). Baseline auto-segmentation results are also provided, obtained by training the publicly available deep nnU-Net architecture on Set 1 and evaluating its performance on Set 2. DATA FORMAT AND USAGE NOTES: The data are publicly available through an open-access repository under the name HaN-Seg: The Head and Neck Organ-at-Risk CT & MR Segmentation Dataset. Images and reference segmentations are stored in the NRRD file format, where the OAR filenames correspond to the nomenclature recommended by the American Association of Physicists in Medicine, and OAR and demographic information is stored in separate comma-separated value files. POTENTIAL APPLICATIONS: The HaN-Seg: The Head and Neck Organ-at-Risk CT & MR Segmentation Challenge is launched in parallel with the dataset release to promote the development of automated techniques for OAR segmentation in the HaN. Other potential applications include out-of-challenge algorithm development and benchmarking, as well as external validation of the developed algorithms.
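
For orientation, NRRD files of this kind can be loaded with the pynrrd package; the sketch below uses hypothetical filenames, since the dataset's exact naming follows the AAPM nomenclature mentioned above:

```python
import nrrd  # pip install pynrrd

# Filenames below are hypothetical; actual OAR filenames follow the
# AAPM nomenclature mentioned in the data description.
image, image_header = nrrd.read("case_01_IMG_CT.nrrd")
mask, mask_header = nrrd.read("case_01_OAR_Parotid_L.nrrd")
print(image.shape, image_header.get("space directions"))
```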


Subjects
Head and Neck Neoplasms; Radiotherapy, Image-Guided; Humans; Algorithms; Head and Neck Neoplasms/diagnostic imaging; Head and Neck Neoplasms/radiotherapy; Image Processing, Computer-Assisted/methods; Organs at Risk/diagnostic imaging; Tomography, X-Ray Computed/methods
11.
Acta Odontol Scand; 81(6): 422-435, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36548872

ABSTRACT

OBJECTIVES: To assess the efficiency of AI methods in finding radiographic features relevant to endodontic treatment considerations. MATERIAL AND METHODS: This review was based on the PRISMA guidelines and the QUADAS-2 tool. A systematic search of the literature was performed on cases with endodontic treatments, comparing AI algorithms (test) versus conventional image assessments (control) for finding radiographic features. The search was conducted in PubMed, Scopus, Google Scholar, and the Cochrane Library. Inclusion criteria were studies on the use of AI and machine learning in endodontic treatments using dental X-rays. RESULTS: The initial search retrieved 1131 papers, of which 24 were included. High heterogeneity of the materials precluded a meta-analysis. The reported subcategories were periapical lesions, vertical root fractures, prediction of root/canal morphology, locating the minor apical foramen, tooth segmentation, and endodontic retreatment prediction. The radiographic features assessed were mostly periapical lesions. The studies mostly used the decision of 1-3 experts as the reference for training their models. Almost half of the included studies compared their trained neural network model with other methods. More than 58% of the studies had some level of bias. CONCLUSIONS: AI-based models have shown effectiveness in finding radiographic features in different endodontic treatments. While the reported accuracy measurements seem promising, most of the papers were methodologically biased.


Subjects
Artificial Intelligence; Tooth; Humans; Dental Care; Root Canal Therapy/methods
12.
IEEE J Biomed Health Inform; 26(9): 4541-4550, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35704540

ABSTRACT

Around 60-80% of radiological errors are attributed to overlooked abnormalities, the rate of which increases at the end of work shifts. In this study, we ran an experiment to investigate whether artificial intelligence (AI) can assist in detecting radiologists' gaze patterns that correlate with fatigue. A retrospective database of lung X-ray images with reference diagnoses was used. The X-ray images were acquired from 400 subjects with a mean age of 49 ± 17 years, 61% of whom were men. Four practicing radiologists read these images while their eye movements were recorded. The radiologists passed a series of concentration tests at prearranged breaks in the experiment. A U-Net neural network was adapted to annotate the lung anatomy on X-rays and calculate coverage and information gain features from the radiologists' eye movements over the lung fields. The lung coverage, information gain, and eye tracker-based features were compared with the cumulative work done (CWD) label for each radiologist. The gaze-traveled distance, X-ray coverage, and lung coverage deteriorated statistically significantly (p < 0.01) with CWD for three out of four radiologists. The reading time and information gain over the lungs deteriorated statistically significantly for all four radiologists. We discovered a novel AI-based metric blending reading time, speed, and organ coverage, which can be used to predict changes in fatigue-related image reading patterns.


Subjects
Artificial Intelligence; Workload; Adult; Aged; Fatigue; Female; Humans; Male; Middle Aged; Radiologists; Retrospective Studies
13.
Eur Spine J; 31(8): 2115-2124, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35596800

ABSTRACT

PURPOSE: To propose a fully automated deep learning (DL) framework for vertebral morphometry and Cobb angle measurement from three-dimensional (3D) computed tomography (CT) images of the spine, and to validate the proposed framework on an external database. METHODS: The vertebrae were first localized and segmented in each 3D CT image using a DL architecture based on an ensemble of U-Nets. Automated vertebral morphometry, in the form of vertebral body (VB) and intervertebral disk (IVD) heights, and spinal curvature measurements, in the form of coronal and sagittal Cobb angles (thoracic kyphosis and lumbar lordosis), were then performed using dedicated machine learning techniques. The framework was trained on 1725 vertebrae from 160 CT images and validated on an external database of 157 vertebrae from 15 CT images. RESULTS: The resulting mean absolute errors (± standard deviation) between the obtained DL and corresponding manual measurements were 1.17 ± 0.40 mm for VB heights, 0.54 ± 0.21 mm for IVD heights, and 3.42 ± 1.36° for coronal and sagittal Cobb angles, with respective maximal absolute errors of 2.51 mm, 1.64 mm, and 5.52°. Linear regression revealed excellent agreement, with Pearson's correlation coefficients of 0.943, 0.928, and 0.996, respectively. CONCLUSION: The obtained results are within the range of values obtained by existing DL approaches without external validation. The results therefore confirm the scalability of the proposed DL framework from the perspective of application to external data, and of the time and computational resources required for framework training.
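
As a worked illustration of the angle measurements, a Cobb-style angle can be computed from two endplate direction vectors; this generic sketch is one plausible form of such a measurement, not the authors' exact pipeline:

```python
import numpy as np

def cobb_angle(endplate_a: np.ndarray, endplate_b: np.ndarray) -> float:
    """Angle in degrees between two endplate direction vectors."""
    cos_angle = abs(np.dot(endplate_a, endplate_b)) / (
        np.linalg.norm(endplate_a) * np.linalg.norm(endplate_b))
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))

# Endplates tilted by ~+11 deg and ~-17 deg give a Cobb angle of ~28 deg.
print(cobb_angle(np.array([1.0, 0.2]), np.array([1.0, -0.3])))
```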


Subjects
Deep Learning; Kyphosis; Lordosis; Scoliosis; Humans; Lumbar Vertebrae/diagnostic imaging; Thoracic Vertebrae/diagnostic imaging
14.
Med Image Anal; 78: 102417, 2022 May.
Article in English | MEDLINE | ID: mdl-35325712

ABSTRACT

Morphological abnormalities of the femoroacetabular (hip) joint are among the most common human musculoskeletal disorders and often develop asymptomatically at early, easily treatable stages. In this paper, we propose an automated framework for landmark-based detection and quantification of hip abnormalities from magnetic resonance (MR) images. The framework relies on a novel idea of multi-landmark environment analysis with reinforcement learning. In particular, we merge the concepts of the graphical lasso and Morris sensitivity analysis with deep neural networks to quantitatively estimate the contribution of individual landmark and landmark subgroup locations to the other landmark locations. Convolutional neural networks for image segmentation are utilized to propose the initial landmark locations, and landmark detection is then formulated as a reinforcement learning (RL) problem, where each landmark-agent can adjust its position by observing the local MR image neighborhood and the locations of the most contributive landmarks. The framework was validated on T1-, T2-, and proton density-weighted MR images of 260 patients with the aim of measuring the lateral center-edge angle (LCEA), femoral neck-shaft angle (NSA), and the anterior and posterior acetabular sector angles (AASA and PASA) of the hip, and of deriving quantitative abnormality metrics from these angles. The framework was successfully tested using the UNet and feature pyramid network (FPN) segmentation architectures for landmark proposal generation, and the deep Q-network (DeepQN), deep deterministic policy gradient (DDPG), twin delayed deep deterministic policy gradient (TD3), and actor-critic policy gradient (A2C) RL networks for landmark position optimization. The resulting overall landmark detection error of 1.5 mm and angle measurement error of 1.4° indicate superior performance in comparison to existing methods. Moreover, the automatically estimated abnormality labels were in 95% agreement with those generated by an expert radiologist.
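
A schematic sketch of a single landmark-agent step in the RL formulation described above: the agent observes its neighborhood (summarized here as a Q-value vector from the policy network) and moves one voxel along the best-scoring direction. This is a didactic simplification, not the authors' implementation:

```python
import numpy as np

# Six unit moves in a 3D volume: +/-x, +/-y, +/-z.
ACTIONS = np.array([[1, 0, 0], [-1, 0, 0],
                    [0, 1, 0], [0, -1, 0],
                    [0, 0, 1], [0, 0, -1]])

def agent_step(position: np.ndarray, q_values: np.ndarray) -> np.ndarray:
    """Move the landmark one voxel along the highest-valued action."""
    return position + ACTIONS[int(np.argmax(q_values))]

# With a stand-in Q-vector, the agent at (40, 52, 17) takes one greedy step.
print(agent_step(np.array([40, 52, 17]), np.random.default_rng(2).random(6)))
```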


Subjects
Hip Joint/abnormalities; Neural Networks, Computer; Hip Joint/diagnostic imaging; Humans; Learning; Magnetic Resonance Imaging
15.
Eur Spine J; 31(8): 2031-2045, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35278146

ABSTRACT

PURPOSE: To summarize and critically evaluate the existing studies on deep learning (DL)-based spinopelvic measurements of sagittal balance. METHODS: Three databases (PubMed, WoS, and Scopus) were queried for records using keywords related to DL and the measurement of sagittal balance. After screening the resulting 529 records, augmented with a specific web search, 34 studies published between 2017 and 2022 were included in the final review and evaluated from the perspective of the observed sagittal spinopelvic parameters, the properties of the spine image datasets, the applied DL methodology, and the resulting measurement performance. RESULTS: Studies reported DL measurement of up to 18 different spinopelvic parameters, but the actual number depended on the image field of view. Image datasets were composed of lateral lumbar spine and whole-spine X-rays, biplanar whole-spine X-rays, and lumbar spine magnetic resonance cross-sections, and were increasing in size or enriched by augmentation techniques. Spinopelvic parameter measurement was approached either by landmark detection or structure segmentation, and U-Net was the most frequently applied DL architecture. The latest DL methods achieved excellent performance in terms of mean absolute error against reference manual measurements (~2° or ~1 mm). CONCLUSION: Although the application of relatively complex DL architectures resulted in improved measurement accuracy of sagittal spinopelvic parameters, future methods should focus on multi-institution and multi-observer analyses, as well as uncertainty estimation and error handling, for integration into the clinical workflow. Further advances will enhance the predictive analytics of DL methods for spinopelvic parameter measurement. LEVEL OF EVIDENCE: I. Diagnostic: individual cross-sectional studies with a consistently applied reference standard and blinding.


Subjects
Deep Learning; Cross-Sectional Studies; Humans; Lumbar Vertebrae/diagnostic imaging; Lumbosacral Region/diagnostic imaging; Pelvis/diagnostic imaging; Radiography
16.
IEEE J Biomed Health Inform; 25(10): 3886-3897, 2021 Oct.
Article in English | MEDLINE | ID: mdl-33945490

ABSTRACT

Accurate segmentation of polyps from colonoscopy images provides useful information for the diagnosis and treatment of colorectal cancer. Although deep learning methods have advanced automatic polyp segmentation, their performance often degrades when applied to new data acquired from different scanners or sequences (target domain). As manual annotation is tedious and labor-intensive for a new target domain, leveraging knowledge learned from the labeled source domain to promote performance in the unlabeled target domain is in high demand. In this work, we propose a mutual-prototype adaptation network to eliminate domain shifts in multi-center and multi-device colonoscopy images. We first devise a mutual-prototype alignment (MPA) module with a prototype relation function to refine features through self-domain and cross-domain information in a coarse-to-fine process. Then two auxiliary modules, progressive self-training (PST) and disentangled reconstruction (DR), are proposed to improve the segmentation performance. The PST module selects reliable pseudo-labels through a novel uncertainty-guided self-training loss to obtain accurate prototypes in the target domain. The DR module reconstructs the original images by jointly utilizing prediction results and private prototypes to maintain semantic consistency and provide complementary supervision information. We extensively evaluated the polyp segmentation performance of the proposed model on three conventional colonoscopy datasets: CVC-DB, Kvasir-SEG, and ETIS-Larib. The comprehensive experimental results demonstrate that the proposed model outperforms state-of-the-art methods.
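
A schematic sketch of the "prototype" notion underlying the MPA module: a class prototype as the mask-weighted mean feature vector, which is the standard definition in prototype-based adaptation. Shapes are illustrative:

```python
import torch

def class_prototype(features: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Mask-weighted mean feature vector.

    features: (C, H, W) feature map; mask: (H, W) boolean class mask.
    Returns the (C,) prototype of the class.
    """
    m = mask.float()
    return (features * m).sum(dim=(1, 2)) / m.sum().clamp(min=1.0)
```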


Subjects
Colonoscopy; Neural Networks, Computer; Humans; Semantics
17.
Sci Rep; 11(1): 3246, 2021 Feb 5.
Article in English | MEDLINE | ID: mdl-33547335

ABSTRACT

Patients with severe COVID-19 have overwhelmed healthcare systems worldwide. We hypothesized that machine learning (ML) models could be used to predict risks at different stages of management and thereby provide insights into drivers and prognostic markers of disease progression and death. From a cohort of approximately 2.6 million citizens in Denmark, SARS-CoV-2 PCR tests were performed on subjects suspected of COVID-19; 3944 cases had at least one positive test and were subjected to further analysis. SARS-CoV-2-positive cases from the United Kingdom Biobank were used for external validation. The ML models predicted the risk of death with a receiver operating characteristic area under the curve (ROC-AUC) of 0.906 at diagnosis, 0.818 at hospital admission, and 0.721 at intensive care unit (ICU) admission. Similar metrics were achieved for the predicted risks of hospital and ICU admission and the use of mechanical ventilation. Common risk factors included age, body mass index, and hypertension, although the top risk features shifted towards markers of shock and organ dysfunction in ICU patients. The external validation indicated fair predictive performance for mortality prediction, but suboptimal performance for predicting ICU admission. ML may be used to identify drivers of progression to more severe disease and for prognostication in patients with COVID-19. We provide access to an online risk calculator based on these findings.
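
A minimal sketch of the headline evaluation: ROC-AUC of a risk classifier, here a gradient-boosted model on synthetic stand-in features (the study's actual features, such as age, BMI, and hypertension, appear only as placeholders in the comments):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))   # stand-ins for age, BMI, blood pressure, ...
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=2000)) > 1.0   # synthetic outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"ROC-AUC: {auc:.3f}")
```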


Subjects
COVID-19/diagnosis; COVID-19/mortality; Computer Simulation; Machine Learning; Age Factors; Aged; Aged, 80 and over; Body Mass Index; COVID-19/complications; COVID-19/physiopathology; Comorbidity; Critical Care; Female; Hospitalization; Humans; Hypertension/complications; Intensive Care Units; Male; Middle Aged; Prognosis; Prospective Studies; ROC Curve; Respiration, Artificial; Risk Factors; Sex Factors
18.
IEEE J Biomed Health Inform; 25(5): 1660-1672, 2021 May.
Article in English | MEDLINE | ID: mdl-32956067

ABSTRACT

Pneumothorax is a potentially life-threatening condition that requires urgent diagnosis and treatment. The chest X-ray is the diagnostic modality of choice when pneumothorax is suspected. Computer-aided diagnosis of pneumothorax has received a dramatic boost in the last few years due to deep learning advances and the first public pneumothorax diagnosis competition, with 15,257 chest X-rays manually annotated by a team of 19 radiologists. This paper describes one of the top frameworks that participated in the competition. The framework investigates the benefits of combining the U-Net convolutional neural network with various backbones, namely ResNet34, SE-ResNext50, SE-ResNext101, and DenseNet121. The paper presents step-by-step instructions for applying the framework, including data augmentation and different pre- and post-processing steps. The performance of the framework was 0.8574, measured in terms of the Dice coefficient. The second contribution of the paper is a comparison of the deep learning framework against three experienced radiologists on pneumothorax detection and segmentation in challenging X-rays. We also evaluated how the diagnostic confidence of radiologists affects the accuracy of the diagnosis and observed that the deep learning framework and radiologists find the same X-rays easy/difficult to analyze (p-value < 1e-4). Finally, the methodology of all top-performing teams from the competition leaderboard was analyzed to find consistent methodological patterns of accurate pneumothorax detection and segmentation.
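
The U-Net-plus-backbone combinations mentioned above map naturally onto the segmentation_models_pytorch library; the sketch below is one plausible implementation route, not necessarily the authors' code:

```python
import segmentation_models_pytorch as smp
import torch

model = smp.Unet(
    encoder_name="se_resnext50_32x4d",   # or "resnet34", "densenet121", ...
    encoder_weights="imagenet",
    in_channels=1,                       # grayscale chest X-ray
    classes=1,                           # binary pneumothorax mask
)
logits = model(torch.randn(1, 1, 512, 512))   # -> (1, 1, 512, 512) mask logits
```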


Subjects
Deep Learning; Pneumothorax; Diagnosis, Computer-Assisted; Humans; Image Processing, Computer-Assisted; Neural Networks, Computer; Pneumothorax/diagnostic imaging; Radiologists
19.
Med Phys; 47(9): e929-e950, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32510603

ABSTRACT

Radiotherapy (RT) is one of the basic treatment modalities for cancer of the head and neck (H&N), which requires a precise spatial description of the target volumes and organs at risk (OARs) to deliver a highly conformal radiation dose to the tumor cells while sparing the healthy tissues. For this purpose, target volumes and OARs have to be delineated and segmented from medical images. As manual delineation is a tedious and time-consuming task subject to intra/interobserver variability, computerized auto-segmentation has been developed as an alternative. The field of medical imaging and RT planning has experienced increased interest in the past decade, with new emerging trends that shifted the field of H&N OAR auto-segmentation from atlas-based to deep learning-based approaches. In this review, we systematically analyzed 78 relevant publications on auto-segmentation of OARs in the H&N region from 2008 to date, and provide critical discussions and recommendations from various perspectives: image modality - both computed tomography and magnetic resonance image modalities are being exploited, but the potential of the latter should be explored more in the future; OAR - the spinal cord, brainstem, and major salivary glands are the most studied OARs, but additional experiments should be conducted for several less studied soft tissue structures; image database - several image databases with corresponding ground truth are currently available for methodology evaluation, but should be augmented with data from multiple observers and multiple institutions; methodology - current methods have shifted from atlas-based to deep learning auto-segmentation, which is expected to become even more sophisticated; ground truth - delineation guidelines should be followed, and participation of multiple experts from multiple institutions is recommended; performance metrics - the Dice coefficient, as the standard volumetric overlap metric, should be accompanied by at least one distance metric and combined with clinical acceptability scores and risk assessments; segmentation performance - the best performing methods achieve clinically acceptable auto-segmentation for several OARs; however, the dosimetric impact should also be studied to provide clinically relevant endpoints for RT planning.


Subjects
Deep Learning; Head and Neck Neoplasms; Head; Head and Neck Neoplasms/diagnostic imaging; Head and Neck Neoplasms/radiotherapy; Humans; Organs at Risk; Radiotherapy Planning, Computer-Assisted
20.
Med Phys; 47(8): 3721-3731, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32406531

ABSTRACT

PURPOSE: Radiation therapy (RT) is prescribed for curative and palliative treatment for around 50% of patients with solid tumors. Radiation-induced toxicities of healthy organs accompany many RTs and represent one of the main limiting factors during dose delivery. The existing RT planning solutions generally discard spatial dose distribution information and lose the ability to recognize radiosensitive regions of healthy organs potentially linked to toxicity manifestation. This study proposes a universal deep learning-based algorithm for the recognition of consistent dose patterns and the generation of toxicity risk maps for the abdominal area. METHODS: We investigated whether convolutional neural networks (CNNs) can automatically associate abdominal computed tomography (CT) images and RT dose plans with post-RT toxicities without being provided segmentations of the abdominal organs. The CNNs were also applied to study RT plans in which doses at specific anatomical regions were reduced or increased, with the aim of pinpointing critical regions whose sparing significantly reduces toxicity risks. The obtained risk maps were computed for individual anatomical regions inside the liver and statistically compared to existing clinical studies. RESULTS: A database of 122 liver stereotactic body RT (SBRT) treatments executed at Stanford Hospital between July 2004 and November 2015 was assembled. All patients treated for primary liver cancer, mainly hepatocellular carcinoma and cholangiocarcinoma, with complete follow-ups were extracted from the database. The SBRT treatment doses ranged from 26 to 50 Gy delivered in 1-5 fractions for primary liver cancer. The patients were followed up for 1-68 months depending on survival time. The CNNs were trained to recognize acute and late grade 3+ biliary stricture/obstruction, hepatic failure or decompensation, hepatobiliary infection, liver function test (LFT) elevation, and/or portal vein thrombosis, collectively termed hepatobiliary (HB) toxicities for convenience. The toxicity prediction accuracy was 0.73, measured in terms of the area under the receiver operating characteristic curve. Significantly higher risk scores (P < 0.05) of HB toxicity manifestation were associated with irradiation of the hepatobiliary tract in comparison to the risk scores for liver segments I-VIII and the portal vein. This observation is in strong agreement with anatomical and clinical expectations. CONCLUSION: In this work, we proposed and validated a universal deep learning-based solution for the identification of radiosensitive anatomical regions. Without any prior anatomical knowledge, CNNs automatically recognized the importance of hepatobiliary tract sparing during liver SBRT.
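
A hedged sketch of the input pairing described above: CT and the RT dose plan stacked as channels of a 3D CNN toxicity classifier. The layer sizes are illustrative assumptions, not the study's architecture:

```python
import torch
import torch.nn as nn

toxicity_net = nn.Sequential(
    nn.Conv3d(2, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
    nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(),
    nn.Linear(32, 1), nn.Sigmoid(),            # P(HB toxicity)
)
x = torch.randn(1, 2, 64, 64, 64)              # channels: [CT, dose plan]
print(toxicity_net(x).shape)                   # -> torch.Size([1, 1])
```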


Subjects
Carcinoma, Hepatocellular; Deep Learning; Liver Neoplasms; Radiosurgery; Humans; Liver Neoplasms/diagnostic imaging; Liver Neoplasms/radiotherapy; Liver Neoplasms/surgery; Radiosurgery/adverse effects