Results 1 - 11 of 11
1.
Insights Imaging ; 14(1): 90, 2023 May 18.
Article in English | MEDLINE | ID: mdl-37199794

ABSTRACT

OBJECTIVES: The aim of this study was to develop and validate a commercially available AI platform for the automatic determination of image quality in mammography and tomosynthesis, considering a standardized set of features. MATERIALS AND METHODS: In this retrospective study, 11,733 mammograms and synthetic 2D reconstructions from tomosynthesis of 4200 patients from two institutions were analyzed by assessing the presence of seven features which impact image quality with regard to breast positioning. Deep learning was applied to train five dCNN models on features detecting the presence of anatomical landmarks and three dCNN models on localization features. The validity of the models was assessed by calculating the mean squared error on a test dataset and by comparison with readings by experienced radiologists. RESULTS: Accuracies of the dCNN models ranged between 93.0% for nipple visualization and 98.5% for depiction of the pectoralis muscle in the CC view. Calculations based on regression models allow for precise measurements of distances and angles of breast positioning on mammograms and synthetic 2D reconstructions from tomosynthesis. All models showed almost perfect agreement with human reading, with Cohen's kappa scores above 0.9. CONCLUSIONS: An AI-based quality assessment system using a dCNN allows for precise, consistent, and observer-independent rating of digital mammography and synthetic 2D reconstructions from tomosynthesis. Automation and standardization of quality assessment enable real-time feedback to technicians and radiologists, which should reduce the number of examinations rated inadequate according to the PGMI (Perfect, Good, Moderate, Inadequate) criteria, reduce the number of recalls, and provide a dependable training platform for inexperienced technicians.
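
A minimal sketch of the agreement statistic reported above: Cohen's kappa between dCNN feature ratings and a radiologist's reading. The label arrays are invented placeholders, not study data.

```python
# Hypothetical example: agreement between dCNN and radiologist on a
# binary positioning feature (1 = anatomical landmark depicted, 0 = not).
from sklearn.metrics import cohen_kappa_score

dcnn_ratings        = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
radiologist_ratings = [1, 1, 0, 1, 0, 1, 0, 0, 1, 1]

kappa = cohen_kappa_score(dcnn_ratings, radiologist_ratings)
print(f"Cohen's kappa: {kappa:.2f}")  # > 0.9 would indicate almost perfect agreement
```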

2.
Eur Radiol ; 33(7): 4589-4596, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36856841

ABSTRACT

OBJECTIVES: High breast density is a well-known risk factor for breast cancer. This study aimed to develop and adapt two deep convolutional neural networks (DCNN), one per view (MLO, CC), for automatic breast density classification on synthetic 2D tomosynthesis reconstructions. METHODS: In total, 4605 synthetic 2D images (1665 patients, age: 57 ± 37 years) were labeled according to the ACR (American College of Radiology) density categories (A-D). Two DCNNs, each with 11 convolutional layers and 3 fully connected layers, were trained with 70% of the data, while 20% was used for validation. The remaining 10% served as a separate test dataset of 460 images (380 patients). All mammograms in the test dataset were read blinded by two radiologists (reader 1 with two and reader 2 with 11 years of dedicated experience in breast imaging), and their consensus served as the reference standard. Inter- and intra-reader reliabilities were assessed by calculating Cohen's kappa coefficients, and diagnostic accuracy measures of the automated classification were evaluated. RESULTS: The two models for the MLO and CC projections had a mean sensitivity of 80.4% (95% CI 72.2-86.9), a specificity of 89.3% (95% CI 85.4-92.3), and an accuracy of 89.6% (95% CI 88.1-90.9) in differentiating between ACR A/B and ACR C/D. DCNN-versus-human and inter-reader agreement were both "substantial" (Cohen's kappa: 0.61 versus 0.63). CONCLUSION: The DCNN allows accurate, standardized, and observer-independent classification of breast density based on the ACR BI-RADS system. KEY POINTS: • A DCNN performs on par with human experts in breast density assessment for synthetic 2D tomosynthesis reconstructions. • The proposed technique may be useful for accurate, standardized, and observer-independent breast density evaluation of tomosynthesis.
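
The described architecture (11 convolutional plus 3 fully connected layers) could look roughly as follows in PyTorch; channel widths, input size, and the pooling scheme are assumptions, since the abstract does not specify them.

```python
# Sketch of a DCNN with 11 convolutional and 3 fully connected layers.
# All hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn

def make_dcnn(num_classes: int = 4) -> nn.Sequential:  # ACR densities A-D
    layers, in_ch = [], 1  # single-channel synthetic 2D input
    channels = [16, 16, 32, 32, 64, 64, 128, 128, 256, 256, 256]  # 11 conv layers
    for i, out_ch in enumerate(channels):
        layers += [nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True)]
        if i % 2 == 1:  # downsample after every second conv layer
            layers.append(nn.MaxPool2d(2))
        in_ch = out_ch
    layers += [
        nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        nn.Linear(256 * 4 * 4, 512), nn.ReLU(inplace=True),  # FC 1
        nn.Linear(512, 128), nn.ReLU(inplace=True),          # FC 2
        nn.Linear(128, num_classes),                         # FC 3
    ]
    return nn.Sequential(*layers)

model = make_dcnn()
logits = model(torch.randn(2, 1, 256, 256))  # batch of two synthetic 2D images
print(logits.shape)  # torch.Size([2, 4])
```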


Subjects
Breast Density; Breast Neoplasms; Humans; Young Adult; Adult; Middle Aged; Aged; Aged, 80 and over; Female; Observer Variation; Breast Neoplasms/diagnostic imaging; Mammography/methods; Neural Networks, Computer
3.
J Imaging ; 8(8)2022 Jul 24.
Article in English | MEDLINE | ID: mdl-35893086

ABSTRACT

Timestamps in the Radiology Information System (RIS) are a readily available and valuable source of information whose significance is growing, among other reasons because of the current focus on the clinical impact of artificial intelligence applications. We aimed to evaluate timestamp-based radiological dictation time, introduce timestamp modeling techniques, and compare them with prospectively measured reporting times. Dictation time was calculated from RIS timestamps between 05/2010 and 01/2021 at our institution (n = 108,310). We minimized contextual outliers by simulating the raw data by iteration (1000 iterations, vector size (µ/sd/λ) = 100 per loop), assuming normally distributed reporting times. In addition, 329 reporting times were prospectively measured by two radiologists (1 and 4 years of experience). Altogether, 106,127 of 108,310 exams were included after simulation, with a mean dictation time of 16.62 min. Mean dictation time was 16.05 min for head CT (44,743/45,596), 15.84 min for chest CT (32,797/33,381), 17.92 min for abdominal CT (22,805/23,483), 10.96 min for foot CT (937/958), 9.14 min for lumbar spine (881/892), 8.83 min for shoulder (409/436), 8.83 min for wrist CT (1201/1322), and 39.20 min for polytrauma patients (2127/2242), without significant differences from the prospectively measured reporting times. In conclusion, timestamp analysis is useful for measuring current reporting practice, with body region and radiologist experience acting as confounders. This could aid in cost-benefit assessments of workflow changes (e.g., AI implementation).
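
A hedged sketch of the two building blocks described here: dictation time as the difference of two RIS timestamps, and a resampling step that damps contextual outliers under a normality assumption. The column names and the simulation scheme are illustrative guesses, not the authors' code.

```python
# Illustrative only: dictation time from RIS timestamps, then an iterative
# resampling estimate assuming normally distributed reporting times.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "dictation_start":  pd.to_datetime(["2021-01-05 08:00", "2021-01-05 09:10"]),
    "report_finalized": pd.to_datetime(["2021-01-05 08:17", "2021-01-05 09:26"]),
})
minutes = (df["report_finalized"] - df["dictation_start"]).dt.total_seconds() / 60

def simulate_mean(raw: np.ndarray, iterations: int = 1000, vec: int = 100) -> float:
    """Mean over resampled vectors drawn from a normal fit of the raw times."""
    mu, sd = raw.mean(), raw.std(ddof=1)
    draws = rng.normal(mu, sd, size=(iterations, vec))
    return float(draws.clip(min=0).mean())  # reporting times cannot be negative

print(simulate_mean(minutes.to_numpy()))
```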

4.
Invest Radiol ; 57(8): 552-559, 2022 08 01.
Article in English | MEDLINE | ID: mdl-35797580

ABSTRACT

OBJECTIVE: This study trained and evaluated algorithms to detect, segment, and classify simple and complex pleural effusions on computed tomography (CT) scans. MATERIALS AND METHODS: For detection and segmentation, we randomly selected 160 chest CT scans from all consecutive patients (January 2016-January 2021, n = 2659) with reported pleural effusion. Effusions were manually segmented, and a negative cohort of chest CTs from 160 patients without effusions was added. A deep convolutional neural network (nnU-Net) was trained and cross-validated (n = 224; 70%) for segmentation and tested on a separate subset (n = 96; 30%) with the same distribution of reported pleural complexity features as in the training cohort (e.g., hyperdense fluid, gas, pleural thickening, and loculation). On a separate consecutive cohort with a high prevalence of pleural complexity features (n = 335), a random forest model was implemented for classification of the segmented effusions, with Hounsfield unit thresholds, density distribution, and radiomics-based features as input. As performance measures, sensitivity, specificity, and areas under the curve (AUCs) were used for the detection/classifier evaluation (per-case level), and the Dice coefficient and volume analysis for the segmentation task. RESULTS: Sensitivity and specificity for the detection of effusion were excellent at 0.99 and 0.98, respectively (n = 96; AUC, 0.996, test data). Segmentation was robust (median Dice, 0.89; median absolute volume difference, 13 mL), irrespective of effusion size, complexity, or contrast phase. Sensitivity, specificity, and AUC for classification into simple versus complex effusions were 0.67, 0.75, and 0.77, respectively. CONCLUSION: Using a dataset with different degrees of complexity, a robust model was developed for the detection, segmentation, and classification of effusion subtypes. The algorithms are openly available at https://github.com/usb-radiology/pleuraleffusion.git.
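
A minimal sketch of the classification stage: a random forest over per-effusion features such as Hounsfield-unit statistics and radiomics values. Features and labels below are random placeholders; the authors' actual implementation is in the repository linked above.

```python
# Placeholder data standing in for per-effusion features
# (e.g., median HU, HU IQR, entropy, uniformity).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 4))   # 200 effusions, 4 features each
y = rng.integers(0, 2, 200)     # 0 = simple, 1 = complex effusion

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```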


Subjects
Pleural Effusion; Tomography, X-Ray Computed; Algorithms; Exudates and Transudates/diagnostic imaging; Humans; Machine Learning; Pleural Effusion/diagnostic imaging; Tomography, X-Ray Computed/methods
5.
Diagnostics (Basel) ; 12(5)2022 Apr 21.
Article in English | MEDLINE | ID: mdl-35626201

ABSTRACT

Pericardial effusions (PEFs) are often missed on computed tomography (CT), which particularly affects the outcome of patients presenting with hemodynamic compromise. An automatic PEF detection, segmentation, and classification tool would expedite and improve CT-based PEF diagnosis. Using the radiology report (01/2016-01/2021), 258 CTs with PEF (206 simple PEF, 52 hemopericardium) and 258 without PEF (134 contrast-enhanced, 124 non-enhanced) were identified. PEFs were manually 3D-segmented. A deep convolutional neural network (nnU-Net) was trained on 316 cases and separately tested on the remaining 200 and on 22 external post-mortem CTs. Inter-reader variability was tested on 40 CTs. PEF classification used the median Hounsfield unit of each prediction. Sensitivity and specificity for PEF detection were 97% (95% CI 91.48-99.38%) and 100.00% (95% CI 96.38-100.00%), and 89.74% and 83.61% for diagnosing hemopericardium (AUC 0.944, 95% CI 0.904-0.984). Model performance (Dice coefficient: 0.75 ± 0.01) was non-inferior to inter-reader agreement (0.69 ± 0.02) and was affected neither by contrast administration nor by alternative chest pathology (p > 0.05). Testing on the external dataset yielded similar results. Our model reliably detects, segments, and classifies PEF on CT in a complex dataset, potentially serving as an alert tool while enhancing report quality. The model and corresponding datasets are publicly available.
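
The classification rule mentioned above (median Hounsfield unit of each predicted mask) can be sketched as follows; the 30 HU cutoff is a hypothetical value for illustration, not the study's threshold.

```python
# Classify a predicted pericardial effusion by the median HU inside the mask.
import numpy as np

def classify_pef(ct_volume: np.ndarray, mask: np.ndarray, hu_threshold: float = 30.0) -> str:
    """ct_volume: CT in Hounsfield units; mask: boolean nnU-Net prediction."""
    median_hu = np.median(ct_volume[mask])
    return "hemopericardium" if median_hu > hu_threshold else "simple effusion"

ct = np.full((4, 4, 4), -50.0)
mask = np.zeros_like(ct, dtype=bool)
mask[1:3, 1:3, 1:3] = True
ct[mask] = 45.0  # hyperdense fluid
print(classify_pef(ct, mask))  # -> hemopericardium
```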

6.
J Imaging ; 8(3)2022 Feb 22.
Article in English | MEDLINE | ID: mdl-35324605

ABSTRACT

For AI-based classification tasks in computed tomography (CT), a reference standard for evaluating the clinical diagnostic accuracy of individual classes is essential. To enable the implementation of an AI tool in clinical practice, the raw data should be drawn from clinical routine data acquired on state-of-the-art scanners, evaluated in a blinded manner, and verified with a reference test. Three hundred and thirty-five consecutive CTs, performed between 1 January 2016 and 1 January 2021 with reported pleural effusion and with pathology reports from thoracocentesis or biopsy within 7 days of the CT, were retrospectively included. Two radiologists (4 and 10 PGY) blindly assessed the chest CTs for pleural CT features. Where needed, consensus was reached with the opinion of an experienced radiologist (29 PGY). In addition, diagnoses were extracted from the written radiological reports. We analyzed these findings for a possible correlation with the following patient outcomes: mortality and median hospital stay. For AI prediction, we used an approach consisting of nnU-Net segmentation, PyRadiomics features, and a random forest model. Specificity and sensitivity for CT-based detection of empyema (n = 81 of n = 335 patients) were 90.94% (95% CI: 86.55-94.05%) and 72.84% (95% CI: 61.63-81.85%) across all effusions, with moderate to almost perfect inter-rater agreement for all pleural findings associated with empyema (Cohen's kappa = 0.41-0.82). The highest accuracies were found for pleural enhancement and thickening, with 87.02% and 81.49%, respectively. For empyema prediction, the AI achieved a specificity and sensitivity of 74.41% (95% CI: 68.50-79.57%) and 77.78% (95% CI: 66.91-85.96%), respectively. Empyema was associated with a longer hospital stay (median = 20 versus 14 days), and findings consistent with pleural carcinomatosis affected mortality.
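
A sketch of the feature step in the stated pipeline (nnU-Net mask → PyRadiomics features → random forest). The file paths are placeholders; the extractor calls follow the public PyRadiomics API.

```python
# Extract first-order radiomics features from an nnU-Net effusion mask.
import SimpleITK as sitk
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
extractor.enableFeatureClassByName("firstorder")  # intensity statistics only

image = sitk.ReadImage("chest_ct.nii.gz")        # placeholder path
mask = sitk.ReadImage("effusion_mask.nii.gz")    # placeholder nnU-Net prediction
features = extractor.execute(image, mask)
numeric = {k: v for k, v in features.items() if k.startswith("original_")}
print(len(numeric), "feature values for the random forest")
```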

7.
Eur J Radiol ; 150: 110259, 2022 May.
Article in English | MEDLINE | ID: mdl-35334245

ABSTRACT

PURPOSE: It is known from histology studies that lung vessels are affected in viral pneumonia. However, their diagnostic potential as a chest CT imaging parameter has only rarely been exploited. The purpose of this study was to develop a robust method for automated lung vessel segmentation and morphology analysis and to apply it to a large chest CT dataset. METHODS: In total, 509 non-enhanced chest CTs (NECTs) and 563 CT pulmonary angiograms (CTPAs) were included. Sub-groups were patients with healthy lungs (group_NORM, n = 634) and those RT-PCR-positive for Influenza A/B (group_INF, n = 159) and SARS-CoV-2 (group_COV, n = 279). A lung vessel segmentation algorithm (LVSA) based on traditional image processing was developed, validated with a point-of-interest approach, and applied to a large clinical dataset. The total blood vessel volume in the lung (TBV) and the blood vessel volume percentages (BV%) of three vessel size classes were calculated and compared between groups: small (BV5%, cross-sectional area < 5 mm²), medium (BV5-10%, 5-10 mm²), and large (BV10%, > 10 mm²). RESULTS: Sensitivity of the LVSA was 84.6% (95% CI: 73.9-95.3%) for NECTs and 92.8% (95% CI: 90.8-94.7%) for CTPAs. In viral pneumonia, besides an increased TBV, the main finding was a significantly decreased BV5% in group_COV (14%) and group_INF (15%) compared to group_NORM (18%) [p < 0.001]. At the same time, BV10% was increased (group_COV: 15% and group_INF: 14% vs. group_NORM: 11%; p < 0.001). CONCLUSION: In COVID-19 and influenza, blood vessel volume in the lung is redistributed from small to large vessels. The automated LVSA allows researchers and clinicians to derive imaging parameters for large numbers of CTs. This can enhance the understanding of vascular changes, particularly in infectious lung diseases.
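
The BV% metric defined above reduces to bucketing vessel segments by cross-sectional area and dividing each bucket's volume by TBV. The arrays below are hypothetical per-segment measurements, not LVSA output.

```python
# Bucket vessel segments by cross-sectional area (<5, 5-10, >10 mm^2)
# and report each bucket's share of total blood vessel volume.
import numpy as np

areas_mm2  = np.array([2.1, 4.8, 6.3, 9.5, 12.0, 20.4])  # per-segment areas
volumes_ml = np.array([3.0, 4.0, 6.0, 5.0, 10.0, 12.0])  # per-segment volumes

tbv = volumes_ml.sum()
bv5    = volumes_ml[areas_mm2 < 5].sum() / tbv * 100
bv5_10 = volumes_ml[(areas_mm2 >= 5) & (areas_mm2 <= 10)].sum() / tbv * 100
bv10   = volumes_ml[areas_mm2 > 10].sum() / tbv * 100
print(f"TBV={tbv:.0f} mL, BV5%={bv5:.1f}, BV5-10%={bv5_10:.1f}, BV10%={bv10:.1f}")
```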


Subjects
COVID-19; Influenza, Human; Pneumonia, Viral; Humans; Influenza, Human/diagnostic imaging; Lung/blood supply; Lung/diagnostic imaging; Pneumonia, Viral/diagnostic imaging; Retrospective Studies; SARS-CoV-2
8.
Eur Radiol ; 31(9): 6816-6824, 2021 Sep.
Article in English | MEDLINE | ID: mdl-33742228

ABSTRACT

OBJECTIVES: To evaluate the performance of a deep convolutional neural network (DCNN) in detecting and classifying distal radius fractures, metal, and casts on radiographs, using labels based on radiology reports. The secondary aim was to evaluate the effect of training set size on the algorithm's performance. METHODS: A total of 15,775 frontal and lateral radiographs, the corresponding radiology reports, and a ResNet18 DCNN were used. Fracture detection and classification models were developed per view and then merged. Incrementally sized subsets served to evaluate the effect of training set size. Two musculoskeletal radiologists set the standard of reference on the radiographs (test set A). A subset (B) was rated by three radiology residents. For a per-study comparison with the radiology residents, the results of the best models were merged. Statistics used were ROC analysis with AUC, Youden's J statistic (J), and Spearman's correlation coefficient (ρ). RESULTS: The models' AUC/J on (A) were 0.99/0.98 for metal and 1.0/1.0 for casts. The models' and residents' AUC/J on (B) were similar for fracture (0.98/0.91; 0.98/0.92) and multiple fragments (0.85/0.58; 0.91/0.70). Training set size and AUC correlated for metal (ρ = 0.740), cast (ρ = 0.722), fracture (frontal ρ = 0.947, lateral ρ = 0.946), multiple fragments (frontal ρ = 0.856), and fragment displacement (frontal ρ = 0.595). CONCLUSIONS: Models trained on a DCNN with report-based labels to detect distal radius fractures on radiographs are suitable as a secondary reading aid; the models for fracture classification are not ready for clinical use. Bigger training sets lead to better models in all categories except joint affection. KEY POINTS: • Detection of metal and casts on radiographs is excellent using AI and labels extracted from radiology reports. • Automatic detection of distal radius fractures on radiographs is feasible, and the performance approximates that of radiology residents. • Automatic classification of the type of distal radius fracture varies in accuracy and is inferior for joint involvement and fragment displacement.
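
A minimal sketch of one per-view model: a torchvision ResNet18 with the classification head replaced for binary fracture detection. Training loop and report-based label extraction are omitted; the weight initialization, input size, and class count are assumptions.

```python
# One per-view model: ResNet18 with a two-class head (fracture vs. no fracture).
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(weights=None)                 # or pretrained weights
model.fc = nn.Linear(model.fc.in_features, 2)  # replace the 1000-class head

x = torch.randn(4, 3, 224, 224)                # batch of frontal-view radiographs
print(model(x).shape)                          # torch.Size([4, 2])
```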


Subjects
Radiology; Radius Fractures; Humans; Neural Networks, Computer; Radiography; Radiologists; Radius Fractures/diagnostic imaging
9.
J Imaging ; 8(1)2021 Dec 28.
Article in English | MEDLINE | ID: mdl-35049844

ABSTRACT

Computed tomography (CT) diagnosis of empyema is challenging because the current literature features multiple overlapping pleural findings. We aimed to identify informative findings for structured reporting. Screening according to the inclusion criteria (P: pleural empyema; I: CT; C: culture/Gram stain/pathology/pus; O: diagnostic accuracy measures), data extraction, and risk-of-bias assessment of studies published between 01/1980 and 10/2021 on PubMed, Embase, and Web of Science (WOS) were performed independently by two reviewers. CT findings whose pooled diagnostic odds ratio (DOR) had a 95% confidence interval excluding 1 were considered informative. Summary estimates of diagnostic accuracy for the CT findings were calculated using a bivariate random-effects model, and sources of heterogeneity were evaluated. Ten studies with a total of 252 patients with and 846 without empyema were included. From 119 overlapping descriptors, five informative CT findings were identified: pleural enhancement, thickening, loculation, fat thickening, and fat stranding, with an AUC of 0.80 (hierarchical summary receiver operating characteristic, HSROC). Potential sources of heterogeneity were differing thresholds, empyema prevalence, and study year.
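
For a single study's 2×2 table, the diagnostic odds ratio and its 95% CI on the log scale can be computed as below; the counts are invented, and this single-study sketch does not reproduce the bivariate random-effects pooling used in the paper.

```python
# DOR with a Woolf-type 95% CI for one hypothetical CT finding.
import math

tp, fp, fn, tn = 40, 10, 8, 120  # invented 2x2 counts
dor = (tp * tn) / (fp * fn)
se_log = math.sqrt(1/tp + 1/fp + 1/fn + 1/tn)
lo, hi = (math.exp(math.log(dor) + z * se_log) for z in (-1.96, 1.96))
print(f"DOR = {dor:.1f} (95% CI {lo:.1f}-{hi:.1f})")  # CI excluding 1 -> informative
```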

10.
Eur J Radiol ; 131: 109233, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32927416

ABSTRACT

PURPOSE: During the emerging COVID-19 pandemic, radiology departments faced a substantial increase in chest CT admissions coupled with the novel demand for quantification of pulmonary opacities. This article describes how our clinic implemented an automated software solution for this purpose into an established software platform within 10 days. The underlying hypothesis was that modern academic radiology centers are capable of developing and implementing such tools through their own efforts, and fast enough to meet the rapidly increasing clinical needs in the wake of a pandemic. METHOD: Deep convolutional neural network algorithms for lung segmentation and opacity quantification on chest CTs were trained using semi-automatically and manually created ground truth (Ntotal = 172). The performance of the in-house method was compared to an externally developed algorithm on a separate test subset (N = 66). RESULTS: The final algorithm was available on day 10 and achieved human-like performance (Dice coefficient = 0.97). For opacity quantification, a slight underestimation was seen both for the in-house (1.8%) and the external algorithm (0.9%). In contrast to the external reference, the underestimation of the in-house algorithm showed no dependency on total opacity load, making it more suitable for follow-up. CONCLUSIONS: The combination of machine learning and a clinically embedded software development platform enabled time-efficient development, instant deployment, and rapid adoption in clinical routine. The algorithm for fully automated lung segmentation and opacity quantification that we developed in the midst of the COVID-19 pandemic was ready for clinical use within just 10 days and achieved human-level performance even in complex cases.
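
The Dice coefficient used as the performance measure here is straightforward to compute from binary masks; this is a generic sketch, not the authors' evaluation code.

```python
# Dice overlap between a predicted and a ground-truth lung mask.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

pred = np.zeros((10, 10), bool);  pred[2:8, 2:8] = True
truth = np.zeros((10, 10), bool); truth[3:9, 2:8] = True
print(f"{dice(pred, truth):.2f}")  # 1.0 would mean perfect overlap
```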


Subjects
Betacoronavirus; Coronavirus Infections/diagnostic imaging; Machine Learning; Pneumonia, Viral/diagnostic imaging; Software; COVID-19; Humans; Neural Networks, Computer; Pandemics; SARS-CoV-2; Tomography, X-Ray Computed/methods
11.
Contrast Media Mol Imaging ; 2018: 5693058, 2018.
Article in English | MEDLINE | ID: mdl-30515067

ABSTRACT

Results of PET/CT examinations are communicated as text-based reports, which are frequently not fully structured. Incomplete or missing staging information can be a significant source of staging and treatment errors. We compared standard text-based reports to a manual full 3D-segmentation-based approach with respect to TNM completeness and processing time. TNM information was extracted retrospectively from 395 reports, and the RIS time stamps of these reports were analyzed. On the corresponding image data, 2995 lesions were manually segmented using a set of 41 classification labels (TNM features + location). Information content and processing time of reports and segmentations were compared using descriptive statistics and modelling. The TNM/UICC stage was mentioned explicitly in only 6% (n = 22) of the text-based reports. In 22% (n = 86), information was incomplete, most frequently affecting T stage (19%, n = 74), followed by N stage (6%, n = 22) and M stage (2%, n = 9). Full NSCLC lesion segmentation required a median time of 13.3 min, while the median of the shortest estimator of text-based reporting time (R1) was 18.1 min (p = 0.01). Tumor stage (UICC I/II: 5.2 min; UICC III/IV: 20.3 min; p < 0.001), lesion size (p < 0.001), and lesion count (n = 1: 4.4 min; n = 12: 37.2 min; p < 0.001) correlated significantly with segmentation time, but not with the estimators of text-based reporting time. Numerous text-based reports lack staging information. A segmentation-based reporting approach tailored to the staging task improves report quality with manageable processing time and helps to avoid erroneous therapy decisions based on incomplete reports. Furthermore, the segmented data may be used for multimedia enhancement and automation.
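
Checking TNM completeness in free-text reports can be approximated with a regular expression, as in this deliberately simplified sketch; the pattern and sample report are illustrative only, far short of a production extraction pipeline.

```python
# Find TNM descriptors in a report and flag whether all three axes are present.
import re

TNM = re.compile(r"\b([cp]?T[0-4][a-c]?|N[0-3][a-c]?|M[01][a-c]?)\b")

report = "CT/PET: primary lesion right upper lobe, cT2a N1 M0, UICC stage IIB."
found = TNM.findall(report)
complete = all(any(f.lstrip("cp").startswith(axis) for f in found) for axis in "TNM")
print(found, "complete" if complete else "incomplete")
```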


Subjects
Carcinoma, Non-Small-Cell Lung/diagnosis; Lung Neoplasms/diagnosis; Positron Emission Tomography Computed Tomography/methods; Research Design; Adult; Aged; Aged, 80 and over; Carcinoma, Non-Small-Cell Lung/diagnostic imaging; Female; Humans; Lung Neoplasms/diagnostic imaging; Male; Middle Aged; Multimedia; Neoplasm Staging/methods; Retrospective Studies; Time Factors