1.
Australas J Ultrasound Med ; 26(1): 26-33, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36960131

ABSTRACT

Purpose: To determine which sonographic findings obtained from manually detorsed testes predict testicular atrophy following manual detorsion. Materials and methods: Twenty-two patients who had been diagnosed with testicular torsion and undergone manual detorsion were included. These patients were classified according to the presence or absence of testicular atrophy. The duration of symptoms, presence or absence of hyperperfusion within the entire affected testis, and echogenicity (homogeneous or heterogeneous) within the affected testis were compared using the Mann-Whitney U-test or Fisher's exact test, as appropriate. Results: Testicular atrophy was detected in seven patients. There was a significant difference in the frequency of hyperperfusion within the entire affected testis between patients with and without testicular atrophy (with atrophy [present/absent] vs. without atrophy [present/absent] = 0/7 vs. 8/7, P = 0.023). No significant differences in the duration of symptoms (with atrophy vs. without atrophy = 7 ± 3.3 h vs. 4.7 ± 3.6 h, P = 0.075) or in echogenicity within the testis (with atrophy [heterogeneous/homogeneous] vs. without atrophy [heterogeneous/homogeneous] = 2/5 vs. 2/13, P = 0.565) were observed between the groups. Conclusions: This small cohort study suggests that hyperperfusion within the entire affected testis immediately after successful manual detorsion is a useful predictor that testicular atrophy will be avoided.
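The group comparison reported above for hyperperfusion (0/7 with atrophy vs. 8/7 without) is a standard 2 x 2 Fisher's exact test. As a minimal sketch, assuming SciPy (the abstract does not state which software was used), the reported P-value can be approximated as follows:

```python
# Fisher's exact test on the reported 2 x 2 table of hyperperfusion
# within the entire affected testis vs. testicular atrophy.
from scipy.stats import fisher_exact

# Rows: atrophy present, atrophy absent
# Columns: hyperperfusion present, hyperperfusion absent
table = [[0, 7],   # 7 patients with atrophy, none hyperperfused
         [8, 7]]   # 15 patients without atrophy, 8 hyperperfused

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
# Prints roughly 0.02; the paper reports P = 0.023 (small differences can
# reflect the exact-test variant or rounding used by the original software).
print(f"two-sided P = {p_value:.3f}")
```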

2.
Radiol Case Rep ; 18(1): 138-142, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36340225

ABSTRACT

We report 2 cases of pulmonary actinomycosis complicated by a pseudoaneurysm. In Case 1, a 67-year-old man had visited a hospital 7 months earlier because of hemoptysis. CT revealed a suspected lung abscess in the left lingular segment; however, no diagnosis was confirmed by bronchoscopy. A CT scan taken after heavy hemoptysis showed a pseudoaneurysm within the consolidation of the same segment. On the same day, embolization of the left bronchial and intercostal arteries was performed. Left lingulectomy was performed 5 days later, and pulmonary actinomycosis was diagnosed histologically. Case 2 was a 51-year-old man with a 2-year history of cough and intermittent hemoptysis. Three months previously, CT had shown a cavitary lesion suggestive of an abscess, and antibiotic treatment was started. After the appearance of massive hemoptysis, embolization was performed for a pseudoaneurysm seen on bronchial arteriography. Four days later, a left lower lobectomy was performed, and pulmonary actinomycosis was histologically diagnosed. Pseudoaneurysms are commonly associated with tuberculosis; however, only one previous report has described a pseudoaneurysm associated with pulmonary actinomycosis. Appropriate treatment should be selected according to the type of pseudoaneurysm and the risk of recurrent hemoptysis. Angiography and embolization are essential tools for diagnosing and treating pulmonary arterial pseudoaneurysms; however, surgical intervention may also be an option in some cases to ensure a good long-term outcome.

3.
Pol J Radiol ; 87: e521-e529, 2022.
Article in English | MEDLINE | ID: mdl-36250139

ABSTRACT

Purpose: To verify whether deep learning can differentiate between carcinosarcomas (CSs) and endometrial carcinomas (ECs) using several magnetic resonance imaging (MRI) sequences. Material and methods: This retrospective study included 52 patients with CS and 279 patients with EC. A deep-learning model using convolutional neural networks (CNN) was trained with 572 T2-weighted images (T2WI) from 42 patients, 488 apparent diffusion coefficient of water maps from 33 patients, and 539 fat-saturated contrast-enhanced T1-weighted images from 40 patients with CS, as well as 1612 images from 223 patients with EC for each sequence. The models were tested with 9-10 images from 9-10 patients with CS and 56 images from 56 patients with EC for each sequence. Three experienced radiologists independently interpreted these test images. The sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUC) for each sequence were compared between the CNN models and the radiologists. Results: The CNN model for each sequence had a sensitivity of 0.89-0.93, specificity of 0.44-0.70, accuracy of 0.83-0.89, and AUC of 0.80-0.94. Its diagnostic performance was equivalent to or better than that of the 3 readers (sensitivity 0.43-0.91, specificity 0.30-0.78, accuracy 0.45-0.88, and AUC 0.49-0.92). The CNN model showed the highest diagnostic performance on T2WI (sensitivity 0.93, specificity 0.70, accuracy 0.89, and AUC 0.94). Conclusions: Deep learning provided diagnostic performance comparable to or better than that of experienced radiologists when distinguishing between CS and EC on MRI.
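The per-sequence figures quoted in these CNN studies (sensitivity, specificity, accuracy, AUC) can all be derived from a model's per-image probabilities and the reference labels. The following is a minimal scikit-learn sketch with placeholder data, not the study's actual predictions:

```python
# Compute sensitivity, specificity, accuracy, and ROC AUC for a binary task
# (here 1 = carcinosarcoma, 0 = endometrial carcinoma) from predicted probabilities.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# Hypothetical test-set labels and CNN output probabilities (placeholders only).
y_true = np.array([1, 1, 1, 0, 0, 0, 0, 0, 1, 0])
y_prob = np.array([0.91, 0.75, 0.40, 0.20, 0.65, 0.10, 0.05, 0.30, 0.88, 0.55])
y_pred = (y_prob >= 0.5).astype(int)  # simple 0.5 decision threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + tn + fp + fn)
auc = roc_auc_score(y_true, y_prob)

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
      f"accuracy={accuracy:.2f} AUC={auc:.2f}")
```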

4.
BMC Med Imaging ; 22(1): 80, 2022 04 30.
Article in English | MEDLINE | ID: mdl-35501705

ABSTRACT

PURPOSE: To compare the diagnostic performance of deep learning models using convolutional neural networks (CNN) with that of radiologists in diagnosing endometrial cancer and to verify suitable imaging conditions. METHODS: This retrospective study included patients with endometrial cancer or non-cancerous lesions who underwent MRI between 2015 and 2020. In Experiment 1, single and combined image sets of several sequences from 204 patients with cancer and 184 patients with non-cancerous lesions were used to train CNNs. Subsequently, testing was performed using 97 images from 51 patients with cancer and 46 patients with non-cancerous lesions. The test image sets were independently interpreted by three blinded radiologists. Experiment 2 investigated whether adding different types of images to the single image sets used for training improved the diagnostic performance of the CNNs. RESULTS: The AUCs of the CNNs for the single and combined image sets were 0.88-0.95 and 0.87-0.93, respectively, indicating diagnostic performance non-inferior to that of the radiologists. The AUC of the CNNs trained with other types of single images added to the single image sets was 0.88-0.95. CONCLUSION: CNNs demonstrated high diagnostic performance for the diagnosis of endometrial cancer on MRI. Although the differences were not significant, adding other types of images improved the diagnostic performance for some single image sets.


Subject(s)
Deep Learning, Endometrial Neoplasms, Endometrial Neoplasms/diagnostic imaging, Female, Humans, Magnetic Resonance Imaging/methods, Radiologists, Retrospective Studies
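The "combined image sets" in the study above could be built in several ways; one common approach is to stack co-registered sequences as input channels of a single CNN. The abstract does not specify the combination strategy, so the Keras sketch below is only an illustration with assumed image sizes and dummy data:

```python
# Illustrative multi-channel CNN: several MRI sequences (e.g. T2WI, ADC map,
# contrast-enhanced T1WI) stacked as channels of one input tensor.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = 128      # hypothetical in-plane resolution
N_SEQUENCES = 3     # number of stacked sequences (channels), assumed

model = models.Sequential([
    layers.Input(shape=(IMG_SIZE, IMG_SIZE, N_SEQUENCES)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),   # cancer vs. non-cancerous lesion
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])

# Dummy arrays standing in for co-registered, normalized MRI slices.
x = np.random.rand(8, IMG_SIZE, IMG_SIZE, N_SEQUENCES).astype("float32")
y = np.random.randint(0, 2, size=(8, 1))
model.fit(x, y, epochs=1, batch_size=4, verbose=0)
```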
5.
Cancers (Basel) ; 14(4)2022 Feb 16.
Article in English | MEDLINE | ID: mdl-35205735

ABSTRACT

BACKGROUND: This study aimed to compare deep learning with radiologists' assessments for diagnosing ovarian carcinoma using MRI. METHODS: This retrospective study included 194 patients with pathologically confirmed ovarian carcinomas or borderline tumors and 271 patients with non-malignant lesions who underwent MRI between January 2015 and December 2020. T2WI, DWI, ADC maps, and fat-saturated contrast-enhanced T1WI were used for the analysis. A deep learning model based on a convolutional neural network (CNN) was trained for each sequence using 1798 images from 146 patients with malignant tumors and 1865 images from 219 patients with non-malignant lesions, and tested with 48 images from patients with malignant lesions and 52 images from patients with non-malignant lesions. The sensitivity, specificity, accuracy, and AUC were compared between the CNN and the interpretations of three experienced radiologists. RESULTS: The CNN for each sequence had a sensitivity of 0.77-0.85, specificity of 0.77-0.92, accuracy of 0.81-0.87, and AUC of 0.83-0.89, achieving diagnostic performance equivalent to that of the radiologists. The CNN showed the highest diagnostic performance on the ADC map among all sequences (sensitivity = 0.77; specificity = 0.85; accuracy = 0.81; AUC = 0.89). CONCLUSION: The CNNs provided diagnostic performance non-inferior to that of the radiologists for diagnosing ovarian carcinomas on MRI.
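Identifying the best-performing sequence, as with the ADC map above, amounts to training one model per sequence and comparing their test AUCs. A small sketch with placeholder predictions follows (sequence names taken from the abstract, data invented):

```python
# Compare per-sequence test AUCs and report the best sequence.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=100)  # placeholder test labels

# Placeholder per-sequence predicted probabilities (one model per sequence).
predictions = {
    "T2WI": rng.random(100),
    "DWI": rng.random(100),
    "ADC map": rng.random(100),
    "CE-T1WI": rng.random(100),
}

aucs = {seq: roc_auc_score(y_true, prob) for seq, prob in predictions.items()}
for seq, auc in aucs.items():
    print(f"{seq:8s} AUC = {auc:.2f}")
print("best sequence by AUC:", max(aucs, key=aucs.get))
```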

6.
J Med Ultrasound ; 29(2): 116-118, 2021.
Article in English | MEDLINE | ID: mdl-34377643

ABSTRACT

We report a case of a 12-year-old boy with torsion of an accessory spleen. He presented with left-sided abdominal pain after trauma. A 4 cm oval mass without contrast enhancement was detected on contrast-enhanced computed tomography (CT), and ultrasound (US) showed a 4 cm oval mass below the spleen. The mass was mainly hyperechoic, similar to the spleen, while its central part showed irregular hypoechoic areas. Subsequent daily follow-up US examinations showed gradual expansion of the central hypoechoic areas with conspicuous hyperechoic dots. Discontinuity of the branch from the splenic artery to the mass was observed on both US and CT. These findings led to the diagnosis of a hemorrhagic infarct caused by torsion of the accessory spleen. Laparoscopy showed the accessory spleen adherent to the omentum and colon, twisted four times around its axis. It was resected, and the diagnosis of accessory spleen torsion was confirmed.

7.
Eur J Radiol ; 135: 109471, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33338759

ABSTRACT

PURPOSE: To compare deep learning with radiologists in diagnosing uterine cervical cancer on a single T2-weighted image. METHODS: This study included 418 patients (age range, 21-91 years; mean, 50.2 years) who underwent magnetic resonance imaging (MRI) between June 2013 and May 2020: 177 patients with pathologically confirmed cervical cancer and 241 non-cancer patients. Sagittal T2-weighted images were used for the analysis. A deep convolutional neural network (DCNN) model based on the Xception architecture was trained for 50 epochs using 488 images from 117 cancer patients and 509 images from 181 non-cancer patients. It was tested with 60 images from 60 cancer patients and 60 images from 60 non-cancer patients. Three blinded experienced radiologists independently interpreted the same 120 test images. Sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUC) were compared between the DCNN model and the radiologists. RESULTS: The DCNN model and the radiologists had a sensitivity of 0.883 and 0.783-0.867, a specificity of 0.933 and 0.917-0.950, and an accuracy of 0.908 and 0.867-0.892, respectively. The DCNN model had diagnostic performance equal to or better than that of the radiologists (AUC = 0.932; p for accuracy = 0.272-0.62). CONCLUSION: Deep learning provided diagnostic performance equivalent to that of experienced radiologists when diagnosing cervical cancer on a single T2-weighted image.


Subject(s)
Deep Learning, Uterine Cervical Neoplasms, Adult, Aged, Aged, 80 and over, Female, Humans, Middle Aged, Neural Networks, Computer, Radiologists, Retrospective Studies, Uterine Cervical Neoplasms/diagnostic imaging, Young Adult
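The abstract above names the Xception architecture and 50 training epochs; other details (input size, pretraining, preprocessing) are not given. A hedged Keras sketch of such a binary classifier, with those unstated details filled in as assumptions, might look like this:

```python
# Binary classifier built on the Xception backbone named in the abstract.
# Input size, ImageNet pretraining, and preprocessing are assumptions here.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = 299  # Xception's default input resolution (assumed)

# Grayscale T2-weighted slices would need to be replicated to 3 channels
# if ImageNet weights are used (whether they were used is not stated).
base = tf.keras.applications.Xception(
    include_top=False, weights="imagenet",
    input_shape=(IMG_SIZE, IMG_SIZE, 3), pooling="avg")

model = models.Sequential([
    base,
    layers.Dense(1, activation="sigmoid"),  # cancer vs. non-cancer
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])

# train_ds / val_ds would be tf.data.Dataset objects yielding (image, label)
# pairs built from the sagittal T2-weighted slices (not defined here):
# model.fit(train_ds, validation_data=val_ds, epochs=50)
```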