2.
J Digit Imaging; 36(3): 1237-1247, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36698035

ABSTRACT

Owing to the black-box nature of deep learning models, it is uncertain how changes in contrast level and image format affect performance. We aimed to investigate the effect of contrast level and image format on the performance of a deep learning model for diagnosing pneumothorax on chest radiographs. We collected 3316 images (1016 pneumothorax and 2300 normal images); all images were set to the standard contrast level (100%) and stored in the Digital Imaging and Communications in Medicine (DICOM) and Joint Photographic Experts Group (JPEG) formats. Data were randomly split into training (80%) and test (20%) sets, and the contrast of the test-set images was adjusted to 5 levels (50%, 75%, 100%, 125%, and 150%). We trained a ResNet-50 model to detect pneumothorax using 100%-level images and tested it with images at all 5 levels in the two formats. When comparing overall performance across contrast levels within each format, the area under the receiver-operating characteristic curve (AUC) differed significantly (all p < 0.001), except between the 125% and 150% levels in JPEG format (p = 0.382). When comparing the two formats at the same contrast level, the AUC differed significantly (all p < 0.001), except at the 50% and 100% levels (p = 0.079 and p = 0.082, respectively). The contrast level and format of medical images can influence the performance of a deep learning model. Training with images of various contrast levels and formats, together with further image processing, is required to improve and maintain performance.
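A minimal sketch of the contrast-perturbation test described above, assuming PIL-style multiplicative contrast scaling (the abstract does not specify the exact contrast transform); `model`, `preprocess`, and the loader names are hypothetical:

```python
# Sketch: evaluate a trained binary classifier on contrast-shifted test images.
import torch
from PIL import ImageEnhance
from sklearn.metrics import roc_auc_score

CONTRAST_LEVELS = [0.50, 0.75, 1.00, 1.25, 1.50]

def auc_at_contrast(model, test_images, labels, factor, preprocess):
    """Score one contrast level; `preprocess` maps a PIL image to a tensor."""
    model.eval()
    scores = []
    with torch.no_grad():
        for img in test_images:  # list of PIL images
            shifted = ImageEnhance.Contrast(img).enhance(factor)
            logit = model(preprocess(shifted).unsqueeze(0))
            scores.append(torch.sigmoid(logit).item())
    return roc_auc_score(labels, scores)

# aucs = {f: auc_at_contrast(model, imgs, y, f, preprocess) for f in CONTRAST_LEVELS}
```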


Subject(s)
Deep Learning , Pneumothorax , Humans , Pneumothorax/diagnostic imaging , Radiography , Algorithms , ROC Curve , Radiography, Thoracic/methods , Retrospective Studies
3.
Sci Rep; 12(1): 21884, 2022 Dec 19.
Article in English | MEDLINE | ID: mdl-36536152

ABSTRACT

Acute thoracic aortic dissection is a life-threatening disease in which blood leaking from the damaged inner layer of the aorta causes dissection between the intimal and adventitial layers. Diagnosing this disease is challenging. Chest X-rays are usually performed for initial screening or diagnosis, but their diagnostic accuracy is not high. Recently, deep learning has been successfully applied to multiple medical image analysis tasks. In this paper, we attempt to increase the accuracy of diagnosing acute thoracic aortic dissection on chest X-rays by applying deep learning techniques. In aggregate, 3,331 images, comprising 716 positive and 2,615 negative images, were collected from 3,331 patients. An 18-layer residual neural network (ResNet18) was used to detect acute thoracic aortic dissection. The diagnostic accuracy of ResNet18 was 90.20%, with a precision of 75.00%, recall of 94.44%, and F1-score of 83.61%. Further research is required to improve diagnostic accuracy based on aorta segmentation.
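The reported F1-score follows directly from the reported precision and recall, since F1 is their harmonic mean; a quick check:

```python
# F1 is the harmonic mean of precision and recall.
precision, recall = 0.7500, 0.9444
f1 = 2 * precision * recall / (precision + recall)
print(f"{f1:.4f}")  # 0.8361, matching the reported 83.61%
```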


Subject(s)
Aortic Dissection , Dissection, Thoracic Aorta , Humans , Neural Networks, Computer , Aorta , Radiography, Thoracic/methods
4.
J Pers Med; 12(10), 2022 Oct 03.
Article in English | MEDLINE | ID: mdl-36294776

ABSTRACT

Recent studies using deep convolutional neural networks (CNNs) have detected the central venous catheter (CVC) on chest radiographs. However, no studies have classified the CVC tip position against a definite criterion on the chest radiograph. This study aimed to develop an algorithm for automatic classification of CVC depth, with automatic segmentation of the trachea and the CVC on chest radiographs, using a deep CNN. This was a retrospective study using plain supine anteroposterior chest radiographs. The trachea and CVC were segmented on the images, and three labels (shallow, proper, and deep position) were assigned based on the vertical distance between the tracheal carina and the CVC tip. We used a two-stage model: automatic segmentation of the trachea and CVC with U-Net++, followed by automatic classification of CVC placement with EfficientNet B4. The primary outcome was successful three-label classification through five-fold validation with segmented images and a test with segmentation-free images. Of a total of 808 images, 207 were manually segmented; the overall accuracy (mean (SD)) of five-fold validation for three-label classification was 0.76 (0.03). In the test for classification with 601 segmentation-free images, the average accuracy, precision, recall, and F1-score were 0.82, 0.73, 0.73, and 0.73, respectively. The highest accuracy, 0.91, was achieved for the shallow position label, while the highest F1-score, 0.82, was achieved for the deep position label. A deep CNN can achieve comparable performance in classifying the CVC position based on the distance from the carina to the CVC tip, as well as in automatic segmentation of the trachea and CVC, on plain chest radiographs.
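A minimal sketch of the two-stage design described above. How the stages are coupled is an assumption here (the abstract does not say): the predicted masks are stacked with the radiograph as input channels. Library choices are illustrative:

```python
# Stage 1: U-Net++ segments trachea and CVC; stage 2: EfficientNet-B4
# classifies tip depth from the radiograph plus the two predicted masks.
import torch
import segmentation_models_pytorch as smp
from torchvision.models import efficientnet_b4

seg_net = smp.UnetPlusPlus(in_channels=1, classes=2)  # trachea + CVC masks
cls_net = efficientnet_b4(num_classes=3)              # shallow / proper / deep

def classify_depth(xray):                             # xray: (1, 1, H, W) tensor
    masks = torch.sigmoid(seg_net(xray))              # (1, 2, H, W)
    x = torch.cat([xray, masks], dim=1)               # (1, 3, H, W), RGB-shaped
    return cls_net(x).argmax(dim=1)                   # predicted label index
```

Stacking the masks as channels is one common way to pass stage-1 output to stage 2; cropping to the segmented region would be another.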

5.
Polymers (Basel); 14(20), 2022 Oct 13.
Article in English | MEDLINE | ID: mdl-36297879

ABSTRACT

Currently, protective clothing used in the clinical field is the most representative means of reducing radiation exposure among radiation workers. However, lead is classified as a substance harmful to the human body and can cause lead poisoning. Therefore, research on the development of lead-free radiation shields is being conducted. In this study, shields were manufactured from a 90.7% pure tungsten filament, a 3D-printer material, while varying the nozzle size, layer, and height, and their performance was compared with that of existing protective tools. Our findings revealed that the shielding rate of the mixed tungsten filament was higher than that of the existing protective tools, confirming its potential to replace lead as a protective material in the clinical field.
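The shielding rate here presumably follows the standard transmission-based definition (an assumption; the abstract does not give the formula), where $I_0$ is the dose measured without the shield and $I$ the dose transmitted through it:

$$\text{shielding rate} = \left(1 - \frac{I}{I_0}\right) \times 100\%$$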

6.
J Pers Med; 12(9), 2022 Aug 24.
Article in English | MEDLINE | ID: mdl-36143148

ABSTRACT

Background: This study aimed to develop an algorithm for four-label classification according to the distance from the carina to the endotracheal tube (ETT) tip (absence; shallow, > 70 mm; proper, 30 mm ≤ distance ≤ 70 mm; and deep, < 30 mm), with automatic segmentation of the trachea and the ETT on chest radiographs, using deep convolutional neural networks (CNNs). Methods: This was a retrospective study using plain chest radiographs. We segmented the trachea and the ETT on the images and labeled the ETT position class. We proposed models for classifying the ETT position using EfficientNet B0, with automatic segmentation using Mask R-CNN and ResNet50. The primary outcomes were favorable performance in automatic segmentation and four-label classification through five-fold validation with segmented images and a test with non-segmented images. Results: Of 1985 images, 596 were manually segmented, comprising 298 absence, 97 shallow, 100 proper, and 101 deep images according to the ETT position. In five-fold validation with segmented images, the Dice coefficients (mean (SD)) between segmented and predicted masks were 0.841 (0.063) for the trachea and 0.893 (0.078) for the ETT, and the accuracy for four-label classification was 0.945 (0.017). In the test for classification with 1389 non-segmented images, overall values were 0.922 for accuracy, 0.843 for precision, 0.843 for sensitivity, 0.922 for specificity, and 0.843 for F1-score. Conclusions: Automatic segmentation of the trachea and ETT on plain chest radiographs and classification of the ETT position using a deep CNN achieved good performance and could help physicians decide the appropriateness of ETT depth.
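The four labels map directly onto the carina-to-tip distance thresholds given above; a sketch of that rule (function and argument names are illustrative):

```python
# Label rule from the abstract: absence, shallow (> 70 mm),
# proper (30-70 mm), deep (< 30 mm), measured from carina to ETT tip.
def ett_label(distance_mm):
    """distance_mm: carina-to-tip distance, or None if no ETT is present."""
    if distance_mm is None:
        return "absence"
    if distance_mm > 70:
        return "shallow"
    if distance_mm >= 30:
        return "proper"
    return "deep"
```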

7.
Sensors (Basel); 22(11), 2022 Jun 06.
Article in English | MEDLINE | ID: mdl-35684921

ABSTRACT

We recently developed a long-length detector that combines three detectors and successfully acquires whole-body X-ray images. Although the system can efficiently acquire whole-body images in a short time, its diagnostic performance may suffer in some areas owing to the high-energy X-rays used during whole-spine and long-length examinations. In particular, when relatively thin bones, such as the ankles, are examined with a long-length detector, image quality deteriorates because of increased X-ray transmission. An additional filter is the usual remedy, but this approach imposes a higher load on the X-ray tube to compensate for the reduced radiation dose and entails high manufacturing costs. Thus, in this study, a newly designed additional filter was fabricated using 3D printing technology to improve the applicability of the long-length detector. Whole-spine anteroposterior (AP), whole-spine lateral, and long-leg AP X-ray examinations were performed using 3D-printed additional filters composed of 14 mm thick aluminum (Al) or a composite of 14 mm thick Al + 1 mm thick copper (Cu). The signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and radiation dose of the acquired X-ray images were evaluated to demonstrate the usefulness of the filters. Under all X-ray inspection conditions, the composite 14 mm Al + 1 mm Cu filter produced the most effective results. Compared with images obtained with no filter, the composite filter achieved an SNR improvement of up to 46%, a CNR improvement of 37%, and a radiation dose reduction of 90%. These results show that an additional filter made with a 3D printer is effective in improving image quality and reducing the radiation dose of X-ray images obtained with a long-length detector.
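A minimal sketch of ROI-based SNR and CNR, assuming the usual mean/standard-deviation definitions (the abstract does not state which variants were used):

```python
# ROI-based image-quality metrics; common definitions, assumed here.
import numpy as np

def snr(signal_roi):
    """SNR = mean / standard deviation within a signal ROI."""
    return signal_roi.mean() / signal_roi.std()

def cnr(signal_roi, background_roi):
    """CNR = |mean difference| / background standard deviation."""
    return abs(signal_roi.mean() - background_roi.mean()) / background_roi.std()
```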


Subject(s)
Printing, Three-Dimensional , Phantoms, Imaging , Radiation Dosage , Radiography , Signal-To-Noise Ratio , X-Rays
8.
J Pers Med; 12(5), 2022 May 11.
Article in English | MEDLINE | ID: mdl-35629198

ABSTRACT

Purpose: This study aimed to develop and validate an automatic segmentation algorithm for delineating the boundaries of ten wrist bones, consisting of eight carpal and two distal forearm bones, using a convolutional neural network (CNN). Methods: We performed a retrospective study using adult wrist radiographs. We labeled ground-truth masks of the wrist bones and propose Fine Mask R-CNN, which consists of wrist region-of-interest (ROI) detection using a Single-Shot Multibox Detector (SSD) and segmentation via Mask R-CNN with an extended mask head. The primary outcome was improved delineation relative to the ground-truth masks, compared between the two networks through five-fold validation. Results: In total, 702 images were labeled for segmentation of the ten wrist bones. The overall performance (mean (SD) Dice coefficient) of automatic segmentation of the ten wrist bones improved from 0.93 (0.01) with Mask R-CNN to 0.95 (0.01) with Fine Mask R-CNN (p < 0.001). The value for each wrist bone was higher with Fine Mask R-CNN than with Mask R-CNN alone (all p < 0.001). The value for the distal radius was the highest, and that for the trapezoid the lowest, in both networks. Conclusion: Our proposed Fine Mask R-CNN model achieved good performance in the automatic segmentation of ten overlapping wrist bones on adult wrist radiographs.
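Dice coefficients are the headline metric here and in the ETT study above; a minimal sketch of the standard definition over binary masks:

```python
# Dice coefficient between predicted and ground-truth binary masks.
import numpy as np

def dice(pred, truth, eps=1e-7):
    """pred, truth: boolean arrays of the same shape."""
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)
```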

9.
J Digit Imaging; 34(5): 1099-1109, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34379216

ABSTRACT

This study aimed to develop a method for detecting femoral neck fracture (FNF), including displaced and non-displaced fractures, on plain X-rays using a convolutional neural network (CNN), and to validate its use across hospitals through internal and external validation sets. This is a retrospective study using hip and pelvic anteroposterior films for training and detecting femoral neck fracture through an 18-layer residual neural network (ResNet18) with a convolutional block attention module (CBAM). The study was performed at two tertiary hospitals between February and May 2020 and used data from January 2005 to December 2018. Our primary outcome was favorable performance in distinguishing femoral neck fractures from negative studies in our dataset. We report the outcomes as area under the receiver operating characteristic curve (AUC), accuracy, Youden index, sensitivity, and specificity. A total of 4,189 images, containing 1,109 positive images (332 non-displaced and 777 displaced) and 3,080 negative images, were collected from the two hospitals. The test values after training with one hospital's dataset were 0.999 for AUC, 0.986 for accuracy, 0.960 for Youden index, 0.966 for sensitivity, and 0.993 for specificity. The corresponding values for external validation with the other hospital's dataset were 0.977, 0.971, 0.920, 0.939, and 0.982, and for the merged hospital datasets 0.987, 0.983, 0.960, 0.973, and 0.987, respectively. A CNN algorithm for detecting both displaced and non-displaced FNF on plain X-rays could be used in other hospitals to screen for FNF after training with images from the hospital of interest.
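A minimal sketch of the standard CBAM block (Woo et al., 2018): channel attention followed by spatial attention. How the paper attaches it to ResNet18 is not stated in the abstract; the reduction ratio and kernel size below are common defaults, assumed here:

```python
# CBAM sketch: channel attention, then spatial attention, applied in sequence.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.mlp = nn.Sequential(  # shared MLP for channel attention
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        # Channel attention: shared MLP over global avg- and max-pooled descriptors.
        avg = self.mlp(x.mean(dim=(2, 3), keepdim=True))
        mx = self.mlp(x.amax(dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # Spatial attention: conv over channel-wise avg and max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))
```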


Subject(s)
Deep Learning , Femoral Neck Fractures , Algorithms , Femoral Neck Fractures/diagnostic imaging , Humans , Retrospective Studies , X-Rays
10.
J Clin Med; 10(15), 2021 Jul 21.
Article in English | MEDLINE | ID: mdl-34361982

ABSTRACT

The present study aimed to develop a machine learning network to diagnose middle ear diseases from tympanic membrane images and to identify its assistive role in the diagnostic process. The medical records of subjects who underwent ear endoscopy were reviewed. From these records, 2272 diagnostic tympanic membrane images were labeled as normal, otitis media with effusion (OME), chronic otitis media (COM), or cholesteatoma and were used for training. We developed the "ResNet18 + Shuffle" network and validated the model's performance. Seventy-one representative cases were selected to test the final accuracy of the network and of resident physicians. We asked 10 resident physicians to make diagnoses from tympanic membrane images with and without the help of the machine learning network, and assessed how their diagnostic performance changed with the aid of the network's answers. The devised network achieved a highest accuracy of 97.18%. Five-fold validation showed that the network diagnosed ear diseases with an accuracy greater than 93%. All resident physicians diagnosed middle ear diseases more accurately with the help of the machine learning network; the increase in diagnostic accuracy was up to 18% (range: 1.4% to 18.4%). The machine learning network successfully classified middle ear diseases and assisted clinicians in interpreting tympanic membrane images.
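A minimal sketch of the five-fold validation protocol used here (and in several of the studies above), assuming stratification by diagnosis label; the function names are illustrative:

```python
# Five-fold validation sketch, stratified by the four diagnosis labels.
from sklearn.model_selection import StratifiedKFold

LABELS = ["normal", "OME", "COM", "cholesteatoma"]

def five_fold_accuracies(images, labels, train_and_eval):
    """train_and_eval(train_idx, val_idx) -> validation accuracy for one fold."""
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    return [train_and_eval(tr, va) for tr, va in skf.split(images, labels)]
```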
