Results 1 - 4 of 4
1.
Sci Rep ; 12(1): 12867, 2022 Jul 27.
Article in English | MEDLINE | ID: mdl-35896575

ABSTRACT

Artificial intelligence (AI) applications in medical imaging continue to face the difficulty of collecting and using large datasets. One proposed solution is data augmentation with synthetic images generated by generative adversarial networks (GANs). However, GANs have not been fully explored as a data augmentation technique because of concerns about the quality and diversity of the generated images. To promote such applications by generating diverse images, this study aims to generate free-form lesion images from tumor sketches using a pix2pix-based model, an image-to-image translation model derived from GAN. Because pix2pix assumes one-to-one image generation and is therefore unsuitable for data augmentation, we propose StylePix2pix, an independently improved model that allows one-to-many image generation. The proposed model introduces a mapping network and style blocks from StyleGAN. Image generation results based on 20 tumor sketches drawn by a physician demonstrate that the proposed method can reproduce tumors with complex shapes. Additionally, the one-to-many image generation of StylePix2pix suggests its effectiveness for data augmentation.
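Note: a minimal PyTorch sketch of the general idea named in the abstract (a StyleGAN-style mapping network plus a style-modulated convolution block grafted onto a pix2pix-like generator). The module names, layer sizes, and AdaIN-style modulation below are illustrative assumptions, not the paper's actual implementation.

    # Hypothetical sketch: StyleGAN-style mapping network and style block
    # injecting a latent style into a pix2pix-like generator (PyTorch).
    import torch
    import torch.nn as nn

    class MappingNetwork(nn.Module):
        """Maps a random latent z to an intermediate style vector w (as in StyleGAN)."""
        def __init__(self, z_dim=512, w_dim=512, n_layers=8):
            super().__init__()
            layers, dim = [], z_dim
            for _ in range(n_layers):
                layers += [nn.Linear(dim, w_dim), nn.LeakyReLU(0.2)]
                dim = w_dim
            self.net = nn.Sequential(*layers)

        def forward(self, z):
            return self.net(z)

    class StyleBlock(nn.Module):
        """Conv block whose activations are modulated per-channel by w (AdaIN-style)."""
        def __init__(self, in_ch, out_ch, w_dim=512):
            super().__init__()
            self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
            self.norm = nn.InstanceNorm2d(out_ch, affine=False)
            self.affine = nn.Linear(w_dim, out_ch * 2)  # per-channel scale and bias

        def forward(self, x, w):
            h = self.norm(self.conv(x))
            scale, bias = self.affine(w).chunk(2, dim=1)
            h = h * (1 + scale[:, :, None, None]) + bias[:, :, None, None]
            return torch.relu(h)

    # Different z values give different styles for the same sketch features,
    # which is what makes the generation one-to-many:
    mapping = MappingNetwork()
    block = StyleBlock(64, 64)
    sketch_feat = torch.randn(1, 64, 128, 128)   # assumed output of a sketch encoder
    out = block(sketch_feat, mapping(torch.randn(1, 512)))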


Subject(s)
Image Processing, Computer-Assisted , Lung Neoplasms , Artificial Intelligence , Humans , Image Processing, Computer-Assisted/methods , Lung Neoplasms/diagnostic imaging , Neural Networks, Computer , Tomography, X-Ray Computed
2.
Int J Comput Assist Radiol Surg ; 16(2): 241-251, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33428062

ABSTRACT

PURPOSE: In recent years, the convolutional neural network (CNN), an artificial intelligence technology with superior image-recognition ability, has become increasingly popular and is frequently used for classification tasks in medical imaging. However, the amount of labelled data available for classifying medical images is often significantly smaller than that for natural images, and rare diseases are particularly challenging to handle. To overcome these problems, data augmentation has been performed using generative adversarial networks (GANs). However, a conventional GAN cannot effectively control the various shapes of tumours because it generates images at random. In this study, we introduce semi-conditional InfoGAN, an extension that allows partial labels to be supplied to InfoGAN, for the generation of shape-controlled tumour images. InfoGAN is a model derived from GAN that can represent object features in images without any labels. METHODS: Chest computed tomography images of 66 patients diagnosed with three histological types of lung cancer (adenocarcinoma, squamous cell carcinoma, and small cell lung cancer) were used for analysis. To investigate the applicability of the generated images, we classified the histological types of lung cancer using a CNN that was pre-trained with the generated images. RESULTS: After training, InfoGAN was able to generate images that controlled the diameter of each lesion and the presence or absence of the chest wall. The classification accuracy of the pre-trained CNN was 57.7%, higher than that of a CNN trained only on real images (34.2%), suggesting the potential of image generation. CONCLUSION: This study demonstrated the applicability of semi-conditional InfoGAN to feature learning and representation in medical images. InfoGAN can perform stable feature learning and generate images with a variety of shapes from a small dataset.
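Note: a hedged sketch of how a semi-conditional InfoGAN latent input could be assembled: unstructured noise, an unsupervised continuous code, and a categorical code that is fixed to the histology label when a label is available and sampled randomly otherwise. The dimensions and helper name below are assumptions for illustration; only the three-class label set comes from the abstract.

    # Illustrative sketch (PyTorch) of building the generator input for a
    # semi-conditional InfoGAN. Dimensions are assumed, not from the paper.
    import torch
    import torch.nn.functional as F

    def make_latent(batch, noise_dim=62, cont_dim=2, n_classes=3, labels=None):
        z = torch.randn(batch, noise_dim)              # unstructured noise
        c_cont = torch.rand(batch, cont_dim) * 2 - 1   # continuous codes, e.g. lesion diameter
        if labels is None:
            # unlabelled case: sample the categorical code uniformly (plain InfoGAN)
            labels = torch.randint(0, n_classes, (batch,))
        c_cat = F.one_hot(labels, n_classes).float()   # semi-conditional: fixed when labelled
        return torch.cat([z, c_cont, c_cat], dim=1)

    # e.g. generate lesions of one histological class (class 0) with varied shapes
    latent = make_latent(8, labels=torch.zeros(8, dtype=torch.long))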


Subject(s)
Artificial Intelligence , Lung Neoplasms/diagnostic imaging , Lung/diagnostic imaging , Humans , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Radiography, Thoracic , Tomography, X-Ray Computed/methods
3.
Int J Urol ; 27(10): 922-928, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32729184

ABSTRACT

OBJECTIVES: To investigate whether a deep learning model built from magnetic resonance imaging information can accurately predict the risk of urinary incontinence after robot-assisted radical prostatectomy. METHODS: This study included 400 patients with prostate cancer who underwent robot-assisted radical prostatectomy. Patients using 0 or 1 pad/day within 3 months after robot-assisted radical prostatectomy were categorized into the "good" group, whereas the remaining patients were categorized into the "bad" group. Magnetic resonance imaging DICOM data and preoperative and intraoperative covariates were assessed. The deep learning models were evaluated on the testing dataset in terms of sensitivity, specificity, and area under the receiver operating characteristic curve. Gradient-weighted class activation mapping was used to visualize the regions on which the deep learning models focused. RESULTS: The combination of deep learning and a naive Bayes algorithm using axial magnetic resonance imaging in addition to clinicopathological parameters achieved the highest performance, with an area under the receiver operating characteristic curve of 77.5% for predicting early recovery from post-prostatectomy urinary incontinence, whereas machine learning using clinicopathological parameters alone achieved only 62.2%. The gradient-weighted class activation mapping showed that deep learning focused on the pelvic skeletal muscles in patients in the good group, and on the perirectal and hip joint regions in patients in the bad group. CONCLUSIONS: Our results suggest that deep learning using magnetic resonance imaging is useful for predicting the severity of urinary incontinence after robot-assisted radical prostatectomy. Deep learning algorithms might inform the choice of treatment strategy, especially for prostate cancer patients who wish to avoid prolonged urinary incontinence after robot-assisted radical prostatectomy.
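Note: a minimal sketch of the fusion idea the abstract reports: a CNN embeds axial MRI slices, the embedding is concatenated with clinicopathological covariates, and a naive Bayes classifier predicts early continence recovery. The backbone choice, covariate list, and toy data below are assumptions; the paper's actual pipeline is not specified here.

    # Hedged sketch: CNN image features + clinical covariates -> GaussianNB.
    import numpy as np
    import torch
    import torchvision.models as models
    from sklearn.naive_bayes import GaussianNB
    from sklearn.metrics import roc_auc_score

    # An assumed fixed feature extractor (in practice, pretrained weights
    # would be loaded; weights=None keeps this sketch self-contained).
    cnn = models.resnet18(weights=None)
    cnn.fc = torch.nn.Identity()           # drop the classification head
    cnn.eval()

    def mri_features(batch):               # batch: (N, 3, 224, 224) axial slices
        with torch.no_grad():
            return cnn(batch).numpy()      # (N, 512) deep features

    N = 40                                 # toy stand-in data, illustration only
    imgs = torch.randn(N, 3, 224, 224)
    clinical = np.random.randn(N, 6)       # e.g. age, BMI, nerve sparing, ...
    y = np.random.randint(0, 2, N)         # good (1) vs bad (0) continence recovery

    X = np.hstack([mri_features(imgs), clinical])
    clf = GaussianNB().fit(X, y)
    # AUC on the toy training data, purely to show the evaluation call:
    print("AUC:", roc_auc_score(y, clf.predict_proba(X)[:, 1]))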


Subject(s)
Deep Learning , Prostatic Neoplasms , Robotic Surgical Procedures , Robotics , Bayes Theorem , Humans , Magnetic Resonance Imaging , Male , Prostatectomy/adverse effects , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/surgery , Recovery of Function , Robotic Surgical Procedures/adverse effects
4.
Phys Rev Lett ; 99(25): 255301, 2007 Dec 21.
Article in English | MEDLINE | ID: mdl-18233529

ABSTRACT

Superfluidity in one and three dimensions has been studied for 4He fluid films adsorbed in nanopores: straight channels and three-dimensionally connected pores, respectively. We observed superfluidity in one and three dimensions in the regime where thermal phonon wavelengths are much longer than the channel diameter and the period of the pore connections, respectively, and found that the superfluid onset depends on the pore connectivity. In the straight channels, the observed superfluid density disappears at a temperature far below the heat-capacity anomaly of the Ginzburg-Landau transition, whereas in the three-dimensionally connected pores, the adsorbed 4He films show a clear three-dimensional transition in which the superfluid onset coincides with the heat-capacity peak.
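Note: as a rough guide to the regime the abstract describes, the dominant thermal phonon wavelength can be estimated from the phonon (sound) velocity c and temperature T; the condition for effectively one-dimensional behaviour in a channel of diameter d is then the standard back-of-the-envelope estimate (not a result taken from the paper):

    \lambda_{\mathrm{ph}} \sim \frac{h\,c}{k_{\mathrm{B}} T} \gg d

where h is Planck's constant and k_B the Boltzmann constant; thermal phonons carry energy of order k_B T, so their wavenumber is of order k_B T / (\hbar c).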
