Results 1 - 20 of 77
1.
Article in English | MEDLINE | ID: mdl-38700592

ABSTRACT

PURPOSE: To investigate the possibility of distinguishing between IgG4-related ophthalmic disease (IgG4-ROD) and orbital MALT lymphoma using artificial intelligence (AI) and hematoxylin-eosin (HE) images. METHODS: After identifying a total of 127 patients from whom tissue blocks with IgG4-ROD or orbital MALT lymphoma could be procured, we performed histological and molecular genetic analyses, such as gene rearrangement. Pathological HE images were then collected from these patients, and 10 different image patches were cropped from each patient's HE image. A total of 970 image patches from 97 patients were used to construct nine different deep learning models, and the 300 image patches from the remaining 30 patients were used to evaluate the diagnostic performance of the models. Area under the curve (AUC) and accuracy (ACC) were used for the performance evaluation of the deep learning models. In addition, four ophthalmologists performed the binary classification between IgG4-ROD and orbital MALT lymphoma. RESULTS: EVA, a vision-centric foundation model that explores the limits of visual representation, was the best of the nine deep learning models, with ACC = 73.3% and AUC = 0.807. The ACC of the four ophthalmologists ranged from 40% to 60%. CONCLUSIONS: It was possible to construct AI software based on deep learning that distinguished between IgG4-ROD and orbital MALT lymphoma. This AI model may be useful as an initial screening tool to direct further ancillary investigations.
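As a minimal, hypothetical sketch of the patch-level evaluation described above (accuracy and AUC of a binary classifier), assuming predicted probabilities for the held-out patches are available; the labels and scores below are placeholders, not the authors' data or code:

```python
# Hypothetical evaluation of a binary IgG4-ROD vs. MALT-lymphoma patch classifier.
# y_true: ground-truth labels (0 = IgG4-ROD, 1 = MALT lymphoma) for held-out patches.
# y_prob: model-predicted probability of class 1 for each patch.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=300)                                # placeholder labels
y_prob = np.clip(y_true * 0.6 + rng.random(300) * 0.4, 0, 1)         # placeholder scores

acc = accuracy_score(y_true, (y_prob >= 0.5).astype(int))            # ACC at a 0.5 threshold
auc = roc_auc_score(y_true, y_prob)                                   # area under the ROC curve
print(f"ACC = {acc:.3f}, AUC = {auc:.3f}")
```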

2.
PLoS One ; 19(4): e0300716, 2024.
Article in English | MEDLINE | ID: mdl-38578764

ABSTRACT

BACKGROUND AND PURPOSE: Mean pulmonary artery pressure (mPAP) is a key index for chronic thromboembolic pulmonary hypertension (CTEPH). Using machine learning, we attempted to construct an accurate prediction model for mPAP in patients with CTEPH. METHODS: A total of 136 patients diagnosed with CTEPH, for whom mPAP was measured, were included. The following patient data were used as explanatory variables in the model: basic patient information (age and sex), blood tests (brain natriuretic peptide (BNP)), echocardiography (tricuspid valve pressure gradient (TRPG)), and chest radiography (cardiothoracic ratio (CTR), right second arc ratio, and presence of avascular area). Seven machine learning methods, including linear regression, were used for the multivariable prediction models. Additionally, prediction models were constructed using AutoML software. Among the 136 patients, 2/3 and 1/3 were used as training and validation sets, respectively. The average R squared was obtained from 10 different data splittings of the training and validation sets. RESULTS: The optimal machine learning model was linear regression (averaged R squared, 0.360). The optimal combination of explanatory variables with linear regression was age, BNP level, TRPG level, and CTR (averaged R squared, 0.388). The R squared of the optimal multivariable linear regression model was higher than that of the univariable linear regression model with TRPG only. CONCLUSION: We constructed a more accurate prediction model for mPAP in patients with CTEPH than a model based on TRPG alone. The prediction performance of our model was improved by selecting the optimal machine learning method and combination of explanatory variables.
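A rough sketch of the averaged R-squared evaluation described above, assuming a table of explanatory variables (age, BNP, TRPG, CTR) and measured mPAP; the data and coefficients here are placeholders, not the study data:

```python
# Hypothetical: average R-squared of a linear model over 10 random 2/3-1/3 splits.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
X = rng.normal(size=(136, 4))                                  # placeholder: age, BNP, TRPG, CTR
y = X @ np.array([0.2, 0.3, 0.6, 0.1]) + rng.normal(scale=0.5, size=136)  # placeholder mPAP

scores = []
for seed in range(10):                                         # 10 different data splittings
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=1/3, random_state=seed)
    model = LinearRegression().fit(X_tr, y_tr)
    scores.append(r2_score(y_va, model.predict(X_va)))
print("averaged R squared:", round(float(np.mean(scores)), 3))
```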


Subject(s)
Hypertension, Pulmonary; Pulmonary Embolism; Humans; Hypertension, Pulmonary/diagnosis; Arterial Pressure; Echocardiography/methods; Tricuspid Valve; Natriuretic Peptide, Brain; Pulmonary Embolism/complications; Pulmonary Embolism/diagnostic imaging; Chronic Disease
3.
Front Oncol ; 14: 1277749, 2024.
Article in English | MEDLINE | ID: mdl-38322414

ABSTRACT

Purpose: To examine the molecular biological differences between conjunctival mucosa-associated lymphoid tissue (MALT) lymphoma and orbital MALT lymphoma in ocular adnexal lymphoma. Methods: Observational case series of 129 consecutive, randomized cases of ocular adnexal MALT lymphoma diagnosed histopathologically between 2008 and 2020. Total RNA was extracted from formalin-fixed paraffin-embedded tissue of ocular adnexal MALT lymphoma, and RNA-sequencing was performed. Orbital MALT lymphoma gene expression was compared with that of conjunctival MALT lymphoma. Gene set (GS) analysis to detect differentially expressed gene set clusters was performed on the RNA-sequencing data. Related proteins were further examined by immunohistochemical staining. In addition, artificial segmentation images were used to quantify the stromal area in HE images. Results: GS analysis showed differences in expression in 29 GS types in primary orbital MALT lymphoma (N=5,5, FDR q-value <0.25). The GS with the greatest difference in expression was that of epithelial-mesenchymal transition (EMT). Based on this GS change, immunohistochemical staining was added using E-cadherin as an epithelial marker and vimentin as a mesenchymal marker for EMT. There was significant staining of vimentin in orbital lymphoma (P<0.01, N=129) and of E-cadherin in conjunctival lesions (P=0.023, N=129). Vimentin staining correlated with Ann Arbor staging (1 versus >1) independent of age and sex on multivariate analysis (P=0.004). The stromal area within tumors also differed significantly (P<0.01). Conclusion: GS changes including EMT and the stromal area within tumors demonstrated the molecular biological differences between conjunctival MALT lymphoma and orbital MALT lymphoma in ocular adnexal lymphomas.

4.
Acad Radiol ; 31(3): 822-829, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37914626

ABSTRACT

RATIONALE AND OBJECTIVES: Pericardial fat (PF)-the thoracic visceral fat surrounding the heart-promotes the development of coronary artery disease by inducing inflammation of the coronary arteries. To evaluate PF, we generated pericardial fat count images (PFCIs) from chest radiographs (CXRs) using a dedicated deep-learning model. MATERIALS AND METHODS: We reviewed data of 269 consecutive patients who underwent coronary computed tomography (CT). We excluded patients with metal implants, pleural effusion, history of thoracic surgery, or malignancy. Thus, the data of 191 patients were used. We generated PFCIs from the projection of three-dimensional CT images, wherein fat accumulation was represented by a high pixel value. Three different deep-learning models, including CycleGAN, were combined in the proposed method to generate PFCIs from CXRs. A single CycleGAN-based model was used to generate PFCIs from CXRs for comparison with the proposed method. To evaluate the image quality of the generated PFCIs, structural similarity index measure (SSIM), mean squared error (MSE), and mean absolute error (MAE) of (i) the PFCI generated using the proposed method and (ii) the PFCI generated using the single model were compared. RESULTS: The mean SSIM, MSE, and MAE were 8.56 × 10⁻¹, 1.28 × 10⁻², and 3.57 × 10⁻², respectively, for the proposed model, and 7.62 × 10⁻¹, 1.98 × 10⁻², and 5.04 × 10⁻², respectively, for the single CycleGAN-based model. CONCLUSION: PFCIs generated from CXRs with the proposed model showed better performance than those generated with the single model. The evaluation of PF without CT may be possible using the proposed method.
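A minimal sketch of the image-quality comparison described above (SSIM, MSE, and MAE between a generated image and its reference), using scikit-image; the arrays below are placeholders, not the study's PFCIs:

```python
# Hypothetical: compare a generated pericardial fat count image with a CT-derived reference.
import numpy as np
from skimage.metrics import structural_similarity, mean_squared_error

rng = np.random.default_rng(2)
reference = rng.random((256, 256)).astype(np.float32)                       # placeholder ground-truth PFCI
generated = np.clip(reference + rng.normal(0, 0.05, (256, 256)), 0, 1).astype(np.float32)

ssim = structural_similarity(reference, generated, data_range=1.0)
mse = mean_squared_error(reference, generated)
mae = float(np.mean(np.abs(reference - generated)))
print(f"SSIM = {ssim:.3f}, MSE = {mse:.4f}, MAE = {mae:.4f}")
```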


Subject(s)
Deep Learning; Humans; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional; Tomography, X-Ray Computed
5.
PeerJ Comput Sci ; 9: e1620, 2023.
Article in English | MEDLINE | ID: mdl-37869462

ABSTRACT

Purpose: The purpose of this study is to compare two libraries dedicated to the Markov chain Monte Carlo method: pystan and numpyro. In the comparison, we mainly focused on the agreement of estimated latent parameters and the performance of sampling using the Markov chain Monte Carlo method in Bayesian item response theory (IRT). Materials and methods: Bayesian 1PL-IRT and 2PL-IRT were implemented with pystan and numpyro. Then, the Bayesian 1PL-IRT and 2PL-IRT were applied to two types of medical data obtained from a published article. The same prior distributions of latent parameters were used in both pystan and numpyro. Estimation results of latent parameters of 1PL-IRT and 2PL-IRT were compared between pystan and numpyro. Additionally, the computational cost of the Markov chain Monte Carlo method was compared between the two libraries. To evaluate the computational cost of IRT models, simulation data were generated from the medical data using numpyro. Results: For all the combinations of IRT types (1PL-IRT or 2PL-IRT) and medical data types, the mean and standard deviation of the estimated latent parameters were in good agreement between pystan and numpyro. In most cases, the sampling time using the Markov chain Monte Carlo method was shorter in numpyro than in pystan. When the large-sized simulation data were used, numpyro with a graphics processing unit was useful for reducing the sampling time. Conclusion: Numpyro and pystan were useful for applying the Bayesian 1PL-IRT and 2PL-IRT. Our results show that the two libraries yielded similar estimation results and that, regarding sampling time, the faster library differed depending on the dataset size.
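As an illustration of the kind of model being compared, here is a hedged, minimal 1PL-IRT sketch in numpyro with NUTS sampling; the priors, data, and dimensions are placeholders and do not reproduce the study's models:

```python
# Hypothetical Bayesian 1PL-IRT: responses[i, j] = 1 if person i answers item j correctly.
import numpy as np
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS
from jax import random

n_persons, n_items = 50, 10
rng = np.random.default_rng(3)
responses = rng.integers(0, 2, size=(n_persons, n_items))        # placeholder response matrix

def one_pl_irt(responses):
    theta = numpyro.sample("theta", dist.Normal(0, 1).expand([n_persons]))  # person ability
    b = numpyro.sample("b", dist.Normal(0, 1).expand([n_items]))            # item difficulty
    logits = theta[:, None] - b[None, :]
    numpyro.sample("obs", dist.Bernoulli(logits=logits), obs=responses)

mcmc = MCMC(NUTS(one_pl_irt), num_warmup=500, num_samples=1000)
mcmc.run(random.PRNGKey(0), responses=responses)
mcmc.print_summary()
```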

6.
Eur Radiol ; 2023 Oct 26.
Article in English | MEDLINE | ID: mdl-37882835

ABSTRACT

OBJECTIVES: To build preoperative prediction models with and without MRI for regional lymph node metastasis (r-LNM, pelvic and/or para-aortic LNM (PENM/PANM)) and for PANM in endometrial cancer using established risk factors. METHODS: In this retrospective two-center study, 364 patients with endometrial cancer were included: 253 in the model development and 111 in the external validation. For r-LNM and PANM, respectively, best subset regression with ten-time fivefold cross validation was conducted using ten established risk factors (4 clinical and 6 imaging factors). Models with the top 10 percentile of area under the curve (AUC) and with the fewest variables in the model development were subjected to the external validation (11 and 4 candidates for r-LNM and PANM, respectively). Then, the models with the highest AUC were selected as the final models. Models without MRI findings were developed similarly, assuming cases where MRI was not available. RESULTS: The final r-LNM model consisted of pelvic lymph node (PEN) ≥ 6 mm, deep myometrial invasion (DMI) on MRI, CA125, para-aortic lymph node (PAN) ≥ 6 mm, and biopsy; the PANM model consisted of DMI, PAN, PEN, and CA125 (in order of coefficient β values). The AUCs in the external validation were 0.85 (95%CI: 0.77-0.92) and 0.86 (0.75-0.94), respectively. The models without MRI findings showed AUCs of 0.79 (0.68-0.89) and 0.87 (0.76-0.96) for r-LNM and PANM, respectively. CONCLUSIONS: The prediction models created by best subset regression with cross validation showed high diagnostic performance for predicting LNM in endometrial cancer, which may avoid unnecessary lymphadenectomies. CLINICAL RELEVANCE STATEMENT: The prediction risks of lymph node metastasis (LNM) and para-aortic LNM can be easily obtained for all patients with endometrial cancer by inputting the conventional clinical information into our models. They help in decision-making for optimal lymphadenectomy and personalized treatment. KEY POINTS: • Diagnostic performance of lymph node metastases (LNM) in endometrial cancer is low based on size criteria and can be improved by combining with other clinical information. • The optimized logistic regression model for regional LNM consists of lymph node ≥ 6 mm, deep myometrial invasion, cancer antigen-125, and biopsy, showing high diagnostic performance. • Our model predicts the preoperative risk of LNM, which may avoid unnecessary lymphadenectomies.
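A simplified sketch of best subset selection scored by repeated cross-validated AUC, in the spirit of the approach above; the feature matrix, labels, and logistic-regression settings are placeholder assumptions, not the study's implementation:

```python
# Hypothetical best-subset logistic regression scored by ten-time fivefold cross-validated AUC.
# Exhaustive search over all subsets of 10 candidate risk factors (slow but small-scale here).
from itertools import combinations
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

X, y = make_classification(n_samples=253, n_features=10, random_state=0)   # placeholder: 10 risk factors
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)     # ten-time fivefold CV

results = []
for k in range(1, 11):
    for subset in combinations(range(10), k):
        auc = cross_val_score(LogisticRegression(max_iter=1000),
                              X[:, list(subset)], y, cv=cv, scoring="roc_auc").mean()
        results.append((auc, subset))

best_auc, best_subset = max(results)
print("best subset of feature indices:", best_subset, "mean AUC:", round(best_auc, 3))
```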

7.
Sci Rep ; 13(1): 17533, 2023 10 16.
Article in English | MEDLINE | ID: mdl-37845348

ABSTRACT

This study evaluated the diagnostic performance of our deep learning (DL) model for COVID-19 and investigated whether the diagnostic performance of radiologists improved by referring to our model. Our datasets contained chest X-rays (CXRs) for the following three categories: normal (NORMAL), non-COVID-19 pneumonia (PNEUMONIA), and COVID-19 pneumonia (COVID). We used two public datasets and a private dataset collected from eight hospitals for the development and external validation of our DL model (26,393 CXRs). Eight radiologists performed two reading sessions: one with reference to CXRs only, and the other with reference to both CXRs and the results of the DL model. The evaluation metrics for the reading sessions were accuracy, sensitivity, specificity, and area under the curve (AUC). The accuracy of our DL model was 0.733, and that of the eight radiologists without DL was 0.696 ± 0.031. There was a significant difference in AUC between the radiologists with and without DL for COVID versus NORMAL or PNEUMONIA (p = 0.0038). Our DL model alone showed better diagnostic performance than most radiologists. In addition, our model significantly improved the diagnostic performance of radiologists for COVID versus NORMAL or PNEUMONIA.


Subject(s)
COVID-19; Deep Learning; Pneumonia; Humans; COVID-19/diagnostic imaging; COVID-19 Testing; X-Rays; Tomography, X-Ray Computed/methods; Pneumonia/diagnostic imaging; Radiographic Image Interpretation, Computer-Assisted/methods; Radiologists; Computers; Retrospective Studies
8.
Curr Eye Res ; 48(12): 1195-1202, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37566457

ABSTRACT

PURPOSE: The purpose of this study was to develop artificial intelligence algorithms that can distinguish between orbital and conjunctival mucosa-associated lymphoid tissue (MALT) lymphomas in pathological images. METHODS: Tissue blocks with residual MALT lymphoma and data from histological and flow cytometric studies and molecular genetic analyses, such as gene rearrangement, were procured for 129 patients treated between April 2008 and April 2020. We collected pathological hematoxylin and eosin-stained (HE) images of lymphoma from these patients and cropped 10 different image patches at a resolution of 2048 × 2048 from the pathological images of each patient. A total of 990 images from 99 patients were used to create and evaluate machine-learning models. Each image patch at three different magnifications (×4, ×20, and ×40) underwent texture analysis to extract features, and seven different machine-learning algorithms were then applied to the results to create models. Cross-validation on a patient-by-patient basis was used to create and evaluate the models, and 300 images from the remaining 30 cases were then used to evaluate the average accuracy rate. RESULTS: Ten-fold cross-validation using the support vector machine with a linear kernel was identified as the best algorithm for discriminating between conjunctival and orbital MALT lymphomas, with an average accuracy rate under cross-validation of 85%. Among the ×4, ×20, and ×40 magnifications, the ×20 HE images were the most accurate in distinguishing orbital from conjunctival MALT lymphomas. CONCLUSION: Artificial intelligence algorithms can successfully distinguish between orbital and conjunctival MALT lymphomas in HE images.
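A hedged sketch combining GLCM texture features with a linear-kernel SVM and patient-wise grouped cross-validation, roughly analogous to the pipeline described above; the image patches, feature set, and grouping are placeholders, not the study's data or exact feature definitions:

```python
# Hypothetical: GLCM texture features per image patch, linear SVM, patient-wise cross-validation.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(4)
patches = rng.integers(0, 256, size=(200, 64, 64), dtype=np.uint8)   # placeholder HE patches
labels = rng.integers(0, 2, size=200)                                 # 0 = conjunctival, 1 = orbital
patients = np.repeat(np.arange(20), 10)                               # 10 patches per patient

def texture_features(img):
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256, symmetric=True, normed=True)
    return [graycoprops(glcm, p)[0, 0] for p in ("contrast", "homogeneity", "energy", "correlation")]

X = np.array([texture_features(p) for p in patches])
scores = cross_val_score(SVC(kernel="linear"), X, labels, groups=patients, cv=GroupKFold(n_splits=10))
print("patient-wise CV accuracy:", round(float(scores.mean()), 3))
```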


Subject(s)
Conjunctival Neoplasms; Eye Neoplasms; Lymphoma, B-Cell, Marginal Zone; Humans; Lymphoma, B-Cell, Marginal Zone/genetics; Artificial Intelligence; Conjunctival Neoplasms/diagnosis; Conjunctival Neoplasms/pathology; Machine Learning
9.
Cancers (Basel) ; 15(5)2023 Feb 28.
Article in English | MEDLINE | ID: mdl-36900325

ABSTRACT

We aimed to develop and evaluate an automatic prediction system for grading histopathological images of prostate cancer. A total of 10,616 whole slide images (WSIs) of prostate tissue were used in this study. The WSIs from one institution (5160 WSIs) were used as the development set, while those from the other institution (5456 WSIs) were used as the unseen test set. Label distribution learning (LDL) was used to address a difference in label characteristics between the development and test sets. A combination of EfficientNet (a deep learning model) and LDL was utilized to develop an automatic prediction system. Quadratic weighted kappa (QWK) and accuracy in the test set were used as the evaluation metrics. The QWK and accuracy were compared between systems with and without LDL to evaluate the usefulness of LDL in system development. The QWK and accuracy were 0.364 and 0.407 in the systems with LDL and 0.240 and 0.247 in those without LDL, respectively. Thus, LDL improved the diagnostic performance of the automatic prediction system for the grading of histopathological images for cancer. By handling the difference in label characteristics using LDL, the diagnostic performance of the automatic prediction system could be improved for prostate cancer grading.
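For reference, the quadratic weighted kappa (QWK) used as the main metric above can be computed as in this minimal sketch; the predicted and reference grades are placeholders:

```python
# Hypothetical: quadratic weighted kappa between predicted and reference histopathological grades.
from sklearn.metrics import cohen_kappa_score

y_true = [0, 1, 2, 3, 4, 2, 1, 0, 3, 4]   # placeholder reference grades
y_pred = [0, 1, 1, 3, 4, 2, 2, 0, 2, 4]   # placeholder model predictions

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
print("QWK =", round(qwk, 3))
```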

10.
Sci Rep ; 13(1): 628, 2023 01 12.
Article in English | MEDLINE | ID: mdl-36635425

ABSTRACT

This study aimed to develop a versatile automatic segmentation model of bladder cancer (BC) on MRI using a convolutional neural network and to investigate the robustness of radiomics features automatically extracted from apparent diffusion coefficient (ADC) maps. This two-center retrospective study used multi-vendor MR units and included 170 patients with BC, of whom 140 were assigned to training datasets for the modified U-net model with five-fold cross-validation and 30 to test datasets for assessment of segmentation performance and reproducibility of automatically extracted radiomics features. For model input data, diffusion-weighted images with b = 0 and 1000 s/mm², ADC maps, and multi-sequence images (b0-b1000-ADC maps) were used. Segmentation accuracy was compared between our model and existing models. The reproducibility of radiomics features on ADC maps was evaluated using the intraclass correlation coefficient. The model with multi-sequence images achieved the highest Dice similarity coefficient (DSC) with five-fold cross-validation (mean DSC = 0.83 and 0.79 for the training and validation datasets, respectively). The median (interquartile range) DSC for the test dataset was 0.81 (0.70-0.88). Radiomics features extracted from manually and automatically segmented BC exhibited good reproducibility. Thus, our U-net model performed highly accurate segmentation of BC, and radiomics features extracted from the automatic segmentation results exhibited high reproducibility.
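The Dice similarity coefficient used above for segmentation evaluation can be computed from binary masks as in this minimal sketch; the masks below are placeholders:

```python
# Hypothetical: Dice similarity coefficient between a predicted and a reference binary mask.
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

truth = np.zeros((128, 128), dtype=np.uint8); truth[40:80, 40:80] = 1   # placeholder ground truth
pred = np.zeros((128, 128), dtype=np.uint8); pred[45:85, 42:82] = 1     # placeholder prediction
print("DSC =", round(dice_coefficient(pred, truth), 3))
```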


Subject(s)
Magnetic Resonance Imaging; Urinary Bladder Neoplasms; Humans; Retrospective Studies; Reproducibility of Results; Magnetic Resonance Imaging/methods; Neural Networks, Computer; Urinary Bladder Neoplasms/diagnostic imaging; Image Processing, Computer-Assisted/methods
12.
Jpn J Radiol ; 41(4): 449-455, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36469224

ABSTRACT

PURPOSE: This study proposes a Bayesian multidimensional nominal response model (MD-NRM) to statistically analyze the nominal responses of multiclass classifications. MATERIALS AND METHODS: First, for the MD-NRM, we extended the conventional nominal response model to achieve stable convergence of the Bayesian nominal response model and utilized multidimensional ability parameters. We then applied the MD-NRM to a 3-class classification problem, in which radiologists visually evaluated chest X-ray images and selected their diagnosis from one of the three classes. The classification problem consisted of 150 cases, and each of the six radiologists selected their diagnosis based on a visual evaluation of the images. Consequently, 900 (= 150 × 6) nominal responses were obtained. In the MD-NRM, we assumed that the responses were determined by the softmax function, the ability of the radiologists, and the difficulty of the images. In addition, we assumed that the multidimensional ability of one radiologist was represented by a 3 × 3 matrix. The latent parameters of the MD-NRM (ability parameters of radiologists and difficulty parameters of images) were estimated from the 900 responses. To implement the Bayesian MD-NRM and estimate the latent parameters, a probabilistic programming language (Stan, version 2.21.0) was used. RESULTS: For all parameters, the Rhat values were less than 1.10, indicating that the latent parameters of the MD-NRM converged successfully. CONCLUSION: The results show that it is possible to estimate the latent parameters (ability and difficulty parameters) of the MD-NRM using Stan. Our code for the implementation of the MD-NRM is available as open source.
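As a hedged illustration of a softmax-based response probability of the kind the MD-NRM assumes (the exact parameterization is not given in the abstract; the matrix-times-vector form and all values below are placeholders):

```python
# Hypothetical: probability that a reader chooses each of the 3 diagnostic classes,
# modeled as a softmax of a 3x3 ability matrix applied to a per-image difficulty vector.
import numpy as np

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

ability = np.array([[1.2, 0.1, 0.0],     # placeholder 3 × 3 ability matrix of one radiologist
                    [0.2, 0.9, 0.1],
                    [0.0, 0.2, 1.1]])
difficulty = np.array([0.5, -0.3, 0.1])   # placeholder difficulty vector of one image

class_probs = softmax(ability @ difficulty)
print("P(class 0..2) =", np.round(class_probs, 3))
```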


Subject(s)
Radiologists; Humans; Bayes Theorem
14.
Sci Rep ; 12(1): 11090, 2022 06 30.
Article in English | MEDLINE | ID: mdl-35773366

ABSTRACT

The integrated positron emission tomography/magnetic resonance imaging (PET/MRI) scanner simultaneously acquires metabolic information via PET and morphological information using MRI. However, attenuation correction, which is necessary for quantitative PET evaluation, is difficult as it requires the generation of attenuation-correction maps from MRI, which has no direct relationship with the gamma-ray attenuation information. MRI-based bone tissue segmentation is potentially usable for attenuation correction in relatively rigid and fixed organs such as the head and pelvis regions. However, this is challenging for the chest region because of respiratory and cardiac motion, its anatomically complicated structure, and the thin bone cortex. We propose a new method using unsupervised generative attentional networks with adaptive layer-instance normalisation for image-to-image translation (U-GAT-IT), which specialises in unpaired image transformation based on attention maps. We added the modality-independent neighbourhood descriptor (MIND) to the loss of U-GAT-IT to guarantee anatomical consistency in the image transformation between different domains. Our proposed method generated synthesised computed tomography images of the chest. Experimental results showed that our method outperforms current approaches. The study findings suggest the possibility of synthesising clinically acceptable computed tomography images from chest MRI with minimal changes in anatomical structures and without human annotation.


Subject(s)
Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Pelvis; Positron-Emission Tomography/methods; Tomography, X-Ray Computed
15.
Sci Rep ; 12(1): 8214, 2022 05 17.
Article in English | MEDLINE | ID: mdl-35581272

ABSTRACT

This retrospective study aimed to develop and validate a deep learning model for the classification of coronavirus disease-2019 (COVID-19) pneumonia, non-COVID-19 pneumonia, and healthy cases using chest X-ray (CXR) images. One private and two public datasets of CXR images were included. The private dataset included CXRs from six hospitals. A total of 14,258 and 11,253 CXR images were included in the two public datasets, and 455 in the private dataset. A deep learning model based on EfficientNet with noisy student was constructed using the three datasets. The test set of 150 CXR images in the private dataset was evaluated by the deep learning model and six radiologists. Three-category classification accuracy and class-wise area under the curve (AUC) for COVID-19 pneumonia, non-COVID-19 pneumonia, and healthy cases were calculated. The consensus of the six radiologists was used for calculating the class-wise AUC. The three-category classification accuracy of our model was 0.8667, and those of the six radiologists ranged from 0.5667 to 0.7733. For our model and the consensus of the six radiologists, the class-wise AUCs for healthy, non-COVID-19 pneumonia, and COVID-19 pneumonia cases were 0.9912, 0.9492, and 0.9752 and 0.9656, 0.8654, and 0.8740, respectively. The difference in class-wise AUC between our model and the consensus of the six radiologists was statistically significant for COVID-19 pneumonia (p value = 0.001334). Thus, an accurate deep learning model for the three-category classification could be constructed; the diagnostic performance of our model was significantly better than that of the consensus interpretation by the six radiologists for COVID-19 pneumonia.
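A minimal sketch of how class-wise AUC for a three-category classifier might be computed (one-vs-rest per class); the labels and per-class scores below are placeholders, not the study data:

```python
# Hypothetical: class-wise (one-vs-rest) AUC for healthy / non-COVID-19 pneumonia / COVID-19 pneumonia.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

classes = ["healthy", "non-COVID-19 pneumonia", "COVID-19 pneumonia"]
rng = np.random.default_rng(5)
y_true = rng.integers(0, 3, size=150)                   # placeholder reference labels
y_score = rng.dirichlet(np.ones(3), size=150)           # placeholder per-class probabilities

y_bin = label_binarize(y_true, classes=[0, 1, 2])
for k, name in enumerate(classes):
    auc = roc_auc_score(y_bin[:, k], y_score[:, k])     # one-vs-rest AUC for class k
    print(f"class-wise AUC ({name}): {auc:.3f}")
```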


Subject(s)
COVID-19; Deep Learning; Pneumonia; COVID-19/diagnostic imaging; Humans; Pneumonia/diagnosis; Retrospective Studies; SARS-CoV-2
16.
Eur Radiol ; 32(11): 7976-7987, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35394186

ABSTRACT

OBJECTIVES: To develop and evaluate a deep learning-based algorithm (DLA) for automatic detection of bone metastases on CT. METHODS: This retrospective study included CT scans acquired at a single institution between 2009 and 2019. Positive scans with bone metastases and negative scans without bone metastasis were collected to train the DLA. Another 50 positive and 50 negative scans were collected separately from the training dataset and were divided into validation and test datasets at a 2:3 ratio. The clinical efficacy of the DLA was evaluated in an observer study with board-certified radiologists. Jackknife alternative free-response receiver operating characteristic analysis was used to evaluate observer performance. RESULTS: A total of 269 positive scans including 1375 bone metastases and 463 negative scans were collected for the training dataset. The number of lesions identified in the validation and test datasets was 49 and 75, respectively. The DLA achieved a sensitivity of 89.8% (44 of 49) with 0.775 false positives per case for the validation dataset and 82.7% (62 of 75) with 0.617 false positives per case for the test dataset. With the DLA, the overall performance of nine radiologists with reference to the weighted alternative free-response receiver operating characteristic figure of merit improved from 0.746 to 0.899 (p < .001). Furthermore, the mean interpretation time per case decreased from 168 to 85 s (p = .004). CONCLUSION: With the aid of the algorithm, the overall performance of radiologists in bone metastases detection improved, and the interpretation time decreased at the same time. KEY POINTS: • A deep learning-based algorithm for automatic detection of bone metastases on CT was developed. • In the observer study, overall performance of radiologists in bone metastases detection improved significantly with the aid of the algorithm. • Radiologists' interpretation time decreased at the same time.


Subject(s)
Bone Neoplasms; Deep Learning; Humans; Radiographic Image Interpretation, Computer-Assisted; Retrospective Studies; Algorithms; Tomography, X-Ray Computed; Radiologists; Bone Neoplasms/diagnostic imaging; Bone Neoplasms/secondary
17.
Magn Reson Imaging ; 85: 161-167, 2022 01.
Article in English | MEDLINE | ID: mdl-34687853

ABSTRACT

PURPOSE: To evaluate radiomic machine learning (ML) classifiers based on multiparametric magnetic resonance images (MRI) in pretreatment assessment of endometrial cancer (EC) risk factors and to examine effects on radiologists' interpretation of deep myometrial invasion (dMI). METHODS: This retrospective study examined 200 consecutive patients with EC during January 2004 to March 2017, divided randomly into Discovery (n = 150) and Test (n = 50) datasets. Radiomic features of tumors were extracted from T2-weighted images, apparent diffusion coefficient maps, and contrast-enhanced T1-weighted images. Using the Discovery dataset, feature selection and hyperparameter tuning for XGBoost were performed. Ten classifiers were built to predict dMI, histological grade, lymphovascular invasion (LVI), and pelvic/paraaortic lymph node metastasis (PLNM/PALNM), respectively. Using the Test dataset, the diagnostic performances of the ten classifiers were assessed by the area under the receiver operator characteristic curve (AUC). Next, four radiologists independently assessed dMI on MRI with a Likert scale before and after referring to the inference of the ML classifier for the Test dataset. Then, the AUCs obtained before and after this reference were compared. RESULTS: In the Test dataset, the mean AUCs of the ML classifiers for dMI, histological grade, LVI, PLNM, and PALNM were 0.83, 0.77, 0.81, 0.72, and 0.82, respectively. The AUCs of all radiologists for dMI (0.83-0.88) were better than or equal to the mean AUC of the ML classifier and showed no statistically significant difference before and after the reference. CONCLUSION: Radiomic classifiers showed promise for pretreatment assessment of EC risk factors. Radiologists' interpretation outperformed the ML classifier for dMI and did not improve after reviewing the classifier's inference.
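A hedged sketch of a single radiomics-based XGBoost classifier evaluated by AUC on a held-out set, in the spirit of the pipeline above; the feature matrix, labels, and hyperparameters are placeholders, not the study's settings:

```python
# Hypothetical: XGBoost classifier for one binary risk factor (e.g. dMI) from radiomic features.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
X = rng.normal(size=(200, 30))                                            # placeholder radiomic features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=200) > 0).astype(int)      # placeholder labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)
clf = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)     # placeholder hyperparameters
clf.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print("test AUC:", round(auc, 3))
```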


Subject(s)
Endometrial Neoplasms; Machine Learning; Endometrial Neoplasms/diagnostic imaging; Female; Humans; Magnetic Resonance Imaging/methods; Prognosis; Radiologists; Retrospective Studies; Risk Factors
18.
Sci Rep ; 11(1): 18422, 2021 09 16.
Article in English | MEDLINE | ID: mdl-34531429

ABSTRACT

This study aimed to determine whether temporal subtraction (TS) CT obtained with non-rigid image registration improves the detection of various bone metastases during serial clinical follow-up examinations by numerous radiologists. Six board-certified radiologists sequentially and retrospectively scrutinized CT images of patients with a history of malignancy. These radiologists selected 50 positive subjects with bone metastases and 50 negative subjects without. Furthermore, for each subject, they selected by consensus a pair of previous and current CT images satisfying predefined criteria. Previous images were non-rigidly transformed to match current images and subtracted from them to automatically generate TS images. Subsequently, 18 radiologists independently interpreted the 100 CT image pairs to identify bone metastases, both without and with TS images, with the two interpretations separated by an interval of at least 30 days. Jackknife free-response receiver operating characteristic (JAFROC) analysis was conducted to assess observer performance. Compared with interpretation without TS images, interpretation with TS images was associated with a significantly higher mean figure of merit (0.710 vs. 0.658; JAFROC analysis, P = 0.0027). Mean lesion-based sensitivity was significantly higher for interpretation with TS than without TS (46.1% vs. 33.9%; P = 0.003). The mean false-positive count per subject was also significantly higher for interpretation with TS than without TS (0.28 vs. 0.15; P < 0.001). At the subject level, mean sensitivity was significantly higher for interpretation with TS images than without (73.2% vs. 65.4%; P = 0.003). There was no significant difference in mean specificity (0.93 vs. 0.95; P = 0.083). TS significantly improved overall performance in the detection of various bone metastases.


Subject(s)
Bone Neoplasms/drug therapy; Tomography, X-Ray Computed/standards; Aged; Aged, 80 and over; Bone Neoplasms/secondary; Female; Humans; Male; Middle Aged; Observer Variation; Radiologists/statistics & numerical data; Sensitivity and Specificity; Software; Tomography, X-Ray Computed/methods
19.
Front Artif Intell ; 4: 694815, 2021.
Article in English | MEDLINE | ID: mdl-34337394

ABSTRACT

Purpose: The purpose of this study was to develop and evaluate lung cancer segmentation with a pretrained model and transfer learning. The pretrained model was constructed from an artificial dataset generated using a generative adversarial network (GAN). Materials and Methods: Three public datasets containing images of lung nodules/lung cancers were used: LUNA16 dataset, Decathlon lung dataset, and NSCLC radiogenomics. The LUNA16 dataset was used to generate an artificial dataset for lung cancer segmentation with the help of the GAN and 3D graph cut. Pretrained models were then constructed from the artificial dataset. Subsequently, the main segmentation model was constructed from the pretrained models and the Decathlon lung dataset. Finally, the NSCLC radiogenomics dataset was used to evaluate the main segmentation model. The Dice similarity coefficient (DSC) was used as a metric to evaluate the segmentation performance. Results: The mean DSC for the NSCLC radiogenomics dataset improved overall when using the pretrained models. At maximum, the mean DSC was 0.09 higher with the pretrained model than that without it. Conclusion: The proposed method comprising an artificial dataset and a pretrained model can improve lung cancer segmentation as confirmed in terms of the DSC metric. Moreover, the construction of the artificial dataset for the segmentation using the GAN and 3D graph cut was found to be feasible.

20.
Sci Rep ; 11(1): 14440, 2021 07 14.
Article in English | MEDLINE | ID: mdl-34262088

ABSTRACT

Endometrial cancer (EC) is the most common gynecological tumor in developed countries, and preoperative risk stratification is essential for personalized medicine. There have been several radiomics studies for noninvasive risk stratification of EC using MRI. Although tumor segmentation is usually necessary for these studies, manual segmentation is not only labor-intensive but may also be subjective. Therefore, our study aimed to perform automatic segmentation of EC on MRI with a convolutional neural network. The effects of the input image sequence and batch size on segmentation performance were also investigated. Of 200 patients with EC, 180 were used for training the modified U-net model and 20 for testing the segmentation performance and the robustness of automatically extracted radiomics features. Using multi-sequence images and a larger batch size was effective for improving segmentation accuracy. The mean Dice similarity coefficient, sensitivity, and positive predictive value of our model for the test set were 0.806, 0.816, and 0.834, respectively. The robustness of automatically extracted first-order and shape-based features was high (median ICC = 0.86 and 0.96, respectively). Other higher-order features presented moderate-to-high robustness (median ICC = 0.57-0.93). Our model could automatically segment EC on MRI and extract radiomics features with high reliability.


Subject(s)
Magnetic Resonance Imaging; Neural Networks, Computer; Female; Humans; Pregnancy