Results 1 - 4 of 4
1.
Biomedicines ; 12(2)2024 Feb 07.
Article in English | MEDLINE | ID: mdl-38397986

ABSTRACT

Chemical exchange saturation transfer imaging of glutamate (GluCEST) is a novel technique for the non-invasive detection and quantification of cerebral glutamate (Glu) levels in neuromolecular processes. Here we used GluCEST imaging and 1H magnetic resonance spectroscopy (1H MRS) to assess in vivo changes in hippocampal Glu signals in a rat model of depression induced by the forced swimming test (FST). Compared with the control group, the FST group exhibited markedly reduced GluCEST-weighted levels and 1H MRS-derived Glu concentrations in the hippocampal region (GluCEST-weighted levels: 3.67 ± 0.81% vs. 5.02 ± 0.44%, p < 0.001; Glu concentrations: 6.560 ± 0.292 µmol/g vs. 7.133 ± 0.397 µmol/g, p = 0.001). Our results indicate that GluCEST imaging is a distinctive approach for detecting and monitoring Glu levels in a rat model of depression. Furthermore, GluCEST imaging may provide deeper insight into the neurochemical involvement of glutamate in various psychiatric disorders.
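The abstract does not spell out how the GluCEST-weighted level is computed; conventionally it is the magnetization transfer ratio asymmetry at the glutamate amine offset (+3.0 ppm from water). The sketch below illustrates that standard computation under this assumption; the arrays and ROI are synthetic placeholders, not study data.

```python
import numpy as np

def glucest_weighted(m_neg3ppm, m_pos3ppm, eps=1e-6):
    """Conventional GluCEST-weighted contrast (%): MTR asymmetry at the
    glutamate amine offset (+3.0 ppm from water).

    m_neg3ppm : saturation image acquired at -3.0 ppm
    m_pos3ppm : saturation image acquired at +3.0 ppm
    """
    return 100.0 * (m_neg3ppm - m_pos3ppm) / np.maximum(m_neg3ppm, eps)

# Illustrative use: mean GluCEST-weighted level inside a hippocampal ROI mask
# (random placeholder data only).
rng = np.random.default_rng(0)
m_neg = rng.uniform(0.8, 1.0, size=(128, 128))
m_pos = rng.uniform(0.7, 0.95, size=(128, 128))
roi = np.zeros((128, 128), dtype=bool)
roi[50:70, 60:80] = True
print(f"ROI GluCEST-weighted level: {glucest_weighted(m_neg, m_pos)[roi].mean():.2f}%")
```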

2.
J Gastric Cancer ; 23(3): 388-399, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37553127

ABSTRACT

Gastric cancer remains a significant global health concern, underscoring the need for advances in imaging techniques to ensure accurate diagnosis and effective treatment planning. Artificial intelligence (AI) has emerged as a potent tool for gastric cancer imaging, particularly for diagnostic imaging and body morphometry. This review article offers a comprehensive overview of recent developments and applications of AI in gastric cancer imaging. We investigated the role of AI imaging in gastric cancer diagnosis and staging, showcasing its potential to enhance the accuracy and efficiency of these crucial aspects of patient management. Additionally, we explored the application of AI body morphometry, specifically for assessing the clinical impact of gastrectomy; this aspect of AI utilization holds significant promise for understanding postoperative changes and optimizing patient outcomes. Furthermore, we examined the current state of AI techniques for the prognosis of patients with gastric cancer. These prognostic models leverage AI algorithms to predict long-term survival outcomes and assist clinicians in making informed treatment decisions. However, the implementation of AI techniques for gastric cancer imaging still has several limitations. As AI continues to evolve, we hope to witness the translation of cutting-edge technologies into routine clinical practice, ultimately improving patient care and outcomes in the fight against gastric cancer.
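As a concrete illustration of the body-morphometry readouts such AI pipelines typically produce, the sketch below computes a skeletal muscle area and skeletal muscle index from a CT slice and a segmentation mask. This is a generic, assumed example (the HU window, helper names, and demo values are not from the review), offered only to make the morphometry concept tangible.

```python
import numpy as np

# Hypothetical helper: skeletal muscle area (cm^2) from one axial CT slice.
# The review does not prescribe this pipeline; it is a common body-morphometry
# readout (e.g., the L3 skeletal muscle index used in sarcopenia assessment).
def skeletal_muscle_area_cm2(hu_slice, muscle_mask, pixel_spacing_mm,
                             hu_range=(-29, 150)):
    """hu_slice: 2-D array of Hounsfield units.
    muscle_mask: boolean mask from an AI segmentation model (assumed given).
    pixel_spacing_mm: (row_mm, col_mm), e.g., from the DICOM PixelSpacing tag.
    hu_range: conventional attenuation window for skeletal muscle.
    """
    in_window = (hu_slice >= hu_range[0]) & (hu_slice <= hu_range[1])
    pixel_area_mm2 = pixel_spacing_mm[0] * pixel_spacing_mm[1]
    return (muscle_mask & in_window).sum() * pixel_area_mm2 / 100.0  # mm^2 -> cm^2

def skeletal_muscle_index(area_cm2, height_m):
    """Skeletal muscle index (cm^2/m^2), often tracked before/after gastrectomy."""
    return area_cm2 / (height_m ** 2)

# Synthetic demo values only.
hu = np.full((512, 512), 40.0)
mask = np.zeros((512, 512), dtype=bool)
mask[200:300, 100:400] = True
area = skeletal_muscle_area_cm2(hu, mask, pixel_spacing_mm=(0.7, 0.7))
print(f"SMA: {area:.1f} cm^2, SMI: {skeletal_muscle_index(area, 1.70):.1f} cm^2/m^2")
```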

3.
Diagnostics (Basel) ; 14(1)2023 Dec 27.
Article in English | MEDLINE | ID: mdl-38201379

ABSTRACT

We propose a self-supervised machine learning (ML) algorithm for sequence-type classification of brain MRI that uses a supervisory signal derived from DICOM metadata (i.e., a rule-based virtual label). A total of 1787 brain MRI datasets were constructed, including 1531 from hospitals and 256 from multi-center trial datasets. The ground truth (GT) was generated by two experienced image analysts and checked by a radiologist. An ML framework called ImageSort-net was developed using features related to MRI acquisition parameters; virtual labels produced by the rule-based labeling system served as the supervisory labels for training. To evaluate the performance of ImageSort-net (MLvirtual), we compared it with models trained on human expert labels (MLhuman), using as the test set the blank cases that the rule-based labeling system failed to infer in each dataset. When trained with the hospital dataset, the overall accuracy of MLvirtual was comparable to that of MLhuman (98.5% and 99%, respectively). When trained with the relatively small multi-center trial dataset, its overall accuracy was lower than that of MLhuman (95.6% and 99.4%, respectively). After the two datasets were integrated and the model re-trained, MLvirtual achieved higher accuracy (99.7%) than MLvirtual trained only on the multi-center dataset (95.6%). Additionally, the multi-center dataset inference performances of the re-trained MLvirtual and MLhuman were identical (99.7%). Training ML algorithms on rule-based virtual labels achieved high accuracy for sequence-type classification of brain MRI and enabled us to build a sustainable self-learning system.
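The sketch below illustrates what a rule-based virtual label derived from DICOM metadata might look like, in the spirit of the supervisory signal described above. The specific tags, thresholds, and label set are illustrative assumptions; the authors' actual rule system is not given in the abstract.

```python
import pydicom

# Minimal sketch of a rule-based "virtual label" from DICOM acquisition
# metadata. Thresholds and sequence names below are assumptions for
# illustration, not the authors' rules.
def virtual_sequence_label(dicom_path):
    ds = pydicom.dcmread(dicom_path, stop_before_pixels=True)
    desc = str(getattr(ds, "SeriesDescription", "")).lower()
    tr = float(getattr(ds, "RepetitionTime", 0) or 0)
    te = float(getattr(ds, "EchoTime", 0) or 0)
    ti = float(getattr(ds, "InversionTime", 0) or 0)

    if "flair" in desc or (ti > 1500 and te > 80):
        return "FLAIR"
    if "dwi" in desc or "diffusion" in desc:
        return "DWI"
    if te > 80:
        return "T2"
    if tr < 1000 and te < 30:
        return "T1"
    return None  # "blank" case the rules cannot infer; left to the ML model
```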

4.
BMC Med Imaging ; 22(1): 87, 2022 05 13.
Article in English | MEDLINE | ID: mdl-35562705

ABSTRACT

BACKGROUND: Despite the dramatic increase in the use of medical imaging across therapeutic fields of clinical trials, the first step of image quality check (image QC), which verifies whether images have been uploaded appropriately according to predefined rules, is still performed manually by image analysts and requires substantial manpower and time. METHODS: In this retrospective study, 1669 computed tomography (CT) images covering five specific anatomical locations were collected from Asan Medical Center and Kangdong Sacred Heart Hospital. To generate the ground truth, two radiologists reviewed the anatomical locations and the presence of contrast enhancement in the collected data. Individual deep learning models were developed with InceptionResNetV2 and transfer learning, and we propose ImageQC-Net, an ensemble AI model that combines them. To evaluate clinical effectiveness, the overall accuracy and the time spent on image QC were compared between the conventional workflow and the ImageQC-Net-assisted workflow. RESULTS: ImageQC-Net body part classification showed excellent performance in both the internal verification set (precision, 100%; recall, 100%; accuracy, 100%) and the external verification set (precision, 99.8%; recall, 99.8%; accuracy, 99.8%). Contrast enhancement classification likewise achieved 100% precision, recall, and accuracy in the internal verification set and 100% precision, recall, and accuracy in the external dataset. Regarding clinical effectiveness, the reduction in image QC time with AI support for analysts 1 and 2 (49.7% and 48.3%, respectively) was statistically significant (p < 0.001). CONCLUSIONS: Comprehensive AI techniques for identifying body parts and contrast enhancement on CT images are highly accurate and can significantly reduce the time spent on image quality checks.
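A minimal sketch of the transfer-learning setup described above, assuming a standard Keras InceptionResNetV2 backbone with a small classification head; the input size, head design, and training schedule are assumptions, as the abstract does not specify them.

```python
import tensorflow as tf

NUM_BODY_PARTS = 5  # five anatomical locations, per the abstract

# ImageNet-pretrained backbone with the classifier removed (transfer learning).
base = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3))
base.trainable = False  # freeze pretrained features for the initial phase

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_BODY_PARTS, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# A second binary model (contrast vs. non-contrast) could be trained the same
# way, with the two models combined as an ensemble in the spirit of ImageQC-Net.
```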


Subject(s)
Artificial Intelligence , Deep Learning , Human Body , Humans , Retrospective Studies , Tomography, X-Ray Computed/methods