1.
NMR Biomed ; : e5167, 2024 May 02.
Article in English | MEDLINE | ID: mdl-38697612

ABSTRACT

Susceptibility source separation, or χ-separation, estimates diamagnetic (χdia) and paramagnetic (χpara) susceptibility signals in the brain using local field and R2' (= R2* - R2) maps. Recently proposed R2*-based χ-separation methods allow for χ-separation using only multi-echo gradient echo (ME-GRE) data, eliminating the need for additional data acquisition for R2 mapping. Although this approach reduces scan time and enhances clinical utility, the impact of missing R2 information remains a subject of exploration. In this study, we evaluate the viability of two previously proposed R2*-based χ-separation methods as alternatives to their R2'-based counterparts: model-based R2*-χ-separation versus χ-separation and deep learning-based χ-sepnet-R2* versus χ-sepnet-R2'. Their performances are assessed in individuals with multiple sclerosis (MS) against their corresponding R2'-based counterparts. The evaluations encompass qualitative visual assessments by experienced neuroradiologists and quantitative analyses, including region of interest analyses and linear regression analyses. Qualitatively, R2*-χ-separation tends to report higher χpara and χdia values than χ-separation, leading to less distinct lesion contrasts, while χ-sepnet-R2* closely aligns with χ-sepnet-R2'. Quantitative analysis reveals a robust correlation between both R2*-based methods and their R2'-based counterparts (r ≥ 0.88). Specifically, in whole-brain voxels, χ-sepnet-R2* exhibits higher correlation and better linearity than R2*-χ-separation (χdia/χpara from R2*-χ-separation: r = 0.88/0.90, slope = 0.79/0.86; χdia/χpara from χ-sepnet-R2*: r = 0.90/0.92, slope = 0.99/0.97). In MS lesions, both R2*-based methods display comparable correlation and linearity (χdia/χpara from R2*-χ-separation: r = 0.90/0.91, slope = 0.98/0.91; χdia/χpara from χ-sepnet-R2*: r = 0.88/0.88, slope = 0.91/0.95). Notably, χ-sepnet-R2* demonstrates negligible offsets, whereas R2*-χ-separation exhibits relatively large offsets (0.02 ppm in the whole brain and 0.01 ppm in the MS lesions), potentially indicating the false presence of myelin or iron in MS lesions. Overall, both R2*-based χ-separation methods demonstrated their viability as alternatives to their R2'-based counterparts. χ-sepnet-R2* aligned more closely with its R2'-based counterpart and showed minimal susceptibility offsets, whereas R2*-χ-separation reported higher χpara and χdia values than R2'-based χ-separation.
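As a concrete illustration of the quantities compared above, the sketch below shows how an R2' map is formed from R2* and R2 maps and how two susceptibility maps could be compared voxel-wise by linear regression. It is a minimal Python example assuming NumPy arrays in ppm and a boolean brain mask; it is not the authors' evaluation pipeline.

```python
import numpy as np
from scipy import stats

def r2_prime(r2_star, r2):
    """R2' = R2* - R2, clipped at zero since R2' is non-negative by definition."""
    return np.clip(r2_star - r2, 0.0, None)

def compare_chi_maps(chi_ref, chi_test, mask):
    """Voxel-wise linear regression of a test chi map against a reference within a mask."""
    result = stats.linregress(chi_ref[mask].ravel(), chi_test[mask].ravel())
    return result.slope, result.intercept, result.rvalue

# Hypothetical example with synthetic maps (values in ppm)
rng = np.random.default_rng(0)
chi_ref = rng.normal(0.0, 0.05, size=(32, 32, 32))
chi_test = 0.95 * chi_ref + 0.005 + rng.normal(0.0, 0.01, size=chi_ref.shape)
mask = np.ones(chi_ref.shape, dtype=bool)
print(compare_chi_maps(chi_ref, chi_test, mask))   # slope, intercept, r
```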

2.
Magn Reson Med Sci ; 23(3): 291-306, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38644201

ABSTRACT

In MRI, researchers have long endeavored to effectively visualize myelin distribution in the brain, a pursuit with significant implications for both scientific research and clinical applications. Over time, various methods such as myelin water imaging, magnetization transfer imaging, and relaxometric imaging have been developed, each carrying distinct advantages and limitations. Recently, an innovative technique named as magnetic susceptibility source separation has emerged, introducing a novel surrogate biomarker for myelin in the form of a diamagnetic susceptibility map. This paper comprehensively reviews this cutting-edge method, providing the fundamental concepts of magnetic susceptibility, susceptibility imaging, and the validation of the diamagnetic susceptibility map as a myelin biomarker that indirectly measures myelin content. Additionally, the paper explores essential aspects of data acquisition and processing, offering practical insights for readers. A comparison with established myelin imaging methods is also presented, and both current and prospective clinical and scientific applications are discussed to provide a holistic understanding of the technique. This work aims to serve as a foundational resource for newcomers entering this dynamic and rapidly expanding field.


Subject(s)
Brain , Magnetic Resonance Imaging , Myelin Sheath , Humans , Magnetic Resonance Imaging/methods , Brain/diagnostic imaging , Image Processing, Computer-Assisted/methods
3.
Neuroimage ; 264: 119706, 2022 12 01.
Article in English | MEDLINE | ID: mdl-36349597

ABSTRACT

Neuromelanin (NM)-sensitive MRI using a magnetization transfer (MT)-prepared T1-weighted sequence has been suggested as a tool to visualize NM contents in the brain. In this study, a new NM-sensitive imaging method, sandwichNM, is proposed that utilizes the incidental MT effects of spatial saturation RF pulses to generate consistently high-quality NM images using product sequences. The spatial saturation pulses are located both superior and inferior to the imaging volume, increasing MT weighting while avoiding asymmetric MT effects. With the spatial saturation parameters optimized, sandwichNM yielded a higher NM contrast ratio than conventional NM-sensitive imaging methods whose parameters were matched for comparability (sandwichNM: 23.6 ± 5.4%; MT-prepared TSE: 20.6 ± 7.4%; MT-prepared GRE: 17.4 ± 6.0%). In a multi-vendor experiment, the sandwichNM images displayed higher means and lower standard deviations of the NM contrast ratio across subjects for all three vendors (sandwichNM vs. MT-prepared GRE; Vendor A: 28.4 ± 1.5% vs. 24.4 ± 2.8%; Vendor B: 27.2 ± 1.0% vs. 13.3 ± 1.3%; Vendor C: 27.3 ± 0.7% vs. 20.1 ± 0.9%). For each subject, the standard deviations of the NM contrast ratio across the vendors were substantially lower for sandwichNM (sandwichNM vs. MT-prepared GRE; subject 1: 1.5% vs. 8.1%, subject 2: 1.1% vs. 5.1%, subject 3: 0.9% vs. 4.0%, subject 4: 1.1% vs. 5.3%), demonstrating consistent contrasts across the vendors. The proposed method relies on product sequences, requiring no sequence modification, and therefore may have wide practical utility in NM imaging.
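The NM contrast ratio reported above can be computed from ROI means; the exact definition is not given in the abstract, so the sketch below assumes the common form (S_NM - S_ref) / S_ref expressed as a percentage, with hypothetical ROI masks.

```python
import numpy as np

def nm_contrast_ratio(image, nm_mask, ref_mask):
    """NM contrast ratio in percent, assumed here as (S_NM - S_ref) / S_ref * 100,
    where S_NM and S_ref are mean intensities of the NM-rich and reference ROIs."""
    s_nm = float(image[nm_mask].mean())
    s_ref = float(image[ref_mask].mean())
    return 100.0 * (s_nm - s_ref) / s_ref

# Hypothetical usage with a 2D slice and two boolean ROI masks:
# cr = nm_contrast_ratio(nm_slice, substantia_nigra_mask, reference_mask)
```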


Subject(s)
Brain , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Brain/diagnostic imaging , Food
4.
J Magn Reson Imaging ; 55(4): 1013-1025, 2022 04.
Article in English | MEDLINE | ID: mdl-33188560

ABSTRACT

Synthetic MRI is a technique that synthesizes contrast-weighted images from multicontrast MRI data. There have been advances in synthetic MRI since the technique was introduced. Although a number of synthetic MRI methods have been developed for quantifying one or more relaxometric parameters and for generating multiple contrast-weighted images, this review focuses on several methods that quantify all three relaxometric parameters (T1, T2, and proton density) and produce multiple contrast-weighted images. Acquisition, quantification, and image synthesis techniques are discussed for each method. We discuss the image quality and diagnostic accuracy of synthetic MRI methods and their clinical applications in neuroradiology. Based on this analysis, we highlight areas that need to be addressed for synthetic MRI to be widely implemented in the clinic. LEVEL OF EVIDENCE: 5. TECHNICAL EFFICACY STAGE: 1.
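To show how contrast weighting is synthesized from quantitative maps, here is a minimal Python sketch using the standard simplified spin-echo signal model S = PD·(1 − e^(−TR/T1))·e^(−TE/T2); the specific signal models used by the reviewed methods may differ.

```python
import numpy as np

def synthesize_spin_echo(pd, t1, t2, tr, te):
    """Synthesize a spin-echo contrast-weighted image from PD, T1, and T2 maps.
    Simplified signal model: S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2)."""
    t1 = np.maximum(t1, 1e-6)  # guard against zeros in the maps
    t2 = np.maximum(t2, 1e-6)
    return pd * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)

# Hypothetical usage (times in ms):
# t1w = synthesize_spin_echo(pd_map, t1_map, t2_map, tr=500.0, te=10.0)
# t2w = synthesize_spin_echo(pd_map, t1_map, t2_map, tr=4000.0, te=100.0)
```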


Subject(s)
Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods
5.
IEEE Trans Med Imaging ; 40(12): 3617-3626, 2021 12.
Article in English | MEDLINE | ID: mdl-34191724

ABSTRACT

Magnetic resonance imaging (MRI) can provide multiple contrast-weighted images using different pulse sequences and protocols. However, a long acquisition time of the images is a major challenge. To address this limitation, a new pulse sequence referred to as quad-contrast imaging is presented. The quad-contrast sequence enables the simultaneous acquisition of four contrast-weighted images (proton density (PD)-weighted, T2-weighted, PD-fluid attenuated inversion recovery (FLAIR), and T2-FLAIR), and the synthesis of T1-weighted images and T1- and T2-maps in a single scan. The scan time is less than 6 min and is further reduced to 2 min 50 s using a deep learning-based parallel imaging reconstruction. The natively acquired quad contrasts demonstrate high-quality images, comparable to those from the conventional scans. The deep learning-based reconstruction successfully reconstructed highly accelerated data (acceleration factor 6), reporting smaller normalized root mean squared errors (NRMSEs) and higher structural similarities (SSIMs) than those from conventional generalized autocalibrating partially parallel acquisitions (GRAPPA) reconstruction (mean NRMSE of 4.36% vs. 10.54% and mean SSIM of 0.990 vs. 0.953). In particular, the FLAIR contrast is natively acquired and does not suffer from lesion-like artifacts at the boundary of tissue and cerebrospinal fluid, differentiating the proposed method from synthetic imaging methods. The quad-contrast imaging method may have the potential to be used in clinical routine as a rapid diagnostic tool.
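The NRMSE and SSIM figures quoted above can be reproduced with a few lines of Python; the sketch below assumes magnitude images as NumPy arrays and normalizes the RMSE by the root-mean-square of the reference, which is one common convention and may differ from the paper's.

```python
import numpy as np
from skimage.metrics import structural_similarity

def nrmse_percent(ref, test):
    """Normalized root-mean-square error (%), normalized by the RMS of the reference."""
    return 100.0 * np.sqrt(np.mean((test - ref) ** 2)) / np.sqrt(np.mean(ref ** 2))

def ssim(ref, test):
    """Structural similarity index between two magnitude images."""
    return structural_similarity(ref, test, data_range=float(ref.max() - ref.min()))
```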


Subject(s)
Image Processing, Computer-Assisted , Protons , Artifacts , Brain/diagnostic imaging , Magnetic Resonance Imaging
6.
Radiology ; 300(3): 661-668, 2021 09.
Article in English | MEDLINE | ID: mdl-34156299

ABSTRACT

Background Evaluation of the glymphatic system with intrathecal contrast material injection has limited clinical use. Purpose To investigate the feasibility of using serial intravenous contrast-enhanced T1 mapping in the quantitative evaluation of putative dynamic glymphatic activity in various brain regions and to demonstrate the effect of sleep on glymphatic activity in humans. Materials and Methods In this prospective study from May 2019 to February 2020, 25 healthy participants (mean age, 25 years ± 2 [standard deviation]; 15 men) underwent two cycles of MRI (day and night cycles). For each cycle, T1 maps were acquired at baseline and 0.5, 1, 1.5, 2, and 12 hours after intravenous contrast material injection. For the night cycle, participants had a normal night of sleep between 2 and 12 hours. The time (tmin) to reach the minimum T1 value (T1min), the absolute difference between baseline T1 and T1min (peak ΔT1), and the slope between two measurements at 2 and 12 hours (slope[2h-12h]) were determined from T1 value-time curves in cerebral gray matter (GM), cerebral white matter (WM), cerebellar GM, cerebellar WM, and putamen. Mixed-model analysis of variance (ANOVA), Friedman test, and repeated-measures ANOVA were used to assess the effect of sleep on slope(2h-12h) and to compare tmin and peak ΔT1 among different regions. Results The slope(2h-12h) increased from the day to night cycles in cerebral GM, cerebellar GM, and putamen (geometric mean ratio [night/day] = 1.4 [95% CI: 1.2, 1.7], 1.3 [95% CI: 1.1, 1.4], and 2.4 [95% CI: 1.6, 3.6], respectively; P = .001, P < .001, and P < .001, respectively). Median tmin values were 0.5 hour in cerebral and cerebellar GM and putamen for both cycles. Cerebellar GM had the highest mean peak ΔT1, followed by cerebral GM and putamen in both day (159 msec ± 6, 99 msec ± 4, and 62 msec ± 5, respectively) and night (152 msec ± 6, 104 msec ± 6, and 58 msec ± 4, respectively) cycles. Conclusion Clearance of a gadolinium-based contrast agent was greater after sleep compared with daytime wakefulness. These results suggest that sleep was associated with greater glymphatic clearance compared with wakefulness. © RSNA, 2021 Online supplemental material is available for this article. See also the editorial by Anzai and Minoshima in this issue.
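The curve-derived metrics above (tmin, peak ΔT1, and slope(2h-12h)) can be extracted with a simple routine like the Python sketch below; it assumes one T1 value per time point in a given region and takes a positive 2h-12h slope to reflect T1 recovery (contrast clearance), which is an interpretation, not the authors' code.

```python
import numpy as np

def t1_curve_metrics(times_h, t1_values):
    """Metrics from a regional T1-time curve.
    times_h: list of scan times in hours (0 = pre-contrast baseline, must include 2 and 12).
    Returns tmin (h), peak deltaT1 (same units as T1), and slope(2h-12h) per hour."""
    t1_values = np.asarray(t1_values, dtype=float)
    baseline = t1_values[0]
    idx_min = int(np.argmin(t1_values))
    t_min = times_h[idx_min]
    peak_delta_t1 = abs(baseline - t1_values[idx_min])
    i2, i12 = times_h.index(2), times_h.index(12)
    slope_2h_12h = (t1_values[i12] - t1_values[i2]) / (times_h[i12] - times_h[i2])
    return t_min, peak_delta_t1, slope_2h_12h

# Hypothetical example (T1 in msec):
# print(t1_curve_metrics([0, 0.5, 1, 1.5, 2, 12], [1200, 1050, 1080, 1100, 1120, 1190]))
```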


Subject(s)
Brain/diagnostic imaging , Glymphatic System/diagnostic imaging , Magnetic Resonance Imaging/methods , Sleep/physiology , Wakefulness/physiology , Adult , Contrast Media , Feasibility Studies , Healthy Volunteers , Humans , Image Enhancement/methods , Male , Prospective Studies
7.
Neuroimage ; 224: 117432, 2021 01 01.
Article in English | MEDLINE | ID: mdl-33038539

ABSTRACT

Respiration-induced B0 fluctuation corrupts MRI images by inducing phase errors in k-space. A few approaches, such as navigator echoes, have been proposed to correct for the artifacts at the expense of sequence modification. In this study, a new deep learning method, referred to as DeepResp, is proposed for reducing respiration artifacts in multi-slice gradient echo (GRE) images. DeepResp is designed to extract the respiration-induced phase errors from a complex image using deep neural networks. Then, the network-generated phase errors are applied to the k-space data, creating an artifact-corrected image. For network training, computer-simulated images were generated using artifact-free images and respiration data. When evaluated, both simulated images and in-vivo images of two different breathing conditions (deep breathing and natural breathing) show improvements (simulation: normalized root-mean-square error (NRMSE) from 7.8 ± 5.2% to 1.3 ± 0.6%; structural similarity (SSIM) from 0.88 ± 0.08 to 0.99 ± 0.01; ghost-to-signal-ratio (GSR) from 7.9 ± 7.2% to 0.6 ± 0.6%; deep breathing: NRMSE from 13.9 ± 4.6% to 5.8 ± 1.4%; SSIM from 0.86 ± 0.03 to 0.95 ± 0.01; GSR from 20.2 ± 10.2% to 5.7 ± 2.3%; natural breathing: NRMSE from 5.2 ± 3.3% to 4.0 ± 2.5%; SSIM from 0.94 ± 0.04 to 0.97 ± 0.02; GSR from 5.7 ± 5.0% to 2.8 ± 1.1%). Our approach does not require any modification of the sequence or additional hardware, and may therefore find useful applications. Furthermore, the deep neural networks extract the respiration-induced phase errors themselves, which is more interpretable and reliable than the output of end-to-end trained networks.
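Once per-line phase errors are available (from a network or any other estimator), applying them to k-space is straightforward; the Python sketch below shows that correction step for a single 2D slice, assuming one phase error per phase-encoding line. The estimation network itself is not shown.

```python
import numpy as np

def correct_kspace_phase(kspace, phase_errors, pe_axis=0):
    """Remove per-phase-encoding-line phase errors from a 2D k-space slice.
    kspace: complex array (n_pe, n_fe); phase_errors: radians, one value per PE line."""
    correction = np.exp(-1j * np.asarray(phase_errors))
    shape = [1, 1]
    shape[pe_axis] = -1
    return kspace * correction.reshape(shape)

def kspace_to_image(kspace):
    """Centered 2D inverse FFT reconstruction."""
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace)))

# Hypothetical usage: phi_hat estimated by a network from the corrupted complex image
# corrected_image = kspace_to_image(correct_kspace_phase(corrupted_kspace, phi_hat))
```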


Subject(s)
Brain/diagnostic imaging , Deep Learning , Image Processing, Computer-Assisted/methods , Respiration , Artifacts , Humans , Magnetic Resonance Imaging , Neural Networks, Computer
8.
IEEE Trans Med Imaging ; 39(12): 4391-4400, 2020 12.
Article in English | MEDLINE | ID: mdl-32833629

ABSTRACT

A novel approach of applying deep reinforcement learning to RF pulse design is introduced. This method, referred to as DeepRFSLR, is designed to minimize the peak amplitude or, equivalently, minimize the pulse duration of a multiband refocusing pulse generated by the Shinnar-Le Roux (SLR) algorithm. In the method, the root pattern of the SLR polynomial, which determines the RF pulse shape, is optimized by iterative applications of deep reinforcement learning and greedy tree search. When tested for the designs of multiband pulses with three and seven slices, DeepRFSLR demonstrated improved performance compared to conventional methods, generating shorter-duration RF pulses in shorter computational time. In the experiments, the RF pulse from DeepRFSLR produced a slice profile similar to that of the minimum-phase SLR RF pulse, and the measured profiles matched those of the computer simulation. Our approach suggests a new way of designing an RF pulse by applying a machine learning algorithm, demonstrating a "machine-designed" MRI sequence.
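To illustrate the root-pattern search space, the Python sketch below flips SLR polynomial roots across the unit circle (which preserves the magnitude profile up to a scale factor) and greedily accepts flips that reduce a crude proxy cost, the peak beta-polynomial coefficient at unit energy. The true DeepRFSLR cost requires the inverse SLR transform, and the method additionally uses deep reinforcement learning; neither is shown, so this is only a sketch of the greedy portion under those simplifying assumptions.

```python
import numpy as np

def peak_coeff_proxy(roots):
    """Peak |coefficient| of the beta polynomial at unit coefficient energy.
    Used here only as a crude stand-in for peak RF amplitude."""
    c = np.poly(roots)
    return np.max(np.abs(c)) / np.linalg.norm(c)

def flip_root(roots, k):
    """Reflect root k across the unit circle (z -> 1/conj(z)); the magnitude
    profile |B(e^{i w})| is unchanged up to a constant scale factor."""
    flipped = np.array(roots, dtype=complex)
    flipped[k] = 1.0 / np.conj(flipped[k])
    return flipped

def greedy_root_flip(roots, max_sweeps=20):
    """Greedy sweeps over single-root flips, accepting any flip that lowers the proxy cost."""
    best = np.array(roots, dtype=complex)
    best_cost = peak_coeff_proxy(best)
    for _ in range(max_sweeps):
        improved = False
        for k in range(len(best)):
            cand = flip_root(best, k)
            cost = peak_coeff_proxy(cand)
            if cost < best_cost:
                best, best_cost, improved = cand, cost, True
        if not improved:
            break
    return best, best_cost
```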


Subject(s)
Algorithms , Radio Waves , Computer Simulation , Heart Rate , Magnetic Resonance Imaging , Phantoms, Imaging
9.
Neuroimage ; 211: 116619, 2020 05 01.
Article in English | MEDLINE | ID: mdl-32044437

ABSTRACT

Recently, deep neural network-powered quantitative susceptibility mapping (QSM), QSMnet, successfully performed ill-conditioned dipole inversion in QSM and generated high-quality susceptibility maps. In this paper, the network, which was trained by healthy volunteer data, is evaluated for hemorrhagic lesions that have substantially higher susceptibility than healthy tissues in order to test "linearity" of QSMnet for susceptibility. The results show that QSMnet underestimates susceptibility in hemorrhagic lesions, revealing degraded linearity of the network for the untrained susceptibility range. To overcome this limitation, a data augmentation method is proposed to generalize the network for a wider range of susceptibility. The newly trained network, which is referred to as QSMnet+, is assessed in computer-simulated lesions with an extended susceptibility range (-1.4 ppm to +1.4 ppm) and also in twelve hemorrhagic patients. The simulation results demonstrate improved linearity of QSMnet+ over QSMnet (root mean square error of QSMnet+: 0.04 ppm vs. QSMnet: 0.36 ppm). When applied to patient data, QSMnet+ maps show less noticeable artifacts than those of conventional QSM maps. Moreover, the susceptibility values of QSMnet+ in hemorrhagic lesions are better matched to those of the conventional QSM method than those of QSMnet when analyzed using linear regression (QSMnet+: slope = 1.05, intercept = -0.03, R2 = 0.93; QSMnet: slope = 0.68, intercept = 0.06, R2 = 0.86), consolidating improved linearity in QSMnet+. This study demonstrates the importance of the trained data range in deep neural network-powered parametric mapping and suggests the data augmentation approach for generalization of the network. The new network can be applicable for a wide range of susceptibility quantification.
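Because the tissue field is a linear (dipole-convolution) function of susceptibility, training pairs can be augmented by simple scaling, which is the essence of the approach described above. The Python sketch below illustrates that idea; the scale factors are hypothetical, not the ones used for QSMnet+.

```python
def augment_susceptibility_pairs(field, chi, scales=(0.5, 1.5, 2.0, 3.0)):
    """Generate extra (field, susceptibility) training pairs by linear scaling.
    The tissue field is a linear (dipole-convolution) function of susceptibility,
    so scaling chi by a factor scales the corresponding field by the same factor.
    The scale factors here are illustrative, not those used for QSMnet+."""
    return [(s * field, s * chi) for s in scales]

# Hypothetical usage with NumPy arrays:
# augmented = augment_susceptibility_pairs(field_map, chi_map)
```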


Subject(s)
Cerebral Hemorrhage/diagnostic imaging , Deep Learning , Image Interpretation, Computer-Assisted/standards , Magnetic Resonance Imaging/standards , Neuroimaging/standards , Adult , Artifacts , Computer Simulation , Humans , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Neuroimaging/methods
10.
Neuroimage ; 188: 835-844, 2019 03.
Article in English | MEDLINE | ID: mdl-30476624

ABSTRACT

Gradient echo myelin water imaging (GRE-MWI) is an MRI technique for measuring myelin concentration that analyzes the signal decay characteristics of multi-echo gradient echo data. The method provides a myelin water fraction as a quantitative biomarker for myelin. In this work, a new sequence and post-processing methods were proposed to generate high-quality GRE-MWI images at 3T and 7T. In order to capture the rapidly decaying myelin water signals, a bipolar readout GRE sequence was designed with "gradient pairing," compensating for the eddy current effects. The flip angle dependency from the multi-compartmental T1 effects was explored and avoided using a 2D multi-slice acquisition with a long TR. Additionally, the sequence was tested for the effects of inflow and magnetization transfer and demonstrated robustness to these error sources. Lastly, the temporal and spatial B0 inhomogeneity effects were mitigated by using the B0 navigator and field inhomogeneity corrections. Using the method, high-quality myelin water images were successfully generated for the in-vivo human brain at both field strengths. When the myelin water fractions at 3T and 7T were compared, they showed a good correlation (R2 ≥ 0.88; p < 0.001) with a larger myelin water fraction at 7T. The proposed method also opens the possibility of high-resolution (isotropic 1.5 mm) myelin water mapping at 7T.
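For readers unfamiliar with how a myelin water fraction is obtained from multi-echo GRE data, the sketch below writes out the widely used three-pool complex signal model (myelin, axonal, and extracellular water, with the extracellular pool as the frequency reference). The model form and parameter names follow the general GRE-MWI literature and are not necessarily identical to the fitting used in this paper.

```python
import numpy as np

def three_pool_gre_signal(t, a_my, t2s_my, df_my, a_ax, t2s_ax, df_ax, a_ex, t2s_ex):
    """Complex three-pool model of the multi-echo GRE signal (myelin, axonal, and
    extracellular water); the extracellular pool is taken as the frequency reference.
    t and T2* in seconds, frequency offsets df in Hz, amplitudes a in arbitrary units."""
    return (a_my * np.exp(-t / t2s_my + 2j * np.pi * df_my * t)
            + a_ax * np.exp(-t / t2s_ax + 2j * np.pi * df_ax * t)
            + a_ex * np.exp(-t / t2s_ex))

def myelin_water_fraction(a_my, a_ax, a_ex):
    """MWF = myelin water amplitude over total water amplitude."""
    return a_my / (a_my + a_ax + a_ex)

# Hypothetical workflow: fit three_pool_gre_signal to the measured complex decay
# (e.g., with scipy.optimize.curve_fit) and report the MWF from the fitted amplitudes.
```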


Subject(s)
Body Water , Brain/diagnostic imaging , Magnetic Resonance Imaging/methods , Myelin Sheath , Neuroimaging/methods , Adult , Humans , Magnetic Resonance Imaging/standards , Neuroimaging/standards , Young Adult
11.
Methods Mol Biol ; 1598: 405-419, 2017.
Article in English | MEDLINE | ID: mdl-28508375

ABSTRACT

In Traumatic Brain Injury (TBI), elevated Intracranial Pressure (ICP) causes severe brain damage due to hemorrhage and swelling. Monitoring ICP plays an important role in the treatment of TBI patients because ICP is considered a strong predictor of neurological outcome and a potentially treatable parameter. However, it is difficult to predict and accurately measure ICP due to the complex nature of patients' clinical conditions. ICP monitoring in severe TBI patients is a challenging problem for clinicians because it is traditionally an invasive procedure in which a device is placed inside the brain to measure pressure; it therefore carries a high infection risk and can cause medical complications. Here, ICP estimation using texture features is proposed to overcome this limitation. A combination of image processing methods and a decision tree algorithm is utilized to estimate the ICP of TBI patients noninvasively. In addition, a visual analytics tool is used to conduct an interactive visual factor analysis and outlier detection.
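The pipeline described above, image texture features feeding a decision tree, could look roughly like the Python sketch below, using gray-level co-occurrence matrix features from scikit-image and scikit-learn's tree classifier; the specific features, thresholds, and tree settings are assumptions, not the authors' implementation.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.tree import DecisionTreeClassifier

def glcm_features(ct_slice_8bit):
    """Gray-level co-occurrence texture features of one CT slice (uint8 input)."""
    glcm = graycomatrix(ct_slice_8bit, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.array([graycoprops(glcm, p).mean() for p in props])

# Hypothetical training: X = stacked feature vectors per scan, y = ICP class labels
# clf = DecisionTreeClassifier(max_depth=4).fit(X_train, y_train)
# predicted_icp_class = clf.predict(X_test)
```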


Subject(s)
Brain Injuries/diagnosis , Brain Injuries/physiopathology , Clinical Decision-Making , Decision Trees , Intracranial Pressure , Algorithms , Brain Injuries/pathology , Factor Analysis, Statistical , Humans , Image Processing, Computer-Assisted , Tomography, X-Ray Computed
12.
IEEE J Biomed Health Inform ; 21(1): 238-245, 2017 01.
Article in English | MEDLINE | ID: mdl-26552098

ABSTRACT

As the microarray data available to scientists continue to grow in size and complexity, it has become increasingly important to find ways to draw oncological inferences from large-scale cancer genomic (LSCG) DNA and mRNA microarray data. Although wavelet preprocessing and classification have been applied to support biological interpretation, no prior work has focused on a cloud-scale distributed parallel (CSDP) separable 1-D wavelet decomposition technique for denoising through differential expression thresholding and classification of LSCG microarray data. This research presents a methodology that uses a CSDP separable 1-D wavelet transform to initialize a threshold that retains significantly expressed genes during denoising, enabling robust classification of cancer patients. The study was implemented entirely within a CSDP environment. Cloud computing and wavelet-based thresholding for denoising were used to classify samples from the Global Cancer Map, the Cancer Cell Line Encyclopedia, and The Cancer Genome Atlas. The results showed that separable 1-D parallel distributed wavelet denoising in the cloud, combined with differential expression thresholding, increased computational performance and produced higher-quality LSCG microarray datasets, leading to more accurate classification results.
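The core denoising step, a separable 1-D wavelet decomposition with thresholding of detail coefficients, is sketched below in Python using PyWavelets. The threshold shown is the standard universal-threshold heuristic; the paper's differential-expression thresholding rule and its cloud-scale distribution are not reproduced here.

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=3, threshold=None):
    """Separable 1-D wavelet denoising: decompose, threshold detail coefficients, reconstruct."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    if threshold is None:
        # Universal threshold from the finest detail band (a common heuristic,
        # not the paper's differential-expression rule).
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        threshold = sigma * np.sqrt(2.0 * np.log(len(signal)))
    denoised = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[:len(signal)]
```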


Subject(s)
Genomics/methods , Neoplasms/genetics , Oligonucleotide Array Sequence Analysis/methods , Signal Processing, Computer-Assisted , Cell Line, Tumor , Cloud Computing , Databases, Genetic , Humans , Neoplasms/metabolism
13.
J Clin Monit Comput ; 27(3): 289-302, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23371800

ABSTRACT

Detection of hypovolemia prior to overt hemodynamic decompensation remains an elusive goal in the treatment of critically injured patients in both civilian and combat settings. Monitoring of heart rate variability has been advocated as a potential means to monitor the rapid changes in the physiological state of hemorrhaging patients, with the most popular methods involving calculation of the R-R interval signal's power spectral density (PSD) or use of fractal dimensions (FD). However, the latter method poses technical challenges, while the former is best suited to stationary signals rather than the non-stationary R-R interval. Both approaches are also limited by high inter- and intra-individual variability, a serious issue when applying these indices in the clinical setting. We propose an approach that applies the discrete wavelet transform (DWT) to the R-R interval signal to extract information at both 500 and 125 Hz sampling rates. The utility of machine learning models based on these features was tested in assessing electrocardiogram signals from volunteers subjected to lower body negative pressure induced central hypovolemia as a surrogate of hemorrhage. These machine learning models based on DWT features were compared against those based on the traditional PSD and FD, at both sampling rates, and their performance was evaluated using leave-one-subject-out cross-validation. Results demonstrate that the proposed DWT-based model outperforms the individual PSD and FD methods as well as their combination at both sampling rates, 500 Hz (p < 0.0001) and 125 Hz (p < 0.0001), in detecting the degree of hypovolemia. These findings indicate the potential of the proposed DWT approach in monitoring the physiological changes caused by hemorrhage. The speed and relatively low computational cost of deriving these features may make the approach particularly suited for implementation in portable devices for remote monitoring.
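A minimal version of this kind of pipeline, DWT band energies as features and subject-wise cross-validation of a classifier, is sketched below in Python with PyWavelets and scikit-learn. The wavelet, decomposition level, feature definition, and classifier are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np
import pywt
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def dwt_energy_features(rr_signal, wavelet="db4", level=5):
    """Per-band energies of the discrete wavelet decomposition of an R-R interval series."""
    coeffs = pywt.wavedec(np.asarray(rr_signal, dtype=float), wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

# Hypothetical evaluation: X (one feature row per recording), y (hypovolemia level),
# groups (subject IDs) so that each fold leaves one subject out entirely.
# clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
# scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
```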


Subject(s)
Heart Rate/physiology , Hypovolemia/physiopathology , Monitoring, Physiologic/statistics & numerical data , Algorithms , Analysis of Variance , Artificial Intelligence , Diagnosis, Computer-Assisted , Electrocardiography/statistics & numerical data , Fractals , Humans , Hypovolemia/diagnosis , Lower Body Negative Pressure , Retrospective Studies , Severity of Illness Index , Wavelet Analysis
14.
Adv Bioinformatics ; : 454671, 2010.
Article in English | MEDLINE | ID: mdl-21197478

ABSTRACT

Understanding mechanisms of protein flexibility is of great importance to structural biology. The ability to detect similarities between proteins and their patterns is vital in discovering new information about unknown protein functions. A Distance Constraint Model (DCM) provides a means to generate a variety of flexibility measures based on a given protein structure. Although information about mechanical properties of flexibility is critical for understanding protein function for a given protein, the question of whether certain characteristics are shared across homologous proteins is difficult to assess. For a proper assessment, a quantified measure of similarity is necessary. This paper begins to explore image processing techniques to quantify similarities in signals and images that characterize protein flexibility. The dataset considered here consists of three different families of proteins, with three proteins in each family. The similarities and differences found within flexibility measures across homologous proteins do not align with sequence-based evolutionary methods.

15.
Article in English | MEDLINE | ID: mdl-19965226

ABSTRACT

Hemorrhagic shock (HS) potentially impacts the chance of survival in most traumatic injuries. Thus, it is highly desirable to maximize the survival rate in cases of blood loss by predicting the occurrence of hemorrhagic shock with biomedical signals. Since analyzing one physiological signal may not be enough to accurately predict blood loss severity, two types of physiological signals - electrocardiography (ECG) and transcranial Doppler (TCD) - are used to determine the degree of severity. In this study, severity is classified both as mild, moderate, or severe and as severe or non-severe. The data for this study were generated using a simulated human model of hemorrhage, lower body negative pressure (LBNP). The analysis is performed by applying the discrete wavelet transform (DWT). The wavelet-based features are defined using the detail and approximation coefficients, and machine learning algorithms are used for classification. The objective of this study is to evaluate the improvement when analyzing ECG and TCD physiological signals together to classify the severity of blood loss. The results of this study show a prediction accuracy of 85.9% achieved by a support vector machine in identifying severe/non-severe states.


Subject(s)
Shock, Hemorrhagic/diagnosis , Signal Processing, Computer-Assisted , Ultrasonography, Doppler, Transcranial/instrumentation , Ultrasonography, Doppler, Transcranial/methods , Algorithms , Artificial Intelligence , Biomedical Engineering/methods , Computer Simulation , Electrocardiography/methods , Humans , Lower Body Negative Pressure/methods , Models, Cardiovascular , Models, Statistical , Neural Networks, Computer , Pattern Recognition, Automated/methods , Reproducibility of Results , Shock, Hemorrhagic/physiopathology
16.
BMC Med Inform Decis Mak ; 9 Suppl 1: S4, 2009 Nov 03.
Article in English | MEDLINE | ID: mdl-19891798

ABSTRACT

BACKGROUND: Accurate analysis of CT brain scans is vital for diagnosis and treatment of Traumatic Brain Injuries (TBI). Automatic processing of these CT brain scans could speed up the decision making process, lower the cost of healthcare, and reduce the chance of human error. In this paper, we focus on automatic processing of CT brain images to segment and identify the ventricular systems. The segmentation of ventricles provides quantitative measures of the changes of ventricles in the brain that form vital diagnostic information. METHODS: First, all CT slices are aligned by detecting the ideal midline in each image. The initial estimate of the ideal midline of the brain is found based on skull symmetry and then further refined using detected anatomical features. A two-step method is then used for ventricle segmentation. First, a low-level per-pixel segmentation is applied to the CT images; for this step, both Iterated Conditional Mode (ICM) and Maximum A Posteriori Spatial Probability (MASP) are evaluated and compared. The second step applies a template matching algorithm to identify objects in the initial low-level segmentation as ventricles. Experiments for ventricle segmentation are conducted using a relatively large CT dataset containing mild and severe TBI cases. RESULTS: Experiments show that the rate of acceptable ideal midline detection is over 95%. Two measures are defined to evaluate ventricle recognition results: a sensitivity-like measure and a false-positive-like measure. The sensitivity-like measure is 100%, indicating that ventricles are identified in all slices; the false-positive-like measure is 8.59%. We also point out the similarities and differences between the ICM and MASP algorithms through both their mathematical relationship and their segmentation results on CT images. CONCLUSION: The experiments show the reliability of the proposed algorithms. The novelty of the proposed method lies in its incorporation of anatomical features for ideal midline detection and the two-step ventricle segmentation method. Our method offers the following improvements over existing approaches: accurate detection of the ideal midline and accurate recognition of ventricles using both anatomical features and spatial templates derived from Magnetic Resonance Images.
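The initial skull-symmetry step can be illustrated with a simple column-wise mirror-overlap search, as in the Python sketch below. The bone threshold, search window, and scoring are assumptions for illustration; the paper's method additionally handles rotation and refines the estimate with anatomical features, which this sketch omits.

```python
import numpy as np

def estimate_midline_column(ct_slice, bone_threshold=300):
    """Initial ideal-midline estimate from skull symmetry (before anatomical refinement).
    Scores each candidate column by the overlap between the thresholded skull mask
    and its mirror image about that column. bone_threshold in HU is an assumption."""
    skull = (ct_slice > bone_threshold).astype(float)
    n_cols = skull.shape[1]
    best_col, best_score = None, -np.inf
    for col in range(n_cols // 4, 3 * n_cols // 4):   # search near the image center
        half = min(col, n_cols - col - 1)
        left = skull[:, col - half:col]
        right = skull[:, col + 1:col + 1 + half][:, ::-1]
        score = float((left * right).sum())
        if score > best_score:
            best_col, best_score = col, score
    return best_col
```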


Subject(s)
Brain/diagnostic imaging , Cerebral Ventriculography/methods , Radiographic Image Interpretation, Computer-Assisted , Tomography, X-Ray Computed/methods , Algorithms , Brain Injuries/diagnostic imaging , Humans
17.
BMC Med Inform Decis Mak ; 9 Suppl 1: S6, 2009 Nov 03.
Article in English | MEDLINE | ID: mdl-19891800

ABSTRACT

BACKGROUND: Functional Magnetic Resonance Imaging (fMRI) has been proven to be useful for studying brain functions. However, due to the existence of noise and distortion, mapping between the fMRI signal and the actual neural activity is difficult. For this reason, differential pattern analysis of fMRI brain images from healthy and diseased cases is regarded as an important research topic. From fMRI scans, regions of increased blood flow can be identified as activated brain regions. Based on the multi-slice images of the volume data, fMRI also provides functional information for detecting and analyzing different parts of the brain. METHODS: In this paper, we evaluate the capability of a hierarchical method from our previous study that performs an optimization algorithm based on a modified maximum correlation model (MCM). The optimization algorithm adopts the MCM to detect active regions that contain significant responses. Specifically, the optimization algorithm is examined on two groups of datasets, dyslexic and healthy subjects, to verify its ability to enhance the quality of signal activity in the regions of interest in the brain. After verifying the algorithm, the discrete wavelet transform (DWT) is applied to identify differences between healthy and dyslexic subjects. RESULTS: We successfully showed that our optimization algorithm improves the fMRI signal activity for both healthy and dyslexic subjects. In addition, we found that DWT-based features can identify differences between healthy and dyslexic subjects. CONCLUSION: The results of this study provide insight into functional abnormalities in dyslexic subjects that may help distinguish them neurobiologically from healthy subjects.


Subject(s)
Algorithms , Brain Mapping , Dyslexia/diagnosis , Magnetic Resonance Imaging/methods , Signal Processing, Computer-Assisted , Computer Simulation , Dyslexia/metabolism , Humans , Models, Theoretical
18.
BMC Med Inform Decis Mak ; 9: 2, 2009 Jan 14.
Article in English | MEDLINE | ID: mdl-19144188

ABSTRACT

BACKGROUND: This paper focuses on the creation of a predictive computer-assisted decision making system for traumatic injury using machine learning algorithms. Trauma experts must make several difficult decisions based on a large number of patient attributes, usually in a short period of time. The aim is to compare the existing machine learning methods available for medical informatics, and develop reliable, rule-based computer-assisted decision-making systems that provide recommendations for the course of treatment for new patients, based on previously seen cases in trauma databases. Datasets of traumatic brain injury (TBI) patients are used to train and test the decision making algorithm. The work is also applicable to patients with traumatic pelvic injuries. METHODS: Decision-making rules are created by processing patterns discovered in the datasets, using machine learning techniques. More specifically, CART and C4.5 are used, as they provide grammatical expressions of knowledge extracted by applying logical operations to the available features. The resulting rule sets are tested against other machine learning methods, including AdaBoost and SVM. The rule creation algorithm is applied to multiple datasets, both with and without prior filtering to discover significant variables. This filtering is performed via logistic regression prior to the rule discovery process. RESULTS: For survival prediction using all variables, CART outperformed the other machine learning methods. When using only significant variables, neural networks performed best. A reliable rule-base was generated using combined C4.5/CART. The average predictive rule performance was 82% when using all variables, and approximately 84% when using significant variables only. The average performance of the combined C4.5 and CART system using significant variables was 89.7% in predicting the exact outcome (home or rehabilitation), and 93.1% in predicting the ICU length of stay for airlifted TBI patients. CONCLUSION: This study creates an efficient computer-aided rule-based system that can be employed in decision making in TBI cases. The rule-bases apply methods that combine CART and C4.5 with logistic regression to improve rule performance and quality. For final outcome prediction for TBI cases, the resulting rule-bases outperform systems that utilize all available variables.
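As a rough illustration of the two-stage design described above, significance filtering followed by rule extraction from a tree, the Python sketch below ranks variables with a logistic regression and prints CART decision rules with scikit-learn. The filtering criterion, tree depth, and the use of CART alone (rather than the paper's combined C4.5/CART rule-base) are simplifying assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier, export_text

def significant_variables(X, y, feature_names, top_k=8):
    """Rank features by |coefficient| of a logistic regression (a simple stand-in
    for the paper's significance filtering) and keep the top_k names."""
    lr = LogisticRegression(max_iter=1000).fit(X, y)
    order = np.argsort(-np.abs(lr.coef_[0]))
    return [feature_names[i] for i in order[:top_k]]

def cart_rules(X, y, feature_names, max_depth=4):
    """Fit a CART tree and return its decision rules as readable text."""
    tree = DecisionTreeClassifier(max_depth=max_depth).fit(X, y)
    return export_text(tree, feature_names=list(feature_names))

# Hypothetical usage: X (patients x variables), y (outcome labels), names (variable names)
# keep = significant_variables(X, y, names)
# print(cart_rules(X[:, [names.index(n) for n in keep]], y, keep))
```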


Subject(s)
Artificial Intelligence , Brain Injuries , Decision Making, Computer-Assisted , Adult , Algorithms , Brain Injuries/diagnosis , Brain Injuries/therapy , Decision Support Systems, Clinical , Female , Humans , Length of Stay , Logistic Models , Male , Middle Aged , Neural Networks, Computer , Survival Analysis , Trauma Severity Indices