1.
Biomed Eng Online ; 22(1): 109, 2023 Nov 22.
Article in English | MEDLINE | ID: mdl-37993868

ABSTRACT

BACKGROUND: The Gross Motor Function Classification System (GMFCS) is a widely used tool for assessing the mobility of people with cerebral palsy (CP). It classifies patients into levels based on their gross motor function, and the level is typically determined through visual evaluation by a trained expert. Although gait analysis is commonly used in CP research, the functional aspects of gait patterns have yet to be fully exploited. By using gait patterns to predict GMFCS levels, we can gain a more comprehensive understanding of how CP affects mobility and develop more effective interventions for CP patients. RESULTS: In this study, we propose a multivariate functional classification method to examine the relationship between kinematic gait measures and GMFCS levels in both normal individuals and CP patients with varying GMFCS levels. A sparse linear functional discrimination framework is utilized to achieve an interpretable prediction model. The method is generalized to handle multivariate functional data and multi-class classification. Our method offers competitive or improved prediction accuracy compared with state-of-the-art functional classification approaches and provides interpretable discriminant functions that characterize the kinesiological progression of gait at higher GMFCS levels. CONCLUSION: We generalize the sparse functional linear discrimination framework to achieve interpretable classification of GMFCS levels from kinematic gait measures. The findings of this research will aid clinicians in diagnosing CP and assigning GMFCS levels in a more consistent, systematic, and scientifically supported manner.
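As a toy illustration of the linear functional discrimination idea above (not the authors' sparse estimator), a discriminant score can be formed as the integral of a weight function against each curve; the grid, the synthetic "gait" curves, and the mean-difference weight function below are all assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 101)        # gait cycle (0-100%), hypothetical grid
dt = t[1] - t[0]
n = 40
# synthetic kinematic curves for two hypothetical classes
class0 = np.sin(2 * np.pi * t) + 0.3 * rng.standard_normal((n, t.size))
class1 = np.sin(2 * np.pi * t + 0.5) + 0.3 * rng.standard_normal((n, t.size))

# crude discriminant function: difference of class mean curves
# (the paper instead estimates a sparse, interpretable beta)
beta = class1.mean(axis=0) - class0.mean(axis=0)

def score(x):
    # s = integral of beta(t) * x(t) dt, approximated by a Riemann sum
    return float(np.sum(beta * x) * dt)

m0 = np.mean([score(x) for x in class0])
m1 = np.mean([score(x) for x in class1])

def predict(x):
    s = score(x)
    return int(abs(s - m1) < abs(s - m0))   # nearer class-mean score wins

acc = np.mean([predict(x) == 0 for x in class0] +
              [predict(x) == 1 for x in class1])
```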


Subject(s)
Cerebral Palsy , Gait Analysis , Humans , Gait
2.
J Appl Stat ; 50(3): 675-690, 2023.
Article in English | MEDLINE | ID: mdl-36819077

ABSTRACT

Large volumes of data and advanced technologies have produced new types of complex data, such as histogram-valued data. This paper focuses on classification problems in which predictors are observed as, or aggregated into, histograms. Because conventional classification methods take vectors as input, a natural approach converts histograms into vector-valued data using summary values such as the mean or median. However, this approach forgoes the distributional information available in histograms. To address this issue, we propose a margin-based classifier called the support histogram machine (SHM) for histogram-valued data. We adopt the support vector machine framework and the Wasserstein-Kantorovich metric to measure distances between histograms. The proposed optimization problem is solved by a dual approach. We test the proposed SHM on simulated and real examples and demonstrate its superior performance over summary-value-based methods.
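For one-dimensional histograms on a common grid of equal-width bins, the 1-Wasserstein distance has a simple closed form: the area between the two cumulative distributions. A minimal sketch of that distance computation only (the SHM classifier itself is not reproduced here):

```python
import numpy as np

def wasserstein_hist(p, q, bin_width=1.0):
    """1-Wasserstein distance between two histograms on a shared grid.

    p and q are bin-probability vectors summing to 1; the distance is
    the integrated absolute difference of their CDFs.
    """
    p, q = np.asarray(p, float), np.asarray(q, float)
    return bin_width * np.sum(np.abs(np.cumsum(p) - np.cumsum(q)))

# shifting all mass by one bin costs exactly one bin width
d = wasserstein_hist([1, 0, 0, 0], [0, 1, 0, 0])
```

In a margin-based classifier, a distance like this would stand in for the Euclidean metric when measuring separation between histogram-valued observations.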

3.
Sensors (Basel) ; 22(8)2022 Apr 08.
Article in English | MEDLINE | ID: mdl-35458861

ABSTRACT

Moving object detection and tracking are technologies applied across a wide range of research fields, including traffic monitoring and the recognition of workers operating around heavy equipment. However, conventional moving object detection methods face problems such as long computation times, image noise, and the disappearance of targets behind obstacles. In this paper, we introduce a new moving object detection and tracking algorithm based on sparse optical flow that reduces computation time, removes noise, and estimates targets efficiently. The algorithm maintains a variety of corner features, periodically refreshing them, and a moving window detector is proposed to determine the feature points for tracking based on each point's location history. Detection performance is greatly improved through the moving window detector and continuous target estimation. A memory-based estimator recalls the locations of corner features for a period of time, which makes it possible to track targets obscured by obstacles. The approach was applied to real environments with various illumination conditions (indoor and outdoor) and varying numbers of moving objects and obstacles, and its performance was evaluated on an embedded board (Raspberry Pi 4). The experimental results show that the proposed method maintains a high frame rate (FPS) and improves accuracy compared with conventional optical flow methods and vision approaches such as Haar-like and HOG detectors.
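The memory idea can be sketched without any claim about the paper's actual implementation: each track keeps a short history of recent locations, so a target hidden behind an obstacle can be re-associated when a feature point reappears near its last known position. The class name, thresholds, and nearest-neighbour association rule below are all illustrative assumptions:

```python
from collections import deque

class MemoryTracker:
    """Toy memory-based tracker over 2-D feature-point detections."""

    def __init__(self, memory=10, max_dist=25.0):
        self.memory = memory        # frames a track survives unseen
        self.max_dist = max_dist    # association gate in pixels
        self.tracks = {}            # id -> deque of recent (x, y)
        self.missing = {}           # id -> frames since last seen
        self.next_id = 0

    def update(self, detections):
        assigned = {}               # detection (x, y) -> track id
        for (x, y) in detections:
            best, best_d = None, self.max_dist
            for tid, hist in self.tracks.items():
                if tid in assigned.values():
                    continue        # each track matched at most once
                hx, hy = hist[-1]
                d = ((x - hx) ** 2 + (y - hy) ** 2) ** 0.5
                if d < best_d:
                    best, best_d = tid, d
            if best is None:        # no track nearby: start a new one
                best = self.next_id
                self.next_id += 1
                self.tracks[best] = deque(maxlen=self.memory)
            self.tracks[best].append((x, y))
            self.missing[best] = 0
            assigned[(x, y)] = best
        # age unseen tracks; forget those missing longer than the memory
        for tid in list(self.tracks):
            if tid not in assigned.values():
                self.missing[tid] = self.missing.get(tid, 0) + 1
                if self.missing[tid] > self.memory:
                    del self.tracks[tid], self.missing[tid]
        return assigned
```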

5.
Hum Brain Mapp ; 40(1): 65-79, 2019 01.
Article in English | MEDLINE | ID: mdl-30184306

ABSTRACT

Combining statistical parametric maps (SPMs) from individual subjects is the goal in some types of group-level analyses of functional magnetic resonance imaging data. Brain maps are usually combined using a simple average across subjects, making them susceptible to subjects with outlying values. Furthermore, t tests are prone to false positives and false negatives when outlying values are present. We propose a regularized unsupervised aggregation method for SPMs that finds optimal aggregation weights, which aids in detecting and mitigating the effect of outlying subjects. We also present a bootstrap-based weighted t test using the optimal weights to construct an activation map robust to outlying subjects. We validate the proposed aggregation method and the weighted t test on simulated and real data examples. Results show that the regularized aggregation approach can effectively detect outlying subjects, lower their weights, and produce robust SPMs.
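One way to picture down-weighting outlying subjects, as a simplified stand-in for (not a reproduction of) the regularized estimator and bootstrap test above: weight each subject's map by its agreement with the voxel-wise median map before averaging. The weighting scheme and the `tau` parameter are assumptions of this sketch:

```python
import numpy as np

def robust_aggregate(maps, tau=1.0):
    """Down-weight subjects far from the voxel-wise median map."""
    maps = np.asarray(maps, float)              # (subjects, voxels)
    med = np.median(maps, axis=0)
    dist = np.linalg.norm(maps - med, axis=1)   # per-subject deviation
    w = np.exp(-tau * dist / np.median(dist))   # outliers get tiny weight
    w /= w.sum()
    return w, w @ maps                          # weights, aggregated map

rng = np.random.default_rng(1)
maps = rng.standard_normal((9, 100)) * 0.1 + 1.0   # 9 typical subjects
maps = np.vstack([maps, 10.0 * np.ones(100)])      # 1 outlying subject
w, agg = robust_aggregate(maps)
```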


Subject(s)
Brain Mapping/methods , Brain/diagnostic imaging , Brain/physiology , Data Interpretation, Statistical , Image Processing, Computer-Assisted/methods , Unsupervised Machine Learning , Brain Mapping/standards , Humans , Image Processing, Computer-Assisted/standards , Magnetic Resonance Imaging
6.
Biometrics ; 75(2): 603-612, 2019 06.
Article in English | MEDLINE | ID: mdl-30430541

ABSTRACT

In recent years, there has been increased interest in symbolic data analysis, including exploratory analysis, supervised and unsupervised learning, and time series analysis. Traditional statistical approaches designed for single-valued data are not suitable because they cannot incorporate the additional structural information available in symbolic data, so new techniques have been proposed to bridge this gap. In this article, we develop a regularized convex clustering approach for grouping histogram-valued data. Convex clustering is a relaxation of hierarchical clustering in which prototypes within a group are fused to exactly the same value through penalization of the parameters. We apply two different distance metrics to measure the (dis)similarity between histograms. Various numerical examples confirm that the proposed method outperforms its competitors.
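A minimal sketch of the convex clustering objective, using a smoothed (squared-norm) fusion penalty so that plain gradient descent suffices; the method in the abstract uses an unsquared norm, which produces exact fusions, and histogram distances rather than the Euclidean ones assumed here:

```python
import numpy as np

def convex_cluster(X, lam=0.5, steps=500, lr=0.01):
    """Minimize F(U) = 0.5*sum_i ||x_i - u_i||^2
                       + (lam/2)*sum_{i<j} ||u_i - u_j||^2
    by gradient descent. Squared fusion only shrinks prototypes together
    (the unsquared norm of the actual method fuses them exactly)."""
    n = X.shape[0]
    U = X.copy()
    for _ in range(steps):
        # gradient of the fusion term w.r.t. u_i is lam*(n*u_i - sum_j u_j)
        grad = (U - X) + lam * (n * U - U.sum(axis=0))
        U -= lr * grad
    return U

X = np.array([[0.0, 0.0], [0.2, 0.0], [5.0, 5.0], [5.2, 5.0]])
U_small = convex_cluster(X, lam=0.01)   # prototypes stay near the points
U_big = convex_cluster(X, lam=10.0)     # prototypes pulled together
```

Increasing `lam` traces out a clustering path from one prototype per point toward a single shared prototype, which is what makes the formulation a convex relaxation of hierarchical clustering.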


Subject(s)
Cluster Analysis , Computer Graphics , Data Interpretation, Statistical , Artificial Intelligence
7.
BMC Genet ; 18(1): 93, 2017 11 06.
Article in English | MEDLINE | ID: mdl-29110633

ABSTRACT

BACKGROUND: Undirected graphical models, or Markov random fields, are a popular class of models for representing conditional dependence relationships between nodes. In particular, Markov networks help us understand complex interactions between genes in the biological processes of a cell. Local Poisson models are promising for modeling both positive and negative dependencies in count data. Furthermore, when zero counts are more frequent than expected, excess zeros should be accounted for in the model. METHODS: We present a penalized Poisson graphical model for zero-inflated count data and derive an expectation-maximization (EM) algorithm built on coordinate descent. Our method is shown to be effective through simulated and real data analyses. RESULTS: Results on the simulated data indicate that our method outperforms the local Poisson graphical model in the presence of excess zeros. In an application to RNA sequencing data, we also investigate the gender effect by comparing the networks estimated for each gender. Our method may help identify biological pathways linked to sex hormone regulation and thus improve understanding of the mechanisms underlying the gender differences. CONCLUSIONS: We have presented a penalized version of zero-inflated spatial Poisson regression and derived an efficient EM algorithm built on coordinate descent. We discuss possible improvements of our method as well as potential research directions arising from our findings on the RNA sequencing data.
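The single-node building block of a zero-inflated Poisson model can be fit with a short EM loop: each count is zero with probability pi (a structural zero) or drawn from Poisson(lam) otherwise. The sketch below covers only this univariate case with synthetic data; the paper's penalized graphical model over many genes is not reproduced:

```python
import numpy as np

def zip_em(counts, iters=200):
    """EM for a univariate zero-inflated Poisson; returns (pi, lam)."""
    counts = np.asarray(counts, float)
    n = counts.size
    pi, lam = 0.5, counts.mean() + 1.0          # crude initial values
    for _ in range(iters):
        # E-step: posterior probability that an observed zero is structural
        p0 = pi / (pi + (1 - pi) * np.exp(-lam))
        z = np.where(counts == 0, p0, 0.0)
        # M-step: update mixing weight and Poisson rate
        pi = z.mean()
        lam = counts.sum() / (n - z.sum())
    return pi, lam

rng = np.random.default_rng(0)
n = 2000
structural = rng.random(n) < 0.3                # 30% structural zeros
data = np.where(structural, 0, rng.poisson(4.0, n))
pi_hat, lam_hat = zip_em(data)
```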


Subject(s)
Algorithms , Gene Expression Profiling/methods , Gene Regulatory Networks , High-Throughput Nucleotide Sequencing/methods , Models, Statistical , Sequence Analysis, RNA/methods , Computer Simulation , Female , Humans , Male , Poisson Distribution
8.
Genetics ; 207(3): 1147-1155, 2017 11.
Article in English | MEDLINE | ID: mdl-28899997

ABSTRACT

Despite the many successes of genome-wide association studies (GWAS), the known susceptibility variants identified by GWAS have modest effect sizes, leading to notable skepticism about the effectiveness of building risk prediction models from large-scale genetic data. In contrast to genetic variants, however, family history of disease has been widely accepted as an important risk factor in clinical diagnosis and risk prediction. Nevertheless, the complicated structure of family disease history has limited its application in clinical practice. Here, we developed a new method that incorporates general family history of disease through a liability threshold model, and we propose a new analysis strategy for risk prediction with penalized regression that incorporates both large numbers of genetic variants and clinical risk factors. Application of our model to type 2 diabetes in the Korean population (1846 cases and 1846 controls) demonstrated that single-nucleotide polymorphisms accounted for 32.5% of the variation explained by the predicted risk scores in the test data set, and that incorporating family history led to an additional 6.3% improvement in prediction. Our results illustrate that family medical history provides valuable information on the variation of complex diseases and improves prediction performance.
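A hedged sketch of the overall strategy, penalized regression over SNP dosages plus a clinical covariate, here reduced to ridge-penalized logistic regression on synthetic data. The liability threshold encoding of general family history is not reproduced; the simple 0/1 family-history indicator, the effect sizes, and all sample sizes below are assumptions:

```python
import numpy as np

def ridge_logistic(X, y, lam=1.0, lr=0.1, steps=2000):
    """Ridge-penalized logistic regression by gradient descent."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(steps):
        pr = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (pr - y) / n + lam * w / n
        w -= lr * grad
    return w

rng = np.random.default_rng(2)
n, p = 400, 50
X = rng.binomial(2, 0.3, size=(n, p)).astype(float)   # SNP dosages 0/1/2
fh = rng.binomial(1, 0.2, size=n).astype(float)       # family-history flag
# synthetic liability: 5 causal SNPs + family history + noise
liab = X[:, :5].sum(axis=1) * 0.4 + 2.0 * fh + rng.standard_normal(n)
y = (liab > np.median(liab)).astype(float)

Xf = np.column_stack([X, fh])                          # SNPs + clinical term
w = ridge_logistic(Xf, y)
acc = np.mean((1.0 / (1.0 + np.exp(-Xf @ w)) > 0.5) == y)
```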


Subject(s)
Genetic Predisposition to Disease , Genome-Wide Association Study/methods , Medical History Taking/methods , Models, Genetic , Pedigree , Diabetes Mellitus, Type 2/genetics , Genetic Variation , Genome-Wide Association Study/standards , Humans , Medical History Taking/standards
9.
Biomed Res Int ; 2015: 605891, 2015.
Article in English | MEDLINE | ID: mdl-26346893

ABSTRACT

Owing to recent improvements in genotyping technology, large-scale genetic data can be used to identify disease susceptibility loci, and such findings have substantially improved our understanding of complex diseases. In spite of these successes, however, most genetic effects for complex diseases have been found to be very small, which has been a major hurdle in building disease prediction models. Recently, many statistical methods based on penalized regression have been proposed to tackle the so-called "large P, small N" problem. Penalized regressions, including the least absolute shrinkage and selection operator (LASSO) and ridge regression, constrain the parameter space, and this constraint enables the estimation of effects for a very large number of SNPs. Various extensions have been suggested, and in this report we compare their accuracy by applying them to several complex diseases. Our results show that penalized regressions are usually robust and provide better accuracy than existing methods, at least for the diseases under consideration.
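LASSO is commonly fit by cyclic coordinate descent with soft-thresholding; a self-contained sketch on a synthetic "large P, small N" problem (illustrative only, not tied to any particular GWAS software):

```python
import numpy as np

def soft(a, b):
    """Soft-thresholding operator S(a, b) = sign(a) * max(|a| - b, 0)."""
    return np.sign(a) * max(abs(a) - b, 0.0)

def lasso_cd(X, y, lam, sweeps=100):
    """Minimize (1/(2n))||y - Xb||^2 + lam*||b||_1 by coordinate descent."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    r = y - X @ beta
    for _ in range(sweeps):
        for j in range(p):
            r += X[:, j] * beta[j]               # add back j's contribution
            rho = X[:, j] @ r
            beta[j] = soft(rho, n * lam) / col_sq[j]
            r -= X[:, j] * beta[j]               # restore residual
    return beta

rng = np.random.default_rng(3)
n, p = 100, 200                                   # more predictors than samples
X = rng.standard_normal((n, p))
true = np.zeros(p)
true[:3] = [2.0, -1.5, 1.0]                       # only 3 true signals
y = X @ true + 0.1 * rng.standard_normal(n)
beta = lasso_cd(X, y, lam=0.1)
```

The L1 penalty sets most coefficients exactly to zero, which is what makes the estimate usable when P far exceeds N.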


Subject(s)
Genetic Predisposition to Disease , Models, Genetic , Polymorphism, Single Nucleotide , Animals , Humans , Predictive Value of Tests
10.
Med Phys ; 38(12): 6449-57, 2011 Dec.
Article in English | MEDLINE | ID: mdl-22149828

ABSTRACT

PURPOSE: To propose multiple logistic regression (MLR) and artificial neural network (ANN) models constructed using Digital Imaging and Communications in Medicine (DICOM) header information for predicting the fidelity of Joint Photographic Experts Group (JPEG) 2000 compressed abdomen computed tomography (CT) images. METHODS: Our institutional review board approved this study and waived informed patient consent. Using a JPEG2000 algorithm, 360 abdomen CT images were compressed reversibly (n = 48, as negative controls) or irreversibly (n = 312) at compression ratios (CRs) ranging from 4:1 to 10:1. Five radiologists independently determined whether the original and compressed images were distinguishable or indistinguishable. The 312 irreversibly compressed images were divided randomly into training (n = 156) and testing (n = 156) sets. The MLR and ANN models were constructed with the DICOM header information as independent variables and the pooled radiologists' responses as the dependent variable. As independent variables, we selected the CR (DICOM tag: 0028, 2112), effective tube current-time product (0018, 9332), section thickness (0018, 0050), and field of view (0018, 0090). Using the training set, an optimal subset of independent variables was determined by backward stepwise selection in a four-fold cross-validation scheme, and the MLR and ANN models were constructed with the selected variables. The models were then evaluated on the testing set using receiver-operating-characteristic (ROC) analysis, with the radiologists' pooled responses as the reference standard, and by measuring the Spearman rank correlation between the model prediction and the number of radiologists who rated the two images as distinguishable. RESULTS: The CR and section thickness were determined to be the optimal independent variables. The areas under the ROC curve for the MLR and ANN predictions were 0.91 (95% CI: 0.86, 0.95) and 0.92 (0.87, 0.96), respectively. The correlation coefficients of the MLR and ANN predictions with the number of radiologists who responded "distinguishable" were 0.76 (0.69, 0.82; p < 0.001) and 0.78 (0.71, 0.83; p < 0.001), respectively. CONCLUSIONS: MLR and ANN models constructed from DICOM header information show promise for predicting the fidelity of JPEG2000 compressed abdomen CT images.
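The ROC analysis used above rests on the equivalence between the area under the ROC curve and the Mann-Whitney U statistic: the AUC is the fraction of (positive, negative) pairs ranked correctly by the score. A generic sketch of that computation (the study's data are not reproduced):

```python
import numpy as np

def auc(scores, labels):
    """AUC via the Mann-Whitney U statistic; ties count as half a win."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    wins = ((pos[:, None] > neg[None, :]).sum()
            + 0.5 * (pos[:, None] == neg[None, :]).sum())
    return wins / (pos.size * neg.size)

a = auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0])   # perfectly separated scores
```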


Subject(s)
Algorithms , Data Compression/methods , Information Storage and Retrieval/methods , Radiographic Image Enhancement/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Radiography, Abdominal/methods , Tomography, X-Ray Computed/methods , Neural Networks, Computer , Reproducibility of Results , Sensitivity and Specificity