Results 1 - 4 of 4
1.
MethodsX ; 11: 102289, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37560402

ABSTRACT

Some statistical analysis techniques may require complete data matrices, but a frequent problem in the construction of databases is the incomplete collection of information for different reasons. One option to tackle the problem is to estimate and impute the missing data. This paper describes a form of imputation that mixes regression with lower rank approximations. To improve the quality of the imputations, a generalisation is proposed that replaces the singular value decomposition (SVD) of the matrix with a regularised SVD in which the regularisation parameter is estimated by cross-validation. To evaluate the performance of the proposal, ten sets of real data from multienvironment trials were used. Missing values were created in each set at four percentages of missing not at random, and three criteria were then considered to investigate the effectiveness of the proposal. The results show that the regularised method proves very competitive when compared to the original method, beating it in several of the considered scenarios. As it is a very general system, its application can be extended to all multivariate data matrices.

•The imputation method is modified through the inclusion of a stable and efficient computational algorithm that replaces the classical SVD least squares criterion with a penalised criterion. This penalty produces smoothed eigenvectors and eigenvalues that avoid overfitting problems, improving the performance of the method when the penalty is necessary. The size of the penalty can be determined by minimising one of the following criteria: the prediction errors, the Procrustes similarity statistic or the critical angles between subspaces of principal components.
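The regularised-SVD imputation idea in this abstract can be sketched as an iterative low-rank fit in which the singular values are penalised (here by soft-thresholding) before reconstructing the missing cells. This is a generic illustration, not the authors' exact regression-plus-lower-rank algorithm, and the function name, rank, and penalty `lam` are illustrative assumptions; in the paper the penalty size is chosen by cross-validation.

```python
import numpy as np

def regularized_svd_impute(X, rank=2, lam=1.0, tol=1e-6, max_iter=200):
    """Iteratively impute NaN cells of X with a low-rank SVD fit whose
    singular values are soft-thresholded (penalised) by lam."""
    X = np.asarray(X, dtype=float)
    mask = np.isnan(X)
    # start from column means, a common initialisation
    Xf = np.where(mask, np.nanmean(X, axis=0, keepdims=True), X)
    for _ in range(max_iter):
        U, s, Vt = np.linalg.svd(Xf, full_matrices=False)
        s_pen = np.maximum(s - lam, 0.0)[:rank]      # penalised singular values
        X_hat = (U[:, :rank] * s_pen) @ Vt[:rank, :]
        new = np.where(mask, X_hat, X)               # observed cells are never changed
        if np.linalg.norm(new - Xf) < tol * max(1.0, np.linalg.norm(Xf)):
            return new
        Xf = new
    return Xf
```

In practice one would run this over a grid of `lam` values and keep the one minimising cross-validated prediction error, mirroring the paper's selection criteria.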

2.
MethodsX ; 9: 101683, 2022.
Article in English | MEDLINE | ID: mdl-35478595

ABSTRACT

This paper describes strategies to reduce the possible effect of outliers on the quality of imputations produced by a method that uses a mixture of two least squares techniques: regression and lower rank approximation of a matrix. To avoid the influence of discrepant data and maintain the computational speed of the original scheme, pre-processing options were explored before applying the imputation method. The first proposal is to first apply a robust singular value decomposition; the second is to detect outliers and then treat the potential outliers as missing. To evaluate the proposed methods, a cross-validation study was carried out on ten complete matrices of real data from multi-environment trials. The imputations were compared with the original data using three statistics: a measure of goodness of fit, the squared cosine between matrices and the prediction error. The results show that the original method should be replaced by one of the options presented here because outliers can cause low quality imputations or convergence problems.

•The imputation algorithm based on Gabriel's cross-validation method uses two least squares techniques that can be affected by the presence of outliers. The inclusion of a robust singular value decomposition makes it possible both to robustify the procedure and to detect outliers so they can later be treated as missing. These forms of pre-processing ensure that the algorithm performs well on any dataset that has a matrix form with suspected contamination.
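The second pre-processing option, detecting outliers and recoding them as missing, can be sketched with a plain low-rank fit and a robust residual rule. Note this uses an ordinary SVD and a median/MAD threshold rather than the robust SVD the paper employs; the function name and the threshold `z` are illustrative assumptions.

```python
import numpy as np

def flag_outliers_as_missing(X, rank=2, z=3.5):
    """Fit a rank-`rank` SVD approximation to X and recode cells whose
    residual has a large robust z-score as missing (NaN)."""
    X = np.asarray(X, dtype=float)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X_hat = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
    R = X - X_hat                                   # cell-wise residuals
    med = np.median(R)
    mad = np.median(np.abs(R - med)) or 1e-12       # robust scale (guard against 0)
    robust_z = 0.6745 * (R - med) / mad
    out = X.copy()
    out[np.abs(robust_z) > z] = np.nan              # suspect cells become missing
    return out
```

The resulting matrix, with suspect cells set to NaN, would then be passed to the imputation method as usual.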

3.
IEEE Trans Inf Technol Biomed ; 11(3): 312-9, 2007 May.
Article in English | MEDLINE | ID: mdl-17521081

ABSTRACT

Bayesian averaging (BA) over ensembles of decision models allows evaluation of the uncertainty of decisions that is of crucial importance for safety-critical applications such as medical diagnostics. The interpretability of the ensemble can also give useful information for experts responsible for making reliable decisions. For this reason, decision trees (DTs) are attractive decision models for experts. However, BA over such models makes an ensemble of DTs uninterpretable. In this paper, we present a new approach to probabilistic interpretation of Bayesian DT ensembles. This approach is based on the quantitative evaluation of uncertainty of the DTs, and allows experts to find a DT that provides a high predictive accuracy and confident outcomes. To make the BA over DTs feasible in our experiments, we use a Markov Chain Monte Carlo technique with a reversible jump extension. The results obtained from clinical data show that in terms of predictive accuracy, the proposed method outperforms the maximum a posteriori (MAP) method that has been suggested for interpretation of DT ensembles.
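The core idea, averaging an ensemble's predictive distributions and then finding a single accurate, consensus-like member for interpretation, can be illustrated without the reversible-jump MCMC machinery. The sketch below assumes equal posterior weights and a simple accuracy-minus-distance selection score; both are stand-in assumptions, not the paper's criterion, and the function name is illustrative.

```python
import numpy as np

def select_representative_member(ensemble_probs, y):
    """Average class-probability predictions from an ensemble of models
    (shape: n_models x n_samples x n_classes), then pick the member that
    is accurate and closest to the ensemble-mean predictive."""
    P = np.asarray(ensemble_probs, dtype=float)
    mean = P.mean(axis=0)                           # model-averaged predictive
    acc = (P.argmax(axis=2) == y).mean(axis=1)      # per-member accuracy
    dist = np.sqrt(((P - mean) ** 2).sum(axis=2)).mean(axis=1)  # distance to consensus
    best = int(np.argmax(acc - dist))               # favour accurate, consensus-like members
    entropy = -(mean * np.log(mean + 1e-12)).sum(axis=1)  # predictive uncertainty
    return best, mean, entropy
```

In the paper's setting the ensemble members would be decision trees sampled by reversible-jump MCMC, and the predictive entropy quantifies the decision uncertainty that matters in safety-critical use.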


Subject(s)
Algorithms , Artificial Intelligence , Bayes Theorem , Decision Support Systems, Clinical , Decision Support Techniques , Diagnosis, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Monte Carlo Method
4.
Article in English | MEDLINE | ID: mdl-17044169

ABSTRACT

Microarrays have become a standard tool for investigating gene function and more complex microarray experiments are increasingly being conducted. For example, an experiment may involve samples from several groups or may investigate changes in gene expression over time for several subjects, leading to large three-way data sets. In response to this increase in data complexity, we propose some extensions to the plaid model, a biclustering method developed for the analysis of gene expression data. This model-based method lends itself to the incorporation of any additional structure such as external grouping or repeated measures. We describe how the extended models may be fitted and illustrate their use on real data.
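The plaid model's basic building block is a layer whose effect on a bicluster is additive, theta_ij = mu + alpha_i + beta_j over the member rows and columns. A minimal sketch of fitting one layer's effect given fixed memberships (the full algorithm also iterates the memberships themselves, and the extensions in this paper add grouping and repeated-measures structure):

```python
import numpy as np

def plaid_one_layer(Y, rows, cols):
    """Estimate a single plaid-layer effect theta = mu + alpha_i + beta_j
    on the bicluster Y[rows, cols], given fixed row/column memberships."""
    sub = Y[np.ix_(rows, cols)]
    mu = sub.mean()                      # overall layer level
    alpha = sub.mean(axis=1) - mu        # row effects (sum to ~0)
    beta = sub.mean(axis=0) - mu         # column effects (sum to ~0)
    theta = mu + alpha[:, None] + beta[None, :]
    return mu, alpha, beta, theta
```

Subtracting `theta` from the member cells gives the residual matrix on which the next layer is sought, which is what makes the layers "plaid" (overlapping) rather than disjoint clusters.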


Subject(s)
Cluster Analysis , Models, Genetic , Oligonucleotide Array Sequence Analysis/methods , Algorithms , Computer Simulation , Databases, Genetic , Gene Expression Profiling , Humans , Tuberculosis, Meningeal/genetics , Tuberculosis, Pulmonary/complications , Tuberculosis, Pulmonary/genetics