Results 1 - 2 of 2
1.
Int J Neural Syst; 20(3): 177-92, 2010 Jun.
Article in English | MEDLINE | ID: mdl-20556846

ABSTRACT

We consider principal curves and surfaces in the context of multivariate regression modelling. For predictor spaces featuring complex dependency patterns between the involved variables, the intrinsic dimensionality of the data tends to be very small due to the high redundancy induced by the dependencies. In situations of this type, it is useful to approximate the high-dimensional predictor space through a low-dimensional manifold (i.e., a curve or a surface), and to use the projections onto the manifold as compressed predictors in the regression problem. In the case that the intrinsic dimensionality of the predictor space equals one, we use the local principal curve algorithm for the compression step. We provide a novel algorithm which extends this idea to local principal surfaces, thus covering the case of intrinsic dimensionality two; the approach is in principle extensible to manifolds of arbitrary dimension. We motivate and apply the novel techniques using astrophysical and oceanographic data examples.
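The compress-then-regress idea in the abstract can be sketched in a few lines. The sketch below substitutes a global first principal component for the paper's local principal curve algorithm (global PCA is only a linear stand-in; the paper's method handles curved manifolds), and the synthetic data are hypothetical, not the astrophysical or oceanographic examples used by the authors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic predictors lying near a 1-D curve embedded in 3-D space
# (hypothetical data for illustration only).
t = rng.uniform(0, 1, 200)                       # intrinsic parameter
X = np.column_stack([t, t**2, 0.2 * np.sin(2 * np.pi * t)])
X += rng.normal(scale=0.02, size=X.shape)        # small off-manifold noise
y = 3.0 * t + rng.normal(scale=0.1, size=t.shape)  # response driven by t

# Compression step, simplified: project onto the leading principal component.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
s = Xc @ Vt[0]            # 1-D compressed predictor (projection index)

# Regression on the compressed predictor instead of the full 3-D space.
beta = np.polyfit(s, y, deg=1)
y_hat = np.polyval(beta, s)
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
```

Because the three predictor coordinates are all driven by the single parameter `t`, the one-dimensional projection retains most of the information the regression needs, which is the redundancy argument the abstract makes.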


Subjects
Algorithms, Astronomical Phenomena, Computer Simulation, Models, Theoretical, Regression Analysis
2.
Bioinformatics; 24(14): 1632-8, 2008 Jul 15.
Article in English | MEDLINE | ID: mdl-18515276

ABSTRACT

Sparse kernel methods like support vector machines (SVM) have been applied with great success to classification and (standard) regression settings. Existing support vector classification and regression techniques, however, are not suitable for partly censored survival data, which are typically analysed using Cox's proportional hazards model. As the partial likelihood of the proportional hazards model depends on the covariates only through inner products, it can be 'kernelized'. The kernelized proportional hazards model, however, yields a dense solution, i.e. one that depends on all observations. One of the key features of an SVM is that it yields a sparse solution, depending only on a small fraction of the training data. We propose two methods. One is based on a geometric idea, where, akin to support vector classification, the margin between the failed observation and the observations currently at risk is maximised. The other approach obtains a sparse model by adding observations one after another, akin to the Import Vector Machine (IVM). Data examples studied suggest that both methods can outperform competing approaches. AVAILABILITY: Software is available under the GNU Public License as an R package and can be obtained from the first author's website http://www.maths.bris.ac.uk/~maxle/software.html.
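The kernelization step the abstract describes can be made concrete: the Cox partial likelihood touches the covariates only through a linear predictor, so replacing it with a kernel expansion f = K·alpha turns the model into a kernel method. The sketch below is only that step; the names (`rbf_kernel`, `alpha`, `times`, `events`) are illustrative, and the authors' sparse SVM- and IVM-style fitting procedures are not reproduced.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def neg_log_partial_likelihood(alpha, K, times, events):
    """Cox negative log partial likelihood with predictor f = K @ alpha.

    f depends on the data only through the Gram matrix K, which is
    exactly the 'kernelized' property the abstract points out.
    """
    f = K @ alpha
    order = np.argsort(times)          # process failure times in order
    f, ev = f[order], events[order]
    nll = 0.0
    for i in range(len(f)):
        if ev[i]:                      # only observed failures contribute
            risk = f[i:]               # risk set: subjects with time >= t_i
            nll -= f[i] - np.log(np.sum(np.exp(risk)))
    return nll

# Toy data (hypothetical, for illustration only).
rng = np.random.default_rng(1)
X = rng.normal(size=(30, 4))
times = rng.exponential(size=30)
events = rng.integers(0, 2, size=30).astype(bool)  # True = observed failure
K = rbf_kernel(X, gamma=0.5)
alpha = np.zeros(30)
base = neg_log_partial_likelihood(alpha, K, times, events)
```

Minimising this objective over `alpha` generically gives a nonzero weight to every observation, which is the dense-solution problem motivating the two sparse methods the paper proposes.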


Subjects
Computational Biology/methods, Gene Expression Profiling, Algorithms, Artificial Intelligence, Cluster Analysis, Computer Simulation, Computing Methodologies, Humans, Lung Neoplasms/genetics, Lung Neoplasms/mortality, Lymphoma/genetics, Lymphoma/mortality, Models, Statistical, Pattern Recognition, Automated, Proportional Hazards Models, Regression Analysis, Software