Results 1 - 7 of 7
1.
Nat Commun ; 15(1): 5922, 2024 Jul 14.
Article in English | MEDLINE | ID: mdl-39004638

ABSTRACT

The discovery of scientific formulae that parsimoniously explain natural phenomena and align with existing background theory is a key goal in science. Historically, scientists have derived natural laws by manipulating equations based on existing knowledge, forming new equations, and verifying them experimentally. However, this does not include experimental data within the discovery process, which may be inefficient. We propose a solution to this problem when all axioms and scientific laws are expressible as polynomials and argue our approach is widely applicable. We model notions of minimal complexity using binary variables and logical constraints, solve polynomial optimization problems via mixed-integer linear or semidefinite optimization, and prove the validity of our scientific discoveries in a principled manner using Positivstellensatz certificates. We demonstrate that some famous scientific laws, including Kepler's Law of Planetary Motion and the Radiated Gravitational Wave Power equation, can be derived in a principled manner from axioms and experimental data.
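The paper's full pipeline uses mixed-integer and semidefinite optimization with Positivstellensatz certificates; as a toy illustration only, the minimal-complexity idea behind it can be sketched by enumerating candidate monomial laws in order of increasing complexity and accepting the simplest one consistent with data. The planetary data points and the 2% tolerance below are illustrative assumptions, not the paper's setup:

```python
import itertools

# Solar-system data: semi-major axis a (AU) and orbital period T (years).
data = [(0.387, 0.241), (0.723, 0.615), (1.0, 1.0), (1.524, 1.881), (5.203, 11.862)]

def residual(p, q):
    # Maximum relative error of the candidate law T**p = a**q over the data.
    return max(abs(T**p - a**q) / a**q for a, T in data)

# Enumerate candidate monomial laws T**p = a**q in order of increasing
# complexity p + q, and accept the first one that fits within tolerance.
candidates = sorted(itertools.product(range(1, 5), repeat=2), key=lambda pq: sum(pq))
best = next((p, q) for p, q in candidates if residual(p, q) < 0.02)
print(best)  # prints (2, 3): Kepler's third law, T^2 = a^3
```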

2.
Front Big Data ; 7: 1363978, 2024.
Article in English | MEDLINE | ID: mdl-38873283

ABSTRACT

Learning from complex, multidimensional data has become central to computational mathematics, and among the most successful high-dimensional function approximators are deep neural networks (DNNs). Training DNNs is posed as an optimization problem to learn network weights or parameters that well-approximate a mapping from input to target data. Multiway data or tensors arise naturally in myriad ways in deep learning, in particular as input data and as high-dimensional weights and features extracted by the network, with the latter often being a bottleneck in terms of speed and memory. In this work, we leverage tensor representations and processing to efficiently parameterize DNNs when learning from high-dimensional data. We propose tensor neural networks (t-NNs), a natural extension of traditional fully-connected networks, that can be trained efficiently in a reduced, yet more powerful parameter space. Our t-NNs are built upon matrix-mimetic tensor-tensor products, which retain algebraic properties of matrix multiplication while capturing high-dimensional correlations. Mimeticity enables t-NNs to inherit desirable properties of modern DNN architectures. We exemplify this by extending recent work on stable neural networks, which interpret DNNs as discretizations of differential equations, to our multidimensional framework. We provide empirical evidence of the parametric advantages of t-NNs on dimensionality reduction using autoencoders and classification using fully-connected and stable variants on benchmark imaging datasets MNIST and CIFAR-10.
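The matrix-mimetic tensor-tensor product underlying t-NNs (the t-product) can be computed by a Fourier transform along the third mode, facewise matrix products, and an inverse transform. A minimal NumPy sketch, with illustrative dimensions; the identity-tensor check shows the "mimetic" behavior:

```python
import numpy as np

def t_product(A, B):
    """Tensor-tensor product of A (n1 x n2 x n3) and B (n2 x m x n3):
    FFT along the third mode, facewise matrix products, inverse FFT."""
    Ah = np.fft.fft(A, axis=2)
    Bh = np.fft.fft(B, axis=2)
    Ch = np.einsum('ijk,jlk->ilk', Ah, Bh)  # per-slice matrix multiply
    return np.real(np.fft.ifft(Ch, axis=2))

# Mimetic property: the identity tensor (identity matrix in the first
# frontal slice, zeros elsewhere) acts like the matrix identity.
n, n3 = 4, 5
I = np.zeros((n, n, n3))
I[:, :, 0] = np.eye(n)
A = np.random.default_rng(0).standard_normal((n, n, n3))
print(np.allclose(t_product(I, A), A))  # True
```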

3.
Nat Commun ; 14(1): 1777, 2023 Apr 12.
Article in English | MEDLINE | ID: mdl-37045814

ABSTRACT

Scientists aim to discover meaningful formulae that accurately describe experimental data. Mathematical models of natural phenomena can be manually created from domain knowledge and fitted to data, or, in contrast, created automatically from large datasets with machine-learning algorithms. The problem of incorporating prior knowledge expressed as constraints on the functional form of a learned model has been studied before, while finding models that are consistent with prior knowledge expressed via general logical axioms is an open problem. We develop a method to enable principled derivations of models of natural phenomena from axiomatic knowledge and experimental data by combining logical reasoning with symbolic regression. We demonstrate these concepts for Kepler's third law of planetary motion, Einstein's relativistic time-dilation law, and Langmuir's theory of adsorption. We show we can discover governing laws from few data points when logical reasoning is used to distinguish between candidate formulae having similar error on the data.
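As a toy illustration (not the paper's reasoning engine): two candidate formulae for relativistic time dilation can fit low-speed data almost equally well, and a background axiom, namely that the dilation factor must diverge as v approaches c, is what separates them. The candidate pair, data points, and thresholds below are illustrative assumptions:

```python
import math

# Two hypothetical candidates for the time-dilation factor gamma(beta),
# beta = v/c; both fit low-speed data almost equally well.
cand_exact = lambda b: 1.0 / math.sqrt(1.0 - b**2)
cand_taylor = lambda b: 1.0 + b**2 / 2.0   # second-order Taylor expansion

# Low-speed "experimental" data generated from the true law.
data = [(b, 1.0 / math.sqrt(1.0 - b**2)) for b in (0.01, 0.05, 0.1, 0.2)]
err = lambda f: max(abs(f(b) - g) for b, g in data)
print(err(cand_exact), err(cand_taylor))  # both tiny: data alone can't decide

# A background axiom, gamma -> infinity as v -> c, separates them.
consistent = lambda f: f(0.999999) > 1e2
print(consistent(cand_exact), consistent(cand_taylor))  # True False
```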

4.
J Biomed Inform ; 122: 103901, 2021 10.
Article in English | MEDLINE | ID: mdl-34474189

ABSTRACT

In this study, we address three important challenges related to disease outbreaks such as the COVID-19 pandemic, namely, (a) providing an early warning to likely exposed individuals, (b) identifying individuals who are asymptomatic, and (c) prescribing optimal testing when testing capacity is limited. First, we present a dynamic-graph-based SEIR epidemiological model to describe the dynamics of disease propagation. Our model considers a dynamic graph/network that accounts for the interactions between individuals over time, such as those obtained by manual or automated contact tracing, and uses a diffusion-reaction mechanism to describe the state dynamics. This dynamic graph model helps identify likely exposed/infected individuals to whom we can provide early warnings, even before they display any symptoms or when they are asymptomatic. Moreover, when testing capacity is limited compared to the population size, reliable estimation of individuals' health states and disease transmissibility using epidemiological models is extremely challenging. Estimation of state uncertainty is therefore paramount, both for imminent risk assessment and for closing the tracing-testing loop by optimal testing prescription. We thus propose the use of arbitrary Polynomial Chaos Expansion, a popular technique for uncertainty quantification, to represent the states and quantify the uncertainties in the dynamic model. This design enables us to assign an uncertainty to the state of each individual and, consequently, to optimize testing so as to reduce the overall uncertainty given a constrained testing budget. These tools can also be used to optimize vaccine distribution to curb disease spread when limited vaccines are available. We present simulation results that illustrate the performance of the proposed framework and estimate the impact of incomplete contact tracing data.
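The dynamic-graph SEIR idea can be sketched as per-day contact edge lists driving exposure, plus within-node state transitions. The transition probabilities and tiny graph below are illustrative assumptions, and the paper's polynomial-chaos uncertainty quantification is not shown:

```python
import random

S, E, I, R = 'S', 'E', 'I', 'R'  # susceptible, exposed, infected, recovered

def seir_step(state, contacts, rng, p_inf=0.3, p_incub=0.2, p_rec=0.1):
    """One diffusion-reaction step. `contacts` is that day's edge list,
    e.g. from contact tracing; probabilities are illustrative."""
    new = dict(state)
    for u, v in contacts:               # exposure spreads along today's edges
        for a, b in ((u, v), (v, u)):
            if state[a] == I and state[b] == S and rng.random() < p_inf:
                new[b] = E
    for node, s in state.items():       # within-node transitions
        if s == E and rng.random() < p_incub:
            new[node] = I
        elif s == I and rng.random() < p_rec:
            new[node] = R
    return new

rng = random.Random(42)
state = {0: I, 1: S, 2: S, 3: S}        # node 0 starts infected
for day_contacts in [[(0, 1)], [(1, 2), (0, 3)], [(2, 3)]]:
    state = seir_step(state, day_contacts, rng)
print(state)
```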


Subjects
COVID-19 , Contact Tracing , Data Analysis , Humans , Pandemics , Prescriptions , SARS-CoV-2
5.
Proc Natl Acad Sci U S A ; 118(28)2021 07 13.
Article in English | MEDLINE | ID: mdl-34234014

ABSTRACT

With the advent of machine learning and its overarching pervasiveness, it is imperative to devise ways to represent large datasets efficiently while distilling intrinsic features necessary for subsequent analysis. The primary workhorse used in data dimensionality reduction and feature extraction has been the matrix singular value decomposition (SVD), which presupposes that data have been arranged in matrix format. A primary goal in this study is to show that high-dimensional datasets are more compressible when treated as tensors (i.e., multiway arrays) and compressed via tensor-SVDs under the tensor-tensor product construct and its generalizations. We begin by proving Eckart-Young optimality results for families of tensor-SVDs under two different truncation strategies. Since such optimality properties can be proven in both matrix and tensor-based algebras, a fundamental question arises: does the tensor construct subsume the matrix construct in terms of representation efficiency? The answer is positive, as proven by showing that a tensor-tensor representation of an equal-dimensional spanning space can be superior to its matrix counterpart. We then use these optimality results to investigate how the compressed representation provided by the truncated tensor SVD is related, both theoretically and empirically, to its two closest tensor-based analogs, the truncated high-order SVD and the truncated tensor-train SVD.
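A truncated tensor-SVD under the standard FFT-based t-product can be sketched in a few lines: transform along the third mode, truncate each frontal slice's matrix SVD, and transform back. This shows only one of the truncation strategies the paper analyzes; the sizes and ranks are illustrative:

```python
import numpy as np

def truncated_tsvd(A, k):
    """Rank-k truncated tensor SVD under the t-product: FFT along mode 3,
    truncate each frontal slice's matrix SVD at rank k, inverse FFT."""
    Ah = np.fft.fft(A, axis=2)
    out = np.empty_like(Ah)                        # complex working array
    for i in range(A.shape[2]):
        U, s, Vt = np.linalg.svd(Ah[:, :, i], full_matrices=False)
        out[:, :, i] = (U[:, :k] * s[:k]) @ Vt[:k, :]
    return np.real(np.fft.ifft(out, axis=2))

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 8, 4))
errs = [np.linalg.norm(A - truncated_tsvd(A, k)) for k in (2, 4, 8)]
print(errs)  # error decreases with k; k = 8 recovers A up to roundoff
```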

6.
Neuroimage ; 43(2): 258-68, 2008 Nov 01.
Article in English | MEDLINE | ID: mdl-18694835

ABSTRACT

Electrical Impedance Tomography (EIT) is an imaging method which enables a volume conductivity map of a subject to be produced from multiple impedance measurements. It has the potential to become a portable non-invasive imaging technique of particular use in imaging brain function. Accurate numerical forward models may be used to improve image reconstruction but, until now, have employed an assumption of isotropic tissue conductivity. This may be expected to introduce inaccuracy, as body tissues, especially those such as white matter and the skull in head imaging, are highly anisotropic. The purpose of this study was, for the first time, to develop a method for incorporating anisotropy in a forward numerical model for EIT of the head and to assess the resulting improvement in image quality in the case of linear reconstruction of one example of the human head. A realistic Finite Element Model (FEM) of an adult human head with segments for the scalp, skull, CSF, and brain was produced from a structural MRI. Anisotropy of the brain was estimated from a diffusion tensor-MRI of the same subject, and anisotropy of the skull was approximated from the structural information. A method for incorporating anisotropy in the forward model and using it in image reconstruction was produced. The improvement in reconstructed image quality was assessed in computer simulation by producing forward data and then performing linear reconstruction using a sensitivity matrix approach. The mean boundary data difference between anisotropic and isotropic forward models for a reference conductivity was 50%. Use of the correct anisotropic FEM in image reconstruction, as opposed to an isotropic one, corrected an error of 24 mm in imaging a 10% conductivity decrease located in the hippocampus, improved localisation for conductivity changes deep in the brain and due to epilepsy by 4-17 mm, and, overall, led to a substantial improvement in image quality. This suggests that incorporating anisotropy in the numerical models used for image reconstruction is likely to improve EIT image quality.
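The anisotropic ingredient of such a forward model enters at the element level: for a linear triangular element, the local FEM stiffness matrix is the element area times G^T sigma G, where sigma is the (possibly anisotropic) conductivity tensor and G holds the constant basis-function gradients. A minimal 2D sketch only; the paper's model is a full 3D head FEM:

```python
import numpy as np

def local_stiffness(nodes, sigma):
    """Local stiffness matrix for a linear (P1) triangle with a 2x2
    conductivity tensor sigma. nodes: 3x2 array of vertex coordinates."""
    x, y = nodes[:, 0], nodes[:, 1]
    area = 0.5 * abs((x[1]-x[0])*(y[2]-y[0]) - (x[2]-x[0])*(y[1]-y[0]))
    # Gradients of the barycentric basis functions (constant per element).
    b = np.array([y[1]-y[2], y[2]-y[0], y[0]-y[1]])
    c = np.array([x[2]-x[1], x[0]-x[2], x[1]-x[0]])
    G = np.vstack([b, c]) / (2.0 * area)   # 2x3 gradient matrix
    return area * G.T @ sigma @ G          # 3x3 element stiffness

tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
iso = local_stiffness(tri, np.eye(2))                       # isotropic
aniso = local_stiffness(tri, np.diag([2.0, 0.5]))           # anisotropic
print(np.allclose(iso.sum(axis=0), 0))  # True: rows/columns sum to zero
```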


Subjects
Brain Mapping/methods , Brain/physiology , Image Interpretation, Computer-Assisted/methods , Models, Neurological , Algorithms , Anisotropy , Computer Simulation , Electric Impedance , Head/physiology , Humans , Phantoms, Imaging , Pilot Projects , Plethysmography, Impedance , Tomography
7.
Phys Med Biol ; 51(3): 497-516, 2006 Feb 07.
Article in English | MEDLINE | ID: mdl-16424578

ABSTRACT

Diffuse optical tomography (DOT) is an emerging functional medical imaging modality which aims to recover the optical properties of biological tissue. The forward problem of the light propagation of DOT can be modelled in the frequency domain as a diffusion equation with Robin boundary conditions. In the case of multilayered geometries with piecewise constant parameters, the forward problem is equivalent to a set of coupled Helmholtz equations. In this paper, we present solutions for the multilayered diffuse light propagation for a three-layer concentric sphere model using a series expansion method and for a general layered geometry using the boundary element method (BEM). Results are presented comparing these solutions to an independent Monte Carlo model, and for an example three layered head model.
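The frequency-domain diffusion model has a closed-form Green's function in an infinite homogeneous medium, the building block that layered-geometry solutions generalize. A sketch under one common sign convention, with illustrative tissue-like parameters not taken from the paper:

```python
import cmath
import math

# Illustrative optical parameters for a tissue-like medium.
mu_a, mu_s_prime = 0.01, 1.0            # absorption / reduced scattering, mm^-1
c = 2.14e11                              # speed of light in tissue, mm/s
D = 1.0 / (3.0 * (mu_a + mu_s_prime))    # diffusion coefficient, mm

def fluence(r, omega):
    """Infinite-medium Green's function exp(-k r) / (4 pi D r), with
    complex wavenumber k = sqrt((mu_a + i omega / c) / D)
    (one common sign convention)."""
    k = cmath.sqrt((mu_a + 1j * omega / c) / D)
    return cmath.exp(-k * r) / (4.0 * math.pi * D * r)

# 10 mm from the source at 100 MHz modulation: complex fluence whose
# amplitude decays and whose phase lags with distance.
phi = fluence(10.0, 2.0 * math.pi * 100e6)
print(abs(phi), cmath.phase(phi))
```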


Subjects
Head/radiation effects , Tomography, Optical/methods , Algorithms , Computer Simulation , Diffusion , Head/anatomy & histology , Humans , Light , Models, Statistical , Models, Theoretical , Monte Carlo Method , Normal Distribution , Phantoms, Imaging , Photons , Scattering, Radiation