Results 1 - 3 of 3
1.
Nat Commun ; 15(1): 2057, 2024 Mar 06.
Article in English | MEDLINE | ID: mdl-38448426

ABSTRACT

We link changes in crustal permeability to informative features of microearthquakes (MEQs) using two field hydraulic stimulation experiments where both MEQs and permeability evolution are recorded simultaneously. The Bidirectional Long Short-Term Memory (Bi-LSTM) model effectively predicts permeability evolution and ultimate permeability increase. Our findings confirm the form of key features linking the MEQs to permeability, offering mechanistically consistent interpretations of this association. Transfer learning correctly predicts permeability evolution of one experiment from a model trained on an alternate dataset and locale, which further reinforces the innate interdependency of permeability-to-seismicity. Models representing permeability evolution on reactivated fractures in both shear and tension suggest scaling relationships in which changes in permeability (Δk) are linearly related to the seismic moment (M) of individual MEQs as Δk ∝ M. This scaling relation rationalizes our observation of the permeability-to-seismicity linkage, contributes to its predictive robustness, and accentuates its potential in characterizing crustal permeability evolution using MEQs.
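The abstract does not specify the network configuration; the sketch below is only a minimal illustration of a Bi-LSTM regressor over a sequence of per-event MEQ features. The layer sizes, feature count, and the idea of regressing log-permeability at the final step are assumptions for illustration, not the authors' published setup.

```python
# Minimal sketch (assumed configuration): a Bi-LSTM that maps a sequence of
# per-event MEQ features to a single permeability estimate.
import torch
import torch.nn as nn

class PermeabilityBiLSTM(nn.Module):
    def __init__(self, n_features=4, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)  # regress log-permeability

    def forward(self, x):
        out, _ = self.lstm(x)          # (batch, time, 2*hidden)
        return self.head(out[:, -1])   # estimate at the last time step

model = PermeabilityBiLSTM()
meq_sequence = torch.randn(8, 200, 4)  # 8 stimulation windows, 200 events, 4 features
log_k = model(meq_sequence)            # (8, 1) predicted log-permeability
print(log_k.shape)
```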

2.
Nat Commun ; 14(1): 3693, 2023 Jun 21.
Article in English | MEDLINE | ID: mdl-37344479

ABSTRACT

Predicting failure in solids has broad applications, including earthquake prediction, which remains an unattainable goal. However, recent machine learning work shows that laboratory earthquakes can be predicted using micro-failure events and the temporal evolution of fault zone elastic properties. Remarkably, these results come from purely data-driven models trained with large datasets. Such data are equivalent to centuries of fault motion, rendering application to tectonic faulting unclear. In addition, the underlying physics of such predictions is poorly understood. Here, we address scalability using a novel Physics-Informed Neural Network (PINN). Our model encodes fault physics in the deep learning loss function using time-lapse ultrasonic data. PINN models outperform data-driven models and significantly improve transfer learning for small training datasets and for conditions outside those used in training. Our work suggests that PINNs offer a promising path for machine learning-based failure prediction and, ultimately, for improving our understanding of earthquake physics and prediction.
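The defining ingredient of a PINN is a loss that combines a data-misfit term with a physics-residual penalty evaluated via automatic differentiation. The sketch below shows that structure only; the residual used here (matching dy/dt to a supplied rate) is a placeholder and not the fault-physics constraint or ultrasonic-data formulation of the paper.

```python
# Illustrative PINN-style loss: data misfit + physics residual via autograd.
# The governing relation enforced below is a placeholder assumption.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))

def pinn_loss(t_data, y_data, t_phys, phys_rate, lam=1.0):
    # Data term: fit observed samples (e.g., a failure-related observable vs. time).
    data_loss = nn.functional.mse_loss(net(t_data), y_data)

    # Physics term: penalize deviation of dy/dt from an assumed governing rate.
    t = t_phys.clone().requires_grad_(True)
    y = net(t)
    dy_dt = torch.autograd.grad(y, t, grad_outputs=torch.ones_like(y),
                                create_graph=True)[0]
    phys_loss = ((dy_dt - phys_rate) ** 2).mean()
    return data_loss + lam * phys_loss

t_data, y_data = torch.rand(32, 1), torch.rand(32, 1)
t_phys, phys_rate = torch.rand(64, 1), torch.zeros(64, 1)
loss = pinn_loss(t_data, y_data, t_phys, phys_rate)
loss.backward()
```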

3.
IEEE Trans Neural Netw Learn Syst ; 31(10): 4267-4278, 2020 Oct.
Article in English | MEDLINE | ID: mdl-31976910

ABSTRACT

Temporal models based on recurrent neural networks have proven to be quite powerful in a wide variety of applications, including language modeling and speech processing. However, training these models often relies on backpropagation through time (BPTT), which entails unfolding the network over many time steps, making credit assignment considerably more challenging. Furthermore, the nature of backpropagation itself does not permit the use of nondifferentiable activation functions and is inherently sequential, making parallelization of the underlying training process difficult. Here, we propose the parallel temporal neural coding network (P-TNCN), a biologically inspired model trained by a learning algorithm we call local representation alignment. It aims to resolve the difficulties that plague recurrent networks trained by BPTT. The architecture requires neither unrolling in time nor the derivatives of its internal activation functions. We compare our model and learning procedure with other BPTT alternatives (which also tend to be computationally expensive), including real-time recurrent learning, echo state networks, and unbiased online recurrent optimization. We show that it outperforms these on sequence modeling benchmarks such as Bouncing MNIST, a new benchmark we denote Bouncing NotMNIST, and Penn Treebank. Notably, our approach can, in some instances, outperform full BPTT as well as variants such as sparse attentive backtracking. Significantly, the hidden-unit correction phase of P-TNCN allows it to adapt to new datasets even if its synaptic weights are held fixed (zero-shot adaptation) and facilitates retention of prior generative knowledge when faced with a task sequence. We present results that show the P-TNCN's ability to conduct zero-shot adaptation and online continual sequence modeling.
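To make the contrast with BPTT concrete, the sketch below shows the general flavor of local, unrolling-free updates: each step forms a local prediction, the local prediction error drives outer-product weight updates, and no gradient is propagated back through time. This is a schematic of the concept only, not the published P-TNCN / local representation alignment algorithm, and all sizes and rates are assumed.

```python
# Schematic of local, unrolling-free recurrent updates (NOT the P-TNCN algorithm).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, lr = 10, 32, 1e-2
W_in = rng.normal(scale=0.1, size=(n_hid, n_in))    # input -> hidden
W_rec = rng.normal(scale=0.1, size=(n_hid, n_hid))  # hidden -> hidden
W_out = rng.normal(scale=0.1, size=(n_in, n_hid))   # hidden -> predicted next input

h = np.zeros(n_hid)
sequence = rng.normal(size=(100, n_in))              # toy input sequence

for t in range(len(sequence) - 1):
    x_t, x_next = sequence[t], sequence[t + 1]
    h_new = np.tanh(W_in @ x_t + W_rec @ h)          # one-step state update
    x_pred = W_out @ h_new                           # local prediction of next input
    err = x_next - x_pred                            # purely local error signal

    # Local outer-product updates; nothing is backpropagated through time.
    W_out += lr * np.outer(err, h_new)
    delta_h = (W_out.T @ err) * (1.0 - h_new ** 2)   # error projected one layer back
    W_in += lr * np.outer(delta_h, x_t)
    W_rec += lr * np.outer(delta_h, h)
    h = h_new
```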
