Results 1 - 4 of 4
1.
ArXiv ; 2023 Aug 30.
Article in English | MEDLINE | ID: mdl-37693178

ABSTRACT

Ultrasound computed tomography (USCT) is an emerging imaging modality that holds great promise for breast imaging. Full-waveform inversion (FWI)-based image reconstruction methods incorporate accurate wave physics to produce high-spatial-resolution quantitative images of the speed of sound or other acoustic properties of breast tissue from USCT measurement data. However, the high computational cost of FWI reconstruction represents a significant burden for its widespread application in a clinical setting. The research reported here investigates the use of a convolutional neural network (CNN) to learn a mapping from USCT waveform data to speed-of-sound estimates. The CNN was trained using a supervised approach with a task-informed loss function aimed at preserving features of the image that are relevant to lesion detection. A large set of anatomically and physiologically realistic numerical breast phantoms (NBPs) and corresponding simulated USCT measurements was employed during training. Once trained, the CNN can perform real-time FWI image reconstruction from USCT waveform data. The performance of the proposed method was assessed and compared against FWI using a hold-out sample of 41 NBPs and corresponding USCT data. Accuracy was measured using relative mean square error (RMSE), structural similarity index measure (SSIM), and lesion detection performance (DICE score). This numerical experiment demonstrates that a supervised learning model can achieve accuracy comparable to FWI in terms of RMSE and SSIM, with better task (lesion detection) performance, while significantly reducing computational time.
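
A minimal sketch of the general approach described here (not the authors' code): a small CNN that maps simulated USCT waveform data to a speed-of-sound image, trained with a task-informed loss combining MSE with a lesion-overlap term. All shapes, layer sizes, and the weight `alpha` are illustrative assumptions.

import torch
import torch.nn as nn

class WaveformToSoundSpeed(nn.Module):
    def __init__(self, n_receivers=64, n_samples=256, img_size=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        feat = 32 * (n_receivers // 4) * (n_samples // 4)
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(feat, img_size * img_size),
            nn.Unflatten(1, (1, img_size, img_size)),
        )

    def forward(self, waveforms):                      # (B, 1, n_receivers, n_samples)
        return self.head(self.encoder(waveforms))      # (B, 1, img_size, img_size)

def task_informed_loss(pred_sos, true_sos, lesion_mask, alpha=0.1):
    """MSE on speed of sound plus an illustrative soft-Dice-style term inside lesions."""
    mse = nn.functional.mse_loss(pred_sos, true_sos)
    inter = (pred_sos * true_sos * lesion_mask).sum()
    denom = ((pred_sos ** 2 + true_sos ** 2) * lesion_mask).sum() + 1e-8
    return mse + alpha * (1.0 - 2.0 * inter / denom)

# One illustrative training step on random stand-in data.
model = WaveformToSoundSpeed()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
waveforms = torch.randn(4, 1, 64, 256)
true_sos = torch.rand(4, 1, 64, 64)
mask = (torch.rand(4, 1, 64, 64) > 0.9).float()
opt.zero_grad()
loss = task_informed_loss(model(waveforms), true_sos, mask)
loss.backward()
opt.step()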

2.
J Imaging ; 8(12)2022 Nov 22.
Article in English | MEDLINE | ID: mdl-36547477

ABSTRACT

Seismic full-waveform inversion (FWI) is a widely used non-linear seismic imaging method for reconstructing subsurface velocity images; however, it is time consuming, has a high computational cost, and depends heavily on human interaction. Recently, deep learning has accelerated the adoption of several data-driven techniques, although most of these techniques suffer from overfitting and stability issues. In this work, we propose an edge-computing-based data-driven inversion technique built on supervised deep convolutional neural networks to accurately reconstruct subsurface velocities. Deep learning-based data-driven techniques depend mostly on bulk data training. We train our deep convolutional neural networks (DCNs), UNet and InversionNet, on raw seismic data and the corresponding velocity models during the training phase to learn the non-linear mapping between seismic data and velocity models. The trained network is then used to estimate velocity models from new input seismic data during the prediction phase. The prediction phase is performed on a resource-constrained edge device such as a Raspberry Pi, which provides real-time, on-device computational power to execute the inference process. In addition, we demonstrate the robustness of our models to noise by performing both noise-aware and no-noise training and feeding the resulting trained models with noise at different signal-to-noise ratio (SNR) values. We achieve feasible inference times on the Raspberry Pi for both models: the inference times per prediction for the UNet and InversionNet models on the Raspberry Pi were 22 and 4 s, respectively, while the inference times for the two models on a GPU were 2 and 18 s, which are comparable. Finally, we designed a user-friendly interactive graphical user interface (GUI) to automate the model execution and inversion process on the Raspberry Pi.
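
A minimal sketch of two ingredients mentioned here, under my own assumptions rather than the paper's code: corrupting seismic shot gathers with Gaussian noise at a target SNR (dB) for noise-aware training, and timing a single forward pass as one would on an edge device such as a Raspberry Pi. `SeismicNet` is a tiny placeholder standing in for UNet or InversionNet.

import time
import numpy as np
import torch
import torch.nn as nn

def add_noise_at_snr(data: np.ndarray, snr_db: float) -> np.ndarray:
    """Return data corrupted with white Gaussian noise at the given SNR (dB)."""
    signal_power = np.mean(data ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = np.random.normal(0.0, np.sqrt(noise_power), size=data.shape)
    return data + noise

class SeismicNet(nn.Module):                 # placeholder for UNet / InversionNet
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

model = SeismicNet().eval()
shot = add_noise_at_snr(np.random.randn(1, 1, 128, 128), snr_db=10.0)
x = torch.from_numpy(shot).float()
with torch.no_grad():
    t0 = time.perf_counter()
    velocity = model(x)                      # predicted velocity model
    print(f"inference time: {time.perf_counter() - t0:.3f} s")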

3.
J Acoust Soc Am ; 152(4): 2434, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36319237

ABSTRACT

We develop a deep learning-based infrasonic detection and categorization methodology that uses convolutional neural networks with self-attention layers to identify stationary and non-stationary signals in infrasound array processing results. Using features extracted from the coherence and direction-of-arrival information produced by beamforming at different infrasound arrays, our model detects signals more reliably than approaches based on raw waveform data. Using three infrasound stations maintained as part of the International Monitoring System, we construct an analyst-reviewed data set for model training and evaluation. We construct models using a four-category framework, a generalized noise vs. non-noise detection scheme, and a signal-of-interest (SOI) categorization framework that merges the short-duration stationary and non-stationary categories into a single SOI category. We evaluate these models using a combination of k-fold cross-validation, comparison with an existing "state-of-the-art" detector, and a transportability analysis. Although results are mixed in distinguishing stationary from non-stationary short-duration signals, F-scores for the noise vs. non-noise and SOI analyses are consistently above 0.96, implying that deep learning-based infrasonic categorization is a highly accurate means of identifying signals of interest in infrasonic data records.
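
A minimal sketch of the kind of model described here (illustrative, not the authors' architecture): a small 1-D CNN with a self-attention layer that classifies infrasound array-processing features, such as beamforming coherence and back-azimuth over time, into four categories. The feature channels, sequence length, and category count are assumptions.

import torch
import torch.nn as nn

class InfrasoundClassifier(nn.Module):
    def __init__(self, n_features=2, n_classes=4, d_model=32):
        super().__init__()
        # 1-D convolutions over the time axis of the feature streams.
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, d_model, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(d_model, d_model, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                     # x: (B, n_features, T)
        h = self.conv(x).transpose(1, 2)      # (B, T, d_model)
        h, _ = self.attn(h, h, h)             # self-attention over time
        return self.head(h.mean(dim=1))       # (B, n_classes) logits

# Example: batch of 8 windows, 2 feature channels (coherence, back-azimuth), 300 time steps.
logits = InfrasoundClassifier()(torch.randn(8, 2, 300))
print(logits.shape)                           # torch.Size([8, 4])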

4.
J Geophys Res Solid Earth ; 127(11): e2022JB024401, 2022 Nov.
Article in English | MEDLINE | ID: mdl-37033773

ABSTRACT

Accurate earthquake location and magnitude estimation play critical roles in seismology. Recent deep learning frameworks have produced encouraging results on various seismological tasks (e.g., earthquake detection, phase picking, seismic classification, and earthquake early warning). Many existing machine learning earthquake location methods utilize waveform information from a single station; however, multiple stations contain more complete information for earthquake source characterization. Inspired by recent successes in applying graph neural networks (GNNs) to graph-structured data, we develop a Spatiotemporal Graph Neural Network (STGNN) for estimating earthquake locations and magnitudes. Our graph neural network leverages geographical and waveform information from multiple stations to construct graphs automatically and dynamically, with adaptive message passing based on the graphs' edges. Using a recent graph neural network and a fully convolutional neural network as baselines, we apply STGNN to earthquakes recorded by the Southern California Seismic Network from 2000 to 2019 and earthquakes collected in Oklahoma from 2014 to 2015. STGNN yields more accurate earthquake locations than the baseline models and performs comparably in terms of depth and magnitude prediction, though the ability to predict depth and magnitude remains weak for all tested models. Our work demonstrates the potential of using GNNs and multiple stations for better automatic estimation of earthquake epicenters.
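
A minimal sketch of the general idea (assumptions throughout, not the STGNN in the paper): one round of message passing over a fully connected station graph, where each node carries per-station waveform summary features plus coordinates, and a pooled readout regresses epicenter (lat, lon), depth, and magnitude.

import torch
import torch.nn as nn

class StationGNN(nn.Module):
    def __init__(self, feat_dim=16, hidden=64):
        super().__init__()
        self.encode = nn.Linear(feat_dim + 2, hidden)   # waveform feats + (lat, lon)
        self.message = nn.Linear(2 * hidden, hidden)
        self.readout = nn.Linear(hidden, 4)             # lat, lon, depth, magnitude

    def forward(self, feats, coords):
        # feats: (B, N_stations, feat_dim), coords: (B, N_stations, 2)
        h = torch.relu(self.encode(torch.cat([feats, coords], dim=-1)))
        # Message passing on a fully connected graph: each node receives
        # the mean of all node states as its aggregated message.
        mean_msg = h.mean(dim=1, keepdim=True).expand_as(h)
        h = torch.relu(self.message(torch.cat([h, mean_msg], dim=-1)))
        return self.readout(h.mean(dim=1))              # graph-level prediction

# Example: batch of 2 events, 10 stations, 16 summary features per station.
pred = StationGNN()(torch.randn(2, 10, 16), torch.rand(2, 10, 2))
print(pred.shape)                                       # torch.Size([2, 4])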
