Results 1 - 4 of 4
1.
Sci Rep ; 13(1): 1394, 2023 Jan 25.
Article in English | MEDLINE | ID: mdl-36697487

ABSTRACT

For centuries, scientists have observed nature to understand the laws that govern the physical world. The traditional process of turning observations into physical understanding is slow. Imperfect models are constructed and tested to explain relationships in data. Powerful new algorithms can enable computers to learn physics by observing images and videos. Inspired by this idea, instead of training machine learning models using physical quantities, we used images, that is, pixel information. For this work, and as a proof of concept, the physics of interest are wind-driven spatial patterns. These phenomena include features in Aeolian dunes and volcanic ash deposition, wildfire smoke, and air pollution plumes. We use computer model simulations of spatial deposition patterns to approximate images from a hypothetical imaging device whose outputs are red, green, and blue (RGB) color images with channel values ranging from 0 to 255. In this paper, we explore deep convolutional neural network-based autoencoders to exploit relationships in wind-driven spatial patterns, which commonly occur in geosciences, and reduce their dimensionality. Reducing the data dimension size with an encoder enables training deep, fully connected neural network models linking geographic and meteorological scalar input quantities to the encoded space. Once this is achieved, full spatial patterns are reconstructed using the decoder. We demonstrate this approach on images of spatial deposition from a pollution source, where the encoder compresses the dimensionality to 0.02% of the original size, and the full predictive model performance on test data achieves a normalized root mean squared error of 8%, a figure of merit in space of 94% and a precision-recall area under the curve of 0.93.
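As a rough illustration of the pipeline described above, the sketch below pairs a convolutional autoencoder with a fully connected network that maps scalar inputs to the encoded space. It assumes PyTorch, a 64x64 RGB input, a 32-dimensional latent space, and eight scalar predictors; all of these are illustrative choices, not the paper's actual architecture.

    # Minimal sketch (PyTorch assumed; layer sizes and the 64x64 RGB input are illustrative).
    import torch
    import torch.nn as nn

    class ConvAutoencoder(nn.Module):
        def __init__(self, latent_dim=32):
            super().__init__()
            # Encoder: 3x64x64 RGB image -> latent vector
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # -> 16x32x32
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # -> 32x16x16
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # -> 64x8x8
                nn.Flatten(),
                nn.Linear(64 * 8 * 8, latent_dim),
            )
            # Decoder: latent vector -> reconstructed 3x64x64 image
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim, 64 * 8 * 8), nn.ReLU(),
                nn.Unflatten(1, (64, 8, 8)),
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # channels scaled to [0, 1]
            )

        def forward(self, x):
            z = self.encoder(x)
            return self.decoder(z)

    # Fully connected model mapping scalar inputs (e.g., wind speed/direction and source
    # coordinates -- the exact predictors are assumptions here) to the encoded space.
    class ScalarToLatent(nn.Module):
        def __init__(self, n_inputs=8, latent_dim=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_inputs, 128), nn.ReLU(),
                nn.Linear(128, 128), nn.ReLU(),
                nn.Linear(128, latent_dim),
            )

        def forward(self, s):
            return self.net(s)

    # At inference time, a predicted latent code is pushed through the trained decoder
    # to reconstruct a full spatial deposition pattern:
    #   pattern = autoencoder.decoder(scalar_model(scalar_inputs))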

2.
Sci Rep ; 12(1): 8968, 2022 05 27.
Article in English | MEDLINE | ID: mdl-35624187

ABSTRACT

Owing to the widespread use of smartphones, images posted by individuals and media agencies appear on social media platforms shortly after significant earthquakes. These images can provide information about shaking damage in the earthquake region to both the public and the research community, and can potentially guide rescue work. This paper presents an automated way to extract images of damaged buildings from social media platforms such as Twitter after earthquakes, and thus to identify the user posts containing such images. Using transfer learning and ~6500 manually labelled images, we trained a deep learning model to recognize images with damaged buildings in the scene. The trained model achieved good performance when tested on newly acquired images of earthquakes at different locations and when run in near real time on the Twitter feed after the 2020 M7.0 earthquake in Turkey. Furthermore, to better understand how the model makes decisions, we also implemented the Grad-CAM method to visualize the regions of the images that most influence its decisions.
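A minimal transfer-learning sketch of this kind of classifier follows; it assumes PyTorch/torchvision and an ImageNet-pretrained ResNet-50 backbone, which are illustrative assumptions since the paper's exact backbone and hyperparameters are not given here.

    # Transfer-learning sketch (PyTorch/torchvision assumed; ResNet-50 is an assumption).
    import torch
    import torch.nn as nn
    from torchvision import models

    # Start from an ImageNet-pretrained backbone and replace the classification head
    # with a binary output: "damaged building present" vs. "not present".
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    for p in model.parameters():
        p.requires_grad = False                      # freeze the pretrained feature extractor
    model.fc = nn.Linear(model.fc.in_features, 2)    # new trainable head

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)

    def train_step(images, labels):
        """One optimization step on a batch of manually labelled social-media images."""
        model.train()
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        return loss.item()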


Subject(s)
Crowdsourcing , Earthquakes , Social Media , Humans , Machine Learning , Smartphone
3.
Sci Rep ; 12(1): 1634, 2022 01 31.
Article in English | MEDLINE | ID: mdl-35102161

ABSTRACT

Online social networks (OSNs) have become a powerful tool for studying collective human responses to extreme events such as earthquakes. Most previous research concentrated on a single platform, using the behavior of its users to study people's general responses. In this study, we explore the characteristics of people's behavior on different OSNs and conduct a cross-platform analysis of public responses to earthquakes. Our findings support the Uses and Gratifications theory: users engage with the platform, Reddit or Twitter, that they feel best reflects their sense of self. Using the 2019 Ridgecrest earthquakes as our study cases, we collected 510,579 tweets and 45,770 Reddit posts (1437 submissions and 44,333 comments) to answer the following research questions: (1) What were the similarities and differences between public responses on Twitter and Reddit? (2) Given the different mechanisms of Twitter and Reddit, what unique information about public responses can we learn from Reddit compared with Twitter? By answering these questions, we aim to fill the gap in cross-platform research on public responses to natural hazards. Our study shows that users on the two platforms discuss different topics and express different sentiments about the same earthquake, which indicates that cross-platform analysis of OSNs is needed to obtain a more comprehensive picture of public responses to disasters. Our analysis also finds that the r/conspiracy subreddit was one of the major venues where people discussed the 2019 Ridgecrest earthquakes on Reddit, and that different misinformation and conspiracy narratives spread on the two platforms (e.g., "the Big One is coming" on Twitter and "nuclear test" on Reddit).
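As a hedged illustration of one ingredient of such a cross-platform comparison, the sketch below computes a mean sentiment score per platform with NLTK's VADER analyzer; the paper does not state which sentiment tool it used, so the tool choice and the example posts are placeholders.

    # Illustrative sketch only: VADER (via NLTK) is an assumption, and the posts are placeholders.
    from nltk.sentiment import SentimentIntensityAnalyzer
    from statistics import mean

    sia = SentimentIntensityAnalyzer()   # requires nltk.download("vader_lexicon") once

    def platform_sentiment(posts):
        """Mean VADER compound score (-1 = most negative, +1 = most positive)."""
        return mean(sia.polarity_scores(text)["compound"] for text in posts)

    tweets = ["Felt the Ridgecrest quake all the way in LA, stay safe everyone"]
    reddit_posts = ["Anyone else think the shaking came from the nearby naval base?"]

    print("Twitter mean sentiment:", platform_sentiment(tweets))
    print("Reddit mean sentiment: ", platform_sentiment(reddit_posts))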

4.
Sci Adv ; 2(2): e1501055, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26933682

ABSTRACT

Large magnitude earthquakes in urban environments continue to kill and injure tens to hundreds of thousands of people, inflicting lasting societal and economic disasters. Earthquake early warning (EEW) provides seconds to minutes of warning, allowing people to move to safe zones and automated slowdown and shutdown of transit and other machinery. The handful of EEW systems operating around the world use traditional seismic and geodetic networks that exist only in a few nations. Smartphones are much more prevalent than traditional networks and contain accelerometers that can also be used to detect earthquakes. We report on the development of a new type of seismic system, MyShake, that harnesses personal/private smartphone sensors to collect data and analyze earthquakes. We show that smartphones can record magnitude 5 earthquakes at distances of 10 km or less and develop an on-phone detection capability to separate earthquakes from other everyday shakes. Our proof-of-concept system then collects earthquake data at a central site where a network detection algorithm confirms that an earthquake is under way and estimates the location and magnitude in real time. This information can then be used to issue an alert of forthcoming ground shaking. MyShake could be used to enhance EEW in regions with traditional networks and could provide the only EEW capability in regions without. In addition, the seismic waveforms recorded could be used to deliver rapid microseism maps, study impacts on buildings, and possibly image shallow earth structure and earthquake rupture kinematics.
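To illustrate the kind of on-phone triggering such a system needs, the sketch below applies a generic short-term-average/long-term-average (STA/LTA) trigger, a standard seismological heuristic, to a three-axis accelerometer stream. It is not MyShake's actual earthquake-vs-everyday-shake classifier, and the sampling rate, window lengths, and threshold are illustrative.

    # Illustrative only: a generic STA/LTA trigger, not the paper's on-phone detection algorithm.
    import numpy as np

    def sta_lta_trigger(accel, fs=50.0, sta_win=0.5, lta_win=10.0, threshold=3.0):
        """Return True wherever the STA/LTA ratio of shaking energy exceeds `threshold`.

        accel : (n, 3) array of accelerometer samples in m/s^2
        fs    : sampling rate in Hz (50 Hz is an assumption)
        """
        amplitude = np.linalg.norm(accel, axis=1)      # combine x, y, z axes
        energy = amplitude ** 2
        n_sta = int(sta_win * fs)
        n_lta = int(lta_win * fs)

        def moving_average(x, n):
            kernel = np.ones(n) / n
            return np.convolve(x, kernel, mode="same")

        sta = moving_average(energy, n_sta)            # short-term average
        lta = moving_average(energy, n_lta) + 1e-12    # long-term average (avoid division by zero)
        return (sta / lta) > threshold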


Subject(s)
Disasters , Earthquakes , Smartphone , Algorithms , Disasters/prevention & control , Disasters/statistics & numerical data , Earthquakes/classification , Earthquakes/statistics & numerical data , Humans , Risk Assessment , Smartphone/statistics & numerical data , Urban Population