Results 1 - 5 of 5
1.
Angew Chem Int Ed Engl ; 63(14): e202317978, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38357744

ABSTRACT

Nanoparticle (NP) characterization is essential because diverse shapes, sizes, and morphologies inevitably occur in as-synthesized NP mixtures, profoundly impacting their properties and applications. Currently, the only technique that can concurrently determine these structural parameters is electron microscopy, but it is time-intensive and tedious. Here, we create a three-dimensional (3D) NP structural space to concurrently determine the purity, size, and shape of 1000 sets of as-synthesized Ag nanocube mixtures containing interfering nanospheres and nanowires from their extinction spectra, attaining low predictive errors of 2.7-7.9 %. We first use plasmonically driven feature enrichment to extract localized surface plasmon resonance attributes from the spectra and establish a lasso regressor (LR) model to predict purity, size, and shape. Leveraging the learned LR, we artificially generate 425,592 augmented extinction spectra to overcome data scarcity and create a comprehensive NP structural space that bidirectionally predicts extinction spectra from structural parameters with <4 % error. Our interpretable NP structural space further identifies the two higher-order combined electric dipole, quadrupole, and magnetic dipole modes as the critical predictors of the structural parameters. By incorporating the extinction spectra of other NP shapes and mixtures, we anticipate that our approach, especially the data augmentation, can create a fully generalizable NP structural space to drive on-demand, autonomous synthesis-characterization platforms.
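The forward model described above is a lasso regression from spectral features to structural parameters. A minimal sketch of that idea, using synthetic stand-in features rather than the paper's plasmonic attributes (all names, dimensions, and values below are hypothetical):

```python
# Hypothetical sketch: lasso regression mapping spectral features to a
# nanoparticle structural parameter (e.g. nanocube purity). The feature
# matrix here is synthetic noise, standing in for LSPR-derived attributes.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_samples, n_features = 1000, 12
X = rng.normal(size=(n_samples, n_features))
true_w = np.zeros(n_features)
true_w[:4] = [3.0, -2.0, 1.5, 0.5]        # only a few features matter
y = X @ true_w + rng.normal(scale=0.1, size=n_samples)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = Lasso(alpha=0.01).fit(X_tr, y_tr)

print("R^2 on held-out spectra:", round(model.score(X_te, y_te), 3))
print("nonzero coefficients:", int(np.sum(model.coef_ != 0)))
```

The l1 penalty drives most coefficients to zero, mirroring the interpretability claim that only a few plasmonic attributes dominate each structural parameter.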

2.
Eur J Oper Res ; 304(1): 84-98, 2023 Jan 01.
Article in English | MEDLINE | ID: mdl-34785855

ABSTRACT

Although social distancing can effectively contain the spread of infectious diseases by reducing social interactions, it can have adverse economic effects. Crises such as the COVID-19 pandemic create dilemmas for policymakers because the long-term implementation of restrictive social distancing policies may cause massive economic damage and ultimately harm healthcare systems. This paper proposes an epidemic control framework that policymakers can use as a data-driven decision support tool for setting efficient social distancing targets. The framework addresses three aspects of the COVID-19 pandemic that are related to social distancing or community mobility data: modeling, financial implications, and policy-making. Thus, we explore the COVID-19 pandemic and the concurrent economic situation as functions of historical pandemic data and mobility control. This approach allows us to formulate an efficient social distancing policy as a stochastic feedback control problem that minimizes the aggregated risks of disease transmission and economic volatility. We further demonstrate the use of a deep learning algorithm to solve this control problem. Finally, by applying our framework to U.S. data, we empirically examine the efficiency of the U.S. social distancing policy.
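The trade-off at the heart of this formulation can be illustrated with a toy deterministic SIR model in which a mobility level u scales transmission and the objective balances infection burden against lost economic activity. This is a simplified sketch only, not the paper's stochastic feedback control problem or its deep learning solver; all parameters and the cost weighting are hypothetical:

```python
# Toy illustration: grid-search the constant mobility level u in [0, 1]
# that minimizes infection burden plus an economic-loss term in a
# discrete-time SIR model. Not the paper's algorithm.
import numpy as np

def simulate_cost(u, beta=0.3, gamma=0.1, days=180, econ_weight=0.05):
    s, i = 0.99, 0.01
    infection_cost = 0.0
    for _ in range(days):
        new_inf = beta * u * s * i      # mobility u scales transmission
        s -= new_inf
        i += new_inf - gamma * i
        infection_cost += i             # accumulated prevalence
    economic_cost = econ_weight * days * (1.0 - u)   # lost activity
    return infection_cost + economic_cost

grid = np.linspace(0.0, 1.0, 101)
costs = [simulate_cost(u) for u in grid]
u_star = grid[int(np.argmin(costs))]
print(f"cost-minimizing mobility level: u = {u_star:.2f}")
```

Replacing the grid search with a neural network mapping the observed state to u would be the (much harder) feedback-control analogue the paper actually solves.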

3.
Nanoscale Horiz ; 7(6): 626-633, 2022 05 31.
Article in English | MEDLINE | ID: mdl-35507320

ABSTRACT

Determination of nanoparticle size and size distribution is important because these key parameters dictate nanomaterials' properties and applications. Yet, at present this can only be accomplished using low-throughput electron microscopy. Herein, we incorporate plasmonic-domain-driven feature engineering with machine learning (ML) for accurate and bidirectional prediction of both parameters, enabling complete characterization of nanoparticle ensembles. Using gold nanospheres as our model system, our ML approach achieves the lowest prediction errors of 2.3% and ±1.0 nm for ensemble size and size distribution, respectively, which is 3-6 times lower than previously reported ML or Mie approaches. Knowledge elicitation from the plasmonic domain and its translation into featurization allow us to mitigate noise and boost data interpretability. This enables us to overcome challenges arising from size anisotropy and small sample sizes to achieve highly generalizable ML models. We further showcase inverse prediction, using size and size distribution as inputs to generate spectra whose localized surface plasmon resonances (LSPRs) closely match experimental data. This work illustrates an ML-empowered total nanocharacterization strategy that is rapid (<30 s), versatile, and applicable over a wide size range spanning 200 nm.
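The bidirectional prediction idea can be sketched with an assumed toy relation between nanosphere diameter and LSPR peak position: a forward model maps engineered spectral features to size, and an inverse model maps size back to the peak. The red-shift relation and noise level below are invented for illustration and are not the paper's Mie-derived features:

```python
# Toy bidirectional prediction: diameter <-> LSPR peak position, under a
# hypothetical monotonic red-shift relation with small spectral noise.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
diam = rng.uniform(10, 100, 500)                         # nm
peak = 517 + 0.6 * diam + 0.002 * diam**2 \
       + rng.normal(scale=0.5, size=diam.size)           # nm, toy relation

feats = np.column_stack([peak, peak**2])                 # engineered features
fwd = LinearRegression().fit(feats, diam)                # spectrum -> size
inv = LinearRegression().fit(np.column_stack([diam, diam**2]), peak)

d_pred = fwd.predict(feats)
err = np.mean(np.abs(d_pred - diam) / diam) * 100
print(f"mean relative size error: {err:.1f}%")
```

Engineering features in the domain where the physics is (near-)linear, as above, is what lets a simple regressor stay accurate and interpretable.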


Subject(s)
Nanospheres , Nanostructures , Gold , Machine Learning
4.
PLoS One ; 15(8): e0237747, 2020.
Article in English | MEDLINE | ID: mdl-32822369

ABSTRACT

Given the great significance of biomolecular flexibility in biomolecular dynamics and functional analysis, various experimental and theoretical models have been developed. Experimentally, the Debye-Waller factor, also known as the B-factor, measures atomic mean-square displacement and is usually considered an important measure of flexibility. Theoretically, elastic network models, the Gaussian network model, the flexibility-rigidity model, and other computational models have been proposed for flexibility analysis by shedding light on biomolecular inner topological structures. Recently, a topology-based machine learning model was proposed; using features from persistent homology, it achieves a remarkably high Pearson correlation coefficient (PCC) in protein B-factor prediction. Motivated by its success, we propose weighted-persistent-homology (WPH)-based machine learning (WPHML) models for RNA flexibility analysis. WPH is a newly proposed model that incorporates physical, chemical, and biological information into topological measurements using a weight function. In particular, we use local persistent homology (LPH) to focus on the topological information of local regions. Our WPHML model is validated on a well-established RNA dataset, and numerical experiments show that it can achieve a PCC of up to 0.5822. Comparison with previous sequence-information-based learning models shows that our current model achieves a consistent improvement in performance of at least 10%.
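The topological ingredient here, 0-dimensional persistent homology, can be computed with nothing more than union-find: connected components of a Vietoris-Rips filtration are born at radius 0 and die when clusters merge. A self-contained sketch of plain 0-dimensional persistence (not the paper's weighted or local variants, and the point cloud is made up):

```python
# 0-dimensional persistent homology via union-find: process edges of the
# point cloud in order of length; each union records a component death.
import numpy as np

def zero_dim_persistence(points):
    n = len(points)
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path compression
            a = parent[a]
        return a
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    edges = sorted((d[i, j], i, j) for i in range(n) for j in range(i + 1, n))
    deaths = []
    for dist, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(dist)   # a component dies (merges) at this radius
    return deaths                  # all 0-dim bars are born at radius 0

pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 0.0], [5.1, 0.1]])
bars = zero_dim_persistence(pts)
print("death times:", [round(b, 2) for b in bars])
```

The long bar (the two tight pairs merging only at radius ~4.9) is exactly the kind of multiscale feature fed to the downstream learner; WPH would additionally weight each point's contribution by physical or chemical information.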


Subject(s)
RNA/chemistry , Algorithms , Elasticity , Machine Learning , Normal Distribution , Nucleic Acid Conformation
5.
Risk Anal ; 37(8): 1532-1549, 2017 08.
Article in English | MEDLINE | ID: mdl-28370082

ABSTRACT

Investors interested in the global financial market must analyze financial securities internationally. Making an optimal global investment decision involves processing a huge amount of data for a high-dimensional portfolio. This article investigates the big data challenges of two mean-variance optimal portfolios: continuous-time precommitment and constant-rebalancing strategies. We show that both optimized portfolios implemented with the traditional sample estimates converge to the worst performing portfolio when the portfolio size becomes large. The crux of the problem is the estimation error accumulated from the huge dimension of stock data. We then propose a linear programming optimal (LPO) portfolio framework, which applies a constrained ℓ1 minimization to the theoretical optimal control to mitigate the risk associated with the dimensionality issue. The resulting portfolio becomes a sparse portfolio that selects stocks with a data-driven procedure and hence offers a stable mean-variance portfolio in practice. When the number of observations becomes large, the LPO portfolio converges to the oracle optimal portfolio, which is free of estimation error, even though the number of stocks grows faster than the number of observations. Our numerical and empirical studies demonstrate the superiority of the proposed approach.
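The sparsity mechanism can be illustrated by casting asset selection as an l1-penalized regression, a common reformulation in which a Lasso penalty zeroes out most holdings. This sketch is not the paper's LPO algorithm, and the returns, target, and penalty level are all synthetic:

```python
# Hypothetical sketch of a sparse portfolio via l1 regularization:
# regress a constant target return on asset returns; the Lasso penalty
# keeps only a few assets, stabilizing the high-dimensional estimate.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n_obs, n_assets = 250, 100            # fewer observations than one might like
returns = rng.normal(0.0004, 0.01, size=(n_obs, n_assets))

target = np.ones(n_obs)               # regression form of the weight problem
model = Lasso(alpha=1e-3, fit_intercept=False).fit(returns, target)
w = model.coef_
held = np.flatnonzero(w)
print(f"{held.size} of {n_assets} assets held")
```

The data-driven zeroing of most weights is the point: the surviving sparse portfolio avoids accumulating estimation error across hundreds of noisy dimensions.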
