High-Performance Hybrid-Global-Deflated-Local Optimization with Applications to Active Learning.
Noack, Marcus Michael; Perryman, David; Krishnan, Harinarayan; Zwart, Petrus H.
Affiliation
  • Noack MM; The Center for Advanced Mathematics for Energy Research Applications (CAMERA), Lawrence Berkeley National Laboratory, Berkeley, CA, USA.
  • Perryman D; Physics Department, The University of Tennessee at Knoxville, Knoxville, Tennessee, USA.
  • Krishnan H; The Center for Advanced Mathematics for Energy Research Applications (CAMERA), Lawrence Berkeley National Laboratory, Berkeley, CA, USA.
  • Zwart PH; The Center for Advanced Mathematics for Energy Research Applications (CAMERA), Lawrence Berkeley National Laboratory, Berkeley, CA, USA.
Article in English | MEDLINE | ID: mdl-38947249
ABSTRACT
Mathematical optimization lies at the core of many science and industry applications. One important issue with many current optimization strategies is the well-known trade-off between the number of function evaluations and the probability of finding the global, or at least sufficiently high-quality local, optima. In machine learning (ML), and by extension in active learning (for instance, for autonomous experimentation), mathematical optimization is often used to find the underlying uncertain surrogate model from which subsequent decisions are made; ML therefore relies on high-quality optima to obtain the most accurate models. Active learning often has the added complexity of missing offline training data, so training has to be conducted during data collection, which can stall the acquisition if standard methods are used. In this work, we highlight recent efforts to create a high-performance hybrid optimization algorithm (HGDL) that combines derivative-free global optimization strategies with local, derivative-based optimization, ultimately yielding an ordered list of unique local optima. Redundancies are avoided by deflating the objective function around previously encountered optima. HGDL is designed to take full advantage of parallelism by having the most computationally expensive processes, the local first- and second-order-derivative-based optimizations, run in parallel on separate compute nodes in separate processes. In addition, the algorithm runs asynchronously: as soon as the first solution is found, it can be used while the algorithm continues to find more solutions. We apply the proposed optimization and training strategy to Gaussian-process-driven stochastic function approximation and active learning.
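To make the hybrid "global + deflated local" idea above concrete, the following is a minimal, self-contained NumPy sketch under stated assumptions: the test objective, the bump-style deflation operator, and every function name and parameter (objective, deflation, deflated_newton, hgdl_sketch, radius, shift) are illustrative choices, not the published HGDL implementation or its API, and the sketch runs sequentially, whereas HGDL runs its derivative-based local optimizations asynchronously in parallel processes on separate compute nodes.

```python
# Minimal sketch of the hybrid global + deflated-local idea using NumPy only.
# All names, the deflation form, and the test function are assumptions; this
# is NOT the published HGDL code, and it runs sequentially, not in parallel.
import numpy as np

def objective(x):
    # Simple separable multimodal test function (assumed example).
    return float(np.sum(x**2) + 2.0 * np.sum(np.sin(3.0 * x)))

def gradient(x):
    return 2.0 * x + 6.0 * np.cos(3.0 * x)

def hessian(x):
    return np.diag(2.0 - 18.0 * np.sin(3.0 * x))

def deflation(x, found, radius=1.0, shift=1.0):
    # Scalar deflation factor m(x) and its gradient.  m(x) blows up near
    # previously found optima, so root-finding on m(x) * grad f(x) = 0 cannot
    # reconverge to them, while new stationary points are preserved.
    m, dm = 1.0, np.zeros_like(x)
    for x_star in found:
        diff = x - x_star
        d2 = max(float(diff @ diff), 1e-12)
        m_i = radius / d2 + shift
        dm = dm * m_i + m * (-2.0 * radius / d2**2) * diff
        m *= m_i
    return m, dm

def deflated_newton(x0, found, maxiter=100, tol=1e-8, damping=1e-6):
    # Local stage: damped Newton iteration on the deflated gradient, a
    # stand-in for the first/second-order local optimizations HGDL runs
    # in parallel in separate processes.
    x = x0.copy()
    for _ in range(maxiter):
        g = gradient(x)
        m, dm = deflation(x, found)
        G = m * g                                  # deflated system
        if np.linalg.norm(G) < tol:
            break
        J = m * hessian(x) + np.outer(g, dm)       # Jacobian of G
        J += damping * np.eye(len(x))              # regularize
        step = np.linalg.solve(J, G)
        x -= np.clip(step, -0.5, 0.5)              # damp large steps
    return x

def hgdl_sketch(dim=2, n_starts=30, bounds=(-3.0, 3.0), seed=0):
    # "Global" stage reduced to random restarts; HGDL instead uses
    # derivative-free global strategies to propose starting points.
    rng = np.random.default_rng(seed)
    found = []
    for _ in range(n_starts):
        x0 = rng.uniform(bounds[0], bounds[1], size=dim)
        x_new = deflated_newton(x0, found)
        in_bounds = np.all(x_new >= bounds[0]) and np.all(x_new <= bounds[1])
        is_new = all(np.linalg.norm(x_new - x) > 1e-2 for x in found)
        if in_bounds and is_new:
            found.append(x_new)
    # Ordered list of unique stationary points, best objective value first.
    return sorted(found, key=objective)

if __name__ == "__main__":
    for x in hgdl_sketch():
        print(np.round(x, 4), round(objective(x), 4))
```

The point of the deflation step in this sketch is that the local stage cannot waste iterations reconverging to optima that were already discovered, which is what allows the abstract's ordered list of unique local optima to accumulate; in HGDL proper, this list is built asynchronously, so the best solution found so far can already be used while further local optimizations are still running.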
Keywords

Full text: 1 Collection: 01-international Database: MEDLINE Language: English Journal: Annu Workshop Extrem Scale Exp Loop Comput Year: 2021 Document type: Article Country of affiliation: United States Country of publication: United States
