1.
J Chem Theory Comput ; 19(1): 25-32, 2023 Jan 10.
Article in English | MEDLINE | ID: mdl-36508260

ABSTRACT

We demonstrate the use of Google's cloud-based Tensor Processing Units (TPUs) to accelerate and scale up conventional (cubic-scaling) density functional theory (DFT) calculations. Utilizing 512 TPU cores, we accomplish the largest such DFT computation to date, with 247,848 orbitals, corresponding to a cluster of 10,327 water molecules with 103,270 electrons, all treated explicitly. Our work thus paves the way toward accessible and systematic use of conventional DFT, free of any system-specific constraints, at unprecedented scales.
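
A minimal JAX sketch of the cubic-scaling kernel such a calculation accelerates: diagonalizing the Kohn-Sham matrix and forming the density matrix, the O(N^3) dense linear algebra that maps onto TPU matrix units. The function and parameter names (density_matrix, n_occ) and the random stand-in Hamiltonian are illustrative assumptions, not the paper's code.

    import jax
    import jax.numpy as jnp

    def density_matrix(kohn_sham_matrix, n_occ):
        # jnp.linalg.eigh is the O(N^3) bottleneck a TPU pod accelerates
        _, eigvecs = jnp.linalg.eigh(kohn_sham_matrix)
        occ = eigvecs[:, :n_occ]   # lowest n_occ orbitals are occupied
        return 2.0 * occ @ occ.T   # closed-shell density matrix, P = 2 C C^T

    # Random symmetric matrix as an illustrative stand-in for a real
    # Kohn-Sham matrix; float32 matches the TPU-native precision
    key = jax.random.PRNGKey(0)
    h = jax.random.normal(key, (512, 512), dtype=jnp.float32)
    h = 0.5 * (h + h.T)
    p = jax.jit(density_matrix, static_argnums=1)(h, 100)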

2.
Proc Natl Acad Sci U S A ; 119(33): e2122762119, 2022 Aug 16.
Article in English | MEDLINE | ID: mdl-35939669

ABSTRACT

We have repurposed Google tensor processing units (TPUs), application-specific chips developed for machine learning, into large-scale dense linear algebra supercomputers. The TPUs' fast intercore interconnects (ICIs), physically two-dimensional network topology, and high-bandwidth memory (HBM) permit distributed matrix multiplication algorithms to rapidly become computationally bound. In this regime, the matrix-multiply units (MXUs) dominate the runtime, yielding impressive scaling, performance, and raw size: Operating in float32 precision, a full 2,048-core pod of third-generation TPUs can multiply two matrices with linear size N = 2^20 = 1,048,576 in about 2 min. Via curated algorithms emphasizing large, single-core matrix multiplications, other tasks in dense linear algebra can similarly scale. As examples, we present 1) QR decomposition; 2) resolution of linear systems; and 3) the computation of matrix functions by polynomial iteration, demonstrated by the matrix polar factorization.
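
As a concrete instance of "matrix functions by polynomial iteration," here is a hedged JAX sketch of the classical Newton-Schulz iteration for the polar factor of a matrix. This is the textbook iteration, not necessarily the curated variant the paper benchmarks; the crude Frobenius-norm prescaling (an assumption) merely keeps the singular values inside the convergence region, below sqrt(3). Its inner loop consists entirely of matrix multiplications, which is what makes this family of algorithms MXU-friendly.

    import jax.numpy as jnp

    def polar_newton_schulz(a, steps=30):
        # Prescale so all singular values lie in (0, sqrt(3)), where the
        # iteration X_{k+1} = (3 X_k - X_k X_k^T X_k) / 2 converges to
        # the orthogonal polar factor U of A = U H
        x = a / jnp.linalg.norm(a)
        for _ in range(steps):
            x = 1.5 * x - 0.5 * (x @ x.T @ x)  # matrix multiplies only
        return x

    a = jnp.array([[4.0, 1.0], [2.0, 3.0]], dtype=jnp.float32)
    u = polar_newton_schulz(a)   # approximately the orthogonal factor U
    h = u.T @ a                  # symmetric positive-semidefinite factor H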
