
High-Performance Computing for Department of Energy/Department of Defense Applications
The emphasis on using HPC as a pillar of science and engineering for national security purposes can be traced back to 1992, when the United States passed a moratorium on nuclear testing. A consequence of this was the establishment of the Stockpile Stewardship Program (SSP), which was given the task of certifying the safety and reliability of the nuclear weapons stockpile without nuclear explosives testing. The Advanced Simulation and Computing (ASC) Program is a vital element of the SSP, creating the modeling and simulation capabilities necessary to combine theory and past experiments to create future engineering designs and assess aging components in the stockpile. Sierra Mechanics is one software suite in the ASC toolset and was developed at Sandia National Laboratories (see sandia.gov). Within the Sierra Mechanics suite, the Sierra-SD (structural dynamics) module includes massively parallel capabilities for acoustics and structural acoustics in both the time and frequency domains as well as eigenvalue capabilities for mode shape and frequency calculations (Bhardwaj et al., 2002; Bunting, 2019). In Applications, we show examples of the use of these capabilities to solve acoustics problems on some of the world's largest supercomputers.
High-Performance Computing
Taking advantage of modern HPC platforms for acoustics requires an understanding of the architectures themselves to develop optimal software strategies to maximize performance. The evolution of computing platforms in the past couple of decades can be illustrated by comparing the top platforms in the late 1990s versus those of today. In 1997, the Advanced Strategic Computing Initiative (ASCI) Red machine of the DOE came online with over 9,000 processors and 1 terabyte of total memory, becoming the first supercomputer in the world to achieve the speed of 1 teraflop (i.e., 10¹² floating-point operations per second). We can compare that with the more recent DOE Summit (Oak Ridge National Laboratory, 2020) machine, which has over 200,000 CPU cores and 27,000 GPUs. Summit has a peak performance of 200 petaflops (or 200,000 teraflops), currently making it the fastest supercomputer in the world (Top500, 2019). Cloud computing platforms, such as those offered by Google and Amazon, have far slower communication between compute nodes and thus are not optimal for this kind of tightly coupled scientific computing.
Navigating this rapidly evolving hardware over the last 20 years has led to some fundamental changes in the way the software is structured.
Parallel Scalability
An important concept in HPC is the notion of scalability, which encompasses both the size of the problem that can be solved and how fast the problem can be solved. The former is typically referred to as weak scaling, and the latter as strong scaling. In cases where the goal is to solve the problem very fast, the intent is to use HPC to achieve strong scaling. In other cases, the goal may be to solve very large problems (i.e., many degrees of freedom), in which case one wants to achieve weak scaling on the HPC platform.
Strong scaling is demonstrated when doubling the amount of processing power available for a given problem cuts the solution time in half. Weak scaling represents the ability to solve very large problems with many degrees of freedom. In the case of a finite-element solution, this would imply that the model has many nodes and/or elements, which is commonly the case in acoustics applications when the domain size and/or frequency range of interest becomes large. Weak scaling is demonstrated when one simultaneously increases both the problem size and the processing power available and is able to solve the larger problem in the same amount of time. More generally, it can be stated as the ability to solve an n times larger problem using n times more compute processors in nearly constant CPU time.
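As a concrete illustration, the short Python sketch below computes parallel efficiency for both modes of scaling from hypothetical timings; the processor counts and run times are invented for illustration and are not measurements from any particular machine. An efficiency near 1.0 means the added processors are being used almost perfectly.

# Strong scaling: fixed total problem size; the ideal runtime drops
# in proportion to the processor count.
def strong_scaling_efficiency(t_base, p_base, t_p, p):
    ideal = t_base * p_base / p
    return ideal / t_p

# Weak scaling: fixed problem size per processor; the ideal runtime
# stays constant as problem size and processor count grow together.
def weak_scaling_efficiency(t_base, t_p):
    return t_base / t_p

# Hypothetical run: 1,000 s on 64 processors, 140 s on 512 processors.
print(strong_scaling_efficiency(1000.0, 64, 140.0, 512))  # ~0.89

# Hypothetical run: an 8x larger model on 8x more processors takes
# 1,100 s instead of the original 1,000 s.
print(weak_scaling_efficiency(1000.0, 1100.0))  # ~0.91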
Linear Solvers
The HPC hardware described in High-Performance Computing is only useful for computational acoustics if one also has the software to solve Eq. 2 for large dimension n. A simple example of a linear system is given by the two equations 2x + y = 4 and x + 3y = 7, where x and y are the unknowns. In this case, the solution x = 1 and y = 2 can be obtained by hand. When the dimension n of matrix A from Eq. 2 is of moderate size (less than several million), sparse direct solvers (Davis, 2006) can be used effectively to solve for the unknowns. For larger problems in computational acoustics when n exceeds several million, however, the computational resources required by a direct solver quickly become prohibitive. In the case of the Orion capsule example described in Applications, with over 2 billion unknowns, a direct solution would take on the order of months on a hypothetical CPU processor with a peak performance of 1 teraflop and enough memory for the computations.
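To see why the estimate lands at months: if one assumes the roughly O(n²) factorization flop count commonly cited for sparse direct methods on three-dimensional meshes, n = 2 × 10⁹ unknowns implies on the order of 4 × 10¹⁸ operations, or about 46 days at a sustained 1 teraflop. The Python sketch below contrasts the two solver families on a small test matrix; SciPy here is a minimal stand-in for the production solvers discussed in this article and is not part of the Sierra toolchain.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Sparse symmetric positive-definite test matrix: the 1-D Laplacian
# stencil [-1, 2, -1], a small stand-in for the much larger matrices
# arising from finite-element acoustics.
n = 10_000
A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1],
             shape=(n, n), format="csc")
b = np.ones(n)

# Sparse direct solve: factorize A, then back-substitute. For 3-D
# finite-element matrices, fill-in during factorization makes memory
# and flop costs grow much faster than n, which is what becomes
# prohibitive beyond several million unknowns.
x_direct = spla.spsolve(A, b)

# Iterative solve: conjugate gradients needs only matrix-vector
# products, so memory stays proportional to n and the work
# parallelizes naturally across processors.
x_iter, info = spla.cg(A, b)
assert info == 0  # info == 0 means the iteration converged

# Report how closely the two solutions agree.
print(np.linalg.norm(x_direct - x_iter) / np.linalg.norm(x_direct))

Production HPC codes make the same direct-versus-iterative trade-off, but replace the serial conjugate gradient shown here with preconditioned, distributed-memory iterative solvers that scale across thousands of processors.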