Restructuring the Multifluid PPM Gas Dynamics Code for GPUs
Dean's Office - Faculty of Informatics
Start date: 14 August 2015
End date: 15 August 2015
Abstract:
I will describe simulations on the Blue Waters system at NCSA that involve compressible turbulent mixing of multiple fluids in two contexts: brief eruptions in stars and simplified problems related to inertial confinement fusion (ICF). The ICF simulation was run in 32-bit precision on a grid of over a trillion cells on 702,000 cores at 1.5 Pflop/s. The simulation of a hydrogen ingestion flash in a 2 solar mass star was run with 64-bit precision on just 3.6 billion cells but still 443,000 cores at 0.42 Pflop/s. These simulations represent extremes in a spectrum of parallel fluid dynamics computation: the first is an example of weak scaling and the second a case of strong scaling. They scale and perform well as a result of lessons we learned several years ago while adapting our codes to the IBM Cell processors of the Roadrunner machine at Los Alamos. Beginning with a visit to the University of Zurich last summer, I have been attempting to transfer these techniques to machines with GPU-accelerated nodes. The Cell processor techniques do not carry over without very important modifications, but they do carry over successfully. The resulting code structure is quite unusual, but it runs very well on both CPUs and GPUs. On these CFD codes, GPU nodes hold a performance advantage of 1.7 to 2.4 times, depending upon the generation of the CPU and GPU devices. I will first describe the special code and data structure we use for CPUs and then describe how it is modified to accommodate the requirements of GPUs. My collaborator, Pei-Hung Lin at Livermore, has written a tool that automatically translates the Fortran into CUDA for the GPU, so that a single source code serves both devices. We find that the changes in the Nvidia GPUs in going from the K20 to the K80 deliver very significant performance benefits and allow the GPU to maintain its advantage.
Biography:
Woodward received his B.A. in mathematics and physics in 1967 from Cornell University and his Ph.D. in physics in 1973 from the University of California at Berkeley. He worked as a physicist for Lawrence Livermore National Laboratory (LLNL) in California until 1975 and spent three years as a research associate at Leiden University Observatory in The Netherlands before returning to LLNL. In 1985, he joined the faculty at the University of Minnesota as an astronomy professor. He founded and became director of the university's Laboratory for Computational Science and Engineering in 1995.