Designing a graphics processing unit accelerated petaflop capable lattice Boltzmann solver: Read aligned data layouts and asynchronous communication

Abstract
The lattice Boltzmann method is a well-established numerical approach for simulating complex fluid flows. Recently, general-purpose graphics processing units (GPUs) have become available as large-scale high-performance computing resources. We report on the design and implementation of a lattice Boltzmann solver for multi-GPU systems that achieves 1.79 PFLOPS performance on 16,384 GPUs. To reach this performance, we introduce a GPU-compatible version of the so-called bundle data layout and eliminate the halo sites in order to improve data access alignment. Furthermore, we exploit the possibility of overlapping data transfers between the host central processing unit (CPU) and the GPU device with computation on the GPU. As a benchmark case, we simulate flow in porous media and measure both strong and weak scaling performance, with the emphasis on large-scale simulations using realistic input data.
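The abstract's last design point, overlapping host-device transfers with GPU computation, is commonly realized with CUDA streams. The sketch below is a minimal illustration of that general pattern, not the solver's actual code: kernel names, array names, and sizes are hypothetical, and the MPI halo exchange between neighbouring GPUs is only indicated by comments.

```cuda
// Sketch: overlap boundary-data transfers with interior computation using
// CUDA streams. All identifiers and sizes are illustrative assumptions.
#include <cuda_runtime.h>

__global__ void lbmCollideStream(float *f, int nSites) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < nSites) {
        // ... collision and streaming for lattice site i (omitted) ...
    }
}

int main() {
    const int nInterior = 1 << 20;   // interior lattice sites (illustrative)
    const int nBoundary = 1 << 14;   // boundary lattice sites (illustrative)

    float *dInterior, *dBoundary, *hBoundary;
    cudaMalloc(&dInterior, nInterior * sizeof(float));
    cudaMalloc(&dBoundary, nBoundary * sizeof(float));
    cudaMallocHost(&hBoundary, nBoundary * sizeof(float)); // pinned host buffer for async copies

    cudaStream_t compute, comm;
    cudaStreamCreate(&compute);
    cudaStreamCreate(&comm);

    // 1) Update interior sites; they do not depend on data from other GPUs.
    lbmCollideStream<<<(nInterior + 255) / 256, 256, 0, compute>>>(dInterior, nInterior);

    // 2) Concurrently stage boundary data on the host, exchange it with
    //    neighbouring ranks (MPI calls omitted), and copy the result back.
    cudaMemcpyAsync(hBoundary, dBoundary, nBoundary * sizeof(float),
                    cudaMemcpyDeviceToHost, comm);
    // ... MPI_Isend / MPI_Irecv of hBoundary with neighbouring ranks ...
    cudaMemcpyAsync(dBoundary, hBoundary, nBoundary * sizeof(float),
                    cudaMemcpyHostToDevice, comm);

    // 3) Update boundary sites only after the exchange has completed.
    cudaStreamSynchronize(comm);
    lbmCollideStream<<<(nBoundary + 255) / 256, 256, 0, comm>>>(dBoundary, nBoundary);

    cudaDeviceSynchronize();
    cudaStreamDestroy(compute);
    cudaStreamDestroy(comm);
    cudaFreeHost(hBoundary);
    cudaFree(dBoundary);
    cudaFree(dInterior);
    return 0;
}
```

The point of the two-stream structure is that the device-to-host and host-to-device copies in the communication stream can proceed while the interior kernel runs in the compute stream, hiding much of the communication cost behind computation.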
Main Authors
Robertsén, F.; Westerholm, J.; Mattila, K.
Format
Research article
Published
2017
Publisher
Sage
The permanent address of the publication
https://urn.fi/URN:NBN:fi:jyu-201707193327
Review status
Peer reviewed
ISSN
1094-3420
DOI
https://doi.org/10.1177/1094342016658109
Language
English
Published in
International Journal of High Performance Computing Applications
Citation
  • Robertsén, F., Westerholm, J., & Mattila, K. (2017). Designing a graphics processing unit accelerated petaflop capable lattice Boltzmann solver: Read aligned data layouts and asynchronous communication. International Journal of High Performance Computing Applications, 31(3), 246-255. https://doi.org/10.1177/1094342016658109
License
Open Access
Copyright © The Author(s) 2016. This is a final draft version of an article whose final and definitive form has been published by Sage. Published in this repository with the kind permission of the publisher.
