A hybrid MPI-OpenMP scheme for scalable parallel pseudospectral computations for fluid turbulence
- Authors
- Mininni, Pablo Daniel; Rosenberg, Duane; Reddy, Raghu; Pouquet, Annick
- Year of publication
- 2011
- Language
- English
- Resource type
- article
- Status
- published version
- Description
- A hybrid scheme that utilizes MPI for distributed memory parallelism and OpenMP for shared memory parallelism is presented. The work is motivated by the desire to achieve exceptionally high Reynolds numbers in pseudospectral computations of fluid turbulence on emerging petascale, high core-count, massively parallel processing systems. The hybrid implementation derives from and augments a well-tested scalable MPI-parallelized pseudospectral code. The hybrid paradigm leads to a new picture for the domain decomposition of the pseudospectral grids, which is helpful in understanding, among other things, the 3D transpose of the global data that is necessary for the parallel fast Fourier transforms that are the central component of the numerical discretizations. Details of the hybrid implementation are provided, and performance tests illustrate the utility of the method. It is shown that the hybrid scheme achieves good scalability up to ∼20,000 compute cores with a maximum efficiency of 89%, and a mean of 79%. Data are presented that help guide the choice of the optimal number of MPI tasks and OpenMP threads in order to maximize code performance on two different platforms. © 2011 Elsevier B.V. All rights reserved.
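The abstract above outlines the two levels of parallelism and the global transpose needed by the distributed FFT. As a rough illustration only (not the authors' implementation), the following C sketch shows the generic hybrid pattern: each MPI task owns a slab of planes of the global grid, OpenMP threads share the loop over the local planes, and an MPI_Alltoall performs the block exchange that underlies a parallel 3D transpose. The grid size N, the placeholder per-point work, and the assumption that N is divisible by the number of tasks are illustrative choices, not values taken from the paper.

```c
/* Minimal hybrid MPI+OpenMP sketch: slab decomposition with a global
 * transpose, in the spirit of the scheme described above. All sizes and
 * the per-point "work" are placeholders, not the paper's configuration. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define N 64  /* global grid is N^3; assumed divisible by the number of MPI tasks */

int main(int argc, char **argv) {
    int provided, rank, ntasks;

    /* Request thread support: OpenMP regions run between MPI calls,
       and only the master thread communicates (FUNNELED). */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

    int nloc = N / ntasks;                  /* x-y planes owned by this task (slab decomposition) */
    int sendcount = nloc * N * N / ntasks;  /* elements exchanged with each task in the transpose */

    double *u  = malloc((size_t)nloc * N * N * sizeof(double));
    double *ut = malloc((size_t)nloc * N * N * sizeof(double));

    /* Shared-memory level: OpenMP threads split the loop over the local planes.
       The assignment below is a stand-in for real-space computations. */
    #pragma omp parallel for
    for (int k = 0; k < nloc; k++) {
        for (int ij = 0; ij < N * N; ij++) {
            u[(size_t)k * N * N + ij] = (double)(rank * nloc + k);
        }
    }

    /* Distributed-memory level: block exchange of the data across all tasks,
       as required before transforming along the distributed direction. */
    MPI_Alltoall(u, sendcount, MPI_DOUBLE, ut, sendcount, MPI_DOUBLE, MPI_COMM_WORLD);

    if (rank == 0) {
        printf("transpose exchange done on %d MPI tasks x %d OpenMP threads\n",
               ntasks, omp_get_max_threads());
    }

    free(u);
    free(ut);
    MPI_Finalize();
    return 0;
}
```

In a full pseudospectral solver the local FFTs along the non-distributed directions would also be threaded, and the exchanged blocks would still require a local reordering to complete the transpose; both steps are omitted here for brevity.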
Affiliation: Mininni, Pablo Daniel. Consejo Nacional de Investigaciones Científicas y Técnicas. Oficina de Coordinación Administrativa Ciudad Universitaria. Instituto de Física de Buenos Aires. Universidad de Buenos Aires. Facultad de Ciencias Exactas y Naturales. Instituto de Física de Buenos Aires; Argentina. National Center for Atmospheric Research; United States
Affiliation: Rosenberg, Duane. National Center for Atmospheric Research; United States
Affiliation: Reddy, Raghu. Pittsburgh Supercomputing Center; United States
Affiliation: Pouquet, Annick. National Center for Atmospheric Research; United States
- Subjects
- Computational Fluids; MPI; Numerical Simulation; OpenMP; Parallel Scalability
- Accessibility level
- open access
- Terms of use
- https://creativecommons.org/licenses/by-nc-sa/2.5/ar/
- Repository
- CONICET Digital (CONICET)
- Institution
- Consejo Nacional de Investigaciones Científicas y Técnicas
- OAI identifier
- oai:ri.conicet.gov.ar:11336/56975
- Publisher
- Elsevier Science
- Citation
- Mininni, Pablo Daniel; Rosenberg, Duane; Reddy, Raghu; Pouquet, Annick; A hybrid MPI-OpenMP scheme for scalable parallel pseudospectral computations for fluid turbulence; Elsevier Science; Parallel Computing; 37; 6-7; 6-2011; 316-326
- ISSN
- 0167-8191
- DOI
- 10.1016/j.parco.2011.05.004
- Handle
- http://hdl.handle.net/11336/56975
- Publisher URL
- https://www.sciencedirect.com/science/article/pii/S0167819111000512