Supporting nested parallelism

Authors
Gonzalez, Jesús A.; León, Coromoto; Piccoli, María Fabiana; Printista, Alicia Marcela; Roda García, José Luis; Rodríguez, Casiano; Sande Gonzalez, Francisco de
Year of publication
2000
Language
English
Resource type
Conference paper
Status
Published version
Description
Many parallel applications do not completely fit into the data parallel model. Although these applications contain data parallelism, task parallelism is needed to represent their natural computation structure or to enhance performance. Combining the ease of programming of the data parallel model with the efficiency of the task parallel model allows parallel forms to be nested, giving rise to nested parallelism. In this work, we examine the support for nested parallelism provided by two standard parallel programming platforms, HPF and MPI. Both their expressive capacity and their efficiency are compared on a Cray T3E, a distributed memory machine. Finally, the use of the methodology proposed for MPI is further discussed on two different architectures. (See the illustrative MPI sketch below.)
I Workshop de Procesamiento Distribuido y Paralelo (WPDP)
Red de Universidades con Carreras en Informática (RedUNCI)
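The abstract above describes nesting task parallelism inside a data parallel program using MPI. What follows is a minimal sketch, not taken from the paper, of one common way to express nested parallelism with standard MPI calls: a divide and conquer computation recursively splits the current processor group with MPI_Comm_split, so that each subgroup solves one subproblem in parallel. The function and the toy problem (nested_sum over an integer range) are purely illustrative assumptions.

/* Illustrative sketch of nested parallelism in MPI (not the paper's
 * implementation): a divide and conquer sum that recursively splits
 * the processor group into subgroups, one per subproblem. */
#include <mpi.h>
#include <stdio.h>

/* Returns the sum of the integers in [lo, hi) on every process of comm. */
static long nested_sum(long lo, long hi, MPI_Comm comm)
{
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    if (size == 1) {                      /* base case: sequential work */
        long s = 0;
        for (long i = lo; i < hi; i++)
            s += i;
        return s;
    }

    /* Task parallelism: split the group into two halves, one per subproblem. */
    int colour = (rank < size / 2) ? 0 : 1;
    MPI_Comm sub;
    MPI_Comm_split(comm, colour, rank, &sub);

    long mid = lo + (hi - lo) / 2;
    long half = (colour == 0) ? nested_sum(lo, mid, sub)
                              : nested_sum(mid, hi, sub);

    /* Only each subgroup's leader contributes its result, so the
     * combination over the parent communicator is not double counted. */
    int sub_rank;
    MPI_Comm_rank(sub, &sub_rank);
    MPI_Comm_free(&sub);

    long contribution = (sub_rank == 0) ? half : 0;
    long total = 0;
    MPI_Allreduce(&contribution, &total, 1, MPI_LONG, MPI_SUM, comm);
    return total;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    long total = nested_sum(0, 10000, MPI_COMM_WORLD);
    if (rank == 0)                        /* expected: 9999 * 10000 / 2 */
        printf("sum = %ld\n", total);

    MPI_Finalize();
    return 0;
}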
Subject
Computer Science
nested parallel model
divide and conquer technique
Parallel programming
Access level
Open access
Terms of use
Creative Commons Attribution-NonCommercial-ShareAlike 2.5 Argentina (CC BY-NC-SA 2.5)
http://creativecommons.org/licenses/by-nc-sa/2.5/ar/
Repository
SEDICI (UNLP)
Institution
Universidad Nacional de La Plata
OAI identifier
oai:sedici.unlp.edu.ar:10915/23361
URL
http://sedici.unlp.edu.ar/handle/10915/23361
