Towards parallel solution of continuous problems by means of a general finite/spectral–element oriented C/C++ framework
- Authors
- Otero, Alejandro; Quinteros, Javier
- Year of publication
- 2010
- Language
- English
- Resource type
- conference paper
- Status
- published version
- Description
- In this work, we present the design and implementation of a highly modular and flexible software framework for building numerical models based on the finite element method (FEM), together with its extension to handle distributed problems. This work improves the current implementation by adding parallel computation capabilities through the substructure technique, used to solve FEM problems on clusters of computers via the MPI protocol. We considered the solution of a general Poisson problem as a test case and conducted experiments to evaluate the scaling capabilities of our code. Conclusions are drawn with a focus on future lines of development.
- Publisher
- Sociedad Argentina de Informática e Investigación Operativa
- Subject
- Ciencias Informáticas; Finite element method (FEM); MPI protocol
- Access level
- open access
- Terms of use
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Repository
- SEDICI (UNLP)
- Institution
- Universidad Nacional de La Plata
- OAI Identifier
- oai:sedici.unlp.edu.ar:10915/152634
- ISSN
- 1851-9326
- Pages
- 3195-3210
- Alternative URL
- http://39jaiio.sadio.org.ar/sites/default/files/39jaiio-hpc-02.pdf
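
The abstract above describes parallelizing FEM computations with the substructure technique on a cluster of computers via MPI, using a Poisson problem as the test case. The following is a minimal, self-contained C++/MPI sketch of that general idea for a 1-D Poisson model problem: each rank statically condenses the interior unknowns of its subdomain, the small interface (Schur complement) system S = A_ΓΓ − A_ΓI A_II⁻¹ A_IΓ is assembled with an all-reduce and solved redundantly, and the interiors are then recovered locally. This is only an illustration under simplifying assumptions (one subdomain per rank, linear elements, lumped load vector); it is not code from, and does not reflect the structure of, the authors' framework, and all names in it are hypothetical.

```cpp
// Minimal illustrative sketch (not the authors' framework): 1-D Poisson problem
// -u'' = f on (0,1) with u(0) = u(1) = 0, linear finite elements, solved by
// non-overlapping substructuring (static condensation / Schur complement),
// one subdomain per MPI rank.  Build e.g.:  mpicxx -O2 -o schur1d schur1d.cpp
#include <mpi.h>
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Solve a symmetric tridiagonal system (diagonal d, off-diagonal e) by the
// Thomas algorithm; arguments are copied so elimination can work in place.
static std::vector<double> thomas(std::vector<double> d, std::vector<double> e,
                                  std::vector<double> b) {
    const int n = static_cast<int>(d.size());
    for (int i = 1; i < n; ++i) {            // forward elimination
        const double w = e[i - 1] / d[i - 1];
        d[i] -= w * e[i - 1];
        b[i] -= w * b[i - 1];
    }
    std::vector<double> x(n);
    x[n - 1] = b[n - 1] / d[n - 1];
    for (int i = n - 2; i >= 0; --i)         // back substitution
        x[i] = (b[i] - e[i] * x[i + 1]) / d[i];
    return x;
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, nproc = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);

    const int m = 50;                         // interior nodes per subdomain
    const int nIface = nproc - 1;             // interface nodes between subdomains
    const double h = 1.0 / (nproc * (m + 1)); // uniform mesh size
    const double PI = std::acos(-1.0);
    auto f  = [&](double x) { return PI * PI * std::sin(PI * x); }; // manufactured rhs
    auto xg = [&](int gi) { return gi * h; }; // coordinate of global node gi

    // Local interior stiffness block A_II: tridiagonal, diag 2/h, off-diag -1/h.
    const std::vector<double> d(m, 2.0 / h), e(m - 1, -1.0 / h);
    const int first = rank * (m + 1) + 1;     // first interior global node of this rank
    std::vector<double> bI(m), eL(m, 0.0), eR(m, 0.0);
    for (int i = 0; i < m; ++i) bI[i] = h * f(xg(first + i)); // lumped load vector
    eL[0] = 1.0; eR[m - 1] = 1.0;

    // Static condensation needs A_II^{-1} applied to the load and to the two
    // columns coupling the interior with the left/right interface nodes.
    const std::vector<double> w  = thomas(d, e, bI);
    const std::vector<double> zL = thomas(d, e, eL);
    const std::vector<double> zR = thomas(d, e, eR);

    // This subdomain's contribution to the (tridiagonal) Schur complement system.
    std::vector<double> Sd(nIface, 0.0), Se(std::max(nIface - 1, 0), 0.0), g(nIface, 0.0);
    const int L = rank - 1, R = rank;         // interface indices to my left/right
    if (L >= 0) {
        Sd[L] += 1.0 / h - zL[0] / (h * h);
        g[L]  += 0.5 * h * f(xg(first - 1)) + w[0] / h;
    }
    if (R < nIface) {
        Sd[R] += 1.0 / h - zR[m - 1] / (h * h);
        g[R]  += 0.5 * h * f(xg(first + m)) + w[m - 1] / h;
    }
    if (L >= 0 && R < nIface) Se[L] -= zR[0] / (h * h);

    // Sum contributions over subdomains; every rank gets the full interface system.
    if (nIface > 0) {
        MPI_Allreduce(MPI_IN_PLACE, Sd.data(), nIface, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
        MPI_Allreduce(MPI_IN_PLACE, g.data(),  nIface, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
        if (nIface > 1)
            MPI_Allreduce(MPI_IN_PLACE, Se.data(), nIface - 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    }

    // Solve the small interface system redundantly, then recover the interior:
    // u_I = w + (u_left / h) zL + (u_right / h) zR.
    const std::vector<double> uG = nIface > 0 ? thomas(Sd, Se, g) : std::vector<double>();
    const double uLeft  = (L >= 0) ? uG[L] : 0.0;     // Dirichlet ends are zero
    const double uRight = (R < nIface) ? uG[R] : 0.0;
    double err = 0.0;                                 // compare with u = sin(pi x)
    for (int i = 0; i < m; ++i) {
        const double ui = w[i] + uLeft * zL[i] / h + uRight * zR[i] / h;
        err = std::max(err, std::fabs(ui - std::sin(PI * xg(first + i))));
    }
    double errMax = 0.0;
    MPI_Reduce(&err, &errMax, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
    if (rank == 0) std::printf("ranks=%d  max nodal error=%.3e\n", nproc, errMax);

    MPI_Finalize();
    return 0;
}
```

The design point this sketch illustrates is the one that motivates substructuring on clusters: the interior solves are independent per subdomain, and communication is confined to the small interface system, so local work dominates as the subdomain size grows.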