A parallel approach for backpropagation learning of neural networks

Authors
Crespo, María Liz; Piccoli, María Fabiana; Printista, Alicia Marcela; Gallard, Raúl Hector
Year of publication
1997
Language
English
Resource type
Conference paper
Status
Published version
Description
Learning algorithms for neural networks involve CPU-intensive processing, and consequently much effort has been made to develop parallel implementations aimed at reducing learning time. This work briefly describes parallel schemes for a backpropagation algorithm and proposes a distributed system architecture for parallel training with a pattern-partitioning scheme. Under this approach, weight changes are computed concurrently, exchanged between system components, and adjusted accordingly until the whole parallel learning process is completed. Some comparative results are also shown.
Track: Distributed and parallel processing. Signal processing
Red de Universidades con Carreras en Informática (RedUNCI)
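The pattern-partitioning approach summarized in the abstract can be illustrated with a minimal sketch: each worker holds a disjoint subset of the training patterns, computes weight deltas locally over its subset, and the exchanged deltas are summed into a single global update per epoch (equivalent to full-batch backpropagation). The single-unit network, toy data, and learning rate below are illustrative assumptions, not details from the paper; worker concurrency is simulated sequentially.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def local_deltas(weights, patterns, lr=0.5):
    """Weight deltas computed by one worker over its pattern subset
    (a single sigmoid unit, for illustration)."""
    deltas = [0.0] * len(weights)
    for inputs, target in patterns:
        out = sigmoid(sum(w * x for w, x in zip(weights, inputs)))
        err = (target - out) * out * (1.0 - out)  # delta-rule error term
        for i, x in enumerate(inputs):
            deltas[i] += lr * err * x
    return deltas

def parallel_epoch(weights, partitions):
    """One epoch: workers compute deltas on their partitions (concurrently
    in the real scheme), then the exchanged deltas are summed and applied."""
    all_deltas = [local_deltas(weights, part) for part in partitions]
    return [w + sum(d[i] for d in all_deltas) for i, w in enumerate(weights)]

# Toy AND-function data (first input is a bias term), split across two workers.
data = [([1, 0, 0], 0), ([1, 0, 1], 0), ([1, 1, 0], 0), ([1, 1, 1], 1)]
partitions = [data[:2], data[2:]]
w = [0.0, 0.0, 0.0]
for _ in range(2000):
    w = parallel_epoch(w, partitions)
```

Because the partial deltas are simply summed, one partitioned epoch matches one sequential full-batch epoch; speedup in the distributed setting comes from computing the partial deltas concurrently, at the cost of exchanging them each epoch.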
Subject
Ciencias Informáticas
Neural networks
parallelised backpropagation
partitioning schemes
pattern partitioning
system architecture
Architectures
Parallel
Neural nets
Distributed
Accessibility level
Open access
Terms of use
http://creativecommons.org/licenses/by-nc-sa/2.5/ar/
Repository
SEDICI (UNLP)
Institution
Universidad Nacional de La Plata
OAI Identifier
oai:sedici.unlp.edu.ar:10915/23892
