A parallel approach for backpropagation learning of neural networks
- Authors
- Crespo, María Liz; Piccoli, María Fabiana; Printista, Alicia Marcela; Gallard, Raúl Hector
- Year of publication
- 1999
- Language
- English
- Resource type
- article
- Status
- published version
- Description
- Fast response, storage efficiency, fault tolerance and graceful degradation in the face of scarce or spurious inputs make neural networks appropriate tools for Intelligent Computer Systems. On the other hand, learning algorithms for neural networks involve CPU-intensive processing, and consequently great effort has been devoted to developing parallel implementations intended to reduce learning time. Looking at both sides of the coin, this paper first shows two alternatives to parallelise the learning process and then an application of neural networks to computing systems. On the parallel side, it presents distributed implementations that parallelise the learning process of neural networks using a pattern-partitioning approach. Under this approach, weight changes are computed concurrently, exchanged between system components and adjusted accordingly until the whole parallel learning process is completed. On the application side, some design and implementation insights for building a system where decision support for load distribution is based on a neural network device are shown. Incoming task allocation, as a previous step, is a fundamental service aimed at improving distributed system performance by facilitating further dynamic load balancing. A neural network device inserted into the kernel of a distributed system as an intelligent tool allows automatic allocation of execution requests under predefined performance criteria based on resource availability and incoming process requirements. Performance results of the parallelised approach for learning of backpropagation neural networks are shown, including a comparison of recall and generalisation abilities to support parallelism.
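The pattern-partitioning scheme described in the abstract can be illustrated with a minimal sketch (this is a hypothetical illustration, not the authors' implementation): the training patterns are split among workers, each worker computes the weight changes for its own subset, and the partial changes are then exchanged and combined into a single update. For batch gradient descent this combination is exact, since the summed partial deltas equal the delta computed over the whole pattern set. All names below (`delta_for_patterns`, `parallel_update`) are invented for the example.

```python
def delta_for_patterns(weights, patterns, lr=0.1):
    """Weight changes for one linear unit with squared error,
    summed over the given (input_vector, target) patterns."""
    delta = [0.0] * len(weights)
    for x, target in patterns:
        out = sum(w * xi for w, xi in zip(weights, x))
        err = target - out
        for i, xi in enumerate(x):
            delta[i] += lr * err * xi  # descent step for 0.5 * err**2
    return delta

def parallel_update(weights, patterns, n_workers=2, lr=0.1):
    """Each 'worker' handles a slice of the patterns; the partial
    deltas are exchanged (here: simply summed) and applied once."""
    partials = [delta_for_patterns(weights, patterns[k::n_workers], lr)
                for k in range(n_workers)]
    total = [sum(col) for col in zip(*partials)]
    return [w + d for w, d in zip(weights, total)]

patterns = [([1.0, 0.0], 1.0), ([0.0, 1.0], 0.0),
            ([1.0, 1.0], 1.0), ([0.5, 0.5], 0.5)]
w0 = [0.0, 0.0]
sequential = [w + d for w, d in zip(w0, delta_for_patterns(w0, patterns))]
parallel = parallel_update(w0, patterns, n_workers=2)
```

In a real distributed setting the summation step would be a message exchange between nodes rather than a local loop, and the workers would iterate this compute-exchange-adjust cycle until training converges, as the abstract describes.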
- Originating unit
- Facultad de Informática
- Subject
- Ciencias Informáticas; Neural nets
- Accessibility level
- open access
- Terms of use
- http://creativecommons.org/licenses/by-nc/3.0/
- Repository
- SEDICI (UNLP)
- Institution
- Universidad Nacional de La Plata
- OAI identifier
- oai:sedici.unlp.edu.ar:10915/9378
Full record metadata

- id: SEDICI_4d77abded2d33cf0b517d39ce2ec6ab7
- OAI identifier: oai:sedici.unlp.edu.ar:10915/9378
- Repository: SEDICI (UNLP), repository id 1329
- Institution: Universidad Nacional de La Plata (UNLP)
- Date: 1999-03
- Type: info:eu-repo/semantics/article, info:eu-repo/semantics/publishedVersion
- Format: application/pdf
- Language: eng
- URL: http://sedici.unlp.edu.ar/handle/10915/9378
- Alternate URL: http://journal.info.unlp.edu.ar/wp-content/uploads/2015/papers_01/a%20parallel.pdf
- ISSN: 1666-6038
- Rights: info:eu-repo/semantics/openAccess, Creative Commons Attribution-NonCommercial 3.0 Unported (CC BY-NC 3.0), http://creativecommons.org/licenses/by-nc/3.0/
- Repository contact: alira@sedici.unlp.edu.ar