Exploring Flip Flop memories and beyond: training Recurrent Neural Networks with key insights

Authors
Jarne, Cecilia Gisele
Year of publication
2024
Language
English
Resource type
article
Status
published version
Description
Training neural networks to perform different tasks is relevant across various disciplines. In particular, Recurrent Neural Networks (RNNs) are of great interest in Computational Neuroscience. Open-source frameworks dedicated to Machine Learning, such as TensorFlow and Keras, have produced significant changes in the development of technologies that we currently use. This work contributes by comprehensively investigating and describing the application of RNNs for temporal processing through a study of a 3-bit Flip Flop memory implementation. We delve into the entire modeling process, encompassing equations, task parametrization, and software development. The obtained networks are meticulously analyzed to elucidate their dynamics, aided by an array of visualization and analysis tools. Moreover, the provided code is versatile enough to facilitate the modeling of diverse tasks and systems. Furthermore, we present how memory states can be efficiently stored in the vertices of a cube in the dimensionally reduced space, supplementing previous results with a distinct approach.
Affiliation: Jarne, Cecilia Gisele. Consejo Nacional de Investigaciones Científicas y Técnicas; Argentina. Universidad Nacional de Quilmes. Departamento de Ciencia y Tecnología; Argentina. Aarhus University; Denmark
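The description above outlines the workflow of the paper: parametrize the 3-bit Flip Flop task, train an RNN with TensorFlow/Keras, and inspect the hidden-state dynamics in a dimensionally reduced space. The following is a minimal illustrative sketch of that workflow, not the paper's released code; names such as make_flipflop_batch, N_REC, and PULSE_PROB, as well as the network size and training settings, are assumptions introduced here for the example.

# Sketch only: 3-bit Flip Flop task, a small Keras RNN, and a PCA-style
# projection of hidden states. All names and hyperparameters are assumed,
# not taken from the paper's code.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

N_BITS = 3         # independent flip-flop channels
T_STEPS = 200      # time steps per trial
N_REC = 100        # recurrent units (assumed network size)
PULSE_PROB = 0.05  # probability of a +/-1 pulse per channel and time step

rng = np.random.default_rng(0)

def make_flipflop_batch(batch_size):
    """Random +/-1 pulses on each channel; each target channel holds the sign
    of the most recent pulse it received (the flip-flop memory rule)."""
    pulses = rng.choice(
        [0.0, 1.0, -1.0], size=(batch_size, T_STEPS, N_BITS),
        p=[1.0 - PULSE_PROB, PULSE_PROB / 2, PULSE_PROB / 2])
    targets = np.zeros_like(pulses)
    state = np.zeros((batch_size, N_BITS))
    for t in range(T_STEPS):
        state = np.where(pulses[:, t, :] != 0, pulses[:, t, :], state)
        targets[:, t, :] = state
    return pulses.astype("float32"), targets.astype("float32")

# Vanilla tanh RNN with a linear readout at every time step.
model = tf.keras.Sequential([
    layers.Input(shape=(T_STEPS, N_BITS)),
    layers.SimpleRNN(N_REC, activation="tanh", return_sequences=True, name="rnn"),
    layers.Dense(N_BITS, activation="linear"),
])
model.compile(optimizer="adam", loss="mse")

x_train, y_train = make_flipflop_batch(512)
model.fit(x_train, y_train, epochs=20, batch_size=32, verbose=0)

# Project hidden states onto their first three principal components; in a
# trained network the 2^3 memory states are expected to cluster near the
# vertices of a cube in this reduced space.
hidden_model = tf.keras.Model(model.inputs, model.get_layer("rnn").output)
x_test, _ = make_flipflop_batch(64)
h = hidden_model.predict(x_test, verbose=0).reshape(-1, N_REC)
h = h - h.mean(axis=0)
_, _, vt = np.linalg.svd(h, full_matrices=False)
h_pc3 = h @ vt[:3].T  # (samples, 3) coordinates for a 3D scatter plot

A 3D scatter of h_pc3 is one simple way to check whether the eight memory states fall near cube vertices, in the spirit of the analysis described in the abstract.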
Subject
RNNs
Flip Flops
Access level
open access
Terms of use
https://creativecommons.org/licenses/by-nc-sa/2.5/ar/
Repository
CONICET Digital (CONICET)
Institution
Consejo Nacional de Investigaciones Científicas y Técnicas
OAI Identifier
oai:ri.conicet.gov.ar:11336/232597

FORD subject classification
https://purl.org/becyt/ford/1.3
https://purl.org/becyt/ford/1
Publisher
Frontiers Media
Publication date
2024-03
Citation
Jarne, Cecilia Gisele; Exploring Flip Flop memories and beyond: training Recurrent Neural Networks with key insights; Frontiers Media; Frontiers in Systems Neuroscience; 18; 3-2024; 1-13
ISSN
1662-5137
Identifiers
http://hdl.handle.net/11336/232597
https://www.frontiersin.org/articles/10.3389/fnsys.2024.1269190/full
DOI: 10.3389/fnsys.2024.1269190
Format
application/pdf