Learning obstacle avoidance with an operant behavioral model

Authors
Gutnisky, D. A.; Zanutto, Bonifacio Silvano
Year of publication
2004
Language
English
Resource type
article
Status
published version
Description
Artificial intelligence researchers have been attracted by the idea of having robots learn how to accomplish a task, rather than being told explicitly. Reinforcement learning has been proposed as an appealing framework for controlling mobile agents. Robot learning research, as well as research in biological systems, faces many similar problems in achieving high flexibility across a variety of tasks. In this work, the control of a vehicle in an avoidance task by a previously developed operant learning model (a form of animal learning) is studied. An environment is simulated in which a mobile robot with proximity sensors must minimize the punishment it receives for colliding with obstacles. The results were compared with the Q-Learning algorithm, and the proposed model showed better performance. In this way, a new artificial intelligence agent inspired by research in neurobiology, psychology, and ethology is proposed.
Fil: Gutnisky, D. A.. Universidad de Buenos Aires. Facultad de Ingeniería. Instituto de Ingeniería Biomédica; Argentina
Fil: Zanutto, Bonifacio Silvano. Consejo Nacional de Investigaciones Científicas y Técnicas. Instituto de Biología y Medicina Experimental. Fundación de Instituto de Biología y Medicina Experimental. Instituto de Biología y Medicina Experimental; Argentina. Universidad de Buenos Aires. Facultad de Ingeniería. Instituto de Ingeniería Biomédica; Argentina
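The baseline named in the description above, Q-Learning, can be sketched in a few lines. The following is a minimal, illustrative tabular Q-learning example in which an agent on a small grid learns to avoid punishment for collisions; the grid, reward values, and hyperparameters are assumptions for illustration, not the paper's actual simulation setup.

```python
import random

# Illustrative grid: '.' is free space, '#' is an obstacle.
GRID = [
    "....#",
    ".#...",
    "...#.",
    ".....",
]
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    """Attempt a move; a collision with a wall or obstacle is punished
    (-1 reward) and leaves the agent in place; free moves cost nothing."""
    r, c = state
    nr, nc = r + action[0], c + action[1]
    if not (0 <= nr < len(GRID) and 0 <= nc < len(GRID[0])) or GRID[nr][nc] == "#":
        return state, -1.0
    return (nr, nc), 0.0

def train(episodes=2000, steps=30, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = {}  # (state, action_index) -> estimated value
    for _ in range(episodes):
        state = (0, 0)
        for _ in range(steps):
            if rng.random() < eps:
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: q.get((state, i), 0.0))
            nxt, reward = step(state, ACTIONS[a])
            best_next = max(q.get((nxt, i), 0.0) for i in range(len(ACTIONS)))
            old = q.get((state, a), 0.0)
            # Standard Q-learning update toward the bootstrapped target.
            q[(state, a)] = old + alpha * (reward + gamma * best_next - old)
            state = nxt
    return q

q = train()
```

After training, colliding actions carry negative Q-values, so the greedy policy steers away from obstacles; the paper's contribution is that its operant learning model outperformed this kind of baseline on the avoidance task.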
Subjects
OPERANT LEARNING
NEURAL NETWORKS
REINFORCEMENT LEARNING
ARTIFICIAL NEURAL NETWORKS
BIOENGINEERING
Access level
open access
Terms of use
https://creativecommons.org/licenses/by-nc-sa/2.5/ar/
Repository
CONICET Digital (CONICET)
Institution
Consejo Nacional de Investigaciones Científicas y Técnicas
OAI identifier
oai:ri.conicet.gov.ar:11336/29109

Publisher
Massachusetts Institute of Technology
Journal
Artificial Life; vol. 10, no. 1; 2004; pp. 65-81
ISSN
1064-5462 (print); 1530-9185 (electronic)
DOI
10.1162/106454604322875913
Alternative URLs
http://www.mitpressjournals.org/doi/abs/10.1162/106454604322875913
https://dl.acm.org/citation.cfm?id=982224
Handle
http://hdl.handle.net/11336/29109