Cooperation in the iterated prisoner's dilemma is learned by operant conditioning mechanisms
- Authors
- Gutnisky, D. A.; Zanutto, Bonifacio Silvano
- Publication year
- 2004
- Language
- English
- Resource type
- article
- Status
- published version
- Description
The prisoner's dilemma (PD) is the leading metaphor for the evolution of cooperative behavior in populations of selfish agents. Although cooperation in the iterated prisoner's dilemma (IPD) has been studied for over twenty years, most of this research has focused on strategies that involve nonlearned behavior. Another approach is to suppose that players' selection of the preferred reply might be enforced in the same way as feeding animals track the best way to feed in changing, nonstationary environments. Learning mechanisms such as operant conditioning enable animals to acquire relevant characteristics of their environment in order to obtain reinforcements and to avoid punishments. In this study, the role of operant conditioning in the learning of cooperation was evaluated in the PD. We found that operant mechanisms allow the learning of IPD play against other strategies. When random moves are allowed in the game, the operant learning model showed low sensitivity to this noise. On the basis of this evidence, it is suggested that operant learning might be involved in reciprocal altruism.
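As a rough illustration of the kind of mechanism the abstract describes (and not the authors' neural-network model), the Python sketch below has an agent choose between cooperation and defection according to reward-driven value estimates while playing the IPD against a tit-for-tat opponent. The payoff matrix, learning rate, and exploration scheme are assumptions made only for this example.

import random

# Minimal illustration (not the model from the paper) of reward-driven,
# operant-style action selection in the iterated prisoner's dilemma.
# Standard IPD payoffs for the (my_move, opponent_move) pair: T=5, R=3, P=1, S=0.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}


def tit_for_tat(history):
    """Cooperate on the first move, then repeat the learner's previous move."""
    return 'C' if not history else history[-1][0]


def choose_action(values, epsilon=0.1):
    """Pick the action with the higher reinforcement estimate, exploring occasionally."""
    if random.random() < epsilon:
        return random.choice(['C', 'D'])
    return max(values, key=values.get)


def play(rounds=500, alpha=0.1):
    values = {'C': 0.0, 'D': 0.0}    # learned "strength" of each response
    history = []                     # list of (learner_move, opponent_move)
    total_reward = 0
    for _ in range(rounds):
        learner_move = choose_action(values)
        opponent_move = tit_for_tat(history)
        reward = PAYOFF[(learner_move, opponent_move)]
        # Operant-style update: the emitted response is strengthened (or weakened)
        # in proportion to the reinforcement it produced.
        values[learner_move] += alpha * (reward - values[learner_move])
        history.append((learner_move, opponent_move))
        total_reward += reward
    return values, total_reward


if __name__ == '__main__':
    learned_values, total = play()
    print('learned response values:', learned_values)
    print('total payoff over 500 rounds:', total)

Against tit-for-tat, sustained cooperation earns 3 points per round while mutual defection earns only 1, so a reward-driven update of this kind tends to strengthen the cooperative response; this is the intuition, in simplified form, behind evaluating operant mechanisms in the IPD.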
Fil: Gutnisky, D. A.. Universidad de Buenos Aires. Facultad de Ingenieria. Instituto de Ingeniería Biomédica; Argentina. Consejo Nacional de Investigaciones Científicas y Técnicas. Instituto de Biología y Medicina Experimental. Fundación de Instituto de Biología y Medicina Experimental. Instituto de Biología y Medicina Experimental; Argentina
Fil: Zanutto, Bonifacio Silvano. Consejo Nacional de Investigaciones Científicas y Técnicas. Instituto de Biología y Medicina Experimental. Fundación de Instituto de Biología y Medicina Experimental. Instituto de Biología y Medicina Experimental; Argentina. Universidad de Buenos Aires. Facultad de Ingenieria. Instituto de Ingeniería Biomédica; Argentina
- Subject
OPERANT LEARNING
NEURAL NETWORKS
PRISONERS DILEMMA
RECIPROCAL ALTRUISM
NEUROLOGY NEUROSCIENCE COMPUTATIONAL
COOPERATIVE BEHAVIOR
- Accessibility level
- open access
- Terms of use
- https://creativecommons.org/licenses/by-nc-sa/2.5/ar/
- Repository
- Institution
- Consejo Nacional de Investigaciones Científicas y Técnicas
- OAI Identifier
- oai:ri.conicet.gov.ar:11336/29110
Full record metadata
- Publisher
- MIT Press
- Journal
- Artificial Life; volume 10; issue 4; 2004; pages 433-461
- ISSN
- 1064-5462
- Citation
- Gutnisky, D. A.; Zanutto, Bonifacio Silvano; Cooperation in the iterated prisoner's dilemma is learned by operant conditioning mechanisms; MIT Press; Artificial Life; 10; 4; 2004; 433-461
- Handle
- http://hdl.handle.net/11336/29110
- DOI
- 10.1162/1064546041766479
- PMID
- 15479547
- Alternative URLs
- http://www.mitpressjournals.org/doi/abs/10.1162/1064546041766479
- https://dl.acm.org/citation.cfm?id=1032178
- FORD subject
- https://purl.org/becyt/ford/3.1
- https://purl.org/becyt/ford/3
- Format
- application/pdf
- Repository
- CONICET Digital (CONICET) - Consejo Nacional de Investigaciones Científicas y Técnicas
- Repository contact
- dasensio@conicet.gov.ar; lcarlino@conicet.gov.ar