Neurally driven synthesis of learned, complex vocalizations
- Authors
- Arneodo, Ezequiel Matías; Chen, Shukai; Brown, Daril E.; Gilja, Vikash; Gentner, Timothy Q.
- Publication year
- 2021
- Language
- English
- Resource type
- article
- Status
- published version
- Description
- Brain machine interfaces (BMIs) hold promise to restore impaired motor function and serve as powerful tools to study learned motor skill. While limb-based motor prosthetic systems have leveraged nonhuman primates as an important animal model,1–4 speech prostheses lack a similar animal model and are more limited in terms of neural interface technology, brain coverage, and behavioral study design.5–7 Songbirds are an attractive model for learned complex vocal behavior. Birdsong shares a number of unique similarities with human speech,8–10 and its study has yielded general insight into multiple mechanisms and circuits behind learning, execution, and maintenance of vocal motor skill.11–18 In addition, the biomechanics of song production bear similarity to those of humans and some nonhuman primates.19–23 Here, we demonstrate a vocal synthesizer for birdsong, realized by mapping neural population activity recorded from electrode arrays implanted in the premotor nucleus HVC onto low-dimensional compressed representations of song, using simple computational methods that are implementable in real time. Using a generative biomechanical model of the vocal organ (syrinx) as the low-dimensional target for these mappings allows for the synthesis of vocalizations that match the bird's own song. These results provide proof of concept that high-dimensional, complex natural behaviors can be directly synthesized from ongoing neural activity. This may inspire similar approaches to prosthetics in other species by exploiting knowledge of the peripheral systems and the temporal structure of their output.
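As a minimal sketch of the kind of mapping the abstract describes (neural population activity regressed onto a low-dimensional representation of song), the toy example below fits a linear decoder from simulated spike counts to two "vocal-model parameters" by gradient descent. Every number and name here (neuron count, parameter count, noise level, the fitting method) is an illustrative assumption, not the study's actual pipeline or the syrinx model's real parameters.

```python
import random

random.seed(0)

# Toy setup: decode N_PARAMS low-dimensional vocal-model parameters
# (stand-ins for syrinx-model inputs; names and counts are assumed)
# from binned spike counts of N_NEURONS units over T time bins.
N_NEURONS, N_PARAMS, T = 8, 2, 400

# Ground-truth linear map, used only to simulate data.
W_true = [[random.gauss(0, 1) for _ in range(N_NEURONS)] for _ in range(N_PARAMS)]

# Simulated "neural activity" (rows = time bins) and noisy targets.
X = [[random.gauss(0, 1) for _ in range(N_NEURONS)] for _ in range(T)]
Y = [[sum(w * x for w, x in zip(W_true[p], X[t])) + random.gauss(0, 0.05)
      for p in range(N_PARAMS)] for t in range(T)]

# Linear decoder fit by stochastic gradient descent on squared error,
# a simple computation of the sort that can run in real time.
W = [[0.0] * N_NEURONS for _ in range(N_PARAMS)]
lr = 0.01
for _ in range(100):            # passes over the data
    for t in range(T):
        for p in range(N_PARAMS):
            err = sum(w * x for w, x in zip(W[p], X[t])) - Y[t][p]
            for i in range(N_NEURONS):
                W[p][i] -= lr * err * X[t][i]

# Mean-squared error of the fitted decoder on the training bins.
mse = sum((sum(w * x for w, x in zip(W[p], X[t])) - Y[t][p]) ** 2
          for t in range(T) for p in range(N_PARAMS)) / (T * N_PARAMS)
print(f"decoder MSE: {mse:.4f}")
```

In a synthesizer like the one described, the decoded parameter trajectory would then drive a generative biomechanical model of the syrinx to produce sound, rather than being the output itself.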
Fil: Arneodo, Ezequiel Matías. University of California; United States. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - La Plata. Instituto de Física La Plata. Universidad Nacional de La Plata. Facultad de Ciencias Exactas. Instituto de Física La Plata; Argentina
Fil: Chen, Shukai. University of California; United States
Fil: Brown, Daril E.. University of California; United States
Fil: Gilja, Vikash. University of California; United States
Fil: Gentner, Timothy Q.. The Kavli Institute For Brain And Mind; United States. University of California; United States
- Subject
-
BIOPROSTHETICS
BIRDSONG
BRAIN MACHINE INTERFACES
ELECTROPHYSIOLOGY
NEURAL NETWORKS
NONLINEAR DYNAMICS
SPEECH
- Access level
- open access
- Terms of use
- https://creativecommons.org/licenses/by-nc-sa/2.5/ar/
- Repository
- Institution
- Consejo Nacional de Investigaciones Científicas y Técnicas
- OAI identifier
- oai:ri.conicet.gov.ar:11336/179036
- Journal
- Current Biology, 31(15), 3419-3425; Cell Press; August 2021
- Identifiers
- https://doi.org/10.1016/j.cub.2021.05.035
- http://hdl.handle.net/11336/179036
- ISSN: 0960-9822
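The OAI identifier above makes this record harvestable over OAI-PMH. As an illustrative sketch (the HTTP fetch itself is omitted, since it needs network access, e.g. via `urllib.request`), the snippet below builds the GetRecord request for this record against the repository's endpoint:

```python
from urllib.parse import urlencode

# Build the OAI-PMH GetRecord request for this record. The endpoint
# and identifier come from the record itself; oai_dc (Dublin Core) is
# the metadata prefix every OAI-PMH endpoint is required to support.
OAI_BASE = "http://ri.conicet.gov.ar/oai/request"
params = {
    "verb": "GetRecord",
    "identifier": "oai:ri.conicet.gov.ar:11336/179036",
    "metadataPrefix": "oai_dc",
}
request_url = f"{OAI_BASE}?{urlencode(params)}"
print(request_url)
```

Requesting this URL returns the record's Dublin Core metadata as XML, which can then be parsed with any standard XML library.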