A Hierarchical Two-tier Approach to Hyper-parameter Optimization in Reinforcement Learning

Authors
Barsce, Juan Cruz; Palombarini, Jorge; Martínez, Ernesto
Year of publication
2020
Language
English
Resource type
article
Status
published version
Description
Optimization of hyper-parameters in real-world applications of reinforcement learning (RL) is a key issue, because their settings determine how fast the agent learns its policy from interaction with its environment and how informative the gathered data are. In this work, an approach that uses Bayesian optimization to perform an autonomous two-tier optimization of both representation decisions and algorithm hyper-parameters is proposed: first, categorical/structural RL hyper-parameters are encoded as binary variables and optimized with an acquisition function tailored to variables of that type. Then, at a lower level of abstraction, solution-level hyper-parameters are optimized using the expected improvement acquisition function, while the categorical hyper-parameters found at the upper level of abstraction are held fixed. The two-tier approach is validated with both a tabular and a neural-network representation of the value function on a classic simulated control task. The results obtained are promising and open the way for more user-independent applications of reinforcement learning.
Publisher
Sociedad Argentina de Informática e Investigación Operativa
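The abstract compresses the whole method into a few sentences; the sketch below unpacks the two tiers in code. It is a minimal illustration, not the authors' implementation: the paper's upper tier uses the BOCS acquisition over binary variables, which this sketch replaces with plain enumeration of a small binary space, and rl_trial is a hypothetical placeholder for an actual RL training run (e.g. average return on the control task after training). The lower tier follows the abstract's description, Gaussian-process Bayesian optimization with the expected improvement acquisition, implemented here with scikit-learn and SciPy.

import numpy as np
from itertools import product
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def rl_trial(structural, continuous):
    # Hypothetical placeholder standing in for an RL training run; replace
    # with the agent's average return on the control task.
    alpha, gamma = continuous
    return -(alpha - 0.1) ** 2 - (gamma - 0.95) ** 2 + 0.1 * sum(structural)

def expected_improvement(gp, X_cand, y_best, xi=0.01):
    # EI acquisition used in the lower (continuous) tier, for maximization.
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - y_best - xi) / sigma
    return (mu - y_best - xi) * norm.cdf(z) + sigma * norm.pdf(z)

def optimize_continuous(structural, bounds, n_init=5, n_iter=15):
    # Lower tier: Bayesian optimization with EI over real-valued
    # hyper-parameters (here learning rate and discount factor), with the
    # structural choices from the upper tier held fixed.
    dim = len(bounds)
    X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_init, dim))
    y = np.array([rl_trial(structural, x) for x in X])
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)
        # Optimize the acquisition over random candidates (simple stand-in).
        cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(256, dim))
        x_next = cand[np.argmax(expected_improvement(gp, cand, y.max()))]
        X = np.vstack([X, x_next])
        y = np.append(y, rl_trial(structural, x_next))
    best = np.argmax(y)
    return y[best], X[best]

# Upper tier: choose binary/structural hyper-parameters (e.g. tabular vs.
# neural-network value function, eligibility traces on/off); each candidate
# is scored by the best value its lower-tier optimization achieves.
bounds = np.array([[1e-4, 1.0],    # learning rate
                   [0.8, 0.999]])  # discount factor
best_score, best_config = -np.inf, None
for structural in product([0, 1], repeat=2):
    score, cont = optimize_continuous(structural, bounds)
    if score > best_score:
        best_score, best_config = score, (structural, cont)
print(best_score, best_config)

Enumeration only works here because the sketch has two binary variables; the BOCS acquisition used in the paper is what keeps the upper tier tractable as the number of structural choices grows.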
Subject
Ciencias Informáticas
Reinforcement learning
hyper-parameter optimization
Bayesian optimization
Bayesian optimization of combinatorial structures (BOCS)
Access level
open access
Terms of use
http://creativecommons.org/licenses/by/4.0/
Repository
SEDICI (UNLP)
Institution
Universidad Nacional de La Plata
OAI Identifier
oai:sedici.unlp.edu.ar:10915/135049

Identifiers
http://sedici.unlp.edu.ar/handle/10915/135049
https://publicaciones.sadio.org.ar/index.php/EJS/article/view/165
ISSN
1514-6774
Publication date
2020-05-19
Pages
2-27
Related reference
hdl:10915/87851
License
Creative Commons Attribution 4.0 International (CC BY 4.0)