Model-free learning control of neutralization processes using reinforcement learning
- Authors
- Syafiie, S.; Tadeo, F.; Martínez, Ernesto Carlos
- Year of publication
- 2007
- Language
- English
- Resource type
- article
- Status
- published version
- Description
The dynamics of pH processes often exhibit severely nonlinear and time-varying behavior and therefore cannot be adequately controlled with a conventional PI controller. This article discusses an alternative approach to pH process control using model-free learning control (MFLC), which is based on reinforcement learning algorithms. The MFLC technique is proposed because it gives a general solution for acid-base systems, yet is simple enough to be implemented in existing control hardware without a model. Reinforcement learning is selected because it is a learning technique based on interaction with a dynamic system or process for which a goal-seeking control task must be performed. This "on-the-fly" learning is suitable for time-varying or nonlinear processes for which the development of a model is too costly, too time-consuming, or even infeasible. Results obtained in a laboratory plant show that MFLC gives good performance for pH process control. Moreover, the control actions generated by MFLC are much smoother than those of a conventional PID controller.
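The general idea summarized in the abstract can be illustrated with a minimal reinforcement-learning sketch: a tabular Q-learning agent that adjusts a base-flow increment to drive a simulated strong-acid/strong-base neutralization tank toward a pH set-point band. This is not the authors' MFLC algorithm; the toy plant model, its parameters, the state discretization, the action set, and the reward shape are illustrative assumptions only.

```python
import math
import random

KW = 1e-14  # water ionization constant at 25 degC


def ph_from_excess_acid(w):
    """pH of a strong-acid/strong-base mixture with net excess-acid
    concentration w = C_acid - C_base (mol/L); w may be negative."""
    h = (w + math.sqrt(w * w + 4.0 * KW)) / 2.0  # [H+] from the charge balance
    return -math.log10(h)


class NeutralizationTank:
    """Toy stirred neutralization tank: fixed acid feed, manipulated base feed.
    All parameters are illustrative, not taken from the article."""

    def __init__(self, volume=5.0, q_acid=0.5, c_acid=0.01, c_base=0.01, dt=2.0):
        self.V, self.qa, self.ca, self.cb, self.dt = volume, q_acid, c_acid, c_base, dt
        self.w = c_acid      # tank initially holds pure acid feed (pH ~ 2)
        self.qb = 0.0        # current base flow (L/s)

    def step(self, dq_base):
        """Apply a base-flow increment and integrate one sampling period."""
        self.qb = min(max(self.qb + dq_base, 0.0), 1.0)
        dw = (self.qa * self.ca - self.qb * self.cb - (self.qa + self.qb) * self.w) / self.V
        self.w += dw * self.dt
        return ph_from_excess_acid(self.w)


ACTIONS = [-0.02, 0.0, 0.02]   # base-flow increments (L/s)
SETPOINT, BAND = 7.0, 0.5      # target pH and tolerance band


def discretize(ph):
    """Crude one-pH-unit bins 0..13 used as the learning state."""
    return max(0, min(int(ph), 13))


def run_episode(q_table, alpha=0.1, gamma=0.95, eps=0.1, steps=300):
    plant = NeutralizationTank()
    s = discretize(ph_from_excess_acid(plant.w))
    for _ in range(steps):
        # epsilon-greedy action selection over the tabular Q-values
        if random.random() < eps:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: q_table[s][i])
        ph = plant.step(ACTIONS[a])
        s2 = discretize(ph)
        # reward: +1 inside the band, otherwise penalize the pH error
        r = 1.0 if abs(ph - SETPOINT) <= BAND else -abs(ph - SETPOINT)
        q_table[s][a] += alpha * (r + gamma * max(q_table[s2]) - q_table[s][a])
        s = s2
    return ph


q = [[0.0] * len(ACTIONS) for _ in range(14)]
for _ in range(50):
    final_ph = run_episode(q)
print(f"pH at the end of the last training episode: {final_ph:.2f}")
```

The sketch only mirrors the structure suggested by the abstract (incremental control actions learned online from interaction, with no process model); consult the article itself for the actual MFLC state representation, action set, and learning rules.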
Fil: Syafiie, S.. Universidad de Valladolid; España
Fil: Tadeo, F.. Universidad de Valladolid; España
Fil: Martínez, Ernesto Carlos. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Santa Fe. Instituto de Desarrollo y Diseño. Universidad Tecnológica Nacional. Facultad Regional Santa Fe. Instituto de Desarrollo y Diseño; Argentina
- Subject
- Learning Control
- Reinforcement Learning
- pH Control
- Model-Free Control
- Access level
- open access
- Terms of use
- https://creativecommons.org/licenses/by-nc-sa/2.5/ar/
- Repository
- CONICET Digital (CONICET)
- Institution
- Consejo Nacional de Investigaciones Científicas y Técnicas
- OAI Identifier
- oai:ri.conicet.gov.ar:11336/83738
- Citation
- Syafiie, S.; Tadeo, F.; Martínez, Ernesto Carlos; Model-free learning control of neutralization processes using reinforcement learning; Pergamon-Elsevier Science Ltd; Engineering Applications of Artificial Intelligence; 20; 6; 9-2007; 767-782
- ISSN
- 0952-1976
- DOI
- 10.1016/j.engappai.2006.10.009
- Publisher
- Pergamon-Elsevier Science Ltd
- Format
- application/pdf
- URL
- http://hdl.handle.net/11336/83738