One Metric for All: Calculating Interaction Effort of Individual Widgets

Authors
Grigera, Julián; Gardey, Juan Cruz; Rodríguez, Andrés Santiago; Garrido, Alejandra; Rossi, Gustavo Héctor
Year of publication
2019
Language
English
Resource type
conference paper
Status
published version
Description
Automating usability diagnosis and repair can be a powerful aid to usability experts and even to less experienced developers. To accomplish this goal, automatically evaluating user interaction is crucial, and it has been broadly explored. However, most works focus on long interaction sessions, which makes it difficult to tell how individual interface components influence usability. In contrast, this work aims to compare how different widgets perform for the same task, in the context of evaluating alternative designs for small components implemented as refactorings. For this purpose, we propose a unified score that compares the widgets involved in each refactoring by the level of effort users require to interact with them. The score is based on micro-measures automatically captured from interaction logs, so it can be predicted automatically. We show the results of predicting this score using a decision tree.
Laboratorio de Investigación y Formación en Informática Avanzada
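
Illustrative sketch (not part of the original record): the abstract describes predicting a per-widget effort score from micro-measures captured in interaction logs using a decision tree. The snippet below shows what such a pipeline could look like with scikit-learn. The feature names, example data, and the choice of a regression tree are assumptions made here for illustration; they are not taken from the paper.

# Sketch only: predicting a per-widget interaction effort score from
# hypothetical micro-measures, loosely following the abstract's description.
from sklearn.tree import DecisionTreeRegressor

# Hypothetical micro-measures per widget interaction:
# [time_to_first_action_s, hover_time_s, input_corrections, clicks]
X = [
    [0.8, 0.4, 0, 1],   # low-effort interaction
    [2.5, 1.9, 2, 3],   # moderate effort
    [5.1, 3.2, 4, 6],   # high effort
    [1.2, 0.6, 1, 2],
]
# Hypothetical unified effort scores assigned to each interaction
y = [0.1, 0.5, 0.9, 0.25]

model = DecisionTreeRegressor(max_depth=3, random_state=0)
model.fit(X, y)

# Predict the effort score for an unseen interaction with a widget
print(model.predict([[3.0, 2.0, 3, 4]]))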
Subject
Computer Science
Web usability
Interactivity
Usability refactoring
A/B testing
User interaction metrics
Access level
open access
Terms of use
http://creativecommons.org/licenses/by-nc-sa/4.0/
Repository
SEDICI (UNLP)
Institution
Universidad Nacional de La Plata
OAI Identifier
oai:sedici.unlp.edu.ar:10915/119016

Publication date
2019-05
Handle
http://sedici.unlp.edu.ar/handle/10915/119016
Alternative identifiers
ISBN: 978-1-4503-5971-9
DOI: 10.1145/3290607.3312902
https://dl.acm.org/doi/10.1145/3290607.3312902
hdl: 11746/10677
License
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
http://creativecommons.org/licenses/by-nc-sa/4.0/
Format
application/pdf