Automated benchmarking of peptide-MHC class I binding predictions
- Authors
- Trolle, Thomas; Metushi, Imir G.; Greenbaum, Jason A.; Kim, Yohan; Sidney, John; Lund, Ole; Sette, Alessandro; Peters, Bjoern; Nielsen, Morten
- Publication year
- 2015
- Language
- English
- Resource type
- article
- Status
- published version
- Description
- Motivation: Numerous in silico methods predicting peptide binding to major histocompatibility complex (MHC) class I molecules have been developed over the last decades. However, the multitude of available prediction tools makes it non-trivial for the end-user to select which tool to use for a given task. To provide a solid basis on which to compare different prediction tools, we here describe a framework for the automated benchmarking of peptide-MHC class I binding prediction tools. The framework runs weekly benchmarks on data that are newly entered into the Immune Epitope Database (IEDB), giving the public access to frequent, up-to-date performance evaluations of all participating tools. To overcome potential selection bias in the data included in the IEDB, a strategy was implemented that suggests a set of peptides for which different prediction methods give divergent predictions as to their binding capability. Upon experimental binding validation, these peptides entered the benchmark study. Results: The benchmark has run for 15 weeks and includes evaluation of 44 datasets covering 17 MHC alleles and more than 4000 peptide-MHC binding measurements. Inspection of the results allows the end-user to make educated selections between participating tools. Of the four participating servers, NetMHCpan performed the best, followed by ANN, SMM and finally ARB. Availability and implementation: Up-to-date performance evaluations of each server can be found online at http://tools.iedb.org/auto-bench/mhci/weekly. All prediction tool developers are invited to participate in the benchmark. Sign-up instructions are available at http://tools.iedb.org/auto-bench/mhci/join.
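The selection strategy described in the abstract, proposing for experimental validation those peptides on which the participating prediction methods disagree, can be sketched as follows. This is a minimal illustration only: the function name, the probability-style score scale, and the use of standard deviation as the divergence measure are assumptions for the example, not the authors' implementation.

```python
# Hypothetical sketch: rank peptides by how much the prediction methods
# disagree, and propose the most divergent ones for binding validation.
from statistics import pstdev

def select_divergent_peptides(predictions, top_n=2):
    """predictions maps peptide -> list of scores (one per method),
    each an assumed predicted binding probability in [0, 1].
    Returns the top_n peptides with the largest cross-method
    population standard deviation, i.e. the most divergent calls."""
    divergence = {pep: pstdev(scores) for pep, scores in predictions.items()}
    return sorted(divergence, key=divergence.get, reverse=True)[:top_n]

# Toy example: three hypothetical methods score four peptides.
preds = {
    "SIINFEKL":  [0.95, 0.90, 0.92],  # methods agree: strong binder
    "ALAKAAAAM": [0.10, 0.85, 0.40],  # methods disagree widely
    "GILGFVFTL": [0.80, 0.20, 0.75],  # partial disagreement
    "AAAAAAAAA": [0.05, 0.08, 0.04],  # methods agree: non-binder
}
print(select_divergent_peptides(preds))  # most divergent peptides first
```

Peptides on which the methods agree (confident binders or non-binders) carry little information for discriminating between tools; validating the divergent ones is what lets a later benchmark separate their performance.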
Affiliation: Trolle, Thomas. Technical University of Denmark; Denmark
Affiliation: Metushi, Imir G. La Jolla Institute for Allergy and Immunology; United States
Affiliation: Greenbaum, Jason A. La Jolla Institute for Allergy and Immunology; United States
Affiliation: Kim, Yohan. La Jolla Institute for Allergy and Immunology; United States
Affiliation: Sidney, John. La Jolla Institute for Allergy and Immunology; United States
Affiliation: Lund, Ole. Technical University of Denmark; Denmark
Affiliation: Sette, Alessandro. La Jolla Institute for Allergy and Immunology; United States
Affiliation: Peters, Bjoern. La Jolla Institute for Allergy and Immunology; United States
Affiliation: Nielsen, Morten. Technical University of Denmark; Denmark. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - La Plata. Instituto de Investigaciones Biotecnológicas. Universidad Nacional de San Martín. Instituto de Investigaciones Biotecnológicas; Argentina
- Subject
- MHC; Benchmark
- Access level
- open access
- Terms of use
- https://creativecommons.org/licenses/by-nc-sa/2.5/ar/
- Repository
- Institution
- Consejo Nacional de Investigaciones Científicas y Técnicas
- OAI identifier
- oai:ri.conicet.gov.ar:11336/38180
- Citation
- Trolle, Thomas; Metushi, Imir G.; Greenbaum, Jason A.; Kim, Yohan; Sidney, John; et al.; Automated benchmarking of peptide-MHC class I binding predictions; Oxford University Press; Bioinformatics (Oxford, England); 31; 13; 7-2015; 2174-2181
- DOI
- 10.1093/bioinformatics/btv123
- ISSN
- 1367-4803
- URL
- http://hdl.handle.net/11336/38180