Large language models debunk fake and sensational wildlife news
- Authors
- Santangeli, Andrea; Mammola, Stefano; Nanni, Veronica; Lambertucci, Sergio Agustin
- Year of publication
- 2024
- Language
- English
- Resource type
- article
- Status
- published version
- Description
- In the current era of rapid online information growth, distinguishing facts from sensationalized or fake content is a major challenge. Here, we explore the potential of large language models as a tool to fact-check fake news and sensationalized content about animals. We queried the most popular large language models (ChatGPT 3.5 and 4, and Microsoft Bing), asking them to quantify the likelihood of 14 wildlife groups, often portrayed as dangerous or sensationalized, killing humans or livestock. We then compared these scores with the “real” risk obtained from relevant literature and/or expert opinion. We found a positive relationship between the likelihood risk score obtained from large language models and the “real” risk. This indicates the promising potential of large language models in fact-checking information about commonly misrepresented and widely feared animals, including jellyfish, wasps, spiders, vultures, and various large carnivores. Our analysis underscores the crucial role of large language models in dispelling wildlife myths, helping to mitigate human–wildlife conflicts, shaping a more just and harmonious coexistence, and ultimately aiding biological conservation.
- Plain language summary: In today's digital age, distinguishing accurate information from misinformation, sensationalized, or fake content is very challenging. We investigated the effectiveness of large language models, such as ChatGPT and Microsoft Bing, in fact-checking fake news about animals. We asked these large language models to evaluate the likelihood of wildlife, often portrayed as dangerous, killing humans or livestock. We selected 14 wildlife groups, including jellyfish, wasps, spiders, vultures, and various large carnivores. The scores from the large language models were then compared to data from scientific literature and expert opinions. We found a clear positive correlation between the risk assessments made by the large language models and real-world data, suggesting that these models may be useful for debunking wildlife myths. For example, the large language models accurately identified that animals like vultures pose no measurable risk to humans or livestock, while some large carnivores are more dangerous to livestock. By accurately identifying the true risks posed by various wildlife species, large language models can help reduce fear and misinformation, thereby promoting a more balanced understanding of human–wildlife interactions. This can aid in mitigating conflicts and ultimately promote harmonious coexistence.
- Practitioner points:
- Large language models, such as ChatGPT and Microsoft Bing, can provide accurate and balanced assessments of the true risks posed by wildlife to humans and livestock.
- Large language models correctly distinguished animals that pose no threat to humans or livestock from those that are more dangerous, aligning well with real-world data.
- By providing accurate risk assessments, these models can help promote coexistence between humans and wildlife.
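The workflow described above (asking a chat model for a numeric likelihood score per wildlife group, then correlating it with a literature- or expert-derived score) can be illustrated with a minimal sketch. This is not the authors' code or prompt: the OpenAI-style client, the prompt wording, the 0–100 scale, and the example species list are assumptions made purely for illustration.

```python
# Minimal sketch (not the authors' protocol): ask an OpenAI-style chat model for a
# 0-100 risk score per wildlife group, then rank-correlate with expert-derived scores.
from openai import OpenAI            # assumes the official openai>=1.0 client
from scipy.stats import spearmanr

client = OpenAI()                    # reads OPENAI_API_KEY from the environment

# Illustrative subset of the 14 wildlife groups mentioned in the abstract.
WILDLIFE_GROUPS = ["vultures", "wolves", "spiders", "jellyfish", "wasps"]

def llm_risk_score(group: str, model: str = "gpt-4") -> float:
    """Ask the model for a numeric likelihood (0-100) that `group` kills humans or livestock."""
    prompt = (
        f"On a scale from 0 to 100, how likely are {group} to kill humans or livestock? "
        "Answer with a single number only."
    )
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return float(reply.choices[0].message.content.strip())

def compare_with_expert(expert_scores: dict[str, float]) -> None:
    """Spearman rank correlation between LLM scores and literature/expert scores."""
    groups = sorted(expert_scores)
    llm_scores = [llm_risk_score(g) for g in groups]
    real_scores = [expert_scores[g] for g in groups]
    rho, p = spearmanr(llm_scores, real_scores)
    print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```

Calling compare_with_expert with a mapping from group name to a published "real" risk score reproduces the shape of the comparison; the paper's actual prompts, scoring protocol, and statistics may differ.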
Fil: Santangeli, Andrea. Consejo Superior de Investigaciones Científicas; España
Fil: Mammola, Stefano. Consejo Superior de Investigaciones Científicas; España
Fil: Nanni, Veronica. Consejo Superior de Investigaciones Científicas; España
Fil: Lambertucci, Sergio Agustin. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Patagonia Norte. Instituto de Investigaciones en Biodiversidad y Medioambiente. Universidad Nacional del Comahue. Centro Regional Universidad Bariloche. Instituto de Investigaciones en Biodiversidad y Medioambiente; Argentina
- Subjects
AI
LARGE LANGUAGE MODELS
FAKE NEWS
WILDLIFE
- Access level
- open access
- Terms of use
- https://creativecommons.org/licenses/by/2.5/ar/
- Repository
CONICET Digital (CONICET)
- Institution
Consejo Nacional de Investigaciones Científicas y Técnicas
- OAI identifier
- oai:ri.conicet.gov.ar:11336/264472
Full record metadata
- Record id
CONICETDig_8756932f89558f5aea66fd7f45ffe77a
- OAI identifier
oai:ri.conicet.gov.ar:11336/264472
- OAI endpoint
http://ri.conicet.gov.ar/oai/request
- Repository
CONICET Digital (CONICET), Consejo Nacional de Investigaciones Científicas y Técnicas
- Publisher
Wiley
- Publication date
2024-06
- Citation
Santangeli, Andrea; Mammola, Stefano; Nanni, Veronica; Lambertucci, Sergio Agustin; Large language models debunk fake and sensational wildlife news; Wiley; Integrative Conservation; 3; 2; 6-2024; 127-133
- ISSN
2770-9329
- Identifiers
http://hdl.handle.net/11336/264472
https://onlinelibrary.wiley.com/doi/10.1002/inc3.55
doi: 10.1002/inc3.55
- Subject classification (FORD)
https://purl.org/becyt/ford/1.6
https://purl.org/becyt/ford/1
- Resource type
http://purl.org/coar/resource_type/c_6501 (journal article)
- Format
application/pdf
- Language
eng
- Rights
open access; https://creativecommons.org/licenses/by/2.5/ar/
- Repository contact
dasensio@conicet.gov.ar; lcarlino@conicet.gov.ar
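Because the record carries an OAI identifier and the repository exposes an OAI-PMH endpoint (both listed above), the same metadata can be harvested programmatically. Below is a minimal sketch, assuming the endpoint accepts the standard OAI-PMH 2.0 GetRecord verb with the oai_dc metadata prefix; the verb, prefix, and identifier come from the record, everything else is illustrative.

```python
# Minimal sketch: fetch this record over OAI-PMH and print its Dublin Core fields.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

ENDPOINT = "http://ri.conicet.gov.ar/oai/request"
IDENTIFIER = "oai:ri.conicet.gov.ar:11336/264472"

# Standard OAI-PMH GetRecord request for a single record in Dublin Core.
params = urllib.parse.urlencode({
    "verb": "GetRecord",
    "metadataPrefix": "oai_dc",
    "identifier": IDENTIFIER,
})
with urllib.request.urlopen(f"{ENDPOINT}?{params}") as resp:
    tree = ET.parse(resp)

# Dublin Core elements (title, creator, description, ...) use this namespace.
DC = "{http://purl.org/dc/elements/1.1/}"
for element in tree.iter():
    if element.tag.startswith(DC) and element.text:
        print(f"{element.tag[len(DC):]}: {element.text.strip()}")
```

Swapping GetRecord for ListRecords (with an optional set or date range) would harvest the whole collection rather than this single record.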