Results 1 - 2 of 2
1.
Colomb. Med; 54(3), Sept. 2023.
Article in English | LILACS-Express | LILACS | ID: biblio-1534290

ABSTRACT

This statement revises our earlier "WAME Recommendations on ChatGPT and Chatbots in Relation to Scholarly Publications" (January 20, 2023). The revision reflects the proliferation of chatbots and their expanding use in scholarly publishing over the last few months, as well as emerging concerns regarding lack of authenticity of content when using chatbots. These recommendations are intended to inform editors and help them develop policies for the use of chatbots in papers published in their journals. They aim to help authors and reviewers understand how best to attribute the use of chatbots in their work and to address the need for all journal editors to have access to manuscript screening tools. In this rapidly evolving field, we will continue to modify these recommendations as the software and its applications develop.

2.
Patterns (N Y); 4(3): 100706, 2023 Mar 10.
Article in English | MEDLINE | ID: mdl-36960451

ABSTRACT

Fabricating research within the scientific community has consequences for one's credibility and undermines honest authors. We demonstrate the feasibility of fabricating research using an AI-based language model chatbot. We compare human detection with AI detection to determine accuracy in identifying fabricated works, underscore the risks of using AI-generated research, and highlight reasons why research is falsified.
