Policies On The Use Of Artificial Intelligence (AI)
Authorship and AI
Large Language Models (LLMs), such as ChatGPT, do not currently meet our criteria for authorship. Authorship carries accountability for the work, and that accountability cannot be assigned to an LLM.
Use of AI Tools
Any use of an LLM must be documented in the Methods section of the manuscript. If that is not feasible, it should be documented in an appropriate alternative section placed at the end of the manuscript, after the final considerations.
The use of an LLM (or other AI tool) for AI-assisted text editing ("AI-Assisted Copy Editing") does not need to be formally declared. We define "AI-Assisted Copy Editing" as AI-enhanced revision of human-generated text aimed at improving readability and style, ensuring the text is free of errors in grammar, spelling, punctuation, and tone. It may involve adjustments to wording and formatting, but it does not include generative editorial work or autonomous content creation. Responsibility for the final version of the text remains with the human authors, who must confirm that the edits are consistent with their original work.
Use of AI by Peer Reviewers
Peer review is a cornerstone of scientific publishing. The evaluations and recommendations provided by experts guide editors in their decision-making processes and help ensure the validity, rigor, and reliability of published research. Editors select reviewers based on their expertise in the relevant field or methods used in the evaluation process. In this context, reviewers are primarily responsible for the accuracy of their assessments and the opinions expressed in their reports. The peer review process operates on a foundation of mutual trust between authors, reviewers, and editors.
Despite significant advancements, general-purpose AI tools have notable limitations: they may lack up-to-date information and could generate erroneous, biased, or misleading content. Manuscripts may also contain confidential or proprietary information that should not be disclosed outside the peer review process. For these reasons, we request that reviewers refrain from using generative AI tools to assess manuscripts.
If any part of the manuscript evaluation was supported by an AI tool, reviewers must explicitly declare this in their peer review report. Reviewers should include this information in the "Recommendations to the author(s)" section, specifying the prompts used and attaching screenshots of the AI-generated content as a PDF document within the evaluation form.
References
Artificial intelligence (AI) and fake papers [online]. COPE: Committee on Publication Ethics; 2023 [viewed 2024 Oct 1]. Available from: Link
Artificial intelligence and authorship [online]. COPE: Committee on Publication Ethics; 2023 [viewed 2024 Oct 1]. Available from: Link
Guide to the use of artificial intelligence tools and resources in research communication on the SciELO Network [online]. SciELO; 2023 [viewed 2024 Oct 1]. Available from: Link