Used thoughtfully and transparently, generative AI may support, but must not replace, human judgment, expertise, and critical thinking in peer review.
With this editorial, we would like to draw the attention of peer reviewers to the responsible use of generative artificial intelligence (AI) tools when reviewing a manuscript. The general guidelines, shared across the Nature Portfolio journals, are described on our website at https://www.nature.com/nnano/editorial-policies/ai.
Credit: Panther Media Global / Alamy Stock Image
When using AI tools to produce any type of content, the most important thing to keep in mind is that the user is always accountable. This tenet has several consequences1,2.
The first consequence is that every output generated by an AI tool requires human validation. That is, it should not be assumed that the output of an AI tool is factually accurate (even if the prompt contains the instruction ‘do not hallucinate’). The output and related sources must always be double-checked by an accountable person. In the case of peer review, we ask reviewers to “declare the use of such tools transparently in the peer review report”. As users become more familiar with AI tools, they can exercise appropriate scepticism when reading the tools’ outputs and improve accuracy by writing more precise prompts. Once validation is taken into account, for example, reviewers may find that using an AI tool is more time-consuming than not using it, or that it is useful only for certain aspects of the manuscript review.
The second consequence concerns the legal implications of uploading a manuscript to an AI tool. Authors trust us to share manuscripts with reviewers in strict confidence, and uploading a manuscript into an AI tool could breach that confidentiality. Some AI tools are closed, meaning that they neither share uploaded content on the open web nor use it for training. However, depending on the settings or end-user agreements of the specific tool, uploaded content may still be discoverable by other users within the closed environment (for example, colleagues within an institution). To avoid any legal consequences, we ask reviewers to “not upload manuscripts into generative AI tools”.
Using AI tools to improve the grammar or readability of human-generated texts does not need to be declared (though it still requires human validation)3. The main risk we see at this point in using AI for peer-reviewing a manuscript is over-reliance on a tool that is still largely seen as a black box and can produce inaccurate results.
Like everyone else, we editors are still learning how best to use AI tools, for example to summarize the major points of a manuscript, extract the key performance metrics, or identify suitable reviewers. As with any new technology, it is crucial to become educated about it, and we invest in training and awareness to ensure the ethical use of the tools that the publisher provides us with. What are the advantages? What are the legal implications of misuse? What is the best way to extract the desired result? What are the limitations of the tools? What is the dataset used for training them? With what specific tasks can they help us be more productive (or faster)? When is using AI tools a waste of time? How much energy or CO2 equivalent does a prompt consume?
As reviewers likewise learn to craft effective prompts, validate results, and preserve confidentiality, we believe that AI tools will eventually help the peer review process. Reviewers and editors will be able to exercise their judgment in light of the vast amount of information in the literature that AI tools can retrieve effectively. If these tools are used inattentively, though, we risk delegating critical thinking to an algorithm that gives us a false sense of achievement. This would undermine the role of our academic training, critical thinking, and expertise, as well as the institution of peer review4.
Looking ahead, as generative AI becomes more capable and more deeply embedded in scholarly workflows, our shared priority must be to ensure that efficiency gains never come at the expense of rigour, confidentiality, or accountability5. As the attitudes of scientific communities towards AI evolve, Nature Portfolio’s guidelines will adapt accordingly, and we will continue to refine our guidance for peer reviewers in step with technology, emerging standards, and community expectations. AI tools that are demonstrably secure and fit for purpose, when used transparently and critically, can help reviewers navigate an ever-expanding literature and focus their expertise where it matters most; used uncritically, they risk eroding the very judgment that peer review exists to apply. Our aim, therefore, is not to accelerate peer review by outsourcing thought, but to strengthen it by enabling informed human decisions grounded in evidence, integrity, and trust.