For researchers, an important factor to consider when deciding whether to use generative AI for scientific writing is journal policy. Publishers’ editorial policies on generative AI range from a ban unless the editors grant explicit permission (e.g. Science) to the more common position of requiring disclosure in the manuscript (e.g. JAMA Network, Elsevier, Springer Nature, ACS, PLOS, AGU, Sage, Taylor & Francis, PNAS). These policies pertain to scientific writing and images; journals continue to support the transparent use of AI-based research and analysis methods.

While policies differ, all publishers appear to agree that AI tools (e.g. ChatGPT, Bard, Bing, DALL-E, Jenni, Elicit) are not co-authors; only humans can be authors of scientific articles. Authors bear responsibility for the integrity of all content they submit. Although there is currently no reliable tool for detecting AI-generated text and images, some publishers are using newly developed “integrity detection” tools (e.g. Proofig) while continuing to use plagiarism detection software (e.g. iThenticate).

Among publishers that favor disclosure (rather than a ban) of generative AI for writing, there is some variation in what ‘disclosure’ entails. For example, Elsevier journals require a “Declaration of Generative AI and AI-assisted technologies in the writing process,” using this template:

During the preparation of this work the author(s) used [NAME TOOL / SERVICE] in order to [REASON]. After using this tool/service, the author(s) reviewed and edited the content as needed and take(s) full responsibility for the content of the publication.

PLOS journals have a similar requirement but ask authors to provide more detailed information about their use of AI tools and their evaluation of the outputs:

Contributions by artificial intelligence (AI) tools and technologies to a study or to an article’s contents must be clearly reported in a dedicated section of the Methods, or in the Acknowledgements section for article types lacking a Methods section. This section should include the name(s) of any tools used, a description of how the authors used the tool(s) and evaluated the validity of the tool’s outputs, and a clear statement of which aspects of the study, article contents, data, or supporting files were affected/generated by AI tool usage.

Providing accurate disclosure may not be as simple as it seems. Suppose a researcher uses ChatGPT to draft notes for an upcoming talk, or uses DALL-E to generate images for a presentation. What if, months later, those outputs, or even revised excerpts of them, are included in a journal article submission? Crafting an accurate disclosure statement would require a clear record of AI usage kept from the start.

Researchers are used to tracking outside sources with citation management tools, but tracking generative AI usage requires new practices. It could be useful to record the prompt used to generate an output before that output is edited or revised. Verifying the information in the output is also critical, including any citations it generates, as ChatGPT has been reported to fabricate references in scientific writing. Moreover, to meet the requirements of a disclosure statement, researchers should be able to provide a compelling reason for using the tool in the first place, and that rationale could be recorded prior to use to ensure accuracy. This kind of real-time awareness and note-taking is likely unfamiliar territory outside the lab; however, these practices could help mitigate the risk of accidentally including AI-generated material in a later article without full disclosure.

In addition to influencing whether (and how) researchers use generative AI, journal policies can offer faculty-researchers ideas for guiding students’ use of these tools in their coursework. Adopting the policy of a specific journal is one way to promote professional practices in the classroom, and it can be communicated in a syllabus statement and during class discussions. For example, in my first-year writing course, 21W.035 (Communicating Science to the Public), I state the Science policy verbatim on the syllabus:

Text generated from AI, machine learning, or similar algorithmic tools cannot be used in papers published in Science journals, nor can the accompanying figures, images, or graphics be the products of such tools, without explicit permission from the editors. In addition, an AI program cannot be an author of a Science journal paper. A violation of this policy constitutes scientific misconduct.

I also share with students my reasoning for adopting the Science position in this particular class: to introduce them to professional scientific writing practices that many researchers follow, and to ensure that first-year students experience the challenges (and successes) of human brainstorming, drafting, and revising at this early stage of their college careers. Faculty who adopt classroom policies that allow or even encourage generative AI use could share (or develop) with students strategies for accurate record-keeping during the term, helping students learn the practices necessary for transparent disclosure when submitting assignments.

Ultimately, given the current publishing landscape, the decision of whether to use generative AI tools for scientific writing has critical implications. As the tools continue to advance, publishers’ policies will continue to adapt, which, in turn, will influence how researchers choose to craft scientific writing.