V. Use of Artificial Intelligence in Publishing

The proliferation of artificial intelligence (AI) and large language models (LLMs) has made their use ubiquitous, and the utility of AI/LLMs in advancing science holds promise. However, their use in scientific publishing raises concerns about validity, confidentiality, transparency, and accountability. Because AI can generate authoritative-sounding output that is incorrect, incomplete, or biased, humans remain ultimately responsible for reviewing and ensuring the accuracy of any content generated with the assistance of AI. Authors, reviewers, and editors/publishers should therefore adhere to the following principles when using AI/LLMs:

  • Authors should not list or cite AI or AI-assisted technologies as an author or co-author.
  • Because AI tools may incorporate information from sources without proper attribution (including copyrighted material), humans are responsible for citing appropriate sources, obtaining permissions when necessary, and ensuring that plagiarism has not occurred when using content generated by AI tools.
  • Because manuscripts submitted to journals are authors' privileged communications, using AI tools in the processing or evaluation of manuscripts may violate confidentiality. Editors, reviewers, and publishers should not, without the authors' explicit permission, upload submitted manuscripts into AI systems in which confidentiality cannot be assured.
  • Editors/publishers, authors, and reviewers should be transparent about their use of AI tools at every stage of the editorial process, including manuscript preparation, editing, and review. Whenever AI is used, users should disclose which tool was used and for what purpose (for example, "An LLM was used to edit the manuscript for grammar and clarity").

Journals should have a policy on the use of AI that adheres to the above principles and should make all editors, reviewers, and authors aware of it.