AI tools and Generative AI Use

Guidance for Authors:

Authors preparing a manuscript may use AI Tools to support their work. However, these tools must never be used as a substitute for human critical thinking, expertise, and evaluation. AI Tools should always be applied with human oversight and control. Ultimately, authors are responsible and accountable for the contents of their work. This includes accountability for:

1. Carefully reviewing and verifying the accuracy, comprehensiveness, and impartiality of all AI-generated output (including checking the sources, as AI-generated references can be incorrect or fabricated).

2. Editing and adapting all material thoroughly to ensure the manuscript represents the author’s authentic and original contribution and reflects their own analysis, interpretation, insights and ideas.

3. Ensuring the use of any tools or sources, AI-based or otherwise, is made clear and transparent to readers — for the use of AI Tools we require a disclosure statement upon submission.

4. Ensuring the manuscript is developed in a way that safeguards data privacy, intellectual property, and other rights, by checking the terms and conditions of any AI Tool that is used.

Please note that AI bots such as ChatGPT must not be listed as authors in your submission. Figures created with generative AI are not allowed in the manuscript.

Editors and Reviewers:

Maintaining the integrity of the editorial and peer review process requires reviewers and editors to uphold strict standards of confidentiality, objectivity, and accuracy at every stage of manuscript evaluation. With the increasing use of artificial intelligence (AI) technologies, including generative AI tools such as ChatGPT and similar systems, it is essential to establish clear ethical boundaries.

For reviewers, any manuscript submitted for peer review is confidential and must not be uploaded, shared, or processed using generative AI tools in any form. Uploading manuscripts or review reports to such tools may compromise author confidentiality, infringe copyright, and breach data privacy regulations. Furthermore, scientific peer review demands critical thinking and expert judgment, which cannot be delegated to AI. Reviewers are therefore prohibited from using generative AI to write, structure, or assist in the preparation of review reports. Reviewers remain fully responsible and accountable for the content and conclusions of their evaluations.

For editors, all manuscripts and related editorial correspondence must be treated as confidential. Editors must not upload manuscripts, decision letters, or any editorial communications into generative AI tools, even for purposes such as language refinement. Editorial decisions, including the acceptance or rejection of a manuscript, must be based on the editor’s own professional evaluation and judgment, not on the output or suggestions generated by AI systems.

The use of AI technologies may be permitted for limited technical functions—such as plagiarism detection or formatting checks—provided that these tools do not access or process the scientific content of manuscripts directly, and that they are operated within secure, official systems that adhere to established data privacy and ethical standards.

If there is reasonable suspicion that a reviewer or author has misused generative AI without proper disclosure, editors should report the matter promptly to the publisher or the journal’s governing body for investigation.

While we acknowledge and support the responsible use of emerging technologies, it is important to emphasize that the scientific assessment of research and editorial decision-making remain fundamentally human responsibilities that must not be delegated to machines.