AI Policy for Authors, Reviewers, and Editors


For Authors

Authors may use AI-assisted tools during research and manuscript preparation; however, all such use must comply with current COPE, ICMJE, and Scopus research-integrity standards. Authors remain fully responsible for the accuracy, integrity, originality, and ethical compliance of all content, whether or not AI tools were used in producing it. When using AI for substantive tasks, authors must therefore document the tools in detail, including their names, versions, settings, and specific applications.

Distinguishing Non-Substantive and Substantive Use of AI

AI use ranges from non-substantive assistance to substantive research contributions:

  • Non-substantive use includes grammar and spelling correction, text polishing, formatting, and language refinement through AI-assisted editing tools.
  • Substantive use includes generative or analytical functions that influence the research process, such as data analysis, code generation, statistical modeling, hypothesis exploration, text generation with scientific content, or drafting of sections of the manuscript.

Any substantive use must meet high standards of reliability, reproducibility, and transparency.

Mandatory Disclosure Requirements

If AI technologies (including but not limited to generative AI tools, large language models, or machine-learning frameworks) were used at any stage of the research workflow or manuscript preparation, authors must provide a clear and detailed disclosure in the Materials and Methods section. This disclosure must include:

  1. Identification of Tools: Specify all AI technologies used, including provider, tool name, model or software version, and date of access.
  2. Purpose and Application: Describe precisely how the tool was used (e.g., data preprocessing, coding, statistical analysis, text generation, figure draft creation, language editing).
  3. Prompts and Parameters: When AI output materially affects the content, provide representative prompts, key settings, and parameters that would allow others to understand and evaluate the process.
  4. Verification Procedures: Explain how authors validated all AI-generated or AI-assisted outputs to ensure accuracy, replicability, and scientific integrity.
  5. Limitations and Interpretation: Note any limitations associated with the AI tool’s use that may affect interpretation.

Restrictions on AI Use

To maintain research integrity:

  • AI tools must not be used to fabricate, manipulate, or alter research data, images, or results.
  • AI tools must not generate citations or references without human verification; authors must check all citations for accuracy and existence.
  • Confidential, personal, or copyrighted materials must not be uploaded to public AI systems unless allowed under institutional and legal data-protection requirements.

Author Accountability

Authors are fully responsible for:

  • Validating all AI-assisted content: Authors must critically assess any material produced with AI support to confirm its accuracy, completeness, and neutrality. This includes independently verifying factual claims, data, and references, as AI tools may generate incorrect or non-existent sources.
  • Maintaining authorship integrity: Any AI-generated text or analysis must be carefully revised and integrated so that the final manuscript reflects the authors’ own reasoning, interpretation, and scholarly contribution rather than unexamined AI output.
  • Ensuring transparency: All uses of AI—whether for drafting, editing, analysis, or any other purpose—must be openly disclosed to readers. A clear AI-use statement is required at submission.
  • Protecting rights and confidentiality: Before using any AI tool, authors must review its terms to ensure the protection of sensitive data, intellectual property, and privacy. No material may be entered into AI systems in ways that compromise legal or ethical obligations.
  • Preserving original materials: Authors must keep unedited versions of text or datasets that were later refined with AI tools, as editors may request these for evaluation.

Authorship

Please note that large language models (e.g., ChatGPT) and other artificial intelligence tools cannot be listed as authors because "they cannot be responsible for the accuracy, integrity, and originality of the work" [1], responsibilities that ICMJE and COPE guidelines require of every author.

Examples of AI-Related Tools

  • Generative AI and LLMs: ChatGPT, Claude, Gemini.
  • AI-assisted editing tools: Grammarly, QuillBot.
  • Machine-learning frameworks: TensorFlow, PyTorch.
  • Data-analysis tools with AI components: DataRobot, IBM SPSS Modeler.
  • Computer vision tools: OpenCV.
  • Speech-to-text tools: Google Speech-to-Text.

Use of AI in Figures, Images, and Artwork

Generative AI or AI-assisted tools must not be used to create, modify, or manipulate images or figures in submitted manuscripts. This includes altering, enhancing, removing, or adding visual elements. Basic adjustments such as brightness, contrast, or color correction are acceptable only when they preserve all information present in the original image. The journal may use image-analysis tools to screen submissions for irregularities.

Methodological Use (Permitted With Disclosure)

AI tools may be used in image generation or processing only when they form an inherent and scientifically justified part of the research methodology—for example, AI-supported image reconstruction, segmentation, classification, or other analytical pipelines. In such cases, authors must:

  • Explain the purpose and function of the AI method within the workflow,
  • Name the tool, model, software version, and developer/provider,
  • Provide sufficient procedural detail to allow independent assessment and reproducibility,
  • Comply with the tool’s usage policies and accurately acknowledge its role.

Authors may be asked to provide original, unprocessed images or data supporting the final figures.

This methodological-use exception also applies to research-related visual outputs that are integral to the scientific process (e.g., diagrams or images generated by computational models that are themselves the subject of study). It does not extend to decorative or promotional artwork. Visual pieces such as graphical summaries or cover-image proposals may incorporate AI-generated material only when the use is part of an approved methodological workflow, explicit permission has been obtained from the journal, and authors can demonstrate that rights and attribution requirements have been satisfied.

Use of AI Tools in the Editorial and Peer Review Process

Manuscripts submitted to the journal are privileged communications shared solely for editorial assessment. All Editors and Reviewers must handle these materials with strict confidentiality. Under no circumstances may any part of a submission—including associated files or author correspondence—be entered into generative AI systems or external AI-driven platforms, as doing so may expose protected information or create privacy risks.

This confidentiality obligation also applies to documents created during the evaluation process. Editorial decisions, reviewer comments, and internal discussions often include interpretations of unpublished data or details about the authors. These materials must be written and managed without the involvement of generative AI tools, including for language editing or stylistic refinement.

The assessment of scholarly work requires human expertise, critical reasoning, and informed judgment. For this reason, AI tools must not be used by Editors or Reviewers to draft, refine, interpret, or influence the scientific evaluation of a manuscript. AI systems may produce incomplete, biased, or inaccurate assessments, and cannot substitute for the responsibility held by human evaluators. Editors are accountable for the editorial process and the final decision rendered to authors; Reviewers are accountable for the content and integrity of their reports.

The journal may employ controlled, non-generative AI technologies for limited administrative tasks—such as checking format compliance, identifying missing elements in submissions, or assisting in reviewer matching. These internal tools operate within secure environments designed to preserve confidentiality and are not used to evaluate scientific content.

Maintaining the trust and integrity of the editorial and peer review process is essential. Editors and Reviewers are expected to follow the restrictions on AI use outlined in this policy, particularly with regard to confidentiality and the requirement that all evaluative judgments be made by humans. Any use of AI tools that would compromise these principles—such as uploading manuscript content to external systems or relying on AI to assess the scientific merit of a submission—is not permitted.

Conclusion

These guidelines ensure that all uses of AI—whether in research workflows, manuscript preparation, image handling, or within the editorial and peer review process—are carried out with transparency, accountability, and respect for confidentiality. By requiring clear disclosure, preserving the integrity of visual and research data, restricting the use of generative AI in evaluative decision-making, and emphasizing the responsibilities of authors, reviewers, and editors, the journal follows best practices established by leading research-integrity organizations.

This policy is informed by the principles and recommendations of the Committee on Publication Ethics (COPE), the International Committee of Medical Journal Editors (ICMJE), and the World Association of Medical Editors (WAME). In cases where concerns, disputes, or uncertainties related to AI use arise, the journal will consult and apply the guidance provided by these organizations to ensure a fair, consistent, and ethically grounded resolution.

As AI technologies continue to develop rapidly, the journal will regularly review and update these guidelines to ensure they remain aligned with best practices in research integrity, confidentiality protections, and responsible editorial conduct. Adhering to these principles strengthens trust in the editorial process, protects the scholarly record, and supports the responsible integration of emerging technologies into scientific publishing.

Image Source: The image on this page was generated from a text description using Adobe Firefly.

[1] Source: https://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html