


Authors may use AI-assisted tools during research and manuscript preparation; however, all such use must comply with current COPE, ICMJE, and Scopus research-integrity standards. Authors remain fully responsible for the accuracy, integrity, originality, and ethical compliance of all content produced with or without AI tools. AI use ranges from non-substantive assistance to substantive research contributions, and any substantive use must meet high standards of reliability, reproducibility, and transparency. When using AI for substantive tasks, authors should therefore document the tools used in detail, including their versions, settings, and specific applications.
If AI technologies (including but not limited to generative AI tools, large language models, or machine-learning frameworks) were used at any stage of the research workflow or manuscript preparation, authors must disclose this clearly and in detail in the Materials and Methods section. Regardless of the tools used, authors bear full responsibility for maintaining research integrity throughout their work.
Please note that Large Language Models (e.g., ChatGPT) and other artificial intelligence tools cannot be listed as authors because "they cannot be responsible for the accuracy, integrity, and originality of the work" [1], all of which are required for authorship under the guidelines set by ICMJE and COPE.
Generative AI or AI-assisted tools must not be used to create, modify, or manipulate images or figures in submitted manuscripts. This includes altering, enhancing, removing, or adding visual elements. Basic adjustments such as brightness, contrast, or color correction are acceptable only when they preserve all information present in the original image. The journal may use image-analysis tools to screen submissions for irregularities.
AI tools may be used in image generation or processing only when they form an inherent and scientifically justified part of the research methodology—for example, AI-supported image reconstruction, segmentation, classification, or other analytical pipelines. In such cases, authors must fully describe this use in the Materials and Methods section.
Authors may be asked to provide original, unprocessed images or data supporting the final figures.
This methodological-use exception also applies to research-related visual outputs that are integral to the scientific process (e.g., diagrams or images generated by computational models that are themselves the subject of study). It does not extend to decorative or promotional artwork. Visual pieces such as graphical summaries or cover-image proposals may only incorporate AI-generated material when the use is part of an approved methodological workflow and when explicit permission has been obtained from the journal, and only if authors can demonstrate that rights and attribution requirements have been satisfied.
Manuscripts submitted to the journal are privileged communications shared solely for editorial assessment. All Editors and Reviewers must handle these materials with strict confidentiality. Under no circumstances may any part of a submission—including associated files or author correspondence—be entered into generative AI systems or external AI-driven platforms, as doing so may expose protected information or create privacy risks.
This confidentiality obligation also applies to documents created during the evaluation process. Editorial decisions, reviewer comments, and internal discussions often include interpretations of unpublished data or details about the authors. These materials must be written and managed without the involvement of generative AI tools, including for language editing or stylistic refinement.
The assessment of scholarly work requires human expertise, critical reasoning, and informed judgment. For this reason, AI tools must not be used by Editors or Reviewers to draft, refine, interpret, or influence the scientific evaluation of a manuscript. AI systems may produce incomplete, biased, or inaccurate assessments, and cannot substitute for the responsibility held by human evaluators. Editors are accountable for the editorial process and the final decision rendered to authors; Reviewers are accountable for the content and integrity of their reports.
The journal may employ controlled, non-generative AI technologies for limited administrative tasks—such as checking format compliance, identifying missing elements in submissions, or assisting in reviewer matching. These internal tools operate within secure environments designed to preserve confidentiality and are not used to evaluate scientific content.
Maintaining the trust and integrity of the editorial and peer review process is essential. Editors and Reviewers are expected to follow the restrictions on AI use outlined in this policy, particularly with regard to confidentiality and the requirement that all evaluative judgments be made by humans. Any use of AI tools that would compromise these principles—such as uploading manuscript content to external systems or relying on AI to assess the scientific merit of a submission—is not permitted.
These guidelines ensure that all uses of AI—whether in research workflows, manuscript preparation, image handling, or within the editorial and peer review process—are carried out with transparency, accountability, and respect for confidentiality. By requiring clear disclosure, preserving the integrity of visual and research data, restricting the use of generative AI in evaluative decision-making, and emphasizing the responsibilities of authors, reviewers, and editors, the journal follows best practices established by leading research-integrity organizations.
This policy is informed by the principles and recommendations of the Committee on Publication Ethics (COPE), the International Committee of Medical Journal Editors (ICMJE), and the World Association of Medical Editors (WAME). In cases where concerns, disputes, or uncertainties related to AI use arise, the journal will consult and apply the guidance provided by these organizations to ensure a fair, consistent, and ethically grounded resolution.
As AI technologies continue to develop rapidly, the journal will regularly review and update these guidelines to ensure they remain aligned with best practices in research integrity, confidentiality protections, and responsible editorial conduct. Adhering to these principles strengthens trust in the editorial process, protects the scholarly record, and supports the responsible integration of emerging technologies into scientific publishing.
Image Source: The image on this page was generated from a text description using Adobe Firefly.
[1] Source: https://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html



InterPore Journal now provides official LaTeX and Word templates to support manuscript preparation. Use of the templates is recommended but not mandatory.


InterPore Journal is excited to announce the continuation of the Invited Student Paper Award for a second year! This award recognizes outstanding students presenting at the InterPore annual meeting. Nominees will be invited to submit a paper to InterPore Journal, and the award will be granted upon successful publication of their paper.
This Open Access Publication is supported by and is the official organ of The International Society for Porous Media
© 2025 InterPore Journal. All rights reserved for website design, layout, and branding.
Articles are published under their respective Creative Commons licenses and copyright remains with the authors.