Policy on generative AI & digital tools

How authors, peer reviewers and editors of IUMS medical journals may – and may not – use generative AI and other digital tools in manuscripts, peer review and the publication workflow, while keeping human judgment and research integrity at the centre.

Applies across the IUMS journals portfolio
Version 1.0 – last updated March 2025

Overview

Generative AI can support research and publishing workflows, but also raises questions about responsibility, originality and confidentiality.

This policy explains how generative artificial intelligence (“AI tools”) and other AI-assisted technologies may be used by authors, peer reviewers and editors of IUMS journals, and how IUMS itself uses such tools in the publication workflow. The aim is to keep human judgment at the centre, match emerging international good practice, and give readers a transparent record of any AI involvement.

Generative AI and AI-assisted tools can help with literature exploration, language editing and workflow efficiency. At the same time, they introduce new risks around reliability, privacy, intellectual property and the protection of confidential material. IUMS journals therefore allow carefully defined use of such tools and set clear limits where necessary.

This policy applies across all IUMS journals and complements the policies on authorship & contributorship, peer review, editorial governance and research integrity. It will be reviewed periodically as technology and community expectations evolve.

Scope & definitions

What is covered by this policy, and what is not.

For the purposes of this policy, “AI tools” include generative and AI-assisted technologies that can produce or substantially transform text, images, audio, video or code in response to prompts. Examples include large language models and chatbots, AI “agents” and “deep research” tools, code assistants and image generators.

This policy governs the use of such tools in:

  • manuscript preparation (drafting, revising and translating text, editing figures and images, organising material);
  • peer review (reading, evaluating and reporting on manuscripts); and
  • editorial decision-making (handling submissions and making accept/revise/reject decisions).

Use of AI tools as part of the research methods—for example, in biomedical image analysis, natural language processing pipelines or simulation—is governed by research ethics and reporting standards. When AI forms part of the methodology, it must be described in sufficient detail in the Methods section, including the tool or model name, provider, version and parameters, so that others can understand and, where appropriate, reproduce the work.
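For illustration, a Methods description meeting these requirements might read as follows (all bracketed details are placeholders to be replaced with the actual study information):

“In this study, [TASK, e.g. image segmentation] was performed using [MODEL OR TOOL NAME], version [X.Y] ([PROVIDER]), with [key parameters and settings].”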

This policy does not cover conventional spelling and grammar checkers or reference managers (for example, EndNote, Zotero, Mendeley) that simply help authors organise and format citations. These may be used without disclosure, provided they are not used to generate substantive content.

Use of AI tools by authors – writing process

AI can support drafting and revision, but the manuscript must remain the authors’ own intellectual work.

3.1 Permitted support for authors

IUMS recognises that, when used responsibly, AI tools can help authors read more efficiently, navigate complex bodies of literature and refine the language of their manuscripts. Authors may therefore use AI tools to support the writing process, provided that the final article expresses the authors’ own analysis, interpretation and conclusions.

With careful oversight, authors may use AI tools to:

  • improve grammar, spelling, clarity, readability and style of text that the authors have drafted;
  • assist with translation between languages and harmonising terminology;
  • suggest ways to organise sections, headings or the flow of an argument; and
  • summarise the authors’ own notes or drafts, or generate high-level overviews of topics as a starting point for deeper reading.

Any AI-generated wording should be treated as a draft. Authors must review, edit and adapt all such material so that the manuscript reflects their authentic scholarly contribution.

3.2 Boundaries and responsibilities

AI tools must never be a substitute for human critical thinking and scientific judgment. In particular:

  • authors must check that AI-generated content is factually accurate, balanced and appropriate, and that any references are real and correctly cited;
  • authors should not copy large blocks of AI-generated text into the manuscript without substantially revising it and taking full intellectual ownership of the result; and
  • authors must ensure that use of AI tools does not introduce plagiarism, copyright infringement or undisclosed reuse of third-party content.

Authors are responsible for checking the terms and conditions of any AI tool they use and ensuring that:

  • confidential and unpublished material is not exposed in ways that would breach privacy, ethics approvals or intellectual property rights;
  • they do not grant the provider broad rights (for example, permission to train on the full manuscript) that would conflict with later journal publication; and
  • the tool’s licence does not impose restrictions on the use of outputs that would prevent publication of the article.

Use of AI tools by authors – figures, images & artwork

Protecting the integrity of scientific images and visual data.

To protect the integrity of the scientific record, IUMS journals do not permit the use of generative AI or AI-assisted tools to create or manipulate figures, images or other scientific artwork in submitted manuscripts, except where AI image generation or processing is itself part of the study’s research methods.

This prohibition includes, for example:

  • generating entirely synthetic images to represent data that were not actually collected;
  • enhancing, obscuring, moving, removing or inserting specific features, bands, lesions or signals in an image or figure; and
  • using AI image tools to redraw, stylise or “clean up” scientific images in ways that could mislead readers about the underlying data.

Standard, global adjustments (such as brightness, contrast or colour balance) are acceptable if, and only if, they do not remove, hide or alter information present in the original. Authors should retain the raw, unprocessed data and may be asked to supply pre-adjusted versions or underlying composite images for editorial assessment.

Where AI-based image generation or processing is itself part of the research design or analytic methods, it must be explained in detail in the Methods section, including model or tool name, version and provider, input data and parameters. Any images presented must be clearly identified as outputs of the described method.

As a default, IUMS journals also do not accept generative-AI artwork for graphical abstracts or cover images. Any rare exceptions (for example, for cover art) require prior written approval from the journal and assurance that all necessary rights have been cleared.

Authorship, accountability & disclosure

AI tools cannot be authors; human beings remain fully responsible for the work.

5.1 AI tools cannot be authors

Generative AI systems and other tools cannot meet authorship criteria and must never be listed as authors or co-authors, nor cited as if they were persons. Authorship of an IUMS journal article requires the ability to:

  • make substantial intellectual contributions to the work;
  • approve the final version of the manuscript; and
  • take public responsibility for the accuracy and integrity of the work and respond to questions about it.

These responsibilities can only be fulfilled by human beings. Each named author remains fully accountable for the entire article, including any parts where AI tools were used during manuscript preparation.

5.2 AI disclosure statement for manuscripts

When AI tools have been used in the writing process, authors must include a brief disclosure statement in their manuscript. IUMS journals recommend placing this statement near the end of the article, just above the reference list, under a heading such as “Use of AI and AI-assisted tools in the writing process”.

The statement should, in concise form:

  • name the tool(s) used and provider (for example, “chat-based large language model provided by …”);
  • explain the purpose of use (for example, “to assist with English language editing” or “to help refine the structure of the introduction”); and
  • confirm that the authors reviewed and edited all outputs and take full responsibility for the final content.

A possible example (to be adapted appropriately) is:

“During preparation of this article, the authors used [TOOL NAME] for [brief description of purpose, e.g. language editing or suggesting alternative phrasing]. The authors carefully checked and edited all tool-assisted content and accept full responsibility for the final text and any errors.”

Basic spelling and grammar checks and the use of standard reference managers do not require a specific AI disclosure. However, if AI tools were used within the research methods (for example, to analyse data), this must be described in the Methods section rather than in the AI writing disclosure.

Use of AI tools by peer reviewers

Manuscripts under review and reviewer reports are confidential and must not be exposed to public AI services.

Manuscripts sent for peer review, together with any associated data or supplementary materials, are strictly confidential. Peer-review reports themselves may also contain confidential information about authors, patients or ongoing studies. For this reason, reviewers must treat all such content with care and must not expose it to third-party AI tools that do not guarantee confidentiality.

In particular:

  • reviewers must not upload all or part of an unpublished manuscript, figures, tables, datasets or their review report into a public generative AI interface, even for “language polishing” or summarisation; and
  • reviewers must not delegate the intellectual task of reviewing to AI tools. Generative AI should not be used to produce a review or to provide the scientific assessment on which the review is based.

A reviewer may use standard tools (for example, reference managers or literature search engines) to support their assessment, and may use institutionally approved software that guarantees confidentiality. However, the substance, structure and tone of the review must reflect the reviewer’s own reading, expertise and judgment. The reviewer remains solely responsible for the content of the report submitted to the journal.

Use of AI tools by editors

Editors may use AI-assisted systems as decision-support tools, but not as a replacement for human editorial judgment.

Editors of IUMS journals handle manuscripts, reviewer reports and author correspondence that are not publicly available. They must protect this material and ensure that editorial decisions are based on human expert assessment, not automated judgment by AI systems.

Editors therefore:

  • must not upload full manuscripts, figures, tables or reviewer reports into public generative AI tools for evaluation, summarisation or language editing;
  • must not upload editorial decision letters or other confidential correspondence with authors into such tools; and
  • must not rely on generative AI to make or justify editorial decisions; the critical appraisal and final judgment must be the editor’s own.

At the same time, IUMS may provide editors with in-house or licensed AI-assisted systems for tasks such as:

  • checking basic completeness and formatting of submissions;
  • supporting plagiarism and image-integrity screening as part of research integrity checks; and
  • suggesting potential reviewers or flagging possible conflicts for editorial consideration.

These tools are operated under contractual and technical safeguards designed to protect confidentiality and data privacy and to monitor for potential bias. They are decision-support systems only; editors remain fully accountable for the editorial process and its outcomes. Suspected misuse of AI tools by authors or reviewers should be reported via the journal’s research integrity procedures.

IUMS use of AI & AI-assisted technologies in the publication process

How the publisher may use AI internally, with human oversight.

Like other international publishers, IUMS is exploring ways in which AI technologies can safely support the publication process while preserving editorial independence and research integrity. Wherever AI is used internally, human oversight remains central and responsibility for decisions lies with editors and publishing staff.

Examples of how AI or AI-assisted tools may be used internally include:

  • assisting editors in identifying potential reviewers and matching manuscripts to expertise and journal scope;
  • supporting technical checks of submissions (for example, checking for missing files, incomplete metadata or obvious formatting problems);
  • enhancing research integrity checks, such as similarity screening and image forensics, where tools are configured to respect author confidentiality; and
  • assisting production teams during proof preparation and copy-editing, including flagging possible inconsistencies or formatting issues for human review.

The range of AI use may vary between journals. Where authors, reviewers or editors have questions about how AI is used in relation to a specific title, they are encouraged to contact the journal office for further information. IUMS will continue to monitor developments around generative AI and adjust or refine this policy as needed to protect the reliability of the scholarly record.

Questions & contact

How to get in touch about this policy or potential misuse of AI tools.

Queries about this policy, or concerns about potential misuse of AI tools in connection with an IUMS journal, can be directed to:

  • General policy & research integrity: journals@iums.ac.ir
  • Journal-specific questions: please contact the editorial office via the email address listed on the journal’s home page.

Please include the journal name, the manuscript or article identifier, and a concise description of your query or concern. This will help the editorial and publishing teams review the issue promptly and fairly.

Policy version: 1.0 – last updated March 2025. This page will be revised as community standards and AI technologies evolve.