Policy on generative AI & digital tools
How authors, peer reviewers and editors of IUMS medical journals may – and may not – use generative AI and other digital tools in manuscripts, peer review and the publication workflow, while keeping human judgment and research integrity at the centre.
1 Overview
Generative AI can support research and publishing workflows, but also raises questions about responsibility, originality and confidentiality.
This policy explains how generative artificial intelligence (“AI tools”) and other AI-assisted technologies may be used by authors, peer reviewers and editors of IUMS journals, and how IUMS itself uses such tools in the publication workflow. The aim is to keep human judgment at the centre, align with emerging international good practice, and give readers a transparent record of any AI involvement.
Generative AI and AI-assisted tools can help with literature exploration, language editing and workflow efficiency. At the same time, they introduce new risks around reliability, privacy, intellectual property and the protection of confidential material. IUMS journals therefore allow carefully defined use of such tools and set clear limits where necessary.
This policy applies across all IUMS journals and complements the policies on authorship & contributorship, peer review, editorial governance and research integrity. It will be reviewed periodically as technology and community expectations evolve.
2 Scope & definitions
What is covered by this policy, and what is not.
For the purposes of this policy, “AI tools” include generative and AI-assisted technologies that can produce or substantially transform text, images, audio, video or code in response to prompts. Examples include large language models and chatbots, AI “agents” and “deep research” tools, code assistants and image generators.
This policy governs the use of such tools in:
- manuscript preparation (drafting, revising and translating text; editing figures and images; and organising material);
- peer review (reading, evaluating and reporting on manuscripts); and
- editorial decision-making (handling submissions and making accept/revise/reject decisions).
Use of AI tools as part of the research methods—for example, in biomedical image analysis, natural language processing pipelines or simulation—is governed by research ethics and reporting standards. When AI forms part of the methodology, it must be described in sufficient detail in the Methods section, including the tool or model name, provider, version and parameters, so that others can understand and, where appropriate, reproduce the work.
This policy does not cover conventional spelling and grammar checkers, or reference managers (for example, EndNote, Zotero or Mendeley) that simply help authors organise and format citations. Such tools may be used without disclosure as long as they are not used to generate substantive content.
6 Use of AI tools by peer reviewers
Manuscripts under review and reviewer reports are confidential and must not be exposed to public AI services.
Manuscripts sent for peer review, together with any associated data or supplementary materials, are strictly confidential. Peer-review reports themselves may also contain confidential information about authors, patients or ongoing studies. For this reason, reviewers must treat all such content with care and must not expose it to third-party AI tools that do not guarantee confidentiality.
In particular:
- reviewers must not upload all or part of an unpublished manuscript, its figures, tables or datasets, or their own review report into a public generative AI interface, even for “language polishing” or summarisation; and
- reviewers must not delegate the intellectual task of reviewing to AI tools. Generative AI should not be used to produce a review or to provide the scientific assessment on which the review is based.
A reviewer may use standard tools (for example, reference managers or literature search engines) to support their assessment, and may use institutionally approved software that guarantees confidentiality. However, the substance, structure and tone of the review must reflect the reviewer’s own reading, expertise and judgment. The reviewer remains solely responsible for the content of the report submitted to the journal.
7 Use of AI tools by editors
Editors may use AI-assisted systems as decision-support tools, but not as a replacement for human editorial judgment.
Editors of IUMS journals handle manuscripts, reviewer reports and author correspondence that are not publicly available. They must protect this material and ensure that editorial decisions are based on human expert assessment, not automated judgment by AI systems.
Editors therefore:
- must not upload full manuscripts, figures, tables or reviewer reports into public generative AI tools for evaluation, summarisation or language editing;
- must not upload editorial decision letters or other confidential correspondence with authors into such tools; and
- must not rely on generative AI to make or justify editorial decisions; the critical appraisal and final judgment must be the editor’s own.
At the same time, IUMS may provide editors with in-house or licensed AI-assisted systems for tasks such as:
- checking basic completeness and formatting of submissions;
- supporting plagiarism and image-integrity screening as part of research integrity checks; and
- suggesting potential reviewers or flagging possible conflicts for editorial consideration.
These tools are operated under contractual and technical safeguards designed to protect confidentiality and data privacy and to monitor for potential bias. They are decision-support systems only; editors remain fully accountable for the editorial process and its outcomes. Suspected misuse of AI tools by authors or reviewers should be reported via the journal’s research integrity procedures.
8 IUMS use of AI & AI-assisted technologies in the publication process
How the publisher may use AI internally, with human oversight.
Like other international publishers, IUMS is exploring ways in which AI technologies can safely support the publication process while preserving editorial independence and research integrity. Wherever AI is used internally, human oversight remains central and responsibility for decisions lies with editors and publishing staff.
Examples of how AI or AI-assisted tools may be used internally include:
- assisting editors in identifying potential reviewers and matching manuscripts to expertise and journal scope;
- supporting technical checks of submissions (for example, checking for missing files, incomplete metadata or obvious formatting problems);
- enhancing research integrity checks, such as similarity screening and image forensics, where tools are configured to respect author confidentiality; and
- assisting production teams during proof preparation and copy-editing, including flagging possible inconsistencies or formatting issues for human review.
The range of AI use may vary between journals. Where authors, reviewers or editors have questions about how AI is used in relation to a specific title, they are encouraged to contact the journal office for further information. IUMS will continue to monitor developments around generative AI and adjust or refine this policy as needed to protect the reliability of the scholarly record.
9 Questions & contact
How to get in touch about this policy or potential misuse of AI tools.
Queries about this policy, or concerns about potential misuse of AI tools in connection with an IUMS journal, can be directed to:
- General policy & research integrity: journals@iums.ac.ir
- Journal-specific questions: please contact the editorial office via the email address listed on the journal’s home page.
Please include the journal name, the manuscript or article identifier, and a concise description of your query or concern. This will help the editorial and publishing teams review the issue promptly and fairly.
Policy version: v1.0 – last updated March 2025. This page will be revised as community standards and AI technologies evolve.