GMPC Policy on the Use of Generative AI in the Publishing Process

The integration of generative Artificial Intelligence (AI) tools in scholarly publishing presents both opportunities and challenges. These tools can enhance productivity and creativity, but they also raise important ethical concerns. Issues such as transparency, accountability, authorship, data security, and research integrity must be carefully managed. This policy offers clear guidelines for authors, peer reviewers, and editors on the responsible use of generative AI within the GMPC framework, aligning with the COPE Guidelines, ICMJE recommendations, and CSE guidelines.

  1. Use of AI by Authors

    Authors are permitted to use generative AI tools for specific, limited tasks such as brainstorming, language enhancement and editing, literature categorization, coding support, and exploratory searches via AI-powered tools. AI may also be used to improve the grammar, structure, or clarity of the manuscript. However, authors must ensure that any AI-assisted content is factually accurate, original, and ethically sound. Any use of AI tools must be transparently disclosed in the methods or acknowledgments section of the manuscript. Authors are also responsible for ensuring that these tools comply with relevant standards for data security, confidentiality, and intellectual property.

    AI must not be used for the following activities: i) generating substantial portions of the manuscript without human oversight; ii) replacing expert input in scholarly responsibilities, such as creating text or code, producing synthetic data without proper methodology, or generating misleading or unverifiable content (e.g., abstracts, citations); and iii) modifying or generating research data, charts, tables, figures, images, mathematical formulas, or medical images. Failure to disclose AI use, or misuse of AI tools, may be considered a violation of publishing ethics, leading to investigation under GMPC's misconduct procedures and possible sanctions, including manuscript rejection, correction, or retraction.

  2. Use of AI by Editors

    Editors play a crucial role in maintaining research integrity. To protect confidential information and uphold ethical standards, editors are prohibited from uploading unpublished manuscripts or associated files (such as figures or datasets) to any AI system, including chatbots. Editors who wish to use generative AI for permissible tasks (e.g., grammar checks) must first consult with their GMPC contact unless pre-approval has been granted. Editors are expected to adhere to the GMPC Code of Conduct and consult the Editor Resource Page for further guidance.

  3. Use of AI by Peer Reviewers

    Peer reviewers are entrusted with the confidential and independent evaluation of manuscripts. To maintain the integrity and confidentiality of the review process, reviewers are prohibited from using generative AI tools to: i) summarize or evaluate any portion of an unpublished manuscript; ii) generate peer review reports; or iii) analyze data or visuals submitted for review.

    All content in peer review reports must reflect the reviewer's expert judgment. Unauthorized use of AI tools for review-related tasks constitutes a breach of peer review confidentiality and may result in removal from the GMPC reviewer pool.

  4. Future Review and Policy Updates

    As AI technologies and ethical standards continue to evolve, GMPC will periodically review and update this policy. Authors, editors, and reviewers will be informed of significant changes. GMPC welcomes feedback to ensure that the policy remains current, fair, and in line with international best practices.

Last updated: 07-May-2025