Generative AI or AI-Assisted Services Policy
AI Policy for the Responsible Use of Generative Artificial Intelligence and AI-Assisted Technologies by Authors, Editors, and Reviewers
The International Journal of Applied and Experimental Biology (IJAaEB) acknowledges the growing significance of generative artificial intelligence (AI) and AI-assisted tools in the scholarly publishing process. From language support to idea generation, these tools, when used responsibly, can augment human creativity, improve efficiency, and facilitate knowledge dissemination. However, the use of AI in research must adhere to the principles of academic integrity, transparency, confidentiality, and authorship accountability. This policy outlines permissible and impermissible uses of AI tools by authors, editors, and reviewers, while providing recommendations for AI-assisted research tools such as ChatGPT, Copilot, Gemini, Claude, NovelAI, DALL-E, Midjourney, Runway, ResearchRabbit, Consensus, Connected Papers, SciSpace, PaperPal, Jenni AI, Grammarly, AfforAI, Clarivate Reviewer Locator, Journal Suggester, and others.
Although AI tools offer significant support for enhancing creativity and productivity, they also pose notable risks, which include:
- Potential inaccuracies or biases arising from their statistical, rather than factual, nature, which may lead to misleading content.
- AI tools often lack proper attribution of sources, undermining scholarly integrity.
- Because AI tools often run on third-party platforms, their use may compromise confidentiality, data security, and copyright protection.
- Input/output data may be reused by AI providers without consent.

Authors must therefore review AI-assisted outputs carefully. Use of such tools must always be transparent, ethical, and aligned with publication standards.
This AI policy for the International Journal of Applied and Experimental Biology (IJAaEB) has been formulated in alignment with the evolving standards of responsible AI use in academic publishing. It incorporates expanded guidance on AI use for authors, editors, and reviewers from Taylor & Francis (https://taylorandfrancis.com/our-policies/ai-policy/; https://taylorandfrancis.com/about/ai/; press release, February 17, 2023: “Taylor & Francis Clarifies the Responsible use of AI Tools in Academic Content Creation,” Taylor & Francis Newsroom, taylorandfrancisgroup.com), a leading global publisher known for its commitment to research integrity, innovation, and ethical scholarly communication.
1. For Authors
Authors are accountable for the originality, validity, and integrity of the content of their submitted manuscripts. In choosing to use AI tools, authors are expected to do so responsibly and in accordance with our journal's editorial policies.
Permitted Uses (With caution and oversight)
Authors may use AI tools with human supervision for the following:
- Language editing and grammar correction (e.g., Grammarly, PaperPal, Microsoft Editor)
- Idea generation and topic exploration (e.g., ResearchRabbit, SciSpace, Connected Papers, Consensus, AfforAI)
- Literature review synthesis with AI summarization tools (e.g., SciSpace, Jenni AI)
- Organizing references and concepts using visualization tools (e.g., Connected Papers, ResearchRabbit)
- Coding assistance and data visualization, only when reproducible and validated
- Support for paraphrasing, text clarity improvement, or outlining sections (NOT generating full sections without human editing)
All outputs must be reviewed, edited, and validated by the human author(s). Authors must verify the factual correctness and originality of all AI-assisted content.
Prohibited Uses
Authors must not:
- Use AI tools to generate complete manuscripts, research findings, results, data, images, or figures without human authorship and validation
- Submit AI-generated text without substantial human editing
- Use AI to create or manipulate data sets, including synthetic data, without disclosing methodology
- Generate citations using AI without verifying the source (AI hallucination risk)
- Attribute authorship to AI tools or list them as co-authors
- Rely on AI for ethical decisions, participant analysis, or interpretation of findings
- Generate text or code without rigorous human revision
- Generate synthetic data to substitute for missing data without a robust, disclosed methodology
- Generate inaccurate content of any type, including abstracts or supplemental materials
Disclosure Requirements
All use of AI tools must be transparently disclosed in the manuscript, either at the end of the text or in the Methods section, including:
- Full name and version of the AI tool
- Purpose of use (e.g., “language editing,” “literature overview”)
- Confirmation of human review and final responsibility
Sample Disclosure Statement (if AI tools were used)
“The authors used [Tool name, version] to assist with [specific function, e.g., literature mapping or grammar correction]. All AI-supported content was critically reviewed and edited by the authors, who take full responsibility for the final manuscript.”
Sample Disclosure Statement (if no AI tools were used)
“The authors declare that no AI tools were used in the preparation or submission of this manuscript.”
2. For Editors and Section Editors
Editors are the stewards of quality and responsible research content. Uploading manuscripts to generative AI systems may pose a risk to confidentiality, proprietary rights, and data, including personally identifiable information. Therefore, editors must keep submission and peer review details confidential.
Permitted Uses
Editors and section editors may use AI tools with discretion and oversight for:
- Language polishing or drafting of editorial correspondence (e.g., Grammarly, PaperPal, Jenni AI, Wordvice)
- Journal-matching, reviewer suggestions, and keyword tagging (e.g., Clarivate Reviewer Locator, Journal Suggester)
- Plagiarism detection and content verification (e.g., Crossref Similarity Check, ImageTwin)
- Metadata indexing and content summarization for discoverability
Editors must ensure final decisions, judgments, and communications are made by humans, not AI tools.
Prohibited Uses
Editors must not:
- Upload unpublished manuscripts, figures, or supplementary data to generative AI platforms for summarization or analysis
- Use AI to make editorial decisions, interpret scientific content, or generate review comments
- Use AI without strict confidentiality and data protection measures
All use of AI by editors must maintain confidentiality, comply with ethical guidelines, and follow journal policy.
3. For Peer Reviewers
Permitted Uses
Reviewers may use AI tools with caution for:
- Grammar and clarity suggestions when writing review comments (e.g., Grammarly)
- Literature verification or contextual understanding (e.g., Consensus, AfforAI)
- Summarizing existing literature or supporting contextual feedback (e.g., SciSpace, ResearchRabbit)
All evaluations of the manuscript’s scientific merit, methodology, originality, and validity must be done by the reviewer and not delegated to AI tools.
Prohibited Uses
Reviewers must not:
- Upload any part of a confidential manuscript into AI tools for content generation, summarization, or assessment
- Use AI to generate, replace, or summarize peer review reports
- Use AI to make or influence judgments about methodology, conclusions, or originality of the work
4. Enforcement and Ethical Considerations
Violations of this AI policy, including plagiarism, data fabrication, or misuse of AI tools, may result in:
- Rejection of the manuscript
- Retraction of a published article
- Notification to institutions
- Suspension of review/editorial privileges