The following position statement was published in November 2025 by highly reputable leaders in evidence synthesis. For more information and guidance on AI use, contact one of your research librarians:
Position statement on artificial intelligence (AI) use in evidence synthesis across Cochrane, the Campbell Collaboration, JBI and the Collaboration for Environmental Evidence 2025
- Evidence synthesists are ultimately responsible for their evidence synthesis, including the decision to use artificial intelligence (AI) and automation, and for ensuring adherence to legal and ethical standards.
- Cochrane, the Campbell Collaboration, JBI and the Collaboration for Environmental Evidence support the aims of the Responsible use of AI in evidence SynthEsis (RAISE) recommendations, which provide a framework for ensuring responsible use of AI and automation across all roles within the evidence synthesis ecosystem.
- Evidence synthesists developing and publishing syntheses with Cochrane, the Campbell Collaboration, JBI and the Collaboration for Environmental Evidence can use AI and automation as long as they can demonstrate that doing so will not compromise the methodological rigour or integrity of their synthesis.
- AI and automation in evidence synthesis should be used with human oversight.
- Any use of AI or automation that makes or suggests judgements should be fully and transparently reported in the evidence synthesis report.
- AI tool developers should proactively ensure their AI systems or tools adhere to the RAISE recommendations so that clear, transparent and publicly available information is available to inform decisions about whether an AI system or tool could and should be used in evidence synthesis.
For the full position statement, see: https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.ED000178/full
Cochrane, the Campbell Collaboration, JBI and the Collaboration for Environmental Evidence position for evidence synthesists on AI use based on Responsible use of AI in evidence SynthEsis (RAISE) recommendations (version 2.1 in development as of 22 September 2025)
RAISE recommendation: Remain ultimately responsible for the evidence synthesis

Further guidance:
- An author is accountable for the content, methods and findings of their evidence synthesis, including the decision to use AI, how it is used, and its impact on the synthesis.
- When considering using an AI system or tool, critically appraise its evaluations [5] to understand whether it performs as claimed to an adequate level, what its limitations are, and whether it can be applied in the context of the specific synthesis [6].
- Use of AI should be justified. Justification involves demonstrating that the tools are methodologically sound, that they do not undermine the trustworthiness or reliability of the synthesis or its conclusions, and that the specific AI system or tool is appropriate in the context of the specific evidence synthesis.

RAISE recommendation: Report AI use in your evidence synthesis manuscript transparently

Further guidance:
- Authors can use AI within their syntheses and to prepare their manuscript [7, 8, 9, 10].
- Authors should declare when they have used AI that makes or suggests judgements, such as in relation to the eligibility of a study; appraisals (including risk of bias assessments); extraction of bibliographic, numerical or qualitative data from a study or its results; synthesis of data from two or more studies; assessments of the certainty of evidence (including GRADE domains or overall certainty ratings for an outcome or finding); drafting text that summarizes the overall strength of evidence or related implications for decision making or research; or plain language summaries. Generally, AI used to improve spelling, grammar or manuscript structure does not need to be declared, but we recommend authors check the journal's specific policy to ensure adherence.
- Adhere to the established reporting standards used by each journal, such as PRISMA [11] or ROSES [12]. PRISMA, for example, includes items on reporting automation tools used at different stages of the synthesis process. AI use should be reported in the section specified by each journal, such as Acknowledgements, Methods or a dedicated section for disclosure of AI use. If these details are extensive, or AI is used in multiple stages of the synthesis process, consider using supplementary materials, tabular presentation, or both. In general, authors should report the following:
  - The name(s) of the AI system(s), tool(s) or platform(s), version(s) and date(s) used.
  - The purpose of using AI and which parts of the evidence synthesis process were affected. Cite or reference user guidance, or report how AI was used, including any modifications that were applied.
  - The justification for using AI, including evidence that the AI system or tool is methodologically sound and will not undermine the trustworthiness or reliability of the synthesis or its conclusions (e.g. citing or referencing evaluations of its performance that detail the impact of errors, limitations and generalizability), and how it has been validated (and piloted, if applicable) to ensure that it is appropriate for use in the context of the specific evidence synthesis. Wherever possible and practical, make the inputs (e.g. prompt development), outputs, datasets and code publicly and freely available (for instance, on repositories or as supplementary materials), and describe any steps taken to verify AI-generated outputs.
  - Any financial and non-financial interests the evidence synthesists have in the AI system or tool, along with the AI system or tool's funding sources.
  - Any limitations of using AI in the review processes, including any potential biases, with comment on the potential impact of each limitation.

RAISE recommendation: Ensure ethical, legal and regulatory standards are adhered to when using AI

Further guidance:
- Ensure ethical, legal and regulatory standards are adhered to as part of applying AI to your synthesis. For example, be aware of issues relating to plagiarism, provenance, copyright, intellectual property, jurisdiction and licensing, as well as confidentiality, compliance and privacy responsibilities, including data protection laws [6].
Flemyng E, Noel-Storr A, Macura B, Gartlehner G, Thomas J, Meerpohl JJ, Jordan Z, Minx J, Eisele-Metzger A, Hamel C, Jemioło P, Porritt K, Grainger M. Position statement on artificial intelligence (AI) use in evidence synthesis across Cochrane, the Campbell Collaboration, JBI and the Collaboration for Environmental Evidence 2025. Cochrane Database of Systematic Reviews 2025, Issue 10. Art. No.: ED000178. DOI: 10.1002/14651858.ED000178.