Policy on the Use of Artificial Intelligence and AI-Assisted Technologies
The journal's policy on the use of Artificial Intelligence (AI) tools is based on the statements of COPE, WAME, and the JAMA Network, the ICMJE recommendations, the requirements of the EU AI Act, the Concept of AI Development in Ukraine, and the Law of Ukraine "On Academic Integrity".
The growing popularity of generative AI and AI-assisted technologies, which authors are expected to use more and more, has led the journal's editorial board to adopt a policy on their use. These rules are designed to ensure greater transparency and improve the quality of publications for authors, reviewers, editors, and readers. The editorial board will monitor the development of AI technologies and adjust or refine these rules as needed.
For authors
If authors use generative AI and AI-assisted technologies in the writing process, these technologies should be used only to improve readability and correct grammatical errors. Such use must remain under human supervision and control: authors should carefully review and edit the results, as AI can generate authoritative-sounding output that may be incorrect, incomplete, or biased. Authors bear ultimate responsibility for the content of their work.
Authors should disclose in their manuscripts any use of artificial intelligence and AI-supported technologies. The use of AI should be indicated in the published work. Declaring the use of these technologies promotes transparency and trust between authors, readers, reviewers, and editors, and helps ensure compliance with the terms of use of the relevant tool or technology.
Authors should not list artificial intelligence and AI-assisted technologies as authors or co-authors, nor should they cite artificial intelligence as an author. Authorship entails responsibilities and tasks that can only be assigned to and performed by humans. Each (co-)author is responsible for ensuring that issues related to the accuracy or integrity of any part of the work are properly investigated and resolved, and authorship requires the ability to approve the final version of the work and agree to its submission. Authors are also responsible for ensuring that the work is original and does not infringe on the rights of third parties. All authors should familiarise themselves with our publication ethics policy before submitting.
Use of generative AI in figures, images, and illustrations
The use of generative AI or AI-assisted tools to create or alter images in submitted manuscripts is prohibited. This includes enhancing, obscuring, moving, removing, or adding specific features within an image. Adjustments to brightness, contrast, or colour balance are acceptable only if they do not obscure or eliminate information present in the original.
The only exception is when the use of AI or AI-assisted tools is part of the research design or research methods (e.g., AI-based approaches for creating or interpreting key research data, such as in biomedical imaging). In such cases, the use should be described appropriately in the text of the manuscript, including an explanation of how AI or AI-assisted tools were used to create or modify the image, along with the name of the model or tool, its version number and extension, and its manufacturer. Authors must adhere to the specific terms of use of AI-based software and ensure that content is properly referenced. Where possible, authors may be asked to provide the AI-edited versions of images and/or the raw images used to create the final submitted versions for editorial review.
The use of generative AI or AI-assisted tools to create artwork, such as graphical abstracts, is prohibited. The use of generative AI to create cover art may be permitted in some cases if the author obtains prior permission from the journal's editor and publisher, can demonstrate that all necessary rights to the relevant material have been obtained, and ensures that the content is correctly referenced.
For reviewers
When a researcher is invited to review another researcher's article, the manuscript should be treated as a confidential document. Reviewers should not upload the submitted manuscript or any part of it into a generative artificial intelligence tool, as this may violate the confidentiality and property rights of the authors, and if the article contains personal information, it may violate data privacy rights.
This confidentiality requirement extends to the reviewer's report, as it may contain confidential information about the manuscript and/or its authors. For this reason, reviewers should not upload their review to an AI tool, even if only to improve language and readability.
Peer review is the foundation of the scientific ecosystem, and the Editorial Board adheres to the highest standards of integrity in this process. Peer review of a scientific manuscript involves a responsibility that can only be entrusted to humans. Reviewers should not use generative artificial intelligence or AI-assisted technologies to assist in the scientific review of an article, as the critical thinking and original assessment required for peer review are beyond the scope of this technology, and there is a risk that this technology will lead to incorrect, incomplete, or biased conclusions about the manuscript. The reviewer is responsible for the content of the review.
The Editorial Board's policy for authors states that authors may use generative AI and AI-assisted technologies in the writing process prior to submitting an article, but only to improve readability and correct grammatical errors, and only with appropriate disclosure.
For editors
The submitted manuscript should be treated as a confidential document. Editors should not upload the submitted manuscript or any part of it into a generative artificial intelligence tool, as this may violate the confidentiality and property rights of the authors and, if the article contains personal information, may violate data privacy rights.
This confidentiality requirement applies to all communications about the manuscript, including decision letters, as they may contain confidential information about the manuscript and/or its authors. For this reason, editors should not upload their letters to AI tools, even if only to improve language and readability.
Managing the editorial evaluation of scientific manuscripts involves responsibilities that can only be entrusted to humans. Generative artificial intelligence or AI-supported technologies should not be used by editors to assist in the evaluation or decision-making process for a manuscript, as the critical thinking and original assessment required for this work are beyond the scope of this technology, and there is a risk that the technology will lead to incorrect, incomplete, or biased conclusions about the manuscript. The editor is responsible for the editorial process, the final decision, and communicating that decision to the authors.
The Editorial Board's policy for authors states that authors may use generative AI and AI-assisted technologies in the writing process prior to submission, but only to improve readability and correct grammatical errors, and only with appropriate disclosure. If an editor suspects that an author or reviewer has violated this AI policy, they should report it to the Editorial Board.

