Artificial Intelligence (AI) Use Policy

This policy on the use of Artificial Intelligence (AI) in scientific publications establishes clear guidelines that safeguard integrity, ethics, and transparency in the editorial process. Authors are required to disclose the use of AI at submission, providing details on the approach, parameters, and ethical considerations. Transparency and reproducibility are emphasized: authors must share sufficient information to enable reproduction of their results. Scientific validity is ensured through rigorous review that assesses not only the results but also the soundness of the approach. Authors are responsible for the ethical use of data, including compliance with ethical standards and minimization of bias in the data. Specialized AI reviewers may be appointed when necessary, and copyright and intellectual property must be respected. The policy will be updated periodically to reflect changes in technology and ethical practice, and the journal is committed to highlighting the use of AI in its editorial disclosures, recognizing the importance of this technology in the evolution of scientific research.

Use of Artificial Intelligence (AI) in Manuscript Preparation

In accordance with ICMJE (https://www.icmje.org/) recommendations and EQUATOR (https://www.equator-network.org/) reporting principles, authors must disclose any use of artificial intelligence tools during manuscript preparation by providing the following details:

  • The name of the AI software platform, program, or tool utilized.
  • The version number and any associated extensions or plugins.
  • The developer or manufacturer of the software.
  • The date(s) on which the software or tool was used.
  • A concise description of how the AI was employed, including the prompts used (if applicable), and the specific sections or tasks in which the tool contributed (e.g., idea generation, literature synthesis, language translation, or manuscript editing).
  • A clear statement affirming that all authors take full responsibility for the accuracy and integrity of any content generated, edited, or supported by AI tools.

Note: AI tools cannot be listed as authors, as they cannot assume accountability for the submitted work (per ICMJE authorship criteria).

Use of AI in the Research Process

Where AI is involved in the conduct or analysis of research, authors must provide comprehensive documentation, particularly in the Methods section, to ensure transparency and reproducibility. This includes:

Reporting Guidelines Compliance

  • Adhere to study-design-specific reporting guidelines, where applicable (e.g., CONSORT-AI, SPIRIT-AI, MI-CLAIM, CLAIM, MINIMAR, DECIDE-AI, TRIPOD-AI, STARD-AI, PROBAST-AI, CANGARU, CHART), and report each checklist item with adequate detail to enable replication.

AI Use Specification

  • Describe how AI was integrated into the research workflow, specifying its role in tasks such as hypothesis generation, feature engineering, adjustment variable selection, or data visualization.

For Studies Using Large Language Models (LLMs)

  • Specify the name, version, and developer of the AI tool used.
  • Indicate the dates of usage and provide a detailed log or description of the prompt(s) used, including prompt sequences and any modifications made in response to AI outputs.
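As an illustration only, a prompt log of the kind described above can be kept as structured data and archived as supplementary material. The field names and values below are hypothetical examples, not a format mandated by this policy:

```python
import json
from datetime import date

# Hypothetical prompt-log entry; every field name and value here is
# illustrative, not prescribed by the journal.
prompt_log = [
    {
        "tool": "ExampleLLM",          # name of the AI tool (placeholder)
        "version": "1.0",              # version number used
        "developer": "Example Corp",   # developer or manufacturer
        "date_used": date(2024, 1, 15).isoformat(),
        "task": "language editing",    # section or task the tool contributed to
        "prompt": "Improve the clarity of the following paragraph: ...",
        "modification_note": "Prompt revised once after an off-topic output.",
    },
]

# Serializing the log makes it easy to deposit alongside the manuscript.
print(json.dumps(prompt_log, indent=2))
```

Keeping the log in a structured, machine-readable form makes it straightforward to record prompt sequences and any modifications made in response to AI outputs, as the disclosure requirements ask.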

For Machine Learning and Algorithm Development

  • Detail the datasets used for training, development, and validation, explicitly stating whether the data were retrospective, prospectively collected, or part of a real-world deployment.
  • Clearly define the machine learning model used, the input variables, outcomes, and the approach for hyperparameter optimization.
  • Document all assumptions made and the procedures used to test their validity.

Model Performance and Bias Evaluation

  • Specify the evaluation metrics applied (e.g., bias, discrimination, calibration, reclassification) and provide justification for their selection.
  • Describe the approach to missing data, including imputation methods if used.
  • State whether the research received approval or exemption from an institutional review board (IRB) or ethics committee.
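To illustrate the missing-data reporting point above, a minimal sketch of mean imputation that also records what was done, so the Methods section can state the method and the number of imputed values. The function name, variable names, and data are hypothetical:

```python
# Minimal sketch of mean imputation with an accompanying report for the
# Methods section. All names and values here are illustrative.
def mean_impute(values):
    """Replace None entries with the mean of the observed values.

    Returns the imputed list and a small report describing the method
    and how many values were imputed.
    """
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    imputed = [mean if v is None else v for v in values]
    report = {
        "method": "mean imputation",
        "n_imputed": sum(v is None for v in values),
    }
    return imputed, report

# Hypothetical variable with two missing observations.
ages = [34, None, 51, 47, None]
imputed, report = mean_impute(ages)  # observed mean = (34 + 51 + 47) / 3 = 44.0
```

In practice, authors would name the actual imputation method used (e.g., multiple imputation) and its software implementation; the point is that the choice and its extent are documented, not this particular technique.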

Bias and Subgroup Analyses

  • Outline any methodological strategies used to identify, minimize, or correct AI-related bias or inaccuracies.
  • When applicable, indicate whether sensitivity analyses were performed to assess model performance across vulnerable or underrepresented populations.

Data and Code Availability

  • Provide a data sharing statement, specifying whether model code, training datasets, or prompts will be shared, and under what conditions.

These guidelines for the use of AI in scientific research are based on the following references:

Flanagin, A., Kendall-Taylor, J., & Bibbins-Domingo, K. (2023). Guidance for Authors, Peer Reviewers, and Editors on Use of AI, Language Models, and Chatbots. JAMA, 330(8), 702-703.

Flanagin, A., Pirracchio, R., Khera, R., Berkwits, M., Hswen, Y., & Bibbins-Domingo, K. (2024). Reporting Use of AI in Research and Scholarly Publication—JAMA Network Guidance. JAMA, 331(13), 1096-1098.

Kaebnick, G. E., Magnus, D. C., Kao, A., Hosseini, M., Resnik, D., Dubljević, V., Rentmeester, C., Gordijn, B., & Cherry, M. J. (2023). Editors’ statement on the responsible use of generative AI technologies in scholarly journal publishing. Medicine, Health Care and Philosophy, 26, 499-503.

Victor, B. G., Sokol, R. L., Goldkind, L., & Perron, B. E. (2023). Recommendations for Social Work Researchers and Journal Editors on the Use of Generative AI and Large Language Models. Journal of the Society for Social Work and Research, 14(3), 563-580.