Reporting checklists as compulsory supplements to artificial intelligence manuscript submissions
    Artificial Intelligence and Informatics – Commentary

    1. Department of Radiology School of Medicine, University of Crete, Heraklion, Greece
    2. Computational Biomedicine Laboratory Institute of Computer Science, Foundation for Research and Technology (FORTH), Heraklion, Greece
    3. Division of Radiology Department for Clinical Science Intervention and Technology (CLINTEC), Karolinska Institutet, Stockholm, Sweden
    Received Date: 07.05.2024
    Accepted Date: 21.05.2024

    Research on artificial intelligence (AI) for radiology is rapidly expanding, with an exponentially increasing number of AI submissions being accepted for publication in radiology journals. The quality of these publications depends significantly on the information included in the text or accompanying it (e.g., code and data), which allows for the accurate evaluation of the proposed work and the reproduction of its results. Reporting checklists can be instrumental in helping authors include all required information; they also help reviewers evaluate manuscripts comprehensively before publication. Journals in medical imaging have an arsenal of reporting checklists and guidelines at their disposal that can be used to ensure a minimum standard of reporting quality in any published paper.1

    The work by Koçak et al.2 indicated that, unfortunately, only a small minority of journals encourage authors to use these reporting guidelines. In their well-designed analysis, the authors clearly point out that only 5 of 98 journals encouraged the use of reporting guidelines, and only 3 of these 5 mandated uploading the completed checklist together with the manuscript files.

    Journals were considered to encourage the use of a guideline if they mentioned the name of the guideline, had a direct reference to it, or explicitly recommended its use, adherence, or referral, even if the authors were not asked to upload a completed version of the guideline with the manuscript. The authors looked for a series of AI-specific guidelines, including the Checklist for Artificial Intelligence in Medical Imaging (CLAIM),3,4 the Consolidated Standards of Reporting Trials-AI (CONSORT-AI),5 the Fairness, Universality, Traceability, Usability, Robustness, and Explainability-AI (FUTURE-AI) checklist,6 the CheckList for EvaluAtion of Radiomics Research (CLEAR),7 and the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis-AI (TRIPOD+AI).8

    Some guidelines cover any AI application (e.g., CLAIM, FUTURE-AI, the minimum information about clinical AI modeling checklist, and TRIPOD+AI), whereas others are intended for specific AI applications related to clinical trials (e.g., CONSORT-AI and the Standard Protocol Items: Recommendations for Interventional Trials-AI [SPIRIT-AI]) or radiomics (e.g., the CLEAR guidelines and the METhodological RadiomICs Score [METRICS]).9 Despite the wide variety of purpose-specific checklists and their importance in increasing manuscript quality, most journals do not encourage authors to use them.

    Even though the use of checklists is not encouraged by all journals, the peer review process aims to filter out low-quality or flawed articles and to provide reviewer suggestions that enhance the quality of published research. Reviewers should use checklists while evaluating AI papers and should require authors to include, as a supplementary file, the completed checklist document indicating how the manuscript covers each point. According to Koçak et al.,2 only 6% of journals included instructions for reviewers that encouraged the use of AI-related checklists. This does not necessarily mean that individual reviewers neglect these checklists; however, journals should prompt their reviewers to ensure that the points of the relevant checklist are addressed in the manuscript. In cases where a manuscript fails to include important information, the reason should be stated in the text or in the accompanying checklist document.

    In addition, caution needs to be exercised when authors provide self-completed checklists, as another publication by Koçak et al.10 has made evident. Their study shows that almost 60% of publications asserting adherence to CLAIM did not provide a completed checklist, and most papers that did provide one contained errors in it. Therefore, the proper use of these checklists should be assessed to avoid misuse or misinterpretation of individual points. Common mistakes include omitting an explanation when a requirement is not fulfilled and falsely claiming that all requirements have been met.

    Appropriate use of checklists should ideally be evaluated at multiple stages (Figure 1). At the pre-submission stage, journals should mandate the selection of a checklist related to the topic of the paper (e.g., medical imaging, clinical trials, or generic AI applications), and authors should prepare the manuscript in accordance with its requirements. At the submission stage, authors should then submit a completed, detailed version of the checklist, and uploading it should be obligatory for the manuscript to proceed to review. During the review stage, reviewers should ensure that all necessary information outlined in the checklist is included, and any missing data or discrepancies should be addressed in subsequent review rounds.
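    To illustrate how such gating could work in practice, the following is a minimal sketch of a submission-stage check written in Python. The names ChecklistItem and validate_checklist are hypothetical and do not correspond to any real editorial system or published checklist tool; the sketch simply encodes the two rules discussed above, namely that a completed checklist must accompany the submission and that any unfulfilled item must carry an explanation.

        # A minimal, hypothetical sketch (not a real editorial system):
        # block progression to review unless a completed checklist is
        # present and every unfulfilled item carries an explanation.
        from dataclasses import dataclass

        @dataclass
        class ChecklistItem:
            item_id: str           # e.g., a CLAIM item number
            fulfilled: bool        # the authors' self-assessment
            explanation: str = ""  # required whenever fulfilled is False

        def validate_checklist(items: list[ChecklistItem]) -> list[str]:
            """Return the problems that should block progression to review."""
            problems = []
            if not items:
                problems.append("No completed checklist was uploaded.")
            for item in items:
                if not item.fulfilled and not item.explanation.strip():
                    # Common mistake: an unfulfilled item with no explanation.
                    problems.append(
                        f"Item {item.item_id}: not fulfilled and no explanation given."
                    )
            return problems

        # Example: one compliant item and one that should be flagged.
        submission = [
            ChecklistItem("3a", fulfilled=True),
            ChecklistItem("7b", fulfilled=False),
        ]
        for problem in validate_checklist(submission):
            print(problem)

    In such a setup, a non-empty list of problems would prevent the manuscript from advancing past the submission stage, mirroring the mandatory-upload policy discussed above.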

    In conclusion, reporting checklists can enhance the quality of manuscripts, ease the work of reviewers, and increase the reproducibility of published work. Nonetheless, their proper use requires (i) adoption by journals as a mandatory requirement for submission, (ii) author adherence to the checklist points, and (iii) meticulous evaluation by reviewers to ensure that all checklist requirements are fulfilled prior to publication. This pipeline will ensure a smooth review process without surprises for authors, leading to high-quality publications.

    Conflict of interest disclosure

    The author declared no conflicts of interest.

    References

    1. Klontzas ME, Gatti AA, Tejani AS, Kahn CE Jr. AI reporting guidelines: how to select the best one for your research. Radiol Artif Intell. 2023;5(3):e230055.
    2. Koçak B, Keleş A, Köse F. Meta-research on reporting guidelines for artificial intelligence: are authors and reviewers encouraged enough in radiology, nuclear medicine, and medical imaging journals? Diagn Interv Radiol. 2024.
    3. Tejani AS, Klontzas ME, Gatti AA, et al. Updating the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) for reporting AI research. Nat Mach Intell. 2023;5(9):950-951.
    4. Tejani AS, Klontzas ME, Gatti AA, et al. Checklist for Artificial Intelligence in Medical Imaging (CLAIM): 2024 update. Radiol Artif Intell. 2024:e240300.
    5. Liu X, Cruz Rivera S, Moher D, Calvert MJ, Denniston AK; SPIRIT-AI and CONSORT-AI Working Group. Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI extension. Nat Med. 2020;26(9):1364-1374.
    6. Lekadir K, Osuala R, Gallin C, et al. FUTURE-AI: guiding principles and consensus recommendations for trustworthy artificial intelligence in medical imaging. arXiv [cs.CV]. 2021.
    7. Kocak B, Baessler B, Bakas S, et al. CheckList for EvaluAtion of Radiomics research (CLEAR): a step-by-step reporting guideline for authors and reviewers endorsed by ESR and EuSoMII. Insights Imaging. 2023;14(1):75.
    8. Collins GS, Moons KGM, Dhiman P, et al. TRIPOD+AI statement: updated guidance for reporting clinical prediction models that use regression or machine learning methods. BMJ. 2024;385:e078378. Erratum in: BMJ. 2024;385:q902.
    9. Kocak B, Akinci D’Antonoli T, Mercaldo N, et al. METhodological RadiomICs Score (METRICS): a quality scoring tool for radiomics research endorsed by EuSoMII. Insights Imaging. 2024;15(1):8.
    10. Kocak B, Keles A, Akinci D’Antonoli T. Self-reporting with checklists in artificial intelligence research on medical imaging: a systematic review based on citations of CLAIM. Eur Radiol. 2024;34(4):2805-2815.