This study offers insights into the perspectives on, current applications of, and obstacles to AI-based autocontouring software in radiation oncology among DEGRO and ÖGRO members. The results support the implementation and acceptance of AI-based autocontouring software, which is of utmost importance given the considerable potential of AI in radiation oncology.
Previous studies have highlighted various aspects of AI-based autocontouring. Zhai et al. [24] used a self-developed model to assess the acceptance of AI-based autocontouring software in China. Among the 307 respondents, technical resistance was low and the overall perception of AI was positive. However, current usage, fears, and expectations were not captured, possibly because almost 60% of the respondents had not yet used AI-based autocontouring software and almost a quarter were still medical students. Mugabe [26] reported the views of a multidisciplinary group including 15 radiation oncologists from New Zealand; however, this study focused on the impact of AI in general, and only 35% of respondents reported using AI tools for autosegmentation. Brouwer et al. [27] reported on the perception of AI applications in general among 213 medical physicists from 202 radiation oncology centers across Europe. Wong et al. [28] reported on the perceptions of Canadian radiation oncologists, physicists, radiation therapists, and radiation trainees regarding the general impact of AI. To date, only two surveys have truly focused on AI-based autocontouring software: Hindocha et al. [25] reported on the responses of 51 clinical oncologists in the UK, and Bourbonne et al. [20] reported on the perspective of young French radiation oncologists (85% residents).
In our survey, 65.7% of respondents reported using AI-based autocontouring software in routine clinical practice, which is higher than the 45% of respondents in the UK survey [25] and closer to the 60.7% of French respondents [20]. Like ours, these two surveys are not representative: they include varying numbers of respondents per center and did not cover all centers in each country, so no data on the true prevalence of AI use are available.
Nevertheless, these studies suggest that the clinical use of AI technologies in radiation oncology is still in its early stages. A study conducted in New Zealand [26] reported that “AI usage was low” but that, overall, respondents had “a high likelihood to adopt AI.” Similarly, nearly 90% of Turkish radiation oncologists surveyed believed that adopting AI would improve their work [29]. While there is optimism regarding the potential of AI, several barriers and concerns might slow its widespread adoption in clinical routine. A key challenge may be the lack of AI expertise: one survey reported that a quarter of radiation oncologists rated their knowledge of AI as “very poor” and that 94% expressed a need for further training [29]. The New Zealand study likewise identified low familiarity with AI as a barrier that correlated with a lower intention to use AI [26].
In addition, a Canadian survey found that while most healthcare professionals recognize the potential of AI to improve patient care, concerns about job displacement and changing professional roles contribute to some reluctance [28]. Addressing these psychological barriers is critical; raising awareness of AI as a collaborative tool, rather than a replacement threat, can help foster trust and acceptance. Rosenbacke et al. [30] highlight that so-called explainable AI (XAI), which provides clear, clinically relevant explanations, increases clinicians’ trust; their findings emphasize the nuanced role of comprehensive explanations. Consequently, one of the most important strategies for driving AI adoption is adequate education and training, as explicitly requested by 94% of surveyed radiation oncologists [29]. Professional workshops and hands-on training may thus help to demystify AI tools. An acceptance study conducted in China reported that clinicians are more likely to adopt AI if they believe it will significantly improve patient care or their workflow efficiency [24]. Successfully integrating AI into radiation oncology requires addressing both technical limitations and human factors. Overcoming skepticism requires a multifaceted approach: education, training, functional transparency, and guided institutional support are all crucial to promote AI adoption. While early adopters pave the way, late adopters can gain confidence as the benefits of AI become increasingly evident in clinical practice.
In our study, AI-based autocontouring was reportedly used in over 90% of cases for OAR contouring of the brain, head and neck, thorax, abdomen, and pelvis, compared to only 43–67% in the UK survey [25], suggesting that AI-based autocontouring is now increasingly used. An overwhelming 88.8% of our participants reported time savings in OAR delineation, with 41.1% estimating savings of 11–20 min per case and 27.1% reporting even greater time savings of over 20 min. These results are comparable to the 88.7% of young French radiation oncologists [20] who reported savings of 25–100% in segmentation time, highlighting the great potential of AI for revolutionizing the time-consuming task of manual segmentation.
While AI-based autocontouring has demonstrated significant time savings in OAR delineation, its application to target volume segmentation remains limited. In our study, only 56.1% of participants had access to an AI solution capable of automatic target volume segmentation. More notably, among those who used such software, only 40% reported actual time savings in the contouring process. These findings underscore a critical limitation: despite advancements in AI-based autocontouring, it remains less effective and efficient for target volume segmentation. That only about half of the respondents have access to an AI solution for this task, and that the majority of its users do not experience meaningful time savings, highlights an unmet need for more reliable and clinically useful AI-driven target volume segmentation tools. Further development and validation of AI models tailored to target volume contouring are necessary to fully harness the potential of automation in radiotherapy planning. Irrespective of these limitations, an overwhelming 92% of all respondents already consider AI-based autocontouring software solutions helpful, underscoring the technology’s perceived value and its promising role in clinical practice.
Given the widespread appreciation of the benefits of AI-based software, it is not surprising that respondents who added free-text comments highlighted its potential for improving the clinical workflow, addressing staffing shortages, and facilitating the implementation of advanced technologies such as adaptive planning. Others advocated expanding AI applications into additional areas of clinical practice. However, while the integration of AI-based autocontouring software has been largely well received, it is important to recognize its limitations and potential risks. Accordingly, respondents raised concerns about quality assurance, education, and training, and warned of the potential deskilling of clinicians and overreliance on automated systems (for a more detailed analysis of the free-text comments, see Appendix B).
To address these challenges, 60% of respondents would welcome guidelines for the implementation and use of AI-based autocontouring software solutions. Indeed, as early as 2020, Vandewinckele et al. published recommendations for the implementation and quality assurance of AI-based applications in radiotherapy [19]. They recommend, as one of our respondents also noted, forming a dedicated multidisciplinary team to ensure safe and appropriate AI use and to educate the entire team on the use and limitations of AI-based autosegmentation. They propose a two-stage workflow: in the “commissioning phase,” the AI model should be evaluated using an internal dataset; during the “implementation and quality assurance phase,” the implementation team should train and educate all future users in the correct application and interpretation of AI output. Ongoing documentation of necessary changes, regular meetings between the implementation team and users, and regular quality assurance (QA) of AI output performance after successful implementation are also recommended. Importantly, dedicated QA runs should address changes in the overall imaging workflow, e.g., after changes in CT scanners or acquisition protocols [19]. In parallel to our study, Hurkmans et al. published “A joint ESTRO and AAPM guideline for development, clinical validation and reporting of artificial intelligence models in radiation therapy” in 2024 [31]. They emphasize the difficulty of validating AI-based segmentation, especially since defining a gold standard or ground truth segmentation is challenging. They also recommend that, once an appropriate ground truth has been established, a qualitative metric (e.g., a Likert scale) and a quantitative metric (e.g., the Hausdorff distance [32]) be used, together with a time trial, to evaluate the usefulness of the model.
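For context, the Hausdorff distance quantifies the largest local disagreement between two contours: for an AI-generated contour represented by the point set $A$ and a reference (ground truth) contour represented by the point set $B$,
$$\mathrm{HD}(A,B)=\max\Bigl\{\max_{a\in A}\min_{b\in B}\lVert a-b\rVert,\ \max_{b\in B}\min_{a\in A}\lVert a-b\rVert\Bigr\},$$
i.e., the greatest distance from any point on one contour to the closest point on the other, with lower values indicating better geometric agreement. It thus complements overlap-based measures by penalizing even a single large local deviation of the AI contour.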
In our view, both reports address in detail the aspects relevant to reliably developing and clinically validating AI models, e.g., the implementation of skilled teaching and quality control teams. In light of the recently published comprehensive guideline by the joint European and American expert group [31], the development of a valid and reliable working guide for the implementation of clinically used AI tools, as desired by the majority of our study participants, has already made encouraging progress. We therefore encourage all clinicians already using AI-based autocontouring software solutions to share their experiences and concerns in existing and newly formed national and international expert panels. The resulting continuous improvement of consensus guidelines will help radiation oncologists considering the implementation of such automated tools in their clinical routine and will ensure widespread acceptance and safe implementation of AI-based autocontouring software.
Limitations

An important limitation of online surveys is response bias: those who favor AI may be more likely to complete the questionnaire. Responses are inherently self-reported and may not reflect the true usage of AI-based autocontouring. The survey was not designed to provide a representative, complete documentation of the use of AI-based autocontouring solutions in German-speaking radiotherapy clinics and practices, so responses may cluster around the experiences and opinions of larger centers. Thus, our study does not provide representative data on the prevalence of actual AI use and acceptance. In addition, topics related to the data security of AI-based autocontouring software solutions were not explicitly addressed in the questionnaire. Cross-professional comparisons are limited by the markedly unequal numbers of responses from physicists and physicians. Furthermore, although radiation therapists were invited to participate, no DEGRO or ÖGRO representative of this professional group responded, further limiting the generalizability of the findings to all relevant professions using AI-based autocontouring software solutions.