A total of 429 records were identified through the systematic literature search. Following screening and eligibility assessment, 15 studies met the inclusion criteria and were included in the final analysis [6,7,8,9,10,11,12,13,14,15,16,17,18,19,20]. A summary of the selection process is presented in Fig. 1.
Fig. 1 PRISMA flowchart of the article selection process
Characteristics of Included Studies

Across the 15 studies, the populations included both medical oncologists and patients with various types of cancer. The AI interventions comprised generative artificial intelligence such as large language models [6, 14, 16, 18], supervised machine learning such as the random survival forest [15], explainable AI techniques [7], and a machine learning–based web application [9, 10]. The primary impact on therapeutic decision-making was assessed for each study, and the studies were subsequently classified into thematic groups for the analysis of ethical, legal, and informed consent challenges. All of the chosen papers addressed AI-related ethical issues, but only eleven covered legal concerns [7,8,9,10,11,12,13, 15, 16, 19, 20] and ten the informed consent process [6, 8,9,10,11,12,13, 15,16,17]. Table 1 summarizes the key characteristics of the included studies; Table 3 (Supplementary) presents these aspects in greater detail.
Enhancing Treatment Recommendations and Decision Support

AI models have exhibited strong performance in aligning with expert recommendations for cancer treatment. In the work of Lazris et al., 2024, ChatGPT-3.5 provided case-specific treatment recommendations in 81% of cases, with an overall treatment strategy concordance of 83%. However, exact treatment plan concordance was lower (65%), with challenges in chemotherapy regimen recommendations and follow-up protocols. Similarly, AI-based models demonstrated high sensitivity and accuracy in classifying cancer risk, supporting clinicians in making informed treatment decisions [16].
Ng et al., 2023 described how they designed ADBoard to improve decision-making in multidisciplinary meetings (MDMs). Their tool ensured the completeness of patient information and enhanced the explainability of decision protocols. The adoption of AI in MDMs is expected to streamline workflows, reduce administrative burdens, and allow clinicians to focus on complex cases [10].
Action point: Future practices should focus on integrating AI precision in chemotherapy recommendations, enhancing multidisciplinary decision transparency, and streamlining clinical workflows for oncology treatment support.
Optimizing Drug Dosing and Precision Medicine

AI-driven platforms have shown promise in personalized dose optimization. The CURATE.AI platform successfully provided individualized dosing recommendations, aiming to enhance therapeutic decision-making by optimizing drug administration. Tan et al., 2021 conducted a pilot study to evaluate its viability by monitoring the timeliness of dose recommendations and the frequency of clinically significant dose modifications, suggesting feasibility for broader implementation in precision oncology. Secondary outcomes included physician adherence to AI-recommended doses. Furthermore, AI-assisted molecular tumor boards (MTBs) demonstrated close alignment with ideal treatment plans (consensus annotations), often outperforming conventional MTB recommendations. The AI model facilitated standardized MTB discussions, reduced physician workload, and supported precision oncology frameworks that could be expanded globally [6, 14].
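The dose-individualization principle behind platforms of this kind — fitting a small patient-specific dose-response relationship and selecting the next dose from it — can be sketched in a few lines. The snippet below is a simplified, hypothetical illustration, not the proprietary CURATE.AI algorithm: the quadratic fit, the biomarker target, and all numeric values are assumptions chosen for demonstration only.

```python
import numpy as np

def recommend_dose(doses, biomarker, target, dose_grid):
    """Hypothetical sketch of individualized dosing: fit a quadratic
    dose-response curve to ONE patient's own historical (dose, biomarker)
    pairs, then return the candidate dose whose predicted biomarker
    level lies closest to the desired target."""
    coeffs = np.polyfit(doses, biomarker, deg=2)   # patient-specific curve
    predicted = np.polyval(coeffs, dose_grid)      # predict over candidates
    return dose_grid[np.argmin(np.abs(predicted - target))]

# Synthetic example: three prior treatment cycles for one patient.
doses = np.array([200.0, 400.0, 600.0])       # administered doses (mg)
biomarker = np.array([80.0, 55.0, 60.0])      # observed response marker
grid = np.linspace(100, 800, 71)              # candidate doses, 10 mg steps
print(recommend_dose(doses, biomarker, target=50.0, dose_grid=grid))
```

The key design point, mirrored from the published descriptions of such platforms, is that the curve is calibrated only on the individual patient's own data rather than on population averages.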
Action point: To effectively formulate precision medicine treatment recommendations, MTBs must navigate the vast array of molecular alterations and immune markers unique to each patient's tumor. Integrating AI and ML into MTB workflows could significantly enhance accessibility, facilitate standardized implementation, and enable routine adoption even in smaller clinical centers.
Improving Treatment Adherence and Patient Management

Masiero et al., 2024 found that AI-driven clinical decision support systems (DSS) could improve adherence to oral anticancer treatments by predicting adherence behavior [11]. An ML-integrated DSS was evaluated for its effectiveness in promoting medication adherence, with secondary objectives of identifying new predictive variables to refine adherence behavior models. The study anticipates that improved adherence will positively influence clinical outcomes and reduce the economic burden of nonadherence. Similarly, AI was utilized to support personalized patient follow-up strategies, enhancing engagement and optimizing clinician decision-making regarding supportive care interventions. These AI-assisted approaches are expected to improve quality of life by reducing treatment-related toxicity and unnecessary interventions [12].
Beyond direct therapeutic decision-making, AI has shown potential in predicting nonvisible symptoms relevant to palliative care. AI models achieved varying levels of accuracy (55.5%–88.0%) in predicting symptoms such as pain, dyspnea, fatigue, delirium, and anxiety. These predictive capabilities could help optimize symptom assessment and management, assisting clinicians in delivering early interventions for patients requiring supportive and palliative care [17].
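As a schematic illustration of the kind of symptom-prediction model evaluated in this literature, the snippet below trains a plain-NumPy logistic regression on synthetic data and reports its classification accuracy, the same metric behind the 55.5%–88.0% range cited above. The features, labels, and resulting figure are invented for demonstration and do not reproduce any of the reviewed models.

```python
import numpy as np

# Hypothetical sketch: a logistic-regression symptom classifier mapping
# routine clinical variables to a binary label (1 = symptom present).
rng = np.random.default_rng(42)
n = 400
X = rng.normal(size=(n, 3))          # e.g. scaled lab value, ECOG, age
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.8, size=n) > 0).astype(float)

w, b = np.zeros(3), 0.0
for _ in range(2000):                # batch gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(symptom)
    w -= 0.5 * (X.T @ (p - y) / n)
    b -= 0.5 * np.mean(p - y)

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
acc = np.mean((p > 0.5) == (y == 1))         # fraction predicted correctly
print(f"accuracy: {acc:.2f}")
```

In practice, the reviewed models were evaluated per symptom (pain, dyspnea, fatigue, delirium, anxiety), which is why a range of accuracies, rather than a single figure, is reported.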
Action point: AI models could help shape multiple different patient profiles to assist the medical team in better identifying individual issues, needs and expectations that, once addressed, might improve treatment results beyond the generally predicted values obtained from clinical trials and real-world registries.
Challenges and Considerations in AI-Driven Therapeutic Decision-Making

While AI has demonstrated potential in optimizing therapeutic decisions, several concerns remain. Clinicians acknowledged AI’s role in validating their prognostic judgments and guiding treatment discussions but expressed reservations regarding AI accuracy, variability in patient responses, and the risk of over-reliance on algorithmic predictions [8].
According to Stalp et al., 2024, ChatGPT’s treatment recommendations were generally sufficient, with high accuracy observed in HER2-positive breast cancer cases and better performance in primary, non-complicated scenarios, particularly for chemotherapy [18]. However, its limitations became evident in complex and postoperative cases, where it struggled to provide proper chronological treatment sequences and precise recommendations, underscoring the need for human oversight in intricate therapeutic decisions.
Li et al., 2024 conducted a survey study to explore Chinese oncologists' perspectives on integrating AI into clinical practice. Cancer care providers expressed concerns about AI integration, particularly regarding its potential to mislead diagnoses and treatments, overreliance on its recommendations, and risks related to data security, algorithm bias, and patient privacy. Additionally, many highlighted the slow adaptation of regulations to AI advancements. Views on AI’s impact on the doctor-patient relationship were mixed, with some fearing increased disputes, and opinions on whether AI might replace physicians remained divided, with no clear consensus [20].
In their qualitative study, Hesjedal et al., 2024 captured a range of opinions regarding AI's role in therapeutic decision-making. Scientists pointed out the need for robust validation and high-quality training datasets, while clinicians underscored the necessity of human oversight in AI-generated recommendations. Patients exhibited caution, expressing trust in physicians to ensure AI's reliability and safety in clinical practice. Ethical and regulatory concerns, including data security, algorithm bias, and AI's potential impact on the doctor-patient relationship, were also highlighted [19].
Evaluations are most often carried out retrospectively on small datasets that may not accurately represent the language, demographic and epidemiological characteristics of patients encountered in clinical practice.
Action point: Provide the training and education healthcare professionals need to utilize AI tools effectively in oncology settings, ensuring they have the skills to interpret and implement AI recommendations accurately.
Ethical, Legal, and Informed Consent Challenges

The ethical integration of AI in oncology decision-making is complex, requiring careful attention to transparency, fairness, and interpretability. One of the most pressing ethical concerns is the "black-box" nature of many ML models, which can generate treatment recommendations without clearly explaining their reasoning. This lack of interpretability challenges the principle of patient autonomy, as clinicians may struggle to justify AI-driven decisions, and patients may find it difficult to make informed choices about their care. Additionally, bias in AI models remains a major ethical issue. Given that training data often come from specific populations, AI-driven decision support tools may not generalize well to diverse patient groups, potentially exacerbating existing healthcare disparities. If these biases are not systematically addressed, AI could reinforce inequities rather than mitigate them. Another ethical dilemma is the potential psychological impact of AI-driven prognostic tools. AI models predicting survival or treatment outcomes must be used with caution, as disclosing algorithm-derived mortality estimates to patients might cause distress or even inadvertently influence clinical decision-making in ways that deprive patients of beneficial therapies. Ensuring ethical AI deployment in oncology requires rigorous validation, continuous bias assessment, and a commitment to human oversight in decision-making.
The legal landscape surrounding AI-driven decision-making in oncology remains unclear, posing significant medico-legal risks. A central concern is liability: if an AI system provides incorrect or harmful treatment recommendations, it is unclear whether responsibility falls on the clinician using the AI, the institution implementing the technology, or the developers who trained the algorithm. Existing regulatory frameworks, including those governing AI-based medical devices, have yet to fully address these liability concerns.
Additionally, compliance with data privacy regulations, such as the General Data Protection Regulation (GDPR) and Health Insurance Portability and Accountability Act (HIPAA), presents another legal hurdle. AI systems often rely on large, multicenter datasets for training, which necessitates robust safeguards to ensure data de-identification and secure storage. However, retrospective data collection, especially across multiple institutions, complicates adherence to these regulations. Furthermore, intellectual property rights and data ownership raise legal ambiguities, particularly when AI models are trained using proprietary clinical data. Striking a balance between protecting patient data and enabling AI-driven innovation requires ongoing legal scrutiny, ensuring that regulatory adaptations keep pace with technological advancements.
The use of AI in oncology decision-making presents unique challenges to the informed consent process, as patients may not fully understand the implications of AI-driven recommendations. Traditionally, informed consent relies on clear communication of treatment options, risks, and expected outcomes. However, AI's involvement introduces an additional layer of complexity, as patients must also be informed about how AI systems function, their limitations, and the extent to which they influence clinical decisions. A key issue is the potential for "automation bias," in which both patients and clinicians over-rely on AI recommendations without critically evaluating their validity. This phenomenon raises concerns about whether consent obtained under such circumstances truly meets ethical and legal standards. Additionally, when AI systems generate treatment plans that deviate from established guidelines, there is a need for explicit disclosure to ensure patients remain adequately informed. In clinical trials incorporating AI-assisted decision-making, standardized consent protocols must be developed to explicitly outline AI’s role and potential risks. Addressing these challenges requires educational initiatives for both clinicians and patients, ensuring that AI-driven recommendations do not undermine the principles of transparency, autonomy, and shared decision-making in oncology care.
Action point: Discuss the challenges of obtaining informed consent from patients when using AI technologies in oncology. Propose ways to improve transparency and communication between providers and patients by informing them of the decision-making process, the potential consequences, and their right to contest AI-influenced decisions.