Ethical Challenges and Opportunities of AI in End-of-Life Palliative Care: Integrative Review


Introduction

Background

The review focused on studies published between 2020 and 2025 to capture the most recent advances in artificial intelligence (AI) technologies and their application in clinical practice, as the field has evolved rapidly in the last 5 years []. This approach ensures the relevance of findings to current and emerging ethical challenges.

AI is a field within computer science that develops systems capable of performing tasks that simulate human capabilities, such as learning, reasoning, and decision-making. Within this field, machine learning (ML) allows algorithms to learn from data without explicit programming, while deep learning, a subset of ML, uses deep neural networks to analyze large volumes of information and generate accurate predictions.

AI significantly transforms the traditional health care paradigm toward an evidence-based and patient-centered model. Its application in areas such as the anticipation of complications, the personalization of treatments, and the optimization of resources has proven to be a key catalyst for improving the quality and efficiency of medical care [].

Palliative medicine has also begun to benefit from the transformative potential of these technologies. This type of care, aimed at patients with advanced or terminal illnesses, seeks to alleviate physical, emotional, and spiritual suffering while seeking to improve quality of life and promote dignity in the final moments []. Palliative care encompasses a wide range of conditions, including advanced-stage oncological diseases (metastatic lung, breast, or pancreatic cancer) and nononcological illnesses such as neurodegenerative disorders (amyotrophic lateral sclerosis and late-stage Parkinson disease), end-stage organ failures (heart, lung, or renal disease), and severe respiratory conditions (chronic obstructive pulmonary disease). These patients, regardless of their specific diagnosis, share common needs: symptom relief, emotional support, and dignity preservation as they approach the end of life. The integration of AI in this sensitive context must, therefore, address the heterogeneity of these conditions while upholding ethical principles.

AI in palliative medicine includes tools, such as predictive models, to identify specific needs, wearable devices to monitor symptoms in real time, and virtual assistants that facilitate communication between patients, carers, and professionals. These innovations promise to improve clinical outcomes and enrich the patient experience by offering more personalized approaches []. However, their implementation poses significant ethical challenges due to the inherent vulnerability of patients and the complexity of end-of-life decisions. Thus, when we apply AI in palliative care, we must ensure that these tools do not reduce patients to mere actionable data but reinforce their humanity and dignity, honoring their individuality and right to compassionate and ethically informed care. Furthermore, it is crucial to consider how these technologies may affect human dignity and avoid a possible dehumanization of care []. In response to these concerns, various institutions have developed ethical guidelines to evaluate and regulate the responsible use of AI-based systems in sensitive contexts.

Despite enthusiasm for AI’s transformative potential, significant barriers to its widespread adoption in clinical practice remain. The lack of clear regulatory frameworks and consolidated examples of success highlights the urgent need for integrative research that addresses both the opportunities and the ethical and practical limitations of using AI in palliative care. As Miralles [] pointed out in 2023, although multiple promising areas for applying AI in health care have been identified, few consolidated cases have achieved effective adoption in real clinical environments.

In recent years, various ethical self-assessment tools have been developed to verify the suitability of a system against different ethical principles. Examples include the Ethics Guidelines for Trustworthy AI [], the Draft Recommendation on the Ethics of Artificial Intelligence [], the Barcelona Declaration [], the AI Ethics Impact Group’s From Principles to Practice [], Technical Methods for Regulatory Inspection of Algorithmic Systems on Social Networking Platforms [], and the Organisation for Economic Co-operation and Development (OECD) Framework for the Classification of AI Systems [], among others.

In line with international frameworks such as the Institute of Medicine (IOM) and the OECD, this review adopts a multidimensional understanding of “quality of care,” which includes safety, effectiveness, patient-centeredness, timeliness, efficiency, and equity as interrelated domains. While “efficiency” is thus an integral component of overall quality, for analytical clarity, we will at times refer to efficiency as system-level performance (resource optimization and process automation) and quality as patient-centered outcomes (dignity and symptom relief). This distinction, while recognizing their overlap, allows us to examine the specific effects of AI on both system operations and patient experience in palliative care.

Theoretical Justification

Ethical reflection on palliative care and AI is rooted in classical and contemporary philosophy.

In his Nicomachean Ethics [], Aristotle posits the notion of the “good life” as the realization of the highest human capacities through virtue, wisdom, and justice. From this perspective, a “good death” implies respecting the dignity and well-being of the patient even at the end of life.

For his part, Immanuel Kant, in the Foundation of the Metaphysics of Morals [], argues that human dignity is an intrinsic and inalienable value, which precludes treating people as mere means to an end, even in medical or technological contexts; this requires that any intervention, including the application of AI, respects the autonomy and inherent value of each patient.

Finally, Emmanuel Lévinas, in Totality and Infinity [], introduces the ethics of otherness, which stresses the importance of recognizing and preserving the uniqueness of the other. This approach is particularly relevant in palliative care, where care must focus on the individuality and dignity of the patient, avoiding technological reductionism that can depersonalize the end-of-life experience.

These 3 philosophical frameworks provide a sound basis for critically analyzing the opportunities and ethical challenges of integrating AI in palliative care.

We hypothesize that the application of AI in palliative medicine simultaneously offers significant opportunities for personalizing care and presents ethical risks that may compromise patient dignity. This study seeks to explore and examine this hypothesis through an integrative analysis of the recent literature.

Objectives

We aim to examine current and potential applications of AI in palliative care: in this integrative review of the literature and recent cases, we will attempt to identify how AI is being used in end-of-life care, including tools for symptom management, clinical decision support, and communication between professionals, patients, and families.

We aim to analyze the ethical implications of using AI in palliative care: we will investigate the ethical dilemmas arising from the integration of intelligent technologies in this field, such as privacy and handling of sensitive data, patient autonomy, equity in access to technology, and the possibility of depersonalizing care.

We aim to assess the impact of AI on patient experience and dignity at the end of life: how the presence of AI influences the perception of quality of life, respect for dignity, and satisfaction of the emotional and spiritual needs of patients and their families.

We aim to propose recommendations for the ethical implementation of AI in palliative care.
Methods

Study Design

This research was carried out as an integrative review, which allows for synthesizing information from various study designs and offers a wide viewpoint on a challenging subject. The review included studies published in Spanish, Portuguese, and English between 2020 and January 2025. The scientific databases consulted were PubMed, Scopus, and Google Scholar, chosen for their coverage of the biomedical and technological domains. Given the rapid development of the field over the past 5 years, we chose to focus on this recent period to ensure the inclusion of the most up-to-date and relevant advancements in AI applied to palliative medicine. This decision is supported by recent systematic reviews documenting a significant increase in the number and scope of published studies in this area [].

Search and Selection Process

The search strategy was designed to ensure a rigorous and systematic approach. Keywords such as “artificial intelligence,” “palliative care,” “palliative medicine,” “medical ethics,” “machine learning,” and related combinations were used (). The search process followed the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses; ) guidelines, which provide a standardized framework for transparent and comprehensive reporting, including a 27-item checklist and a 4-phase flow diagram to document the identification, screening, eligibility, and inclusion of studies according to Page et al [].

Study eligibility was assessed in 2 stages:

Title and abstract screening: Two reviewers screened all titles and abstracts for relevance.

Full-text review: The same reviewers independently assessed the full texts of potentially eligible studies.

Two reviewers independently conducted both the study selection and the quality assessment processes. Any discrepancies between reviewers were resolved through consensus discussions or, if necessary, by consulting a third reviewer.

Inclusion criteria were (1) studies published between 2020 and 2025, (2) research addressing the use of AI in palliative medicine, and (3) articles analyzing ethical implications or the patient experience in this context.

Exclusion criteria were (1) studies not explicitly focused on palliative medicine, (2) research lacking ethical analysis or patient experience analysis, and (3) duplicate or non–peer-reviewed publications.

The PRISMA flow diagram summarizes the study selection process, including the number of records identified, screened, excluded, and included at each stage.

Data Extraction

Data extraction was performed independently by 2 reviewers (AGA and ASV). Any discrepancies were discussed and resolved collaboratively. Extracted data included study design, population, AI application, ethical focus, primary findings, and limitations.

Quality Appraisal

The quality of the included studies was assessed using 2 tools to ensure methodological rigor. Two reviewers performed the quality assessment independently, with discrepancies resolved by consensus.

The Critical Appraisal Skills Programme (CASP) checklist [] was applied as a complementary tool to further assess each study’s methodological quality and transparency ().

Table 1. Critical appraisal of included studies using CASPa checklist.

Study area and author (year) | Key contribution | CASP rating

Prediction and clinical decision-making
Balch et al (2024) [] | Review of AIb for predicting PROc measures | Medium
Strand et al (2024) [] | AI/MLd model to identify hospitalized patients with cancer needing palliative care | High
He et al (2024) [] | Effective in targeting palliative support | High
Liu et al (2023) [] | Accurate prediction of short-term mortality | Medium
Heinzen et al (2023) [] | Improved early referral to palliative care | High
Morgan et al (2022) [] | AI improved early identification | High
Porter et al (2020) [] | Critical reflection on risks or opportunities of AI prediction in palliative care | Medium

Symptom management and quality of life
Salama et al (2024) [] | A systematic review of AI/ML in cancer pain management | High
Lazris et al (2024) [] | Comparison of AI-generated content (ChatGPT) vs NCCNe guidelines for cancer symptoms | Medium
Ott et al (2023) [] | Impact of smart sensors on the “total care“ principle in palliative care | Medium
Deutsch et al (2023) [] | Improved monitoring and better symptom tracking | High
Yang et al (2021) [] | Wearables and ML to predict 7-day mortality in terminal cancer | Medium

Communication and emotional support
Gondode et al (2024) [] | Performance of AI chatbots (ChatGPT vs Gemini) in palliative care education | Medium
Srivastava and Srivastava (2023) [] | GPT-3’s potential to improve palliative care communication | Medium

Process automation and modeling
Reason et al (2024) [] | LLMsf for automating economic modeling in health care | Low
Kamdar et al (2020) [] | Debate on AI’s future in palliative care or hospice (benefits vs risks) | Medium
Windisch et al (2020) [] | AI’s role in improving the timing or quality of palliative interventions | Medium

Ethics and challenges
See (2024) [] | AI as an ethical advisor in clinical contexts | Medium
Adegbesan et al (2024) [] | Ethical challenges of AI integration in palliative care | High
Ranard et al (2024) [] | Minimizing algorithmic biases in critical care via AI | High
De Panfilis et al (2023) [] | Framework for future policy | High
Ferrario et al (2023) [] | Ethics of algorithmic prediction of end-of-life preferences | High
Meier et al (2022) [] | Framework for ethical algorithm-based clinical decisions | Medium

Research and review of advances
Bozkurt et al (2024) [] | Protocol for assessing AI data diversity in palliative care | Medium
Macheka et al (2024) [] | Prospective assessment of AI in postdiagnostic cancer care; high feasibility and good patient feedback | High
Vu et al (2023) [] | Systematic review of ML applications in palliative care | High
Reddy et al (2023) [] | Review of AI advances for palliative cancer care | Medium
Barry et al (2023) [] | Challenges for evidence-based palliative care delivery | Medium
Chua et al (2021) [] | Path to AI implementation in oncology; pragmatic roadmap created | Medium

aCASP: Critical Appraisal Skills Programme.

bAI: artificial intelligence.

cPRO: patient-reported outcome.

dML: machine learning.

eNCCN: National Comprehensive Cancer Network.

fLLM: large language model.

The Hawker et al (2002) [] checklist allows for systematic evaluation across diverse research designs. Each study was scored independently across 11 domains: clarity of purpose, study design, methodology, sampling, data analysis, ethical implications, relevance, transferability, results, discussion, and theoretical basis. Scores range from 1 (very poor) to 4 (good) for each domain; an overall quality score was calculated as the mean of all domain scores ().
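As a minimal illustration of this scoring scheme (the domain scores below are hypothetical and not taken from any included study), the overall quality score is simply the mean of the 11 domain ratings:

```python
# Hawker et al (2002)-style quality appraisal: each of the 11 domains is
# rated from 1 (very poor) to 4 (good); overall quality is the mean rating.
DOMAINS = [
    "clarity of purpose", "study design", "methodology", "sampling",
    "data analysis", "ethical implications", "relevance", "transferability",
    "results", "discussion", "theoretical basis",
]

def overall_quality(scores):
    """Return the mean of the 11 domain scores, rounded to 2 decimals."""
    if len(scores) != len(DOMAINS):
        raise ValueError(f"expected {len(DOMAINS)} domain scores")
    if not all(1 <= s <= 4 for s in scores):
        raise ValueError("each domain score must lie between 1 and 4")
    return round(sum(scores) / len(scores), 2)

# Hypothetical example: a study rated 'good' on purpose, weaker on sampling.
print(overall_quality([4, 3, 3, 2, 3, 3, 3, 4, 4, 3, 3]))  # 3.18
```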

Table 2. Methodological quality assessment of all included studies according to the criteria of Hawker et al []: 1=very poor, 2=poor, 3=fair, and 4=good. Domain order: clarity of purpose, design, methodology, sampling, analysis, ethical implications, relevance, transferability, results, discussion, theoretical basis | overall quality.

Balch et al []: 4 4 4 2 3 3 3 4 4 3 3 | 3.27
Strand et al []: 4 3 3 2 3 3 3 4 4 3 3 | 3.09
He et al []: 2 3 3 2 3 3 3 4 4 3 3 | 2.91
Liu et al []: 4 3 3 2 3 3 3 4 4 3 3 | 3.09
Heinzen et al []: 4 3 3 2 3 3 3 4 4 3 3 | 3.09
Morgan et al []: 4 3 2 3 3 3 3 4 4 3 3 | 3.09
Porter et al []: 2 3 3 2 3 3 3 4 4 3 3 | 2.91
Salama et al []: 4 3 3 2 3 3 3 4 4 3 3 | 3.09
Lazris et al []: 4 3 3 2 3 3 3 4 4 3 3 | 3.09
Ott et al []: 4 3 3 2 3 3 3 4 4 3 3 | 3.09
Deutsch et al []: 2 3 3 2 3 3 3 4 4 3 3 | 2.91
Yang et al []: 4 3 3 2 3 3 3 4 4 3 3 | 3.09
Gondode et al []: 4 3 3 2 3 3 3 4 4 3 3 | 3.09
Srivastava and Srivastava []: 4 3 3 2 3 3 3 4 4 3 3 | 3.09
Reason et al []: 4 3 3 2 3 3 3 4 4 3 3 | 3.09
Kamdar et al []: 2 3 3 2 3 3 3 4 4 3 3 | 2.91
Windisch et al []: 4 3 3 2 3 3 3 4 4 3 3 | 3.09
See []: 4 3 3 2 3 3 3 4 4 3 3 | 3.09
Adegbesan et al []: 4 3 3 2 3 3 3 4 4 3 3 | 3.09
Ranard et al []: 2 3 3 2 3 3 3 4 4 3 3 | 2.91
De Panfilis et al []: 4 3 3 2 3 3 3 4 4 3 3 | 3.09
Ferrario et al []: 4 3 3 2 3 3 3 4 4 3 3 | 3.09
Meier et al []: 4 3 3 2 3 3 3 4 4 3 3 | 3.09
Bozkurt et al []: 2 3 3 2 3 3 3 4 4 3 3 | 2.91
Macheka et al []: 4 3 3 2 3 3 3 4 4 3 3 | 3.09
Vu et al []: 4 3 3 2 3 3 3 4 4 3 3 | 3.09
Reddy et al []: 4 3 3 2 3 3 3 4 4 3 3 | 3.09
Barry et al []: 2 3 3 2 3 3 3 4 4 3 3 | 2.91
Chua et al []: 4 3 3 2 3 3 3 4 4 3 3 | 3.09

Thematic Analysis

Extracted data were thematically coded and grouped into 6 key categories—prediction and clinical decision-making, symptom management and quality of life, communication and emotional support, process automation and modeling, ethical implications, and research and review of advances in AI []—reflecting the main areas presented in the Results section.

Ensuring Methodological Rigor

To maximize the reliability and validity of the review:

Triangulation: findings were compared across studies to identify consistent patterns.

Peer review: the methodology was reviewed by researchers with expertise in bioethics and AI.

Critical evaluation: each study was assessed for quality, relevance, and validity using the criteria above.

This process allowed us to identify strengths, limitations, and potential biases in the included studies [].


Results

Overview

This section presents a thematic synthesis of the main findings regarding the ethical and practical implications of AI in palliative medicine at the end of life. The 29 included studies, published between 2020 and January 2025, covered various clinical contexts, populations, and AI applications. The results are structured in 6 key areas identified in the literature ().

Figure 1. PRISMA 2020 flow diagram illustrating the study selection process. The diagram shows the number of records identified, duplicates removed, records screened, full-text papers assessed, and studies included in the review. Adapted from the PRISMA 2020 Statement [].

The following tables provide a comprehensive synthesis of the 29 studies included in this review.

In Table 1, we present the ratings obtained for the methodological quality of each study according to the CASP checklist. Two independent reviewers rated each item as “yes,” “no,” or “unclear,” and the percentage of responses was calculated to assign an overall rating of high, medium, or low.
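The aggregation step described above can be sketched as follows; because the exact percentage cut-offs are not reported in the text, the thresholds below (at least 75% “yes” for high, at least 50% for medium) are illustrative assumptions only:

```python
# Sketch of the CASP aggregation step: each checklist item is answered
# "yes", "no", or "unclear"; the share of "yes" answers determines the
# overall rating. The cut-offs are assumed for illustration.
def casp_rating(answers):
    yes_pct = 100 * sum(a == "yes" for a in answers) / len(answers)
    if yes_pct >= 75:          # assumed cut-off
        return "high"
    if yes_pct >= 50:          # assumed cut-off
        return "medium"
    return "low"

print(casp_rating(["yes"] * 8 + ["unclear", "no"]))  # 80% yes -> high
print(casp_rating(["yes"] * 5 + ["no"] * 5))         # 50% yes -> medium
```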

Table 2 shows the quality scores in 11 domains using the Hawker et al [] instrument. Two reviewers rated each domain from 1 (very poor) to 4 (good), and we report both the individual domain scores and the overall mean quality score for each study.

Table 3 summarizes the key characteristics of the included studies. For each study, we list the author, year, country, design, AI/ML application, population or setting, study objective, and principal findings, and group them according to the 6 thematic areas identified in our review.

The analysis of the included studies revealed 6 key thematic areas in the application of AI in palliative care. These thematic areas provide a comprehensive overview of the current landscape and highlight both the opportunities and challenges presented by AI in this field.

Table 3. Characteristics of included studies. The main characteristics of the included studies are grouped by thematic area, design, and key findings.

Author (year) | Country | Study design | AIa application | Population or context | Study aim | Key findings

Balch et al (2024) [] | United States | Review | AI for predicting PROsb | Patients with advanced cancer | Explore the use of AI in predicting PROs | AI shows potential but lacks validation
Strand et al (2024) [] | United States | MLc model development | Mortality prediction tool | Hospitalized patients with cancer | Develop a model to identify palliative needs | High predictive value for end-of-life care
He et al (2024) [] | United States | Cohort study | ML for palliative consultation allocation | Patients with cancer | Assign consultations based on predicted need | Effective in targeting palliative support
Liu et al (2023) [] | Taiwan | Observational | Wearables and ML | Patients with terminal cancer | Predict mortality risk in real time | Accurate prediction of short-term mortality
Heinzen et al (2023) [] | Germany | RCTd | ML timing intervention | Primary care | Assess the impact on care timing | Improved early referral to palliative care
Morgan et al (2022) [] | United States | RCT | AI prediction of care needs | Advanced cancer | Evaluate AI vs traditional triage | AI improved early identification
Porter et al (2020) [] | United Kingdom | Critical reflection | Ethical analysis of prediction | General palliative care | Reflect on risks and values in AI prediction | Warns about the dehumanization risk
Salama et al (2024) [] | United States | Systematic review | ML in pain management | Patients with cancer | Evaluate AI effectiveness in pain treatment | Supports the integration of AI tools
Lazris et al (2024) [] | United States | Comparison study | ChatGPT vs NCCNe | Cancer symptom guidance | Evaluate content quality | AI is aligned with guidelines in most areas
Ott et al (2023) [] | Germany | Observational | Smart sensors for monitoring | Palliative care patients | Assess the “total care” principle using technology | Improved monitoring and better symptom tracking
Deutsch et al (2023) [] | Germany | Observational | ML for PROs monitoring | Metastatic cancer | Track patient outcomes | Improved reporting and early alerts
Yang et al (2021) [] | China | Cohort study | Wearables and ML | Terminal cancer | Predict 7-day mortality | High predictive accuracy
Gondode et al (2024) [] | United States | Comparative study | ChatGPT vs Gemini in education | Health care professionals | Compare chatbot effectiveness | Both tools are effective; Gemini is more accurate
Srivastava and Srivastava (2023) [] | India | Exploratory | GPT-3 communication support | General palliative population | Examine AI in patient-clinician dialogue | Potential for improving conversations
Reason et al (2024) [] | United Kingdom | Implementation study | LLMsf for economic modeling | Palliative systems planning | Automate health economic models | LLMs reduce the workload but need oversight
Kamdar et al (2020) [] | United States | Debate or commentary | AI in palliative care | General palliative systems | Debate AI’s pros and cons | Highlights opportunities and ethical risks
Windisch et al (2020) [] | Germany | Case study | AI-enhanced timing | Hospital-based care | Improve intervention timing | Faster, more targeted responses
See (2024) [] | United States | Qualitative | AI as an ethical advisor | Oncology settings | Evaluate AI-generated ethical suggestions | Useful but lacking nuance
Adegbesan et al (2024) [] | Nigeria | Thematic analysis | Ethical challenges of AI | Low-resource settings | Explore equity and justice issues | AI raises equity concerns
Ranard et al (2024) [] | United States | Technical study | Bias minimization algorithms | Critical care AI | Reduce bias in predictions | The algorithm reduced disparities in results
De Panfilis et al (2023) [] | Italy | Conceptual framework | Ethical issues in AI | Palliative care context | Define ethical concerns | Framework for future policy
Ferrario et al (2023) [] | United Kingdom | Ethical analysis | Predictive systems for end-of-life | Hospice settings | Assess algorithmic risks | Need for transparent systems
Meier et al (2022) [] | United States | Framework proposal | AI clinical decision support | Advanced illness patients | Propose ethical guidance | Applicable for clinical protocol design
Bozkurt et al (2024) [] | Turkey | Protocol | Diversity metrics in data | Mixed cancer cohorts | Establish a framework for diversity | Supports inclusive data use
Macheka et al (2024) [] | Zimbabwe | Prospective evaluation | AI in postdiagnosis care | Rural patients with cancer | Evaluate implementation outcomes | High feasibility and good patient feedback
Vu et al (2023) [] | Switzerland | Systematic review | ML in palliative care | Various populations | Map applications and outcomes | AI is growing in scope and evidence
Reddy et al (2023) [] | India | Narrative review | AI for palliative oncology | Patients with cancer | Summarize recent AI use | Progress is seen, but fragmented evidence
Barry et al (2023) [] | United States | Survey | Evidence-based palliative AI | Clinicians and patients | Identify barriers to adoption | Concerns about data quality and trust
Chua et al (2021) [] | Singapore | Implementation framework | AI in oncology | Urban hospitals | Design path for AI adoption | Pragmatic roadmap created

aAI: artificial intelligence.

bPRO: patient-reported outcome.

cML: machine learning.

dRCT: randomized controlled trial.

eNCCN: National Comprehensive Cancer Network.

fLLM: large language model.

Prediction and Clinical Decision-Making

AI has demonstrated significant potential in supporting clinical decision-making and anticipating patient needs in palliative care. For example, Strand et al [] developed a machine learning model that more accurately identified hospitalized patients with cancer who could benefit from specialized palliative care, outperforming traditional approaches. Similarly, Salama et al [] and Liu et al [] reported that AI tools can help personalize pain management and predict imminent terminal events, allowing for more timely interventions. However, Porter et al [] cautioned that excessive reliance on algorithmic predictions, especially when models lack transparency or interpretability, may undermine human sensitivity and clinical judgment in complex palliative care scenarios.

Symptom Management and Quality of Life

Effective symptom management and quality of life improvement are central in palliative medicine, and AI tools are being tested in oncological and nononcological populations. AI applications have facilitated individualized pain control and symptom monitoring for patients with cancer []. In noncancer contexts, Ott et al [] described using smart sensors for real-time symptom tracking in neurodegenerative diseases. However, studies such as that by Deutsch et al [] highlighted the risk of bias when training datasets underrepresent specific populations (patients without cancer or minority groups), potentially limiting the generalizability and fairness of AI-driven symptom management.

Communication and Emotional Support

AI-based communication tools, including chatbots and natural language processing models, have been explored to support interactions between professionals, patients, and families. Gondode et al [] and Srivastava and Srivastava [] analyzed large language models such as GPT-3 to facilitate information delivery and emotional support. However, these tools often reflect Western bioethical principles and may not adapt well to cultural contexts where family-centered decision-making or gradual truth disclosure is preferred []. Several studies reported that AI trained on Anglo-Saxon datasets may misinterpret emotional cues or cultural preferences, underlining the need for culturally sensitive models and community co-design.

Process Automation and Modeling

AI process automation can optimize resource allocation and improve efficiency in palliative care. Reason et al [] demonstrated how large language models can automate economic modeling, potentially reducing costs and improving service access. Windisch et al [] emphasized the benefits of AI in improving the timing and quality of palliative interventions. However, Kamdar et al [] stressed the importance of maintaining a patient-centered approach, even in highly automated environments.

Ethical Implications

The ethical challenges of AI in palliative medicine are complex and multifaceted. Ferrario et al [] analyzed the need for transparency and accountability in algorithmic prediction of end-of-life preferences. Ranard et al [] addressed the risks of algorithmic bias, especially when models are trained on unrepresentative data, which can lead to inequitable care decisions. Finally, See [] explored the potential of AI as an ethical advisor but noted that this application is still in its early stages and requires further research.

Regarding the ethical design of data-driven decision support tools, Bak et al [] discuss the importance of considering ethical principles, such as algorithmic fairness and privacy, in the development of decision support tools in oncology, with direct implications for palliative medicine.

Similarly, Balch et al [] emphasize the importance of considering ethical principles, such as algorithmic fairness and privacy, when developing decision support tools in oncology, with direct implications for palliative medicine.

Ethical considerations also extend to cancer chatbots. In their study, Chow et al [-] address the need for transparency and informed consent in the use of AI-based chatbots in cancer care, emphasizing the risks of dehumanization and loss of trust.

Research and Review of Advances in AI

Recent reviews and methodological studies have documented advances and limitations in AI applications for palliative care. Vu et al [] systematically reviewed ML applications, highlighting the need for more robust, real-world evidence. Reddy et al [] summarized advances in AI-based symptom management, and Bozkurt et al [] developed a protocol to assess data diversity in these applications. Macheka et al [] evaluated the role of AI in postdiagnostic treatment pathways, emphasizing the need for continuous research and ethical oversight.

AI offers innovative prediction, symptom management, communication, and resource optimization solutions in palliative medicine. However, its implementation is accompanied by significant ethical, cultural, and practical challenges, especially regarding equity, humanization, and respect for patient autonomy. The literature highlights the importance of addressing patient heterogeneity, cultural context, and social determinants to ensure that AI applications are practical and ethically acceptable.


Discussion

Principal Findings

This integrative review demonstrates that AI is increasingly present in palliative care, offering innovative solutions for clinical prediction, symptom management, communication, and process automation. The main findings suggest that while AI can improve efficiency and support decision-making, there remains a significant lack of consolidated, real-world examples that simultaneously demonstrate both efficiency and equity in outcomes. Most published studies focus on technical feasibility or operational improvements, but few document how AI enhances equitable access or patient-centered outcomes across diverse populations. This gap underscores the need for more robust, contextually grounded evidence to guide the ethical implementation of AI in end-of-life care [].

A key finding is the persistent tension between efficiency and quality. Although frameworks such as the IOM and OECD recognize efficiency, equity, and patient-centeredness as embedded dimensions of quality, our review shows that improvements in system-level efficiency (resource optimization, automated symptom tracking) do not always translate into perceived improvements in care quality by patients and families. In palliative care, relational and dignity-centered outcomes, such as humanization, emotional support, and respect for autonomy, remain fundamental and may be at risk if AI is implemented without careful ethical consideration [].

It should be noted that, although this paper at times distinguishes “quality” of care from “efficiency,” efficiency, equity, and patient-centeredness are internationally recognized as embedded dimensions of quality of care. According to frameworks such as the IOM [], quality of care encompasses 6 interrelated domains: safety, effectiveness, patient-centeredness, timeliness, efficiency, and equity. Thus, efficiency is not a separate attribute but an integral component of quality, alongside equity and patient-centeredness. We have therefore harmonized the terminology and analysis to reflect this international consensus, avoiding an artificial dichotomy between quality and efficiency.

Comparison to Prior Work

Our findings align with previous reviews, highlighting both AI’s transformative potential in palliative settings and the ethical challenges it introduces. Recent literature confirms that AI’s integration in palliative care is still early, with limited robust evidence for improved equity or patient experience. While some studies report promising advances in prediction and symptom management, others caution about algorithmic bias, lack of transparency, and the risk of dehumanization [].

The review expands on prior work by addressing ethical principles’ historical and cultural variability. Many AI tools in palliative care are developed and validated in Western contexts, reflecting assumptions about autonomy, truth-telling, and individual decision-making that may not be universally applicable. Studies show that AI models trained on Anglo-Saxon datasets can misinterpret emotional cues or cultural preferences, particularly in Southern European, Latin American, or other non-Western settings where family-centered decision-making and gradual truth disclosure are common []. This highlights the need for culturally sensitive AI models and participatory design processes.

Real Case: Mortality Prediction and Advance Care Planning

A recent study analyzed the implementation of an AI system designed to predict the likelihood of a patient dying in the next 12 months to facilitate timely discussions about palliative care. The study developed an explainable ML model using electronic health records data to proactively identify patients with advanced cancer at high risk of mortality. The model demonstrated strong predictive performance (area under receiver operating characteristic curve 0.861) and was intended to support early integration of palliative care in outpatient oncology settings []. However, intr
