While pancreatic cancer (PC) is ranked as the 11th most common cancer in the world with 458,918 new cases in 2018 (1), it is projected to be the second leading cause of cancer-related mortality in the United States by 2030 (2). Most of the mortality is attributed to advanced stage at diagnosis, and hence, only a minority of patients (15%–20%) are eligible for surgical resection (3,4). Earlier diagnosis of PC with localized disease correlates with improved survival (5). The low incidence of PC and lack of accurate biomarkers for early-stage disease have made effective screening challenging and hindered efforts to improve overall survival. As PC screening in the general population is not recommended, efforts have been made to identify high-risk individuals who may benefit from PC screening (6). In current practice, PC screening is limited to individuals with pathogenic/likely pathogenic germline mutations in PC susceptibility genes and those with multiple affected family members (7,8). However, less than 20% of patients with PC have known familial and genetic risk factors, thereby limiting the ability to enrich and screen the population at risk. Therefore, identifying novel risk factors for PC is critical.
Electronic health record (EHR) data comprise a variety of structured and unstructured elements that have shown promise for disease and risk prediction. With EHRs now pervasively used across health systems and with recent developments in machine learning (ML) and deep learning (DL), EHR data could potentially be leveraged for effective prediction of PC risk (9). Identified high-risk individuals could then benefit from PC screening. In addition, with emerging explainable artificial intelligence (X-AI) techniques, interpretable risk factors for PC could be identified from EHR data (10).
We therefore sought to systematically review the existing ML/AI literature that utilizes EHR data to predict PC risk, and to summarize model development, evaluation strategies, and model effectiveness in predicting PC.
METHODS
Data sources and searches
A comprehensive search of several databases, limited to the English language, from January 1, 2012, to February 1, 2024, was conducted. The databases included Ovid MEDLINE(R) and Epub Ahead of Print, In-Process, and Other Nonindexed Citations, and Daily; Ovid EMBASE; Ovid Cochrane Central Register of Controlled Trials; Ovid Cochrane Database of Systematic Reviews; Scopus; and Web of Science. The search strategy was designed and conducted by an experienced librarian with input from the study's principal investigator. Controlled vocabulary supplemented with keywords was used to search for ML and natural language processing models pertaining to prediction of PC and PC risk factors using EHR data. The full strategy, listing all search terms used and how they were combined, is available in Supplementary Digital Content 1 (see Article Search Strategies document, https://links.lww.com/AJG/D286).
Study inclusion criteria
We included articles that developed a multivariable ML model to predict PC using EHR data.
Outcome
The outcome was PC.
Compilation and screening of articles
Two independent reviewers (A.K.M., B.C.) screened articles for eligibility based on title and abstract, followed by a second round of full-text review to identify eligible articles. This was followed by screening their respective reference lists and citation matching for additional articles. A third independent reviewer (S.M.) adjudicated any disagreement about eligible articles. The articles were archived in EndNote software (11).
Extraction and quality assessment
We used the CHecklist for critical Appraisal and data extraction for systematic Reviews of prediction Modelling Studies (CHARMS) (12) to extract data for appraisal of the articles. We extracted study details including study type and time period, data sources, participants, reporting and handling of missing data, ML modeling methods, and model calibration, validation, and performance. In addition to the CHARMS framework, we extracted data on the choice of candidate predictors in each study (curated PC predictors derived from the literature or identified by experts vs noncurated predictors in the EHR), the study population type (high-risk subgroups vs general population), the prediction time window, and novel risk factor identification through model explainability. We also used the Prediction model Risk Of Bias ASsessment Tool (PROBAST) to evaluate the risk of bias and applicability of the models developed and validated in the included articles (13,14). For quality assessment, we applied the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist to guide our systematic review (15).
We used the C-index as the metric for model performance. Studies included in our systematic review were highly heterogeneous in data exclusion time intervals (excluding data immediately before diagnosis), prediction time windows (duration of the future disease risk period from the date of clinical assessment), number of independent data sets and subset groups used, and modeling techniques. Thus, for studies that explored multiple data exclusion time intervals, results corresponding to the smallest exclusion window were used. For studies that experimented with multiple prediction time windows, results corresponding to the shortest prediction window were considered. If studies utilized multiple independent data sets, all data sets were included as individual data points. However, if studies performed both full-cohort and subset analyses (e.g., a subset of patients with new-onset diabetes), full-cohort results are reported and subset results were excluded. For studies that explored multiple modeling techniques, results from each modeling technique were included as individual data points if the corresponding results were reported consistently across data sets. If results from 2 or more similar modeling techniques (e.g., light gradient boosting machine and gradient boosting machine) were reported in an article, only results from the best-performing model were included. Modeling techniques were categorized into 3 groups: group A included linear ML models, group B included nonlinear models excluding DL models, and group C included DL models only.
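To make these selection rules concrete, the following Python sketch illustrates how such results could be filtered programmatically; it is an illustration only, and the field names, example records, and technique-to-group mapping are hypothetical rather than drawn from any included study.

```python
# Illustrative sketch of the result-selection rules described above; the field
# names, records, and technique-to-group mapping are hypothetical.
MODEL_GROUP = {
    "logistic_regression": "A", "cox_regression": "A",   # group A: linear models
    "xgboost": "B", "random_forest": "B",                 # group B: nonlinear, non-DL
    "gru": "C", "transformer": "C",                       # group C: deep learning
}

def select_results(results):
    """For each (study, data set, modeling technique), keep the full-cohort result
    with the smallest data exclusion interval and shortest prediction time window."""
    kept = {}
    for r in results:
        if r["cohort"] != "full":     # subset analyses (e.g., new-onset diabetes) are excluded
            continue
        key = (r["study"], r["dataset"], r["technique"])
        rank = (r["exclusion_months"], r["prediction_window_months"])
        if key not in kept or rank < kept[key][0]:
            kept[key] = (rank, r)
    return [dict(r, group=MODEL_GROUP[r["technique"]]) for _, r in kept.values()]

# Example: two results for the same model; only the smaller exclusion window is kept.
example = [
    {"study": "S1", "dataset": "D1", "technique": "xgboost", "cohort": "full",
     "exclusion_months": 0, "prediction_window_months": 12, "c_index": 0.84},
    {"study": "S1", "dataset": "D1", "technique": "xgboost", "cohort": "full",
     "exclusion_months": 12, "prediction_window_months": 12, "c_index": 0.78},
]
print(select_results(example))
```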
RESULTS
Study characteristics
With our population, intervention, comparison, outcome search, we identified 183 articles after removing duplicates. These articles were screened to identify 21 articles that implemented ML algorithms to predict PC. We added 9 additional articles from references that met our inclusion criteria. Figure 1 shows the process of study identification and inclusion for data extraction and analysis. Tables 1 and 2 describe the study characteristics of the risk prediction models, including study type, data sources, model development techniques, and validation results, using the CHARMS framework. Supplementary Digital Content (see Supplementary Tables 1 and 2, https://links.lww.com/AJG/D287, https://links.lww.com/AJG/D288) describes novel risk factors identified by the studies and additional modeling characteristics such as missing data handling, respectively. We excluded 4 articles because of unclear data sources (16), no multivariable model development (17), unclear predictor utilization in modeling (18), and significant overlap of data and modeling methods with another included study (19).
Figure 1. Systematic review flow diagram—selection of articles. ML, machine learning.
Table 1. Machine learning-based pancreatic cancer prediction study characteristics
Table 2. Machine learning modeling results of the included studies
Most studies considered a composite PC outcome and did not differentiate between pancreatic ductal adenocarcinoma, neuroendocrine tumors, or other specific types of PC.
Most of the studies utilized curated high-risk predictors based on PC literature or clinical expertise (n = 20) (20–39). Figure 2a shows the percentage of studies that utilized curated vs noncurated predictors. Moreover, we observed that a greater proportion of models in group A (linear models, 8/14) and group B (nonlinear models excluding DL, 9/14) utilized curated sets of candidate predictors as compared with group C (DL, 1/9) (Figure 3a). Models that limited their analysis to curated risk factors reported a similar discrimination performance (mean C-index = 0.81, min = 0.61, max = 1.0, n = 18) when compared with models that did not (mean C-index = 0.80, min = 0.72, max = 0.93, n = 19) (Figure 3b).
Figure 2. Study and machine learning/artificial intelligence modeling characteristics—(a) electronic health record candidate predictors used for model development by the studies, (b) missing data reported by the studies, (c) model validation conducted by the studies, and (d) model calibration conducted by the studies.
Figure 3. Use of curated risk factors by models: (a) number of models with and without using curated risk factors per model group (group A: linear, group B: nonlinear excluding deep learning models, and group C: deep learning models only) and (b) performance of models in internal validations with and without using curated risk factors of PC from literature. PC, pancreatic cancer.
ML model development and evaluation
Logistic regression was the technique most frequently utilized (n = 18) for model development (Table 2). In addition, a diverse range of modeling techniques was used to build PC prediction models. These included tree-based models such as XGBoost and random forests; survival models such as random survival forests, Cox regression, and multistate models; and neural network-based models such as artificial neural networks, as well as more advanced DL-based approaches including gated recurrent units and transformers.
Sixteen studies provided information about missing data and how missing data were handled (Figure 2b, see Supplementary Table 2, https://links.lww.com/AJG/D288). The most common approaches to handling missing data included exclusion of patients (28,40), exclusion of predictors with a large percentage of missingness (22–25,33), and imputation of predictors (22,24,25,39). In 3 studies, missing values were replaced with categorical placeholders such as "Not known" (26) or "missing" (28), or encoded as a binary variable with the value −1 (31). We also observed that, in one study, missing laboratory result values were treated the same as normal results (36).
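As a hedged illustration of these reported strategies (not the code of any included study), the following Python sketch applies each approach to a toy predictor table; the column names and the missingness threshold are assumptions.

```python
import numpy as np
import pandas as pd

# Toy predictor table with missing values; column names are hypothetical.
df = pd.DataFrame({
    "hba1c": [6.1, np.nan, 7.4, np.nan],
    "smoking_status": ["never", None, "current", None],
})

# 1. Exclude predictors with a large percentage of missingness (threshold assumed).
df = df.loc[:, df.isna().mean() < 0.6]

# 2. Exclude patients (rows) with any missing value.
complete_cases = df.dropna()

# 3. Flag missingness as a binary indicator, then impute the numeric value.
df["hba1c_missing"] = df["hba1c"].isna().astype(int)
df["hba1c"] = df["hba1c"].fillna(df["hba1c"].median())

# 4. Encode missing categories with an explicit placeholder such as "missing".
df["smoking_status"] = df["smoking_status"].fillna("missing")
print(df)
```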
The studies predicted PC occurrence within a prediction time window of up to 8 years after the date of risk assessment (Table 2). We observed that 6 articles did not provide any information about the prediction time window or data exclusion time intervals (20,26,28,31,36,41). Only 12 studies experimented with data exclusion time intervals, ranging from 1 month to 5 years (21,29,30,33,35,40,42–47). For models that did not use a curated set of predictors and applied a 1-year lead time or exclusion time interval, the C-index ranged from 0.71 to 0.83 in internal validations and from 0.60 to 0.78 in external validations. Figure 4 shows the performance of the same models under no data exclusion (or smallest time interval data exclusion) settings vs data exclusion (or maximum time interval data exclusion) settings in the different model groups. The figure represents results from internal validations of 9 models presented in 5 different articles (group A: linear models, n = 3 (21,40); group B: nonlinear models excluding DL models, n = 3 (40,43); and group C: DL models only, n = 3 (33,42)). Four studies that experimented with data exclusion time intervals were excluded from this analysis because they did not report results for both minimum and maximum data exclusion experiments (29,30), did not report a C-index (35), or did not report internal validation results (44).
Figure 4. Model performance (group A: linear, group B: nonlinear excluding deep learning models, and group C: deep learning models only) with and without data exclusion time intervals before diagnosis.
We observed that 24 studies performed internal validation, external validation, or both (Table 2). Some internal validations were conducted by evaluating the model on a holdout test set, typically 20% of the data set. Several studies used bootstrapping for internal validation. External validations were conducted by evaluating model performance on an external data set from a different health system or geographic region (42,44,46). The distribution of model validation methods utilized in the studies included in our review is presented in Figure 2c. Figure 5a,b presents the performance of the different model groups in internal and external validation settings, respectively. Models from the 6 studies that did not perform any form of validation were excluded from this illustration (26–28,32,34,36). For internal validation, the average C-index for models in groups A, B, and C was 0.77, 0.83, and 0.83, respectively. For external validation, the average C-index for models in groups A, B, and C was 0.77, 0.79, and 0.88, respectively. Group C for external validation included results from a single study only. Model performances across all exclusion/lead time intervals, prediction time windows, and data sets are presented in Table 2.
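For illustration, the sketch below shows the two common internal validation approaches noted above, evaluation on a 20% holdout test set and a simple bootstrap of the holdout predictions to obtain an interval around the C-index (equal to the AUROC for a binary outcome). The simulated data, logistic regression model, and number of bootstrap resamples are all placeholder assumptions, not the procedure of any included study.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Simulated, imbalanced cohort (~2% cases) as a stand-in for EHR-derived features.
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.98], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Holdout validation: C-index (AUROC for a binary outcome) on the 20% test set.
holdout_c = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

# Bootstrap: resample the holdout set with replacement to estimate an interval.
rng = np.random.default_rng(0)
boot = []
for _ in range(200):
    idx = rng.integers(0, len(y_test), len(y_test))
    if len(np.unique(y_test[idx])) < 2:   # skip resamples containing a single class
        continue
    boot.append(roc_auc_score(y_test[idx], model.predict_proba(X_test[idx])[:, 1]))
print(holdout_c, np.percentile(boot, [2.5, 97.5]))
```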
Figure 5. (a) Internal and (b) external validation model performances by groups (group A: linear, group B: nonlinear excluding deep learning models, and group C: deep learning models only).
Ten studies performed a calibration analysis (Figure 2d, see Supplementary Table 2, https://links.lww.com/AJG/D288) (20–25,37,43,44,46). The model calibration analyses were conducted using Hosmer-Lemeshow χ2 goodness-of-fit tests, Greenwood-Nam-D'Agostino calibration tests, Platt calibration, and calibration graphs.
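For illustration only, the following Python sketch computes two of the calibration checks named above, a calibration (reliability) curve and a Hosmer-Lemeshow χ2 statistic, on simulated predicted risks; it is not taken from any included study, and the number of risk groups is an assumption.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(0)
p_pred = rng.uniform(0.0, 0.2, 5000)      # predicted risks from some model (simulated)
y = rng.binomial(1, p_pred)               # simulated outcomes consistent with those risks

# Calibration graph data: observed vs predicted risk within quantile bins.
obs, pred = calibration_curve(y, p_pred, n_bins=10, strategy="quantile")

# Hosmer-Lemeshow chi-square statistic over g = 10 risk-decile groups.
g = 10
order = np.argsort(p_pred)
hl = 0.0
for idx in np.array_split(order, g):
    o, e, n = y[idx].sum(), p_pred[idx].sum(), len(idx)   # observed, expected, group size
    hl += (o - e) ** 2 / (e * (1 - e / n))
p_value = chi2.sf(hl, df=g - 2)
print(list(zip(pred.round(3), obs.round(3))), round(hl, 2), round(p_value, 3))
```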
Identifying novel risk factors of PC
Six studies that did not rely on a curated set of predictors (42–47) identified novel risk factors utilizing X-AI techniques (see Supplementary Table 1, https://links.lww.com/AJG/D287). Chen et al (43) utilized XGBoost gains to identify pancreatic disorders (noncancerous and not related to diabetes mellitus) as the most important model predictor. Placido et al (42) explored integrated gradients in neural networks, finding jaundice, abdominal pain, and weight loss to be key features 0–6 months before PC diagnosis. With a longer interval before cancer diagnosis, key contributors included diabetes mellitus, anemia, functional bowel disease, and other pancreatic and bile duct diseases and cancers (42). Salvatore et al grouped relevant International Classification of Diseases, Tenth Revision (ICD-10) codes into clinically relevant, phenotypically related aggregates ("phecodes"); using co-occurrence analysis, they identified digestive and neoplasm phecodes as strong predictors of PC (44). Park et al utilized SHapley Additive exPlanations (SHAP) values to identify that kidney, liver function, diabetes, red blood cell, and white blood cell groups contributed the most to predicting PC risk from laboratory results (47). Jia et al (46) ranked features by univariate C-index to identify the independent contributors to PC risk prediction; the top 5 predictors from their analysis were age; number of recent records; creatinine in serum, plasma, or blood; number of early records; the diabetes mellitus without complication diagnosis group; and the essential hypertension diagnosis group. Zhu et al (45) reported that unspecified disease of pancreas (ICD-10 K86.9), malignant neoplasm of transverse colon (ICD-10 C18.4), pseudocyst of pancreas (ICD-10 K86.3), hypertrophy of breast (ICD-10 N62), and neoplasm of unspecified behavior of digestive system (ICD-10 D49.0) were the key PC risk factors based on model odds ratios.
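As an illustrative sketch of a SHAP-based analysis similar in spirit to, but not reproducing, the approach of Park et al (47), the code below ranks the features of a toy XGBoost model by mean absolute SHAP value. The data, feature names, and model settings are simulated assumptions, and the shap and xgboost packages are assumed to be installed.

```python
import numpy as np
import pandas as pd
import shap
import xgboost as xgb
from sklearn.datasets import make_classification

# Simulated laboratory-style features; names are hypothetical.
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
X = pd.DataFrame(X, columns=["creatinine", "alt", "hba1c", "hemoglobin", "wbc", "age"])

model = xgb.XGBClassifier(n_estimators=100, max_depth=3).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)            # per-patient, per-feature attributions
importance = np.abs(shap_values).mean(axis=0)     # mean |SHAP| as a global importance score
print(sorted(zip(X.columns, importance), key=lambda t: -t[1]))
```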
Risk of bias assessment
We used PROBAST to assess the risk of bias of the models included in our study (13). If 2 or more models were developed in a study, risk of bias for the best-performing model (highest C-index) was assessed using PROBAST. Models from only 4 studies had a low risk of bias (33,42,46,47). Supplementary Digital Content (see Supplementary Table 3, https://links.lww.com/AJG/D289) presents a summary of the PROBAST risk of bias and applicability assessment.
DISCUSSION
We extracted and reviewed data from 30 studies to discern state-of-the-art ML methods for predicting PC risk and identifying novel risk factors from EHR data. Most studies developed models with discriminative performance (C-index) ranging from 0.57 to 1.0. However, there were many potential sources of risk of bias, including outcome definition, predictor selection, data exclusion window, prediction time window, and the reporting and handling of missing data.
Most of the studies defined PC as a composite outcome by using a range of ICD codes. Two types of PC account for most cases: pancreatic adenocarcinoma, also known as pancreatic ductal adenocarcinoma (PDAC), which represents approximately 85% of cases, and pancreatic neuroendocrine tumor (PNET), which represents less than 5% (48). PDAC, PNET, and other PC types have different tumor biology, natural history, and risk factors. Predicting PC as a composite outcome is problematic because key contributing predictors identified for all PCs may not apply to PDAC or PNET specifically.
Most of the studies used logistic regression for model development but did not provide information about assessing modeling assumptions. Nor was sufficient information provided to determine whether controls were sampled appropriately, ensuring they are representative of the population from which cases arise at the case index date (49). Nonlinear and DL-based AI models had discrimination performance (C-index) similar to that of traditional linear ML models (Figure 5). It is crucial to note that more caution is warranted with computationally expensive models to prevent overfitting (50): the model complexity that enables capturing signal in the training data to make accurate predictions can also make the model more susceptible to fitting noise as patterns that do not generalize to other populations. To mitigate these issues, increasing the sample size, using regularization and resampling/internal model validation techniques, and conducting external validation in data from other populations and institutions are crucial. External validation tests model robustness and generalizability beyond the initial development setting. It is also important to note that group C had only 1 sample; hence, our understanding of the performance of group C in an external validation setting is currently limited.
It is critical to examine the performance of final models in different subgroups to ensure that the model is fair to those subgroups (similar discrimination ability) and not significantly advantaged or disadvantaged in certain groups. We found that only 4 studies performed or mentioned any subgroup analysis, by age (30,43) and race (32,33). Jia et al (46) performed model development using data from different race groups and geographic locations and tested model performance using data from the excluded races and locations. However, none of the studies reported any fairness metrics such as equalized odds or equalized opportunity (51).
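To illustrate what such a fairness check could look like, the following hedged Python sketch computes subgroup true-positive and false-positive rates, the quantities compared under equalized odds, at an assumed risk threshold; the subgroup labels, threshold, and data are hypothetical.

```python
import numpy as np
import pandas as pd

def equalized_odds_table(y_true, y_score, group, threshold=0.5):
    """Return TPR and FPR per subgroup; equalized odds requires these to be similar."""
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    df = pd.DataFrame({"y": np.asarray(y_true), "pred": y_pred, "group": group})
    rows = []
    for g, d in df.groupby("group"):
        tpr = d.loc[d.y == 1, "pred"].mean() if (d.y == 1).any() else np.nan
        fpr = d.loc[d.y == 0, "pred"].mean() if (d.y == 0).any() else np.nan
        rows.append({"group": g, "TPR": tpr, "FPR": fpr, "n": len(d)})
    return pd.DataFrame(rows)

# Example usage with simulated outcomes, risk scores, and two age groups.
rng = np.random.default_rng(0)
y = rng.binomial(1, 0.05, 4000)
score = np.clip(0.05 + 0.4 * y + rng.normal(0, 0.2, 4000), 0, 1)
age_group = rng.choice([">=60", "<60"], 4000)
print(equalized_odds_table(y, score, age_group, threshold=0.3))
```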
Most of the studies used a curated set of high-risk predictors based on PC literature or clinical expertise (see Supplementary Table 2, https://links.lww.com/AJG/D288). EHR clinical data include structured data such as medications and unstructured data such as free-text clinical notes. Few studies used a combination of structured and unstructured data to develop their models. Figure 3b shows that not utilizing a curated set of high-risk predictors resulted in similar mean discriminatory performance, although this approach could potentially favor identifying novel risk factors. Chen et al (43) used a wide range of EHR-based candidate predictors to develop their XGBoost models, but many of the features have limited interpretability, such as "strain" and "runny". The XGBoost model viewed each word in the clinical notes individually, whereas a transformer-based approach can retain the context of words and phrases in clinical notes data (52).
Several studies did not provide any information about missing data or how missing data were handled (see Supplementary Table 2, https://links.lww.com/AJG/D288) (20,21,27,30,32,34,38,41–44,53). Missing data, and how the missingness is handled, can affect prognostic model performance and applicability (54). The estimated predictor-outcome associations and predictive performance measures of a model are unbiased only if excluded participants are a completely random subset of the original study sample (55). A comparison of participants with and without missing values could provide a better understanding of potential bias in the data. For models utilizing structured data, multiple imputation has been shown to perform better in terms of bias and precision (56,57). In addition, DL-based approaches, including recurrent neural networks, can efficiently handle irregularities and missing patterns in time series clinical data (58,59).
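As a minimal sketch of MICE-style multiple imputation for structured predictors (not the procedure of any included study), the example below draws several imputed data sets with scikit-learn's experimental IterativeImputer; the variable names are hypothetical, and pooled analysis of the per-data set model estimates (e.g., with Rubin's rules) is left to downstream modeling.

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (required import)
from sklearn.impute import IterativeImputer

# Toy structured predictors with missing values; column names are hypothetical.
df = pd.DataFrame({
    "age": [67, 54, np.nan, 71, 49],
    "hba1c": [6.2, np.nan, 7.8, np.nan, 5.6],
    "bmi": [np.nan, 31.0, 27.5, 24.2, np.nan],
})

# Draw m imputed data sets with different seeds; a model would be fit on each
# and the estimates pooled.
imputed_sets = []
for m in range(5):
    imp = IterativeImputer(sample_posterior=True, random_state=m, max_iter=10)
    imputed_sets.append(pd.DataFrame(imp.fit_transform(df), columns=df.columns))
print(imputed_sets[0].round(2))
```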
The PC occurrence prediction time window in the studies ranged up to 8 years from the date of risk assessment (Table 2). Most studies did not consider data exclusion time intervals. Such modeling strategies are not appropriate for early detection and can introduce a high risk of bias (see Supplementary Table 3, https://links.lww.com/AJG/D289) because predictor data close to the time of PC diagnosis will most likely reflect symptoms of the disease instead of true predictors of future risk. Among studies that did consider data exclusion time intervals, DL-based modeling techniques performed better on average with minimal or no data exclusion and performed comparably to nonlinear models with maximum data exclusion time intervals (3 months to 1 year) for the same models in each group; linear models had the lowest discrimination performance with data exclusion time intervals, as shown in Figure 4. There was also a decline in performance with data exclusion in group C models when compared with group A and B models. With a sample size of 3 across groups, it is difficult to draw any strong conclusions; however, this could suggest that the group C DL models developed in these studies depended more on data closer to the PDAC event than the other groups. Studies suggest that predictor data considered with a lead time of 24–36 months before PC diagnosis may be most appropriate (35,60,61).
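The following hedged Python sketch illustrates how a data exclusion (lead-time) interval might be applied when assembling predictor data for a case, so that records within the exclusion window before diagnosis are not used as predictors; the patient records, codes, and 12-month window are hypothetical.

```python
import pandas as pd

# Toy longitudinal EHR records for one case; dates and codes are hypothetical.
records = pd.DataFrame({
    "patient_id": [1, 1, 1, 1],
    "event_date": pd.to_datetime(["2015-03-01", "2019-06-10", "2020-11-02", "2021-01-15"]),
    "code": ["E11.9", "R10.9", "R17", "C25.0"],  # diabetes, abdominal pain, jaundice, PC
})
diagnosis_date = pd.Timestamp("2021-02-01")
exclusion = pd.DateOffset(months=12)             # assumed 12-month exclusion interval

# Only records before the cutoff are eligible as predictors of future risk.
cutoff = diagnosis_date - exclusion
predictor_data = records[records["event_date"] < cutoff]
print(predictor_data)
```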
Identification of novel risk factors is important because about 80% of PC is considered sporadic in etiology. Explainability of an ML model pertains to the clarity of its internal logic and mechanics, enabling deeper comprehension of its training and decision-making processes (10). Few articles explored such techniques (see Supplementary Table 1, https://links.lww.com/AJG/D287) (42–44). Pancreatic disorders, diseases of the biliary tract, abdominal-pelvic pain, digestive neoplasms, and jaundice were identified as the most common risk factors.
Table 3 presents a list of best practice recommendations for AI/ML model development to predict PC early using EHR data.
Table 3. Best practices and recommendations for future ML/AI modeling studies in early detection of PC using EHR data
A limitation of this review is that potentially relevant studies may have been missed. We excluded studies written in languages other than English. Another limitation is the small sample size within the different model groups in the figures and analysis; for instance, only 1 group C model had an external validation, as shown in Figure 5. Therefore, it is important to consider sample size when interpreting the results. A strength of this study is that we critically appraised the studies using the guidelines provided in the CHARMS checklist. Another strength is that we did not limit our analysis to specific ML/AI modeling techniques. Our comprehensive review and discussion of model development, evaluation, and explainability strategies could guide future research studies attempting to develop PC risk prediction models and efforts at novel risk factor identification utilizing EHR data.
Real-world utilization of the models developed in these studies was limited. Only 2 of the studies conducted a prospective validation after model development (25,35). Multiple studies considered identifying individuals at high risk, provided a decision curve, or reported model performance by thresholding model-predicted risks in the validation cohort (22–25,27–29,31,32,35,37,40,42–46). None of the studies reported integrating their model into the EHR or using it to identify high-risk individuals in a real-world setting; in the authors' opinion, this is appropriate because all of the algorithms likely require further external model validation before being ready for such deployment.
In conclusion, through this systematic review, we found that several studies have attempted to develop ML models using EHR data to predict PC risk, with some success. However, most studies utilized a curated set of predictors instead of unbiased approaches drawing on the full EHR. Logistic regression was the most common modeling technique. Lack of reporting on missing data was common and a significant limitation. Novel risk factor identification was conducted in only 6 studies. We believe that utilization of longitudinal structured and unstructured data together in a population-based cohort, coupled with utilization of X-AI techniques, may identify novel PC risk factors and should be an important consideration in future studies. We also recommend using the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) statement to report prediction model development and validation details (62). Finally, for the PC risk modeling strategy, it is crucial to evaluate the modeling assumptions and ensure collaboration across a spectrum of content expertise, including physicians, epidemiologists, biostatisticians, data scientists, and AI/ML experts. Such multidisciplinary collaborative efforts will help develop the most effective model for early prediction of PC risk by judiciously utilizing the available EHR data while minimizing biased estimates, inefficient models, and incorrect conclusions.
CONFLICTS OF INTEREST
Guarantor of the article: Shounak Majumder, MD.
Specific author contributions: A.K.M., B.C., A.L.O., S.M.: conception, design, acquisition, analysis, and drafting manuscript. All authors: interpretation of data for the work and reviewing manuscript. All authors: final approval of manuscript.
Financial support: This study was supported by research funding from the Centene Foundation to S.M. S.M. was also supported by U01 CA210138, National Cancer Institute. The content of this manuscript is solely the responsibility of the authors and does not necessarily represent the official views of the National Cancer Institute or the National Institutes of Health.
Potential competing interests: S.M., Mayo Clinic and Exact Sciences have an intellectual property development agreement. S.M. is listed as an inventor under this agreement and could share potential future royalties as an employee of Mayo Clinic. The other authors of this manuscript have no conflict of interest to declare.
Study highlights
WHAT IS KNOWN
✓ Pancreatic cancer (PC) is often diagnosed at an advanced stage when treatment options are limited.
✓ PC detection at an early stage can improve survival.
✓ Artificial intelligence (AI)-based models have been developed to predict PC utilizing electronic health records (EHR).
✓ There is limited guidance on the optimal selection of modeling techniques, study design, and utilization of EHR data for PC prediction.
WHAT IS NEW HERE
✓ The review provides recommendations for optimal machine learning/AI modeling approaches to utilize EHR data for PC prediction.
✓ Underutilization of EHR data, sparse use of advanced AI methods, and limited experimentation with data exclusion time intervals were some of the major limitations.
✓ Efforts on identifying novel risk factors to predict PC from EHR are currently limited.
✓ Nonlinear and deep learning-based AI models were found to perform similarly to traditional linear statistical and machine learning models in predicting PC.
✓ Deep learning models generally utilized a wide range of candidate predictors, instead of a set of curated known risk factors for PC.
ACKNOWLEDGMENTS
We thank Larry J. Prokop for his support in article search and Karen A. Doering and Kathleen J. Johnson for their administrative support.
REFERENCES
1. Bray F, Ferlay J, Soerjomataram I, et al. Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin 2018;68(6):394–424.
2. Rahib L, Smith BD, Aizenberg R, et al. Projecting cancer incidence and deaths to 2030: The unexpected burden of thyroid, liver, and pancreas cancers in the United States. Cancer Res 2014;74(11):2913–21.
3. Ryan DP, Hong TS, Bardeesy N. Pancreatic adenocarcinoma. N Engl J Med 2014;371(11):1039–49.
4. Kleeff J, Korc M, Apte M, et al. Pancreatic cancer. Nat Rev Dis Primers 2016;2:16022.
5. Blackford AL, Canto MI, Klein AP, et al. Recent trends in the incidence and survival of stage 1A pancreatic cancer: A surveillance, epidemiology, and end results analysis. J Natl Cancer Inst 2020;112(11):1162–9.
6. US Preventive Services Task Force, Owens DK, Davidson KW, Krist AH, et al. Screening for pancreatic cancer: US preventive services task force reaffirmation recommendation statement. JAMA 2019;322(5):438–44.
7. Sawhney MS, Calderwood AH, Thosani NC, et al. ASGE guideline on screening for pancreatic cancer in individuals with genetic susceptibility: Summary and recommendations. Gastrointest Endosc 2022;95(5):817–26.
8. Aslanian HR, Lee JH, Canto MI. AGA clinical practice update on pancreas cancer screening in high-risk individuals: Expert review. Gastroenterology 2020;159(1):358–62.
9. Xiao C, Choi E, Sun J. Opportunities and challenges in developing deep learning models using electronic health records data: A systematic review. J Am Med Inform Assoc 2018;25(10):1419–28.
10. Linardatos P, Papastefanopoulos V, Kotsiantis S. Explainable AI: A review of machine learning interpretability methods. Entropy (Basel) 2020;23(1):18.
11. The EndNote Team. EndNote. 20 ed. Clarivate: Philadelphia, PA, 2013.
12. Moons KGM, de Groot JAH, Bouwmeester W, et al. Critical appraisal and data extraction for systematic reviews of prediction modelling studies: The CHARMS checklist. PLoS Med 2014;11(10):e1001744.
13. Wolff RF, Moons KGM, Riley RD, et al. PROBAST: A tool to assess the risk of bias and applicability of prediction model studies. Ann Intern Med 2019;170(1):51–8.
14. Fernandez-Felix BM, López-Alcalde J, Roqué M, et al. CHARMS and PROBAST at your fingertips: A template for data extraction and risk of bias assessment in systematic reviews of predictive models. BMC Med Res Methodol 2023;23(1):44.
15. Page MJ, McKenzie JE, Bossuyt PM, et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. Syst Rev 2021;10(1):89.
16. Chauhan R, Kaur H, Sharma S. A feature based approach for medical databases. 2016 International Conference on Advances in Information Communication Technology and Computing, AICTC 2016, Association for Computing Machinery, New York, NY, August 12, 2016. https://dl.acm.org/doi/proceedings/10.1145/2979779
17. Manias G, Op Den Akker H, Azqueta A, et al. IHELP: Personalised health monitoring and decision support based on artificial intelligence and holistic health records. 26th IEEE Symposium on Computers and Communications, ISCC 2021, Institute of Electrical and Electronics Engineers Inc, September 5-8, 2021. https://ieeexplore.ieee.org/xpl/conhome/9631377/proceeding
18. Matchaba S, Fellague-Chebra R, Purushottam P, et al. Early diagnosis of pancreatic cancer via machine learning analysis of a national electronic medical record database. JCO Clin Cancer Inform 2023;7:e2300076.
19. Chen W, Zhou B, Jeon CY, et al. Machine learning versus regression for prediction of sporadic pancreatic cancer. Pancreatology 2023;23(4):396–402.
20. Ahmed AE, Alzahrani FS, Gharawi AM, et al. Improving risk prediction for pancreatic cancer in symptomatic patients: A Saudi Arabian study. Cancer Manag Res 2018;10:4981–6.
21. Baecker A, Kim S, Risch HA, et al. Do changes in health reveal the possibility of undiagnosed pancreatic cancer? Development of a risk-prediction model based on healthcare claims data. PLoS One 2019;14(6):e0218580.
22. Boursi B, Finkelman B, Giantonio BJ, et al. A clinical prediction model to assess risk for pancreatic cancer among patients with new-onset diabetes. Gastroenterology 2017;152(4):840–50.e3.
23. Chen W, Butler RK, Zhou Y, et al. Prediction of pancreatic cancer based on imaging features in patients with duct abnormalities. Pancreas 2020;49(3):413–9.
24. Chen W, Butler RK, Lustigova E, et al. Risk prediction of pancreatic cancer in patients with recent-onset hyperglycemia: A machine-learning approach. J Clin Gastroenterol 2023;57(1):103–10.
25. Chen W, Zhou Y, Xie F, et al. Derivation and external validation of machine learning-based model for detection of pancreatic cancer. Am J Gastroenterol 2023;118(1):157–67.
26. Dayem Ullah AZM, Stasinos K, Chelala C, et al. Temporality of clinical factors associated with pancreatic cancer: A case-control study using linked electronic health records. BMC Cancer 2021;21(1):1279.
27. Jeon CY, Chen Q, Yu W, et al. Identification of individuals at increased risk for pancreatic cancer in a community-based cohort of patients with suspected chronic pancreatitis. Clin Translational Gastroenterol 2020;11(4):e00147.
28. Klein AP, Lindstrom S, Mendelsohn JB, et al. An absolute risk model to identify individuals at elevated risk for pancreatic cancer in the general population. PLoS One 2013;8(9):e72311.
29. Li X, Gao P, Huang C-J, et al. A deep-learning based prediction of pancreatic adenocarcinoma with electronic health records from the state of Maine. Int J Med Health Sci 2020;14:358–65.
30. Malhotra A, Rachet B, Bonaventure A, et al. Can we screen for pancreatic cancer? Identifying a sub-population of patients at high risk of subsequent diagnosis using machine learning techniques applied to primary care data. PLoS One 2021;16(6):e0251876.
31. Muhammad W, Hart GR, Nartowt B, et al. Pancreatic cancer prediction through an artificial neural network. Front Artif Intelligence 2019;2:2.
32. Munigala S, Singh A, Gelrud A, et al. Predictors for pancreatic cancer diagnosis following new-onset diabetes mellitus. Clin Transl Gastroenterol 2015;6(10):e118.
33. Park J, Artin MG, Lee KE, et al. Deep learning on time series laboratory test results from electronic health records for early detection of pancreatic cancer. J Biomed Inform 2022;131:104095.
34. Risch HA, Yu H, Lu L, et al. Detectable symptomatology preceding the diagnosis of pancreatic cancer and absolute risk of pancreatic cancer diagnosis. Am J Epidemiol 2015;182(1):26–34.
35. Sharma A, Kandlakunta H, Nagpal SJS, et al. Model to determine risk of pancreatic cancer in patients with new-onset diabetes. Gastroenterology 2018;155(3):730–9.e3.
36. Stapley S, Peters TJ, Neal RD, et al. The risk of pancreatic cancer in symptomatic patients in primary care: A large case-control study using electronic records. Br J Cancer 2012;106(12):1940–4.
37. Yu A, Woo SM, Joo J, et al. Development and validation of a prediction model to estimate individual risk of pancreatic cancer. PLoS One 2016;11(1):e0146473.
38. Zhao X, Lang R, Zhang Z, et al. Exploring and validating the clinical risk factors for pancreatic cancer in chronic pancreatitis patients using electronic medical records datasets: Three cohorts comprising 2,960 patients. Translational Cancer Res 2020;9(2):629–38.
39. Chen S-M, Phuc PT, Nguyen P-A, et al. A novel prediction model of the risk of pancreatic cancer among diabetes patients using multiple clinical data and machine learning. Cancer Med 2023;12(19):19987–99.
40. Appelbaum L, Cambronero JP, Stevens JP, et al. Development and validation of a pancreatic cancer risk model for the general population using electronic health records: An observational study. Eur J Cancer 2021;143:19–30.
41. Rasmy L, Xiang Y, Xie Z, et al. Med-BERT: Pretrained contextualized embeddings on large-scale structured electronic health records for disease prediction. NPJ Digital Med 2021;4(1):86.
42. Placido D, Yuan B, Hjaltelin JX, et al. A deep learning algorithm to predict risk of pancreatic cancer from disease trajectories. Nat Med 2023;29(5):1113–22.
43. Chen Q, Cherry DR, Nalawade V, et al. Clinical data prediction model to identify patients with early-stage pancreatic cancer. JCO Clin Cancer Inform 2021;5:279–87.
44. Salvatore M, Beesley LJ, Fritsche LG, et al. Phenotype risk scores (PheRS) for pancreatic cancer using time-stamped electronic health record data: Discovery and validation in two large biobanks. J Biomed Inform 2021;113:103652.
45. Zhu W, Aphinyanaphongs Y, Kastrinos F, et al. Identification of patients at risk for pancreatic cancer in a 3-year timeframe based on machine learning algorithms. medRxiv 2023;06.
46. Jia K, Kundrot S, Palchuk MB, et al. A pancreatic cancer risk prediction model (Prism) developed and validated on large-scale US clinical data. EBioMedicine 2023;98:104888.
47. Park J, Artin MG, Lee KE, et al. Structured deep embedding model to generate composite clinical indices from electronic health recor