Next-generation AI for visually occult pancreatic cancer detection in a low-prevalence setting with longitudinal stability and multi-institutional generalisability
Conducted using several cohorts comprising patients with pancreatic ductal adenocarcinoma and healthy individuals, this study evaluates the performance of an automated system that uses next-generation artificial intelligence and radiomic data to detect pancreatic cancer that is not visible to the naked eye.
Background: Failure of conventional imaging to detect pancreatic ductal adenocarcinoma (PDA) at its visually occult pre-diagnostic stage is a primary barrier to improving its otherwise poor survival rate.
Objective: To develop and validate the Radiomics-based Early Detection MODel (REDMOD), an AI framework to identify subvisual radiomic signatures of pre-diagnostic PDA on standard-of-care CT.
Design: REDMOD was trained on a multi-institutional cohort (n=969; 156 pre-diagnostic, 813 control) and tested on an independent set (n=493; 63 pre-diagnostic, 430 control), simulating a low-prevalence (~1:6) early detection paradigm. The fully automated framework couples AI-driven segmentation with a heterogeneous ensemble architecture trained on a 40-feature radiomic signature derived from Synthetic Minority Over-sampling Technique (SMOTE)-balanced data. A tunable Youden Index-optimised classification threshold enables performance calibration without retraining. Validation included direct comparison with radiologists, longitudinal test–retest analysis, and external specificity validation across two independent cohorts (n=539 and n=80).
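The tunable threshold described above can be sketched in a few lines. This is a minimal, hypothetical illustration of Youden-index threshold selection (J = sensitivity + specificity − 1) on toy scores, not REDMOD's actual implementation; the function name and data are illustrative assumptions.

```python
def youden_threshold(scores, labels):
    """Return the score cut-off maximising J = sensitivity + specificity - 1.

    scores: model outputs (higher = more suspicious); labels: 1 = case, 0 = control.
    """
    pos = sum(labels)
    neg = len(labels) - pos
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        # classify as positive when score >= t
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        sens = tp / pos
        spec = tn / neg
        j = sens + spec - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Toy example (illustrative data only)
scores = [0.1, 0.2, 0.35, 0.4, 0.6, 0.7, 0.8, 0.9]
labels = [0,   0,   0,    1,   0,   1,   1,   1]
t, j = youden_threshold(scores, labels)
```

Because the threshold is chosen on the score distribution alone, it can be re-tuned for a higher-sensitivity or higher-specificity operating point without retraining the underlying model, as the abstract notes.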
Results: On the independent test set (n=493), REDMOD identified occult PDA (AUC 0.82; 73.0% sensitivity) at a median 475-day lead time. This represented nearly twofold higher sensitivity than radiologists (38.9%; p<0.001), an advantage that grew to nearly threefold (68.0% vs 23.0%) at >24 months lead time. REDMOD showed strong longitudinal stability (90–92% concordance) and generalisable specificity across multi-institutional (81.3%; n=539) and public (87.5%; n=80) datasets. Mechanistic analyses confirmed that predictive power derived principally from multi-scale wavelet-filtered textural features (90% of the selected signature), which outperformed unfiltered features (AUC 0.82 vs 0.74; p=0.007) in capturing subvisual architectural disruptions.
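To make the wavelet-filtering idea concrete: radiomic pipelines commonly compute texture features on wavelet sub-bands rather than on the raw image, so that texture at different spatial scales is separated. The sketch below is a generic one-level 2D Haar decomposition on a toy image, purely illustrative and not the paper's feature-extraction code.

```python
def haar2d(img):
    """One-level 2D Haar transform: returns (LL, LH, HL, HH) sub-bands.

    LL holds coarse structure; LH/HL/HH hold horizontal, vertical and
    diagonal detail, the bands texture features are typically computed on.
    """
    rows, cols = len(img), len(img[0])
    # row pass: low-pass (pairwise averages) and high-pass (pairwise differences)
    lo = [[(r[2 * j] + r[2 * j + 1]) / 2 for j in range(cols // 2)] for r in img]
    hi = [[(r[2 * j] - r[2 * j + 1]) / 2 for j in range(cols // 2)] for r in img]

    def col_pass(m, op):
        return [[op(m[2 * i][j], m[2 * i + 1][j]) for j in range(len(m[0]))]
                for i in range(rows // 2)]

    avg = lambda a, b: (a + b) / 2
    dif = lambda a, b: (a - b) / 2
    return (col_pass(lo, avg), col_pass(lo, dif),
            col_pass(hi, avg), col_pass(hi, dif))

# Toy 4x4 "image" of flat 2x2 blocks: all detail bands come out zero,
# while LL keeps the block-level intensities.
img = [[1, 1, 2, 2],
       [1, 1, 2, 2],
       [5, 5, 6, 6],
       [5, 5, 6, 6]]
LL, LH, HL, HH = haar2d(img)
```

On a CT patch, non-zero energy in the detail bands captures fine-scale textural variation, which is the kind of subvisual architectural disruption the abstract attributes the model's predictive power to.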
Conclusions: REDMOD is an automated, mechanistically grounded, longitudinally stable, externally validated AI framework that surpasses radiologists for PDA detection at its visually occult pre-diagnostic stage. These attributes position it for prospective validation in high-risk cohorts, a necessary step towards shifting the paradigm from late-stage symptomatic diagnosis to proactive pre-clinical interception.
Gut, open-access article, 2026