• Screening, diagnosis, prognosis

  • Evaluation of technologies and biomarkers

  • Breast

The architectural gap in clinical artificial intelligence

Conducted on data from 397,648 women aged 40 to 97 years and then validated on data from a further 96,348 and 4,512 women, this study evaluates the performance of a score, derived from a machine-learning algorithm integrating mammographic image data, for predicting breast cancer risk.

In The Lancet Digital Health, a systematic review and meta-analysis by Samer Alabed and colleagues1 provides a clear picture of the current state of clinical artificial intelligence (AI) in radiology communication. Across patients, members of the public, and clinicians, large language model-generated simplifications of radiology reports were consistently rated as more understandable than the original reports, while maintaining high clinician-rated accuracy and completeness. At the same time, the meta-analysis highlights persistent concerns around reliability, safety, and the absence of real-world deployment data. These findings echo a broader pattern seen in medical imaging and natural language processing: across well-defined clinical use cases, AI models can deliver output that clinicians recognise as useful and reliable, and model performance is no longer the dominant constraint on clinical use.2–4 Yet the presence of AI models in routine care remains marginal.5,6 This contrast between robust technical performance on the one hand and poor clinical adoption on the other defines the central paradox of contemporary AI in medicine. The limiting factor is not the capability of the models but the readiness of the systems into which they must be deployed. The real challenge, then, is ascertaining whether existing clinical systems are prepared to integrate these tools.

The Lancet Digital Health, commentary, 2026
