These risks are drawn from published evidence and regulatory guidance specific to healthcare. Each is pre-scored on a 5×5 likelihood × impact matrix in the Risk Register tool and referenced in the generated policy.
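A 5×5 likelihood × impact matrix can be sketched as below. This is an illustrative reconstruction, not the Risk Register tool's implementation: the banding rule (Critical when impact is 5 and the raw score is at least 15, High from 12, Medium from 6, Low otherwise) is an assumption inferred from the scored entries in this list.

```python
def risk_score(likelihood: int, impact: int) -> int:
    """Raw score on a 5x5 matrix: likelihood (1-5) times impact (1-5)."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must each be in 1-5")
    return likelihood * impact

def risk_band(likelihood: int, impact: int) -> str:
    """Map a likelihood/impact pair to an illustrative severity band.

    The thresholds are assumptions chosen to reproduce the bands shown
    in this register (e.g. 3x5 -> Critical, 4x4 -> High).
    """
    score = risk_score(likelihood, impact)
    if impact == 5 and score >= 15:
        return "Critical"
    if score >= 12:
        return "High"
    if score >= 6:
        return "Medium"
    return "Low"
```

Under this sketch, `risk_band(3, 5)` yields "Critical" and `risk_band(4, 4)` yields "High", matching the scored entries below.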
Critical · Likelihood 3 · Impact 5
AI Diagnostic Error Causing Delayed or Incorrect Treatment
An AI diagnostic or clinical decision support system produces an incorrect output — a missed malignancy, contraindicated drug recommendation, or false-negative screening result — that a clinician acts upon without sufficient independent verification, causing delayed, omitted, or incorrect treatment and direct patient harm.
Critical · Likelihood 4 · Impact 5
Demographic Bias Producing Disparate Clinical Outcomes Across Patient Groups
AI systems trained on historically unrepresentative datasets produce systematically less accurate outputs for underrepresented populations — including women, Black, Asian, and minority ethnic patients, elderly individuals, and patients with disabilities — leading to inequitable diagnostic accuracy, risk stratification, and treatment recommendations that perpetuate existing health disparities.
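One common control for this risk is a subgroup performance audit: compute a clinically relevant metric per demographic group and flag groups that trail the best-performing group. The following is a minimal sketch under assumptions of our own (binary labels, sensitivity as the metric, a 0.05 disparity tolerance); real audits would use validated cohorts and confidence intervals.

```python
from collections import defaultdict

def sensitivity_by_group(records):
    """Sensitivity (true-positive rate) per group.

    records: iterable of (group, y_true, y_pred) with binary labels.
    """
    tp = defaultdict(int)  # true positives per group
    fn = defaultdict(int)  # false negatives per group
    for group, y_true, y_pred in records:
        if y_true == 1:
            if y_pred == 1:
                tp[group] += 1
            else:
                fn[group] += 1
    return {g: tp[g] / (tp[g] + fn[g]) for g in set(tp) | set(fn)}

def disparity_flags(sensitivities, tolerance=0.05):
    """Flag groups whose sensitivity trails the best group by > tolerance."""
    best = max(sensitivities.values())
    return {g: (best - s) > tolerance for g, s in sensitivities.items()}
```

The tolerance value is illustrative; an organisation would set it from its own equity policy and the clinical consequences of missed cases in each group.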
High · Likelihood 4 · Impact 4
AI Model Drift Causing Undetected Real-World Performance Degradation
A clinical AI model validated at deployment progressively deteriorates in real-world performance due to changes in patient population demographics, clinical workflows, disease prevalence, or medical imaging equipment, without the degradation being detected through post-market monitoring, resulting in a sustained period of substandard AI-assisted clinical care.
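Post-market monitoring of the kind this risk calls for can be as simple as comparing rolling performance on recent labelled cases against the validation baseline. A minimal sketch, assuming accuracy as the metric and an illustrative window size and tolerance (real deployments would monitor clinically meaningful metrics and account for labelling delay):

```python
from collections import deque

class DriftMonitor:
    """Rolling-window check of deployed performance against a baseline."""

    def __init__(self, baseline_accuracy, window=500, tolerance=0.05):
        self.baseline = baseline_accuracy      # accuracy at validation
        self.window = deque(maxlen=window)     # 1 if correct, else 0
        self.tolerance = tolerance             # acceptable drop

    def record(self, y_true, y_pred):
        """Log one adjudicated case as correct (1) or incorrect (0)."""
        self.window.append(int(y_true == y_pred))

    def rolling_accuracy(self):
        return sum(self.window) / len(self.window) if self.window else None

    def drifted(self):
        """True when recent accuracy trails the baseline by > tolerance."""
        acc = self.rolling_accuracy()
        return acc is not None and (self.baseline - acc) > self.tolerance
```

The point of the sketch is the control, not the numbers: without some such comparison running continuously, the degradation described above goes undetected by design.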
High · Likelihood 4 · Impact 4
Clinician Automation Bias and Unsafe Over-Reliance on AI Outputs
Clinical staff accept AI recommendations without applying adequate independent clinical judgment — particularly in high-volume settings using AI for triage or worklist prioritisation — resulting in systematic failures to catch AI errors that a vigilant clinician would have identified, and misallocation of clinical attention toward AI-flagged cases at the expense of AI-missed cases.
High · Likelihood 3 · Impact 4
Unlawful Processing of Patient Health Data in AI Training Without Valid Legal Basis
A healthcare AI system is trained or fine-tuned on patient health records, diagnostic images, or genomic data without adequate legal basis under GDPR Article 9 or HIPAA — for example by treating routine clinical data as available for commercial AI training without explicit consent — exposing the organisation to regulatory enforcement, patient trust damage, and potential criminal liability.
Critical · Likelihood 4 · Impact 5
Generative AI Hallucination in Clinical Documentation or Medical Information
Large language model-based AI tools used for clinical documentation, patient communication, or treatment protocol retrieval generate plausible-sounding but factually incorrect medical information — including fabricated drug interactions, incorrect dosage guidance, or invented clinical evidence — that enters the clinical record or influences treatment without being identified as erroneous.