These risks are drawn from published evidence and regulatory guidance specific to manufacturing. Each is pre-scored on a 5×5 likelihood × impact matrix in the Risk Register tool and referenced in the generated policy.
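A 5×5 likelihood × impact matrix assigns each cell a severity band rather than a raw product (note that Likelihood 3 · Impact 5 scores Critical here while the numerically larger Likelihood 4 · Impact 4 scores High). The sketch below illustrates how such a lookup might work; the band boundaries are assumptions for illustration, and only the three scored combinations in this section come from the document.

```python
# Hypothetical 5x5 likelihood x impact severity lookup. The band layout is
# an illustrative assumption; the Risk Register tool's actual matrix is not
# specified in this section.

SEVERITY = [
    # impact:  1          2         3         4          5
    ["Low",    "Low",    "Low",    "Medium", "Medium"],    # likelihood 1
    ["Low",    "Low",    "Medium", "Medium", "High"],      # likelihood 2
    ["Low",    "Medium", "Medium", "High",   "Critical"],  # likelihood 3
    ["Low",    "Medium", "High",   "High",   "Critical"],  # likelihood 4
    ["Medium", "High",   "High",   "Critical", "Critical"],# likelihood 5
]

def severity(likelihood: int, impact: int) -> str:
    """Map a 1..5 likelihood and 1..5 impact to a severity band."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be in 1..5")
    return SEVERITY[likelihood - 1][impact - 1]

# The three scored combinations used in this section:
assert severity(3, 5) == "Critical"  # e.g. AI industrial control failure
assert severity(4, 4) == "High"      # e.g. predictive maintenance failure
assert severity(4, 3) == "High"      # e.g. worker monitoring
```

A matrix lookup rather than a simple `likelihood * impact` product is what lets impact-5 safety scenarios escalate to Critical even at moderate likelihood.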
Critical · Likelihood 3 · Impact 5
AI Industrial Control System Failure Causing Safety Incident or Production Catastrophe
An AI system performing safety-relevant control, monitoring, or optimisation functions in manufacturing — including AI process controllers, AI safety interlocks, AI predictive shutdown systems, or AI-enabled collaborative robot control — exhibits unexpected behaviour due to model drift, adversarial input, out-of-distribution operating conditions, or software defect, causing a dangerous machine state, release of hazardous material, explosion, or serious worker injury that a deterministic safety system would have prevented.
Critical · Likelihood 3 · Impact 5
AI Quality Control Misclassification Releasing Safety-Critical Defective Products
An AI vision inspection or quality control system used to approve manufactured components — including pharmaceutical capsules, aerospace fasteners, automotive safety systems, or medical device components — misclassifies defective items as conforming product and releases them into the supply chain, resulting in product failures in safety-critical applications causing injury, death, large-scale product recall, regulatory sanctions, and catastrophic reputational damage to the manufacturer.
Critical · Likelihood 3 · Impact 5
Cyberattack Exploiting AI Vulnerabilities in OT Networks Causing Sabotage or Data Theft
Threat actors exploit vulnerabilities specific to AI components in operational technology networks — including adversarial input attacks that manipulate AI sensor interpretation, model poisoning through compromised training pipelines, or exploitation of AI API interfaces — to sabotage manufacturing processes, cause equipment damage, exfiltrate proprietary process and product data, or hold production systems to ransom in a manner that conventional OT cybersecurity controls were not designed to detect.
High · Likelihood 4 · Impact 4
AI Predictive Maintenance Failure Leading to Unplanned Equipment Downtime or Catastrophic Asset Failure
An AI predictive maintenance system trained on historical failure data produces incorrect remaining useful life predictions — through model drift as equipment ages beyond training distribution, failure to account for novel fault modes, or inadequate sensor data quality — resulting in either premature maintenance that wastes resources or missed failure prediction that allows critical equipment to fail catastrophically, causing production stoppages, secondary damage, safety incidents, or contractual delivery penalties.
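One failure mechanism named above is model drift as equipment ages beyond the training distribution. A minimal out-of-distribution check of the kind a mitigation programme might mandate is sketched below; the baseline statistics, sensor values, and z-score threshold are illustrative assumptions, not taken from any specific predictive-maintenance product.

```python
import math

# Minimal drift monitor: flag when a window of live sensor readings drifts
# outside the distribution the model was trained on. All numbers here are
# illustrative assumptions.

def drift_alert(baseline_mean: float, baseline_std: float,
                live_values: list[float], z_threshold: float = 3.0) -> bool:
    """Return True when the live window's mean is implausibly far from the
    training-time baseline (z-test on the window mean)."""
    n = len(live_values)
    live_mean = sum(live_values) / n
    se = baseline_std / math.sqrt(n)  # standard error of the mean
    return abs(live_mean - baseline_mean) / se > z_threshold

# Hypothetical bearing-vibration channel trained around 1.0 mm/s:
assert drift_alert(1.0, 0.2, [1.4, 1.5, 1.6, 1.5]) is True   # ageing asset
assert drift_alert(1.0, 0.2, [1.0, 1.1, 0.9, 1.0]) is False  # in distribution
```

A tripped alert would route predictions to human review rather than let the model continue estimating remaining useful life on data it was never trained on.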
High · Likelihood 4 · Impact 4
AI Supply Chain Optimisation Creating Dangerous Single-Source Concentration and Resilience Failure
AI supply chain optimisation systems that maximise cost efficiency by concentrating procurement in the lowest-cost suppliers systematically reduce supply chain diversity and geographic resilience, creating critical single-source dependencies that collapse under geopolitical disruption, natural disaster, or supplier failure — with AI optimisation continuing to recommend consolidation even as concentration risk accumulates beyond levels that human supply chain managers would have recognised as dangerous.
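The concentration risk described above can be made measurable with a guardrail of the kind the scenario implies is missing, for example the Herfindahl-Hirschman Index (HHI) over supplier spend shares. The 0.25 breach threshold below is an illustrative assumption, not a regulatory figure.

```python
# Sketch of a concentration guardrail for an AI-optimised supplier mix:
# compute the Herfindahl-Hirschman Index over spend shares and flag when
# consolidation passes an agreed limit. Threshold is an assumption.

def hhi(spend_by_supplier: dict[str, float]) -> float:
    """Sum of squared spend shares; 1/n for n equal suppliers, 1.0 for one."""
    total = sum(spend_by_supplier.values())
    return sum((s / total) ** 2 for s in spend_by_supplier.values())

def concentration_breach(spend_by_supplier: dict[str, float],
                         limit: float = 0.25) -> bool:
    return hhi(spend_by_supplier) > limit

# Four balanced suppliers: HHI = 0.25, at but not over the limit.
assert not concentration_breach({"A": 25, "B": 25, "C": 25, "D": 25})
# Cost-optimised consolidation into one dominant supplier breaches it.
assert concentration_breach({"A": 85, "B": 5, "C": 5, "D": 5})
```

A hard constraint like this, applied outside the optimiser, is one way to stop cost-minimising recommendations from accumulating single-source dependency unchecked.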
High · Likelihood 4 · Impact 3
AI Worker Monitoring and Surveillance Creating Workplace Harm and Legal Exposure
AI systems used to monitor worker productivity, movement, physical exertion, fatigue, and compliance in manufacturing environments — including AI wearables, computer vision surveillance, and AI-scored performance metrics — cause documented psychological harm through pervasive surveillance and generate biased performance assessments that disproportionately disadvantage workers with disabilities or atypical working patterns. In the EU, such systems may also constitute disproportionate employee monitoring in violation of the GDPR, particularly where deployed without adequate works council consultation.