These risks are drawn from published evidence and regulatory guidance specific to marketing and advertising. Each is pre-scored on a 5×5 likelihood × impact matrix in the Risk Register tool and referenced in the generated policy.
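The 5×5 scoring can be sketched as a small banding function. The thresholds below are an assumption inferred only from the ratings visible in this list (impact-5 risks rated Critical, impact-4 risks rated High), not the Risk Register tool's actual logic:

```python
# Minimal sketch of a 5x5 likelihood x impact banding function.
# Band thresholds are ASSUMED, chosen to reproduce the ratings in this list:
# impact-5 risks band as Critical; impact-4 risks at likelihood 3-4 band as High.

def risk_band(likelihood: int, impact: int) -> str:
    """Map a 1-5 likelihood and a 1-5 impact to a severity band."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must each be 1-5")
    score = likelihood * impact
    if impact == 5 and likelihood >= 3:
        return "Critical"
    if score >= 12:
        return "High"
    if score >= 6:
        return "Medium"
    return "Low"

# The four ratings used in this list:
#   (4, 5) and (3, 5) -> Critical; (4, 4) and (3, 4) -> High
```

Note that a pure score threshold would not reproduce these ratings (3 × 5 = 15 is Critical here while 4 × 4 = 16 is High), so the sketch treats impact 5 as Critical-qualifying on its own.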
Critical · Likelihood 4 · Impact 5
AI Advertising Discrimination Producing Unlawful Differential Ad Delivery by Protected Characteristics
AI programmatic advertising optimisation systems that maximise click-through or conversion rates learn to deliver advertisements for housing, employment, financial products, and consumer credit predominantly or exclusively to audience segments defined by race, gender, age, and national origin — producing discriminatory ad delivery patterns that violate the Fair Housing Act, Equal Credit Opportunity Act, and equivalent EU non-discrimination law even when protected characteristics are not explicit targeting inputs, because AI audience optimisation achieves functionally equivalent demographic segregation through correlated behavioural proxies.
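A standard post-hoc audit for this failure mode is to compare delivery rates across protected groups and apply the four-fifths (adverse impact ratio) test from US disparate-impact practice. The sketch below assumes access to impression logs joined with demographic panel data — which real audits can usually only approximate — and the group labels and figures are hypothetical:

```python
from collections import Counter

def delivery_rates(impressions):
    """impressions: iterable of (group, was_shown) pairs.
    Returns the per-group share of users who were shown the ad."""
    shown, total = Counter(), Counter()
    for group, was_shown in impressions:
        total[group] += 1
        if was_shown:
            shown[group] += 1
    return {g: shown[g] / total[g] for g in total}

def adverse_impact_ratio(rates):
    """Lowest group rate over highest group rate; < 0.8 flags disparity
    under the conventional four-fifths rule."""
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi else 0.0

# Hypothetical logs: a credit-product ad delivered mostly to group A.
logs = ([("A", True)] * 80 + [("A", False)] * 20
        + [("B", True)] * 30 + [("B", False)] * 70)
rates = delivery_rates(logs)          # A: 0.8, B: 0.3
ratio = adverse_impact_ratio(rates)   # ~0.375, well below the 0.8 threshold
```

Because the risk arises from correlated behavioural proxies rather than explicit targeting inputs, an audit of this kind on delivered outcomes is the check that matters; inspecting the targeting configuration alone will not reveal the skew.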
Critical · Likelihood 4 · Impact 5
AI-Generated Synthetic Content and Deepfakes Creating Brand, Legal, and Regulatory Exposure
AI tools used in marketing content production generate synthetic images, video, voice, and text that misrepresent real persons through deepfakes, create false impressions of celebrity endorsement, reproduce copyrighted creative works without licensing, or make factually inaccurate product claims — exposing the brand to false advertising liability, right of publicity claims, copyright infringement claims, and regulatory sanction from advertising standards authorities and consumer protection regulators.
High · Likelihood 4 · Impact 4
AI Manipulation and Dark Patterns in Digital Marketing Causing Consumer Harm and Regulatory Enforcement
AI systems trained to maximise engagement, click-through, or conversion generate and deploy manipulative interface design, exploitative emotional triggers, and deceptive persuasion techniques — including personalised manipulation based on AI-identified individual psychological vulnerabilities, AI-optimised countdown timers, false scarcity signals, and confirmshaming — causing consumer harm through unwanted purchases, subscription traps, and privacy consent obtained by manipulation rather than genuine choice, and attracting enforcement from the FTC, EU data protection authorities, and the ASA.
Critical · Likelihood 3 · Impact 5
AI Advertising Profiling Data Breach Exposing Sensitive Consumer Audience Segments
AI advertising data management platforms, customer data platforms, and programmatic advertising infrastructure holding highly detailed consumer behavioural profiles — including inferred health conditions, financial stress indicators, political sympathies, sexual orientation proxies, and location patterns — suffer data breaches or unauthorised third-party access, exposing sensitive consumer data assembled through AI profiling to threat actors, creating GDPR special category data breach liability and catastrophic consumer trust damage.
High · Likelihood 3 · Impact 4
AI Brand Safety Failure Placing Advertising Adjacent to Harmful or Illegal Content
AI programmatic advertising systems that automate media buying at scale place brand advertisements adjacent to extremist content, child sexual abuse material, terrorist propaganda, misinformation, or deeply offensive user-generated content — because AI brand safety filters fail to detect novel harmful content, are circumvented by adversarial content creators, or prioritise inventory scale over content safety — causing severe brand reputational damage, consumer backlash, and in some jurisdictions legal liability for advertising funding harmful content.
Critical · Likelihood 3 · Impact 5
AI Targeting Children with Inappropriate or Exploitative Advertising Content
AI audience targeting systems misclassify minors as adults, fail to implement effective age-gating, or direct advertising for age-restricted products — including gambling, alcohol, high-interest credit, and age-inappropriate content — to child and adolescent audiences, because AI personalisation systems relying on behavioural signals rather than verified age data do not reliably distinguish children from adults — violating the DSA's prohibition on profiling-based advertising to minors, COPPA, the Children's Code, and advertising standards codes.