AI-generated draft content. This page is educational and does not constitute legal advice. Regulatory obligations depend on your jurisdiction, organisation type, and specific AI use case — qualified legal, compliance, or HR review is always required before adoption.

Employee AI Guidelines for HR & Recruitment

Covers talent acquisition and applicant tracking, automated resume and CV screening, AI video interview analysis, psychometric and cognitive AI assessment, workforce planning and scheduling, performance management and employee scoring, compensation and pay equity analysis, employee monitoring and productivity surveillance, workforce reduction and redundancy selection, internal mobility and succession planning, and people analytics platforms. Any AI system that influences hiring, promotion, remuneration, performance assessment, disciplinary action, or workforce reduction decisions falls within this overlay.

Why Responsible AI matters in HR and recruitment

Organisations in HR and recruitment face AI obligations that generic templates don’t cover — employment-law duties, sector-specific regulators, data protection expectations for the populations you serve, and emerging AI-specific legislation. Blanket policies written for software companies miss most of what matters.

The Employee AI Guidelines tool produces plain-language AI guidelines for staff tailored to your jurisdiction, risk appetite, and the specifics of HR and recruitment. It is a drafting aid built to accelerate — not replace — qualified review by your in-house practitioners or external counsel.

AI risks that matter in HR and recruitment

Drawn from published evidence and regulatory guidance specific to HR and recruitment. Each is pre-scored on a 5×5 likelihood × impact matrix in the Risk Register tool and referenced in the generated policy.
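As a rough illustration of how a 5×5 matrix maps a likelihood–impact pair to a band, here is a minimal scoring sketch. The thresholds are assumptions chosen to match the badges on this page; they are not the Risk Register tool's actual cut-offs.

```python
# Illustrative 5x5 likelihood-by-impact scoring. The band thresholds
# below are assumptions for illustration, not the Risk Register
# tool's actual cut-offs.

def risk_band(likelihood: int, impact: int) -> str:
    """Map likelihood (1-5) and impact (1-5) to a risk band."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must each be between 1 and 5")
    score = likelihood * impact
    if score >= 20 or (impact == 5 and likelihood >= 3):
        return "Critical"
    if score >= 12:
        return "High"
    if score >= 6:
        return "Medium"
    return "Low"
```

With these assumed thresholds, likelihood 4 · impact 5 lands in Critical and likelihood 4 · impact 4 in High, consistent with the badges below.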

Critical · Likelihood 4 · Impact 5

AI Resume Screening Perpetuating Historical Hiring Discrimination at Scale

AI applicant screening systems trained on historical hiring decisions encode and replicate past discriminatory selection practices — systematically downranking candidates with minority ethnic names, employment gaps associated with caregiving responsibilities, or backgrounds from non-elite educational institutions — generating disparate impact against multiple protected groups across thousands of applications before the discrimination pattern is identified by HR oversight.
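One widely used screen for the disparate-impact pattern described above is the EEOC's four-fifths rule: a group whose selection rate falls below 80% of the highest group's rate is treated as showing evidence of adverse impact. A minimal sketch, with hypothetical group names and applicant counts:

```python
# Sketch of the EEOC "four-fifths rule" screen for adverse impact in
# selection rates. Group names and applicant counts are hypothetical.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (number selected, total applicants)."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}

def adverse_impact_flags(outcomes: dict[str, tuple[int, int]],
                         threshold: float = 0.8) -> dict[str, bool]:
    """Flag groups whose selection rate is below `threshold` times the
    highest group's rate (the four-fifths heuristic)."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {group: (rate / top) < threshold for group, rate in rates.items()}
```

For example, selection outcomes of 50/100 for one group and 30/100 for another give rates of 0.50 and 0.30; 0.30 / 0.50 = 0.6, which is below 0.8, so the second group is flagged. The rule is a screening heuristic, not a legal safe harbour.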

Critical · Likelihood 3 · Impact 5

AI Video Interview Analysis Using Prohibited or Pseudo-Scientific Assessment Proxies

AI video interview tools that purport to assess candidate suitability through facial expression analysis, micro-expression detection, voice tone evaluation, or eye movement patterns rely on pseudo-scientific methodology, produce racially and disability-correlated assessment disparities, and since 2 February 2025 constitute prohibited emotion recognition in EU employment contexts under EU AI Act Article 5 — exposing employers to regulatory enforcement and discrimination claims while providing no validated predictive value for job performance.

Critical · Likelihood 3 · Impact 5

Automated Workforce Reduction Selection Without Meaningful Human Oversight

AI systems used to identify employees for redundancy, performance management, or role elimination — including AI productivity scoring, attendance monitoring, and performance ranking — produce selection outputs that disproportionately affect protected characteristic groups and are implemented without adequate human review of individual circumstances, creating wrongful dismissal liability, discrimination claims, and EU AI Act Article 14 human oversight violations for high-risk employment AI.

High · Likelihood 4 · Impact 4

Employee Monitoring AI Causing Psychological Harm and Constituting Unlawful Surveillance

Pervasive AI employee surveillance systems — including keystroke logging, continuous screen monitoring, webcam surveillance during remote work, location tracking, productivity scoring, and sentiment analysis of communications — cause documented psychological harm including anxiety, stress, and burnout, reduce retention, and may constitute unlawful processing under GDPR where monitoring is disproportionate, insufficiently disclosed, or lacks a valid legal basis for each specific monitoring activity.

High · Likelihood 4 · Impact 4

AI Compensation Tools Encoding and Amplifying Gender and Racial Pay Gaps

AI salary benchmarking, pay band recommendation, and compensation equity analysis tools trained on market pay data that reflects historical gender, race, and disability pay discrimination produce algorithmic pay recommendations that perpetuate those gaps under the guise of market-rate objectivity, obscuring pay equity violations and creating legal exposure under equal pay legislation while providing employers false assurance that AI-determined compensation is non-discriminatory.
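A first-line check against this failure mode is an unadjusted median pay-gap calculation, following the convention used in UK gender pay gap reporting. A minimal sketch, with hypothetical pay figures; a real pay-equity analysis would also control for role, level, hours, and location:

```python
# Sketch of an unadjusted median pay-gap check, following the UK gender
# pay gap reporting convention. Pay figures passed in are hypothetical;
# a real equity analysis would also control for role, level, and hours.
from statistics import median

def median_pay_gap_pct(reference_pay: list[float],
                       comparison_pay: list[float]) -> float:
    """Gap (%) = (reference median - comparison median) / reference median * 100."""
    ref = median(reference_pay)
    return (ref - median(comparison_pay)) / ref * 100
```

A reference-group median of 40 against a comparison-group median of 30 yields a 25% gap. A positive gap means the comparison group's median pay is lower.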

High · Likelihood 4 · Impact 4

AI Assessment Tool Failure to Accommodate Candidates and Employees with Disabilities

AI cognitive aptitude tests, personality assessments, gamified evaluations, and timed psychometric tools used in hiring and performance management fail to provide reasonable adjustments for candidates with dyslexia, ADHD, autism spectrum conditions, visual impairments, or motor disabilities that affect AI-assessed performance but do not affect job capability — systematically screening out qualified disabled candidates in violation of ADA, Equality Act, and EU disability equality law.

How the five principles apply to HR and recruitment

Human oversight

Outputs support, rather than replace, the qualified practitioners in your HR and recruitment team. Human review is treated as a core step, not a rubber stamp.

Safety & validation

Before any AI system is acted on in HR and recruitment, it is tested in the specific population, workflow, and risk context of your organisation — not just in a vendor's demo environment.

Transparency & explainability

Outputs carry enough context — regulatory references, assumptions, known limitations — that a reviewer in HR and recruitment can trace and challenge them.

Accountability

Named roles — named individuals, named committees — are accountable for the AI decisions that affect people in your HR and recruitment organisation.

Equity & inclusiveness

Performance is reviewed across the demographic groups your HR and recruitment organisation actually serves, not just a dataset-wide average.

How the Employee AI Guidelines works

You describe your organisation and the staff roles in scope. The tool produces a plain-English guidelines document written for frontline employees — not for lawyers — covering what AI tools they can use, what they must not do, and how to escalate concerns.

The output is editable so it can be aligned with your induction and mandatory-training materials. It is a drafting aid intended for review by HR, learning-and-development, or information-governance leads before it reaches staff.

The output is a draft calibrated to HR and recruitment — it still requires review by qualified in-house or external practitioners before adoption.

What you get — measured and defensible

  • Readable by frontline staff — short sentences, concrete examples, no legal jargon.
  • Role-aware: individual contributors, managers, and technical roles each get guidance written for their context.
  • Includes a printable wallet card summarising the most critical rules for day-to-day reference.
  • Supports a no-blame reporting culture — the escalation process encourages concerns to surface early.

Regulatory and governance considerations

Selected obligations the tool’s output references for HR and recruitment. This is not a complete statement of your legal obligations — qualified counsel should verify applicability in your jurisdiction and context.

EU

EU AI Act — High-Risk AI in Employment and Workers Management (Annex III §4)

EU AI Act Annex III §4 classifies as high-risk AI systems intended for recruitment and selection of natural persons, making decisions affecting terms and conditions of employment, evaluating and monitoring performance, determining access to promotion, assigning tasks, and monitoring compliance with employment contracts. This captures AI applicant tracking and screening, AI video interview tools, automated performance scoring, AI scheduling, and AI used in redundancy and workforce restructuring decisions.

EU

EU AI Act — Prohibition on Emotion Recognition AI in Employment Contexts (Article 5)

EU AI Act Article 5(1)(f) prohibits AI systems that infer emotions of natural persons in the workplace, with only very limited exceptions for safety or medical reasons. This prohibition applies to AI video interview analysis tools that evaluate candidate emotions, engagement, or personality through facial expression analysis, voice tone analysis, or eye movement tracking, as well as employee monitoring AI that assesses emotional states.

EU

GDPR and National Data Protection Law — Employee and Candidate Personal Data in AI Systems

GDPR governs all processing of personal data of employees and job applicants in the EU, including data processed by AI for recruitment, performance management, and workforce planning. Article 22 applies to automated decisions with legal or similarly significant effects on employment, and Article 9 applies where AI processes or infers special category data including health, trade union membership, biometric data, or ethnicity from employee records or behaviour.

US

NYC Local Law 144 (2023) — Automated Employment Decision Tools

New York City Local Law 144 requires employers and employment agencies that use automated employment decision tools (AEDTs) to make, or substantially assist in making, employment decisions for candidates or employees in New York City to conduct annual independent bias audits and publicly publish audit results, as well as notify affected individuals that an AEDT is being used.
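The bias audit LL144 mandates centres on impact ratios. For a scored AEDT, as we read the DCWP implementing rules, a category's scoring rate is the share of that category scoring above the sample's median score, and each rate is divided by the highest category's rate. A sketch of that calculation, with hypothetical groups and scores:

```python
# Sketch of the per-category "impact ratio" an LL144 bias audit reports
# for a scored AEDT, as we read the DCWP implementing rules: a category's
# scoring rate is the share of that category scoring above the sample
# median, and each rate is divided by the highest category's rate.
# Groups and scores below are hypothetical.
from statistics import median

def impact_ratios(scores_by_group: dict[str, list[float]]) -> dict[str, float]:
    all_scores = [s for scores in scores_by_group.values() for s in scores]
    cutoff = median(all_scores)  # sample-wide median score
    rates = {group: sum(s > cutoff for s in scores) / len(scores)
             for group, scores in scores_by_group.items()}
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()}
```

An audit would report these ratios per sex and race/ethnicity category; ratios well below 1.0 indicate the tool scores that category above the median far less often than the best-performing category.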

Built to strengthen in-house expertise

Every output is an editable draft. Every section carries the regulatory basis it was built from, so reviewers in your HR and recruitment team can verify, challenge, and adapt it to local context. Nothing is a finished legal instrument; nothing is intended to bypass qualified review.

We publish explicit disclaimers in the generated documents themselves, and treat human oversight as a default — not an opt-in. The tool’s role is to reduce the time your qualified practitioners spend on the first draft, so they can focus on review and adaptation.

Explore the Employee AI Guidelines for HR & Recruitment

Review a sample of what the tool produces, then generate a draft tailored to your own HR and recruitment organisation. $29.95 · one-time.

Laws the output references for HR and recruitment

17 regulations across 8 jurisdictions. This list is descriptive, not exhaustive, and is subject to change — verify applicability with qualified counsel before relying on any reference.

AU

  • Australia Fair Work Act 2009 and AHRC Guidance on AI in Employment: The Fair Work Act 2009 is Australia's federal workplace-relations law. The Australian Human Rights Commission (AHRC) has published guidance on AI in employment addressing discrimination, accessibility, and procedural fairness. State and territory anti-discrimination laws apply concurrently. The 2024 Privacy Act amendments introduce stricter rules for employee personal-information handling.

BR

  • Brazil Consolidação das Leis do Trabalho (CLT, Decreto-Lei 5.452/1943) — AI in HR Decisions: The CLT is the consolidated Brazilian labour law applicable to private-sector employment relationships. AI tools used in recruitment, performance management, and dismissal must respect CLT provisions on non-discrimination, just cause for dismissal, and collective-bargaining obligations. Ministério Público do Trabalho (MPT) monitors employer compliance and has issued statements on AI in employment including expectations for transparency and bias mitigation.
  • Brazil LGPD (Law 13,709/2018) and ANPD Guidance — Employee Data in HR AI: The Lei Geral de Proteção de Dados applies to processing of employee and candidate personal data including by HR AI systems. LGPD Article 20 grants a right to review of automated decisions; Article 11 sets conditions for processing sensitive data including health and biometric data used in workforce AI; and ANPD guidance on automated decisions (2024) clarifies explanation and review obligations in the employment context.

EU

  • EU AI Act — High-Risk AI in Employment and Workers Management (Annex III §4): EU AI Act Annex III §4 classifies as high-risk AI systems intended for recruitment and selection of natural persons, making decisions affecting terms and conditions of employment, evaluating and monitoring performance, determining access to promotion, assigning tasks, and monitoring compliance with employment contracts. This captures AI applicant tracking and screening, AI video interview tools, automated performance scoring, AI scheduling, and AI used in redundancy and workforce restructuring decisions.
  • EU AI Act — Prohibition on Emotion Recognition AI in Employment Contexts (Article 5): EU AI Act Article 5(1)(f) prohibits AI systems that infer emotions of natural persons in the workplace, with only very limited exceptions for safety or medical reasons. This prohibition applies to AI video interview analysis tools that evaluate candidate emotions, engagement, or personality through facial expression analysis, voice tone analysis, or eye movement tracking, as well as employee monitoring AI that assesses emotional states.
  • GDPR and National Data Protection Law — Employee and Candidate Personal Data in AI Systems: GDPR governs all processing of personal data of employees and job applicants in the EU, including data processed by AI for recruitment, performance management, and workforce planning. Article 22 applies to automated decisions with legal or similarly significant effects on employment, and Article 9 applies where AI processes or infers special category data including health, trade union membership, biometric data, or ethnicity from employee records or behaviour.
  • EU Employment Equality Directives and Works Council Information and Consultation Rights: The EU Employment Equality Framework Directive (2000/78/EC), Race Equality Directive (2000/43/EC), and Gender Equality Directive (2006/54/EC) prohibit discrimination in employment on multiple protected grounds. Separately, EU and national works council legislation — including the European Works Councils Directive and national information and consultation laws — require employers to inform and consult employee representatives before implementing AI systems materially affecting working conditions.

IN

  • India DPDP Act 2023 — Employee and Candidate Personal Data in HR AI: The Digital Personal Data Protection Act 2023 applies to processing of employee and candidate digital personal data by HR AI systems including CV screening, background checks, performance analytics, and workforce planning. Employer-specific exemptions exist for limited processing but do not displace core DPDP obligations. Sensitive data including biometric data attracts enhanced controls.

SG

  • Singapore Tripartite Guidelines on Fair Employment Practices and PDPA: The Tripartite Guidelines on Fair Employment Practices issued by MOM, NTUC, and SNEF establish Singapore's fair-employment expectations. The Personal Data Protection Act 2012 applies to employee and candidate personal data. The IMDA Model AI Governance Framework provides voluntary but widely-referenced AI governance expectations for HR AI including fairness and transparency.

UAE

  • UAE Federal Decree-Law No. 33 of 2021 (Labour Law) — AI-Assisted HR Decisions: Federal Decree-Law No. 33 of 2021 is the UAE's federal labour law applicable to private-sector employment. AI tools used in recruitment, performance management, promotion, and termination decisions must comply with the Law's non-discrimination, good-faith, and procedural-fairness obligations. The Ministry of Human Resources and Emiratisation (MOHRE) enforces the Law, and employees have recourse to labour courts for discriminatory or unjust AI-driven decisions.
  • UAE Federal Decree-Law No. 45 of 2021 (PDPL) — Employee Data in HR AI Systems: The UAE Personal Data Protection Law Federal Decree-Law No. 45 of 2021 regulates processing of personal data including employee and candidate data by HR AI systems. CV screening, performance analytics, workforce planning, and employee-monitoring AI are all in scope. ADGM and DIFC data-protection regulations apply in lieu of the federal PDPL for entities in those free zones.

UK

  • UK Equality Act 2010 — Protected Characteristics and AI in Employment Decisions: The Equality Act 2010 prohibits direct and indirect discrimination, harassment, and victimisation in employment on nine protected characteristics: age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex, and sexual orientation — applying fully to AI systems used by UK employers in recruitment, selection, performance management, promotion, and redundancy.

US

  • NYC Local Law 144 (2023) — Automated Employment Decision Tools: New York City Local Law 144 requires employers and employment agencies that use automated employment decision tools (AEDTs) to make, or substantially assist in making, employment decisions for candidates or employees in New York City to conduct annual independent bias audits and publicly publish audit results, as well as notify affected individuals that an AEDT is being used.
  • US EEOC Guidance on Artificial Intelligence and Equal Employment Opportunity Law (2023–2024): The Equal Employment Opportunity Commission has issued technical guidance confirming that Title VII of the Civil Rights Act, the Americans with Disabilities Act (ADA), and the Age Discrimination in Employment Act (ADEA) apply fully to employer use of AI in all phases of employment including recruitment, screening, assessment, hiring, performance management, and termination.
  • Illinois Artificial Intelligence Video Interview Act (AIVIA) and State AI Assessment Laws: Illinois' Artificial Intelligence Video Interview Act (820 ILCS 42) requires employers using AI to analyse video interviews to notify applicants before the interview, explain how the AI works and what characteristics it evaluates, obtain explicit written consent, limit sharing of video recordings, and destroy recordings within 30 days of a request. Multiple US states including Maryland and Texas have enacted or proposed similar requirements for AI in employment assessment.
  • US National Labor Relations Act — AI in Employee Monitoring and Collective Bargaining: The National Labor Relations Act (29 U.S.C. §151 et seq.) protects employees' rights to engage in concerted activity for mutual aid or protection. The NLRB's 2022 and 2023 General Counsel memoranda confirm that electronic monitoring and AI-driven management tools — including productivity analytics, keystroke logging, and AI performance evaluation — may interfere with Section 7 rights if they chill protected concerted activity. Employer use of AI to infer union-organising activity or to retaliate against protected activity is unlawful.
  • Colorado Artificial Intelligence Act (SB 24-205): Requires developers and deployers of high-risk AI systems in Colorado to use reasonable care to protect consumers from algorithmic discrimination in consequential decisions including employment, credit, insurance, and healthcare.