When automation goes wrong: AI in HR

Updated as of: 05 June 2025

AI-powered HR tools are triggering costly discrimination lawsuits for some companies. What steps can employers take to deploy these systems safely and fairly?

Key takeaways

  • AI hiring and workplace management systems can discriminate against certain individuals due to algorithmic bias.
  • Companies could be caught in the crosshairs of laws governing anti-discrimination and “high risk” AI systems.
  • Businesses risk costly legal battles and regulatory scrutiny unless necessary safeguards and oversight are in place.


AI-powered workplace management tools allow overstretched human resources (HR) departments to streamline core tasks, like recruitment, onboarding and performance management. More than two thirds of UK HR leaders currently use or plan to use AI in some capacity in the next year, according to US recruitment company HireVue. 

But this efficiency comes at a cost. Poor AI implementation exposes employers to potential discrimination lawsuits from biased algorithms. Without careful oversight, businesses risk costly legal battles over unfairness, discrimination and digital exclusion.

A recent collective action against HR platform Workday showcases the potential for significant legal exposure that comes with utilising AI in HR. The suit alleges that Workday’s AI-powered applicant screening tool systematically disadvantaged certain applicants. A California judge allowed the case to proceed on 16 May 2025. 

Regulatory scrutiny is also increasing. The US Equal Employment Opportunity Commission’s (EEOC) AI guidance for employers warns that companies can be held liable for AI misuse, even if the tool was developed by a third party. The UK’s “responsible AI in recruitment” guidance calls on employers to ensure their AI hiring systems are fair, secure and enable adequate accountability. 

Lexology PRO examines the key legal compliance risks of HR AI and how employers can deploy these tools safely and fairly. 

How does AI risk discrimination in HR? 

“Using AI-powered tools can reduce the risk of human bias, but the decisions made using AI are only as good as the data being provided. Data may be inherently biased, particularly if it is based on historical experience or a small sample size. In this situation, the employer may be exposed to claims for damages due to discrimination. Above all, it’s vital to understand, and to be able to explain, how decisions are being made,” according to Susannah Donaldson, employment partner in Pinsent Masons’ Glasgow office.

AI hiring and workplace management systems potentially enable, reinforce and amplify discrimination against historically marginalised groups. Disabled or older people, women and candidates from minority ethnic backgrounds may be particularly vulnerable to discrimination by AI.

This can play out when companies recruit. Several companies have been accused of developing or deploying AI-powered HR systems that discriminated against job candidates. For example, the EEOC reached a US$365,000 settlement with iTutorGroup to resolve its first AI discrimination lawsuit in 2023. The EEOC accused iTutorGroup of utilising AI hiring software that automatically rejected older job applicants. The company denied wrongdoing. 

The American Civil Liberties Union filed a discrimination complaint against financial software company Intuit in April 2025, alleging its AI interviewing tool discriminated against a deaf and Indigenous employee. The employee was denied a promotion after the platform – which lacked subtitles and refused a request for human-generated captions – penalised them for not demonstrating “active listening.” The case highlights how AI can fail to accommodate disabilities, creating discriminatory barriers.

Use of AI can lead to discrimination in other workplace contexts. In March 2024, Uber Eats paid an undisclosed settlement to a black driver after faulty AI facial recognition wrongly flagged his identity verification, closing his account and leaving him unable to work. 

Some companies use AI to monitor employees’ performance, for example by installing keystroke monitors and other productivity tracking tools. But AI systems may unfairly penalise workers with physical or mental health conditions that affect their productivity. 

Companies risk unfair dismissal claims if they dismiss employees based on performance data collected and analysed by AI and cannot demonstrate that the automated decision-making process was fair and accurate. 

These cases demonstrate the legal and financial consequences for companies deploying AI without proper safeguards to prevent discrimination.

A growing threat of liability

As well as risking potentially costly lawsuits, companies utilising AI to manage their workforce face being caught in the crosshairs of laws targeting discrimination and unfair decision making. These include the UK Equality Act 2010, the EU Charter of Fundamental Rights 2000 and related directives, and in the US, the Civil Rights Act 1964 (CRA) and other workplace-specific laws, such as the Age Discrimination in Employment Act 1967 (ADEA).

“Under Title VII of CRA, if an algorithm disproportionately screens out candidates of a protected class or downgrades certain employees’ performance scores based on protected traits, it may constitute unlawful disparate impact unless the employer can prove the practice is job-related and consistent with business necessity. Similarly, the Americans with Disabilities Act 1990 (ADA) is implicated if automated assessments unjustly exclude individuals with disabilities or fail to provide reasonable accommodations. For example, a video-interview AI that cannot fairly evaluate a neurodivergent candidate could violate ADA requirements,” according to Dario de Martino, A&O Shearman M&A partner and Anna Rudawski, A&O Shearman Cybersecurity partner.
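The disparate impact analysis described above is often operationalised with the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures: a selection rate for any group below 80% of the highest group’s rate is treated as evidence of adverse impact. A minimal sketch of that check, with illustrative counts and hypothetical group names:

```python
# Four-fifths rule check: flag groups whose selection rate falls below
# 80% of the highest-selected group's rate. All figures are illustrative.

def adverse_impact_ratios(groups):
    """groups: {name: (selected, applicants)} -> {name: ratio vs top group}."""
    rates = {g: selected / applicants for g, (selected, applicants) in groups.items()}
    top_rate = max(rates.values())
    return {g: rate / top_rate for g, rate in rates.items()}

# Hypothetical screening outcomes from an AI applicant-screening tool
groups = {"group_a": (48, 100), "group_b": (24, 80)}

ratios = adverse_impact_ratios(groups)
flagged = [g for g, r in ratios.items() if r < 0.8]  # below the 80% threshold
```

A flagged group is not automatically proof of unlawful discrimination, but it is the kind of statistical signal that would prompt a closer audit of the tool before a regulator or claimant does.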

The EU General Data Protection Regulation (GDPR) and its UK counterpart restrict the use of automated decision-making and profiling. Fully automated employment decisions that significantly affect workers face some of the strictest legal requirements. Companies must tell employees about this processing and give them an easy way to request human review or challenge the decision.

The EU AI Act 2024 has strict requirements for using “high risk” AI models, including AI used for HR, which take effect on 2 August 2026. Employers must inform candidates and employees if they are using a high-risk AI system, as well as carry out data protection impact assessments and ensure appropriate human oversight. Failure to comply with these provisions risks fines of up to €15 million (US$17 million) or 3% of worldwide annual turnover. 

“Employers may also be required to comply with the legislation regulating AI providers, if the AI is provided by the employer itself. For example, if the company develops the AI system or has it developed then puts it into operation under their own name or trademark, whether for a fee or free of charge. If this is the case, the employer must ensure that the business remains compliant throughout the tool’s lifecycle,” says Michael Kalbfus, partner in Pinsent Masons’ Munich office.

The evolving regulatory space suggests employers’ legal obligations and liability around the use of AI systems are set to increase, underscoring the necessity of taking preventative action to avoid discrimination.

Best practices for using AI in HR 

There are some best practices companies can follow to ensure their AI usage is ethical and compliant with anti-discrimination law. 

Carry out due diligence in procurement 

Companies must carry out adequate due diligence when procuring an AI HR system. They should check that the potential provider offers clear, evidence-based assurances that they are compliant with the relevant laws and actively mitigate the risk of discrimination. 

Conduct equality impact assessments

Before deploying AI, companies should identify potential risks in an equality impact assessment.

If the assessment identifies potential for discrimination or exclusion, companies must implement mitigating measures, such as human oversight to review AI outputs and decisions. 

Companies should regularly update their impact assessments after deploying AI – at least annually – to identify emerging risks. AI systems can experience “model drift”, where they start behaving unexpectedly over time.
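One way to make these periodic reviews concrete is to compare the tool’s current behaviour against the baseline recorded in the initial impact assessment, and trigger a fresh assessment when the two diverge materially. A minimal sketch, assuming the employer logs the tool’s pass rate per review period (all figures and thresholds are illustrative):

```python
# Periodic model-drift check: flag the AI system for reassessment when its
# pass rate deviates materially from the baseline recorded at deployment.
# Baseline, tolerance and logged rates are all illustrative assumptions.

BASELINE_PASS_RATE = 0.45  # rate observed during the initial impact assessment
DRIFT_TOLERANCE = 0.10     # absolute deviation that triggers a new assessment

def needs_reassessment(current_pass_rate,
                       baseline=BASELINE_PASS_RATE,
                       tolerance=DRIFT_TOLERANCE):
    """True if the tool's behaviour has shifted enough to warrant review."""
    return abs(current_pass_rate - baseline) > tolerance

quarterly_rates = [0.44, 0.47, 0.41, 0.31]  # hypothetical monitoring log
alerts = [rate for rate in quarterly_rates if needs_reassessment(rate)]
```

In practice the monitored metric would be broken down by protected group, so that drift affecting one group disproportionately is caught rather than averaged away.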

Maintain a “human in the loop”

HR professionals should consistently review decisions generated by AI, particularly high-impact decisions, such as performance monitoring or dismissals. This can identify early whether biases are distorting AI decision-making, so they can be corrected before they negatively affect individuals. 
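In system terms, this usually means routing AI recommendations through a gate: high-impact actions go to a human review queue rather than being applied automatically. A minimal sketch, with hypothetical action categories:

```python
# Human-in-the-loop gate: AI recommendations for high-impact actions
# (e.g. dismissal, demotion) are queued for HR review instead of being
# executed automatically. Categories and record shapes are illustrative.

HIGH_IMPACT_ACTIONS = {"dismissal", "demotion", "performance_warning"}

def route_decision(ai_recommendation):
    """Return 'human_review' for high-impact recommendations, else 'auto'."""
    action = ai_recommendation["action"]
    return "human_review" if action in HIGH_IMPACT_ACTIONS else "auto"

routes = [route_decision(rec) for rec in (
    {"action": "dismissal", "subject": "employee_123"},
    {"action": "schedule_interview", "subject": "candidate_456"},
)]
```

Keeping a log of which decisions were routed for review, and what the reviewer decided, also helps demonstrate the accountability that regulators such as the EEOC and the EU AI Act expect.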

Develop a transparent AI policy

Companies should develop clear policies setting out how AI tools are used for recruitment and/or workplace management, what personal data is being processed by AI, and how employees or candidates can contest AI decisions.

See Lexology PRO’s interactive Compliance Calendar for key upcoming deadlines and dates in core compliance areas throughout 2025, including enforcement dates, reporting deadlines and changes to regulations.  

Track the latest data protection updates from authorities around the world using Scanner, Lexology PRO’s automated regulatory monitoring tool.

Stay up to date with key developments and in-depth articles by following Lexology’s AI and Employment hubs.