AI at work: five employment contract blind spots companies must fix

Updated as of: 31 July 2025

As AI becomes increasingly embedded in everyday workflows, businesses should update employment contracts to reflect emerging legal and ethical risks.

Key takeaways

  • Companies face growing legal exposure from AI use without clear contractual boundaries.
  • AI errors and misuse create liability issues for companies.
  • Clear AI policies and staff training are essential to ensure responsible use. 


AI is rapidly transforming how we work. While these tools can improve efficiency, they also create legal blind spots – particularly when employees use AI without clear contractual boundaries or oversight.

The absence of contractual clarity around the use of AI in the workplace – including ownership of AI-produced work, liability for errors, and employees’ right to be informed about how businesses use AI – exposes employers to growing risks concerning intellectual property, data protection, misconduct and discrimination.

“Many employment contracts and workplace policies are not yet adapted to reflect AI-related risks. It is important that employers grapple with these issues early and make appropriate changes before problems emerge,” says Chloe Themistocleous, partner at Eversheds Sutherland.

Lexology Pro outlines five critical areas where employment contracts must evolve to address emerging AI-related risks.

Clarify ownership rights for AI-generated work

As more employees and contractors rely on AI tools to write, code or create content, the question of who owns the resulting intellectual property remains a legal grey area. Global copyright laws are inconclusive about whether AI-generated work qualifies for copyright protection.

Generally, employers own work created by employees in the course of employment. But businesses might not own AI-assisted output unless contracts say so explicitly. Employment and contractor agreements should clearly state that all AI-generated or AI-assisted work belongs to the employer.

Contracts with external AI vendors or platforms should also include clauses assigning ownership of this material to the employer to avoid potential disputes.

Establish accountability for AI errors and misinformation

AI systems are known to generate false or misleading information – so-called ‘hallucinations’ – which can undermine professional integrity and regulatory compliance.

Several lawsuits against lawyers who cited fictitious AI-generated case law in legal submissions serve as cautionary tales for other industries about the inherent risks of AI hallucinations. For example, in Ayinde v London Borough of Haringey (2025), the UK High Court warned that submitting misleading AI-generated legal content without verification breaches professional standards and could amount to negligence in future cases.

To reduce the risk of disputes, businesses should include disclaimers in their AI policies about inaccuracies, encourage employees to be transparent about their AI use, and instruct employees not to rely on AI output without adequate human review.

“It is often unclear who is legally responsible when AI systems malfunction or produce unfair or inaccurate outcomes – the supplier, the employer, or both. Where third-party AI tools are used, employers must clarify in contracts or policy documents who bears responsibility for errors, and ensure this aligns with supplier contracts,” says Dr Nathalie Moreno, partner at Kennedys Law.

Safeguard confidential data from AI training

AI tools learn from user inputs, posing significant risks when those inputs include confidential or personal data. Employees using generative AI may accidentally feed sensitive information to third-party systems that could repurpose it without permission. 

Everyday workplace tasks can easily turn into hidden compliance risks. For example, Samsung employees inadvertently leaked sensitive source code and confidential meeting recordings when they used ChatGPT for coding assistance and transcription tasks. 

“If data is mishandled or used without proper safeguards, it can lead to breaches of data protection laws,” says Eversheds’ Themistocleous.

The problem is exacerbated by widespread ‘shadow AI’ – employees’ use of unapproved AI tools that bypass organisational oversight and safeguards. Nearly half of all knowledge workers secretly use AI tools that have not been authorised by their employers, according to a February 2025 survey by Software AG. The practice makes it difficult for management to monitor or control data flows.

To mitigate risks, companies should establish clear AI usage policies specifying approved tools and prohibiting personal or confidential data input into public AI systems. This enables businesses to leverage the benefits of AI tools while maintaining strong confidentiality standards.

Providing regular and comprehensive staff training on using AI and the potential risks involved is also crucial to prevent accidental data exposure and comply with data protection regulations.

Combat AI-enabled harassment and workplace misconduct

Employees can misuse AI tools to generate inappropriate messages, offensive content or even harmful deepfakes. These incidents can cause significant psychological and reputational harm to victims and may erode workplace culture.

In the US, online platforms face new obligations to respond to the publication of AI-generated deepfakes and nonconsensual intimate images under the Take It Down Act (TIDA), passed in April 2025. A recent case in California – Carranza v City of Los Angeles – suggests that even employers not covered by TIDA could risk costly damages if they fail to take prompt remedial action against the distribution of harmful images.

As legal frameworks evolve, employers should update disciplinary procedures to include AI-related behaviour, revise employee handbooks on respectful communication, and monitor emerging risks like AI-generated harassment or content manipulation.

Ensure transparency in automated hiring and evaluation processes

AI algorithms used in recruitment – to screen CVs, for example – can perpetuate or amplify discriminatory practices, exposing employers to claims under the Equality Act 2010 or EU anti-discrimination law.

Research by the University of Melbourne highlights that AI hiring systems often reflect the biases of the limited datasets they are trained on. For example, Amazon famously scrapped a recruitment tool after finding it favoured male candidates over female ones. 

Employers must ensure non-discriminatory recruitment decisions at every stage of the hiring process. Over-reliance on automated HR decision-making can expose businesses to legal and financial consequences, as seen in lawsuits against iTutorGroup and Workday over algorithmic discrimination.

There is also a discrimination risk when AI is used for redundancy scoring, performance evaluations and employee monitoring. It’s important for AI to support, not replace, human judgment in high-stakes employment decisions.

“Many AI systems operate as ‘black boxes,’ making it difficult to explain how decisions are made. This lack of clarity can undermine employee trust and lead to negative tribunal decisions where a business is not able to explain reasons for dismissals or redundancy scoring,” says Themistocleous.

Under the UK General Data Protection Regulation (UK GDPR) and the EU GDPR, automated decision-making and profiling are already subject to tight controls, and those controls are set to strengthen further. The EU AI Act, whose obligations for “high-risk” HR AI systems apply from 2 August 2026, means compliance teams must prepare for stricter requirements.

Employers must disclose when and how AI is used in decision-making to candidates and employees. Companies should assure employees that final decisions rest with human reviewers and that they can request and challenge outcomes influenced by automated systems.

“Employees may not be aware that AI is being used to assess them or make decisions affecting them. This can raise fairness, consent, and transparency issues under data protection law. Employers need to update privacy notices to cover AI processing explicitly, including the logic, purpose and consequences of such processing, as required under GDPR and if in the EU, in accordance with European case law,” says Kennedys' Moreno. 

Stay up to date with key developments and in-depth articles by following Lexology’s employment and labour research hub.

See our new interactive Compliance Calendar for key deadlines and dates in core compliance areas, including enforcement dates, reporting deadlines and changes to regulations.  

For more information on how to address employment issues, take a look at Lexology PRO’s practical resources page which includes checklists and how-to guides.