AI-powered “bossware” offers enhanced productivity insights and can reduce managers’ workload, but may also compromise employee trust and legal compliance – how can companies strike the right balance?
Key takeaways
- Almost two-thirds of US companies use AI for performance evaluation, with remote work behind the rise.
- AI monitoring tools give companies granular productivity insights – but there are legal and ethical risks.
- New regulations impose strict compliance requirements on AI “bossware,” while systems monitoring employees at home risk major privacy violations.

The continued prevalence of remote working has left companies grappling with how to balance monitoring productivity with employees’ rights.
So-called AI “bossware” is becoming increasingly sophisticated, with remote working creating demand for more digital surveillance. According to VPN service provider ExpressVPN, 61% of US employers use AI to evaluate employee performance.
AI monitoring tools offer granular insights and analysis of employees’ activity patterns. They can flag unusual behaviour and conduct sentiment analysis – reportedly, some software even detects whether workers include the word “union” in their emails.
Amazon was accused in May 2024 of deploying “intrusive algorithms” and other workplace surveillance to monitor and deter union-organising activity. The company was subsequently found to have violated workers’ rights under Section 7 of the US National Labor Relations Act 1935.
Walmart, Starbucks, Delta and Chevron are all reportedly using AI to monitor employee communications, raising alarm about whether workforce monitoring has become too invasive.
Lexology PRO considers the ethical and legal implications of AI-powered workplace monitoring, and how companies can use these tools while respecting privacy law and employees’ rights.
The rise of AI-powered surveillance tools
Remote workforce monitoring tools are not new, but they are on the rise – 80% of companies now monitor their remote or hybrid workers, according to the Massachusetts Institute of Technology. The latest generation of AI-powered systems offers far more in-depth insights and analysis, including:
- enhanced activity tracking;
- sentiment analysis (monitoring for signs of stress or dissatisfaction);
- automatically generated performance reports or productivity “scores”;
- facial recognition and biometric monitoring to track attendance and engagement;
- monitoring for unusual behaviour or policy violations, such as employees accessing prohibited websites; and
- automated scheduling and task allocation.
Risks for employers
Increased functionality carries additional risks, from eroding trust and morale in the workplace to breaches of privacy law. While the far-reaching capabilities of AI monitoring tools offer companies increased oversight and have the potential to reduce managers’ workload, these benefits are offset by potential legal and ethical implications.
Reduced employee trust
Excessive use of automated surveillance tools can leave employees feeling scrutinised, stressed and distrustful. This has implications for recruitment and retention. Almost half of employees would consider leaving their job if surveillance increased, while 24% would be willing to accept a pay cut to avoid it, according to ExpressVPN.
Employees may also take steps to evade surveillance, such as by using tools to fake productivity, further undermining the relationship between workers and their employers.
Breaches of privacy law or employees’ rights
AI monitoring tools can collect vast amounts of data on employees, including sensitive biometric data. This carries additional privacy and security requirements under the UK and EU General Data Protection Regulations (GDPR), as well as other privacy laws.
Using monitoring tools that access employees’ microphones or webcams may be particularly high-risk. Companies risk violating employees’ rights under Article 8 of the European Convention on Human Rights if they capture images or sound recordings from within their homes without explicit consent.
Increased scrutiny under emerging laws
New and proposed AI laws include specific provisions to address workplace monitoring, bringing increased legal scrutiny upon companies using these tools.
The EU AI Act 2024 imposes strict requirements on the use of “high risk” AI systems, including models that “evaluate performance and behaviour” in a workplace context, as well as those “used for emotion recognition.”
In the US, New York City’s Local Law 144 requires “bias audits” for automated employment decision-making tools, while the Colorado AI Act mandates “fairness and transparency” in the use of high-risk automated decision-making systems, including in employment contexts.
AI analysis may be flawed
AI-generated performance scores or reports may present an incomplete picture. AI systems don’t always account for mitigating factors that could impact an employee’s performance. This creates a risk of discrimination – for example, if a monitoring tool unfairly penalises employees whose work is affected by physical or mental health conditions.
Inaccuracy may be more prevalent for novel AI tools, such as those claiming to interpret people’s emotions or level of satisfaction with work.
Key compliance tips
There are steps companies can take to help protect employees’ rights and privacy when carrying out AI-powered remote workforce monitoring.
Be transparent
Internal policy documents should clearly explain how employees are monitored, in what settings, why, and how the information is being used. The company’s privacy policy should also be updated to account for remote monitoring, so it contains all information required under the relevant data protection laws.
Under the GDPR, if employees’ consent to monitoring has not been obtained, companies will need to establish a different legal basis, such as legitimate interests. Employees’ explicit consent is required for any monitoring on their personal devices.
Employees should be informed about their rights to contest significant decisions about their employment made using AI.
Ensure proportionality
To avoid accusations of breaching employees’ privacy, companies must ensure their surveillance practices are necessary and proportionate to achieve the specified aim – this is a legal requirement under GDPR.
Companies must adhere to data and purpose limitation, ensuring their monitoring systems collect the minimum amount of data possible, and that information is only used for the reason specified.
Conduct regular compliance audits
Companies should carry out regular compliance audits to ensure their surveillance practices comply with data protection and employment laws. For instance, impact assessments are required under GDPR for any “monitoring that is likely to result in a high risk to the rights of workers.”
Certain forms of AI-powered surveillance may also trigger additional requirements under new AI regulations, such as the need for risk assessments or “bias audits” for high-risk processing under EU and US laws.
Establish robust protections for sensitive data
Companies must ensure robust security measures are in place to protect sensitive personal data collected by monitoring systems. This should include implementing appropriate access controls, encryption and pseudonymisation where appropriate.
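As a practical illustration of pseudonymisation, the sketch below replaces a direct employee identifier with a keyed hash before activity records are stored. This is a minimal example, not a compliance recommendation; the function name, key handling and record shape are illustrative assumptions, and it uses only Python’s standard `hmac` and `hashlib` modules.

```python
import hmac
import hashlib

def pseudonymise(employee_id: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    Unlike a plain hash, the keyed approach means the mapping cannot be
    rebuilt without the secret key, which should be stored separately
    from the monitoring data (e.g. in a secrets manager).
    """
    digest = hmac.new(secret_key, employee_id.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()

# Hypothetical usage: store activity data against the pseudonym,
# never the raw identifier.
key = b"example-secret-key"  # illustrative only; load from secure storage in practice
record = {
    "user": pseudonymise("jane.doe@example.com", key),
    "active_minutes": 412,
}
```

Because the same input and key always yield the same pseudonym, records for one employee can still be linked for legitimate analysis, while re-identification requires access to the separately held key.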