Introduction
This guide will assist in-house counsel, private practice lawyers, and human resource professionals in understanding how AI can be used to enhance diversity, equity, and inclusion (DEI). Organizations that have strong DEI credentials are more likely to win top talent, meet the needs of different customer bases, and respond to the challenges of an increasingly diverse, and continuously changing, business environment.
This guide covers:
- Overview of AI and DEI
- Potential applications and benefits of using AI to improve DEI
- The use of AI in recruitment and onboarding
- Potential risks of using AI to improve DEI
- Relevant laws
- Audits and reviews
This guide can be used in conjunction with How-to guides: How to deliver workplace diversity training, How to comply with legal developments relating to Diversity, Equity and Inclusion, and Risks and liabilities of AI algorithmic bias; Checklist: Steps to mitigate risks associated with AI use in business; and Quick views: Key AI terms and Overview of AI in business.
Section 1 – Overview of AI and DEI
AI can be a great asset to organizations in many ways. The type of AI that could or should be used depends on the nature of the business and the task to be performed.
1.1 Definition of AI
AI is a branch of computer science devoted to developing data processing systems that perform functions normally associated with human intelligence, such as reasoning, learning, and self-improvement. This imitation of human behavior ranges in complexity from performing simple tasks to mimicking complex problem-solving skills. Within that definition there are distinct types of AI, differentiated by their capabilities and the tasks they can perform. Over the last few decades, AI has been broken down into several subtypes, each with its own unique features.
For further information about different types of AI and how they may be used by organizations, see Quick view: Overview of AI in business (USA).
1.2 Definition of DEI
Diversity, equity, and inclusion are corporate values that promote a well-rounded workforce, where all staff and employees are treated fairly and equally.
Diversity refers to a workforce made up of people who have a range of identities, perspectives, skills, and styles. It can mean having employees of different races, religions, status as veterans, disabilities, and more. Equity refers to an organization that has programs and policies that are implemented and operated impartially and fairly, with a goal of ensuring that employees are treated equally. Inclusion refers to the efforts made by the organization to guarantee that all employees feel safe and supported at work.
The landscape of DEI initiatives is undergoing significant and rapid transformation across the US and it is therefore crucial to pay close attention to these changes to stay updated and ensure compliance.
For further information about DEI, see How-to guide: How to comply with legal developments relating to Diversity, Equity and Inclusion (USA).
Section 2 – Potential applications and benefits of using AI to improve DEI
Organizations can use AI in several ways throughout their DEI programs and in particular in relation to recruitment and onboarding.
2.1 Types of AI
Of the various types of AI, the two that have the greatest utility in enhancing the DEI efforts of organizations are machine learning and predictive analytics.
2.1.1 Machine learning algorithms
Machine learning algorithms can replace parts of the recruitment process that can be very menial and repetitive, such as screening resumes of potential candidates. Machine learning AI can analyze vast amounts of data at once to make an overall assessment. A system could look at an applicant’s experience, education, and skills and assess how effective they would be in the role.
Machine learning can also be useful in developing orientation and training programs for new hires and existing employees. The system can track employee performance and engagement, allowing an organization to assess the effectiveness of its programs. For example, if no employees submit questions during a training program, that silence could mean the material was presented with sufficient clarity. But if an end-of-program assessment shows that few employees understood the material, the lack of questions instead points to low employee engagement, a presentation too unclear to prompt questions, or material that was not well suited to the employees who received it.
2.1.2 Predictive analytics
Predictive analytics AI uses technology to look at past patterns and make assessments about what will happen in the future. Because AI can analyze more data than humans can, it can be used to study previous employee history, prior job posting descriptions, current organization needs and the organization’s growth potential. This can allow organizations to develop recruitment plans based on those analytics. If hiring practices have historically been racially biased, for example, the system will catch that patterned behavior and inform the organization of it, thereby allowing the organization to correct itself and look for candidates based on job qualifications only.
2.2 Use of AI
Some of the general areas relating to DEI that AI can assist with are set out below. See section 3 for detailed information about how AI can be used in relation to recruitment and onboarding.
2.2.1 Communications
AI can be used to draft and send various communications to employees and job applicants throughout the hiring process and their employment.
Communications can be structured to minimize the unconscious bias that often finds its way into corporate communications, even without any malicious intent on the part of the person making the communication. The structure could be as simple as noting when a person has indicated their preferred pronoun on an email and remembering to use those pronouns in any communication with that person.
A system can also be trained to watch for biased language or even the timing of the communication (eg, scheduling an ‘employee appreciation luncheon’ at noon during Ramadan would exclude Muslim employees). Of equal importance to the drafting of a communication is the ability of AI to gauge engagement with the communication. AI can be trained to collect data on engagement (eg, how many employees read an email) and can also analyze feedback. Analyzing engagement will help develop future communications.
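As a toy illustration of the biased-language screening described above, the sketch below scans a draft communication against a small list of potentially exclusionary terms. The word list, the suggested alternatives, and the sample draft are all invented for this example; a production system would rely on a far richer model and human review.

```python
# Toy illustration (not a production tool): scan a draft communication for
# terms that a DEI review might flag as potentially exclusionary.
import re

# Hypothetical term list; a real deployment would be curated and reviewed.
FLAGGED_TERMS = {
    "guys": "consider 'everyone' or 'team'",
    "manpower": "consider 'staffing' or 'workforce'",
    "chairman": "consider 'chair' or 'chairperson'",
    "native english speaker": "consider 'fluent in English'",
}

def flag_terms(text: str) -> list[tuple[str, str]]:
    """Return (term, suggestion) pairs found in the text, case-insensitively."""
    found = []
    lowered = text.lower()
    for term, suggestion in FLAGGED_TERMS.items():
        # Whole-word match so 'chairman' does not fire on unrelated substrings.
        if re.search(r"\b" + re.escape(term) + r"\b", lowered):
            found.append((term, suggestion))
    return found

draft = "Hey guys, we need more manpower for the launch event."
for term, suggestion in flag_terms(draft):
    print(f"flagged: '{term}' -> {suggestion}")
```

A real system would pair this kind of flagging with context-aware suggestions and always leave the final wording choice to a human.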
2.2.2 Data analytics
Organizations tracking the effectiveness of their DEI initiatives can use AI to do much of that tracking for them.
Data analytics may also be used to detect bias towards or among current employees. Unconscious bias may cloud judgments that, to an unbiased observer, would appear discriminatory. Relying on one or two male employees to give presentations at technical conferences may not appear to be the result of bias, but when it is learned that female employees with equivalent qualifications are never asked to present, the effects of an unconscious bias may be clear.
2.2.3 Accessibility
One powerful use of AI in the workplace is making corporate workflows and materials more accessible to people with disabilities. Tasks that would normally require hours of effort and resources can be done quickly with AI. For example, if an employee with a sight disability needs assistance with their work, AI technology – like a screen reader – can help them access the materials they need and interact better with their colleagues. Other forms of technology that can further accessibility include speech recognition, image recognition software, and alternative communication tools.
2.2.4 Translation
Transnational organizations with large workforces in many different countries need to communicate in many languages. Providing materials in only one language, such as English, may be possible, but it creates a language barrier that does not foster inclusivity: non-Anglophone employees receive the implicit message that they must conform. Creating multiple language versions removes this barrier. AI translation systems, such as Google Translate or Microsoft Translator, can translate materials and conversations quickly and accurately, allowing organizations to hold meetings where people of all languages and cultures are able to communicate with and understand each other.
When using AI for translations, organizations should be careful that the translation is not culturally insensitive or incorrect.
2.2.5 Training
While DEI training can be done in person with lectures and seminars provided by the organization, another available resource is training sessions where AI provides simulations of real-world scenarios. Opportunities to glimpse something closer to reality give people a stronger perspective on others’ lived experiences, which in turn creates deeper conversations and safer, more mindful work environments. Organizations should be careful not to eliminate human interaction entirely when incorporating AI into DEI training. Corporate DEI initiatives are strongest, and create the healthiest environments, when human connection and AI virtual realities are used together. If an organization requires AI training but does not allow space for conversation, those DEI lessons may be quickly forgotten and rendered useless because there was no follow-up with human connection.
2.2.6 Analyzing employee data
AI can take in information about the specific needs of each employee and create programs and informational materials that help them specifically. If each employee’s specific needs are being met both in skill development and learning preferences, the workforce as a result will be more engaged and productive.
Section 3 – The use of AI in recruitment and onboarding
AI can have its most significant DEI-related impact in relation to recruitment and onboarding.
3.1 AI in recruitment
AI has many uses that can impact DEI in recruitment processes.
3.1.1 Bias detection
AI analysis of large amounts of data can expose hiring practices that discriminate against marginalized groups of people. To ensure AI is equipped to detect bias, employers should regularly audit and monitor the AI’s training data for human biases that could affect the results. Assuming the programming is non-biased and neutral, the AI can then provide insights into applicant pools and the organization’s hiring practices, exposing hidden biases that the organization can then work to correct.
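One widely used screen for the kind of hidden hiring bias described above is the EEOC’s ‘four-fifths rule’: a selection rate for any group that falls below 80% of the highest group’s rate is commonly treated as evidence of adverse impact. The sketch below applies that rule to hypothetical screening outcomes; the group labels, counts, and the decision to flag at exactly 0.8 are illustrative assumptions, not legal advice.

```python
# Hedged sketch of an adverse-impact check using the EEOC "four-fifths rule":
# a group's selection rate below 80% of the highest group's rate is commonly
# treated as evidence of adverse impact. The applicant/hire counts are invented.
def selection_rates(counts: dict[str, tuple[int, int]]) -> dict[str, float]:
    """counts maps group -> (applicants, selected); returns selection rates."""
    return {g: selected / applicants for g, (applicants, selected) in counts.items()}

def four_fifths_violations(counts, threshold=0.8):
    """Return groups whose rate ratio vs the best-performing group is below threshold."""
    rates = selection_rates(counts)
    best = max(rates.values())
    return {g: round(r / best, 3) for g, r in rates.items() if r / best < threshold}

# Hypothetical screening outcomes from an AI resume filter:
outcomes = {"group_a": (200, 60), "group_b": (180, 27)}
print(four_fifths_violations(outcomes))  # group_b rate 0.15 vs group_a 0.30
```

A flag from a check like this is a prompt for human investigation of the screening tool, not a conclusion in itself.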
For further information, see How-to guide: Risks and liabilities of AI algorithmic bias (USA).
3.1.2 Improving the inclusivity of language in job descriptions
The language of a job posting can be unintentionally exclusionary and discourage or prevent certain groups of people from applying. For example, requiring a ‘native English speaker’ will exclude applicants who are fluent or proficient in English but are not native speakers. AI, however, can be used to create more inclusive and appealing job descriptions. Drawing on data from a wide variety of sources and examples, AI can write neutral, culturally sensitive job descriptions that attract the widest possible candidate pool.
3.1.3 Finding candidates from underrepresented groups
AI can use a filter to bring forward candidates from underrepresented groups. If an organization is struggling to bolster its DEI efforts in hiring, AI can be used to sift through applicants so that the organization can view candidates that will assist with the organization’s DEI initiatives. For example, if an organization historically has hired more men than women, it can filter the applicant pool to bring forward applicants who use the pronouns ‘she’ and ‘her,’ or those who use gender neutral pronouns.
3.1.4 Analyzing data
AI can be used to measure the amount of time it takes to fill positions, the cost of that time and the resources needed to hire, and where the applicants for positions are coming from. Information about the hiring process allows organizations to identify where their inefficiencies are and to get a broad view as to whether they are pulling applicants from a variety of sources. If the data shows inefficiency or a limited pool of applicants, the organization can respond accordingly. For example, if data shows that an overwhelming number of applicants are coming from LinkedIn, the organization can spend more time boosting job postings elsewhere online or through staffing companies to make sure they are pulling candidates from more than one source.
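The source-tracking idea above can be sketched in a few lines: count applicants by channel and flag any channel supplying a disproportionate share. The source names, counts, and the 70% cutoff are arbitrary assumptions chosen for illustration.

```python
# Toy sketch: summarize where applicants come from and flag over-reliance on
# a single source. Source names, counts, and the cutoff are invented.
from collections import Counter

def source_shares(applications: list[str]) -> dict[str, float]:
    """Map each source to its share of all applications, largest first."""
    counts = Counter(applications)
    total = len(applications)
    return {src: n / total for src, n in counts.most_common()}

def dominant_sources(applications, cutoff=0.7):
    """Sources supplying more than `cutoff` of all applicants (an arbitrary flag)."""
    return [s for s, share in source_shares(applications).items() if share > cutoff]

apps = ["linkedin"] * 75 + ["referral"] * 15 + ["job_fair"] * 10
print(dominant_sources(apps))  # ['linkedin'] -- 75% of applicants
```

In practice the same tally would feed a broader dashboard alongside time-to-fill and cost-per-hire metrics.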
3.2 AI in employee onboarding
An important part of improving DEI efforts and initiatives is ensuring that all employees feel welcomed in their new work environment. AI can be used in several ways to streamline onboarding and help maintain DEI efforts during this early employment period.
3.2.1 Personalized onboarding
AI can personalize onboarding to each employee’s needs, role, and prior experience. AI can take the organization’s onboarding requirements and put them into a process that suits the employee’s learning style and the specific skills they need to develop to be successful in their new position. This allows the employee to have a better experience because of the inclusivity the personalization brings, and the organization benefits because the employee is as well prepared as possible. For example, an AI system can ensure that personalized onboarding covers any specific requirements that a disabled employee may need.
3.2.2 Chatbots
Early conversations during the onboarding process may create uncomfortable or inappropriate situations due to human biases. For example, in some parts of the US asking a person what church they go to is regarded as harmless small talk, whereas to some this may be considered an overly personal question. To combat this type of unintentional bias, organizations can use AI chatbots not only to save time and resources in handling early onboarding conversations, but also to ensure those conversations use neutral language.
3.2.3 Automated paperwork
AI can also assist with onboarding by automating paperwork. For example, within an onboarding portal AI can list the necessary paperwork a new employee needs to complete. The portal can receive the completed paperwork and file it automatically, so that the employee can immediately proceed with onboarding. When there is no wait between document exchanges between employers and new hires, the onboarding process moves much more quickly and smoothly, and the new hire can become fully integrated into the organization as soon as possible.
3.2.4 Data-driven check-ins
Organizations can continuously improve their onboarding systems by using AI to constantly assess the system and its processes. At various points during onboarding, new hires can be presented with surveys or quizzes to determine what the experience is like for them and if they are receiving the information they need. If the new hires need any further assistance with the onboarding, AI can assist the process by scheduling meetings with managers or human resources without a third party involved. Through these check-ins, organizations can search for recurring issues with their onboarding system and correct them as soon as possible.
3.2.5 Nuanced feedback
One of the great benefits of DEI programs is the range of perspectives and experiences that come with diversity. Just as each new hire will have different perspectives and experience, each new hire will also have different needs. AI can help meet those needs by providing personalized and detailed feedback during and after the onboarding. This feedback could include areas such as soft skills – like flexibility, critical thinking, and social awareness – that could be improved upon, or suggestions to retake assessments for specific tasks.
3.2.6 Learning paths
AI can be programmed to create learning paths for each employee based on their own interactions and responses to the onboarding. By assessing their skills and abilities, AI can identify areas for development. This helps the new hires learn in a well-rounded manner at their own pace and become better integrated into the organization and their role.
3.2.7 Translate content and create quizzes
To make educational DEI content available to all employees across the globe, including employees who speak different languages, AI can be used to translate assessment and educational materials quickly and efficiently into whichever languages are required.
AI can also help by creating questions and simulations for interactive onboarding content such as polls and quizzes, based on the prompts and programming from the organization. When the interactive content is completed, AI can be used to grade the responses and provide feedback for improvement.
Section 4 – Potential risks of using AI to improve DEI
Using AI in DEI initiatives has the great potential benefit of reducing or wholly eliminating biases. However, because AI is programmed by humans, there are also risks involved with using AI in this way.
4.1 Perpetuating bias
There is a risk that AI will perpetuate biases. Because AI is technology created by and programmed by humans, there are ways in which human errors and biases are reflected in the programming itself. If an organization does not take steps to mitigate this, it may be exposed to large blind spots where AI is used as an excuse for not recognizing DEI issues present in hiring or daily operations practices.
For example, it was reported in 2018 that Amazon had stopped using its AI hiring model because of biases it found within the AI programming. The AI had been programmed to find patterns in Amazon’s hiring practices over the previous 10 years, which for Amazon meant largely male hires. As a result, any resume that included the word ‘women’ was penalized and ranked lower than those of men. Amazon tried to reprogram the AI using neutral terms, but even that was not enough to eliminate the biases, and Amazon stopped using the system entirely.
In 2024, researchers at the University of Washington tested a variety of commercial large language model (LLM) screening tools. The researchers found that the LLM screening tools preferred white-associated names 85% of the time over resumes with non-white-associated names. In addition, the study found that male-associated names were preferred 52% of the time over female-associated names. Ultimately, organizations must remember that AI is a tool, not a replacement for human oversight and decision-making.
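Audits of the kind the study above describes often use name substitution: score otherwise-identical resumes that differ only in the candidate name, and measure how often one variant is preferred. The sketch below illustrates the measurement; `score_resume` is a deliberately crude stand-in for whatever screening model is under audit, and the templates and names are invented.

```python
# Hedged sketch of a name-substitution ("counterfactual") audit: score
# otherwise-identical resumes that differ only in the candidate name, and
# measure how often one name variant is preferred.
def score_resume(text: str) -> float:
    # Placeholder for the screening tool under test. This stub is deliberately
    # name-sensitive (it scores by length) so the audit has something to detect.
    return float(len(text))

def preference_rate(resumes: list[str], name_a: str, name_b: str) -> float:
    """Fraction of resume templates on which variant A outscores variant B."""
    wins = sum(
        score_resume(t.format(name=name_a)) > score_resume(t.format(name=name_b))
        for t in resumes
    )
    return wins / len(resumes)

templates = ["{name}, 5 years in sales", "{name}, BS in CS, 3 years at a startup"]
print(preference_rate(templates, "Alexander", "Amy"))  # 1.0: the stub always favors the longer name
```

A name-insensitive model should produce a rate near 0.5 over many templates; large deviations, like the 85% figure reported in the study, indicate name-based bias worth investigating.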
For further information about the risks associated with AI, see How-to guide: Understanding AI-driven risks (USA).
4.2 Workforce transformation
AI could allow some work to be completed automatically, creating redundancies in staffing and subsequent layoffs. If this should occur, the organization should be careful to track whether any group of people is being disproportionately laid off as a result. For example, in many traditionally male-dominated industries, seniority-based layoffs tend to affect women and people of racial and ethnic minorities because they tend to have less seniority. If most of the people being let go are of a specific sex, nationality, religious affiliation, or other protected characteristic, this is a red flag for the organization. Alternatives to consider include retraining potentially redundant people for a new role, or reassessing hiring practices for future positions.
Section 5 – Relevant laws
When using AI to enhance DEI, organizations need to be mindful of both federal and state laws regarding data protection, employment discrimination, and other laws specific to the use of AI. Because there is a lack of federal regulation in most of these areas, organizations should ensure that they check individual state laws and are compliant wherever applicable.
5.1 Data protection regulations
The United States does not have any single set of data protection laws at a federal level. Instead, there are a handful of subject-specific federal laws regarding data protection. The laws that generally apply to all organizations are the Electronic Communications Privacy Act (ECPA, 18 USC 2510–2523), the Fair Credit Reporting Act (FCRA, 15 USC 1681–1681x), the Fair and Accurate Credit Transactions Act of 2003 (FACT, 15 USC 1681–1681x), and the Americans with Disabilities Act (ADA, 42 USC 12101–12213). Other federal privacy laws, such as the Health Insurance Portability and Accountability Act of 1996 (HIPAA), do not generally apply to all employers but apply to organizations working in a particular industry or business.
For example, the FCRA and the FACT work in tandem to protect the accuracy of consumers’ credit-related records. These laws apply to employers when the organization uses a third party to conduct background checks on the applicants or employees and uses that information to make decisions.
Additionally, organizations must comply with consumer protection laws that have been enacted at a state level. There are currently 20 states with data privacy laws: California, Colorado, Connecticut, Delaware, Florida, Indiana, Iowa, Kentucky, Maryland, Minnesota, Montana, Nebraska, New Hampshire, New Jersey, Oregon, Rhode Island, Tennessee, Texas, Utah, and Virginia. For example, the California Consumer Privacy Act (CCPA, CA Civ Code 1798.100–1798.199.100 (2022)) is one of the strictest consumer privacy laws in the USA. It requires organizations to inform applicants and employees who reside in California about the type of personal information the organization will collect and how that information will be used. In addition, the CCPA was amended in July 2025 to require businesses that use automated decision-making technology (ADMT) for ‘significant decisions’ (eg, education, employment, healthcare, financial services, or housing) to provide ‘pre-use notice’ to consumers. The notice must disclose the data used in decision-making and inform consumers of their rights. The new regulations are expected to take effect in 2027.
Note that there are no exceptions or special provisions in these laws for data collection or retention by AI systems.
For further information about data privacy laws in the USA, see Panoramic: US Data Protection and Privacy (state-by-state).
5.2 Anti-discrimination legislation
Any use of AI in recruitment and onboarding must be non-discriminatory and compliant with Title VII of the Civil Rights Act (42 USC 2000e–2000e-17), which protects against discrimination based on race, color, religion, sex, and national origin. All actions an organization takes in programming, using, and relying on AI must comply with Title VII and other federal anti-discrimination laws, such as the Age Discrimination in Employment Act (ADEA). For example, in 2022 the US Equal Employment Opportunity Commission sued iTutorGroup Inc for age discrimination after the company used AI to filter out applications from women over the age of 55 and men over the age of 60. The lawsuit resulted in a settlement with the EEOC in 2023.
For further information about discrimination laws in the USA, see Checklist: Developing an Equal Employment Opportunity Commission (EEOC) compliant policy (USA).
5.3 Proposed AI Bill of Rights
In October 2022, the Biden Administration issued a proposed AI Bill of Rights: a set of non-binding guidelines published by the White House setting out five principles to guide the safe use of AI. The five principles were:
- safe and effective systems;
- algorithmic discrimination protections;
- data privacy;
- notice and explanation; and
- human alternatives, consideration and fallback.
The proposed Bill of Rights states that AI systems should be created and tested for safety and soundness before being used, so that individuals are protected from undue risks. Any programming for AI should be monitored and assessed for discriminatory actions and rectified immediately if any are found. Gathered data should be handled safely and the individual should have the right and ability to know about that collection and have some degree of agency over it. Individuals should also know that they are interacting with an automated system and should be able to receive an explanation as to how that AI works and how it is used. In appropriate situations, an individual should be able to receive a human alternative to AI use and receive that alternative in a timely manner.
The Bill of Rights was rescinded by the Trump Administration on January 23, 2025. In its place, the new Administration issued an Executive Order entitled ‘Removing Barriers to American Leadership in Artificial Intelligence.’ This Order ‘revokes certain existing AI policies and directives that act as barriers to American AI innovation,’ but does not identify the policies and directives that may be impacted.
The US is actively pursuing AI safety and security through several other key initiatives. The Center for AI Standards and Innovation (CAISI), a unit of the National Institute of Standards and Technology (NIST), is leading pre-deployment testing of advanced AI models, working in collaboration with private developers to advance AI safety science. Notably, the CAISI has entered into agreements with major AI firms to conduct specially focused testing aimed at identifying and mitigating risks, particularly for general-purpose AI models that present potential national security implications. Complementing this testing, NIST has published Version 1.0 of its AI Risk Management Framework, which offers detailed guidelines for risk identification, assessment, and mitigation in both generative AI and dual-use models. This framework underscores safety, transparency, and accountability as fundamental principles for AI system development and deployment. In July 2024, NIST released its Generative Artificial Intelligence Profile (NIST AI 600-1), further addressing the risks associated with generative AI technologies. Furthermore, the release of the first-ever National Security Memorandum on Artificial Intelligence in 2024 established a foundation for US safety and security efforts in this domain.
5.4 Sector-specific AI laws in HR context
Some states have passed legislation specifically relevant to the use of AI in human resources and hiring. Examples include:
- New York City’s Local Law 144 (Local Law No. 144 (2021) of City of New York 5-300–5-304) – prohibits employers from using AI decision tools during the hiring process unless the tool has been properly audited.
- Illinois’s Artificial Intelligence Video Interview Act (820 ILCS 42/1–42/20) – requires an employer to obtain consent to use AI analysis during video interviews.
- Colorado’s AI Act (SB24-205) (effective February 1, 2026) – requires employers to use reasonable care to avoid algorithmic discrimination, and to provide notice to individuals about the use of high-risk AI systems.
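As a simplified sketch of the calculation a Local Law 144 bias audit reports for automated employment decision tools, the snippet below computes impact ratios: each category’s selection rate divided by the rate of the most-selected category. The category labels and counts are invented, and a real audit follows the detailed rules set by the NYC Department of Consumer and Worker Protection.

```python
# Simplified impact-ratio calculation of the kind reported in NYC Local Law
# 144 bias audits: each category's selection rate divided by the selection
# rate of the most-selected category. All counts below are invented.
def impact_ratios(selected: dict[str, int], assessed: dict[str, int]) -> dict[str, float]:
    """Selection rate per category, normalized by the highest category's rate."""
    rates = {cat: selected[cat] / assessed[cat] for cat in assessed}
    top = max(rates.values())
    return {cat: round(rate / top, 3) for cat, rate in rates.items()}

# Hypothetical counts of candidates assessed and advanced by an AI tool:
assessed = {"male": 400, "female": 350, "nonbinary_or_unknown": 50}
selected = {"male": 120, "female": 84, "nonbinary_or_unknown": 12}
print(impact_ratios(selected, assessed))
```

An audit report would publish these ratios per category (and per intersectional category), leaving interpretation and any remediation to the employer and its counsel.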
Section 6 – Audits and reviews
Audits and reviews are methods for an organization to examine its use of AI and its DEI programs to ensure they are functioning properly. These can be carried out manually or through automatic processes. Automatic audits and reviews provide consistent checking of AI systems but should be accompanied by manual review.
6.1 Digital auditing and reviewing
Digital auditing requires an organization to review its data and employment history for any accessibility issues or biases that are hidden within the AI’s programming. To do so, organizations must audit all electronic elements of their work that are touched by the AI. This may include the organization’s website and social media platforms, in addition to its employment application and operational systems.
Audits can help find any blind spots in the AI programming and enable the organization to fix errors, which will improve operations and decisions that contribute to meeting broader DEI goals.
6.2 Ethical concerns
Audits provide organizations with the opportunity to examine the ethical considerations of using AI in connection with their DEI efforts. It is possible for AI to perpetuate biases, which would be counterproductive to the organization’s DEI objectives. Allowing the AI systems to be audited for ethical concerns about bias, alongside more general concerns about data privacy, integrity, and accountability, gives organizations an opportunity to catch any concerning programming. Audits can also serve as a reminder that ethical questions require human oversight, which cannot be wholly replaced by AI.
Additional resources
Jeffrey Dastin, Insight - Amazon Scraps Secret AI Recruiting Tool that Showed Bias Against Women, Reuters.com, 11 October 2018
Annelise Gilbert, EEOC Settles First-of-Its-Kind AI Bias in Hiring Lawsuit, Bloomberg Law, 10 August 2023
Stefan Milne, AI tools show biases in ranking job applicants’ names according to perceived race and gender, Washington.edu, 31 October 2024
Tara Sophia Mohr, Why Women Don’t Apply for Jobs Unless They’re 100% Qualified, Harvard Business Review, 25 August 2014
Chen, Z., Ethics and discrimination in artificial intelligence-enabled recruitment practices. Humanit Soc Sci Commun 10, 567 (2023)
How-to guides:
Understanding AI-driven risks
Understanding the risks of negligence claims when using AI
AI and smart contracts
Risks and liabilities of AI algorithmic bias
How to deliver workplace diversity training
How to comply with legal developments relating to Diversity, Equity and Inclusion
Checklists:
De-identification of data used by AI systems
Steps to mitigate risks associated with AI use in business
Quick views:
Key AI terms
Overview of AI in business
Reliance on information posted:
While we use reasonable endeavours to provide up to date and relevant materials, the materials posted on our site are not intended to amount to advice on which reliance should be placed. They may not reflect recent changes in the law and are not intended to constitute a definitive or complete statement of the law. You may use them to stay up to date with legal developments but you should not use them for transactions or legal advice and you should carry out your own research. We therefore disclaim all liability and responsibility arising from any reliance placed on such materials by any visitor to our site, or by anyone who may be informed of any of its contents.