How-to Guide: Risks and liabilities of AI algorithmic bias (USA)

Updated as of: 31 July 2025

Introduction

This guide will help in-house counsel, private practice lawyers, and compliance personnel to understand the potential risks and liabilities that may result from algorithmic bias in artificial intelligence (AI) systems. As use of AI grows, more attention is being paid to the biases that may arise when algorithms make decisions that cause harm.

This guide covers:

  1. What is algorithmic bias?
  2. Legal risks and liabilities
  3. Detecting and mitigating algorithmic bias

This guide can be used in conjunction with the following How-to guides: Understanding AI-driven risks, Understanding the risk of negligence claims when using AI, AI and smart contracts; Checklists: De-identification of data used by AI systems, Steps to mitigate risks associated with AI use in business; and Quick views: Overview of AI in business and Key AI terms.

Section 1 – What is algorithmic bias?

1.1 Definition

The Alan Turing Institute defines algorithmic bias as ‘[u]nfairness that can arise from problems with an algorithm’s process or the way the algorithm is implemented, resulting in the algorithm inappropriately privileging or disadvantaging one group of users over another group. Algorithmic biases often result from biases in the data that has been used to train the algorithm, which can lead to the reinforcement of systemic prejudices around race, gender, sexuality, disability or ethnicity.’ The bias has the effect of ‘privileging’ one category of individuals over another even though this is not the intended function of the algorithm and there is no relevant difference between the groups that would justify it. Addressing and responding to these errors requires rigorous auditing, diverse data sets, and continuous monitoring.

1.1.1 Discrimination risks

The risks of inequality in the treatment of certain groups can stem from errors in computer systems or – increasingly – from flaws in the data selected and used to train a system, or in the algorithms developed from that data. These errors inadvertently reinforce existing social or cultural biases and disparities and risk being used to develop discriminatory practices. As these systems are increasingly used in critical decision-making (eg, hiring employees or lending to borrowers), the consequences of such biases can be far-reaching, creating commercial risks for the organization deploying them.

1.1.2 Types of bias

Pre-existing

Pre-existing biases mirror societal prejudices, which run the risk of becoming embedded into technological processes. When AI systems rely on historical data, they risk disadvantaging certain groups unless the data – and the assumptions the algorithm derives from it – are evaluated.

Technical

Technical bias arises from the way algorithms are designed or implemented, or from the data they are trained on. Technical bias does not favor certain groups or outcomes over others through explicit intent, but because of aspects overlooked during the system’s development.

Emergent

Emergent bias arises as technologies are deployed and interact with complex social dynamics. The data used to train an algorithm may, over time, become less reliable or obsolete. For example, an algorithm designed to identify married couples for marketing purposes may not have been designed to include same-sex couples as being married.

1.2 Factors contributing to bias

1.2.1 Prejudice in design: designer bias

Algorithmic bias includes gender, racial, national, and linguistic biases. While system designers may strive for an unbiased system, their implicit or explicit biases can inadvertently influence algorithms. This can lead to biased outcomes, as the algorithms are no better than the designers who develop them. For example, gender bias may result from skewed training data or preconceived ideas about certain genders, while racial bias can be due to the underrepresentation or misrepresentation of certain ethnicities in data sets.

This type of bias is not necessarily the result of a conscious decision, or a purposeful design. As the author and activist Caroline Criado Perez said in her 2019 book Invisible Women: Exposing Data Bias in a World Designed for Men, speaking specifically about gender bias, this type of bias ‘is not generally malicious, or even deliberate. Quite the opposite. It is simply the product of a way of thinking that has been around for millennia and is therefore a kind of not thinking.’ The same type of ‘not thinking’ can manifest itself in other types of bias as well: an assumption that an American is a white, English-speaking male may not be the product of a conscious effort to exclude other demographics, but the bias is still real.

Example

An AI system intended to predict behavior may be trained to make its predictions based on reactions of test subjects to certain situations. If the test subjects are from only one or two demographics – eg, white residents of the midwestern US – the system will be biased in favor of the results from that specific group. Behaviors that members of that demographic would not engage in are more likely to be labeled as ‘atypical’ or ‘abnormal.’

1.2.2 Socio-economic stereotypes

Designers’ privately held human biases toward specific socio-economic groups may creep into algorithm development. Lack of diversity among programmers may influence biases related to, for example, income levels, education backgrounds, or social status. These biases can taint and impact data interpretation and skew decision-making.

Example

The developer of an algorithm used to predict motor vehicle collision insurance risk for the purposes of setting a premium may assume that motorists living in certain less-affluent neighborhoods are more prone to make claims for collision damage, and so insureds in those neighborhoods will pay a higher premium regardless of their personal driving records.

1.2.3 Data bias

Imperfect data set

Imperfect data sets are a significant contributing factor in data bias. Underrepresentation of certain groups, or training samples skewed towards certain groups, often leads to predictions based on flawed data, which produce inaccurate conclusions and reinforce existing prejudices. Flawed data collection methods (eg, telephone surveys that reach only customers with landlines), labeling errors, or historical biases embedded in the data serve to perpetuate this bias.
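
By way of illustration only, the following minimal Python sketch compares the demographic composition of a hypothetical training set against external benchmark proportions and flags underrepresented groups. All names, data, and the 50 percent tolerance threshold are assumptions chosen for the example, not a prescribed methodology.

from collections import Counter

def representation_report(records, group_key, benchmarks, tolerance=0.5):
    """Flag groups whose share of the data falls below `tolerance`
    times their benchmark share of the relevant population.
    `benchmarks` maps group label -> expected proportion (0-1)."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    report = {}
    for group, expected in benchmarks.items():
        observed = counts.get(group, 0) / total if total else 0.0
        report[group] = {
            "observed": round(observed, 3),
            "expected": expected,
            "underrepresented": observed < tolerance * expected,
        }
    return report

# Hypothetical training records checked against census-style benchmarks.
records = [{"group": "A"}] * 80 + [{"group": "B"}] * 18 + [{"group": "C"}] * 2
print(representation_report(records, "group", {"A": 0.6, "B": 0.3, "C": 0.1}))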

Inadequate sample

Data sets that do not represent the diverse populations they aim to serve lead to algorithms that produce skewed or biased results. While no sample can be regarded as a flawless representation of the whole, a more comprehensive and diverse sample will produce algorithms that are less likely to reinforce biases in their decisions and outcomes.

1.3 Distinguishing outcomes

1.3.1 Fairness vs accuracy

Balancing fairness and accuracy in algorithmic decision-making is a trade-off. By examining how algorithms treat different groups and conducting impact assessments, organizations can strive to achieve a balance that minimizes bias while maintaining high levels of performance. Ideally, organizations should undertake these assessments before adopting AI, so they can see what the systems are doing and take steps to address anomalies.
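
As a purely illustrative sketch of such an assessment, the following Python snippet reports a model’s overall accuracy alongside its demographic parity gap (the largest difference in positive-outcome rates between groups). The data, labels, and metric choice are hypothetical assumptions; real assessments typically weigh several fairness metrics.

def impact_assessment(y_true, y_pred, groups):
    """Report overall accuracy and the demographic parity gap: the
    largest difference in positive-prediction rates between groups."""
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(y_pred[i] for i in idx) / len(idx)
    parity_gap = max(rates.values()) - min(rates.values())
    return {"accuracy": round(accuracy, 3),
            "positive_rates": rates,
            "parity_gap": round(parity_gap, 3)}

# Hypothetical predictions for two groups.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0, 1, 0, 0]
groups = ["X", "X", "X", "X", "Y", "Y", "Y", "Y"]
print(impact_assessment(y_true, y_pred, groups))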

1.3.2 Ethical concerns

The ethics of the likely outcomes should also be considered. Evaluating data inputs, model outputs, and decision processes for potential biases is essential to uphold an organization’s ethical standards and prevent discrimination. Using algorithms that are transparent, explainable, and aligned with ethical principles will build trust with users and stakeholders.

Section 2 – Legal risks and liabilities

AI that is not implemented correctly has the potential to create liabilities that could result in third-party claims and reputational damage. Assigning liability across the AI network is complex, as many different parties are involved in AI development and use. As a consequence, it can be difficult to establish what harm was caused and who is at fault.

Legal and regulatory frameworks surrounding AI are still developing, and uncertainty remains. President Biden’s Blueprint for an AI Bill of Rights recognized the problem and stated that algorithmic discrimination may violate existing legal protections against discrimination. The Blueprint was rescinded by an Executive Order issued by President Trump on January 23, 2025. Most recently, on July 23, 2025, President Trump signed another AI-related Executive Order (Preventing Woke AI in the Federal Government), whose stated purpose is to require more reliable outputs from AI and to prevent ideological biases or social agendas from being built into AI.

For additional information, see How-to guides: Understanding AI-driven risks and Understanding the risk of negligence claims when using AI; and Checklist: Steps to mitigate risks associated with AI use in business.

2.1 Bias in recruitment and financial services

2.1.1 Recruitment

AI tools have increasingly been used in hiring decisions by private companies and governmental agencies alike, from posting job openings to sending out job offers. Two reports from Harvard University – from the School of Engineering and Applied Sciences and from the Harvard Business School – indicate that 70 percent of companies, including 99 percent of Fortune 500 companies, have already incorporated AI-based tools into their hiring processes.

Examples of these tools include advertising jobs to potentially suitable candidates, chatbots or virtual assistants that screen out unqualified applicants’ resumes, software that evaluates candidates’ facial expression patterns during online video interviews conducted with preset questions, or programs that assist employers in making the right kind of job offer to an applicant. Using these automated systems is not without risk. For example, a class action suit pending in the US District Court in Massachusetts alleges that CVS Health Corporation’s use of AI-aided video-interview technology violates a Massachusetts ban on using forced lie detector tests in employment decisions. The court recently denied CVS’s motion to dismiss, which argued that the Massachusetts Lie Detector Statute does not create a private right of action to enforce its notice provisions and that the named plaintiff lacked Article III standing under the US Constitution to challenge the lack of notice (Baker v CVS Health Corp, No. 23-11483 (D Mass February 16, 2024)).

In 2018, Amazon discontinued its automated candidate screening program because the system used 10-year-old data from resumes submitted mostly by men. The AI algorithm had learned to filter out female candidates because women were (and are) underrepresented in the STEM (science, technology, engineering, and mathematics) careers and the system was developed to look for applicants with STEM backgrounds. Another example relating to recruitment might be an AI system that screened applicants based on their home ZIP codes, which could lead to a pattern of rejecting applicants of a certain ethnicity, resulting in race-based claims of employment discrimination, see Understanding Algorithmic Discrimination: How Bias Persists in AI Systems. For further information, see How-to guides: Overview of US employment law and How to investigate the social media activity of prospective employees.

Currently, there are no federal laws that specifically address discrimination by AI systems in recruitment and hiring. Some states have enacted laws and begun initiatives to address the bias issue.

State-level laws

Illinois enacted the Artificial Intelligence Video Interview Act, which was last amended in 2022. The Act governs the use of AI to analyze recorded video interviews of job applicants for positions in the state. It requires employers to:

  • give job applicants advance notice if AI will be used to analyze interviews;
  • provide an explanation of how the AI analysis works (ie, the characteristics it evaluates); and
  • obtain consent before using AI for interviews.

The Maryland Legislature enacted section 3–717 of the state’s Labor and Employment Code in 2020. That law prohibits employers from using certain facial recognition services during a job applicant’s interview, unless the applicant consents by signing a written waiver with specified provisions.

In New York City, Local Law 144 of 2021 makes it unlawful to use an automated employment decision tool (AEDT) to screen candidates or employees for an employment decision, unless (1) the AEDT is subject to an annual bias audit by an independent auditor before use, and (2) the results of the most recent bias audit and the AEDT’s distribution date are published on the employer’s website. The law went into effect on July 5, 2023.
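
The rules implementing Local Law 144 measure bias through ‘impact ratios’: a category’s selection rate divided by the selection rate of the most-selected category. The minimal Python sketch below illustrates that calculation with hypothetical screening outcomes; it is not a substitute for the independent bias audit the law requires.

def impact_ratios(selections):
    """`selections` maps category -> (selected, total_applicants).
    Returns each category's selection rate and its impact ratio
    relative to the most-selected category."""
    rates = {c: s / t for c, (s, t) in selections.items()}
    best = max(rates.values())
    return {c: {"selection_rate": round(r, 3),
                "impact_ratio": round(r / best, 3)}
            for c, r in rates.items()}

# Hypothetical screening outcomes by category.
print(impact_ratios({
    "category_1": (48, 100),  # 48% selected
    "category_2": (30, 100),  # 30% selected -> impact ratio 0.625
}))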

Other states considered similar legislation in 2024. Most of these bills died upon adjournment of the 2024 legislative sessions. In New Jersey, several bills related to AI that were introduced in 2024 remain active in 2025. Three of these bills target the use of AI in the employment context.

A 3911/S 3015 (identical bills) would regulate the use of artificial intelligence-enabled video interviews in the hiring process. Using such interviews would not be prohibited, but interviewees would have to be notified before the interview that artificial intelligence may be used to analyze the applicant’s video interview and consider the applicant’s fitness for the position. Employers who use an artificial intelligence analysis of a video interview to determine whether an applicant will be selected for an in-person interview would be required to collect and report to the Department of Labor and Workforce Development: the race and ethnicity of applicants who are and are not afforded the opportunity for an in-person interview after the use of artificial intelligence analysis; and the race and ethnicity of applicants who are offered a position or hired. Both bills remain pending in committee.

A 4888 would establish the AI and Labor Market Study Commission in the Department of Labor and Workforce Development. The Commission would coordinate the creation of AI Impact Research to assist in understanding the impact of AI on the workforce; identify high-risk jobs; create strategies to address potential job displacement; identify new job opportunities that leverage AI skills; and recommend policies and programs to assist displaced workers and support new AI-related workforce opportunities. The bill was introduced in October 2024 and remains in committee.

Many observers correctly predicted that AI would be a major topic of state legislation in 2025. In the absence of federal legislation, state legislators have continued to develop their own schemes for regulation of AI. This could result in a patchwork of inconsistent or conflicting rules for the use of AI. For a summary of recent state legislation, see Artificial Intelligence 2025 Legislation.

2.1.2 Financial services

In financial services, risks of algorithmic bias in decision-making can impact, for example, individual access to credit or insurance provision. Data scientist Rumman Chowdhury, Twitter’s former head of machine learning ethics, transparency, and accountability, and current responsible AI fellow at Harvard University’s Berkman Klein Center for Internet & Society, says that algorithmic discrimination ‘is actually very tangible in lending.’ The discriminatory practice of ‘redlining,’ in which the creditworthiness of properties was heavily determined by the racial demographics of a given neighborhood, is ‘implicitly picked up’ by AI systems used to make lending decisions. The FTC has proposed guidelines on AI usage, including for decisions that may affect a consumer’s credit.

Credit approval or scoring

Liability for the gender and racial impact of algorithmic bias in credit approval and credit scores is a critical issue. The most important outcome of a loan application for an applicant is the approval or rejection of the loan. Discriminatory outcomes resulting from biased algorithms can result in individuals who are members of certain groups being denied credit unfairly, or being offered loans on less favorable terms. This type of discrimination violates federal laws, as well as laws in most states, and can lead to legal challenges and reputational damage for the financial institutions involved. Federal laws, such as the Equal Credit Opportunity Act (ECOA) and the Fair Credit Reporting Act (FCRA), provide that lenders may not use factors such as gender or race when making credit decisions. A financial institution that uses algorithms that exhibit bias so that certain groups are disproportionately affected could face lawsuits, regulatory fines, and public backlash, including consumer boycotts.

Insurance underwriting

The impact of algorithmic bias in insurance underwriting and the potential to harm consumers and communities must also be considered. Most state laws prohibit insurers from using an applicant’s race or gender to justify underwriting decisions, but biased algorithms can lead to discriminatory pricing or coverage decisions, resulting in financial harm and unequal treatment for individuals. Often, this happens because underwriters engage in ‘proxy discrimination,’ a practice in which facially neutral qualities – a person’s occupation or education level – are used in a way that disproportionately disadvantages members of a certain protected class.

Example

In some cities, a majority of taxi drivers are immigrants from East Africa. An insurer that charges higher health insurance premiums for taxi drivers may be found to be engaged in a practice that has a disparate racial impact.
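
As a rough illustration of how proxy discrimination might be detected, the following sketch (with entirely hypothetical data) compares the share of a protected class among records sharing a facially neutral feature against that class’s share of the overall population; a large lift suggests the feature may be acting as a proxy.

def proxy_check(records, feature, protected, value):
    """Compare the share of a protected group among records with a given
    feature value against its share of the overall population. A large
    gap suggests the feature may act as a proxy for the group."""
    overall = sum(r[protected] for r in records) / len(records)
    subset = [r for r in records if r[feature] == value]
    within = sum(r[protected] for r in subset) / len(subset)
    return {"overall_share": round(overall, 3),
            "share_given_feature": round(within, 3),
            "lift": round(within / overall, 2)}

# Hypothetical data: protected = 1 if a member of the protected class.
records = (
    [{"occupation": "taxi_driver", "protected": 1}] * 60
    + [{"occupation": "taxi_driver", "protected": 0}] * 20
    + [{"occupation": "other", "protected": 1}] * 40
    + [{"occupation": "other", "protected": 0}] * 380
)
print(proxy_check(records, "occupation", "protected", "taxi_driver"))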

Fairness and transparency in underwriting are essential for ensuring equitable access to insurance coverage. Insurers may face legal and regulatory repercussions, customer complaints, and reputational damage if biases are not identified and rectified.


In Oklahoma, HB3577 would have enacted the Artificial Intelligence Utilization Review Act (AURA). The bill would have required insurers to disclose whether AI-based algorithms are used or will be used in the insurer’s utilization review process, which is defined in the bill as the ‘set of formal techniques designed to monitor the use of or evaluate the medical necessity, appropriateness, efficacy, or efficiency of health care services, procedures, or settings.’ Insurers would also have been required to submit their algorithms and training data sets to the state’s Insurance Department to certify that they have minimized bias risks. The bill was recommended to pass as amended by the House Government Modernization and Technology Committee on February 21, 2024, but missed the March 14 deadline for third reading set by Oklahoma legislative rules. The bill did not pass.

2.2 Bias in facial recognition technology

Facial recognition technology matches human faces against a digital database of images. A National Institute of Standards and Technology study on the accuracy of facial recognition algorithms found that the majority of face recognition algorithms exhibit demographic differentials, creating a risk of bias across demographic groups.
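
These ‘demographic differentials’ are differences in error rates – for example, false match rates – across groups. The following minimal sketch, using entirely hypothetical verification outcomes, shows how such differentials might be measured in practice.

def false_match_rates(results):
    """`results` is a list of (group, same_person, matched) tuples. The
    false match rate for a group is the share of different-person
    comparisons that the system wrongly declared to be a match."""
    rates = {}
    for group in {g for g, _, _ in results}:
        impostor = [m for g, same, m in results if g == group and not same]
        rates[group] = sum(impostor) / len(impostor)
    return rates

# Hypothetical outcomes: 2% false matches for group_a, 10% for group_b.
results = (
    [("group_a", False, True)] * 2 + [("group_a", False, False)] * 98
    + [("group_b", False, True)] * 10 + [("group_b", False, False)] * 90
)
print(false_match_rates(results))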

Example

A pharmacy chain identified individuals suspected of prior theft or other criminal activities in the chain’s stores by using facial recognition technology. The technology was disproportionately deployed in stores located in predominantly non-White communities. The FTC brought an enforcement action against the chain, alleging that the technology was inherently flawed and generated unacceptable levels of error. The FTC also alleged that the chain did not implement crucial safeguards that could have mitigated consumer harm prior to the deployment of its facial recognition technology. The FTC and the chain settled the case, with the chain agreeing to cease the use of facial recognition in its stores for surveillance purposes for five years, during which time it is also required to expunge any biometric data gathered in association with the previously utilized system. FTC v Rite Aid Corp, No. 2:23-cv-5023 (ED Pa December 19, 2023).

While law enforcement agencies attempt to minimize these concerns (the New York City Police Department, for example, claims that ‘[n]o one has ever been arrested solely based on a facial recognition search’), the Brookings Institution notes that ‘enhanced forms of surveillance used by police officers pose significant risks to civilians already targeted in the criminal justice system and further the historical biases affecting communities of color.’

2.3 State legislation regulating algorithmic bias

In Utah, SB 149 was approved by the governor on March 13, 2024, and took effect on May 1, 2024. The law establishes liability for the use of generative AI that violates consumer protection laws if not properly disclosed. It also creates an Office of Artificial Intelligence Policy and an AI Learning Laboratory Program to analyze and recommend potential legislation regarding AI. In addition, a ‘regulatory mitigation’ licensing scheme is established whereby participants in the AI Learning Laboratory Program can avoid regulatory enforcement while developing and analyzing new AI technologies.

Several other states have seen bills to control algorithmic discrimination introduced in their legislatures.

In Hawai’i, SB 59 would prohibit covered entities from making ‘algorithmic eligibility determinations’ or ‘algorithmic information availability determinations’ in a discriminatory manner. The bill would also require users of such algorithmic decision-making tools to provide notice of the use and explain how the tools use an individual’s personal information. Finally, the bill would require annual audits of such tools and obligate covered entities to submit their audit results to the state’s attorney general each year. SB 59 was introduced and referred to committee on January 16, 2025.

In New York, AB 768/SB 1962 would enact the New York Artificial Intelligence Consumer Protection Act. The purpose of the identical bills is to prevent the use of artificial intelligence algorithms to perpetuate discrimination against protected classes. Developers of AI systems would be required to prepare documentation describing the mitigation strategies used to prevent harm from the system and evaluating how the system prevents unlawful algorithmic discrimination. AB 768 was introduced in the Assembly on January 8, 2025, and SB 1962 was introduced in the Senate on January 14, 2025. Both bills have been referred to committee in their respective houses.

The New Mexico Legislature saw the introduction of HB 60, a bill to enact the New Mexico Artificial Intelligence Act. The bill would require developers of AI systems to use reasonable care to protect consumers from known or foreseeable risks of algorithmic discrimination arising from intended and contracted uses of a high-risk artificial intelligence system. Developers would be required to make disclosures to recipients of AI systems, including disclosure of how the system was evaluated for performance and mitigation of algorithmic discrimination prior to being offered or made available to the deployer. HB 60 was pre-filed on January 9, 2025, and passed the state House Judiciary Committee on February 24, 2025. It remains pending in the state legislature (New Mexico Artificial Intelligence Act, HB 60, moving to a floor vote in the House).

Massachusetts HD 396, titled ‘An Act to Ensure Accountability and Transparency in Artificial Intelligence Systems,’ would require developers to use reasonable care to identify, mitigate, and disclose risks of algorithmic discrimination. Developers would be required to provide deployers of AI systems with: (1) a summary of intended and foreseeable uses of the AI system; (2) known limitations and risks, including algorithmic discrimination; and (3) information on the datasets used for training, including measures taken to mitigate biases. Developers would also be required to notify the Attorney General and deployers of any known or foreseeable risks of discrimination within 90 days of discovery. The bill was introduced on January 8, 2025, and referred to committee.

Virginia HB 2094, introduced on January 7, 2025, would also have required developers of AI systems to disclose the known or reasonably foreseeable limitations of their systems, including any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of their artificial intelligence systems. Developers would also have been required to disclose the measures the developer has taken to mitigate reasonably foreseeable risks of algorithmic discrimination that the developer knows arise from deployment or use of the system, and how an individual can use the system and monitor its performance for any risk of algorithmic discrimination. However, the bill was ultimately vetoed by Virginia Governor Youngkin (Virginia Governor Vetoes Artificial Intelligence Bill HB 2094: What the Veto Means for Businesses).

A similar bill in Texas, HB 1709, would also require developers of AI systems to make disclosures regarding the risk of bias in their systems. A developer or deployer of a system would be required to assess potential risks of algorithmic discrimination prior to deployment of a system and implement a risk management policy to govern the development or deployment of the high-risk artificial intelligence system. HB 1709 was filed on December 23, 2024 and referred to the Delivery of Government Efficiency Committee on March 14, 2025.

Finally, in Connecticut, Proposed SB 2 states only that it would amend the Connecticut General Statutes to ‘protect consumers in this state from the risks of algorithmic discrimination and unfair treatment posed by artificial intelligence.’ The Proposed Bill was filed on January 8, 2025, and passed the Senate on May 14, 2025.

Section 3 – Detecting and mitigating algorithmic bias

Several methods can be deployed to detect algorithmic bias, mitigate its risks and negative outcomes, and control biases in output.

3.1 Detection

3.1.1 Outputs and results

While one of the principal features of AI is the purported lack of direct human involvement, human oversight is still needed to test, audit, and examine data inputs and outputs, and to detect vulnerabilities. Prioritizing transparency around decision-making will help organizations build trust with their customers.

Potential biases can be uncovered by comparing and analyzing results based on various characteristics, such as race or gender. Conducting fairness assessments, evaluating predictions, and scrutinizing decision-making processes will help to identify and mitigate discriminatory patterns. 
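
A minimal sketch of this kind of comparison follows: it computes true-positive and false-positive rates per group from hypothetical outcomes, since large gaps between groups on these error rates can signal discriminatory patterns. The data and group labels are assumptions for illustration.

def error_rate_gaps(y_true, y_pred, groups):
    """Compute true-positive and false-positive rates per group;
    large gaps between groups can indicate discriminatory patterns."""
    out = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        tp = sum(1 for i in idx if y_true[i] and y_pred[i])
        fn = sum(1 for i in idx if y_true[i] and not y_pred[i])
        fp = sum(1 for i in idx if not y_true[i] and y_pred[i])
        tn = sum(1 for i in idx if not y_true[i] and not y_pred[i])
        out[g] = {"tpr": tp / (tp + fn) if tp + fn else None,
                  "fpr": fp / (fp + tn) if fp + tn else None}
    return out

# Hypothetical outcomes for two groups.
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 1, 0, 0, 1, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(error_rate_gaps(y_true, y_pred, groups))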

3.1.2 Simulations

Running simulations – testing the algorithm by operating it in hypothetical situations – allows for the evaluation of algorithm performance, prediction accuracy, and fairness in different scenarios and contexts. One easily implemented strategy is to have simulations run by personnel from a diverse range of backgrounds.
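
One common simulation technique is counterfactual testing: feeding the system paired hypothetical inputs that differ only in a sensitive attribute (or a likely proxy, such as a ZIP code) and flagging any divergence in outputs. The sketch below is illustrative only; the stand-in model and cases are assumptions.

def counterfactual_test(model, cases, attribute, alt_value):
    """For each hypothetical case, flip `attribute` to `alt_value` and
    flag cases where the model's decision changes. `model` is any
    callable mapping a feature dict to a decision."""
    flagged = []
    for case in cases:
        variant = dict(case, **{attribute: alt_value})
        if model(case) != model(variant):
            flagged.append((case, variant))
    return flagged

# Stand-in model that (improperly) keys its decision off ZIP code.
model = lambda c: "approve" if c["zip"] != "60629" else "deny"
cases = [{"zip": "60601", "income": 55000}, {"zip": "60629", "income": 55000}]
print(counterfactual_test(model, cases, "zip", "60601"))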

3.2 Mitigation

The total elimination of all bias is not a realistic goal, but it can be a continuous aspiration. Users should ensure that they are using AI ethically in decision-making and should take conscious steps to mitigate risks.  

3.2.1 Initial assumptions

Biases often start with initial assumptions. These assumptions may be innocuous on their face but contain hidden pathways to bias. Before starting the development of a system, operators of AI systems should establish a core set of initial assumptions about the purpose of the algorithm. By clarifying their intended goals, their target outcomes, and potential biases and assumptions, operators can align their strategies for bias mitigation from the outset. Operators will then be able to proactively design and implement bias mitigation measures, and a clear understanding of the algorithm’s purpose allows the development of comprehensive bias impact statements that guide decision-making processes, promote fairness, and uphold ethical principles throughout the algorithm’s lifecycle.

3.2.2 Bias impact statement

A bias impact statement can help identify potential biases in algorithmic decision-making, and as a best practice should be undertaken by all those designing, operating, and using the AI system during adoption and deployment.

The initial step in developing a bias impact statement is to determine what the decision is intended to do and which automated decisions require scrutiny. Decisions that require caution (eg, those related to hiring, lending, healthcare, and criminal justice) warrant careful examination. Prioritizing scrutiny of impactful automated decisions is a useful starting point.

3.2.3 Engage stakeholders

It is crucial to identify and engage all stakeholders, including shareholders, corporate officers, and employees. In some companies and communities, labor unions or employee organizations and community members will be included. Regular updates on mitigation strategies and progress, and – importantly – inviting feedback, can help build trust and demonstrate a commitment to addressing broader concerns in a responsible manner.

3.3 AI by design

AI by design, as described by Serena Westra and Ioannis Zempekakis of the OLX Group, is a six-step framework intended to help teams embrace a ‘human-centered approach’ to AI.

  • ‘Discover’ – this builds an understanding of the goal of the project, the customer’s needs and their problems, as well as business opportunities.
  • ‘Define’ – this sets the scope of the challenge: what problem to solve, or which opportunity to pursue. This step includes researching the context and AI possibilities.
  • ‘Decision’ – this asks whether the problem can and should be solved with AI. If so, what data is needed, and could the solution have unethical consequences?
  • ‘Develop’ – this is aimed at understanding the best solution for the problem.
  • ‘Testing’ – this identifies the riskier assumptions and tries to validate them by creating a quick (data-driven) prototype to test desirability.
  • ‘Deliver and Evaluate’ – this calls for iterating, refining, pitching, and delivering the solution to the end customer and key stakeholders.

In essence, AI by design continues to ask key questions about the need for AI and the system to be developed. It does not automatically assume that AI is the answer to a problem, or that further discussion is foreclosed. Continuing development and analysis are the key.

Additional resources

Alexander Amini et al., Uncovering and Mitigating Algorithmic Bias through Learned Latent Structure
Congressional Research Service, Facial Recognition Technology and Law Enforcement: Select Constitutional Considerations
David Danks and Alex John London, Algorithmic Bias in Autonomous Systems 
Ginger Zhe Jin, Artificial Intelligence and Consumer Privacy
Emre Kazim and Adriano Soares Koshiyama, A High-Level Overview of AI Ethics

Related Lexology Pro content

How-to guides:

Understanding AI-driven risks
Understanding the risks of negligence claims when using AI
AI and smart contracts

Checklists:

De-identification of data used by AI systems
Steps to mitigate risks associated with AI use in business

Quick views:

Overview of AI in business 
Key AI terms

Reliance on information posted:

While we use reasonable endeavours to provide up to date and relevant materials, the materials posted on our site are not intended to amount to advice on which reliance should be placed. They may not reflect recent changes in the law and are not intended to constitute a definitive or complete statement of the law. You may use them to stay up to date with legal developments but you should not use them for transactions or legal advice and you should carry out your own research. We therefore disclaim all liability and responsibility arising from any reliance placed on such materials by any visitor to our site, or by anyone who may be informed of any of its contents.