Introduction
This guide will assist in-house counsel, private practice lawyers, and compliance personnel in understanding the risks associated with the use of artificial intelligence (AI) in business. While AI does offer significant benefits, it also creates risks which must be considered at every level of the business – from board management to employees.
This guide covers:
- Accuracy and accountability
- Potential for misuse
- Legal liabilities
This guide can be used in conjunction with the following How-to guides: Risks and liabilities of AI algorithmic bias, Understanding the risk of negligence claims when using AI; Checklists: Steps to mitigate risks associated with AI use in business, De-identification of data used by AI systems; Quick views: Overview of AI in business and Key AI terms.
Section 1 – Accuracy and accountability
AI systems can improve productivity and streamline business operations. Two of the most prominent concerns are the accuracy of AI systems, and the accountability and responsibility for errors in the system or in the use of the system.
1.1 Accuracy
‘Accuracy’ is one of the standards by which an AI system is measured and is often defined as the number of predictions based on certain data that the system gets right. This aligns with the definition of AI accuracy as detailed in the EU-US terminology and taxonomy for artificial intelligence, which was developed by the EU-US Trade and Technology Council to coordinate approaches on AI governance. Measuring accuracy lets the user and developer judge whether the system is appropriate and mitigate against risk of harm (eg, if an AI financial trading system made an inaccurate analysis of economic data and triggered a wave of trading based on that inaccuracy).
1.1.1 Explainability of algorithms – ‘White box’ vs ‘Black box’
The operations of AI systems are based on algorithms which the National Library of Medicine defines as a series of steps used to solve a problem or to complete a task. The quality of the algorithm to perform calculations or extract data will be one of the principal factors in ensuring the accuracy of results.
'White-box’ AI models use a decision tree or a simple linear model and can typically be understood by humans. ‘Black-box’ AI models use complex decision trees or parameters to create their outputs. A key challenge is how to balance the performance and interpretation of outputs from the different models used. Many AI models cannot easily explain the steps taken to reach their final conclusions and this also raises additional practical and legal concerns.
1.1.2 Assessing accuracy
Reliability of data
The accuracy of an AI system will only be as good as the data inputted. Data that is not reliable or accurate will produce a bad result no matter how well designed and executed the system is (often loosely referred to as the problem of ‘Garbage in, garbage out’). For example, if some of the words input into a language translation program are wrong, any translation result that involves those words will be incorrect.
Persons developing or using the AI system
AI is not entirely divorced from human error. The following factors can all affect how well a system operates:
- automation bias – defined by researchers at Stanford University as overreliance on AI systems. Overreliance is a particular problem as it can lead to a willingness to accept incorrect results;
- errors on handover – directing the shift of the control of operations from an AI system to a human; and
- situation awareness – awareness of factors other than the data that was entered, which can all affect how well a system operates.
1.2 Accountability and transparency
1.2.1 Accountability
This is an assurance that entities or individuals will act responsibly to preserve customer trust and brand reputation. The term may also refer to the legal liability of both the user and developer of a system as regards the functioning of the AI system. There are also important ethical considerations. Without a governance framework in place, a business faces legal, operational, and reputational risks.
Traditionally, accountability for business decisions comes from the top down – at board level, however, it extends to senior managers and employees and may be assessed against the industry sector in which the business operates.
1.2.2 Transparency
Businesses need to be honest and understand how the AI system operates (ie, is it doing what it is supposed to do or if not, why not?). There is commonly a trade-off between transparency and accuracy and corporations must balance this risk. As researchers from the Alan Turing Institute pointed out in Explaining Explanations in AI (2018), finding the correct level of transparency is essential.
1.2.3 Shared responsibility
Understanding who is responsible for what and clarifying the scope of roles of all parties is crucial. Responsibility extends across the whole team including developers and end users.
Developers
Developers are responsible for the design and build of the AI system. They are responsible for putting the algorithms together and making sure that those algorithms do what they are supposed to do.
Users
Users ensure the system is used correctly and that it is not misused. For example, a company that uses AI to conduct employment background checks may be liable for discrimination if an algorithm persistently labels applicants with Latino or African American-sounding names as security risks.
For additional information, see How-to guide: Risks and liabilities of AI algorithmic bias.
1.2.4 Auditing
Monitoring and oversight of how an AI system is functioning should be standard business practice. Businesses should build this into their budgets and agree the extent and frequency of the audits.
1.2.5 Where are improvements needed?
While an AI system may function adequately and may not appear to have any serious defects, there will always be room for improvement. Both users’ and developers’ feedback can be used to identify improvements and the business will need to consider whether that improvement is technically or economically feasible.
Section 2 – Potential for misuse
2.1 Employee misuse
Employees using AI in the workplace pose a business risk and safeguarding businesses against this risk is critical. The consequences of the misuse could be no more than a minor inconvenience, or they could be more serious, leading to legal consequences for employees or possibly their employers.
Misuse can be reduced, if not eliminated, by ensuring that employees operating or overseeing the AI system are adequately trained and understand the consequences of misuse. Businesses should have clear employee policies and procedures on the parameters of AI usage.
For further information, see How-to guide: Risks and liabilities of AI algorithmic bias; Checklist: Steps to mitigate risks associated with AI use in business; and Quick views: Overview of AI in business and Key AI terms.
2.2 Accidental misuse
2.2.1 System behaving in an unintended manner
AI systems detect patterns and make predictions based on those patterns. The prediction is supposed to accomplish a particular goal or perform a certain task. The data provided is necessarily limited to what the humans already know. An unexpected event or condition – often known as a ‘black swan event’ – can render the predictions from the data unreliable.
Example
The online real estate marketplace Zillow discontinued its home-flipping business due to unacceptably high errors in its AI-generated offers to purchase property. The model was not able to deal with the disruption in the real estate market caused by the COVID-19 pandemic, the shortage of labor available to perform renovations, and a freeze in housing construction that led to an imbalance between supply and demand in the housing market.
2.2.2 Defective programming
AI systems cannot foresee the occurrence of all events because the humans who design the system cannot anticipate every conceivable event. Similarly, if inaccurate data is included in a system – for example, incorrect meteorological information is selected to be input into a weather forecasting system – the forecasts produced by that system will be wrong.
While AI is now being used to write code, researchers have found that the code generated by AI includes an unexpectedly high level of errors. The AI code writing system does not always understand what it is supposed to be doing due to a lack of context. The code that is written will include errors that can have serious consequences for users: researchers at New York University found that, for certain tasks where security is crucial, code generated by the code generator GitHub Copilot contained security flaws around 40 percent of the time.
2.2.3 Hallucinations
The story of the lawyers who submitted a brief written by ChatGPT that included citations to non-existent cases is a well-known cautionary tale. It is also a prime example of an AI ‘hallucination.’ A hallucination looks like it could be a plausible result of a query (ie, it is grammatically and syntactically correct) but is false or inaccurate. The causes of hallucinations are not known for certain. They may be the result of the training data used to develop the model being insufficient. A model may also be ‘overfit,’ meaning that it is accurate for its training data but inaccurate for new data, and therefore producing incorrect answers. In addition, training texts and prompts may be improperly encoded, or if an AI system is trained on unrepresentative data, it may produce biased results.
While fact-checking results may seem to defeat the purpose of automating tasks, quality control is no less important because we want to trust AI to handle a task without human intervention.
2.3 Deliberate misuse
2.3.1 Unethical use
AI has the potential to be used in an unethical manner or be manipulated by bad actors. Overreliance on AI and a reluctance to change a model with known defects could produce undesirable outcomes.
Example
Healthcare AI systems have been found to return less accurate results for African American patients because African Americans were underrepresented in the data used to train the program. Continued use of such a system after a healthcare provider has been made aware of the systemic error would be unethical. In addition, it could also subject the provider to liability for medical negligence (eg, in the event of misdiagnosis or unnecessary medical intervention).
2.3.2 Deepfakes
The US General Accounting Office (GAO) defines deepfakes as ‘a video, photo, or audio recording that seems real but has been manipulated with AI. The underlying technology can replace faces, manipulate facial expressions, synthesize faces, and synthesize speech. Deepfakes can depict someone appearing to say or do something that they in fact never said or did.’
Deepfakes may have some legitimate uses in business. For example, a retailer could allow online customers to upload their likenesses to ‘try on’ clothes virtually. This benefits the customer from a time and cost-saving perspective and may increase sales for the retailer. On the other hand, there are more malevolent uses being made, such as the recent circulation of pornographic images purporting to be of the singer Taylor Swift. Deepfakes could also be used to make false endorsements of products or to simulate an accident caused by a competitor’s product.
All companies are aware, or should be aware, of the need to stay truthful when advertising their goods or services. This extends to the use of deepfakes (eg, creating a deepfake of a living person and using it to advertise a product without their consent would constitute false or misleading advertising). See How-to guides: Issues surrounding online advertising, Understanding business defamation and trade libel and Avoiding false or misleading advertising.
2.4 Cybersecurity risks
2.4.1 Invasion of privacy
AI systems are based on the collection and use of large quantities of data. There are always inherent privacy risks associated with the collection of personal data, but certain privacy risks are heightened in the AI context. These include:
- repurposing – the use of data for a purpose other than that for which it was collected;
- persistence – the continued presence of data in the system after it should have been deleted; and
- spillovers – data being collected inadvertently on people without them being aware that their data is being collected.
Given the sheer quantity of data that may be collected by an AI system a person may not know that their privacy has been violated until well after the event. A company that uses AI for data collection can mitigate potential privacy risks by monitoring and auditing its data collection policies and practices on a regular, if not continuous, basis.
For further information, see How-to guides: How to determine and apply relevant US privacy laws to your organization, How to develop, implement and maintain a US information and data security compliance program; Checklists: Understanding privacy laws in the US and Developing key privacy and data security contractual terms and provisions (B2C) and Quick view: Key data privacy and data security terms.
2.4.2 Cyberattacks
The National Institute of Standards and Technology (NIST) has identified four types of cyberattacks on AI systems:
- evasion attacks, which attempt to alter an input to change how the system responds to it (eg, a malicious actor might add markings to stop signs to make an autonomous vehicle misinterpret them as speed limit signs);
- poisoning attacks, which introduce corrupted data;
- privacy attacks, defined as attempts to learn sensitive information about the AI or the data it was trained on in order to misuse it; and
- abuse attacks, or the insertion of incorrect information into a source that an AI system then absorbs. Abuse attacks give the AI system incorrect pieces of information from a legitimate but compromised source to repurpose the AI system’s intended use.
NIST states that these attacks are easy to mount and require minimum knowledge of the AI system being attacked. AI experts have devised defenses for these attacks but those defenses are, thus far incomplete.
Google’s AI red team – a group that exposes security flaws by simulating attacks on AI systems – published a recent report that outlines the most common tactics, techniques, and procedures used by attackers against AI systems. They are as follows:
- adversarial attacks, such as writing inputs specifically designed to mislead a model, which result in an incorrect output or an output that the system wouldn't give in other circumstances;
- poisoning, which entails manipulating the training data of the model to corrupt its learning process;
- prompt injection attacks, when a user inserts additional content in a text prompt to manipulate the model’s output; and
- backdoor attacks, such as installing and exploiting a backdoor (ie, a hidden entry point that bypasses traditional authentication), to manipulate the model and enable a hacker to hide code in the model and sabotage the model output but also to steal data.
For further information, see How-to guides: How to develop, implement and maintain a US information and data security compliance program and How to manage your organization’s data privacy and security risks; and Checklists: Steps to mitigate risks associated with AI use in business and Completing a data incident response plan assessment.
Section 3 – Legal liabilities
As the use of AI continues to advance and play a larger role in various business sectors, the question of legal liability for AI systems becomes increasingly important.
3.1 Contract liability
3.1.1 Mistake
Legal liability for AI systems also intersects with the mistake doctrine. This doctrine refers to situations of mistakes or errors occurring in the operation or output of AI systems. The mistake doctrine attempts to assign legal responsibility for these mistakes. Traditionally, legal liability for mistakes was properly attributed solely to human actors, since there could be no question that one or more humans made the mistake. However, with the rise of AI, questions have been raised as to whether liability should rest solely with the AI system itself, or whether it should be shared by other parties, especially those involved in the design, development, or deployment of a system.
Courts and legislators are grappling with this issue, and are seeking to establish principles that assign liability when AI systems make mistakes. Factors that are often being considered for assigning responsibility include the level of human control over the system, the foreseeability of errors, and the degree of autonomy granted to the AI system.
In some cases, liability will be attributed to the developer or operator of the AI system if negligence or failure to exercise due care is demonstrated. As an alternative, legal responsibility may be shared between multiple parties involved in the AI system's lifecycle. This may include shared responsibility among manufacturers, programmers, or end users.
3.1.2 Unconscionable terms
As AI systems are increasingly being used in contexts that include consumer use, legal liability for AI will also extend to the inclusion of unconscionable terms in agreements involving AI systems. ‘Unconscionable’ terms are contract provisions that are excessively one-sided, unfair, or oppressive, resulting in an imbalance of power between the parties involved. Unconscionable terms are most commonly, although not invariably, found in contracts involving a transaction with a consumer.
Organizations that provide AI services or products may seek to include contractual provisions that limit or absolve the organization of liability for injuries or damages caused by the AI system. However, if such terms are deemed unconscionable, they may be unenforceable in a court of law.
To mitigate legal liability, organizations should ensure that their contractual terms related to AI are fair, reasonable, and transparent.
3.1.3 Risks of smart contracts
AI integrating advanced applications of smart contracts is being developed at a fast pace. These smart contracts can make ‘decisions’ for their human parties. The mistake doctrine is unlikely to vitiate an AI-driven smart contract, largely because a successful claim of mistake depends upon the presence of a fundamental error in the underlying assumptions held by the parties when making the contract.
Because it can scarcely be said that a contracting party has made a legal ‘mistake’ when a smart contract makes an unfavorable decision that is within the parameters of the coding that made the contract, vesting smart contracts with the capacity and the responsibility to make decisions for ‘their’ human parties carries with it the risk that these automated contracts will make decisions at the margins of their coded instructions.
There has been no reported case law that holds these contracts to be unenforceable or that alters the laws of contract formation or performance for smart contracts. Even though there is no human involvement, a company or person on whose behalf a contract is made or performed (or not performed) may still be held liable for something done without their involvement. See How-to guide: AI and smart contracts.
3.2 Negligence
One of the attractions of AI is its ability to perform tasks without human intervention. In theory, this should minimize, if not eliminate, the risks of human error or misuse of the system causing damage. AI is put together and designed by humans, so the possibilities of error, and damage resulting from that error, will remain present.
3.2.1 Design of system
Negligence in the design of an AI system can, and very likely will, lead to adverse consequences and harm. Negligence requires establishing that the negligent party had a duty of care to the injured party, that they breached that duty, the negligent action caused some harm, and quantifiable monetary damages. Determining the standard of care for design negligence in an AI system involves assessing whether the designers and developers followed industry best practices, complied with relevant regulations, and took reasonable steps to prevent harm.
Under the present state of the law of negligence, the user of an AI system is less likely to be held to be at fault for an injury than the manufacturer or developer of the system.
As AI develops, and as its use is even more widespread, there will certainly be further debates as to who is at fault for injuries caused by AI. Is it the programmer, or is it the designer of the system, or is it the expert charged with providing the knowledge to the AI system? Any contributory fault of the user or injured party will also need to be considered. When an AI system is considered fully autonomous, or when it is far removed from human decision-making, establishing causation and foreseeability of an injury will become more difficult.
AI negligence claims have begun to appear in the courts. In early 2017, a class action was brought against the vehicle manufacturer Tesla, claiming that the autopilot system in an automated vehicle contains safety features that did not work, and fault enhancements.
AI technology is still new and will continue to evolve. The legal frameworks and judicial precedents that address issues of liability are still being established. It is crucial for organizations and policymakers to proactively address these concerns.
3.2.2 Product liability
The law has been relatively slow to deal with AI. There has been some case law in the United States that deals with the use of computerized robotics. For example, in Jones v W + M Automation, Inc, 31 AD3d 1099, 818 NYS2d 396 (2006), the court dismissed a plaintiff’s products liability action against the manufacturer and the programmer of a robotic loading system. The court declined to find that the defendants could be liable for the plaintiff’s injuries because the defendants ‘manufactured only non-defective component parts.’ If the robot – and its associated software – were ‘reasonably safe when designed and installed,’ the defendants could not be liable for the plaintiff’s injuries. His employer, the end user of the system, could still be held liable for making improper modifications to the hardware or software.
3.3 IP infringement
3.3.1 Appropriation of name or likeness
Appropriation occurs when someone uses a name, likeness, or image for commercial purposes without permission. Appropriation of likeness could be implicated in many fields of AI. A deepfake modeled on a real person and used in an advertising campaign could constitute an appropriation. State governments have begun cracking down on deepfakes. In Tennessee, the ELVIS Act of 2024 (Ensuring Likeness Voice and Image Security Act), signed by Governor Bill Lee on March 21, 2024, provides that a person (including a firm, association, partnership, or corporation) may be civilly liable if that person ‘distributes, transmits, or otherwise makes available an algorithm, software, tool, or other technology, service, or device, the primary purpose or function of which is the production of an individual’s photograph, voice, or likeness without authorization from the individual or, in the case of a minor, the minor’s parent or legal guardian, or in the case of a deceased individual, the executor or administrator, heirs, or devisees of such deceased individual.’
3.3.2 Misrepresentation of origin of work
State and federal laws (including the Lanham Act, which includes federal trademark law) prohibit misrepresentations of the origin of work. AI-generated work misrepresented as a human creation could subject the creator to liability.
Example
Admirers of various bands, including the Beach Boys and Oasis, have used AI to generate new albums that replicate the sound of the original bands. If these albums are marketed without a disclaimer that they are just recreations of what the former bands would sound like, that would be misrepresentation of the origin of work.
3.3.3 Who is the author of AI-generated work?
The Copyright Act states that copyright protection is afforded to ‘original works of authorship fixed in any tangible medium of expression, now known or later developed, from which they can be perceived, reproduced, or otherwise communicated, either directly or with the aid of a machine or device.’ ‘Authorship’ means some human involvement in the creation of the work. A work produced entirely by AI is not a ‘work of authorship,’ and so is not entitled to copyright protection (Thaler v Perlmutter, No. 22-1564 (DDC August 18, 2023)).
3.3.4 Copyright infringement
Copyright infringement is defined as occuring ‘when a copyrighted work is reproduced, distributed, performed, publicly displayed, or made into a derivative work without the permission of the copyright owner’. This general rule is subject to many exceptions, the most significant one being for 'fair use'. Fair use is stated as not being infringement, but the term ‘fair use’ was purposely not defined in the Copyright Act to allow for a case-by-case determination of whether a use was fair. The Copyright Act gives a non-exclusive list of examples of fair use, including use for purposes ‘such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research.’
In determining whether an unauthorized use is fair use, the courts consider various factors, including:
- the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
- the nature of the copyrighted work;
- the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
- the effect of the use upon the potential market for or value of the copyrighted work.
One of the factors the courts consider when presented with questions of fair use is whether the allegedly infringing work was transformative (ie, whether the use adds something new, with a further purpose or different character and is not just a substitute for the original use of the work). For example, OpenAI argues that its purpose is ‘transformative,’ because the training process creates ‘a useful generative AI system.’ OpenAI also contends that the copies are not made available to the public but are used only to train the program.
Some stakeholders argue that using copyrighted works to train AI programs is fair use, and so not an infringement of copyright. As the US Supreme Court stated in the recent case of Andy Warhol Foundation for the Visual Arts, Inc v Goldsmith, 598 US 508 (2023), the main question for the first fair use factor relating to the purpose and character of the use is ‘whether the new work merely supersedes the objects of the original creation (supplanting the original), or instead adds something new, with a further purpose or different character.’ The first fair use factor ‘focuses on whether an allegedly infringing use has a further purpose or different character, which is a matter of degree, and the degree of difference must be weighed against other considerations, like commercialism ... Although new expression may be relevant to whether a copying use has a sufficiently distinct purpose or character, it is not, without more, dispositive of the first factor.’
3.3.5 Recent litigation
Some generative AI applications have also raised concern that training AI programs on copyrighted works allows them to generate similar works that compete with the originals. For example, an AI-generated song called ‘Heart on My Sleeve,’ made to sound like the artists Drake and The Weeknd, was heard millions of times on streaming services. Universal Music Group, which has deals with both artists, argues that AI companies violate copyright by using these artists’ songs in training data.
Various plaintiffs have filed multiple lawsuits claiming the training process for AI programs infringed their copyrights in written and visual works.
In September 2023, a US district court originally ruled that a jury trial would be needed to determine whether it was fair use for an AI company to copy case summaries from the Westlaw legal research platform to train an AI program. The program was to be trained to quote pertinent passages from legal opinions in response to questions from a user. The AI program used WestLaw's proprietary headnotes, and the Court denied summary judgment on the fair use defense to allow a jury to make the determination of whether that use is transformative (meaning it is more likely to be fair use) or merely replicative. The AI company, Ross Intelligence, claimed its AI uses these headnotes to learn language patterns for generating judicial opinion quotes, potentially qualifying as fair use. However, if the AI replicates the creative expression in the headnotes, that use would not be considered a minor transformative step. This unresolved issue highlights challenges in AI-related copyright cases, as seen in other lawsuits where companies argue their AI systems merely analyze statistical relationships, akin to reading a book. Thomson Reuters Enterprise Centre GmbH v Ross Intelligence Inc, 694 F.Supp.3d 467 (D Del 2023). The Court later reconsidered and revised this decision on February 11, 2025. This time, the Court granted Partial Summary Judgment on Direct Copyright Infringement and Related Defenses, granted Plaintiffs' Motion for Partial Summary Judgment on Fair Use, and denied Defendants’ Motion for Summary Judgment on its Affirmative Defense of Fair Use and as to Plaintiffs' Copyright Claims. This decision is currently pending appeal.
In 2023, the New York Times, the Authors’ Guild, and several individual authors brought a suit against Microsoft and Open AI and the developers of ChatGPT for copyright infringement. The suits allege that the data used to train ChatGPT – the data the developers input so the system can make its predictions – is copyright protected and used without the permission of the copyright owners. Microsoft and OpenAI have countered that they ‘scraped’ the data from the internet, and that using it to train their system was fair use of the copyright protected material. OpenAI has moved to dismiss the suit against it, claiming that the NY Times ‘hacked’ its chatbot ChatGPT and other AI systems to generate misleading evidence for the case. In March 2025, the US District Court for the Southern District of New York ultimately denied OpenAI’s motion to dismiss the case - though narrowing its scope. Authors’ Guild case is also moving forward and was recently consolidated with other pending cases against Open AI and Microsoft.
On June 24, 2025, the Northern District of California issued what many are calling the first significant ruling in a major AI copyright case. The case, Bartz v Anthropic PBC, Case No. 3:240cv-05417, involved several authors suing AI developer Anthropic over the training of its AI chatbot, Claude. The Court ultimately ruled that Anthropic’s use of legally obtained books to train Claude constituted fair use but that Anthropic’s use of a pirate book database constituted copyright infringement.
3.3.6 Patent protection for AI-generated work
The US Patent and Trademark Office (USPTO) has issued Guidance on patent protection for AI-assisted inventions. The Guidance, which came into effect on February 13, 2024, states that AI-assisted inventions are not categorically unpatentable but the inventorship analysis should focus on human contributions. Patent protection may be sought for inventions for which a natural person provided a significant contribution to the invention even with the assistance of one or more AI systems. The Guidance provides procedures for determining whether there has been a significant human contribution to an invention. Examples of hypothetical situations of how the guidance would apply are available on the USPTO AI-related resources webpage.
Additional resources
Related Lexology Pro content
How-to guides:
Risks and liabilities of AI algorithmic bias
Understanding the risk of negligence claims when using AI
AI and smart contracts
Checklists:
Steps to mitigate risks associated with AI use in business
De-identification of data used by AI systems
Quick views:
Overview of AI in business
Key AI terms
Reliance on information posted:
While we use reasonable endeavours to provide up to date and relevant materials, the materials posted on our site are not intended to amount to advice on which reliance should be placed. They may not reflect recent changes in the law and are not intended to constitute a definitive or complete statement of the law. You may use them to stay up to date with legal developments but you should not use them for transactions or legal advice and you should carry out your own research. We therefore disclaim all liability and responsibility arising from any reliance placed on such materials by any visitor to our site, or by anyone who may be informed of any of its contents.