Introduction
This guide helps in-house counsel, private practice lawyers, and compliance professionals in the United States understand the potential liability for negligence that may arise when using artificial intelligence (AI). It also provides sector-specific information on the risks of negligence or professional negligence claims when using AI in the medical, legal, and transportation sectors.
This guide covers:
- Negligence and liability when using AI
- Industry use cases – AI standards and liability risk
- Managing what lies ahead
This guide can be used in conjunction with the following How-to guides: AI and smart contracts (USA) and Understanding the risk of negligence claims when using AI (USA)
Section 1 – Negligence and liability when using AI
From healthcare to self-driving cars, the wide-scale adoption and use of AI has the power to be transformative; however, it also raises difficult questions about who should be held accountable for decisions made or informed by AI, and who should be held liable when things go wrong.
1.1 Who is liable when AI fails?
Artificial intelligence (AI) is the science of creating machines that simulate human intelligence to undertake ‘human’ tasks and functions (eg, problem solving and reasoning). The use of AI technologies poses various challenges for the law of negligence, as the nexus between human choices, actions, and outcomes on which negligence law is founded is being tested and challenged (see Negligence and AI’s Human Users by Andrew D. Selbst).
Negligence is governed by tort law. Torts are legal wrongs in which one party suffers harm, and the tort of negligence occurs when one person harms another by failing to act with the level of care that a reasonable person would have exercised under the same circumstances. Broadly speaking, for a negligence claim to succeed, the claimant must prove each of the following elements (illustrated schematically after this list):
- the claimant was owed a duty of care by, eg, the service provider;
- there was a breach of that duty;
- the breach caused harm to the claimant; and
- damage or other losses resulted from that harm.
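For readers who find a structured representation helpful, the conjunctive nature of this test – every element must be satisfied or the claim fails – can be sketched in code. The example below is purely illustrative; the class and field names are hypothetical and carry no legal weight.

```python
from dataclasses import dataclass

@dataclass
class NegligenceClaim:
    """Illustrative checklist of the four elements of a negligence claim."""
    duty_of_care: bool    # the defendant owed the claimant a duty of care
    breach_of_duty: bool  # the defendant fell below the required standard of care
    causation: bool       # the breach caused the harm in question
    damage: bool          # the claimant suffered damage or other losses

    def can_succeed(self) -> bool:
        # The test is conjunctive: failure on any single element defeats the claim.
        return all((self.duty_of_care, self.breach_of_duty,
                    self.causation, self.damage))

# Example: breach is established but causation is not proven, so the claim fails.
claim = NegligenceClaim(duty_of_care=True, breach_of_duty=True,
                        causation=False, damage=True)
print(claim.can_succeed())  # False
```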
In negligence claims involving AI, the cause of an AI system’s failure to perform is the key element for establishing a breach of the duty of care. Added to this, as with all negligence claims, the fault or defect must have caused the damage or loss in question. The court will need to undertake a detailed assessment of where the error arose, and legal liability is likely to be determined on a case-by-case basis.
From a legal perspective, AI technology is viewed as a product, not a person. When a product malfunctions and causes injury, victims can hold the manufacturer liable for their losses by filing a product liability claim on a strict liability basis. See, eg, Riegel v Medtronic, 552 US 312 (2008). If the injured party can prove that the product did not function as safely as could reasonably be expected, the manufacturer may be held liable and required to pay compensation. This is one potential course of action when AI errors cause injury.
Foreseeability is a question that arises in the determination of negligence. As the definition provided by the Cornell Law School’s Legal Information Institute indicates, foreseeability considers whether ‘a person could or should reasonably have foreseen the harms that resulted from their actions.’ This provides guidance on how courts will measure foreseeability in their assessment of a claim.
As human intervention diminishes, and technological innovations advance, the foreseeability of harm can be obscured, making it difficult to determine whether the damage was caused by a single original event or by a combination of multiple (actual or potential) causes. Further, the exact sequence of events may be difficult to prove.
1.2 Consider the contributors to the AI system
There are many contributors involved in the development of an AI system:
- product manager – sets out product requirements for the AI system and determines how these should be managed, including ongoing maintenance and compliance with relevant legislation, eg, legislation that requires that AI algorithms be utilized in a non-discriminatory manner;
- AI development team – could be an internal team or third-party team outsourced by the organization (eg, software developers, data scientists and project managers). Duties of the development team include analysis of data, building the AI system, testing models and deployment within the organization. The development team will work closely with the product manager to ensure that the system meets the product requirements;
- user experience designers – experts in customer service who assist with the creation of appropriate responses using AI-powered tools (eg, chatbots). Compliance and legal teams will require input from the user experience designers to ensure that the systems work as intended and that potentially unlawful uses (eg, copyright infringement) are guarded against;
- senior management – oversees the entire development and marketing process; and
- other roles (eg, a Chief AI Officer, who understands the business and is responsible for communicating business value to stakeholders and clients).
When issues of negligence arise, it may be difficult to determine not just who was negligent, but whether their negligence caused the plaintiff’s injury and what share of the fault is borne by each party.
1.3 Establishing the cause of fault or damage
When an AI system fails to perform as anticipated, the cause of the fault or damage will be the key determinant for a court when assessing and establishing a case for breach of the duty of care in negligence claims.
At the heart of the claim, the breach must have caused the damage or loss, and if this is established, the person or entity responsible will be held liable. It is, therefore, essential that in any claim where defective AI causes harm or damage, all parties involved in the creation, design, installation, and maintenance of the AI system are identified to accurately assess liability, as the duty of care is owed by all stakeholders in the AI system.
There are many factors to consider, including the following:
| Cause of damage | Who may be liable? | Why? |
| --- | --- | --- |
| Did the user fail to correctly follow the instructions of the AI system? | User/owner | Negligent operation of the system – eg, inputting incorrect information may cause the system to function in a way not intended by the user/owner, even if the system is functioning as designed. |
| Was the AI system coded incorrectly? | Software developer | The system was defective. An analogy may be drawn to the manufacturer who puts a defective product on the market. |
| Did the user manual contain errors in the procedural instructions? | Manufacturer | The injury was caused by a system that was not given the correct instructions due to the manufacturer’s errors. |
| Was there a failure in system maintenance? | Maintenance provider | If the system was not properly maintained, leading to malfunction, the maintenance provider could be liable for neglecting necessary upkeep. |
| Was the AI system misused beyond its intended purpose? | User/owner | If the system was used in a way that exceeds its designed capabilities, the user/owner could be responsible for any resulting damage. |
1.4 Professional negligence claims and AI
The test to bring a claim of professional negligence is similar to that of negligence, ie, the claimant must establish that the defendant owed a duty of care and breached that duty (whether by actions or omissions), and as a result of that breach the claimant suffered injury, damage or loss.
The alleged negligent conduct is subjected to a test which compares the conduct to that of a hypothetical reasonably prudent person. See, eg, Texler v DO Summers Cleaners & Shirt Laundry Co, 81 Ohio St 3d 677, 693 NE2d 271 (1998).
Examples of professional negligence involving AI include situations where the professional operating the AI tool has not followed operating instructions correctly, or where a fault in the AI software has led to an error in diagnosis (eg, when used in a medical practice). Professional negligence could also be found where there has been a failure to supervise the AI program as it works – letting it operate without a ‘human in charge’ – or where a program fails to perform in the manner reasonably expected.
The deliberate or accidental misuse of an AI program could lead a court to find that the person(s) or entities in charge acted negligently. Determining who is at fault – whether the blame lies with the ‘AI’ itself or with an individual – will be a key element in establishing liability and whether there has been a breach of a duty of care in professional negligence terms.
The rise of AI in law firms is a hot topic, but definitive guidelines on its use are still being developed, and many firms are now creating internal policies to regulate their use of AI. It is likely that AI will find its way into law firm practice, so internal policies developed before AI use becomes widespread – and before problems emerge – are essential to address the issues that will inevitably arise from this rapidly evolving technology. See How-to guides: Corporate governance and the use of AI, Understanding AI-driven risks and Checklists: Developing a policy for the ethical use of AI and Steps to mitigate risks associated with AI use in business.
AI simulates human intelligence. It analyzes data to find patterns and make predictions, much as humans do, only AI does it on a much larger scale and much more quickly. In the legal field, AI technology has found its primary use in legal research and writing. Some AI platforms are even capable of drafting legal memoranda, filling out government forms, and analyzing contracts. As with any delegation of tasks, using AI in such a manner raises serious concerns about client confidentiality. Uploading sensitive documents to an outside platform could easily be considered a violation of professional rules such as ABA Model Rule 1.6, which prohibits lawyers from revealing client information without consent.
One example of professional AI use gone wrong involves a Utah lawyer who was sanctioned by a Utah appeals court after it was discovered that he had used ChatGPT to prepare a court brief containing false citations, including a citation to a nonexistent case. The error was first spotted by opposing counsel, who noticed that the brief referenced a case that did not appear in any legal database. The lawyer admitted that an unlicensed law clerk had written the brief and that he had failed to verify its citations. The court ordered the lawyer to pay opposing counsel’s fees, refund his fee to his client, and donate $1,000 to a legal non-profit. The court’s decision emphasized that AI may be a useful tool, but it does not relieve lawyers of their responsibility to ensure the accuracy of all court filings.
1.5 Uncertainties surrounding the Communications Decency Act
Section 230 of the Communications Decency Act of 1996 protects online service providers from being held liable for content posted by their users. The law states that an ‘interactive computer service’ provider will not be treated as the ‘publisher’ of information supplied by a third party. Historically, Section 230 has provided tech companies with a powerful defense, but its application to AI has become a subject of intense legal debate. There is no current consensus on how Section 230 applies to AI-related harm; its application will likely depend, at least initially, on the specifics of each case and on how AI is understood to have caused the injury. A central question is whether an AI developer or deployer falls within the statute’s definition of an ‘information content provider.’ If so, those developers or deployers would be unable to use Section 230 as a shield against liability, meaning they could be subject to more tort claims.
Section 2 – Industry use cases – AI standards and liability risk
2.1 AI in healthcare
AI systems are already in use in various clinical settings, and a recent article on the current state of AI in healthcare notes that AI is helping doctors diagnose and manage kidney disease and improve diagnostics and analysis of patient data. In the future, AI will be able to conduct a broad range of healthcare functions. The introduction of AI technology into medical practice creates challenging legal questions of liability, given the absence of any pertinent US case law, and the potential creation of new risks to manage and insure. As the complexity of the technology rises, the complexity of the liability analysis tends to rise as well. Diagnostic error is a common cause of action in malpractice litigation. Questions of malpractice involving medical AI are challenging:
- What if the AI system gives the wrong diagnosis or treatment plan, which leads to harm to or death of a patient?
- Who is liable?
- How do we determine whether medical malpractice has occurred?
Companies using AI need to consider how to mitigate the risks associated with its use, such as by developing appropriate monitoring and oversight of AI use, setting out clear training and policies, and ensuring that they have the necessary insurance coverage.
2.1.1 Standard of medical care
Liability for medical errors is assessed against a standard of care usually described as that level of care, skill, and treatment which, considering all relevant surrounding circumstances, is recognized as acceptable and appropriate by reasonably prudent healthcare providers. This is the generally accepted definition and position at law. This customary standard is normally supported by expert witness testimony that clarifies the local or national practice. Doctors spend years in education and training to become certified to practice, and their practice is backed by professional liability insurance in the event an error is made.
AI is becoming increasingly sophisticated at mirroring what humans can do across clinical decision-making – eg, the use of AI in mammography, as discussed in a recent article in the American Journal of Roentgenology – as well as in transcribing documents and treating patients remotely. The potential benefits include faster delivery of care, improved accuracy, and better patient outcomes at lower cost.
Use of AI undoubtedly also carries risk. System malfunctions can put patients at risk – eg, through diagnostic errors – as can situations where physicians and nurses lack familiarity with the processes or are not adequately trained to use them in decisions about patient care.
2.1.2 Liability and medical malpractice
According to the American Medical Association Journal of Ethics, implementation of AI raises complex legal liability questions for health care professionals and technology manufacturers (particularly if they cannot explain recommendations generated by AI technology).
The question of whether AI can be held liable for medical malpractice (eg, where the AI system misses a diagnosis in the same way that a medical professional might) is complex. Generally, the answer is that AI cannot be held liable: AI technology is not a licensed medical professional, nor is it a person to whom a negligent act may be attributed. However, there may still be legal recourse for patients who suffer harm caused by AI.
There may also be potential for a medical negligence claim based upon improper use or reliance on AI by the medical professional. In any case of negligence, whoever is making the claim will need to establish that the product manufacturer or the medical professional owed them a duty of care, that they breached that duty, and that the breach caused injury or loss to them.
Claims arising from AI mistakes are likely to differ from current claims, which tend to involve human error (eg, medical misjudgments or accidents in the operating room). Physicians, nurses, and support staff will need adequate training to understand the use of AI and how it predicts clinical outcomes, and clinical training and medical education will need to be tailored to address this risk. Investigating the root cause of AI-related medical negligence will involve engaging many experts – not just from the medical profession, but also IT analysts and lawyers. See Artificial Intelligence and Liability in Medicine: Balancing Safety and Innovation from the Milbank Quarterly, 2021.
2.1.3 Evolving standards in AI claims
The Minnesota Journal of Law, Science & Technology offers analysis of how the use of AI in medicine will shape medical claims and suggests the following approaches to give the injured party some avenue of redress:
- adoption of a standard of AI personhood in negligence claims – create a standard of care that compares an AI system’s performance against that of other AI machines and computers, recognizing that the standard of care applied to AI and to humans would likely differ;
- common enterprise liability – AI is developed by multiple parties, making it difficult to apportion liability to one individual entity (eg, solely the manufacturer). Under this approach, each entity involved in the various components of AI development (eg, design, product teams, manufacturers) could be held jointly liable when assessing a claim; and
- modification of the standard of care when using AI – this involves a standard of care that evolves to reflect the use of AI in medicine (eg, as set out in practice guidance issued by professional organizations).
The key is to ensure that all stakeholders – including end users, developers, and everyone else in the supply chain – are mindful of the risks, so as to ensure AI safety and effectiveness. Businesses could consider reviewing their contractual provisions to determine whether they rely on AI systems for contract performance, and integrating staff training on AI use into their risk management frameworks.
2.2 AI and the practice of law
AI is an important new tool with significant implications for the legal profession and the practice of law. There are compelling ways in which AI can simplify and bring efficiency to legal practice, whether in undertaking research, e-discovery and disclosure, predicting case outcomes, drafting documents, or customizing chatbots to provide client solutions.
That said, AI models based on unsupervised machine learning can produce different outputs when recognizing the same concept or phrase, and AI technology may not recognize how to apply the law to a given scenario. Human intervention and oversight from lawyers are still likely to be required. This is particularly true in criminal law, where the idea of AI potentially being responsible for decisions regarding incarceration or other liberty-altering consequences raises questions about human rights implications (including the right to a fair trial) and the underlying principle of transparency in delivering justice.
Even where humans retain ultimate decision-making authority, they may still become overly reliant on technology-based recommendations (ie, ‘automation bias’). This raises the obvious question of determining how to evaluate the potential embedded bias in the AI model. This poses additional risks for law firms regarding compliance with inclusivity and diversity objectives, the role of ‘ethics’ in using legal technology for decision-making, and the promotion of social responsibility. Ultimately, individual firms will have to weigh the risks against the benefits.
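Evaluating embedded bias is, in part, an empirical exercise. One widely used quantitative check is demographic parity, which compares the rate of favorable outcomes across groups. The sketch below is a minimal illustration using hypothetical decision data; it is not a complete fairness audit, and the group labels and sample outcomes are assumptions.

```python
from collections import defaultdict

def favorable_outcome_rates(decisions):
    """Compute the favorable-outcome rate per group from (group, outcome) pairs."""
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favorable[group] += int(outcome)
    return {group: favorable[group] / totals[group] for group in totals}

# Hypothetical model decisions: (group label, 1 = favorable outcome, 0 = unfavorable)
decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = favorable_outcome_rates(decisions)
disparity = max(rates.values()) - min(rates.values())
print(rates)      # approximately {'A': 0.667, 'B': 0.333}
print(disparity)  # approximately 0.333 – a large gap may warrant closer human review
```

A check like this does not settle whether a model is ‘fair’ – it only flags disparities for the human review that the automation-bias discussion above calls for.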
2.2.1 AI use cases in the courts
There are also risks in relying on AI for more routine aspects of legal practice. For instance, in a well-publicized case, a New York personal injury lawyer, Steven Schwartz, used ChatGPT to draft a brief. Unfortunately, the cases cited in the brief did not exist, a fact noted by opposing counsel. Mr. Schwartz and his law firm were sanctioned by the court not only for filing a false brief, but also for initially refusing to accept responsibility for their error.
The use of AI in legal practice has prompted at least two federal judicial officers – Judge Starr of the Northern District of Texas and Magistrate Judge Fuentes of the Northern District of Illinois – to issue standing orders regarding the use of AI in documents submitted to them. Judge Starr’s order requires all attorneys appearing before his court to certify whether they have used generative AI to prepare filings and, if so, to confirm that any language prepared by the generative AI was validated for accuracy by a human. The order issued by Magistrate Judge Fuentes states that:
[a]ny party using any generative AI tool to conduct legal research or to draft documents for filing with the Court must disclose in the filing that AI was used, with the disclosure including the specific AI tool and the way it was used.
With this in mind, practitioners should familiarize themselves with the relevant court’s position on AI and whether they need to file a certification regarding use of AI.
2.2.2 Reasonable standard of care
To practice, lawyers must graduate from an approved law school with a Juris Doctor (JD) degree and pass a bar exam. Lawyers are bound by the rules of professional conduct, including client confidentiality, and they risk disciplinary action and professional negligence claims if they fail to conform to the relevant standards.
Lawyers have a duty to adhere to a reasonable standard of care which, considering all the facts, would be recognized as appropriate by a similarly qualified professional. The test compares the lawyer’s conduct to that of a similarly qualified lawyer in the same field measured against the same set of facts. It is difficult to apply this standard to AI tools and machine learning, as here the ‘similarly qualified professional’ is not a person; the test may be more appropriately measured against other AI technology. In addition, lawyers using AI to draft briefs should exercise caution: understand what AI can do, become familiar with the tools, but also recognize their limitations. Finally, be sure to verify that the points being made and the cases being cited are accurate.
2.2.3 Legal liability and malpractice
There has been little discussion about the standard of care and causation in the context of lawyers using (or not using) AI, possibly because the human factor in the practice of law is still prominent. This will likely change in the near future, as new AI applications begin to be used in the legal field.
Despite these inherent challenges, over time and with the advancement of AI, it may in some circumstances be malpractice not to utilize AI technology. One area where this may arise is the discovery process, where AI could increase efficiencies and reduce the potential for errors in disclosure. Discovery is a pre-trial procedure in legal proceedings through which one party can obtain evidence from the other by the production of documents and requests for information. Thus, where undue delays in discovery could be attributed to a failure to implement AI, the attorney may be subject to a malpractice claim. A failure to use or understand the application of AI could also be an ethical violation, based on the attorney’s duty to be aware of the ‘benefits and risks associated with relevant technology.’ By the same token, however, misusing AI can produce results that give rise to both ethical violations and malpractice liability, to say nothing of the attendant embarrassing publicity. Lawyers must always remember their duties to clients, opposing counsel, and the courts, and keep those duties paramount when adopting even the most efficient and effective new technology.
Courts have acknowledged the use of AI and have recognized its potential to both support and harm the judicial process. For example, in May 2024 the Chief Justice of the California Supreme Court launched a task force on generative AI to further explore its potential benefits and risks. The task force has discussed a new model policy to ensure the responsible and safe use of generative AI in court administration; that policy was to be presented to the California Judicial Council in February 2025.
2.3 AI and aviation
Aircraft engineers are using AI to enhance safety, increase efficiencies, and reduce pilot workload. The introduction of fully autonomous flight systems to fly planes without any human input is not new – flying on autopilot has been around for over half a century. AI raises additional technical and ethical questions, and regulators and civil aviation authorities will face increasing challenges including safety, security, and responsible and ethical standards for AI use.
2.3.1 Aviation safety standards
The operation of civilian aircraft in the United States is governed by specific aviation safety regulations, adopted by the Federal Aviation Administration (FAA), which set the standards for aviation safety. The regulations cover all aspects of aeronautical operation and navigation, from health requirements for flight crews to the specific areas in which aircraft may be operated. These regulations are supplemented by Notices to Air Missions (NOTAMs), which contain ‘information essential to personnel concerned with flight operations but not known far enough in advance to be publicized by other means.’ A NOTAM ‘states the abnormal status of a component of the National Airspace System (NAS) – not the normal status.’
The FAA must be convinced that a system reliant on increased automation or AI achieves the appropriate level of safety. In the past, certification processes have taken an analytical approach to all elements of a new system or technology. One aspect of safety-critical systems is consistency – proving that the same inputs produce the same outputs every time. Regulators typically test and evaluate how the technology functions under normal and extreme conditions and then consider the consequences of any given failure. This phase of evaluation is followed by an analysis of performance in actual service and a detailed examination of instances in which the new technology fails to perform as expected.
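Because consistency – identical inputs yielding identical outputs – is central to that certification approach, the basic check is straightforward to express in code. The sketch below is illustrative only: `run_system` is a hypothetical placeholder for whatever component is under evaluation, not any real avionics interface.

```python
def run_system(inputs):
    # Hypothetical placeholder for the system under test; a real evaluation
    # would invoke the actual software component or model here.
    return sum(inputs) / len(inputs)

def is_deterministic(inputs, trials=100):
    """Return True if repeated runs on identical inputs yield identical outputs."""
    baseline = run_system(inputs)
    return all(run_system(inputs) == baseline for _ in range(trials))

print(is_deterministic([0.2, 0.4, 0.6]))  # True for this simple placeholder
```

An AI system that samples stochastically, or that continues to update itself in service, may fail exactly this kind of check – which is the heart of the certification concern described above.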
2.3.2 Legal liability in the sky
Currently, the commercial airline industry has a well-established liability regime in the event of an accident. For international commercial carriers, under the Montreal Convention of 1999, the airline is presumed liable in the event of an accident, and for practical purposes there are no financial limits on that liability. This is true even when an ‘autopilot’ system has been engaged, since the pilots are still presumed to be in control. Given the certification process described above, it is unlikely that regulators will relax their approach to these new technologies when permitting their use in safety-critical applications, such as navigation in crowded airspace. Advances in technology, however, as well as a cultural shift toward greater acceptance of, and reliance upon, AI could lead to a change in the regulatory environment as well.
2.4 Motor industry
For many, the use of AI in the motor industry conjures an image of a self-driving car with voice and image recognition, ferrying a passenger who is no longer needed at the wheel. That image is already real in some places. Arizona, for example, has become a hotspot for autonomous vehicle (AV) testing, in part because of its dry climate and grid-like street systems. Residents of Phoenix have undoubtedly seen Waymo AVs zooming about. Waymo, Google parent Alphabet’s self-driving car (AV) company, operates cars readily identifiable by their black and white colors and large overhead sensors; they are just some of the many self-driving cars being tested. Since November 2022, anyone over 18 has been able to use Waymo’s ride-hailing service in downtown Phoenix and experience a ride with no one in the driver’s seat.
This is one of the more visible and dramatic uses of the technology. Behind the scenes, however, design and production teams are using AI for predictive modeling of sales data, and AI is critical to agile manufacturing processes (eg, AI-powered exoskeletons worn by product designers to test car safety and comfort).
2.4.1 Motor industry standards of care
Automotive safety is a major concern in the United States, especially since the automobile is such an important part of American life. The National Highway Traffic Safety Administration (NHTSA) publishes safety ratings of different makes and models. NHTSA has also adopted a classification system that distinguishes six levels of automated driving, from simple driver assistance programs that allow the car to brake or accelerate to fully autonomous self-driving vehicles.
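The six levels referenced are the SAE J3016 levels (0 through 5) used by NHTSA. As a simple illustration, the classification can be captured in an enumeration; the comment summaries below are paraphrased for illustration, not official definitions.

```python
from enum import IntEnum

class DrivingAutomationLevel(IntEnum):
    """SAE J3016 levels of driving automation, as referenced by NHTSA."""
    NO_AUTOMATION = 0           # human driver performs all driving tasks
    DRIVER_ASSISTANCE = 1       # a single assist feature, eg, adaptive cruise control
    PARTIAL_AUTOMATION = 2      # combined steering and speed control; driver monitors
    CONDITIONAL_AUTOMATION = 3  # system drives in limited conditions; driver takes over on request
    HIGH_AUTOMATION = 4         # no driver attention needed within a defined operating domain
    FULL_AUTOMATION = 5         # system drives under all conditions

print(DrivingAutomationLevel.PARTIAL_AUTOMATION.value)  # 2
```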
2.4.2 Legal liability on the road
As the driving of personal and commercial vehicles is automated (ie, as self-driving vehicles are introduced), liability for accidents is likely to move away from negligence on the driver’s part toward product liability for the manufacturer. A central question raised by the use of AI technology is: who is responsible for driving a self-driving car in the event of an accident? Fault and liability still need to be determined, but this is challenging when a road accident is caused by a self-driving car and no human driver is in control.
In the absence of a human driver, there is no individual who can be assessed as owing a duty of care in a negligence claim, so the legal analysis will shift to product liability (eg, manufacturing or design defects). As in other areas where AI is being implemented, many legal academics are concerned that the prospect of tort damages is likely to have a negative impact on investment and on the innovation required to get ‘driverless’ vehicles on the road.
A distinction must be made, however, between mere ‘driver-assist’ technologies and ‘driverless’ systems. In a driver-assist system, the human driver is still responsible for overriding the technology when it may cause harm to others. In at least one case, the driver was criminally charged for failing to do so. The driver pleaded no contest to two counts of vehicular manslaughter with gross negligence, was sentenced to probation, and will be ordered to pay restitution to the victim’s family. This case was the first criminal prosecution of a driver using a driver-assist system in the United States and sets a precedent for future cases.
On the road, self-driving vehicles will face unexpected obstacles: diversions because of road works, drivers who stop suddenly, or an unexpected incident involving another road user. In many applications of AI technologies, harm may not have been foreseeable, and because it is difficult to fully assess the reasonableness of a decision made by an AI system, assessing liability is especially challenging when even the programmers themselves may not be entirely sure how the system arrived at a particular conclusion. This can make it problematic to prove causation and foreseeability, fundamental requirements for the imposition of liability in tort law.
Section 3 – Managing what lies ahead
The current tort and negligence liability framework, founded on principles related to human-driven mistakes and injuries, is being tested by the introduction of AI. The new challenges facing regulators, corporations, and insurers include the fact that AI errors may occur without any human input, and the question of how standards of reasonable care can be reformed to address harm caused by machines. Any future framework providing for AI tort liability and the redress of harm will need to address the liability risks inherent in specific industry sectors without hindering the continuing evolution of this innovative technology.
Additional resources
Andrew D. Selbst, Negligence and AI’s Human Users, Boston University Law Review (2020)
Donna Vanderpool, The Standard of Care, Innovations in Clinical Neuroscience (2021)
Department of Justice’s Review of Section 230 of the Communications Decency Act of 1996
Jan L. Jacobowitz, In the Midst of Change, a Few Truths Remain – A Review of Trazenfeld and Jarvis’s Florida Legal Malpractice Law, St. Mary’s Journal on Legal Malpractice & Ethics (2020)
Related Lexology Pro content
How-to guides:
AI and smart contracts (USA)
Understanding the risk of negligence claims when using AI (USA)