Checklist: Drafting AI use contracts with third-party tech vendors (USA)

Updated as of: 05 September 2025

Introduction

This checklist will assist in-house counsel, private practitioners, and compliance personnel when negotiating and drafting AI use contracts, meaning contracts for the purchase, use, or license of AI software or systems. Businesses are increasingly turning to third-party vendors to implement AI solutions.

Imposing specific legal obligations and identifying the necessary contractual protections is critical from the outset to establish clear, ethical, and enforceable agreements. While contracts for AI use are generally subject to the same legal rules and principles as generic technology service contracts, the dynamic nature of AI and the absence of established market practice and of legislative and regulatory regimes mean there are unique risks and liabilities to consider when negotiating and drafting.

This checklist addresses the following steps:

  1. Terms of use
  2. Compliance and accountability
  3. Warranties
  4. Indemnities

It is presented as a list of steps that can be ticked off as they are reviewed. At the end of each step, there are explanatory notes and specific notes corresponding with each requirement in the checklist.

This checklist can be used in conjunction with the following: How-to guides: Understanding the risk of negligence claims when using AI, Understanding AI-driven risks, Checklist: Steps to mitigate risks associated with AI use in business and Quick views: Key AI terms and Overview of AI in business.

Step 1 – Terms of use

No.   Terms of use
1.1   Who are the parties?
1.2   What is the jurisdiction and governing law?
1.3   What is the duration of the contract?
1.4   What services will be provided?
1.5   What are the payment terms?
1.6   How may the contract be terminated?
1.7   Is there a survival clause?
1.8   What provisions are there for giving/receiving notice?

Step 2 – Compliance and accountability

No.   Compliance and accountability
2.1   Who is responsible for compliance?
2.2   Provide for transparency in AI decision-making
2.3   Monitoring and reporting responsibilities
2.4   What provisions are present for data protection?
2.5   Intellectual property ownership
2.6   Dispute resolution
2.7   Insurance coverage

Step 3 – Warranties

No.   Warranties
3.1   Performance warranties
3.2   Compliance warranties
3.3   Security warranty
3.4   Maintenance and support warranty
3.5   Warranty against pending third-party claims

Step 4 – Indemnities

No.   Indemnities
4.1   Assess which indemnity provisions you need
4.2   Indemnity for third-party claims
4.3   Indemnity for non-compliance with laws and regulations
4.4   Indemnification for breach of contract

Explanatory notes

AI is still a new application for many businesses, and use cases for AI, such as machine learning, natural language processing, robotics, and cognitive computing, are increasing rapidly. The fast-evolving nature of AI means that new legal issues are likely to arise. Given this uncertainty, drafting in this unique environment presents risks and challenges, and legal issues surrounding AI will often stem from poorly drafted contracts.

Initial considerations

The first consideration a potential AI customer must address is to define AI in relation to the contract. While that consideration may sound overly simplistic, AI is still unfamiliar technology to most people and defining ‘AI’ will outline the scope of the services. It is crucial to have at least some familiarity with the uses, limitations, and the jargon of AI. See Quick view: Key AI terms.

The second consideration is understanding how the AI will be used. This will impact the choice of contractual terms and definitions, and the business must understand how the AI technology it will use is going to work. Will it be used to perform repetitive manufacturing tasks, or for a more sophisticated purpose, such as predicting customer behavior? Is the task one that could be performed as well, if not better, by a live human? Is the use of AI subject to any industry-sector or regulated requirements? Does the business understand how this fits into operational processes and how decision-making outputs arise? Where is the training data coming from? AI is only as good as the model used to train it, and if that model is deficient, so too will be the output of that model.

The third consideration is one that should be addressed before any contract negotiation begins; namely, who is the vendor? Conduct due diligence and a risk assessment on the vendor, review its business track record, and check whether the company has faced any significant legal or reputational issues (eg, a data breach or negative publicity).

Given the relative newness of AI, a vendor may not have a meaningful track record or may not have had the time to build up a reputation. This may prompt a due diligence investigation of the principals involved with the vendor or an alternative approach could be to request references or reviews from other service users. Newness in an industry may not necessarily disqualify a potential vendor, but it is a factor to bear in mind when measuring risk.

Pre-contract considerations

Before contracting, the personnel involved in the negotiation should be familiar with the basic considerations and associated risks of AI including:

  • legal risks – such as potential IP claims arising;
  • business risks – such as outage or business interruption if the AI system fails; and
  • reputational risks – such as damage to the organization's brand.

AI use contracts are commercial contracts and the familiar concepts that are present in drafting any contract are present in an AI use contract. Issues such as performance warranties are present in any contract for services and should be at least discussed during contract negotiations.

Other factors however take on an added dimension with AI. Given the broad implications of the use or misuse of AI, an indemnification clause, for example, must be carefully drafted as to what losses or damages will be indemnified, bearing in mind that some of these losses or damages could come from third-party use. Disclosure requirements may apply as state laws are increasingly imposing obligations on entities using AI tools with their customers.

These considerations will influence the selection or drafting of particular terms, even if those considerations are not stated explicitly in the final agreement. For example, as the contract is negotiated and drafted, the current and foreseeable legal and regulatory environment must be kept in mind. Similarly, the overall costs of performing a contract, as well as the likely benefits of the contract – the return on the investment – should be reflected in the terms included in the agreement.

For detailed guidance, see How-to guides: How to effectively incorporate standard terms and conditions in a commercial agreement or transaction, How to draft a confidentiality agreement and confidentiality clauses, Understanding AI-driven risks and Checklist: What to consider to ensure a contract is valid.

Step 1 – Terms of use

Clear terms will outline the obligations, rights and expectations of all parties and mitigate against the risk of disputes arising from ambiguity. For additional guidance, see Checklist: Review of terms and conditions for the purchase of goods and services from the perspective of the buyer.

1.1 Who are the parties?

Identifying the parties to a contract is an essential first step. Consider who the contracting party is – identify the vendor and identify other third parties to whom all or part of the performance of the contract will be outsourced. Identify whether there are third-party terms and conditions that are applicable to the transaction. Undertaking the necessary due diligence on the parties with whom you are contracting is an important factor.

1.2 What is the jurisdiction and governing law?

Which laws will apply in the event of a contractual dispute is particularly important when the parties to a contract are not necessarily in the same jurisdiction. AI use contracts will often cross state, if not national, borders.

The choice of jurisdiction that will govern the agreement is largely a matter of the parties' preference. Generally, US courts will enforce a choice-of-law provision if the law chosen bears a reasonable relationship to the parties or to the transaction. For example, a clause in a contract between an AI tech vendor headquartered in California and its customer in Tennessee providing that California law will apply to any dispute would likely be enforced. There is an exception to the general rule if the chosen law violates the public policy of the forum state.

1.3 What is the duration of the contract?

Contractual terms should stipulate the beginning and end date of the contract (the ‘service period’). Establishing when the vendor is expected to begin providing the AI services, and when the user can start using them for their intended purpose, as well as setting out when the contractual relationship will end, is essential. Such terms will help to provide clarity on the scope of the engagement, thus allowing both parties to plan their operational and strategic activities and budget accordingly. 

1.3.1 Renewal and extension

A contract's renewal process can be a potential pitfall for users who are not vigilant; some contracts may include an auto-renewal feature that will continue to run unless express cancellation is issued within a specified notice period. Users should be well informed of their rights and obligations regarding renewal to avoid unintentional commitments and to retain control over their contractual engagements.

The parties should consider whether they wish to negotiate possible extensions to the contract, and these should be set out and mutually agreed to prevent ambiguity.

For further information, see Checklist: Drafting a business-to-business (B2B) contract with automatic renewals.

1.3.2 Suspension

Suspension clauses outline the circumstances under which the delivery of AI services may be temporarily paused. Such circumstances might include maintenance requirements, security breaches, or malfunction of the AI system. They might also include the user's failure to fulfill their contractual obligations. Such clauses protect both the user and the vendor by acknowledging the possibility of service disruptions, and by offering a clear and agreed upon path for dealing with those disruptions. A suspension clause should detail the process for notification of the suspension, the steps required to remedy the disruption that was the cause of the suspension, and how services may be reinstated. This ensures that both parties have plans or strategies for managing and mitigating interruptions in service, thus maintaining the integrity and reliability of the AI application. 

1.4 What services will be provided?

Setting out the services to be provided in detail is a protection for both parties. That includes a clear description of the AI tools or services that the vendor will provide (eg, model development, deployment, data labelling), as well as service level agreements (SLAs) covering matters such as uptime, response time, and model training frequency. Specifying the services ensures that the buyer knows what it will be getting (or is supposed to be getting). The seller will know what it is supposed to provide and will also know the upper limit of what it is supposed to provide. This could also be significant in determining the potential obligations under a performance warranty or an indemnification clause. Consider whether to build in how upgrades to the services will be managed, and flexibility should the business needs change.

1.5 What are the payment terms?

The amount and timing of the payments to the vendor will depend on all the factors set out above, including especially the services to be provided. For example, payments could include one-time fees, subscriptions, or payments based on usage. Additionally, the method of payment should be made explicit, including details regarding what happens in the event of late payments and whether interest will be payable on outstanding amounts. If the performance of the contract will cross national borders – if the vendor or any of its subcontractors are located overseas, or if an overseas-based subsidiary of the company is involved – the currency used for payment must be specified, and additional factors like foreign exchange rates and exchange controls may also need to be considered. For further information, see Checklist: International supply of goods contracts.

1.6 How may the contract be terminated?

Termination clauses:

  • set out the conditions for ending the agreement;
  • specify 'for cause' (in the event of breach or significant failures) and 'at will' provisions (where the contract can be terminated upon service of notice); and
  • outline post-termination rights and responsibilities.

The parties need time to prepare to manage the end of the contractual relationship, particularly if there is a necessary wind-down phase and transition required if, for example, switching to a new vendor, or changing approach and direction. Matters such as the costs, the notice period required, and how certain operational matters should be dealt with (eg, methods of retention of customer data or non-compete employee provisions) are likely to be discussed. Setting this out expressly ultimately protects against the risk of potential post-contractual disputes. For further guidance on termination, see Checklist: What to consider when terminating a contract.

1.7 Is there a survival clause?

In AI use contracts with third-party vendors, survival clauses are integrated within the terms to ensure that certain obligations persist beyond the termination or expiration of the agreement. These enduring provisions are crucial for protecting both parties' long-term interests and maintaining legal safeguards post-contract. Examples of such clauses are confidentiality provisions to protect information shared during the contract term (either indefinitely or for a specified period), or trade secret provisions to protect methods, designs or information classified as trade secrets which, if not kept confidential, could cause harm, or a loss of competitive advantage. The scope of these terms should be limited only to what is necessary, to avoid a ‘chilling’ effect on research and development. Such a clause may look like this:

Survival Clause

The parties agree that the following provisions shall survive and remain in full force and effect, regardless of the termination or expiration of this Agreement: Confidentiality, Intellectual Property Rights, Indemnification, Limitation of Liability, and Dispute Resolution. These clauses shall continue to bind the parties and their respective successors and assigns.

1.8 What provisions are there for giving/receiving notice?

Notice provisions provide clarity for certain events during the contract lifecycle and within a specified timeframe (eg, if there is an issue with the AI outputs). This clause explains how notice should be served (eg, by email or hard copy in writing), who should receive service, and how issues should be reported and resolved.

In the event of breach or a disruption in service, the clause is likely to provide a timeframe for the party at fault to remedy the issue (in the interests of preserving the contract and resolving the issue), considerations as to costs, and set out what is to happen if they do not.

Step 2 – Compliance and accountability

Compliance and accountability are vital components of AI use contracts with third-party tech vendors, particularly within the terms of use. The contract must set out the allocation of respective obligations and who is responsible for performance – this will help to identify and mitigate risks and ensure a smoother execution of the contract longer term.

Further discussion on the compliance issues surrounding AI is covered in the National Institute of Standards and Technology (NIST), AI Risk Management Framework.

2.1 Who is responsible for compliance?

Given the rapidly evolving laws and regulations, clearly defining who has responsibility for compliance is critical to prevent misunderstanding and to ensure that the parties know what is expected of them. This includes matters ranging from ensuring compliance with relevant laws and regulations to protecting the parties from legal disputes and potential penalties.

There should be clear lines of responsibilities between the parties to ensure they are aware of who is accountable for various aspects of compliance, such as data protection, non-discrimination, and transparency. While AI law and regulation is constantly evolving, ‘traditional’ laws and regulation still apply. If the product causes harm, then the purchaser of the services could face civil liability unless it is made clear that the liability is to fall on the vendor. See How-to guide: Understanding the risk of negligence claims when using AI.

The rapidly evolving nature of AI makes it likely that new laws, regulations, and court decisions will affect the performance of AI contracts. A provision regarding the steps to be taken after such a legal or regulatory change should be considered (including who is to be responsible for monitoring and assessing the impact of these changes). Note that vendors are likely to resist contractual terms that seek to impose all responsibility for monitoring legal and regulatory changes solely on them. On the other hand, absolving vendors of all responsibility in advance is a reckless strategy for the purchaser. This responsibility must be negotiated.  

2.2 Provide for transparency in AI decision-making

Transparency is key to fostering trust in an AI system, and is also essential for ensuring reliability of the system. Transparency calls for clear documentation and understanding of the algorithms, the sources of data used to train the system, and the decision-making processes built into the system. Transparency is essential not only for trust and ethical considerations, but also for regulatory compliance. The terms of the service agreement should ensure that the vendor provides the user with sufficient insight to verify confidently that the AI system adheres to relevant laws, standards, and ethical guidelines, and to assess the reliability of the AI system, especially in sectors where AI decisions have significant legal or personal impacts.

An AI vendor should be required to explain or be able to explain how the system works, how the system will be trained, and decision-making (especially in business-critical functions). A particular focus should be placed on the data that is to be collected for training: how is it obtained, and from whom? Who owns the data, and if not the vendor, what rights does the vendor have to use it? If the vendor owns the data, what can the vendor do with the data?

For further information, see Quick view: Key AI terms.

2.3 Monitoring and reporting responsibilities

A monitoring and reporting provision obliges the vendor to enable and support continuous oversight of the AI system's performance against measurable criteria, such as specified benchmarks and service levels, and generally incorporates misuse monitoring (eg, patterns of misuse after deployment and required actions). It builds in information sharing, early detection, incident reporting, and regular review of the systems. Regular monitoring will also identify whether there is a need to address ‘glitches’ or retrain the AI model. The terms of use should specify the metrics for performance, the tools or methods to be used for monitoring, and the frequency of reporting. This ongoing evaluation helps in the early detection of any deviations from expected performance. Given that misuse of an AI model can take different forms (eg, deceptive trading practices, deepfakes, or unauthorized data usage), such monitoring also mitigates the risk of misuse by bad actors, allowing for timely interventions and adjustments.
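The kind of benchmark comparison a monitoring clause contemplates can be sketched in a few lines of code. This is a minimal illustration only; the metric names and thresholds below are invented examples, not terms drawn from any actual agreement.

```python
# Illustrative sketch: checking reported AI system metrics against
# hypothetical contractual benchmarks and flagging deviations that could
# trigger a reporting obligation. All names and figures are examples.

CONTRACT_BENCHMARKS = {
    "uptime_pct": 99.5,       # minimum monthly availability (percent)
    "accuracy_pct": 95.0,     # minimum model accuracy on an agreed test set
    "p95_latency_ms": 500.0,  # maximum 95th-percentile response time
}

def flag_deviations(reported: dict) -> list:
    """Return human-readable notices for every benchmark not met."""
    issues = []
    for metric, threshold in CONTRACT_BENCHMARKS.items():
        value = reported.get(metric)
        if value is None:
            issues.append(f"{metric}: not reported")
        elif metric.endswith("_ms"):
            if value > threshold:  # latency metric: lower is better
                issues.append(f"{metric}: {value} exceeds limit {threshold}")
        elif value < threshold:    # percentage metric: higher is better
            issues.append(f"{metric}: {value} below minimum {threshold}")
    return issues

# A hypothetical monthly report with two benchmark failures:
report = {"uptime_pct": 99.7, "accuracy_pct": 93.8, "p95_latency_ms": 620.0}
for issue in flag_deviations(report):
    print(issue)
```

The point of the sketch is that each contractual metric needs a direction (higher-is-better or lower-is-better) and a precise threshold before any automated check, or reporting duty, can attach to it.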

In addition, consider whether a provision should be made to include a ‘human in the loop’ to oversee operations and report on progress and performance. This human oversight may be a useful safeguard for both parties. It is also an important consideration to ensure that operations continue to be conducted in an ethical manner. 

2.3.1 ‘Fail safe’

What will happen if the AI system fails? The contract should be explicit as to who is responsible for the costs of repairing the system. A vendor should also be required to have in place a back-up or fail-safe system so that the customer can keep its operations running while the malfunction is being repaired or improved.

2.4 What provisions are present for data protection?

AI requires massive amounts of data to function. The collection and storage of vast quantities of data raises the risk of a breach that will expose that data to other, perhaps malicious, users. It also raises questions around rights and obligations surrounding the data being inputted into the AI system (the prompts/inputs) and responsibilities for the data outputs being generated (the outputs). State laws regarding data security and breach notification place obligations on both the party who collects data and the party for whom the data is collected; thus, provisions for monitoring and providing notice of potential data protection issues are essential. It is prudent to consider what policies and procedures the vendor has in place for data protection and security, including how to deal with incidents arising.

Assurances should be obtained that the vendor has the authority to process the data used in training and testing. The vendor should likewise be obligated to ensure the quality and accuracy of data. The contract should include a description of the vendor’s comprehensive privacy protection frameworks and include mandatory breach notification.

For further information and guidance, see How-to guides: How to evaluate the effectiveness of a data security or data privacy compliance program and Incident response plan readiness and identification of a reportable data breach.

2.5 Intellectual property ownership

A detailed analysis of intellectual property (IP) related to AI is outside the scope of this checklist; however, all AI contracts should address IP considerations, including ownership of the system and its inputs and outputs. Lack of agreement may lead to disputes in the long term. Any license or limitation of rights should be spelled out clearly in the contract.

The contract should also specify who owns any derivative works or results from the operation of AI. Data used to train the system should come either from open sources, or from sources that have licensed content for the vendor to use. Vendors should provide evidence that they have a right to use any third-party technology by providing evidence of the license, together with representations and warranties demonstrating that they are acting in compliance with the terms of the license and have the authority to grant the rights under the contract. Indemnity protections for use of IP should also be considered to add protections in the event of third-party claims and resulting liabilities. See step 3, which explains this in more detail below.

For further guidance, see Checklist: Drafting a limited intellectual property license.

2.6 Dispute resolution

A clause calling for a particular dispute resolution method – usually, arbitration or mediation – may be a way of ensuring that disputes are resolved without undue delay or expense. However, the advantages and disadvantages of each method of dispute resolution should be assessed. Include clear timelines and notice periods so that the parties understand the triggers for initiating proceedings. A binding arbitration agreement will make any result reached by the arbitrator virtually the same as a court decision. As noted at step 1.2, specifying the location and choice of jurisdiction can also impact the parties regarding convenience and costs with respect to dispute resolution.

2.7 Insurance coverage

As with any commercial contract, the insurance aspects of the agreement need to be considered. This is especially important given the emerging tech of AI and considering whether the current insurance provider will underwrite risks associated with AI use. Assess what coverage is required, and how much. Who is responsible for obtaining the insurance? Who is the insured?

In a new and developing industry such as AI, many of the vendors and market participants may be start-up companies operated by relative newcomers to the business world. The financial or business stability of these vendors may not be especially strong. Insurance coverage should be obtained with an eye on what happens if the vendor folds, goes under, or just disappears.

Step 3 – Warranties

Warranties provide an assurance that certain standards of quality and performance will be met. These are essential considerations in commercial contracts as they can help to mitigate risks and provide remedies (eg, legal remedies if the obligations under the contract are not met).  

3.1 Performance warranties

Performance warranties provide the user with assurance that the service will meet certain predefined and agreed-upon standards. Examples of performance warranties include:

  • Fitness for purpose warranty – the vendor promises that the AI system will perform as advertised and will be adequate for the purpose for which it is procured. This protects the user by establishing that the vendor is bound to deliver an AI system that will function well.

  • Data accuracy warranty – the vendor provides assurances that the data used or produced by the AI system will be of a certain degree of accuracy, relevancy, and reliability. Such a warranty is crucial for users relying on AI outputs for critical business processes and decision-making.

  • Availability warranty – whereby a commitment from the vendor provides that the AI service will be accessible and operational for a defined percentage of time, barring scheduled maintenance or agreed upon exceptions. Such a warranty is essential for users for whom AI system uptime is critical to business operations, and it often includes details on service level agreements (SLAs) and potential remedies for system failure or compensation for downtime that exceeds an acceptable threshold.

Any warranty provision must include benchmarks and evaluation metrics for performance and accuracy. The use of vague terms such as ‘acceptable’ or ‘accurate’ provides little useful guidance for either party.
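The practical weight of an availability warranty's uptime percentage is easy to misjudge. The arithmetic below translates a stated uptime figure into the monthly downtime it actually permits; the percentages are illustrative examples only, not figures from any actual agreement.

```python
# Illustrative arithmetic: converting an availability warranty's uptime
# percentage into the maximum downtime it permits per month.
# All percentages shown are invented examples.

def allowed_downtime_minutes(uptime_pct: float, days_in_month: int = 30) -> float:
    """Maximum downtime (in minutes) consistent with the stated uptime %."""
    total_minutes = days_in_month * 24 * 60  # minutes in the service month
    return total_minutes * (1 - uptime_pct / 100)

for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% uptime allows about "
          f"{allowed_downtime_minutes(pct):.1f} min/month of downtime")
```

The difference between a ‘99%’ and a ‘99.9%’ commitment is roughly seven hours versus three-quarters of an hour of permitted downtime per month, which is why the precise percentage, and the remedies for exceeding it, deserve negotiation.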

3.2 Compliance warranties

Compliance warranties are assurances provided by the vendor that the AI product or service adheres to relevant laws, regulations, and industry standards. These warranties are crucial for the user to mitigate legal and regulatory risk, ensuring that the adoption of the AI system does not inadvertently lead to non-compliance with pertinent legal obligations, such as those related to data protection, privacy, or sector-specific regulations.

The non-infringement warranty is a key component of compliance warranties. It assures that the AI technology, including any related software or intellectual property used or provided by the vendor, does not infringe upon third-party intellectual property rights. This warranty is crucial. It provides protection for the user against legal and financial repercussions arising from claimed intellectual property violations, allowing the user to utilize the AI services without the threat of infringement litigation and being compelled to shoulder the cost of such disputes.

3.3 Security warranty

Security is an overarching concern when considering the adoption of AI systems. A security warranty provides assurance that the AI system complies with specified standards and protocols for cybersecurity. This warranty is essential for all users, since the effective use of an AI system calls for reliance on the system's integrity. The security warranty also provides a safeguard against unauthorized access, breaches, and other cyber threats. The security warranty typically includes guarantees that the AI system has been developed, and will be maintained, in accordance with the industry best practices for security, including practices such as regular updates and patches to address vulnerabilities. An effective security warranty will give users confidence that the vendor is committed to protecting the system against the evolving landscape of cyber risks.

3.4 Maintenance and support warranty

A maintenance and support warranty is a pledge that the AI system will receive ongoing technical support and regular maintenance during the life of the agreement, or beyond, as negotiated by the parties. This warranty gives the user assurance that the system will remain functional and up-to-date, and be optimized as needed over time, addressing the issues that will arise during the system's operation. It will typically cover the availability of customer service, response times for addressing technical issues, and the provision of updates to improve system performance and security. This type of warranty is essential for users to ensure the reliability and longevity of their AI investment.

3.5 Warranty against pending third-party claims

A vendor should be able to provide a warranty that it has the right to enter into the agreement. The vendor should also provide a warranty that there are no pending legal or equitable claims against it. If such a warranty cannot be provided, the vendor should disclose all such claims and the status of those claims.

Step 4 – Indemnities

4.1 Assess which indemnity provisions you need

Indemnity provisions play a crucial role in mitigating or preventing legal and financial harm arising from non-adherence to laws and regulations, and serve as a shield in contracts with third-party vendors. These clauses are particularly significant for intellectual property protection and data security, where the consequences of non-compliance can be particularly severe, resulting in heavy penalties and significant reputational damage.

4.1.1 Indemnification for intellectual property infringement

A comprehensive indemnity clause that covers IP rights is an essential element of AI use contracts. It acts as a safeguard for clients, protecting them from liability for any IP infringement claims that may arise from the use of the vendor's technology. Such a clause typically obliges the vendor to handle all legal defense requirements and to cover all resulting damages, including settlements or court-ordered payments. This not only provides financial protection for the client, but also ensures that they can continue to use the technology without interruption if the vendor is responsible for securing alternative solutions to problems or making modifications to address any intellectual property infringement issues.

4.1.2 Data security and privacy indemnity

Data breaches have become so common as to be virtually routine. In the present legal and business environment, a clause that provides for data security and privacy indemnification must be a non-negotiable part of any AI tech vendor contract. The indemnity holds the vendor accountable for breaches of data security and privacy laws, shielding the vendor's client from costs stemming from the vendor's failure to implement and maintain adequate safeguards for data. Indemnification should cover direct damages, such as fines or penalties imposed for non-compliance with data protection regulations such as Regulation (EU) 2016/679 - General Data Protection Regulation (GDPR) or the Health Insurance Portability and Accountability Act (HIPAA). Indemnification should also encompass indirect damages, including the costs associated with providing legally mandated notice to affected individuals, furnishing credit monitoring services, and repairing the damage to the client's reputation (eg, public relations consultants). Moreover, indemnification should require the vendor to take the necessary actions to prevent future breaches, providing additional security for the client's data assets.

These indemnity clauses allow a company to significantly reduce its risk exposure when contracting with third-party tech vendors. The indemnity for IP infringement allows the company to focus on its core business activities without worrying about time-consuming and expensive legal entanglements. Similarly, indemnities for data security and privacy provide protection against the ever-present threat of data breaches, insulating the client from both the immediate financial impacts and the long-term reputational damage that such incidents can inflict.

4.2 Indemnity for third-party claims

In AI use contracts, indemnity for third-party claims is a critical, and perhaps obvious, protective measure, shielding a party from the repercussions of legal actions initiated by entities not party to the contract. This indemnity provision is a common and essential feature when engaging with third-party tech vendors to ensure that the client is not unduly exposed to liabilities arising from the vendor's conduct or product performance. 

4.2.1 Indemnification for performance failures

Performance failure indemnification means that the vendor will be accountable for any shortcomings in the performance standards promised for its product or service. Should the system fail to operate as promised, the vendor is responsible for covering all related expenses, including legal fees, settlements, and judgments incurred by the client as a result of third-party claims against the client. This indemnification is an assurance that the vendor stands behind its product, will be able and ready to rectify issues that lead to performance shortfalls, and is prepared to bear the financial burden of any resulting third-party litigation. Performance failure indemnity not only protects the client financially but also helps to instill continued confidence in the reliability and professionalism of the vendor, thus fostering a business relationship based on mutual trust. An indemnification clause may contain the following language:

Performance Failure Indemnification

The Vendor will indemnify, defend, and hold harmless the Client against all claims, liabilities, damages, losses, and expenses arising out of or relating to any failure of the Vendor's product or service to meet the performance standards warranted in this Agreement. The Vendor expressly agrees that indemnification includes, but is not limited to, reasonable legal fees and litigation costs. Indemnification will extend to any third-party claims resulting from such performance deficiencies.

4.3 Indemnity for non-compliance with laws and regulations

Indemnity clauses for non-compliance with laws and regulations provide that, if a vendor fails to adhere to legal standards, the vendor will bear responsibility for any associated costs and legal ramifications. This indemnity is especially important in areas such as data protection, intellectual property, and export controls. It is important that the indemnity cover only costs and damages sustained by the client, and not those sustained by third parties, such as the client's customers. Extending the indemnity beyond the client could be regarded as an attempt to create an insurance policy outside the normal regulatory framework, and courts often refuse to enforce that type of agreement.

4.4 Indemnification for breach of contract

Indemnification clauses for breach of contract should be drafted to provide clear legal recourse for clients should the vendor fail to meet its contractual obligations, and to make clear the consequences of such a failure. Whatever the subject of the breach (service delivery standards, misuse of proprietary technology, confidentiality requirements, or agreed-upon timelines), the vendor is obliged to compensate the client for any direct and consequential losses incurred.

While such a clause is arguably superfluous, in that it does not create any obligations that are not already present in any contract, it may still help to emphasize the importance of full performance. It sets out clearly the consequences of a breach and offers a predefined path to remedy the situation. The clause prompts the vendor to maintain a high standard of performance and compliance, and makes the vendor aware that any deviation could carry significant financial implications.

In the rapidly evolving context of AI, the rules may be nebulous and outcomes unpredictable, which makes these clauses particularly important. They ensure that the vendor remains engaged in the continuous improvement of its offerings and responsive to any issues that might arise. By including comprehensive indemnification for breach of contract in AI use agreements, both parties can engage in the innovative and dynamic field of AI with greater confidence, aligned interests, and a clear understanding of their respective risks and protections. For further information, see How-to guide: How to draft and negotiate limitation of liability clauses.

Additional resources

Nyambura Kiarie, Understanding Training Data in Contracts with AI Vendors
Pamela Langham, Negotiating a vendor contract for AI tools

Related Lexology Pro content

How-to guides:

Understanding the risk of negligence claims when using AI
Understanding AI-driven risks

Checklist:

Steps to mitigate risks associated with AI use in business

Quick views:

Key AI terms
Overview of AI in business
