Introduction
This checklist sets out the practical considerations for commercial, in-house legal and compliance teams to address prior to using AI in commercial contract applications. Integration of AI tools is becoming more common; however, it is necessary to assess the benefits and understand the risks of the specific AI technology being used. This list is by no means exhaustive as the technology is evolving and legal and regulatory developments are fast-paced.
This checklist addresses the following steps:
- Assess AI capabilities and scope
- Consider organizational readiness and compliance
- Choose a third-party service provider
The checklist is presented as a list of requirements that can be checked off as they are addressed. At the end of each step there are explanatory notes corresponding with each requirement in the checklist.
It can be used in conjunction with the following How-to guides: Understanding AI-driven risks, Understanding the risk of negligence claims when using AI, How to effectively incorporate standard terms and conditions in a commercial agreement or transaction, and Checklists: Steps to mitigate risks associated with AI use in business, Drafting AI use contracts with third-party tech vendors and Quick views: Overview of AI in business, Key AI terms.
Step 1 – Assess AI capabilities and scope
| No. | Understanding AI capabilities |
| --- | --- |
| 1.1 | Assess the business need for AI |
| 1.2 | Consider the common types of contracts suitable for AI drafting |
| 1.3 | Assess the benefit of using AI for each type of contract |
| 1.4 | Consider other use cases for AI and automation |
Step 2 – Consider organizational readiness and compliance
| No. | Practical insights |
| --- | --- |
| 2.1 | Assess organizational readiness |
| 2.2 | Understand the legal and regulatory requirements of using AI |
| 2.3 | Research responsible AI adoption |
| 2.4 | Undertake a risk assessment |
Step 3 – Choose a third-party service provider
| No. | Key considerations |
| --- | --- |
| 3.1 | Research potential suppliers |
| 3.2 | Are there any industry-specific regulatory requirements? |
| 3.3 | Conduct due diligence |
| 3.4 | What are the contractual arrangements? |
Explanatory notes
General notes
While use of AI can help expedite contract drafting and management with fewer mistakes, organizations need to consider whether using AI is appropriate for their needs and weigh the commercial benefits and risks. Legal teams will always need to have oversight to mitigate risks, understand legal nuances, and provide context; this provides a layer of supervision and assurance. Decisions to use AI for commercial contracting require engagement with senior managers in the business and ‘buy-in’ at board level.
The scope and application of AI-powered tools can take several different forms and have varying levels of complexity. Commercial contracts can also be complex. Combining the two in practice can be revolutionary but embracing AI and automation requires careful consideration of both the capabilities and limitations, among other things. See Quick views: Overview of AI in business and Key AI terms.
Legal framework
In the United States, AI regulation is becoming increasingly sophisticated, mirroring the technology’s integration into various sectors. The National Institute of Standards and Technology (NIST) has developed an AI Risk Management Framework. The framework, while not a legally binding regulation, offers guidance to organizations for the responsible design, development, and deployment of AI. The NIST framework emphasizes the importance of trustworthiness, accountability, and transparency in AI systems. There is a collective move towards a balanced approach that fosters innovation while mitigating risks associated with AI technologies.
Despite numerous federal legislative proposals targeting private sector AI use, progress has been slow, with many legislators expressing their reluctance to impede technological progress through overregulation. In fact, the House of Representatives version of the ‘One Big Beautiful Bill’ passed in May 2025 originally included a ten-year moratorium that would have prevented states and localities from enforcing any law or regulation that limits AI models or systems that enter interstate commerce. The moratorium was removed from the final bill due to significant bipartisan opposition. In the absence of comprehensive federal legislation, individual states have taken the initiative to regulate AI systems. States like California, Colorado, Utah, and Texas have enacted their own laws, reflecting a decentralized approach to AI governance.
In May 2024, Utah became the first state to pass legislation on this matter – the Artificial Intelligence (AI) Policy Act (the AI Policy Act), 2024 Utah Laws chapter 186. The AI Policy Act regulates the way businesses and individuals use and disclose the use of generative AI in their business practices. Under the original 2024 version of the law, contracting parties in a commercial contract context were required to disclose interactions with generative AI if asked to do so. However, the Utah Legislature further updated the AI Policy Act in 2025 through the passage of two pieces of legislation, SB 226 and SB 332, which clarified and narrowed the instances when disclosures are required. Now, disclosures are required only when a consumer makes a ‘clear and unambiguous request’ to determine whether they are interacting with AI. For regulated occupations (those that require a state license or certification), the amendments only require disclosures in ‘high-risk’ AI interactions, defined as interactions involving sensitive personal information or interactions that affect significant personal decisions, such as medical or mental health decisions. Furthermore, the AI Policy Act prohibits businesses from avoiding consumer protection or fraud liability by claiming the generative AI made the statement that constituted a violation, undertook the act that was a violation, or was used in furtherance of the violation. See, Utah Code Section 13-75-105. Utah’s AI Policy Act is an example of state legislatures’ attempts to balance consumer protection against unduly burdening economic development.
Other states are following Utah’s example and enacting legislation to regulate AI. In Colorado, the Legislature has enacted comprehensive AI legislation. See, Colorado AI Act, Senate Bill 24-205. The Act was signed into law on May 17, 2024; however, many of its provisions will not take effect until February 1, 2026. The new law calls on developers of AI systems to use reasonable care to protect consumers from algorithmic discrimination, defined in the law as ‘any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected [by law]’. Colo Rev Stat section 6-1-1701. The new law also requires a deployer or other developer who makes available an artificial intelligence system that is intended to interact with consumers to ‘ensure the disclosure to each consumer who interacts with the artificial intelligence system that the consumer is interacting with an artificial intelligence system.’ Colo Rev Stat section 6-1-1704.
State laws regarding AI are rapidly changing. It is critical that organizations research the current laws in the state(s) in which they do business.
Other areas of the law that organizations should be aware of include privacy law, data protection requirements, general contracting and commercial laws, and intellectual property laws. For example, a company cannot unlawfully use another party’s intellectual property to train its own AI. High-profile lawsuits brought by The New York Times and the Authors Guild against OpenAI allege that copyrighted works were used without authorization to train AI models, and OpenAI and Google have separately been accused of harvesting data from YouTube videos in violation of intellectual property rights and YouTube’s usage terms. See, Authors Guild v OpenAI Inc, et al, Nos 23-cv-8292 and 23-cv-11195 (SDNY).
Step 1 – Assess AI capabilities and scope
1.1 Assess the business need for AI
An organization should consider if there is a business or industry-specific need to use AI as opposed to other methods of contract management. While AI offers many benefits such as efficiency and consistency, it can also bring errors, hallucinations, and other flaws within the programming. There may not be a commercial need to replace any systems or operating procedures that exist if they are already sufficiently efficient and effective.
If an organization believes it would benefit from the use of AI, it should consider how AI will work alongside existing operations and business objectives, and how it will benefit commercial contract management processes (eg, through cost or time savings or measurable efficiency gains).
1.2 Consider the common types of contracts suitable for AI drafting
AI drafting tools are best suited to contracts that have standardized formats and repetitive clauses, are of low to moderate complexity, and for which there are existing organizational drafting playbooks and guidance. When used to draft these common contracts or clauses that change infrequently, AI can complement legal work; handling routine tasks in this way allows the legal team to focus on the strategic negotiations that require more direct human involvement.
AI systems can be set up to recognize specific events in the contract lifecycle: the completion of a draft, the receipt of client feedback, or the approach of a negotiation deadline. Integrating the system with calendars and communication platforms allows it to provide real-time updates and reminders, keeping the team synchronized and proactively managing the contract timeline. This automation not only saves time but also enhances compliance and reduces the risk of missed opportunities or contractual breaches.
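As an illustration, the event-and-reminder logic described above can be sketched in a few lines of Python. The event types, contract identifiers, and lead window below are hypothetical examples, not features of any particular AI platform.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical contract lifecycle events; names and dates are illustrative only.
@dataclass
class ContractEvent:
    contract_id: str
    event_type: str      # e.g. "draft_complete", "negotiation_deadline"
    due_date: date

def upcoming_reminders(events, today, lead_days=7):
    """Return reminder messages for events falling due within the lead window."""
    reminders = []
    for ev in events:
        days_left = (ev.due_date - today).days
        if 0 <= days_left <= lead_days:
            reminders.append(
                f"{ev.contract_id}: {ev.event_type} due in {days_left} day(s)"
            )
    return reminders

events = [
    ContractEvent("NDA-014", "negotiation_deadline", date(2025, 3, 10)),
    ContractEvent("SLA-202", "renewal_decision", date(2025, 6, 1)),
]
print(upcoming_reminders(events, today=date(2025, 3, 5)))
```

In practice, a system like this would be fed by the calendar and communication integrations described above, and the messages would be routed to the responsible team members rather than printed.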
1.2.1 Non-disclosure agreements (NDAs)
NDAs are standardized contracts for the protection of confidential information. They often contain repetitive standard clauses for items such as the definition of confidential information, the duration of secrecy, and the obligations of the party receiving the confidential information. AI can draft these agreements quickly, adapting existing templates to the specifics of a new agreement. As the general requirements for an NDA do not change, the base template that an AI will use will not change, reducing the chance for error as there is less for the AI to modify in each iteration.
For further information on NDAs, see How-to guides: How to protect trade secrets in the employment relationship and How to draft a confidentiality agreement and confidentiality clauses.
1.2.2 Service level agreements (SLAs)
SLAs define the level of service a provider will offer to a client. They typically include standardized clauses regarding service quality, availability, responsibilities of the parties, and penalties for non-compliance. AI systems can generate these contracts based on predefined performance standards and metrics. Because the clauses are standardized, an AI system can quickly add only the unique terms a new contract needs while always including the baseline metrics the company holds its partners to, ensuring the terms reflect the company’s interests as directed.
For further information on SLAs, see How-to guide: How to draft a supply of services contract and Checklist: Assessing whether standard terms and conditions should be used for the supply of goods and services.
1.2.3 Employment contracts
These contracts stipulate the terms of employment, including salary, job duties, and disciplinary and termination conditions. Employment contracts, especially in large organizations, tend to share common clauses and have similar or identical structures, making them especially suitable for AI drafting once the agreement has been tailored to reflect company policies and applicable labor laws. When a company uses AI to draft its employment contracts, the terms required by state and federal law are likely to be included automatically because they are built into the AI programming. It remains prudent to check the accuracy of these terms before any contract is issued.
For further details on employment contracts, see How-to guides: How to draft an employment contract and How to use arbitration agreements in employment.
1.2.4 Lease agreements
Lease agreements for commercial equipment or real estate often include standardized sections on payments, term length, renewal options, and maintenance obligations. AI can streamline the drafting process by customizing these sections according to pre-existing templates, and using standard elements as appropriate.
1.2.5 Purchase orders
Purchase orders are formal agreements to purchase specific types of goods or services. Given their repetitive nature and the frequent use of standard terms and conditions, AI can efficiently generate purchase orders by filling in product details, quantities, prices, and delivery terms, saving the organization the time of completing each template individually.
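A minimal sketch of this template-filling approach follows. The template text and field names are assumptions for illustration, not a real organizational form; an AI system would extract the field values from order data rather than receive them hard-coded.

```python
from string import Template

# Hypothetical standard purchase order form; template text and field names
# are illustrative only.
PO_TEMPLATE = Template(
    "PURCHASE ORDER $po_number\n"
    "Supplier: $supplier\n"
    "Item: $item | Qty: $qty | Unit price: USD $unit_price\n"
    "Delivery terms: $delivery_terms"
)

def draft_purchase_order(order):
    """Fill the standard template with the details of one order."""
    return PO_TEMPLATE.substitute(order)

po = draft_purchase_order({
    "po_number": "PO-1042",
    "supplier": "Acme Supplies",
    "item": "Widget, model B",
    "qty": 100,
    "unit_price": "2.50",
    "delivery_terms": "FOB destination, net 30",
})
print(po)
```

The value of the AI layer lies in populating such templates automatically from source data, so staff review the output rather than complete each form by hand.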
For further information on purchase orders, see Checklist: Review of terms and conditions for the purchase of goods and services from the perspective of the buyer.
1.2.6 Software licensing agreements
These contracts often contain repetitive clauses on license scope, restrictions, support, updates, and intellectual property rights. AI can draft these agreements by modifying standard templates to suit different software products and licensing models. As licensing agreements can become very granular, AI can also be greatly beneficial in not only rapidly drafting the standardized template clauses but also ensuring that the clauses do not contradict each other.
For further information on software licensing, see Checklist: Drafting a limited intellectual property license.
1.2.7 Supply agreements
These agreements typically include clauses on delivery schedules, pricing, payment terms, and quality standards. AI can draft supply agreements based on a company’s standard terms and existing supply chain relationships. If desired, the AI can also be programmed to review the organization’s history with a specific client to assess the nature of the relationship and check to make sure the terms do not deviate from their standard practice, unless specifically negotiated.
For further information on supply agreements, see How-to guide: How to draft a supply of services contract.
1.2.8 Sales contracts
Contracts for the sale of goods tend to include standardized terms regarding delivery, risk of loss, warranties, payment terms, and options with regard to non-performance. AI can customize sales contracts based on a playbook developed according to the organization’s typical sales scenarios and legal considerations. These contracts are ideal candidates for AI use because these standard terms can be applied differently depending on the sale scenario, and the AI can quickly adjust these terms when trained to do so.
Note that while AI can handle the initial drafting, human oversight is necessary to review the contract in the context of the company’s culture and past dealings, looking for nuances and any unique aspects that may not be captured within the standard templates and playbooks.
For further information on sales contracts, see Checklists: International supply of goods contracts and Delivery and acceptance of goods in a business-to-business sale of goods contract.
1.3 Assess the benefit of using AI for each type of contract
Simply because a contract can be drafted using AI does not mean that, as a practical matter, it should be. The benefits to the organization should be carefully considered, the potential risks analyzed, and a clear statement of what is proposed set out.
1.3.1 Are contracts already automated?
Contracts that are already automated provide an ideal foundation for AI drafting. AI can handle variations and customizations within the automated framework, making it possible to quickly produce tailored contracts that meet specific needs.
Automation implies a level of standardization and digitization of the contract process and assumes a level of capability in-house. This is likely to save time and costs. As noted above, since automated contracts typically involve repetitive clauses and structured formats, AI can further refine the process by learning from existing contracts to draft new ones with greater efficiency and fewer errors.
1.3.2 Can you streamline administrative tasks?
Administrative tasks can be time-consuming and often involve repetitive and pattern-based activities. For example, AI can be trained to recognize patterns and pull together key information and data from different sources.
Through machine learning algorithms, AI systems can learn to categorize information, summarize content (especially lengthy or complex content), and suggest edits to make documents clear and professional. The most important consideration is how AI will streamline the process, which involves balancing the risks and deciding whether the advantages of using AI will be worth those risks, especially when compared to manual processing.
1.4 Consider other use cases for AI and automation
1.4.1 Analysis and monitoring of performance metrics
AI systems can be configured to conduct automatic analyses of performance metrics and contractual obligations. When certain pre-defined milestones are reached, the system can, without additional prompting or instruction, notify the relevant parties, prompt their review, or initiate follow-up actions. Reminders can be set for upcoming obligations or expiring contracts, ensuring that nothing falls through the cracks.
AI algorithms can also monitor contract performance in real time. The system can compare actual outcomes with predefined benchmarks or key performance indicators (KPIs), which might include delivery times, service levels, quality standards, or financial metrics. AI can process real-time data and provide instant insights into these indicators, helping ensure that all parties adhere to the agreed-upon terms. Automated dashboards can visualize the data, making it easy for contract managers to understand performance and take action promptly if KPIs are not being met.
When a performance metric hits a certain threshold, or when a milestone is achieved, the AI system can trigger customized notifications to be sent to the relevant stakeholders. These alerts can be sent through various different channels of communication, such as email, SMS, or in-app notifications, ensuring timely and targeted communication.
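A simplified version of this threshold-checking logic might look as follows. The KPI names, targets, and alert format are assumptions for illustration; a production system would route the alerts through the email, SMS, or in-app channels mentioned above rather than return strings.

```python
# Hedged sketch of KPI threshold monitoring, assuming each KPI carries an
# agreed target and a direction: "max" means actual must not exceed the
# target, "min" means it must not fall below it. All values are illustrative.

def check_kpis(kpis):
    """Return alert messages for every KPI that misses its benchmark."""
    alerts = []
    for name, spec in kpis.items():
        actual, target = spec["actual"], spec["target"]
        breached = actual > target if spec["direction"] == "max" else actual < target
        if breached:
            alerts.append(f"ALERT {name}: actual {actual} vs target {target}")
    return alerts

kpis = {
    "delivery_time_days": {"actual": 6, "target": 5, "direction": "max"},
    "uptime_pct":         {"actual": 99.95, "target": 99.9, "direction": "min"},
}
print(check_kpis(kpis))
```

Run on a schedule against live performance data, a check like this is what would drive the dashboards and stakeholder notifications described above.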
1.4.2 Identify potential breaches
Identifying potential breaches through AI offers a proactive approach to mitigating risk. AI systems, trained with appropriate machine learning and natural language processing, can continuously scan contract terms and obligations and compare them against performance data and operational metrics. AI can detect anomalies, deviations, and non-compliance issues that may signal a breach.
When an AI system identifies a potential breach, the system can alert contract managers or legal teams immediately, allowing them to take swift action as needed. This action might include initiating a review process, documenting the issue, and communicating with the concerned party to enable them to rectify the situation before it escalates. AI’s ability to conduct a rapid analysis of vast amounts of data means that such risks can be spotted early, minimizing legal repercussions, and maintaining the integrity of the business relationship. This proactive surveillance by AI not only prevents potential breaches but also reinforces due diligence in contract management.
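As a hedged sketch of the kind of anomaly detection described above, the example below flags performance readings that deviate sharply from a historical baseline. Real systems would use far richer models; the data, metric, and threshold here are illustrative only.

```python
from statistics import mean, stdev

def flag_anomalies(history, recent, z_threshold=3.0):
    """Flag recent readings more than z_threshold standard deviations
    from the historical mean (a simple z-score test)."""
    mu, sigma = mean(history), stdev(history)
    return [x for x in recent if sigma and abs(x - mu) / sigma > z_threshold]

# Illustrative data: delivery times (in days) under a supply agreement.
history = [3, 4, 3, 5, 4, 3, 4, 4, 3, 4]
recent = [4, 3, 12]   # 12 days is far outside the usual range
print(flag_anomalies(history, recent))
```

A flagged reading would then trigger the review, documentation, and communication steps described above, so the issue can be addressed before it becomes a breach.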
Step 2 – Consider organizational readiness and compliance
2.1 Assess organizational readiness
Considering how AI usage and deployment is going to be integrated alongside legacy IT systems and commercial contracting practices is an important factor. Clearly setting out the strategy and how the deployment will roll out across the business is critical. How does the project fit with the overall business strategy? What are the costs of implementation, onboarding, and the ongoing deployment costs (eg, for training and maintenance)?
It is necessary to identify the in-house team that will have project management responsibility. This team is likely to be made up of a diverse group of legal, commercial, IT, HR, and compliance personnel.
Establish clear lines of reporting, escalation points, and accountability. Consider governance frameworks around the contract drafting and review management processes, including how the contracts will be reviewed by legal, who will have ultimate sign-off authority and responsibility for governance (at the executive level and across the legal and compliance function), and who is responsible for keeping track of legal and regulatory developments.
Exercise caution with intellectual property rights, unlawful bias and discrimination, and regulatory compliance for data privacy and security (particularly when generating contracts that contain personally identifiable information); understand cyber breach preparedness and have robust processes to guard against future threats. For further information, see How-to guides: Understanding AI-driven risks, How to use AI to enhance Diversity, Equity and Inclusion (DEI), Risks and liabilities of AI algorithmic bias and Checklists: Steps to mitigate risks associated with AI use in business and De-identification of data used by AI systems.
Consider too whether additional insurance policies will be necessary, or whether there is a duty to notify the current insurer.
To help guide its readiness assessment, an organization can refer to advice given by the Federal Trade Commission (FTC) about protecting personal information. Although rescinded by President Donald Trump in 2025, the Biden Administration’s 2023 Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence also merits attention, as it provides a framework for understanding what topics matter to law enforcement and may shape future legislation in this area. Although the Trump Administration’s Executive Order shifted the focus toward deregulation and acceleration of AI development, states may still enact legislation establishing guardrails and protections for AI development and use similar to those in the Biden Administration’s order.
In January 2025, DeepSeek, an AI start-up from China, made headlines with its claims that its latest large language model, DeepSeek-R1, could match the performance of more expensive, market-leading models while using less than $6 million in computing power by relying on older, less powerful chips. By late January of 2025, the DeepSeek app was the most downloaded app in the United States. The rapid rise of DeepSeek, coupled with its Chinese ownership, sparked data security concerns among federal and state officials. These concerns mirror those that led to a Federal Acquisition Rule (FAR) banning telecommunications equipment and services from Huawei and other Chinese manufacturers. See, FAR 52.204-25, Prohibition on Contracting for Certain Telecommunications and Video Surveillance Services or Equipment. Notably, officials at various levels of government, including the White House, Congress, federal agencies, and state governments, responded swiftly to DeepSeek and its potential national security risks.
On February 7, 2025, Representatives Gottheimer and LaHood introduced the No DeepSeek on Government Devices Act (HR 1121). Both representatives serve on the House Permanent Select Committee on Intelligence, and they stated that they had serious national security concerns about DeepSeek. The ban would prohibit the use of ‘the DeepSeek application or any successor application or service developed or provided by High Flyer or an entity owned by High Flyer.’ The bill would direct the Director of the Office of Management and Budget, in consultation with the Administrator of General Services, the Director of the Cybersecurity and Infrastructure Security Agency, the Director of National Intelligence, and the Secretary of Defense to develop ‘standards and guidelines for executive agencies that require the removal of any covered application from information technology.’ Contractors may have questions about the scope, including which devices are covered and the consequences of failure to remove the app from a device.
Several broader legislative efforts targeting Chinese access to AI and technology may also impact DeepSeek. On January 29, 2025, Senator Josh Hawley introduced a comprehensive U.S.-China AI Decoupling Bill, which could ban DeepSeek’s AI models in the US if passed.
On February 7, Representative Mark Green introduced the China Technology Transfer Control Act (HR 1122). The bill directs the President to control the export, re-export, or transfer of ‘covered national interest technology or intellectual property’ to China.
2.1.1 Evaluate the extent of existing contract automation and necessity for AI
Organizations with a history of using digital tools for data management or electronic contract lifecycle management are likely better prepared for AI integration. Existing in-house technology networks and IT teams have the infrastructure and experience of rolling out projects. IT teams (together with other business personnel) will have to run systems checks on compatibility and accuracy as any new AI system is still likely to require testing and fine-tuning prior to full implementation. If AI is being combined with existing software, teams will need to have the rights to do this and ensure the software is appropriate for the intended use.
After evaluating the existing automated systems within the organization, further assessments can be made about where AI can be used and what specific functions and use cases are required. A clear understanding of what contract automation processes already exist is needed to efficiently incorporate new technologies and avoid redundancies. Even after adoption, the parties will have to work out a framework for review and ‘ongoing training’ of the system. Resourcing the project over the short term and making provisions for associated costs in the longer term is another consideration. This includes financial and non-financial costs, including staff training, training of the AI system, and ongoing maintenance costs to deal with ‘glitches’ in the system.
2.1.2 Technological savvy
As noted at 2.1.1 above, a business with an IT department and infrastructure already in place will prove to be more capable of transitioning to AI-powered systems. Training programs may be necessary for employees who are less familiar with advanced technologies, to ensure that these employees can effectively interact with new AI tools and understand the legal implications of AI going wrong and the importance of compliance with risk management frameworks.
Training programs should not only be provided upon induction but should be kept under review so that guardrails are in place to highlight developments on a regular basis. This training should not be limited to employees whose day-to-day duties involve use of the AI system, but should extend to the senior executive leadership team and board of directors or governors. Existing HR policies should be reviewed and enhanced to include policies on acceptable workplace AI deployment and use.
2.1.3 Appropriateness of AI solutions
AI tools should not be adopted just for the sake of adopting the new technology. The tools must align with the specific contract management needs of the organization. Factors like contract volume, complexity, and the need for customized solutions should guide the organization’s selection of AI applications. If AI is adopted, ongoing cost-benefit analysis will help assess whether the investment in AI was justified by sufficient improvements in efficiency and accuracy, or if the AI needs to be modified or discarded. This analysis should consider not just how the AI performed its individual task, but whether prolonged use meets the organization’s objectives. For example, if the AI uses an excess of energy, an organization should consider if the cost or excessive consumption of that energy aligns with the organization’s existing sustainability goals.
2.2 Understand the legal and regulatory requirements of using AI
Using AI in contracts touches upon diverse legal domains. The common law of contracts will always be a concern, but AI also poses questions regarding the permissible use of intellectual property and data privacy laws (such as the California Consumer Privacy Act). For instance, the enforceability of contracts generated or executed by AI systems may be scrutinized under traditional legal principles. Ensuring AI outputs align with the Uniform Electronic Transactions Act (UETA) and the Electronic Signatures in Global and National Commerce Act (E-SIGN Act), 15 USC Ch 96, is also essential.
The use of AI in an organization’s decision-making processes raises questions about bias and fairness. For example, even the inadvertent incorporation of discriminatory practices resulting from biased AI algorithms could expose organizations to liability under anti-discrimination laws.
AI is still a relatively new technology, and the legal environment may be expected to change rapidly. To stay abreast of this evolving legal landscape, organizations should establish systems and processes, including the appointment of personnel responsible for horizon scanning. This scanning involves continuously monitoring legislative developments, regulatory guidance, and relevant case law for developments that could affect how AI is utilized in contracts and assessing the materiality of incoming changes. The horizon scanners should make reports to management as needed, and should also be equipped to set out in detail how a change may affect the organization.
2.3 Research responsible AI adoption
Responsible commercial AI adoption demands careful consideration of the ethical issues. The ethical issues encompass the principles of transparency, accountability, and fairness. AI systems should be developed to be transparent in their operations, so that users may understand how decisions are made. Accountability is also crucial: organizations must understand that they are to take responsibility for the AI system's actions, including any errors or unintended consequences and consider who is responsible if an AI-generated contract goes wrong. Moreover, fairness is a core tenet, requiring AI systems to operate without bias, thus preventing discriminatory practices.
The adoption of AI in commercial contracts requires a framework that respects both legal standards and societal norms. An organization needs to be able to explain how the AI program is designed and operates and be prepared to correct any inaccuracies or hallucinations in the AI.
For example, the organization should regularly audit the outputs the AI is generating to test for accuracy. Additionally, the organization can designate a response team to handle consumer complaints regarding the use of AI and deal with them promptly. Organizations could also look to general guidance from respected academic sources such as the Harvard Law School Forum on Corporate Governance (see Artificial Intelligence: An engagement guide) or to how federal agencies are establishing governance structures around the use of AI.
For more information and guidance, see How-to guides: Understanding AI-driven risks, Risks and liabilities of AI algorithmic bias and Understanding the risk of negligence claims when using AI, AI and smart contracts and Checklists: Steps to mitigate risks associated with AI use in business and De-identification of data used by AI systems.
2.4 Undertake a risk assessment
Conducting an assessment of the various risks associated with AI (eg, compliance and regulatory requirements and responsible AI usage) is a necessary step. Proactively identifying risks and vulnerabilities will help protect the business and enhance its credibility, as the business can demonstrate that it has considered and addressed possible exposures.
Step 3 – Choosing a third-party service provider
3.1 Research potential suppliers
It is essential to research the suitability of the prospective AI third-party vendor. Set out and communicate clearly what the goals and scope of the AI project are and the requirements of the technology involved, and discuss specific timeframes for delivery. Does the vendor own the solution being sold, or does it outsource to third-party contractors? If the latter, consider additional due diligence on those contractors (and whether any of their terms and conditions will need to be reviewed and/or integrated).
Researching potential service providers' backgrounds will include checking their technical skills and their experience in delivering similar projects, ideally within the same industry. It may also be prudent to obtain references from previous clients or to conduct desk-based industry research.
Consider how the service provider will handle communications and reporting, and whether its approach is compatible with the existing team. All those involved in the project should communicate regularly and frequently to ensure that actions are tracked and issues are escalated.
3.2 Are there any industry-specific regulatory requirements?
Consider whether there are any AI restrictions and/or necessary industry-specific audits or measures. This level of scrutiny is particularly critical in sectors such as healthcare and financial services, where regulatory considerations demand the strictest level of compliance and strong confidentiality measures. For example, healthcare providers and their business associates must ensure that AI solutions comply with patient privacy laws such as HIPAA and state privacy laws. Similarly, financial services companies must navigate the complex landscape of financial regulations such as the Dodd-Frank Act, ensuring that their AI applications are not just efficient but also fully compliant with industry standards.
3.3 Conduct due diligence
Consider providing a due diligence questionnaire to potential suppliers to inform decision-making. A vendor that is transparent and has stringent compliance procedures and processes will help build trust in the relationship. The questionnaire will help to expose any operational and legal risks of collaboration and identify measures to mitigate them (as necessary). Set out the purpose of the request clearly, and request supporting documentation and references (where necessary). Consider whether any specific searches on the supplier need to be undertaken, and consult with the team that will have ownership of the project across legal, finance, HR, IT, and other functions (as necessary).
3.4 What are the contractual arrangements?
Consider the contractual arrangements to be put in place between the parties. It may be prudent to liaise with specialist legal or AI experts to advise on the specific contractual arrangements and safeguards that should be put in place.
When outsourcing AI solutions, the third-party contract will extend into several critical areas, and legal teams should be involved early in the process to manage the risks. Early involvement will also help them understand the overall contract framework, set out expectations around metrics and standards, and prompt discussions on risk and liability.
For further information, see How-to guides: How to draft and negotiate limitation of liability clauses and Understanding the risk of negligence claims when using AI, AI and smart contracts and Checklist: Drafting AI use contracts with third-party tech vendors.
Additional resources
Related Lexology Pro content
How-to guides:
Understanding AI-driven risks
Risks and liabilities of AI algorithmic bias
Understanding the risk of negligence claims when using AI
AI and smart contracts
How to effectively incorporate standard terms and conditions in a commercial agreement or transaction
Checklists:
Steps to mitigate risks associated with AI use in business
De-identification of data used by AI systems
Drafting AI use contracts with third-party tech vendors
Quick views:
Noteworthy cases and enforcement actions involving AI
Overview of AI in business
Key AI terms
Reliance on information posted:
While we use reasonable endeavours to provide up-to-date and relevant materials, the materials posted on our site are not intended to amount to advice on which reliance should be placed. They may not reflect recent changes in the law and are not intended to constitute a definitive or complete statement of the law. You may use them to stay up to date with legal developments, but you should not use them for transactions or legal advice, and you should carry out your own research. We therefore disclaim all liability and responsibility arising from any reliance placed on such materials by any visitor to our site, or by anyone who may be informed of any of their contents.