Checklist: Steps to mitigate risks associated with AI use in business (USA)

Updated as of: 31 July 2025

Introduction

This checklist will assist in-house counsel, private practitioners, and compliance personnel with mitigating the risks that are, or that may be, associated with the use of artificial intelligence (AI) systems in their business. The use of AI has spread rapidly and mitigation of risks is essential for safe and responsible deployment.

This checklist addresses the following steps:

  1. Adopt risk management practices
  2. Include data protection measures
  3. Review ethical considerations

This checklist can be used in conjunction with the following How-to guides: Understanding AI-driven risks, Understanding the risk of negligence claims when using AI, and Risks and liabilities of AI algorithmic bias; Checklist: De-identification of data used by AI systems; and Quick views: Overview of AI in business and Key AI terms.

Step 1 – Adopt risk management practices

No. | Adopting risk management practices
1.1 | Determine areas of potential risk
1.2 | Has the data been validated?
1.3 | Has the model been tested?
1.4 | Is there human oversight (‘Human in the Loop,’ or HITL)?
1.5 | Is there provision for stopping or pausing the use of the system?
1.6 | Is there an emergency response plan?
1.7 | Train relevant personnel in risk management
1.8 | Schedule regular audits

Step 2 – Include data protection measures

No. | Data protection measures
2.1 | Has the data used been encrypted or anonymized?
2.2 | Have access control measures been implemented?
2.3 | Are regular vulnerability assessments included?
2.4 | Are the data protection measures compliant with applicable laws?

Step 3 – Review ethical considerations

No. | Reviewing ethical considerations
3.1 | Is the system fair?
3.2 | Is there accountability for the development and operation of the system?

Explanatory notes

Overview

The use of artificial intelligence (AI) is burgeoning rapidly. Despite being a valuable tool, it does, however, pose risks for the unwary or ill-prepared. Some of these risks may be unknown or not yet completely understood. While the field is still developing, especially regarding the legal limitations and consequences of its use, there are still steps a business may take to mitigate the potential risks that may arise from AI use or adoption.

AI refers to the capability of a machine to perform functions normally associated with human intelligence, such as reasoning, learning, and self-improvement. The autocorrect function on a word processing program or text messaging platform is a familiar example of AI in application. AI systems are becoming more complex, extending into areas such as contract administration and performance monitoring.

Legal framework

The law surrounding AI is still under development. In the immediate future, it is likely that in the absence of legislative action, the courts will attempt to make existing legal rules fit with AI, just as the common law has done with the adoption of any new technology. However, the nature of AI, with its limited human involvement, means that the existing rubrics of the common law may prove inadequate. Legislative action will likely be necessary.

US federal policymakers have begun to take an interest in AI, and several frameworks and guidelines have been developed to regulate its use, including:

  • The White House Executive Order on AI: titled ‘Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.’ This order targets multiple sectors and emphasizes the need to mitigate significant risks associated with AI while harnessing its benefits. It focuses on federal agencies and developers of foundational models, mandates the creation of federal standards, and requires the developers of powerful AI systems to share safety test results and other vital information with the US government. The order also instructs the Department of Commerce to provide guidance on content authentication and watermarking for AI-generated materials. However, the Trump Administration revoked this Executive Order on January 20, 2025.
  • The White House Memorandum on Advancing the United States’ Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence relates to the effective utilization of AI models and AI-driven technologies within the United States Government, particularly regarding national security systems (NSS). This includes safeguarding human rights, civil rights, civil liberties, privacy, and safety in AI-related national security operations. A classified annex accompanying this memorandum covers further sensitive national security matters, such as countering adversarial uses of AI that threaten the national security of the United States.
  • The White House Blueprint for an AI Bill of Rights: this document offers guidance on ensuring equitable access and utilization of AI systems, outlining five principles and associated practices to inform the design, use, and deployment of ‘automated systems’. These principles address safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives and considerations.
  • Voluntary Commitments from Leading AI Companies: major companies such as Adobe, Amazon, Anthropic, Cohere, Google, IBM, Inflection, Meta, Microsoft, Nvidia, OpenAI, Palantir, Salesforce, Scale AI, and Stability AI have pledged to promote the safe, secure, and transparent development of AI technology. These commitments include conducting internal and external security testing of AI systems prior to release, sharing information on AI risk management, and investing in safeguards.
  • Federal Communications Commission (FCC) Ruling: the FCC has clarified that the restrictions on ‘artificial or pre-recorded voice’ messages under the Telephone Consumer Protection Act of 1991 extend to AI technologies capable of generating human-like voices, indicating that regulatory bodies will apply existing laws to AI.
  • Federal Trade Commission (FTC) Approach: the FTC has adopted a proactive stance in utilizing its existing authority to regulate AI. Recently, it warned market participants that using AI tools with discriminatory effects, making unsubstantiated claims about AI, or deploying AI without proper risk assessment could violate the FTC Act. The FTC has already taken action against companies that misled or harmed consumers through AI, including banning Rite Aid from using AI facial recognition technology without appropriate safeguards.

On January 23, 2025, the White House issued the Executive Order titled ‘Removing Barriers to American Leadership in Artificial Intelligence.’ The Order revokes ‘certain existing AI policies and directives that act as barriers to American AI innovation, clearing a path for the United States to act decisively to retain global leadership in artificial intelligence.’ The Order states that ‘[i]t is the policy of the United States to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.’ Within 180 days of the order (ie, no later than July 22, 2025), ‘the Assistant to the President for Science and Technology (APST), the Special Advisor for AI and Crypto, and the Assistant to the President for National Security Affairs (APNSA), in coordination with the Assistant to the President for Economic Policy, the Assistant to the President for Domestic Policy, the Director of the Office of Management and Budget (OMB Director), and the heads of such executive departments and agencies (agencies) as the APST and APNSA deem relevant, shall develop and submit to the President an action plan’ to accomplish this goal.

President Trump later signed other AI-related Executive Orders, underscoring the importance of this newly developing technology. On April 23, 2025, he signed an Executive Order (Advancing Artificial Intelligence Education for American Youth) aiming to promote AI literacy and proficiency among Americans by encouraging the integration of AI into education. Most recently, on July 23, 2025, he signed another AI-related Executive Order (Preventing Woke AI in the Federal Government), whose stated purpose is to require more reliable outputs from AI and to prevent ideological biases or social agendas from being built into AI.

For further information, see How-to guides: Understanding AI-driven risks and Artificial intelligence and smart contracts; and Quick view: Key AI terms.

Step 1 – Adopt risk management practices

1.1 Determine areas of potential risk

1.1.1 Identification of potential risks

Integrating AI into business operations exposes the organization to a wide range of potential risks. The most prominent of these are data privacy and cybersecurity risks. AI systems are built using vast amounts of data, and when personal data or confidential information is used, the risks are apparent. Personally identifiable information (PII) is also at risk when the party in possession of that information uses AI systems, especially if proper data protection measures and security controls are not implemented, or are ignored. It should go without saying that businesses using AI technologies must implement robust data privacy practices to safeguard PII. Likewise, end users must ensure that their use of an AI system is responsible and ethical. The privacy policies and commitments of third-party AI suppliers must be subject to stringent review. In addition, AI systems are inherently complex. This complexity introduces new vulnerabilities, both known and unknown, making them prime targets for cyberattacks.

There are also areas of risk that are unique to AI, or that are amplified or raised in a different context because of AI. Bias and fairness issues are presented because AI models can inherit biases present in their historical data, training data, or diversity of the developer base. This can distort results, leading to unfair and potentially damaging outcomes.

Intellectual property concerns arise because an AI system requires a large amount of data that often comes from many different sources. A key legal consideration is therefore whether the AI system was trained on data protected by an owner’s copyright, for example through scraping images and data from the internet to train AI models (see complaint in Andersen v Stability AI Ltd, Case No. 3:23-cv-00201 (ND Cal 2023)).

Accuracy risk in AI stems from the biases (conscious and unconscious) of the developers, and from the errors that are an inherent risk in any complex system. Continuous human oversight is essential to mitigate these risks, as well as to ensure ethical decision-making and the maintenance of accountability in the use of AI systems. Auditing AI algorithms, and how those algorithms are used, will also help to identify biases, errors, and anomalies. The overarching goal of human oversight and auditing is to ensure fairness in applications.  

For further information, see How-to guides: Risks and liabilities of AI algorithmic bias and Understanding AI-driven risks.

1.1.2 ‘Worst-case’ scenarios

Planning must always look towards the ‘worst-case’ scenarios, asking what the worst possible outcome of a situation would be. ‘Worst-case’ scenarios associated with the use of AI in business involve a range of bad outcomes that will be amplified by the complexity of the system. For example, a data breach is a bad outcome for any organization. A data breach involving an AI system, facilitated by AI’s ability to extract voluminous information from vast datasets, could easily lead to an unprecedented exposure of sensitive personal and corporate information to bad actors. Discriminatory biases embedded within AI algorithms could result in widespread unfair practices, severely damaging brand reputations and leading to legal repercussions. Perhaps most alarmingly, the deployment of autonomous AI in critical decision-making without adequate safeguards could result in catastrophic failures in sectors like healthcare, finance, human resources, and transportation, endangering lives and causing significant financial losses or individual harm. An AI system could, for example, create very advanced financial products whose investment risks (eg, collateralized debt obligations where debt is repackaged and sold to investors) could be hard to understand.

1.1.3 Assessment of liability for risk

Assessing liability for the risks associated with the use of AI in business is complex, primarily due to AI systems’ autonomous decision-making capabilities and opaque nature. The complexity of the systems, and the large amounts of data put to use, create too many ‘unknowns.’ Traditional legal frameworks based on established concepts of tort and contract law often struggle to attribute liability, especially when AI systems act in ways that cannot be easily or expeditiously predicted by human intelligence, or when it is difficult to determine the decision-making process. Liability for malfunctions or unintended consequences may therefore generally be examined in light of the respective roles of the system’s manufacturer, designer, and user.

For further information, see How-to guides: Understanding AI-driven risks, Risks and liabilities of AI algorithmic bias and Quick view: Overview of AI in business.

1.1.4 Imputed liability from developer – ‘deep pocket’

Imputed liability allows developers to be held responsible for harms or losses caused by their AI systems. Developers are the architects who create the systems, and they are the ones who potentially embed biases or flaws that lead to adverse outcomes. The idea of pursuing the ‘deep pockets,’ or the party with the most resources, suggests that plaintiffs will target entities with substantial financial resources. These parties are often the developers or the companies deploying the AI. They are pursued regardless of the extent of their fault, based on the assumption that these entities are in a better position to compensate for damages than other parties might be. 

Example

In Jones v W + M Automation, Inc, 31 AD3d 1099, 818 NYS2d 396 (4th Dept 2006), the plaintiff’s suit against a manufacturer and programmer of a robotic loading system for product defect was dismissed. New York’s Appellate Division said the defendants were not liable for the plaintiff’s injuries because they showed that they ‘manufactured only non-defective component parts.’ As long as the robot – and associated software – was ‘reasonably safe when designed and installed,’ the defendants were not liable for the plaintiff’s damages.

1.2 Has the data been validated?

1.2.1 Accurate and unbiased data

Validating the accuracy and lack of bias of data is critical. Policies and procedures must be put in place so that the data used to train AI models is not subject to influence from prejudices or preconceptions. Implementing rigorous validation procedures lets organizations detect and address the biases that might be present in the data, enhancing the objectivity and reliability of AI outcomes. This requires meticulous performance testing, data verification, and validation processes.
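As a non-authoritative illustration of what such a validation process might look like in practice, the sketch below checks a batch of training records for missing values, implausible ranges, and duplicates. The field names ("age", "income") and thresholds are hypothetical examples, not requirements from any standard.

```python
# Illustrative sketch of basic training-data validation checks.
# Field names and valid ranges below are hypothetical examples.

def validate_records(records, required_fields, valid_ranges):
    """Return a list of (row index, problem description) pairs."""
    issues = []
    seen = set()
    for i, rec in enumerate(records):
        # Completeness: every required field must be present and non-empty.
        for field in required_fields:
            if rec.get(field) in (None, ""):
                issues.append((i, f"missing value for '{field}'"))
        # Range checks: catch implausible values before they reach training.
        for field, (lo, hi) in valid_ranges.items():
            value = rec.get(field)
            if value is not None and not (lo <= value <= hi):
                issues.append((i, f"'{field}' out of range: {value}"))
        # Duplicate detection: identical records can skew model behavior.
        key = tuple(sorted(rec.items()))
        if key in seen:
            issues.append((i, "duplicate record"))
        seen.add(key)
    return issues

records = [
    {"age": 34, "income": 52000},
    {"age": 34, "income": 52000},    # duplicate of the first record
    {"age": 212, "income": 48000},   # implausible age
    {"age": None, "income": 61000},  # missing value
]
issues = validate_records(records, ["age", "income"], {"age": (0, 120)})
for row, problem in issues:
    print(f"record {row}: {problem}")
```

A real pipeline would log these findings and block training until they are triaged, rather than merely printing them.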

1.2.2 Currency and consistency

The responsibility for ensuring that the data utilized by AI systems is timely and up-to-date is typically that of an organization’s data stewards or data managers. This role is crucial for generating relevant insights, making informed decisions, and driving better business outcomes. An added benefit is that the organization will be equipped to identify and address outdated information, and thus make sure that AI models are based on the most recent and most accurate data available. 

1.3 Has the model been tested?

1.3.1 Evaluate performance

Evaluation of the performance of the AI model requires consideration of several factors as outlined below:

Key factors for evaluation | Considerations
Accuracy | Assess how well the AI model’s predictions align with the actual outcomes
Precision and recall | Evaluate the model’s ability to correctly identify positive cases (precision) and capture all positive cases (recall)
Efficiency | Measure how quickly the AI model can process data and generate predictions
Scalability | Consider the model’s ability to handle larger datasets or increased complexity without sacrificing performance
Interpretability | Measure how easily stakeholders can understand and trust the model’s decisions
Robustness | Evaluate how well the AI model performs under different conditions, such as variations in input data or ‘noisy’ environments
Bias and fairness | Examine whether the AI model is free from biases and can make fair decisions across different demographic groups
Cost effectiveness | Determine the total cost of deploying and maintaining the AI model in the business environment
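The first two factors in the table above (accuracy, and precision and recall) have standard definitions that can be computed directly. The sketch below, using made-up labels for a binary classification task, shows how each metric is derived from true/false positives and negatives.

```python
# Minimal sketch of accuracy, precision, and recall on a binary task.
# The 'actual' and 'predicted' labels below are made-up illustration data.

def evaluate(actual, predicted):
    tp = sum(a == p == 1 for a, p in zip(actual, predicted))      # true positives
    fp = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))  # false alarms
    fn = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))  # missed cases
    correct = sum(a == p for a, p in zip(actual, predicted))
    return {
        "accuracy": correct / len(actual),                # overall agreement
        "precision": tp / (tp + fp) if tp + fp else 0.0,  # of flagged positives, how many were right
        "recall": tp / (tp + fn) if tp + fn else 0.0,     # of true positives, how many were caught
    }

actual =    [1, 1, 0, 0, 1, 0, 1, 0]
predicted = [1, 0, 0, 1, 1, 0, 1, 0]
metrics = evaluate(actual, predicted)
print(metrics)  # {'accuracy': 0.75, 'precision': 0.75, 'recall': 0.75}
```

Precision and recall often trade off against each other, which is why the table treats them as a pair rather than relying on accuracy alone.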

1.3.2 Independent testing

Testing of the model should be done by third-party (non-employee) IT experts who have experience in developing and evaluating AI systems. Alternatively, the model could be tested by internal teams made up of personnel not involved in the model’s development. Independent testing helps identify potential issues, such as biases, inaccuracies, or vulnerabilities. The feedback from testing can help with resolving glitches, and having responsible testing in place will help foster trust among stakeholders.

1.4 Is there human oversight (‘Human in the Loop,’ or HITL)?

Organizations can enhance the accuracy, transparency, and accountability of automated processes by including human oversight (‘human in the loop’) in their approach. Human experts will supervise AI algorithms, review outputs, and provide the necessary feedback or corrections. Human oversight also helps organizations comply with legal and regulatory requirements and uphold ethical standards, thereby maintaining customer trust. Human oversight is especially valuable in so-called ‘sensitive domains,’ such as healthcare, finance, recruitment, and legal sectors, where decisions will have significant consequences for individuals and society as well.

Furthermore, human oversight will improve AI system performance by continuously monitoring and fine-tuning algorithms based on real-world feedback. Business plans should set out how this will work operationally and budget accordingly. 
 

1.5 Is there a provision for stopping or pausing the use of the system?

Plans should be in place to maintain the continuity of operations and to mitigate potential risks when issues arise. A best practice is to have in place a clear protocol to identify, escalate, and resolve the issues that arise during an AI system’s deployment and usage. All stakeholders should be made aware of this approach. 

Every stop or pause provision must include designated points of contact for reporting issues, as well as escalation paths to address critical problems, and procedures for troubleshooting and resolving technical issues. Regular monitoring allows the proactive identification of emerging issues, and allows an organization to decide what modifications are required. Corrective measures may therefore be implemented before issues escalate. Establishing protocols for incident response, and setting out clearly defined roles and responsibilities, escalation procedures, and mitigation strategies, will enable swift and coordinated action to contain and resolve issues.

The monitoring is normally done by data stewards or internal data managers. Outside experts may also be called in, to provide insights that are not colored by institutional loyalty. 
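A stop or pause provision can be made concrete in software as well as in policy. The sketch below is a hypothetical 'circuit breaker' wrapper: after a configurable number of consecutive failures, it pauses the AI service and forces callers onto a manual fallback. The class name, threshold, and notification step are all illustrative assumptions, not any vendor's actual API.

```python
# Hypothetical 'pause switch' wrapper around an AI service.
# PausableAIService and error_threshold are illustrative names.

class PausableAIService:
    def __init__(self, model_fn, error_threshold=3):
        self.model_fn = model_fn
        self.error_threshold = error_threshold
        self.consecutive_errors = 0
        self.paused = False

    def pause(self, reason):
        # In a real deployment, the designated point of contact
        # would be notified here per the escalation path.
        self.paused = True
        print(f"system paused: {reason}")

    def resume(self):
        # Resumption should follow the incident-response protocol.
        self.consecutive_errors = 0
        self.paused = False

    def predict(self, request):
        if self.paused:
            raise RuntimeError("AI system is paused; use manual fallback")
        try:
            result = self.model_fn(request)
            self.consecutive_errors = 0  # healthy call resets the counter
            return result
        except Exception as exc:
            self.consecutive_errors += 1
            if self.consecutive_errors >= self.error_threshold:
                self.pause(f"error threshold reached ({exc})")
            raise

svc = PausableAIService(lambda x: 1 / x, error_threshold=2)
print(svc.predict(4))  # 0.25
```

The design choice here is that the system fails closed: once paused, every call is rejected until a human deliberately resumes operation, mirroring the checklist's point that stopping must be a deliberate, accountable act.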

1.6 Is there an emergency response plan?

A company’s existing emergency response plan should be adapted to include how to react expeditiously to critical situations caused by the operation of the AI system.

The response plan should categorize issues into different levels of severity and assign response teams accordingly. The categorization of issues may be based on factors such as the likely severity of impact, urgency, and how critical the issue is to business operations. The highest priority, and immediate attention, should be given to issues that pose a significant risk to data integrity, customer safety, or the organization’s financial stability.

AI technologies allow organizations to automate and optimize emergency response processes, improving overall efficiency and reducing response times.  

For further information, see How-to guides: How to manage your organization’s data privacy and security risks and How to draft a business continuity plan; and Checklist: Completing a data and information security risk assessment.

1.7 Train relevant personnel in risk management

1.7.1 Operators and users

Operators and users should be educated on best practices for monitoring AI systems, detecting anomalies or errors, and responding to emergencies promptly. Training programs should cover various aspects of risk management, including identifying potential risks, assessing their impact, and implementing mitigation strategies. Training should also emphasize the importance of data privacy, robust security protocols, and a culture of compliance with relevant regulations. Training needs and training programs should not be addressed only as part of an onboarding or orientation process; they should be assessed and evaluated on an ongoing basis.

1.7.2 Culture of risk management

Creating a culture of risk management involves fostering a mindset of awareness, accountability, and collaboration throughout the organization. Senior management should demonstrate their strong commitment to risk management by making it a core organizational value, ‘leading from the front’, and integrating it into the organization’s strategic decision-making processes.

Promoting transparency and establishing open communication channels around the risk issues associated with AI will help to create a culture where employees feel empowered to raise their concerns with management and collaborate on developing and implementing risk mitigation strategies. 

1.8 Schedule regular audits

1.8.1 Periodic audits

Regular assessments allow businesses to identify vulnerabilities in AI algorithms, data processes, and system integrations. This in turn allows the business to address potential gaps in service delivery and respond accordingly.

In addition, regular assessments mean that organizations can identify areas for optimization, such as algorithm performance, data quality, and system scalability. These audits provide valuable insights for refining AI models, updating processes, and implementing best practices to drive innovation and efficiency. It is prudent to build them into policies and procedures around AI use and governance. 

For further information, see Checklists: Completing a data privacy risk assessment and Completing a data incident response plan assessment.

1.8.2 Audit after the event

An audit conducted after the occurrence or threatened occurrence of an incident from an AI system should be more than an opportunity for finger pointing or assigning blame. A properly conceived audit helps organizations understand the scale and impact of the incident, strengthen security protocols, and fortify defenses against future threats. By incorporating lessons learned from audits into incident response plans, businesses can enhance their ability to detect, respond to, and recover from, for example, cybersecurity incidents involving AI systems.

Step 2 – Include data protection measures

Any efforts to mitigate the risks associated with AI must include robust data protection measures, put in place prior to adopting the technology. It is, or should be, routine for any company to have cybersecurity measures already in place. The sheer amount of data required for an AI system, however, makes additional considerations and measures necessary.

The National Institute of Standards and Technology (NIST): Trustworthy & Responsible AI Resource Center provides resources on the ethical and responsible use of AI.

NIST itself does not have a specific guide that focuses on transparency in AI data processing; however, its work on AI has helped formulate principles that advocate for transparent processing practices. See additional resources at the end of this Checklist for other frameworks and guidelines developed by various organizations aimed at ensuring responsible AI use. 

Organizations deploying AI systems are encouraged to:

  • clearly explain how their AI models make decisions or provide recommendations;
  • disclose the type of data the AI system is using for its processing;
  • be open about the limitations and potential biases in their AI systems; and
  • ensure that there is a level of human oversight where necessary.

These guidelines offer a strong foundation for ethical AI use. It must be remembered, however, that the field is rapidly evolving. Standards and best practices must be monitored and updated to keep pace with technological and regulatory advancements.

2.1 Has the data used been encrypted or anonymized?

2.1.1 Encryption

Encryption translates data into a ciphertext by using an encryption algorithm. The encrypted text can be deciphered only by use of the correct decryption key. Encryption ensures that data remains unintelligible and secure even if it is accessed without authorization. 
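To make the ciphertext/key relationship concrete, the toy sketch below XORs plaintext against a hash-derived keystream: the same key recovers the plaintext, and without it the ciphertext is unintelligible. This is deliberately simplified teaching code, not a production cipher; real systems should use a vetted authenticated cipher such as AES-GCM via an established library (eg, the third-party 'cryptography' package).

```python
# Toy illustration of symmetric encryption: plaintext XORed with a
# keystream derived from the key. NOT a production cipher.

import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Derive a pseudo-random keystream by hashing the key with a counter.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse: the same function encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = b"correct horse battery staple"
ciphertext = xor_cipher(b"customer record: Jane Doe", key)
plaintext = xor_cipher(ciphertext, key)
print(plaintext)  # b'customer record: Jane Doe'
```

The point for counsel is structural: the security of the data reduces to the security of the decryption key, which is why key management practices deserve the same scrutiny as the encryption itself.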

2.1.2 Anonymization

Anonymization removes personally identifiable information from datasets so that individuals may not be identified. Anonymizing data replaces sensitive details such as names, addresses, and Social Security numbers with pseudonyms, or removes them altogether.
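A minimal sketch of the two techniques just described, under illustrative assumptions: the name field is replaced with a salted-hash pseudonym, and the Social Security number is removed altogether. The field names and salt are hypothetical; note also that salted hashing is pseudonymization, which regulators may treat differently from full anonymization because re-identification can remain possible.

```python
# Sketch of pseudonymization and field removal. Field names are
# illustrative; the salt would be stored separately from the dataset.

import hashlib

SALT = b"rotate-this-secret-salt"

def pseudonym(value: str) -> str:
    # Salted hash: the same input always maps to the same pseudonym,
    # preserving linkability across records without exposing the name.
    return hashlib.sha256(SALT + value.encode()).hexdigest()[:12]

def anonymize(record: dict) -> dict:
    out = dict(record)
    out["name"] = pseudonym(record["name"])  # replaced with a pseudonym
    del out["ssn"]                           # removed altogether
    return out

record = {"name": "Jane Doe", "ssn": "123-45-6789", "age": 34}
print(anonymize(record))
```

Deterministic pseudonyms keep the dataset useful for analysis (the same person stays linkable across records) while keeping direct identifiers out of the AI pipeline.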

For further information, see Quick view: Key data privacy and data security terms; and Checklist: De-identification of data used by AI systems.

2.2 Have access control measures been implemented?

Access control is a security system that allocates resources based on predetermined criteria for authentication and authorization. The system verifies users' identities through various means, such as usernames, passwords, PINs, security tokens, or biometric scans. Such measures allow organizations to restrict data access to authorized personnel. This reduces the risk of unauthorized access, data leaks, and insider threats. Access control measures also make it possible to audit and monitor data access, enabling businesses to track user access to sensitive information and detect any unusual behavior promptly.
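The role-based pattern described above can be sketched in a few lines. The roles, permissions, and audit log below are hypothetical examples; production systems would use an identity provider and tamper-evident logging rather than in-memory structures.

```python
# Minimal sketch of role-based access control with an audit trail.
# Roles and permissions here are hypothetical examples.

ROLE_PERMISSIONS = {
    "data_steward": {"read_data", "write_data", "export_data"},
    "ml_engineer": {"read_data", "train_model"},
    "auditor": {"read_logs"},
}

AUDIT_LOG = []  # access control also enables monitoring of who accessed what

def check_access(user: str, role: str, permission: str) -> bool:
    # Unknown roles get no permissions (deny by default).
    allowed = permission in ROLE_PERMISSIONS.get(role, set())
    AUDIT_LOG.append((user, role, permission, allowed))  # record every attempt
    return allowed

assert check_access("alice", "ml_engineer", "read_data")
assert not check_access("bob", "auditor", "export_data")
print(AUDIT_LOG)
```

Recording denied attempts as well as granted ones is what makes the "detect any unusual behavior promptly" goal achievable: a spike in denials is itself a signal worth escalating.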

2.3 Are regular vulnerability assessments included?

2.3.1 Identify areas of vulnerability

Areas of vulnerability in AI systems are best identified by conducting comprehensive security assessments and audits. Data inputs and outputs are examined to ensure that the proper validation and sanitization processes are in place. Regular testing for penetration, along with threat modeling, and monitoring for anomalous behavior can also help pinpoint vulnerabilities. 

2.3.2 Incorporate protections in governance frameworks

Risk identification and mitigation for AI systems involves looking for potential threats such as data breaches, algorithm bias, model drift, and adversarial attacks. The implications of these risks must also be considered. Risk assessments specific to AI systems look at factors like data quality, model interpretability, and system dependencies. Governance frameworks should include adequate systems and controls, robust security measures, regular risk assessments, and proactive monitoring. Security measures such as encryption, access controls, and secure coding practices can help protect data and prevent unauthorized access. Algorithm bias may be addressed by utilizing training data formulated with diversity and representation in mind, implementing model explainability techniques, and monitoring for bias during model operation, all of which help to reduce discriminatory outcomes.

An ethical governance framework for guiding the use of AI systems in business ensures responsible decision-making by executives and employees and fosters trust with stakeholders (both internal and external). An ethics framework will also help an organization navigate regulatory requirements and societal expectations regarding the use and control of AI technology. Aligning AI practices with ethical principles enhances an organization’s reputation, attracts customers, and drives sustainable and responsible innovation.

2.4 Are the data protection measures compliant with applicable laws?

2.4.1 State consumer protection and data privacy laws

Adhering to the relevant state laws and regulations, such as consumer protection laws, data privacy laws, or industry-specific laws, demonstrates an organization’s commitment to responsible data practices, safeguarding sensitive information, and upholding ethical standards in their AI operations. Compliance with consumer protection and data protection laws also fosters trust within customer relationships and mitigates risk and reputational damage to the business.

For further information, see How-to guide: How to determine and apply relevant US privacy laws to your organization; and Checklist: Understanding privacy laws in the US.

2.4.2 Federal laws applicable to business

At present, the regulation of AI in the United States is still in its early stages, and there is no comprehensive federal legislation dedicated solely to AI regulation. 

Executive Order (EO) 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (2023), perhaps the most significant domestic federal action on AI, was issued on October 30, 2023, and revoked on January 20, 2025; the Executive Order titled ‘Removing Barriers to American Leadership in Artificial Intelligence,’ issued January 23, 2025, now sets the federal policy direction.

Step 3 – Review ethical considerations

3.1 Is the system fair?

3.1.1 Unbiased data

Data bias refers to systematic distortions that affect how representative data is. This bias is often due to prejudiced assumptions or flawed collection methods. Unbiased data, on the other hand, is data that has been collected and processed in a way that is fair and objective, and that is representative of the entire population or phenomenon it aims to reflect. 

Unbiased data is essential for training AI models that make fair and equitable decisions across diverse populations. Biased data will lead to discriminatory outcomes, reinforce existing inequalities, and perpetuate systemic biases. By using diverse, representative datasets and carefully evaluating data sources for inherent biases, organizations can minimize the risk of unfair treatment perpetuated by AI systems. Regularly auditing and monitoring data inputs for bias, implementing bias detection algorithms, and incorporating fairness metrics into model evaluation processes are essential steps in responsible AI decision-making.
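One of the simplest fairness metrics an audit can start with is demographic parity: comparing the rate of favorable outcomes across groups. The sketch below uses made-up approval decisions for two hypothetical groups; a large gap does not prove unlawful discrimination, but it flags the model for closer review.

```python
# Sketch of a simple fairness check: demographic parity, ie whether the
# positive-outcome rate is similar across groups. Data is made up.

def positive_rate(decisions):
    # Fraction of favorable (1) outcomes in a group's decisions.
    return sum(decisions) / len(decisions)

def demographic_parity_gap(outcomes_by_group):
    rates = {g: positive_rate(d) for g, d in outcomes_by_group.items()}
    # The gap is the spread between the best- and worst-treated groups.
    return max(rates.values()) - min(rates.values()), rates

outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% approved
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 25% approved
}
gap, rates = demographic_parity_gap(outcomes)
print(rates, f"gap={gap:.2f}")  # a large gap flags the model for review
```

Demographic parity is only one of several competing fairness definitions (equalized odds and predictive parity are others), and which metric is appropriate is itself a governance decision, not a purely technical one.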

For further information, see How-to guide: Risks and liabilities of AI algorithmic bias.

3.1.2 Transparency

Transparency in AI means being clear about where data comes from, how it is used, and how decisions are made, unlike a ‘black box’ where processes are hidden and decisions cannot be explained. Unbiased data and clear AI operations promote trust and accountability. This helps businesses show that AI works fairly.

3.2 Is there accountability for the development and operation of the system?

3.2.1 Responsibility for outcomes and actions

Businesses must take responsibility for the outcomes and actions of their technology, whether those outcomes are good, bad, or neutral. This includes taking pains to ensure that AI systems are designed, deployed, and maintained to uphold ethical standards and, at a minimum, meet legal requirements. Businesses must have clear lines of accountability within their organization, assigning roles and responsibilities for overseeing AI systems.

3.2.2 Lines of accountability

The main lines of accountability for the business to consider are as follows:

Main lines of accountability | Function
Leadership accountability | Hold top executives and decision-makers in the organization accountable for setting strategic direction and ethical standards for the development and operation of AI systems.
Data professionals and data stewards | Responsible for ensuring the quality, integrity, and ethical use of data in AI systems. They oversee data collection, processing, and storage practices to prevent biases and inaccuracies that could impact AI outcomes.
Technical personnel (including AI developers, engineers, and data scientists) | Accountable for designing, building, and maintaining AI systems that are accurate, reliable, and secure. They must adhere to coding best practices, test algorithms rigorously, and address any technical issues promptly.
Compliance accountability | Make legal and compliance teams accountable for ensuring that AI systems comply with relevant laws, regulations, and industry standards.
User accountability | Make end users and stakeholders who interact with AI systems accountable for providing feedback, reporting issues, and using the technology responsibly. They must understand how AI systems work, follow guidelines for data usage, and communicate any concerns to the responsible parties.

Additional resources

Yoshua Bengio et al, Managing AI Risks in an Era of Rapid Progress
National Institute of Standards and Technology (NIST), AI Risk Management Framework 
Jonas Schuett et al, Towards Best Practices in AGI Safety and Governance: A Survey of Expert Opinion

Related Lexology Pro content

How-to guides:

Understanding AI-driven risks
Understanding the risk of negligence claims when using AI
Risks and liabilities of AI algorithmic bias

Checklist:

De-identification of data used by AI systems

Quick views:

Overview of AI in business 
Key AI terms

Reliance on information posted:

While we use reasonable endeavours to provide up to date and relevant materials, the materials posted on our site are not intended to amount to advice on which reliance should be placed. They may not reflect recent changes in the law and are not intended to constitute a definitive or complete statement of the law. You may use them to stay up to date with legal developments but you should not use them for transactions or legal advice and you should carry out your own research. We therefore disclaim all liability and responsibility arising from any reliance placed on such materials by any visitor to our site, or by anyone who may be informed of any of its contents.