Checklist: Developing a policy for the ethical use of AI (USA)

Updated as of: 05 September 2025

Introduction

This checklist will assist in-house counsel, private practitioners, and compliance personnel with developing a policy for the ethical use of artificial intelligence (AI).

This checklist addresses the following steps:

  1. Review existing organizational infrastructure and resources
  2. Create a written policy that sets out the organization’s approach to ethical AI use
  3. Raise awareness within the organization

This checklist can be used in conjunction with the following How-to guides: Corporate governance and the use of AI, Understanding AI-driven risks, and Risks and liabilities of AI algorithmic bias; Checklist: Steps to mitigate risks associated with AI use in business; and Quick Views: Overview of AI in business (USA) and Key AI terms.

Step 1 – Review existing organizational infrastructure and resources

No.   Task
1.1   Create an AI ethics committee
1.2   Identify whether existing policies and procedures apply to AI usage
1.3   Consider the industry-specific risks that AI usage raises
1.4   Review organization-specific use cases

Step 2 – Create a written policy that sets out the organization’s approach to ethical AI use

No.   Task
2.1   Consider examples of AI ethics guidelines
2.2   Include key policy information
2.3   Consider adaptability and KPIs
2.4   Risk mitigation
2.5   Risk monitoring

Step 3 - Raise awareness within the organization

No.   Task
3.1   Promote an ethical culture
3.2   Engage all stakeholders
3.3   Incentivize participation
3.4   Continuous education and training
3.5   Implement response plan for ethical misconduct

Explanatory notes

AI has revolutionized the business environment, providing unparalleled opportunities for efficiency, growth, and innovation. However, it also poses risks of intentional or inadvertent misuse.

See Quick view: Overview of AI in business (USA) for further information about how AI is being used in business and the associated legal implications.

An AI ethics policy is a written set of principles and rules that an organization adopts to guide the responsible and fair development and use of AI within the organization. It might form part of a wider policy dealing with AI-related issues or may be a standalone policy. It may also be part of an organization’s general ethics policy, integrating the ethical use of AI into the organization’s overall ethical framework. AI ethics are important from both a moral and a legal standpoint, as ethical violations may also result in legal violations. For example, if not properly calibrated, an AI recruitment system may discriminate on the basis of gender or race even though no discrimination was intended.

There is no single source or universal agreement regarding what constitutes ethical use of AI; however, ethical AI use typically encompasses the following principles:

  • transparency;
  • explainability;
  • fairness;
  • non-discrimination;
  • privacy; and
  • data protection.

An AI ethics policy should be tailored to the specific operational and regulatory needs of each organization.

Legal framework

There is no comprehensive federal legislation governing AI in the United States. A few states have AI-specific laws; however, these laws do not provide comprehensive AI regulatory frameworks. Instead, they address specific issues, such as restricting the use of automation via machine learning to profile consumers. For example, the Connecticut Data Privacy Act gives consumers the right to opt out of the processing of their personal information for the purposes of profiling, and creates requirements that impact automated decision-making.

Many of the rules and practices regarding the use of AI come from broader federal and state laws that do not specifically contemplate AI, such as laws governing data privacy, consumer protection, intellectual property, and discrimination. While AI may not be mentioned explicitly in those laws, there is nothing in the laws that would exempt AI systems from their reach. For example, the Fair Credit Reporting Act (FCRA), 15 USC §§ 1681–1681x, does not explicitly address AI systems used in credit scoring and lending decisions, but nothing in the FCRA suggests that it does not apply to them.

The landscape of AI regulation in the US is marked by a dynamic shift from consumer protection to innovation promotion. An Executive Order issued during the Biden Administration prioritized consumer safeguards and guardrails; however, that Order was superseded in January 2025 by Executive Order 14179, ‘Removing Barriers to American Leadership in AI’. The new order states explicitly that it is meant to minimize the impact of federal rulemaking on AI innovation, indicating a move away from consumer protection in favor of fostering technological advancement. This move occurs alongside other efforts in AI innovation, such as the establishment of the Center for AI Standards and Innovation (CAISI) at the National Institute of Standards and Technology (NIST), which has developed a framework for the use and development of AI.

Despite numerous federal legislative proposals targeting private sector AI use, progress has been slow, with many legislators expressing reluctance to impede technological progress. In fact, the House version of the “Big Beautiful Bill” passed in May 2025 originally included a ten-year moratorium that would have prevented states and localities from enforcing any law or regulation limiting AI models or systems that enter interstate commerce. Due to significant bipartisan opposition, however, the moratorium was removed from the bill as finally passed.

To fill the void left by the absence of comprehensive federal AI legislation, individual states have stepped in to regulate AI systems. States such as California, Colorado, Utah, and Texas have passed their own laws. The Colorado AI Act is especially important, particularly for its implications for healthcare. The Act focuses on high-risk AI systems, a term that includes those systems that have a significant influence on consequential decisions. Developers are required to provide extensive documentation regarding their systems and to facilitate impact assessments, in order to ensure transparency and accountability. While the law offers exemptions for certain FDA-regulated products meeting ‘substantially equivalent’ standards and for HIPAA-regulated entities performing non-high-risk healthcare recommendations, its broad language creates ambiguity.

The Colorado AI Act's scope has raised concerns regarding overbreadth, prompting the formation of a task force to evaluate and recommend revisions to the Act. The task force has already proposed some changes focused on refining ambiguous definitions and strengthening transparency, although the ultimate outcome from these recommendations remains uncertain. The trajectory of Colorado's efforts at AI regulation may serve as a model for other states, which may lead to a patchwork of state-level AI laws. The principles reflected in the Colorado AI Act also bear similarities to those found in European AI legislation, suggesting a convergence of regulatory approaches across different jurisdictions.

As the use of AI grows, legislative proposals may gain traction and inspire more jurisdictions to undertake regulation. Due to the rapidly evolving legal environment, it is important for legal and compliance professionals to check regularly for legal updates regarding AI.

Step 1 – Review existing organizational infrastructure and resources

Before developing an AI ethics policy, take steps to identify any existing policies, practices, staff, and other infrastructure and resources that may be impacted by the ethical AI policy.

1.1 Create an AI ethics committee

Using AI technology in an organization requires a comprehensive approach, integrating all departments including, in particular, Information Technology (IT), Human Resources (HR), marketing, and product development. To coordinate all of these resources, the organization should create a committee to address ethical issues specific to AI.

The AI ethics committee should be charged with overseeing risks related to the use of AI, including, but not limited to, privacy, cybersecurity, and compliance risks. The committee must report directly to key company executives and members of management to establish credibility and to demonstrate organizational commitment. An effective AI ethics policy will directly impact all organizational risks, making the input of senior management particularly important.

For further information about AI and corporate governance generally, see How-to guide: Corporate governance and the use of AI.

1.2 Identify whether any existing policies and procedures apply to AI usage

As an initial matter, the committee should investigate whether there are any existing internal policies that relate to AI ethics, such as a code of conduct or an information security policy. For example, the Hershey Company Code of Conduct covers issues relating to privacy and data protection when it directs employees to:

protect any personal information that is entrusted to you. Use it only in the way it’s meant to be used and don’t share it with anyone inside or outside of the company in an unauthorized manner. Practice good cybersecurity, too. Make sure you follow the processes and practices we have in place to protect our networks, computers, programs and data from attack, damage or unauthorized access.

Additionally, assess any applicable industry standards and practices. The International Organization for Standardization, for example, provides guidance on ‘Building a responsible AI: How to manage the AI ethics debate.’ Governmental policies or policies of international agencies – while not binding – provide guidance and some indication of best practices for using AI. See, for example, the United Nations Educational, Scientific and Cultural Organization’s Recommendation on the Ethics of Artificial Intelligence.

1.3 Consider the industry-specific risks that AI usage raises

The AI ethics policy should be tailored to the organization’s industry, and should take into account the particular needs and risks of that industry. A healthcare provider and an electronics manufacturer may both use AI in their operations, but the systems used by the two companies are not interchangeable. As AI becomes more commonplace and its use is integrated into commerce, ethical and regulatory issues surrounding its use in specific industries will become increasingly important. In particular, concerns about privacy, bias, transparency, and accountability will need to be addressed by any policy.

As an example, AI is being integrated into the healthcare industry because it may improve diagnostics, predict outcomes, and personalize patient care. Many organizations are developing AI-powered chatbots for patient engagement that include AI algorithms for detecting symptoms of diseases. Because the applications of AI in healthcare are vast and growing, concerns about patient data security, discrimination, and privacy are increasing. The legal profession presents its own considerations: attorneys’ ethical obligations regarding client confidentiality and certifications to the courts affect how AI may be used, and there have been numerous reports of attorneys being sanctioned for filings citing non-existent cases hallucinated by AI.

1.4 Review organization-specific use cases

An effective ethics policy is one developed with the needs and risks of the organization in mind. The organization’s current uses of AI, as well as potential future uses, should guide what protection the organization needs. If the organization has encountered problems with AI use in the past, learning from those mistakes should also inform the policy. The experience of other organizations in the same field or sector of the economy may also provide helpful guidance.

Step 2 – Create a written policy that sets out the organization’s approach to ethical AI use

As noted above, the AI ethics policy might form part of a wider policy dealing with AI-related issues and risks or may be a standalone policy. It may also be a part of the organization’s general ethics policy (ie, not specific to AI). The decision on how the policy should be adopted and integrated into an organization’s governance depends on a number of factors, such as the size of the organization, the likely uses of AI by the organization, and the potential for abuse of an organization’s system. Organizations with ethics policies that are made up of general guidelines (eg, ‘be respectful’ or ‘be professional’) may choose to develop a separate AI policy that builds on the existing general guidelines.

2.1 Consider examples of AI ethics guidelines

There are many resources available to assist in developing ethical AI policy guidelines, including those listed under ‘Additional resources’ at the end of this checklist.

2.2 Include key policy information

The AI ethics policy should include details of:

  • the organization’s ethical standards;
  • the relevant external and internal stakeholders; and
  • information about the governance structure that details how the new policy will be adopted, implemented, and overseen.

The relevant stakeholders include those persons or entities who will be impacted by the new policy. This group includes current employees as well as job applicants and potential employees, vendors, contractors, and customers.

2.3 Consider adaptability and KPIs

The policy should include procedures for adapting to ongoing changes and new developments, such as changes in personnel or new developments in business and economic circumstances. In addition, the policy should establish and set out key performance indicators (KPIs) to measure the continued effectiveness of the policy. It is best that the KPIs be developed outside the policy, to avoid the temptation of formulating goals so easy to meet as to be meaningless as real indicators of the policy’s success.

The obvious – and perhaps most intuitive – KPI for an ethics policy is the absence of complaints, enforcement actions, or negative publicity. The absence of adverse consequences, however, is not success. A better measurement of success would be indicators such as customer trust, employee satisfaction, impactful social responsibility initiatives, and environmental impact. These KPIs are perhaps more difficult to quantify but are more helpful for judging the success of a program.

2.4 Risk mitigation

Another key component of the AI ethics policy should be ethical risk mitigation, such as the identification of the ethical standards that data collectors, product developers, and product managers must abide by. The policy should detail a clear internal process (eg, whistleblower procedures) through which ethical concerns are elevated to more senior leadership or to an ethics committee.

2.5 Risk monitoring

There should be systems in place, set out in the policy, to monitor for biased algorithms, privacy violations, and unexplainable outputs. Most organizations with an existing ethics policy will already have statements of company policy against discrimination, but the subconscious bias that can influence the collection and analysis of data merits a special ‘call out.’

The policy should be clear that any contracts with AI vendors should require a certification or warranty that the data is bias-free. For additional guidance, see Checklist: Drafting AI use contracts with third-party tech vendors (USA).

Step 3 – Raise awareness within the organization

3.1 Promote an ethical culture

Developing any ethics policy is best done in a holistic manner. In establishing standards for ethical AI use, the ethics committee should clearly reiterate the organization’s core values, standards, policies, and codes of conduct. This messaging should be proactive, rather than reactive.

3.2 Engage all stakeholders

All stakeholders that deal with data or AI products, including HR, marketing, IT, and product development, should understand the organization’s AI ethics policy. For the AI ethics policy to be successfully deployed and maintained, the organization will need to educate employees and enhance their skills. This should empower employees to ask important questions and raise key concerns with the appropriate internal stakeholders. It is important for the ethics committee and key executives to deliver a clear message that ethical data and AI practices are an essential organizational directive.

3.3 Incentivize participation

Formally and informally incentivizing employees to play a role in identifying AI ethical risks demonstrates organizational commitment to the AI ethics policy. For example, the organization could name an ‘AI Ethics Policy Employee of the Month,’ which might include a small monetary award, or an additional day of paid time off.

3.4 Continuous education and training

An introduction to the AI ethics policy should be a part of the new employee onboarding process. In addition, ongoing messaging through internal emails, postings to the company intranet, and periodic training to employees all help to demonstrate the importance of the policies to employees.

3.5 Implement response plan for ethical misconduct

In the event the technology utilized by the organization (or contractors) is misused (either deliberately or accidentally), the ethics committee should work with the appropriate internal stakeholders to proactively develop a response plan. This should include a legal response to address contractor issues and to mitigate potential legal claims that may be brought against the organization. The response plan should also incorporate other stakeholders, such as public relations professionals, to monitor and control the external messaging that may follow claims of ethical misconduct.

Additional resources

UN – Principles for the Ethical Use of Artificial Intelligence in the United Nations System
IBM – ‘Everyday Ethics for Artificial Intelligence’
Microsoft – Responsible AI Principles and approach
Google – Responsible AI Practices
Harvard Business Review – ‘A practical guide to building ethical AI’

Related Lexology Pro content

How-to guides:

Risks and liabilities of AI algorithmic bias
Understanding AI-driven risks
AI and smart contracts
Understanding the risk of negligence claims when using AI
How to use AI to develop ESG disclosures
How to use AI to enhance Diversity, Equity and Inclusion
How to use AI to improve workplace safety
Corporate governance and the use of AI

Checklists:

Steps to mitigate risks associated with AI use in business
De-identification of data used by AI systems
Drafting AI use contracts with third-party tech vendors
Practical considerations prior to using AI in commercial contracts

Quick views:

Overview of AI in business
Key AI terms

Reliance on information posted:

While we use reasonable endeavours to provide up to date and relevant materials, the materials posted on our site are not intended to amount to advice on which reliance should be placed. They may not reflect recent changes in the law and are not intended to constitute a definitive or complete statement of the law. You may use them to stay up to date with legal developments but you should not use them for transactions or legal advice and you should carry out your own research. We therefore disclaim all liability and responsibility arising from any reliance placed on such materials by any visitor to our site, or by anyone who may be informed of any of its contents.