How-to guide: Corporate governance and the use of AI (USA)

Updated as of: 05 September 2025

Introduction

This guide will assist in-house counsel, private practitioners, and compliance personnel in understanding the current AI-related legislative and regulatory framework in the United States and the compliance obligations associated with the use of AI in business operations. Given the wide-scale adoption of AI, the decision-making processes, ethical concerns, and legal landscape surrounding it remain unsettled. Caution is advised to ensure the deployment of responsible and ethical AI systems whilst still supporting innovation.

This guide covers:

  1. What impact does using AI have on corporate governance?
  2. Internal governance and oversight
  3. Overview of the laws and regulations affecting AI use
  4. Ethical considerations when using AI

This guide can be used in conjunction with the following How-to guides: Risks and liabilities of AI algorithmic bias, Understanding AI-driven risks, AI and smart contracts, Understanding the risk of negligence claims when using AI, How to use AI to develop ESG disclosures, How to use AI to improve workplace safety, How to use AI to enhance Diversity, Equity and Inclusion (DEI), Checklists: Steps to mitigate risks associated with AI use in business, De-identification of data used by AI systems, Drafting AI use contracts with third-party tech vendors, Developing a policy for the ethical use of AI, Practical considerations prior to using AI in commercial contracts, and Quick views: Overview of AI in business, Key AI terms.

Section 1 – What impact does using AI have on corporate governance?

1.1 What is corporate governance?

Corporate governance is a set of guidelines, rules, and regulations established to manage and control business operations and govern decision-making. It involves balancing a set of fiduciary duties (ie, the duty to act in the interests of the company) alongside managerial responsibilities owed to the many stakeholders, such as shareholders, management, and the board, in the context of wider factors such as legal, regulatory, and ethical business practice. The board generally delegates day-to-day operational management to the CEO and officers, while the directors remain responsible for oversight and risk management.

1.2 Why is it important?

AI has become a widely discussed technology in recent years, presenting immense potential alongside complex challenges as businesses face divergent regulatory approaches across jurisdictions. A robust AI governance framework is crucial to ensuring the responsible and ethical use of the technology in a way that aligns with the strategy and values of the business.

Section 2 – Internal governance and oversight

2.1 Overview

For most corporations in the United States, there is no overarching code of corporate governance in the same way that there is in many other countries. Instead, corporate governance issues are largely matters of state law, federal statutes, and the regulations of a variety of government agencies, such as the Securities and Exchange Commission (SEC), which safeguards investors by enforcing federal securities laws and regulating the securities markets.

State governments, in turn, have left most governance issues to shareholders, who typically have few ways of expressing dissatisfaction with the actions of management beyond voting for directors or bringing lawsuits.

Each state has its own corporate code, and state corporation laws tend to be deferential towards the decisions made by corporate management, but some duties are imposed on management. For example, the law of Delaware (the state of incorporation of more than 1.5 million US business entities, due to the state's favorable business environment) provides that directors of a corporation owe a fiduciary duty to the corporation and to the corporation's shareholders. These fiduciary obligations are satisfied when directors make corporate decisions with due care and loyalty. In certain situations, directors also have a duty to provide full and fair disclosure regarding their interest in a corporate transaction. In the case of closely held corporations, some states have held that the majority shareholders owe a fiduciary duty towards minority shareholders. See, for example, Pointer v Castellani, 455 Mass 537, 918 NE2d 805 (2009).

As with any corporate decision when adopting and implementing new systems, the fiduciary duty of management means that directors and officers must have some familiarity with the associated legal and business risks and implications. In the case of AI, this means they must understand how AI can impact the business as well as understand the potential risks and challenges (together with benefits) of deployment. If necessary, outside experts should be hired to explain the issues and/or to oversee implementation and deployment.

2.2 Stakeholders

The term ‘stakeholders’ refers to individuals, groups, or parties who have an interest in a business and the outcomes of its actions. The term is broader than just shareholders or officers and includes employees, customers, suppliers, communities, and unions. Stakeholders do not necessarily have a legal right to influence or even give input on decisions unless they are also shareholders (eg, employees participating in an employee stock ownership plan (ESOP)). Nonetheless, many organizations will consider the impact of AI activities on stakeholders and seek their input, both to maintain good relations and to obtain the perspective of an outside, yet still interested, party.

The interests of stakeholders in the adoption of AI depend on the organization and the technology adopted. An employee union, for example, may have concerns about job losses resulting from the adoption of AI, while customers may have concerns about the operation of AI in future transactions.

2.3 Internal AI governance structures

2.3.1 Accountability

Accountability is essential in AI. On a day-to-day operational level, it means that the employees and senior managers with overall responsibility for the AI system should have defined roles and responsibilities. This involves more than managing daily operations; it also includes looking for improvements, keeping records, and watching for potential flaws or issues.

Given the ever-changing legal and regulatory landscape, it may also be advisable to assign internal responsibility for horizon scanning and monitoring.

2.3.2 Oversight

While many companies will opt to adapt existing governance structures, those structures may be too focused on their ordinary range of tasks and lack the expertise to consider all the implications of AI use. For example, an IT department may be capable of designing and implementing AI systems with some adjustments, but it may not be equipped to consider other potential AI-related risks. See How-to guide: Understanding AI-driven risks. Self-governing and self-operating AI systems call for unique skills that go beyond fixing problems as they arise. Being proactive is crucial.

If resources allow, it is usually advisable to set up a multi-disciplinary (ie, from different backgrounds and experience across diverse groups of stakeholders), dedicated task force (the AI ethics committee) to oversee AI governance efforts. The AI ethics committee can provide independent advice and help to develop standards, guidelines, tools, and templates to help other teams deploy AI responsibly and build trust in AI use. The role and authority of the AI ethics committee should be set out in writing so that the remit is fully understood by all participants and the business. Having a central governing body in place will allow a focus on accountability rather than a race to pass it on to another department.

2.3.3 Explainability and knowledge

An AI system should be explainable, meaning that management should have an idea of how the system operates. This could include knowing the datasets used to train the AI system and the criteria used to develop the output of the system. The business should have specific record-keeping processes to monitor compliance and to evaluate how the systems are performing and methods to report and escalate any issues. Ultimately, the business needs to be ready to take responsibility if the AI causes any illegal or harmful results due to its programming and make changes when necessary.
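
By way of illustration only, the following minimal Python sketch shows one form such record-keeping could take. The `record_decision` helper, the field names, and the `ai_decision_log.jsonl` file are hypothetical assumptions, not a prescribed standard; a production system would also need access controls and retention policies.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_decision_log.jsonl")  # hypothetical append-only record

def record_decision(system_name, model_version, inputs_summary, output, operator):
    """Append one AI decision record for later compliance review."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system_name,
        "model_version": model_version,
        "inputs_summary": inputs_summary,  # summarize rather than store raw personal data
        "output": output,
        "operator": operator,              # the accountable individual
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

record_decision("loan-screening", "v1.3",
                {"applicant_region": "NE"}, "refer_to_human", "jdoe")
```

An append-only log of this kind supports both the performance evaluation and the escalation methods described above, because each recorded decision can be traced to a system version and an accountable operator.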

Employees and staff members need to be taught how the AI works and given training to address any skills or knowledge gaps that might prevent them from using AI responsibly. This should be kept under review to make sure it is working as intended. For further information, see Checklist: Drafting AI use contracts with third-party tech vendors.

2.4 Responsibility

AI corporate responsibility involves identifying who is responsible when decisions made by an AI system cause harm or when an AI system behaves unexpectedly. Any errors or negative outcomes should be addressed immediately, and mitigation measures put in place to ensure the incident is not repeated. This will help the business continue its operations. Written AI-specific policies and procedures should detail the response plan. For further information, see How-to guides: Understanding AI-driven risks and How to draft a business continuity plan, and Checklist: Steps to mitigate risks associated with AI use in business.

It is useful for the business to retain a register of which business units are using AI, record who has overall responsibility within the business units, and identify existing arrangements with third-party service providers. These registers should be kept updated and under continual review. They evidence transparency in operations and provide oversight of responsible and ethical AI use to regulators (if applicable) and customers.
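
Such a register need not be elaborate. A minimal sketch of the kind of structured record contemplated above might look like the following; the `AIRegisterEntry` fields and the 90-day review threshold are illustrative assumptions only.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AIRegisterEntry:
    business_unit: str
    system_name: str
    responsible_owner: str            # person accountable within the unit
    third_party_vendor: Optional[str] # existing vendor arrangement, if any
    last_reviewed: date

register = [
    AIRegisterEntry("HR", "resume-screening", "a.smith", "Acme AI Inc", date(2025, 8, 1)),
    AIRegisterEntry("Marketing", "copy-assistant", "b.jones", None, date(2025, 7, 15)),
]

# Flag entries overdue for review; the 90-day threshold is illustrative only.
for entry in register:
    if (date.today() - entry.last_reviewed).days > 90:
        print(f"Review overdue: {entry.system_name} ({entry.business_unit}), "
              f"owner {entry.responsible_owner}")
```

Keeping the register as structured data, rather than free-form prose, makes the continual-review obligation described above easy to automate.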

There should be clear lines of reporting between the business and the AI ethics committee to understand how these incidents should be escalated and dealt with and to ensure that the committee has the required oversight of what is happening ‘on the ground.’

2.5 Personnel

Due to the broad reach and potential ripple effects of AI, it is essential to ensure that the company has staff members capable of managing the AI system developed or adopted. To address potential gaps in compliance, it is best that these staff members be regular full-time employees, so that there is continuity and consistency with the operations. The employees should also have clear processes for reporting to the AI ethics committee or team responsible for AI, as well as policies that hold employees to account for responsible AI use so that problems may be addressed quickly and brought to the attention of those who will be able to implement any needed changes.

2.6 Cybersecurity and risk management

With the increasing reliance on and use of AI systems, corporations face heightened cybersecurity risks. While AI can be greatly beneficial to company operations, the company must also have security measures in place to mitigate the risk of data exposure and abuse. AI use needs to be built into and managed as a part of overall risk assessment, categorizing risks as high risk or low risk. When dealing with a regulated industry or individuals with protected characteristics, the risks must be considered in greater detail.
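
As a purely illustrative sketch of such two-tier categorization, the factors and the `classify_ai_risk` helper below are assumptions; established frameworks such as the NIST AI Risk Management Framework are far more granular.

```python
def classify_ai_risk(uses_personal_data: bool,
                     regulated_industry: bool,
                     affects_protected_groups: bool) -> str:
    """Two-tier triage; real frameworks (eg, the NIST AI RMF) are far richer."""
    if regulated_industry or affects_protected_groups or uses_personal_data:
        return "high"  # warrants the more detailed assessment described above
    return "low"

# A chatbot drafting internal memos with no personal data would triage as low risk.
print(classify_ai_risk(uses_personal_data=False,
                       regulated_industry=False,
                       affects_protected_groups=False))  # -> low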

If a company does experience a cyberattack, the company must be ready to respond. Response readiness should include clear avenues of reporting actual or threatened risk. Because a cybersecurity incident may have a severe impact on a company, it is important that management be aware of incidents, and be kept apprised of the steps taken in response. Management involvement is especially important with AI-related incidents, given the self-operating and possibly self-perpetuating nature of AI systems.

The more personal or sensitive the data and information used in an AI system, the greater the risk, and the potential size, of a data security breach. A company should carefully consider how much personal or sensitive data it uses in AI training: in the event of a security breach, the more data compromised, the greater the liability and costs to the company.

Data must be collected and stored in compliance with legal and regulatory requirements, including transparency around the types of data collected, how it is used, and who has access to it. In addition, even if not required by law, consent should be obtained from individuals before collecting, using, or disclosing personal data for AI development and deployment. Unnecessary or irrelevant data should not be gathered.

A data protection and governance framework should be set up and periodically reviewed and updated in accordance with applicable privacy and data protection laws. Developers and deployers of AI systems should also incorporate a privacy-by-design principle when developing and deploying AI systems. Privacy by design is an approach that embeds privacy in every stage of the system development lifecycle.
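
One common privacy-by-design technique is to pseudonymize direct identifiers before data enters an AI training pipeline. The sketch below is illustrative only: the field names are assumptions, and a production system would use a securely managed secret or a tokenization service rather than a hard-coded salt.

```python
import hashlib

SALT = "replace-with-a-secret-salt"  # in practice, a securely stored secret

def pseudonymize(record: dict, direct_identifiers=("name", "email", "phone")) -> dict:
    """Hash direct identifiers so data can enter an AI pipeline with less exposure."""
    out = {}
    for key, value in record.items():
        if key in direct_identifiers:
            digest = hashlib.sha256((SALT + str(value)).encode()).hexdigest()
            out[key + "_hash"] = digest[:16]  # records stay linkable without raw identity
        else:
            out[key] = value
    return out

print(pseudonymize({"name": "Jane Doe", "email": "jane@example.com", "zip": "10001"}))
```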

For further information, see How-to guides: How to implement privacy by design within your organization, How to evaluate the effectiveness of a data security or data privacy compliance program, and Checklist: Steps to mitigate risks associated with AI use in business.

Section 3 – Overview of the laws and regulations affecting AI use

3.1 Legal and regulatory landscape

As of 2025, there is no comprehensive federal AI law in the United States. AI regulation is piecemeal and, in the absence of federal legislation, many states have moved to fill the void by enacting legislation regulating AI, including California, Colorado, Utah, and Texas. Whilst most countries do not have AI-specific regulation, state and international laws are developing to regulate how companies use AI: see, for example, the EU AI Act.

In October 2023, President Biden issued an Executive Order that included guidelines for AI safety, security, and transparency. That Order, however, was revoked by President Trump on January 20, 2025. In addition, the National Institute of Standards and Technology (NIST) (an agency of the Department of Commerce) has published the AI Risk Management Framework, setting out guidance on AI governance to support compliance with regulations. Various federal agencies, such as the Cybersecurity and Infrastructure Security Agency (CISA), have also issued guidelines to steer compliance.

In 2023, the SEC proposed rules relating to the use of AI by broker-dealers and investment advisers. The rules, if adopted in their current form, would require firms that use predictive data analytics (PDA) to take certain steps to address conflicts of interest associated with their use of PDA and similar technologies to interact with investors. The proposed rules were intended to prevent financial services firms from placing their commercial interests ahead of investors’ interests. 

No final vote has been taken on adopting the proposed rule. The mere fact of the proposal, however, should be taken as a clear indication that the SEC is focusing attention on AI. Companies involved in the securities industry should monitor and evaluate the activity of the SEC carefully for the potential adoption of new regulations, and for any enforcement actions brought based on the use of AI.

3.2 Risk and compliance assessment

The use of AI is governed by many of the same laws protecting equal opportunity and consumer protection as govern other corporate functions. For example, employment discrimination claims may result from decisions made by an AI model that have the effect of discriminating based on race, gender, or other protected characteristics. Existing employment discrimination laws apply even if decisions are made by an automated system. Boards and senior managers should ensure the business has risk management processes in place to monitor compliance with legal and regulatory requirements.

In addition to employment discrimination, consumer protection is a critical area of regulation for AI. The Federal Trade Commission (FTC) enforces laws that protect consumers from unfair or deceptive practices, which include the misuse of AI technologies. See the FTC – AI and Your Business series.

For more information on associated risks see How-to guides: Understanding AI-driven risks, Understanding the risk of negligence claims when using AI, and Checklist: Steps to mitigate risks associated with AI use in business.

3.2.1 Discrimination and bias

AI systems can perpetuate bias or discrimination without any intent on anyone’s part to do so. Algorithmic bias, defined by the Alan Turing Institute as ‘[u]nfairness that can arise from problems with an algorithm’s process or the way the algorithm is implemented, resulting in the algorithm inappropriately privileging or disadvantaging one group of users over another group’, may lead to discriminatory outcomes. Algorithmic biases ‘often result from biases in the data that has been used to train the algorithm, which can lead to the reinforcement of systemic prejudices around race, gender, sexuality, disability or ethnicity.’

Algorithmic bias can stem from various factors, including gender, racial, national, and linguistic biases in design. System designers’ implicit or explicit biases can inadvertently influence algorithms, leading to biased outcomes. For example, racial bias can occur due to underrepresentation or misrepresentation of certain racial groups in datasets. This type of bias is not always the result of a conscious decision; it may simply reflect default assumptions about a group. See How-to guide: Risks and liabilities of AI algorithmic bias.

3.2.2 Consumer protection

Consumer protection laws have focused on disclosures regarding the use of AI. In May 2024, Utah became the first state to pass legislation on this matter, the Artificial Intelligence (AI) Policy Act (the AI Policy Act). The AI Policy Act regulates the way businesses and individuals use, and disclose the use of, generative AI in their business practices. In a commercial contract context, businesses must disclose interactions with generative AI if asked. The focus of such state laws is on consumer interactions, not on the content itself. Furthermore, the AI Policy Act prohibits businesses from avoiding consumer protection or fraud liability by claiming the generative AI was at fault.

Several states, including Colorado, Connecticut, Texas, and California, have either passed or are in the process of advancing legislation regarding transparency in AI. Many of these bills were inspired by the EU Artificial Intelligence Act.

On May 17, 2024, Senate Bill 24-205, the Colorado Anti-Discrimination in AI Law (ADAI), was enacted. The ADAI provides safeguards for consumers against algorithmic discrimination in decisions made by high-risk AI systems. The law is scheduled to become effective on February 1, 2026.

On April 24, 2024, the Connecticut Senate passed SB 2. However, SB 2 failed to pass the Connecticut House after the governor and business community raised concerns about potential negative economic impacts. The original text of SB 2 provided for a risk-based approach and would have been a leading state legislative framework for AI oversight, becoming the first US legislation to govern private-sector AI development and deployment on a scale comparable to the EU AI Act.

In 2024, the California Legislature was especially active in passing AI-related bills, which impose various obligations on AI users. These obligations run the gamut from safety and consumer transparency requirements to privacy safeguards, reporting requirements, protections for performers and deceased celebrities, and election integrity. On September 19, 2024, California Governor Newsom signed into law SB 942, legislation aimed at enhancing transparency for content created or modified using generative AI (GenAI). The primary focus of SB 942 is "Covered Providers," defined as entities that create, code, or produce a generative AI system with over 1,000,000 monthly visitors or users that is accessible to the public in California. The law describes a GenAI system as artificial intelligence capable of generating synthetic content, such as text, images, video, and audio, that mimics the structure and characteristics of its training data.

3.2.3 Data protection laws

AI depends on the accumulation and analysis of data, and some systems and applications require immense quantities of data to produce useful or accurate results. There are multiple data protection and privacy laws with which companies must comply. Two of the most prominent bodies of privacy law are the General Data Protection Regulation (GDPR) in the EU and the California Consumer Privacy Act (CCPA) in the United States. Corporate management must be able to verify that all appropriate steps to comply with these laws are taken and that adequate staff training is provided before the business adopts AI. This is an ongoing and continuous process, as this area of law is still evolving.

For further information, see How-to guides: How to manage your organization’s data privacy and security risks and How to prepare and respond to a governmental investigation or enforcement action for violations of US privacy laws, Checklist: Responding to a data breach, and Quick view: Collection and use of non-consumer data.

3.2.4 Contracts

Contract law in the United States is largely a matter of state common law; however, some recent legislative enactments affect AI and contract law. The law regarding AI in contracts touches upon diverse legal domains, including intellectual property rights, data privacy (such as the California Consumer Privacy Act), insurance, and the common law of contracts. For example, Colorado law prohibits insurers from using any external consumer data and information source, algorithm, or predictive model in any insurance practice that unfairly discriminates against an individual based on race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity, or gender expression. Tennessee’s Ensuring Likeness Voice and Image Security (ELVIS) Act prohibits the use of AI to mimic an artist’s or performer’s voice without permission.

3.2.5 Intellectual property rights

When using AI, a business must consider intellectual property rights. This includes copyright, patents, trade secrets, and use of third-party AI technology. This is not limited solely to the AI design, but also relates to the data being used to train the AI and the resulting outputs.

A company that intentionally uses AI-generated material that was produced with infringing data will likely be liable for infringement. For further information, see Checklist: Drafting AI use contracts with third-party tech vendors.

Section 4 – Ethical considerations when using AI

Businesses need to demonstrate that they are socially responsible by considering ethics. Ethical concerns are related to the values and morality involved in the use of AI. Because AI is a machine, it does not have the conscience of a human being, and therefore could produce outcomes that are immoral or counter to a company’s ethics.

Since 2002, the Sarbanes-Oxley Act has required publicly traded companies to adopt a code of business ethics and to create an internal procedure by which employees may report fraud or ethical violations. While the Sarbanes-Oxley Act only applies to publicly traded companies, its requirements have focused public attention on ethical corporate behavior and driven companies to develop (and advertise) their commitment to ethical values. Communicating to customers and investors that the organization has a commitment to ethical behavior helps build public trust and continued confidence in the business and its brands.

4.1 Practical guidance

4.1.1 Corporate ethics

A corporate ethics statement is a clear, defined statement that demonstrates a commitment to ethical conduct as part of a company’s mission and values.

In addition, a written ethics policy can provide employees and stakeholders with a set of rules and principles for responsible and ethical AI use within a business. An AI ethics policy can be standalone or, if a company has an existing ethics policy, that policy may be reviewed and revised (if necessary) to address the issues raised by the adoption of AI. Regardless of whether the AI policy is integrated within an existing policy or stands alone, it is likely to be tailored to the nature and type of the business, taking into account specific industry or regulatory needs. See Checklist: Developing a policy for the ethical use of AI.

Consideration should be given to processes for redress and how complaints should be dealt with (particularly where end users do not agree with the decisions or outcomes produced). 

4.1.2 Fairness – non-discrimination

The use of AI should produce results that are fair to all classes of people. All groups affected by the AI’s outcomes should be considered when training the system and be represented in its datasets, so that every party impacted by the outcomes is accounted for. When testing for fairness, a company should check that the generated outcomes do not over- or under-represent any group (eg, a particular gender or race) and should have strategies in place to manage any imbalance.
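
One simple, illustrative screen compares favorable-outcome rates across groups. The data and group labels below are hypothetical, and the ‘four-fifths’ threshold, drawn from US employment-selection guidance, is a rule of thumb rather than a legal safe harbor.

```python
from collections import defaultdict

def approval_rates_by_group(decisions):
    """decisions: iterable of (group_label, approved) pairs; returns rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = approval_rates_by_group(decisions)

# Flag any group whose favorable-outcome rate falls below four-fifths
# of the highest group's rate (a common screen, not a legal standard).
best = max(rates.values())
flagged = {g: round(r, 2) for g, r in rates.items() if r < 0.8 * best}
print("Rates:", {g: round(r, 2) for g, r in rates.items()})
print("Potential disparity:", flagged)
```

A flag from a screen like this does not establish discrimination; it identifies where the more detailed strategies mentioned above should be applied.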

4.1.3 Transparency

Transparency in AI refers to knowing what data the AI is using and how it is using that data to create the outcomes it has been programmed for. One way a business can give employees, stakeholders, and sometimes even the public transparency about the AI being used is by providing ‘explainability’ reports that share documentation of the system’s inputs and outputs. The company should also regularly educate stakeholders and employees about the operation of its AI applications and conduct assessments of the impact AI is having on the company.
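
Building on the hypothetical record-keeping sketch in section 2.3.3, an explainability report could be generated mechanically from the decision log. The `transparency_report` helper below is an assumption for illustration and presumes the log file already contains entries.

```python
import json
from collections import Counter
from pathlib import Path

def transparency_report(log_path="ai_decision_log.jsonl"):
    """Summarize recorded AI outputs into a plain-text report for stakeholders."""
    entries = [json.loads(line)
               for line in Path(log_path).read_text(encoding="utf-8").splitlines()]
    outcomes = Counter(e["output"] for e in entries)
    lines = [f"AI transparency report: {len(entries)} recorded decisions"]
    for outcome, count in outcomes.most_common():
        lines.append(f"  {outcome}: {count} ({count / len(entries):.0%})")
    return "\n".join(lines)

print(transparency_report())
```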

4.1.4 Privacy and data protection

Any application that uses large amounts of data, such as an AI system, will necessarily raise concerns regarding the security of that data. Insofar as personal data is used, the protection of the privacy of the data subjects becomes crucial. Failure to pay attention to these two factors can result in significant liability for the organization using the data, as well as serious negative publicity. The ethical implications of failing to protect data or privacy are also very real. For more information on applicable laws, see section 3.2.3. 

4.1.5 The impact on society

The impact AI has on society is already apparent in its current uses (eg, by transforming the job market) and will grow as the technology evolves. While some uses of AI are beneficial for society, such as discovering workplace hazards that might otherwise be overlooked, others are detrimental (eg, replacing employees and causing layoffs). Before a company implements an AI system, it should consider how that specific AI will affect both the company itself and wider society.

4.1.6 Unethical outcomes

An AI system will produce a suggested decision intended to achieve a desired effect, but it cannot contextualize the information it takes in or judge how the suggested decision would really apply in the world. Resulting outcomes can include biased decision-making or unfair treatment, so human oversight is crucial to mitigate these risks. For example, if a company were looking for ways to be more competitive in the market and AI were to suggest a scheme of predatory pricing, following that advice would be unethical and illegal. The necessary human oversight will help to identify and address issues and ensure that the system operates fairly and transparently.

4.2 Does the system lead to job displacement?

The automation of tasks could have drastic impacts on jobs within the company if not thoroughly monitored and considered. If AI fully automates a task that would otherwise be done by an employee, allowing AI to take over that task completely will result in job displacement. The fear of job displacement in some industries has led to labor unrest and increasingly militant and vocal union activity. For example, in 2023, the Writers’ Guild of America engaged in a months-long strike against the Alliance of Motion Picture and Television Producers in which the use of generative AI in TV and film writing was a key issue. The agreement ultimately reached provides that writers are not required to use AI in writing scripts, and if writers are given AI-generated scripts as starting material, the human writers will receive full compensation and credit as the writers.

4.2.1 Reputation management

While using AI to increase efficiency could benefit the finances of the business, reducing the workforce can cause significant reputational harm in the public eye. If a company is considering reducing its number of employees due to AI, there should be a public relations plan in place so that the company does not suffer in the community, among investors, and in the media. If no such plan exists, the company may want to consider holding off on layoffs until one is in place.

Note that during the Writers’ Guild strike referenced above, an AP public opinion survey showed 50% of respondents supporting the striking writers, while only 40% supported the studios.

4.2.2 Risk of personal bias

Ethical concerns are, to a large extent, subjective. They may vary depending on cultural and social norms, as well as on individual perspectives. What is considered ethical by one group may not be seen the same way by another, and an individual’s assessment of a situation will often depend on the context in which the ethical judgment is made. Businesses operating globally and across borders must consider cultural and social differences that may impact ethical policies. When developing policies, businesses should be mindful of their surroundings as well as their own values. 

4.3 Guidelines and best practices

While there are guidelines and best practices for ethical AI use (such as fairness, accountability, and transparency), they are not always legally enforceable. However, ignoring these best practices could still result in unlawful discrimination and pose operational or reputational risks. NIST has developed guidelines for the adoption and use of AI, and the MIT Schwarzman College of Computing and the MIT Washington Office have issued a series of policy briefs on AI that can provide insight and best practices. Other academic sources, such as the Harvard Law School Forum on Corporate Governance, may provide suggestions and guidance, and some federal agencies have adopted policies and practices that could be incorporated into an organization’s own practices. For further information, see Checklist: Practical considerations prior to using AI in commercial contracts.

4.3.1 Compliance with ethical guidelines

The ethical guidelines that a company carefully crafts for its AI use and daily operations are critical to ensuring that the business remains true to its core corporate values. Without clear and concise guidelines in place, the business may stray from responsible business practices and fail to promote societal good.

4.3.2 Continuous monitoring and improvement

A company must ensure that it has a system or process in place to monitor and audit its AI systems, not just for functionality and security, but also against the ethical considerations laid out in its own guidelines. This involves real-time monitoring of outcomes and decisions and analysis of various aspects of AI use, including data quality, system performance, and compliance. Regular testing will help prevent vulnerabilities and identify and rectify any risks of harm. For example, new information or newly discovered biases will require the company to modify its programming to keep the AI compliant with its ethical commitments. For further information, see Checklist: Developing a policy for the ethical use of AI.
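
As a minimal sketch of what outcome monitoring could look like, the `monitor_outcome_drift` helper, baseline rate, and tolerance below are illustrative assumptions; production monitoring would track many more signals (data quality, latency, security events) and feed alerts into the escalation channels described in section 2.4.

```python
def monitor_outcome_drift(baseline_rate, recent_decisions, tolerance=0.10):
    """Compare the recent favorable-outcome rate against a baseline; flag drift."""
    recent_rate = sum(recent_decisions) / len(recent_decisions)
    if abs(recent_rate - baseline_rate) > tolerance:
        return (f"ALERT: rate moved from {baseline_rate:.0%} to {recent_rate:.0%}; "
                "escalate to the AI ethics committee for review")
    return f"OK: rate {recent_rate:.0%} within tolerance"

# Example: a 60% baseline approval rate against ten recent, mostly negative decisions.
print(monitor_outcome_drift(0.60, [1, 0, 0, 0, 1, 0, 0, 1, 0, 0]))
```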

Additional resources

Artificial Intelligence Governance and Auditing – List of AI Governance Tasks
Directors & Boards – ‘The Director’s AI Checklist’ by Michael Chertoff and Allan Grafman
MIT AI Risk Repository – What are the risks from Artificial Intelligence?
The Open Worldwide Application Security Project – ‘LLM AI Cybersecurity & Governance Checklist’

Related Lexology Pro content

How-to guides:

Risks and liabilities of AI algorithmic bias
Understanding AI-driven risks
AI and smart contracts
Understanding the risk of negligence claims when using AI
How to use AI to develop ESG disclosures
How to use AI to improve workplace safety
How to use AI to enhance Diversity, Equity and Inclusion (DEI)

Checklists:

Steps to mitigate risks associated with AI use in business
De-identification of data used by AI systems
Drafting AI use contracts with third-party tech vendors
Developing a policy for the ethical use of AI
Practical considerations prior to using AI in commercial contracts

Quick views:

Overview of AI in business
Key AI terms

Reliance on information posted:

While we use reasonable endeavours to provide up to date and relevant materials, the materials posted on our site are not intended to amount to advice on which reliance should be placed. They may not reflect recent changes in the law and are not intended to constitute a definitive or complete statement of the law. You may use them to stay up to date with legal developments, but you should not use them for transactions or legal advice, and you should carry out your own research. We therefore disclaim all liability and responsibility arising from any reliance placed on such materials by any visitor to our site, or by anyone who may be informed of any of its contents.