Introduction
This guide will assist in-house counsel, private practice lawyers, and risk and compliance professionals in developing the procedures and policies for a vulnerability disclosure program (VDP) to strengthen cybersecurity in your organization. A VDP is a set of procedures and policies enabling external researchers to safely report security flaws in your IT systems. This guide outlines the process for developing a VDP that allows third parties to remotely examine your organization’s online systems and services. A VDP for hands-on or in-person examinations may raise legal issues beyond the scope of this guide.
The guide covers the following issues:
- Understand the scope of your VDP needs
- Identify and address potential cybersecurity liability risks, including inaction
- Design of a VDP
This guide may be read in conjunction with How-to guide: How to manage third party supply chain data privacy, security risks, and liability; and Checklists: Privacy and data security law training; Drafting internal privacy policy, practices, and procedures; and Completing an incident response assessment.
Section 1 – Understand the scope of your VDP needs
Cyberthreats are growing and spreading even as you read this, potentially causing immense damage to your organization’s IT systems and data. Undiscovered vulnerabilities in your organization’s network make it easy for bad actors like hackers and data miners to breach internal networks and gain access to sensitive data. It is therefore important for organizations to develop appropriate procedures and policies for a VDP to identify and secure any vulnerabilities within their IT systems.
For purposes of this Guide, a vulnerability is defined as one or more weaknesses within a product that forms part of an organization’s IT system (eg, software), and that unauthorized parties can exploit to do the following:
- modifying or accessing data;
- interrupting the proper execution of operations; and
- performing actions that the party exploiting the weakness is not authorized to perform.
Securing vulnerabilities should be a top priority for your organization. According to a 2025 study by IBM, the average total cost of a data breach is about $4.44 million per incident globally. The development of a VDP is therefore an essential step for your organization.
A VDP is a set of procedures and policies to enable researchers outside your organization to safely report security flaws in your IT systems. It can be an important component of your organization’s cybersecurity program. A VDP is designed to use the help of ‘ethical hackers’ and other outside security researchers engaged by the organization to discover flaws before bad actors find and exploit them. It is just one component of a cyber-vulnerability management program. Ensuring a proper structure and adequate staff to manage alerts, vulnerability reports, and remediation is also essential to cybersecurity.
Implementing new technologies, such as generative AI models and third-party applications, throughout the organization, along with the continued use of Internet of Things (IoT) devices and SaaS applications, increases the attack surface and places additional strain on security teams.
To address this challenge, an organization may find it beneficial to implement AI tools and automation to bolster security prevention strategies - particularly with regard to attack surface management, red teaming, and posture management.
According to IBM, organizations that have leveraged AI and automation in their security prevention efforts have experienced significant benefits, saving an average of US$1.9 million in breach costs compared with those that have not adopted these technologies.
1.1 Objectives and characteristics of a VDP
Your VDP’s objectives will depend on the organization’s security goals and priorities. The overall objective is always to identify and repair weak points in security systems, but the focus of the inquiry is directed towards the areas of the greatest concern to the organization. VDP policies generally address the following:
- the person or persons within the organization who will receive information about security flaws;
- the scope of disclosure of vulnerability reports to affected parties or the public; and
- the authorized methods of discovering vulnerabilities in the organization’s systems, services, and products.
1.2 Common mistakes
There are several common mistakes to avoid when developing a VDP. For instance, informal solicitation of vulnerability reports without a structured VDP can invite dangerous consequences, such as unauthorized research methods, unauthorized access to privileged data, and premature public exposure of vulnerabilities.
Additionally, you should define the VDP to invite a manageable number of reports. Too few relevant reports or too many irrelevant reports could mean that critical vulnerabilities are missed. Another common mistake is failing to respond to vulnerability reports or to address the vulnerabilities that have been identified through the VDP. If the organization fails to respond to bona fide vulnerability reports, outside security researchers may think that the organization is not serious about the VDP, and those researchers may stop looking for vulnerabilities. The best way to avoid these pitfalls is to have a formal VDP supported by appropriate internal personnel.
1.3 Leverage third-party expertise to protect stakeholder interests
A successful VDP is based on trust and cooperation among a variety of parties, such as outside security researchers, in-house staff (eg, software developers and IT security specialists), and others, potentially including industry groups and government agencies.
VDPs call on the skills of outside security researchers to aid in protecting the organization from malicious actions such as hacking and ransomware. The various stakeholders in a VDP (eg, your organization’s security team, IT personnel, legal counsel, and public and media relations personnel) have potentially different, and conflicting, priorities with regard to disclosure of vulnerabilities. For example, consider these possibilities:
- vendors, developers, and manufacturers of hardware and software might prefer to wait to publicly disclose the vulnerability until some kind of mitigation (such as a patch) becomes available, in order to avoid loss of business; and
- on the other hand, from a customer’s perspective, early public disclosure of the vulnerability might be the fastest way to enable them to protect themselves from exploitation by attackers.
Other interested parties might include industry organizations and government agencies who assist in coordinating vulnerability disclosure and response.
Finding undiscovered vulnerabilities involves considerable expertise, and at the most basic level requires these skills:
- up-to-date awareness of common coding mistakes;
- up-to-date awareness of earlier vulnerabilities; and
- knowledge and skills to search for vulnerabilities in code.
So-called ‘ethical hackers’ are people with expertise in these areas who respect legal rules and use their abilities to identify cybersecurity vulnerabilities. Making use of these good actors can help organizations patch vulnerabilities before they are maliciously exploited. Conducting in-house testing can cost an organization significant time and money. The worldwide community of security experts ready to conduct good-faith research is a valuable ally.
1.3.1 Bug bounties (vulnerability reward programs)
A bug bounty (vulnerability reward program) is a cash reward for reporting vulnerabilities to your organization. This type of vulnerability disclosure is growing in popularity. A bug bounty can be an effective way to attract researchers to engage with your organization’s VDP. A bug bounty can also be targeted in scope; for instance, it could be aimed toward protecting the organization’s most critical assets rather than applying equally to all potential vulnerabilities. Both small organizations and giants such as Google and Microsoft have set up bug bounties.
1.3.2 Ensure in-house expertise and capacity
Your organization’s incident response team (IRT) should include staff qualified to answer questions about the scope of the VDP and what those conducting research under the VDP are and are not authorized to do. Members of the IRT should also stand ready as designated points of contact to receive vulnerability disclosure reports.
The IRT should have adequate resources and tools. Its size and scope should also be appropriate to the organization’s structure and culture.
See further Checklist: Drafting internal privacy policy, practices, and procedures.
Section 2 – Identify and address potential cybersecurity liability risks, including inaction
Organizations face a vast array of legal issues in cybersecurity. Taking steps to discover vulnerabilities through a VDP brings its own set of legal risks to your organization and security researchers. For example, inviting outsiders to search for vulnerabilities could breach laws concerning unauthorized access and protection of customers’ personal information.
On the other hand, failing to take reasonable measures to ensure cybersecurity can also expose your organization to legal liability. More and more, states are enacting legal requirements that impose a duty to take a proactive approach to cybersecurity. Meeting this duty usually requires ongoing monitoring and updating of cybersecurity measures.
In other words, taking action is dangerous, but so is inaction. Your organization must find a balance between these two dangers.
2.1 Ensure that the VDP complies with legal standards
As mentioned above, inviting security researchers to participate in your organization’s cybersecurity program poses potential legal risks both for your organization and for the researchers. A well-designed VDP reduces the likelihood of legal violations by clarifying legally authorized vulnerability research and disclosure.
You may need to consider some of the federal legislation listed below.
- Computer Fraud and Abuse Act of 1986 (CFAA) – this federal law makes it a crime, punishable by imprisonment or a fine, to access, without authorization, a computer or computer system used by a financial institution, federal government agency, or any organization or individual involved in interstate or foreign commerce or communication. Unfortunately, the law fails to carve out exemptions or distinguish clearly between legitimate security research and testing, and malicious behavior. Some argue that the CFAA’s broad coverage actually chills the development of computer security. See 18 USC section 1030. An ‘ethical hacker’ could be subject to prosecution for accessing a part of a system without explicit authorization to do so, although in 2022 the Department of Justice (DOJ) revised its policy on charging violations of the CFAA and stated that good-faith security research should not be charged. Good-faith security research means accessing a computer solely for purposes of good-faith testing, investigation, and correction of a security flaw or vulnerability, where such activity is carried out in a manner designed to avoid any harm to individuals or the public, and where the information derived is used primarily to promote the security or safety of the class of devices, machines, or online services to which the accessed computer belongs, or to protect the users of those devices, machines, or services.
- Digital Millennium Copyright Act of 1998 (DMCA) – like its predecessor, the CFAA, the DMCA was aimed at new forms of criminal misuse of technology. The DMCA’s anti-piracy protections broadly prohibit circumvention of technical protection measures (TPMs) that are meant to control access. But these anti-circumvention provisions contain important exceptions to protect at least some legitimate security testing and encryption research. The DMCA also includes the possibility of additional temporary exemptions to adapt to changing technologies. See 17 USC sections 1201-1205; Exemption to Prohibition on Circumvention of Copyright Protection Systems for Access Control Technologies, 89 Fed Reg 85437 (2024).
- Cybersecurity Information Sharing Act of 2015 (CISA) – this federal law allows information sharing on cyberthreat indicators between the US government and technology and manufacturing companies. One aim of the law is to make it easier for companies to share cyberthreat information by decreasing the possibility of incurring liability for doing so. See 6 USC sections 1500-1510.
A huge number and variety of state laws restrict third-party access to personal data. A VDP program could invite unauthorized access to this data. Therefore, careful consideration of personal data protections is essential in designing a VDP. It is especially important to consider whether you should obtain customer disclosure or consent. Additionally, inviting outsiders to engage in vulnerability testing may even trigger breach disclosure requirements, which are in place in all 50 states, and which are continuously evolving.
2.2 Determine whether the response to the VDP satisfies legal duties to implement ‘reasonable’ cybersecurity measures
Locating and addressing vulnerabilities is essential to developing a cybersecurity program that will be effective for your organization. Failure to address security vulnerabilities may expose your organization to penalties and fines, as well as civil liability for damages to parties whose private data may be exposed when an outsider gains unauthorized access to your organization’s internal networks. There are many federal and state laws and regulations requiring organizations to have reasonable security measures in place to protect customers’ personal data.
Enforcers of cybersecurity compliance include a variety of state and federal agencies, whose requirements may even overlap, depending on your organization’s business sector. Some are listed below.
- The Federal Trade Commission (FTC) has the power to enforce data security. This includes the authority to bring actions against organizations that inadequately protect the security of personal information. The FTC treats these cases as unfair trade practices. See 15 USC section 45(a).
- The Securities and Exchange Commission (SEC) also has the power to sanction organizations under its jurisdiction for failures in cybersecurity policies and procedures. The SEC’s ‘safeguards rule’ requires covered organizations to adopt policies and procedures reasonably designed to protect customer data. See 17 CFR section 248.30(a).
- The US Department of Health and Human Services (HHS) enforces privacy and security under the Health Insurance Portability and Accountability Act of 1996 (HIPAA). It covers all entities that process private health information. See 45 CFR Parts 160, 164.
At least half of US states require private organizations that own, license, or maintain personal or consumer information to implement ‘reasonable’ cybersecurity procedures and practices. For example, New York’s Stop Hacks and Improve Electronic Data Security Act (SHIELD Act) requires companies to implement safeguards for protecting private information. See NY Gen Bus Law section 899-bb. This includes risk testing, monitoring, detection, and assessment in areas such as the design of networks and software, as well as information processing, transmission, and storage.
A few other states provide incentives for private sector entities to implement reasonable security practices, including VDPs. Additionally, more and more legislation addressing vulnerability disclosure is being considered by state legislatures.
Section 3 – Design of a VDP
Tailor the VDP to your organization’s profile, including its unique risks and priorities. This includes considerations such as the type of data your organization handles, its current security measures, whether there are vulnerabilities that may be particular to your organization or industry, and the potential effect on third parties. See ‘Examples of VDPs’ under the Additional Resources section of this document. These examples can be used as a guide for areas of focus of the VDP for your organization, as modified and adapted to your specific use.
3.1 Tailor the VDP in objective and scope
Your organization should designate which network and system components and data the VDP covers, and direct those conducting the VDP to limit their research to these areas. This decision depends on the following factors:
- the sensitivity of the data your organization processes;
- current security systems;
- how well sensitive data or networks can be segregated and protected; and
- legal or contractual restrictions on data access.
If sensitive data is involved, the organization must determine whether it is appropriate to impose restrictions on accessing or otherwise processing that data or the information gleaned from it. It is crucial to designate which systems or data fall within the scope of the program, such as by identifying the data that is off limits and the data that is fair game. Note that imposing too many restrictions, or restrictions adopted without a determination that they are necessary or appropriate, may limit the effectiveness of the VDP by leaving some of a system’s vulnerabilities undiscovered.
It may also be appropriate to prohibit certain methods used to discover vulnerabilities such as the following:
- social engineering (ie, deception);
- denial-of-service attacks; and
- any scanning or penetration testing methods known to harm the organization’s systems.
The VDP could also differentiate among various vulnerability types and include or exclude some of them. These exclusions or inclusions might address the following:
- bugs;
- password management;
- misconfigured systems; and
- inadequate staff security training.
Your organization should also determine which of its networks or data might involve third-party interests. It is advisable to obtain specific authorization from those third parties before including those networks and data in the VDP. For instance, a VDP could involve the organization’s cloud storage. In that case, obtaining permissions and authorizations may be crucial. Consider the data security interests of the cloud storage provider and of the other customers who store data with the same provider.
3.2 Start with publicly available templates
There is no formal standard for an organization’s vulnerability reporting procedures. The basic procedure generally follows this order:
- discovery;
- submission of vulnerability report to the organization;
- investigation period; and
- full disclosure upon patch or expiration of time.
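The sequence above can be sketched as a simple state machine. A minimal illustration in Python follows; the stage names and the 90-day disclosure window are assumptions chosen for this sketch, not requirements of any standard, and the agreed window should reflect your organization’s own policy.

```python
from datetime import date, timedelta

# Illustrative stages of the basic reporting procedure described above.
# The names and the default 90-day window are assumptions for this sketch.
STAGES = ["discovery", "submitted", "investigation", "disclosed"]

class VulnerabilityReport:
    def __init__(self, reported_on: date, disclosure_window_days: int = 90):
        self.stage = "discovery"
        self.reported_on = reported_on
        # Full disclosure upon patch or expiration of the agreed window.
        self.disclosure_deadline = reported_on + timedelta(days=disclosure_window_days)

    def advance(self) -> str:
        # Move to the next stage, stopping at 'disclosed'.
        i = STAGES.index(self.stage)
        if i < len(STAGES) - 1:
            self.stage = STAGES[i + 1]
        return self.stage

    def may_disclose(self, today: date, patched: bool) -> bool:
        # Disclosure is appropriate once a patch ships or the window lapses.
        return patched or today >= self.disclosure_deadline

report = VulnerabilityReport(date(2025, 1, 15))
report.advance()  # submitted
report.advance()  # investigation
print(report.may_disclose(date(2025, 3, 1), patched=False))   # → False (window still open)
print(report.may_disclose(date(2025, 4, 20), patched=False))  # → True (window lapsed)
```

A real program would attach remediation status, stakeholder notifications, and any individually negotiated deadlines to each report; the point here is only that each stage and deadline should be explicit and tracked.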
If your organization is preparing its first VDP, there are many resources available to aid in developing a suitable program. The Cybersecurity Unit of the DOJ has issued a framework for designing VDPs. There are also a number of publicly available VDP templates. These are good starting points for creating a VDP suited to your organization. For instance, the National Telecommunications and Information Administration (NTIA) in the US Department of Commerce has prepared a simple ‘early stage’ template VDP. The US Cybersecurity & Infrastructure Security Agency has also developed a VDP template for use by federal executive branch agencies that could be adapted for use by a non-governmental organization.
The basic VDP should include the following elements.
- Brand promise – demonstrate your organization’s commitment to customers, the market, and anyone else potentially impacted by your organization’s security vulnerabilities by describing your organization’s work to address security and future commitments.
- Scope of initial program – outline for security researchers which systems and capabilities are open to research and which are closed. Encourage researchers to contact your organization with questions before engaging in conduct the VDP does not clearly address.
- ‘No legal action if . . .’ clause – inform researchers in clear and unambiguous language that good-faith efforts appropriately reported will not result in legal action against them by the organization, and include information about relevant laws.
- Communication mechanisms and process – clearly identify for researchers how to submit and report vulnerabilities, including security precautions, requirements, and your organization’s response timeframe. Reports could be filed via a secure reporting form or a dedicated email account, for instance.
- Disclosure deadlines – clearly state that, after giving your organization notice, the researcher is to keep the vulnerability confidential until it is fixed, until a set amount of time has passed, or as otherwise individually agreed. Your organization should also consider suggesting a timeframe for researchers to report vulnerabilities to you, such as upon discovery or upon validation.
- Non-binding submission preferences and prioritizations – set expectations based on priorities and submission volume, adapting and evolving these according to what types of issues the organization thinks important. Consider including dispute resolution alternatives such as referral to an outside body.
- Prior permission to test required for certain systems or data – depending on the nature of the system or data involved, it may be advisable to require that researchers obtain prior permission before engaging in any testing or investigation.
- Versioning – tracking the evolution of the VDP (optional).
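One common way to publish the communication channel and policy location contemplated by the elements above is a security.txt file (RFC 9116), served from your website at the path /.well-known/security.txt. The sketch below uses standard RFC 9116 fields, but every address and URL shown is a placeholder to be replaced with your organization’s own:

```
# Served at https://example.com/.well-known/security.txt (RFC 9116)
# All addresses and URLs below are placeholders.
Contact: mailto:security@example.com
Contact: https://example.com/report-vulnerability
Expires: 2026-06-30T00:00:00Z
Policy: https://example.com/vdp
Acknowledgments: https://example.com/security/hall-of-fame
Preferred-Languages: en
Canonical: https://example.com/.well-known/security.txt
```

The Policy field can point to the full VDP text, so researchers who discover a flaw can find both the reporting channel and the governing terms in one well-known location.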
The VDP should be prominently displayed on your organization’s website and advertised in other relevant locations, such as news and trade websites.
Unanticipated issues will arise. For instance, a researcher may inquire whether the VDP authorizes certain conduct which the VDP developers never considered. Expert legal and technical counsel should be available to answer questions and analyze previously unconsidered issues.
3.3 Develop the VDP response and handling process
The process for responding to and handling reported vulnerabilities deserves careful consideration.
As mentioned above, the discovery of a vulnerability could trigger legal breach notification requirements. It could also trigger notification obligations to third parties with whom your organization has a contractual relationship. For example, if the vulnerability touches your organization’s cloud storage, it may implicate the cloud services provider, who in turn has duties to its other customers.
Acknowledge vulnerability reports upon receipt. Your organization’s IRT or designated VDP team should verify the vulnerability details and, if necessary, consult expert counsel to determine the next steps. These next steps depend on the nature of the report.
If the report identifies a critical and previously unknown vulnerability, the response process may involve several steps. The organization may need to run its own testing. It may be helpful to keep the security researcher who submitted the vulnerability report involved in this process. That researcher may be able to provide background and configuration details on the vulnerability. Additionally, the researcher may be able to assist in testing and remediation or patching.
The response process will also involve reporting the findings contained in the report. A vulnerability report does no good if it does not reach the personnel responsible for taking steps to address the identified vulnerabilities. Furthermore, the vulnerability may need to be disclosed to a wider audience in order to ensure that maximum remediation or patching takes place. For instance, if your organization is a software vendor, simply patching the vulnerability is no guarantee that downstream customers or dependencies are protected from the vulnerability. In other words, it may be unsafe to assume that customers will always run the latest version of the software. Publication, in addition to updates to vulnerable code, can alert and protect the wider universe of potentially affected users.
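The triage step described above can be sketched in code. The example below assumes the team prioritizes reports by CVSS v3.1 base score; the severity bands follow the published CVSS v3.1 qualitative ratings, while the report records and queue are hypothetical:

```python
# Bucket incoming vulnerability reports by CVSS v3.1 base score so the
# highest-severity findings reach the remediation team first. Severity
# bands follow the CVSS v3.1 qualitative scale; report data is hypothetical.
def cvss_severity(score: float) -> str:
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "none"
    if score <= 3.9:
        return "low"
    if score <= 6.9:
        return "medium"
    if score <= 8.9:
        return "high"
    return "critical"

def triage(reports: list[dict]) -> list[dict]:
    # Acknowledge each report on receipt, then work highest scores first.
    return sorted(reports, key=lambda r: r["cvss"], reverse=True)

queue = triage([
    {"id": "VDP-101", "cvss": 5.3},
    {"id": "VDP-102", "cvss": 9.8},
    {"id": "VDP-103", "cvss": 3.1},
])
print([(r["id"], cvss_severity(r["cvss"])) for r in queue])
# → [('VDP-102', 'critical'), ('VDP-101', 'medium'), ('VDP-103', 'low')]
```

In practice the queue would also capture verification status, affected third parties, and notification obligations, but even a simple ordering like this helps ensure that critical reports are not lost among lower-severity submissions.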
Upon notification of a vulnerability, taking actions appropriate and necessary to cure the deficiency is imperative and should be included as a consideration in the development of the VDP. See, eg, In re Capital One Consumer Data Sec. Breach Litig., 488 F. Supp. 3d 374 (E.D. Va. 2020) (‘Capital One…developed a product called Cloud Custodian, whose purpose was to address the SSRF threat by encrypting data on the AWS servers. But these efforts were inadequate to secure Capital One customers' data.’). There, the inadequate response allowed a significant data breach to occur.
Responsible handling of vulnerability reports should also include a process for coordinated reporting when a vulnerability affects other organizations in addition to your own, and in case of disagreement with the security researcher regarding proper handling of the report.
A coordinated vulnerability disclosure (CVD) process involves gathering information from potentially multiple security researchers and coordinating safe information sharing among various stakeholders. The US Computer Emergency Readiness Team (US-CERT) Coordination Center in the Cybersecurity and Infrastructure Security Agency (CISA) of the Department of Homeland Security (DHS) offers a coordinated CVD process that may be adapted for use by your organization. In addition to traditional IT vulnerabilities, US-CERT covers industrial control systems (ICS), Internet of Things (IoT), and medical devices. The CVD’s purpose is to ensure safe public disclosure of newly identified vulnerabilities.
Additional Resources
Open Web Application Security Project (OWASP) – a nonprofit foundation providing guidance on how to develop, purchase, and maintain trustworthy and secure software applications. It publishes and updates a list of the top ten application security risks.
MITRE Corporation – a nonprofit corporation that maintains a list of publicly disclosed Common Vulnerabilities and Exposures (CVE).
National Institute of Standards and Technology (NIST) – formerly the National Bureau of Standards, this division of the US Department of Commerce publishes standards for vulnerabilities, including the National Vulnerability Database (NVD), where vulnerabilities are graded according to the Common Vulnerability Scoring System (CVSS).
Cybersecurity and Infrastructure Security Agency (CISA) – an agency of the Department of Homeland Security (DHS), it offers a coordinated vulnerability disclosure (CVD) process for vendors, service providers, and vulnerability reporters to publicly disclose vulnerabilities simultaneously, including the US Computer Emergency Readiness Team (US-CERT), as well as a variety of cybersecurity-related services available to public and private entities.
Google’s Project Zero – a team at Google performing vulnerability research on popular software, including mobile operating systems, web browsers, and open-source libraries; they report bugs to responsible parties and give a 90-day window to fix the bug before publication.
Examples of VDPs
US Federal Court System
US Department of Defense
US Department of Commerce
US Department of Energy
University of California at Berkeley
Nestle Global
CVS Health
Pacific Gas & Electric Company
Capital One Financial Corporation
United Airlines
Related Lexology Pro content
How-to guides:
How to determine and apply relevant US privacy laws to your organization
How to manage your organization’s data privacy and security risks
How to implement privacy by design within your organization
How to develop, implement, and maintain a US privacy law compliance program
How to develop, implement and maintain a US information and data security compliance program
How to evaluate the effectiveness of a data security or data privacy compliance program
How to draft a privacy policy, and privacy and data security provisions in contracts
How to manage third party supply chain data privacy, security risks, and liability
Incident response plan readiness and identification of a reportable data breach
How to prepare for and respond to a governmental investigation or enforcement action for violation of US privacy laws
Checklists:
Understanding privacy laws in the US
Completing a data privacy risk assessment
Drafting internal privacy policies and procedures
Completing a data and information security risk assessment
Drafting a consumer privacy policy
Developing key privacy and data security contractual terms and provisions (B2C)
Privacy and data security law training
Completing a data incident response plan assessment
Responding to a data breach
Privacy and data security due diligence in M&A
Quick views:
Key data privacy and data security terms
Collection and use of non-consumer data
Regulation of data brokers