Program details define how your program is presented to researchers and provide the context they need to understand your scope, expectations, and rewards. Well-maintained program details help attract the right researchers, reduce misunderstandings, and improve the overall quality of submissions.
Manage program details
⚙️Roles: Company Admin, Program Admin, Program Editor
You can manage your program details by opening your program and going to Details. From here, you can configure the information that researchers see before and after joining your program.
Program description
The program description is the first section researchers read about your program. In many cases, it serves as an introduction to your organization rather than a detailed explanation of the program itself.
If your brand is widely known, the program description can also be used to explain how your organization views and approaches security. This helps researchers understand your security mindset and sets the tone for collaboration before they engage with your scope in detail.
Industry
The program industry helps categorize your program and makes it easier for researchers to discover programs that align with their interests and expertise. By selecting the most relevant industry for your program, you improve how your program is recommended to researchers on the platform.
Bounties
The bounty table defines how researchers are rewarded for accepted submissions in your program. By configuring your bounty table competitively and consistently, you set expectations, encourage high-quality findings, and align researcher incentives with your security priorities and available budget.
Learn more about bounty setups and tiers.
💡 Note: Use the Intigriti bounty calculator to validate that your bounty amounts are competitive and aligned with current market expectations before finalizing your table.
Reward policy
The Reward Policy section complements your bounty table by explaining how researchers are rewarded beyond the standard configuration. It is displayed near the bounty table, ensuring important details are not hidden in long descriptions. You can use this section to:
Explain alternative rewards, such as swag or other incentives.
Clarify policies around duplicates, bonuses, temporary campaigns, or exceptional rewards.
Describe how custom bounties are calculated and which factors influence the final amount.
Clearly defining your reward policy helps manage expectations, builds trust with researchers, and encourages continued high-quality engagement.
Scope
Assets
In this section, you define which assets researchers are allowed to test and which are out of scope. Clear, structured, and well-scoped assets help focus testing efforts, reduce ambiguity, and ensure alignment with your organization’s priorities.
Start by creating company assets or adding existing ones to your program’s scope. Where possible, define assets precisely, for example by listing specific domains, URLs, or endpoints instead of using broad wildcard entries. This level of granularity makes it clear where researchers are authorized to test and helps prevent scope misunderstandings.
Once assets are added, organize them in a logical and consistent order by creating asset groups, for example by product line, environment type, or technology. This makes your scope easier to navigate for researchers and simpler to maintain over time.
Finally, assign each asset to an appropriate bounty tier. The tier should reflect the asset’s importance, sensitivity, technical complexity, and maturity, helping researchers understand where to focus their efforts and what level of reward they can expect.
Asset groups
Asset groups allow you to logically bundle related assets within your program. Instead of treating every asset in isolation, groups help you structure your scope in a way that reflects how your systems, products, or environments are organized.
You can create and manage asset groups from your program details.
Find the Bounties and assets section and click Edit.
Click Create group.
Enter a group name and add a description.
Add assets to the group by dragging and dropping existing program assets, or by selecting assets from the company assets overview.
Save your changes.
Asset groups help keep programs with a large number of assets concise by allowing you to place shared information at group level instead of repeating it on every individual asset. Group descriptions can be used to explain common characteristics, expectations, or testing guidance that applies to all assets in the group.
For researchers, asset groups improve clarity and navigation by showing how assets relate to each other and what they have in common. This makes it easier to understand the broader scope and quickly identify relevant targets, while keeping asset-level descriptions focused and easy to scan.
In scope
The In scope section is where you welcome researchers and guide them through what you want tested. Programs with clear scope definitions, well-explained priorities, and strong communication tend to attract more engaged researchers and higher-quality submissions. How you present this section often reflects how your team works with researchers throughout the program.
💡 Note: Use this section to personalize your program and set the tone for collaboration. We strongly recommend describing scenarios of interest and potential worst-case scenarios, such as cross-tenant data exposure or privilege escalation paths, to help researchers focus their efforts on what matters most to your organization.
You can also link to relevant documentation or include short guides directly in this section. Providing context upfront reduces the need for researchers to leave the platform to assess whether the program is relevant or worth their time.
💡 Note: If the In scope section becomes too long and detailed, consider moving supporting information to the program FAQ, asset group descriptions, or asset descriptions and referencing it here. This works especially well for topics like role and permission overviews or recurring practical questions.
Out of scope
This section defines what is explicitly out of scope, both in terms of assets and vulnerability types you are not looking for. Clearly documenting out-of-scope areas helps researchers focus on relevant targets and prevents unnecessary submissions.
The default out-of-scope template is maintained by Intigriti and is designed to reduce noise submissions. Noise submissions create friction for everyone involved: they lower validity ratios, disappoint researchers, increase the number of invalid submissions counted against platform limits, and add no value to your organization. You can customize this template if needed, but we strongly recommend reviewing any changes carefully.
⚠️ Beware: Removing items from the template without considering the impact can significantly increase noise. If you are unsure, the Intigriti team can provide guidance on what to keep or adjust.
Known issues
Known issues that cannot be resolved immediately should also be documented in the Out of scope section. Listing known issues helps researchers avoid spending time on already identified problems and allows them to focus on uncovering new and unknown blind spots. This is especially important for private or non-public programs, where closing findings as duplicates of already-known issues disappoints researchers and should be avoided whenever possible.
Learn more about similar submissions handling (duplicates, related findings, and known issues).
Rules of engagement
The rules of engagement define how researchers are expected to test your scope and interact with your systems. Use this section to specify expectations, restrictions, or prohibitions that are not already covered by the Out of Scope section or the standard Terms & Conditions.
You can also use this section to provide guidance on how high-impact proof of concepts (PoCs) should be demonstrated responsibly. This typically includes requiring the use of controlled test accounts and ensuring that no real users, data, or business operations are impacted during testing or exploitation. Examples:
Researchers must not complete real orders or transactions on weekend days
Researchers must not create accounts
In addition, this section can be used to set expectations around communication and collaboration, such as response time targets, how you handle follow-up questions, or whether you plan to award bonuses for exceptional reports.
💡 Note: Keep the rules of engagement concise and focused. Clear, well-structured guidelines improve readability and help researchers quickly understand how to test responsibly within your program.
Testing requirements
Testing requirements are displayed directly underneath the bounty table, making them highly visible to researchers. Clearly defining these requirements helps reduce ambiguity, supports incident response, and allows your teams to quickly recognize authorized testing activity.
Automated tooling
For Application, Registered, or Public programs, we strongly recommend defining a rate limit for automated tooling to avoid operational impact. If excessive traffic becomes an issue, you can update this requirement and notify researchers via a program update.
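To make the effect of such a requirement concrete, a researcher's custom tooling could throttle itself with a simple token bucket before each request. This is a minimal sketch; the limit of 5 requests per second is purely illustrative, not an Intigriti default.

```python
import time


class TokenBucket:
    """Simple token-bucket limiter: allows at most `rate` calls per second.

    Call acquire() before each outgoing request; it sleeps just long
    enough to stay within the configured rate.
    """

    def __init__(self, rate: float):
        self.rate = rate
        self.tokens = rate
        self.last = time.monotonic()

    def acquire(self) -> None:
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at the bucket size.
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens < 1:
            # Not enough budget: wait until one token is available.
            time.sleep((1 - self.tokens) / self.rate)
            self.tokens = 1
        self.tokens -= 1


limiter = TokenBucket(rate=5)  # hypothetical program limit: 5 requests/second
```

In practice the researcher would call `limiter.acquire()` immediately before each HTTP request their scanner sends, so bursts are smoothed out to the agreed rate.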
Use of @intigriti.me email address
Using the @intigriti.me email domain helps your teams identify legitimate researcher activity. This can prevent false positives, support fraud handling, and allow automated exceptions or privileges for test accounts.
Custom request header
Requiring a specific request header helps identify authorized testing traffic in logs and during incident response. Example: X-Bug-Bounty: <username>
Custom user agent
A custom user agent serves a similar purpose, making researcher traffic easier to trace and distinguish from regular users. Example: User-Agent: Intigriti - <username> - Mozilla/5.0 (...)
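Taken together, the request header and user agent conventions above might look like this in a researcher's own scripts. The username and URL are placeholders, and the exact header values are whatever your program's testing requirements specify.

```python
# Sketch of researcher tooling tagging its traffic so the target's
# security team can recognize authorized testing in logs.
USERNAME = "example_researcher"  # hypothetical Intigriti username

headers = {
    "X-Bug-Bounty": USERNAME,
    "User-Agent": f"Intigriti - {USERNAME} - Mozilla/5.0 (compatible)",
}

# These headers would then be attached to every request, e.g. with the
# third-party `requests` library (call shown for illustration only):
# requests.get("https://app.example.com", headers=headers)
```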
Safe harbour
The Safe harbour section clarifies how your organization treats researchers who test your program in good faith and in compliance with your rules.
Enabling Safe harbour helps build trust with researchers and encourages responsible disclosure by reassuring them that compliant, good-faith testing will not expose them to unnecessary legal risk.
Severity assessment
In the Severity assessment section of your program details, you can explain to researchers how submissions will be categorized.
💡 Note: It's recommended to align with the Intigriti Triage Standards.
If you deviate from the standard approach, use your severity assessment text to explain how severities will be handled for your program.
CVSS calculator
Intigriti includes a built-in CVSS calculator to support consistent severity assessment. CVSS 3.0 is used by default, and your organization can switch to CVSS 4.0 for more granular scoring. This is configured at company level and applies across all programs.
To select the CVSS version used by your organization:
Go to your company Admin settings.
Open Severity assessment and click Edit.
Select CVSS 3.0 or CVSS 4.0.
Save your changes.
Switching CVSS versions affects new submissions going forward. For duplicate submissions, the severity vector and level from the parent submission are copied to the duplicate, and the change is logged in the submission activity thread.
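Because severity vectors are recorded as standard CVSS strings, tooling that consumes exported submissions can split a vector into its individual metrics. A minimal sketch, assuming the generic `CVSS:<version>/<metric>:<value>/...` format; the vector in the usage example is arbitrary:

```python
def parse_cvss_vector(vector: str) -> dict:
    """Split a CVSS vector string into its version and metric components.

    Works for both CVSS 3.0 and 4.0 style vectors, since both use the
    same `CVSS:<version>/<metric>:<value>` layout.
    """
    prefix, _, metrics = vector.partition("/")
    if not prefix.startswith("CVSS:"):
        raise ValueError(f"not a CVSS vector: {vector!r}")
    parsed = {"version": prefix.split(":", 1)[1]}
    for metric in metrics.split("/"):
        key, _, value = metric.partition(":")
        parsed[key] = value
    return parsed
```

For example, `parse_cvss_vector("CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N")` yields a dict with `version` set to `"3.0"` and one entry per metric, which makes it easy to filter or group submissions by attack vector or impact.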
Program FAQ
The Program FAQ section is used to document common practical questions researchers may have about your program. It typically covers clarifications around testing practicalities, rewards, and test account usage, helping researchers quickly find answers without needing to ask for additional guidance.
Submission questions
Submission questions allow you to extend the default submission template with additional, structured questions that researchers must answer when submitting a report. By collecting this information upfront, you reduce back-and-forth during triage and make submissions easier to evaluate and reproduce.
Learn more about Submission questions.
Best practices
Write program details from a researcher’s perspective. Clear explanations of scope, rewards, and expectations help researchers quickly decide where to focus their efforts.
Use asset groups, asset descriptions, the In scope section, and the program FAQ together to provide context without duplicating information.
Be explicit about what is out of scope and list known issues proactively. This prevents duplicate and low-value submissions and improves overall researcher experience.
Use the Reward Policy section to clearly explain any non-standard rewards, bonuses, swag, vouchers, or custom bounty logic. Transparency builds trust and reduces disputes during submission handling.
Align your severity assessment with the Intigriti Triage Standards whenever possible. If you deviate, explain this clearly so researchers understand how impact is evaluated.
Regularly review your program details after launch, especially when adding new assets, changing bounty tiers, or updating rules of engagement, to ensure everything remains consistent and easy to understand.
Related articles
