1. Scope Activity

In the DAIR model, we introduce another activity not specifically called out in other popular incident response models: scope. This step bridges the gap between initial incident verification and effective containment by determining the extent of the compromise across the organization. Without proper scoping, incident response teams risk addressing only the visible symptoms while leaving significant portions of the attack undetected and unaddressed.

DAIR model diagram highlighting the Scope activity between Verify/Triage and the Contain/Response Actions/Recover cycle
Figure 1. Scope Activity Waypoint

The Purpose of Scoping

Scoping transforms the team’s understanding from "we have an incident" to "we know the breadth and depth of this incident." This activity takes the indicators of compromise (IOCs) identified during detection and verification, then systematically searches for them across the environment to understand the true extent of the breach.

Scoping is essential to effective incident response. Organizations that fail to properly scope incidents often experience repeated compromises: they eradicate malware from known-infected systems while leaving other compromised systems untouched. The NovaFlow incident exemplifies this failure: Jordan identified a suspicious process (ssupd) on the development system but failed to search for the same IOCs elsewhere in the environment, missing the broader compromise.

Scoping answers fundamental questions that shape the entire response effort:

  • Is this an isolated incident affecting a single system, or has the attacker established a widespread presence?

  • Which systems have been compromised, and which remain unaffected?

  • What is the timeline of the attack, and how long has the attacker maintained access?

  • What data or resources has the attacker accessed or exfiltrated?

  • Are there multiple attack vectors or persistence mechanisms deployed across different systems?

By answering these questions, organizations gain a comprehensive understanding of the incident, enabling targeted containment, eradication, and recovery efforts that address the root cause rather than just the symptoms.

Indicators of Compromise in Scoping

Effective scoping relies on identifying and leveraging indicators of compromise: the artifacts that reveal malicious activity or attacker presence. IOCs come in many forms, and understanding their variety helps responders cast a wider net during scoping activities.

Technical IOCs

Technical indicators provide concrete, searchable artifacts that can be systematically hunted across the environment:

File-based Indicators

These indicators include malware hashes, suspicious filenames, or specific file paths used by attackers. A single malicious executable discovered on one system should trigger searches across all systems for the same file hash, similar filenames, or files in unusual locations.
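As a sketch of a file-hash sweep, the snippet below hashes every file under a search root and reports matches against known-bad SHA-256 values. The hash shown is the PDF hash from this chapter's DeFi example; the function names and search approach are illustrative, not a specific tool's API.

```python
# Sketch: sweep a directory tree for files matching known-bad SHA-256 hashes.
import hashlib
from pathlib import Path

# Illustrative IOC set; this value is the PDF hash from the DeFi example.
KNOWN_BAD_HASHES = {
    "9484272e48f908e816a68f295a105d885b9d0ba52d8255d95c9bf237f71eae6b",
}

def sha256_of(path: Path) -> str:
    """Hash a file in chunks to avoid loading large files into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def sweep(root: Path) -> list:
    """Return paths under root whose contents match a known-bad hash."""
    hits = []
    for path in root.rglob("*"):
        if path.is_file():
            try:
                if sha256_of(path) in KNOWN_BAD_HASHES:
                    hits.append(path)
            except OSError:
                continue  # skip unreadable files; note them for follow-up
    return hits
```

In practice, an EDR platform performs this sweep fleet-wide; a script like this fills the gap on unmanaged systems.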

Network Indicators

Network indicators are broad but particularly valuable because they allow analysts to quickly assess a large number of systems from a single investigative vantage point. They include IP addresses, domain names, URLs, network signatures, and other network anomalies (often characterized by Network Detection and Response (NDR) platforms), including unusual network activity patterns. For example, if an investigation reveals anomalous network activity attributed to an attacker command-and-control (C2) server, scoping should identify all systems that have communicated with that attacker infrastructure. This level of analysis may reveal additional compromised systems not yet exhibiting obvious symptoms.
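A minimal sketch of this kind of sweep, matching connection logs against attacker domains. The domains are taken from this chapter's example incident; the (src_ip, dest_host) log shape is an assumption about how DNS or proxy logs have been normalized.

```python
# Sketch: flag internal hosts that have contacted known attacker infrastructure.
# Domains below are from this chapter's example incident (shown defanged in prose).
ATTACKER_INFRA = {"lolcats.org", "www1-google-analytics.com"}

def is_attacker_dest(dest: str) -> bool:
    """Match a destination hostname against IOC domains and their subdomains."""
    return any(dest == d or dest.endswith("." + d) for d in ATTACKER_INFRA)

def hosts_contacting_iocs(conn_log):
    """conn_log: iterable of (src_ip, dest_host) pairs from DNS or proxy logs.
    Returns the set of internal hosts that reached attacker infrastructure."""
    return {src for src, dest in conn_log if is_attacker_dest(dest)}
```

The subdomain match matters: attackers frequently rotate host labels under a stable registered domain.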

Process and Service Indicators

These indicators include suspicious process names, service names, scheduled task names, command-line arguments, and any other process-launch configuration details. Attackers often use consistent naming patterns across compromised systems, making these valuable for scoping. Randomly-named processes or services can also be valuable indicators, since they deviate from normal system behavior.
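One way to operationalize the "randomly named" heuristic is a character-entropy score over process names. The baseline set and the 3.0-bit threshold below are illustrative assumptions; note that entropy catches machine-generated names, while plausible attacker-chosen names (like ssupd) still require baseline comparison.

```python
# Sketch: flag unbaselined process names with high character entropy,
# a rough proxy for randomly generated names.
import math
from collections import Counter

BASELINE = {"svchost.exe", "explorer.exe", "services.exe"}  # illustrative

def name_entropy(name: str) -> float:
    """Shannon entropy (bits per character) of a name."""
    counts = Counter(name.lower())
    total = len(name)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def flag_suspicious(names, threshold=3.0):
    """Return names that are unbaselined and have high-entropy stems."""
    return [n for n in names
            if n.lower() not in BASELINE
            and name_entropy(n.rsplit(".", 1)[0]) > threshold]
```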

Registry and Configuration Indicators

These indicators involve specific keys, values, or configuration changes within the system registry. PowerShell scripts stored in registry values, WMI event subscriptions, or modified system settings can serve as IOCs for broader searches. Similarly, non-Windows systems may have comparable indicators revealed in configuration files (e.g., ASCII-based or encoded data, including macOS property list files).

Account-based Indicators

These indicators include unauthorized user accounts, suspicious account usage patterns, or privilege escalations. An attacker-created account on one system warrants immediate investigation for similar accounts throughout the organization.

Behavioral IOCs

Beyond static technical indicators, behavioral patterns can reveal attacker activity that may not be captured by traditional IOCs. Behavioral indicators often require more sophisticated analysis but can uncover widespread compromise that static indicators miss.

Temporal Patterns

Temporal patterns observed in logging data, SIEM trend analysis, or aggregate network investigations may indicate coordinated activity across multiple systems. Simultaneous logons, synchronized file modifications, or regular beacon intervals can indicate widespread compromise even when individual events appear benign. Attackers often operate on consistent schedules, creating predictable patterns that become visible when analyzed across the environment.
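One simple way to surface beacon-like regularity is to measure the variability of inter-event intervals per host. This coefficient-of-variation sketch is an illustration, not a production detector; real beacons often add jitter, which raises the score slightly without hiding the pattern entirely.

```python
# Sketch: score how regular a host's outbound connection intervals are.
# A score near 0 indicates machine-like regularity (possible beaconing).
from datetime import datetime
from statistics import mean, pstdev

def beacon_score(timestamps):
    """timestamps: sorted datetimes of outbound connections from one host.
    Returns the coefficient of variation of inter-event intervals,
    or None when there are too few events to judge."""
    gaps = [(b - a).total_seconds() for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2:
        return None
    return pstdev(gaps) / mean(gaps)
```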

Data Movement Patterns

These patterns show unusual data flows between systems or to external destinations. Large data transfers, especially during off-hours, may indicate exfiltration activities. Systems that do not typically communicate but are suddenly transferring large amounts of data warrant investigation as potential indicators of data staging or exfiltration.
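A rough sketch of this check, flagging flows that are both large and outside business hours. The 100 MiB threshold and the 07:00-19:00 business window are assumptions to tune per environment.

```python
# Sketch: flag flows that are both large and outside business hours,
# a simple proxy for potential staging or exfiltration activity.
from datetime import datetime

LARGE_BYTES = 100 * 1024 * 1024   # illustrative threshold
BUSINESS_START, BUSINESS_END = 7, 19  # assumed local business hours

def suspicious_flows(flows):
    """flows: iterable of (timestamp, src, dst, byte_count) tuples."""
    hits = []
    for ts, src, dst, size in flows:
        off_hours = ts.hour < BUSINESS_START or ts.hour >= BUSINESS_END
        if size >= LARGE_BYTES and off_hours:
            hits.append((ts, src, dst, size))
    return hits
```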

Lateral Movement Indicators

These indicators include authentication, remote access, or service usage patterns that indicate an attacker moving between systems. Pass-the-hash attacks, remote desktop sessions, or WMI usage for remote execution leave distinctive patterns that differ from normal administrative activity. Multiple authentication attempts across many systems in a short timeframe often reveal automated lateral movement tools.
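The "many systems in a short timeframe" pattern can be sketched as a sliding-window count of distinct target hosts per account. The 5-minute window and 10-host threshold are illustrative assumptions, not recommended values.

```python
# Sketch: find accounts authenticating to unusually many distinct systems
# within a short window, a common signature of automated lateral movement.
from collections import defaultdict
from datetime import datetime, timedelta

def spray_accounts(events, window=timedelta(minutes=5), host_threshold=10):
    """events: (timestamp, account, target_host) tuples, sorted by time.
    Returns accounts exceeding host_threshold distinct targets in a window."""
    flagged = set()
    recent_by_account = defaultdict(list)
    for ts, account, host in events:
        # keep only this account's events inside the sliding window
        recent = [(t, h) for t, h in recent_by_account[account] if ts - t <= window]
        recent.append((ts, host))
        recent_by_account[account] = recent
        if len({h for _, h in recent}) >= host_threshold:
            flagged.add(account)
    return flagged
```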

Scoping Methodologies

Analysts can apply several systematic approaches to achieve comprehensive coverage while managing resource constraints. The following methodologies help structure scoping efforts.

Enterprise-Wide Hunting

After identifying an IOC, analysts should search for matching evidence across the organization. This analysis should start with centralized log analysis tools, including SIEM platforms, log aggregation systems, or other data resources to quickly search collected logging data across the environment. Starting with log analysis typically provides rapid initial results, though it requires that the organization already have comprehensive log collection in place.

Endpoint Detection and Response (EDR) platforms are also particularly valuable for searching and identifying IOCs across all managed endpoints. Modern EDR tools enable analysts to run complex queries that can find files, processes, network connections, and registry modifications across thousands of systems. When EDR systems are not available, analysts should consider inventory management tools, active scanning tools to probe systems for IOCs, network scans to identify specific services (such as open ports), authenticated scans for file presence, or specialized compromise assessment tools including custom PowerShell or shell scripts.

For comprehensive scoping, analysts should use threat-hunting platforms that combine multiple data sources into a single interface. Platforms that integrate endpoint, network, and logging data provide the broadest view of potential compromise indicators. By combining these data sources in a single platform, analysts can use overlapping analysis coverage to reduce the chance of missing compromised systems.

Progressive Scoping

Given resource limitations, scoping often proceeds in phases rather than attempting to investigate the entire environment simultaneously.

Critical Asset Prioritization

This approach focuses initial scoping on high-value systems, including domain controllers, file servers, databases, and systems containing sensitive data. These systems warrant immediate attention regardless of initial IOC location. Sometimes referred to as pivoting in an investigation, critical asset prioritization ensures that the most important organizational assets are quickly assessed and protected.

Lateral Expansion

This phase extends scoping to systems directly connected to known-compromised hosts, including those on the same network segment, those with recent authentication from compromised accounts, and those sharing administrative credentials. Lateral expansion follows the likely paths an attacker would take to move through the environment.

Environmental Sweep

This phase eventually extends to all systems in the environment, ensuring no compromised systems remain undiscovered. Environmental sweeps may occur over days or weeks for large organizations.
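The three phases can be sketched as a simple ordering function over an asset inventory. The role tiers, data shapes, and sample hostnames below are illustrative assumptions.

```python
# Sketch: order an asset inventory into progressive scoping waves:
# critical/known-compromised first, laterally adjacent second, all else last.
CRITICAL_ROLES = {"domain-controller", "file-server", "database"}  # illustrative

def scoping_waves(inventory, known_compromised, adjacent):
    """inventory: {hostname: role}; known_compromised, adjacent: sets of hostnames.
    Returns (wave1, wave2, wave3) as sorted lists."""
    wave1 = sorted(h for h, role in inventory.items()
                   if role in CRITICAL_ROLES or h in known_compromised)
    wave2 = sorted(h for h in adjacent if h in inventory and h not in wave1)
    wave3 = sorted(h for h in inventory if h not in wave1 and h not in wave2)
    return wave1, wave2, wave3
```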

Timeline Reconstruction

An important output of scoping is understanding the attack timeline, including when different systems were compromised and how the attack progressed through the environment. Building this timeline helps analysts understand the scope of the incident and provides critical context for containment and eradication decisions.

Start by identifying the initial compromise: the patient zero system and the initial infection vector. This analysis requires careful review of logs, file timestamps, and process creation times across multiple systems to find the earliest evidence of attacker activity. Look for the oldest artifacts related to the incident, including the first malicious file execution, initial network connection to attacker infrastructure, or earliest authentication anomaly.

Next, map the attacker’s lateral movement through the network to trace their path from initial compromise to broader access. Authentication logs, remote access records, and file movement patterns reveal how the attacker expanded their presence across the environment. Analysts should pay particular attention to privilege escalation events and authentication to high-value systems such as domain controllers or database servers.

Identify when attackers deploy persistence mechanisms to understand the attacker’s evolution from initial access to established presence. This timeline helps responders understand which persistence mechanisms are the oldest and potentially most deeply embedded in the environment. Common persistence mechanisms include scheduled tasks, registry modifications, service installations, and account creations, each leaving timestamped artifacts that can be analyzed.

Finally, determine when sensitive data was accessed or removed by reviewing file access logs, database query logs, and network traffic patterns. This timeline is critical for breach notifications and understanding the full impact of the incident. Many regulatory frameworks require organizations to report the timeline of data access or exfiltration, making this analysis essential for compliance.

The timeline example shown in Figure 2 illustrates how multiple first contact events across different systems can reveal the attacker’s progression through the environment over time. In this illustration, multiple systems exhibit initial compromise events and the use of different C2 infrastructure over time (denoted as LOLC for the attacker lolcats[dot]org domain, and W1GA for the attacker www1-google-analytics[dot]com domain). This timeline was reconstructed from network activity logs into a simple CSV file and visualized using a short Python script to plot the events over time.

Network activity timeline plotting nine lateral movement events across internal IP addresses over a two-hour window
Figure 2. Sample Network Activity Timeline
Listing 1. Network Activity Data Summary CSV
"2025-03-19 16:46:11",172.16.42.103→W1GA
"2025-03-19 14:46:22",172.16.42.103→LOLC
"2025-03-19 16:48:39",172.16.42.105→W1GA
"2025-03-19 15:16:13",172.16.42.105→LOLC
"2025-03-19 16:50:23",172.16.42.107→W1GA
"2025-03-19 16:55:19",172.16.42.108→W1GA
"2025-03-19 16:57:04",172.16.42.109→W1GA
"2025-03-19 16:10:48",172.16.42.2→W1GA
"2025-03-19 16:38:15",172.16.42.3→W1GA
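A minimal sketch of the kind of short script mentioned above, parsing an abbreviated copy of the Listing 1 data and printing the events in chronological order. The original figure was rendered with a plotting library; this version prints a text timeline instead so it runs with the standard library alone.

```python
# Sketch: parse the network activity CSV and print events chronologically.
import csv
import io
from datetime import datetime

# Abbreviated copy of Listing 1 (same format: quoted timestamp, src→C2 label).
SAMPLE = '''"2025-03-19 16:46:11",172.16.42.103\u2192W1GA
"2025-03-19 14:46:22",172.16.42.103\u2192LOLC
"2025-03-19 16:10:48",172.16.42.2\u2192W1GA
'''

def parse_events(text):
    """Return (timestamp, source_ip, c2_label) tuples sorted by time."""
    events = []
    for row in csv.reader(io.StringIO(text)):
        if not row:
            continue
        ts = datetime.strptime(row[0], "%Y-%m-%d %H:%M:%S")
        src, dest = row[1].split("\u2192")  # the CSV uses a → separator
        events.append((ts, src, dest))
    return sorted(events)

for ts, src, dest in parse_events(SAMPLE):
    print(f"{ts:%H:%M:%S}  {src:>15} -> {dest}")
```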

Timeline reconstruction often reveals that incidents have been ongoing for weeks or months before detection. This discovery underscores the importance of comprehensive scoping rather than focusing only on recently identified symptoms. Understanding the full timeline also helps identify any systems that may have been compromised early in the attack but haven’t yet been discovered through IOC searches.

SE3R Documentation Method

SE3R is a "maximum capture, maximum grasp" documentation technique developed by Eric Shepherd that aims to comprehensively capture fine-grained details from incident narratives while preventing the unconscious filtering and editing that typically occurs during analysis. The method provides a systematic five-stage process (Survey, Extract, Read, Review, and Respond) designed to enable thorough detail capture, systematic analysis of incident information, identification of issues and anomalies requiring investigation, and effective planning of investigative response actions.

Originally developed for law enforcement investigative interviews, SE3R adapts effectively to incident response investigation and scoping activities. The method recognizes that incident narratives contain three distinct types of information, each requiring different handling:

  • Knowledge details provide background context on systems, accounts, attacker infrastructure, and other persistent elements that remain relevant throughout the incident timeline.

  • Event detail describes what happened, including discrete events (specific actions), episodes (extended activities like lateral movement campaigns), and continuous states (ongoing conditions like maintaining persistence).

  • Commentary captures analytical observations, investigative questions, and identified anomalies that require further investigation.

The SE3R visual format organizes this information spatially. Knowledge Bins are rectangular containers positioned above and below a horizontal Event Line, with each bin dedicated to a specific topic that accumulates background information throughout the investigation. The Event Line captures the chronological sequence of events, episodes, and continuous states with precise timestamps. Commentary and analytical observations are annotated adjacent to relevant details on the diagram, highlighting issues that warrant additional investigation.

For practical implementation in incident response, start with a basic configuration using five bins for the most critical categories: compromised systems, attacker infrastructure, malware artifacts, compromised accounts, and accessed data.

SE3R is not solely a resource for scoping and timeline development, but it applies well to this type of analysis. The example timeline in Figure 3 shows the outcome of an incident investigation, representing the earliest known C2 activity for internal systems during an investigation. The event timeline is used to represent each internal system and external attacker C2 server (using the abbreviation WGA for the attacker domain www1-google-analytics[dot]com) and destination port number. The knowledge bins presented below the event timeline provide additional context for the timeline data.

SE3R event timeline showing C2 beacon timing across compromised hosts with a reference table identifying each system and attacker infrastructure
Figure 3. C2 Beaconing SE3R Event Timeline and Knowledge Bins

By applying structured techniques at each stage, SE3R helps analysts achieve a total grasp of incident details rather than settling for the abbreviated, edited versions that result from conventional documentation approaches. More information about the SE3R documentation technique is available from Forensic Solutions, Shepherd's training organization for the SE3R technique. [1]

Scoping Challenges

Comprehensive scoping across an entire organization introduces significant challenges. Understanding these challenges helps incident responders develop strategies to work around limitations and improve scoping effectiveness over time.

Visibility Gaps

Organizations rarely have complete visibility across their entire environment, creating blind spots that attackers can exploit. Unmanaged systems including Bring Your Own Device (BYOD) endpoints, shadow IT infrastructure, and legacy systems may harbor compromise and escape standard scoping tools. These systems often exist outside the organization’s asset inventory, making them difficult to assess during incident response. Even when analysts know these systems exist, they may lack the necessary access credentials or management tools to search for IOCs.

Limited logging on certain systems or classes of systems presents another visibility challenge. This is particularly true for legacy systems, embedded devices, and industrial control system (ICS) platforms that lack robust logging capabilities. Network devices, printers, and Internet of Things (IoT) devices often generate minimal logs or store logs only temporarily, restricting the ability to scope historical compromise. When analyzing these systems, responders should look for alternative evidence sources including network flow data, authentication logs from connected systems, or physical evidence of unauthorized access.

For example, consider the Siemens S7-1200, a widely-deployed programmable logic controller used in manufacturing, water treatment, building automation, and other industrial control environments. Despite having a built-in Ethernet interface, a full TCP/IP stack, and an integrated web server for diagnostics, the S7-1200 lacks the centralized logging capabilities found in modern IT systems. While the PLC maintains internal diagnostics including connection tables and error counters accessible through its web interface, it does not generate syslog events, forward logs to SIEM platforms, or provide the historical audit trails that incident responders rely on for timeline reconstruction. Network flow data becomes the primary source of evidence for determining whether industrial control systems like the S7-1200 have been accessed or manipulated during an incident, making comprehensive network visibility essential for organizations with ICS environments.

Siemens SIMATIC S7-1200 G2 programmable logic controller mounted on a DIN rail with connected cabling and status LEDs
Figure 4. Siemens S7-1200 [2]

Cloud and hybrid environments require different scoping approaches than traditional on-premises infrastructure. Serverless functions, containers, and cloud-native services may not be visible to traditional scoping tools designed for Windows endpoints. Organizations should ensure their scoping strategies include cloud-specific tools and techniques, such as cloud provider APIs, container runtime security tools, and cloud-native logging services. The ephemeral nature of many cloud resources means that evidence can disappear when resources are deallocated, making timely scoping critical.

Anti-Forensic Techniques

Sophisticated attackers actively work to impede scoping efforts by employing anti-forensic techniques to hide their presence and confuse investigators. Recognizing these techniques helps responders develop counter-strategies and avoid drawing incorrect conclusions from manipulated evidence.

Log deletion or modification removes evidence of attacker activity, creating gaps in timeline reconstruction and making it difficult to identify all compromised systems. Attackers may delete specific log entries related to their activities or clear entire log files to cover their tracks. To counter this, analysts should look for evidence of log manipulation itself, including gaps in log timestamps, unusual log file sizes, or security events indicating log clearing. An example of a Windows Event Log indicating log deletion is shown in Figure 5.

Windows Event Viewer showing Security log Event 1102 indicating the audit log was cleared
Figure 5. Windows Event Log Indicating Log Deletion

Collecting logs from multiple sources can also help reconstruct attacker activity even when some logs have been deleted. When multiple log sources are not available, analysts can sometimes infer deleted events by correlating with network logs or other system events. For example, on Windows systems, each logging event is assigned a sequentially incrementing record number, the EventRecordID (viewable in Event Viewer by clicking Details | XML View). Gaps in the record ID sequence may indicate deleted log entries.
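Such gap analysis can be sketched in a few lines. The function below assumes record IDs have already been exported as integers (for example, by an .evtx parser); the function name is illustrative.

```python
# Sketch: detect gaps in sequential Windows event record IDs,
# which may indicate selectively deleted log entries.
def find_record_gaps(record_ids):
    """record_ids: iterable of integer record IDs (e.g., EventRecordID values).
    Returns a list of (first_missing, last_missing) ranges."""
    gaps = []
    ordered = sorted(record_ids)
    for prev, cur in zip(ordered, ordered[1:]):
        if cur - prev > 1:
            gaps.append((prev + 1, cur - 1))
    return gaps
```

Note that a fully cleared log restarts the sequence, so this check complements, rather than replaces, looking for log-clearing events like Event 1102.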

Timestomping modifies file timestamps to hide the timing of malware installations or blend malicious files with legitimate system files. Attackers use timestomping to make recently installed malware appear as though it has been present since system installation. When analyzing systems where timestomping is suspected, analysts should examine alternative timestamp sources including filesystem metadata ($MFT entries on Windows), backup system records, or network detection timestamps for file downloads.

Encryption and obfuscation hide indicators from automated scanning tools, requiring manual analysis to identify compromise. Attackers may encrypt malware payloads, obfuscate PowerShell commands, or use packing techniques to avoid signature-based detection. Network traffic encryption can similarly hide C2 communications from network-based scoping tools. Behavioral analysis and memory forensics often prove more effective than signature matching when dealing with encrypted or obfuscated threats.

Living-off-the-land techniques use legitimate system tools like PowerShell, Windows Management Instrumentation (WMI) utilities including wmic.exe and mofcomp.exe, and .NET Framework tools (csc.exe, installutil.exe, and others), making it difficult to distinguish malicious activity from normal administrative operations. Attackers choose these techniques specifically to blend in with normal activity and avoid detection during scoping. Analyzing command-line arguments, execution context, and temporal patterns helps identify malicious use of legitimate tools. Establishing baselines of normal administrative activity provides a reference point for identifying anomalies.
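As a rough illustration of baseline comparison for dual-use tools, the sketch below flags unbaselined command lines containing markers often associated with encoded or download-cradle invocations. The marker list and baseline entries are assumptions, not a detection standard.

```python
# Sketch: flag command lines that are absent from the administrative baseline
# and contain markers commonly seen in obfuscated or download-cradle usage.
SUSPICIOUS_MARKERS = ("-enc", "-encodedcommand", "downloadstring", "frombase64string")

def flag_command_lines(commands, baseline):
    """commands: iterable of observed command-line strings.
    baseline: set of known-good administrative command lines.
    Returns the commands that are both unbaselined and marker-matched."""
    flagged = []
    for cmd in commands:
        if cmd in baseline:
            continue  # known administrative usage
        lowered = cmd.lower()
        if any(marker in lowered for marker in SUSPICIOUS_MARKERS):
            flagged.append(cmd)
    return flagged
```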

Scale and Complexity

Large, complex environments present practical scoping challenges that can overwhelm even well-resourced incident response teams. Organizations should develop strategies to manage these challenges while maintaining comprehensive scoping.

The sheer volume of data generated during scoping can overwhelm analysis capabilities and delay incident response. Searching for IOCs across thousands of systems generates significant volumes of log data, file listings, and process information requiring analysis. Analysts should prioritize scoping efforts using the progressive scoping methodology described earlier, focusing first on high-value targets and known-compromised systems before expanding to the broader environment. Automation tools can help process large data volumes, though analysts should still validate automated findings to avoid missing sophisticated attacks.

False positives from legitimate software or normal activity that matches IOC patterns complicate scoping and consume investigative resources. Each potential match requires investigation to determine whether it represents a genuine compromise or benign activity. For example, searching for a malware filename might match legitimate software that happens to use a similar name, or network IOC searches might identify VPN connections that look suspicious but are actually legitimate. Documenting known false positive sources helps streamline future investigations and reduce analysis time.

Dynamic environments where systems are constantly created, modified, or destroyed present unique scoping challenges, particularly in cloud environments using auto-scaling or containerized workloads. Compromised systems may be automatically destroyed before analysts can examine them, and new systems may be created with embedded compromise from tainted images. In these environments, analysts should focus scoping efforts on examining system images, container registries, and infrastructure-as-code templates in addition to running systems. Organizations should implement logging solutions that capture system state even for short-lived resources, ensuring evidence persists after system destruction.

Cloud scoping often requires querying audit logs to trace compromised identities and unauthorized resource activity. Each major cloud provider offers equivalent audit log capabilities with different query syntax. Table 1 provides a quick reference for translating common audit log queries across providers.

Table 1. Cloud Audit Log Quick Reference

Query audit logs
  AWS:    aws cloudtrail lookup-events
  Azure:  az monitor activity-log list
  GCP:    gcloud logging read

Filter by resource
  AWS:    aws cloudtrail lookup-events --lookup-attributes AttributeKey=ResourceName,AttributeValue=<name>
  Azure:  az monitor activity-log list --resource-id <resource-id>
  GCP:    gcloud logging read 'resource.type="<type>"'

Filter by user or principal
  AWS:    aws cloudtrail lookup-events --lookup-attributes AttributeKey=Username,AttributeValue=<user>
  Azure:  az monitor activity-log list --caller <object-id>
  GCP:    gcloud logging read 'protoPayload.authenticationInfo.principalEmail="<email>"'

Filter by time range
  AWS:    aws cloudtrail lookup-events --start-time <time> --end-time <time>
  Azure:  az monitor activity-log list --start-time <time> --end-time <time>
  GCP:    gcloud logging read 'timestamp>="<time>"'

These commands provide a starting point for cross-platform audit log analysis during scoping investigations.

Integration with Response Activities

Scoping doesn't occur in isolation but integrates closely with the other incident response activities: containment, eradication, and recovery. The information gathered during scoping feeds directly into those decisions, while new discoveries during containment, eradication, and recovery often trigger additional scoping efforts. In some organizations, the scoping step may feed directly into containment, then eradication and recovery, but this sequence is not a requirement. Particularly in later iterations of the response actions loop, scoping may occur in parallel with containment, eradication, and recovery activities. Analysts may also see value in transitioning from scoping directly to recovery without additional containment or eradication steps, for example when no new compromised systems are discovered but additional context warrants updates to the incident's root cause analysis.

The scope activity transforms incident response from reactive firefighting to strategic remediation. By systematically identifying all affected systems and understanding the full extent of compromise, organizations can mount effective responses that truly eliminate threats rather than merely addressing symptoms. This comprehensive understanding proves essential for breaking the cycle of incomplete remediation and reinfection that affects organizations with immature incident response capabilities.

Scope Activity Examples

The following examples illustrate how scoping fits into the incident response process.

The DeFi Server Attachment

Renee is an incident response analyst supporting a mid-sized financial technology company. A trader actively participating in a decentralized finance (DeFi) Telegram group reported suspicious activity on her workstation after opening a document promising insider information about an upcoming token launch. Renee was asked to analyze the document and determine whether it contained any malicious content.

Renee performed initial analysis of the document using a dedicated workstation isolated from the corporate network, beginning with file size, hash, and file type identification using the ls, sha256sum, and file commands, as shown in Listing 2.

Listing 2. DeFi Document File Command Output
$ ls -lh malware
-rw-r--r--@ 1 renee staff   1.6M Oct 21 16:29 malware
$ sha256sum malware
9484272e48f908e816a68f295a105d885b9d0ba52d8255d95c9bf237f71eae6b  malware
$ file malware
malware: PDF document, version 1.4

The document appeared to be a standard PDF file, but Renee wasn’t yet ready to open it in a PDF viewer. She extracted summary information about the PDF structure using Didier Stevens’s pdfid.py, as shown in Listing 3.

Listing 3. DeFi Document Pdfid Command Output
$ pdfid.py malware
PDFiD 0.2.10 malware
 PDF Header: %PDF-1.4
 obj                   27
 endobj                27
 stream                 5
 endstream              5
 xref                   4
 trailer                4
 startxref              4
 /Page                  1
 /Encrypt               0
 /ObjStm                0
 /JS                    0
 /JavaScript            0
 /AA                    0
 /OpenAction            0
 /AcroForm              0
 /JBIG2Decode           0
 /RichMedia             0
 /Launch                0
 /EmbeddedFile          0
 /XFA                   0
 /URI                   6
 /Colors > 2^24         0

The pdfid.py output revealed that the PDF was a single page but included six embedded URI references. Renee extracted the URIs using pdf-parser.py, as shown in Listing 4.

Listing 4. DeFi Document Pdf-parser Command Output
$ pdf-parser.py -s /URI malware

obj 9 0
 Type: /Annot
 Referencing:

  <<
    /Type /Annot
    /Subtype /Link
    /Rect [51.7500000  191.750000  542.250000  827.750000 ]
    /Border [0 0 0]
    /A
      <<
        /Type /Action
        /S /URI
        /URI (http://host███████private.duckdns.org/eubp/example.zip) (1)
      >>
  >>


obj 19 0
 Type: /Action
 Referencing:

  <<
    /URI (https://stc████████lik.com/Update/UpdatePDF.exe) (2)
    /S /URI
    /Type /Action
  >>


obj 25 0
 Type: /Action
 Referencing:

  <<
    /URI (https://stc████████lik.com/Update/UpdatePDF.zip) (3)
    /S /URI
    /Type /Action
  >>
1 Suspicious URL at Dynamic DNS provider DuckDNS serving example.zip.
2 Suspicious URL hosting the UpdatePDF.exe file.
3 Suspicious URL hosting the UpdatePDF.zip file.

Renee noted three suspicious URLs embedded in the PDF across six references. The URLs pointed to a DuckDNS dynamic DNS domain and a suspicious domain hosting two executable files. With this insight, she recorded several elements to use for subsequent scoping activities:

  • File-based indicators: the hash of the PDF file.

  • Network indicators: the three embedded URLs.

  • File-based indicators: the filenames of the referenced files (UpdatePDF.exe and UpdatePDF.zip).

Renee documented the scoping indicators she had identified so far and continued her analysis of the PDF document while searching for additional impacted systems across the organization.

The Cloud Storage Exfiltration

Priya is a cloud security analyst for a financial services company that uses Azure for both production applications and data storage. She received an automated alert from Microsoft Defender for Cloud indicating that a new Blob Storage container named backup-data-archive-2025 had been created in the stcorpdata01 storage account three days earlier. She double-checked Teams, but there was no associated change order for the new container. The container name followed common internal naming conventions, but the creation timestamp showed it was created at 11:47 PM on a Saturday, well outside normal business hours. Priya began investigating to determine whether the container represented a genuine indicator of compromise.

Priya started by examining the contents of the suspicious container using the Azure CLI, as shown in Listing 5.

Listing 5. Cloud Exfiltration Container Contents
$ az storage blob list --container-name backup-data-archive-2025 --account-name stcorpdata01 --query "[].{Name:name, Size:properties.contentLength, Modified:properties.lastModified}" --output table
Name                                                Size       Modified
--------------------------------------------------  ---------  --------------------------------
HR/Payroll_2025_Q1.xlsx                             49492787   2025-10-19T23:52:14+00:00
Finance/Budget_Projections_2026.xlsx                134637158  2025-10-19T23:52:18+00:00
CustomerData/client_list_full.csv                   935418906  2025-10-19T23:52:23+00:00
Engineering/Product_Roadmap_Confidential.pptx       67947725   2025-10-19T23:52:31+00:00
Legal/Contracts_Archive_2024.zip                    225770509  2025-10-19T23:52:38+00:00
HR/Employee_Records_Complete.xlsx                   103494067  2025-10-19T23:52:45+00:00
CustomerData/transaction_history_2024.csv           462638387  2025-10-19T23:52:52+00:00
Finance/Audit_Reports_2024.pdf                      164481254  2025-10-19T23:53:01+00:00
Engineering/Source_Code_Archive.zip                 77070336   2025-10-19T23:53:08+00:00
CustomerData/customer_pii_database.csv              340331930  2025-10-19T23:53:15+00:00
[...]

The container held 47 blobs totaling approximately 8.2 GiB. The filenames indicated highly sensitive internal data, including payroll information, customer data, intellectual property, and legal contracts. The directory structure (HR/, Finance/, CustomerData/, Engineering/, Legal/) matched the organization’s internal file server layout exactly. All blobs showed upload timestamps within a twenty-minute window on the same Saturday night the container was created.
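The bulk-upload pattern Priya observed can be checked mechanically once the blob listing is parsed. The following is a minimal sketch (the helper and its 20-minute threshold are illustrative; the sample rows are taken from Listing 5) that totals blob sizes and tests whether all uploads fall inside a tight window:

```python
from datetime import datetime, timedelta

def summarize_blobs(rows):
    """Given (name, size_bytes, iso_timestamp) tuples, report total volume
    and whether all uploads fall inside a tight window, which is one signal
    of scripted bulk exfiltration rather than routine use."""
    times = [datetime.fromisoformat(t) for _, _, t in rows]
    total = sum(size for _, size, _ in rows)
    window = max(times) - min(times)
    return {
        "blob_count": len(rows),
        "total_gib": round(total / 2**30, 2),
        "window_minutes": window.total_seconds() / 60,
        "bulk_upload": window <= timedelta(minutes=20),
    }
```

Feeding it the full 47-row listing would reproduce the figures in the narrative; a scoping team can reuse the same check against other storage accounts to find additional staging containers.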

Next, Priya investigated who created the container by querying the Azure Activity Log, as shown in Listing 6.

Listing 6. Cloud Exfiltration Container Creation Event
$ az monitor activity-log list --offset 7d --query "[?contains(resourceId, 'backup-data-archive-2025')] | [0]"
{
  "authorization": {
    "action": "Microsoft.Storage/storageAccounts/blobServices/containers/write",
    "scope": "/subscriptions/e7b3c1d9-a842-4f56-b6d1-8a3e5f902c4d/resourceGroups/rg-production-eastus2/providers/Microsoft.Storage/storageAccounts/stcorpdata01/blobServices/default/containers/backup-data-archive-2025"
  },
  "caller": "c7e2a091-4b38-4d65-9f12-b8a3e6d50c71", (1)
  "claims": {
    "appid": "e4f2d1b8-6a93-4c57-8e1f-9d0b2a3c5e7f",
    "http://schemas.microsoft.com/identity/claims/objectidentifier": "c7e2a091-4b38-4d65-9f12-b8a3e6d50c71"
  },
  "eventTimestamp": "2025-10-19T23:47:32.381Z",
  "httpRequest": {
    "clientIpAddress": "203.0.113.88", (2)
    "method": "PUT"
  },
  "operationName": {
    "localizedValue": "Create or Update Container",
    "value": "Microsoft.Storage/storageAccounts/blobServices/containers/write"
  },
  "resourceGroupName": "rg-production-eastus2",
  "status": {
    "localizedValue": "Succeeded",
    "value": "Succeeded"
  }
}
1 Entra ID service principal object ID used to create the container.
2 External IP address originating the request.

The Activity Log event was disconcerting. Priya cross-referenced the caller’s object ID with Entra ID and confirmed it was the svc-automation service principal, a legitimate identity used by various automated processes throughout the organization. However, the source IP address 203.0.113.88 was neither an internal system nor a known Azure resource. She noted this as a possible indicator that the service principal credentials had been compromised and were being used by an external attacker.
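The source-IP check Priya performed can be sketched with Python's ipaddress module. The ranges below are hypothetical stand-ins for an organization's internal virtual networks and known egress addresses; a real implementation would load them from network inventory:

```python
import ipaddress

# Hypothetical allowlist: the organization's internal VNet space plus a
# small range of known, sanctioned egress addresses.
KNOWN_RANGES = [ipaddress.ip_network(n) for n in ("10.0.0.0/8", "203.0.113.0/28")]

def is_known_source(ip: str) -> bool:
    """Return True if the address falls inside any known internal or
    sanctioned range; unknown sources warrant further investigation."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in KNOWN_RANGES)
```

Under these assumptions, the upload source 10.0.50.15 resolves as known while 203.0.113.88 does not, matching Priya's conclusion that the management-plane calls came from outside the environment.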

Priya expanded her investigation to identify all activity performed by the compromised service principal, as shown in Listing 7.

Listing 7. Cloud Exfiltration Compromised Service Principal Activity
$ az monitor activity-log list --caller c7e2a091-4b38-4d65-9f12-b8a3e6d50c71 --offset 7d --query "[].{Time:eventTimestamp, Operation:operationName.localizedValue, Provider:resourceProviderName.localizedValue}" --output table
Time                       Operation                         Provider
-------------------------  --------------------------------  -------------------------
2025-10-15T14:23:18+00:00  Read Role Assignment              Microsoft.Authorization
2025-10-15T14:28:42+00:00  Read Virtual Machine              Microsoft.Compute
2025-10-16T03:15:27+00:00  List Storage Account Keys         Microsoft.Storage
2025-10-16T08:44:19+00:00  Create Role Assignment            Microsoft.Authorization
2025-10-17T11:32:54+00:00  Create or Update Security Rule    Microsoft.Network
2025-10-17T16:19:08+00:00  Create or Update Virtual Machine  Microsoft.Compute (1)
2025-10-19T23:47:32+00:00  Create or Update Container        Microsoft.Storage (2)
1 The attacker launched a virtual machine using the compromised service principal.
2 Creation of the unauthorized Blob Storage container.

The compromised service principal showed extensive unauthorized activity over five days. The attacker enumerated role assignments, performed reconnaissance of existing virtual machines, obtained storage account access keys, and granted the service principal additional permissions through a new role assignment. They then modified network security group rules, launched a virtual machine, and created the unauthorized container. Priya also queried the storage account’s diagnostic logs and identified 47 blob upload operations on October 19th corresponding to the files in the unauthorized container.
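Reviewing a caller's operations for escalation-related actions, as Priya did here, can be partially automated. The following is a minimal sketch with a hypothetical watchlist; the operation names match the localized values in Listing 7, but any production watchlist should be tuned to the organization's own baseline:

```python
# Hypothetical watchlist of Azure management-plane operations that commonly
# precede privilege escalation or data exfiltration.
HIGH_RISK_OPS = {
    "List Storage Account Keys",
    "Create Role Assignment",
    "Create or Update Security Rule",
    "Create or Update Virtual Machine",
    "Create or Update Container",
}

def flag_high_risk(events):
    """events: (iso_timestamp, operation) tuples.
    Returns the high-risk subset in chronological order, forming the
    skeleton of an attack timeline."""
    return sorted((t, op) for t, op in events if op in HIGH_RISK_OPS)
```

Run against the seven events in Listing 7, this drops the two read-only reconnaissance operations and leaves the five state-changing actions in order, which is exactly the escalation chain described above.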

Priya performed additional Activity Log and storage diagnostic log queries to trace the source of the blob uploads. She discovered that while the container was created from the external IP address 203.0.113.88, the uploads originated from a virtual machine at internal IP address 10.0.50.15 within the organization’s Azure virtual network. This address matched the virtual machine created on October 17th, indicating the attacker had established infrastructure inside the Azure environment to stage data exfiltration from internal resources.

Confident she was looking at the actions of an attacker who had compromised the svc-automation service principal, Priya documented the indicators of compromise discovered during her investigation:

  • Account-based indicators: Compromised Entra ID service principal svc-automation with object ID c7e2a091-4b38-4d65-9f12-b8a3e6d50c71

  • Network indicators: External IP address 203.0.113.88 used to create unauthorized resources

  • Cloud resource indicators: Unauthorized Blob Storage container backup-data-archive-2025 in storage account stcorpdata01, attacker-created virtual machine at internal IP 10.0.50.15

  • File-based indicators: 47 specific file names matching internal file server structure, including payroll data, customer PII, and intellectual property

  • Behavioral indicators: Bulk blob upload pattern during a twenty-minute window, Azure management API activity from an external IP address, service principal usage outside normal automation patterns
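Indicator inventories like this one are easier to feed into SIEM and hunting queries when recorded in a machine-readable form. The structure below is purely illustrative (the field names are ad hoc, not from any standard; teams that need indicator interchange typically use STIX 2.1):

```python
import json

# Illustrative IOC inventory from the cloud exfiltration investigation.
# Field names are ad hoc; use a standard such as STIX 2.1 for interchange.
iocs = {
    "account": [{
        "type": "entra_service_principal",
        "name": "svc-automation",
        "object_id": "c7e2a091-4b38-4d65-9f12-b8a3e6d50c71",
    }],
    "network": [{"type": "ipv4", "value": "203.0.113.88"}],
    "cloud_resource": [
        {"type": "storage_container", "value": "backup-data-archive-2025"},
        {"type": "virtual_machine", "value": "10.0.50.15"},
    ],
}

print(json.dumps(iocs, indent=2))
```

Keeping the inventory in one file lets later scoping queries, containment tickets, and the final incident report all draw from a single authoritative indicator list.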

With these IOCs identified, Priya initiated enterprise-wide scoping activities to determine the full extent of compromise and exfiltration across the Azure environment, as shown in Table 2.

Table 2. Cloud Exfiltration Scoping Commands
Scoping Objective | Command

Extend activity search to 90 days

az monitor activity-log list --caller c7e2a091-4b38-4d65-9f12-b8a3e6d50c71 --offset 90d

Enumerate all role assignments for compromised principal

az role assignment list --assignee c7e2a091-4b38-4d65-9f12-b8a3e6d50c71 --all

Identify recently created or modified storage containers

az storage container list --account-name stcorpdata01 --query "[?properties.lastModified >= '2025-10-15']" --output table

The extended activity search revealed that the service principal had been accessed from the external IP address as early as October 8th, a week before the first management operations Priya had initially identified. The role assignment query confirmed that the attacker had granted the service principal Contributor access to a second resource group, rg-analytics-eastus2, expanding the potential scope of compromise beyond the production environment.

Priya compiled a summary of her scoping findings for the incident commander, including the confirmed timeline of attacker activity, the scope of compromised resources across both resource groups, an assessment of the volume and sensitivity of exfiltrated data, and recommended next steps for containment. This summary provided leadership with the context needed to make informed decisions about notification obligations, containment priorities, and resource allocation as the investigation continued.

Scope: Step-by-Step

The following steps provide a condensed reference for scoping activities. Each step corresponds to topics covered earlier in this chapter, organized for use when determining the full extent of compromise across the environment.

A standalone version of this step-by-step guide is available for download on the companion website in PDF and Markdown formats.
  1. Identify indicators of compromise (IOCs) from detection and verification activities:

    • File-based indicators (hashes, filenames, file paths).

    • Network indicators (IP addresses, domains, URLs, signatures).

    • Process and service indicators (process names, command-line arguments).

    • Registry and configuration indicators (registry keys, configuration changes).

    • Account-based indicators (unauthorized accounts, suspicious usage patterns).

    • Behavioral indicators (temporal patterns, data movement, lateral movement).

  2. Conduct enterprise-wide hunting for identified IOCs:

    • Search centralized log analysis tools (SIEM, log aggregation systems).

    • Leverage EDR platforms to search across managed endpoints.

    • When EDR is not available, use inventory management tools, active network scanning, or custom scripts to probe for IOCs.

    • Apply threat-hunting platforms that combine endpoint, network, and logging data to provide overlapping analysis coverage.

  3. Apply progressive scoping methodology:

    • Prioritize critical assets (domain controllers, file servers, databases, systems with sensitive data).

    • Expand laterally to systems connected to known-compromised hosts.

    • Conduct an environmental sweep across all systems.

  4. Reconstruct the attack timeline:

    • Identify the initial compromise and the patient zero system.

    • Map lateral movement through authentication logs and file access patterns.

    • Identify the persistence mechanism deployment timeline.

    • Determine when sensitive data was accessed or exfiltrated.

  5. Document scoping findings:

    • List all compromised systems identified.

    • Record IOCs discovered during scoping.

    • Create a timeline visualization of the attack’s progression.

    • Note any visibility gaps or systems requiring additional investigation.

  6. Address scoping challenges:

    • Identify and document visibility gaps (unmanaged systems, limited logging, IoT, and ICS devices).

    • For systems with limited logging, use alternative evidence sources such as network flow data and authentication logs from connected systems.

    • Watch for anti-forensic techniques (log deletion, timestomping, encryption and obfuscation, living off the land).

    • Use counter-strategies, including log manipulation detection, alternative timestamp sources, behavioral analysis, and command-line argument review.

    • Apply cloud-specific scoping tools and techniques for cloud and hybrid environments, including cloud provider audit logs and container runtime security.

    • Manage scale and complexity through prioritization and automation.

    • Document false positive sources for future reference.

  7. Prepare scoping results for integration with containment, eradication, and recovery activities.
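The progressive scoping order in step 3 can be expressed as a simple tiering function. The following is a minimal sketch under a hypothetical host model, where each host maps to the set of hosts it communicates with, and criticality and compromise status come from asset inventory and prior verification work:

```python
def hunt_order(hosts, compromised, critical):
    """Order hosts for IOC hunting per progressive scoping:
    tier 0: critical assets (domain controllers, databases, sensitive data);
    tier 1: known-compromised hosts and their direct neighbors;
    tier 2: everything else (the environmental sweep).
    hosts: {name: set of neighbor names}; compromised/critical: sets of names."""
    def tier(name):
        if name in critical:
            return 0
        if name in compromised or hosts[name] & compromised:
            return 1
        return 2
    return sorted(hosts, key=lambda n: (tier(n), n))
```

The point of the ordering is coverage under time pressure: the systems most likely to be compromised, or most damaging if they are, get hunted first, while the environmental sweep guarantees nothing is skipped entirely.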

