An investigation leveraging Cisco XDR, Splunk, Cisco Secure Firewall, and Endace (Zeek) to separate signal from noise.
Cisco Live EMEA in Amsterdam is the kind of environment where security telemetry naturally spikes: BYOD devices, transient networks, proxy and agent traffic, and a constant stream of routine connectivity checks. At scale, normal behavior can start to look suspicious.
Cisco XDR began clustering a set of high- and critical-severity incidents labeled as overflow attempts, sourced from Cisco Secure Firewall IPS telemetry ingested via Splunk. Overflow signatures can be notable because they map cleanly to real exploitation paths. However, a burst of near-identical detections is also a strong signal that an environmental trigger might be driving noise.
The goal in these moments is not to dismiss the alerts. It is to move quickly from hypothesis to validation across data sources. In this case, seven clustered incidents were validated as false positives, then converted into practical tuning that helped suppress 17 additional similar incidents and hundreds of related alerts.
Investigation Walkthrough
Step 1: Start with what Cisco XDR already correlated
Cisco XDR did the heavy lifting up front by grouping related findings into incidents and making repetition obvious. Instead of treating seven incidents as seven separate investigations, we treated them as one pattern and pulled the shared attributes first.
Across the cluster, several repeated attributes stood out:
- All were reported from Secure Firewall via Splunk (Cisco Secure Firewall / Firepower Threat Defense (FTD) IPS telemetry flowing into Cisco XDR)
- Titles and indicators centered on IPS overflow signatures, including IMAP fetch/buffer overflow patterns (port 143) and a generic HTTP auth header overflow signature.
- The events were clustered in time, which often points to one root cause producing many detections.
- The same handful of source hosts appeared repeatedly across incidents, suggesting a small set of initiators driving most of the volume.
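One quick way to confirm that repetition in Splunk is to summarize the IPS overflow events by source host. The following is a sketch only; the index, sourcetype, and signature filter are assumptions and will differ per environment, while the field names (SigID, src_ip) follow what Secure Firewall reports:

```spl
index=firewall sourcetype=cisco:ftd:syslog signature="*overflow*"
| stats count, dc(SigID) AS unique_sigs, values(dest_port) AS ports,
        earliest(_time) AS first_seen, latest(_time) AS last_seen BY src_ip
| sort - count
```

A small number of src_ip values carrying most of the count, within a tight first_seen/last_seen window, is consistent with one environmental root cause rather than broad attacker activity.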
A clustered set of detections cannot automatically be deemed benign. It does mean the fastest path to clarity is to hunt the shared context first: a single environmental trigger can generate a lot of noise, but a single attacker can also generate a lot of repetition.
Step 2: Validate enforcement
Cisco XDR gives you the incident storyline, but quick triage still benefits from raw event context. We pivoted into Splunk to examine the underlying Cisco Secure Firewall (FTD) IPS events and pull the fields that typically decide whether an IPS overflow alert is actionable or noisy.
A simple field-extraction view is often enough to spot posture and scope quickly:
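As a sketch (the index and sourcetype are assumptions; the field names IngressInterface, InlineResult, and SigID match what Secure Firewall reports), such a view can be built with a simple table:

```spl
index=firewall sourcetype=cisco:ftd:syslog signature="*overflow*"
| table _time, src_ip, dest_ip, dest_port, SigID, IngressInterface, InlineResult
| sort - _time
```

The two fields that decide posture, IngressInterface and InlineResult, are visible in every row, so detection-mode patterns stand out immediately.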

Two details mattered immediately:
- IngressInterface = SPAN, indicating passive visibility rather than inline enforcement
- InlineResult = Would block, which is consistent with IPS running in detection mode
That combination does not prove a false positive on its own, but it changes the question. Instead of starting with "what did an attacker exploit?", we start with "what normal traffic pattern is repeatedly tripping this signature when we only have passive visibility?"
Step 3: Add network context via Endace Zeek telemetry in Splunk
Next, we reviewed network metadata from Endace. Endace produces Zeek-based logs, which makes high-value HTTP fields easy to query (including uri and user_agent). Those fields are often enough to separate exploit-like patterns from routine system checks and background agent traffic. Note: the Corelight Technology Add-on for Splunk was used to parse the open-source Zeek log format.
For the same source hosts tripping the IPS overflow signatures, we asked a straightforward question: what were those hosts actually doing on the wire around the same time window?
A practical starting query is to summarize Corelight HTTP by initiator, uri, and user agent:
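A sketch of that summary follows; the index and sourcetype are assumptions, the source IPs are placeholders for the initiators identified in Step 1, and the id.orig_h, uri, and user_agent fields follow the Zeek http.log schema:

```spl
index=endace sourcetype=corelight_http id.orig_h IN ("10.x.x.x", "10.y.y.y")
| stats count BY id.orig_h, uri, user_agent
| sort - count
```

Sorting by count puts the dominant URI and user-agent pairs at the top, which is usually all you need to recognize connectivity checks and agent traffic.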


In the summarized output, the HTTP activity tied to these sources resembled routine endpoint connectivity behavior, not exploitation. When you see the same source IPs repeatedly paired with well-known connectivity checks and agent identifiers, it is a strong signal the alerts are environmental noise rather than exploit activity. The most common URIs and user agents were consistent with:
- msftconnecttest/connecttest.txt and hotspot-detect: operating system connectivity and captive portal detection checks that are extremely common on busy guest networks.
- CaptiveNetworkSupport user agent strings: consistent with Apple captive portal validation behavior rather than an attacker payload delivery tool.
- iboss Cloud Connector activity: proxy connector traffic that can create repetitive web patterns as clients negotiate access.
- Qualys QAgent/qgpublic activity: vulnerability management agent communications that routinely generate background network noise.
This does not mean the IPS signature is wrong in general. It means that in this environment, at this time, the detections were being driven by known benign patterns. The IPS signature provides a hypothesis, and the Zeek telemetry provides the reality check.
To highlight only those benign context indicators in your Endace HTTP results, add a where filter before the stats clause in the SPL query:
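As a sketch (same assumed index, sourcetype, and placeholder source IPs as before), the filter matches the benign URIs and user agents observed above:

```spl
index=endace sourcetype=corelight_http id.orig_h IN ("10.x.x.x", "10.y.y.y")
| where match(uri, "connecttest|hotspot-detect")
    OR match(user_agent, "CaptiveNetworkSupport|iboss|QAgent|qgpublic")
| stats count BY id.orig_h, uri, user_agent
| sort - count
```

If the filtered output accounts for nearly all of the unfiltered volume, the benign-context hypothesis holds.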


Step 4: Confirm the IMAP subset and scope
Several incidents were IMAP-related (port 143). We ran a focused sweep across the same initiator IPs to confirm the scope and ensure there was no evidence of follow-on behavior beyond repeated signature triggers.
An Endace connection-level sweep is a fast way to validate IMAP scope:
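A sketch of that sweep (index and sourcetype are assumptions, source IPs are placeholders, and the fields follow the Zeek conn.log schema):

```spl
index=endace sourcetype=corelight_conn id.resp_p=143 id.orig_h IN ("10.x.x.x", "10.y.y.y")
| stats count, sum(orig_bytes) AS bytes_out, sum(resp_bytes) AS bytes_in,
        values(conn_state) AS conn_states BY id.orig_h, id.resp_h
| sort - count
```

Byte counts and connection states matter here: large outbound transfers or unusual connection states would be the kind of follow-on evidence this sweep is designed to surface.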


The IMAP view repeated the same theme: detection-mode outcomes, passive visibility, and no corroborating indicators of compromise beyond the signature trigger.
Decision: close as false positives and turn it into durable tuning
Putting it together, the most consistent explanation was noisy signature triggers in a high-traffic environment: passive SPAN visibility, IPS in detection mode, and surrounding Endace Zeek context matching normal connectivity checks and known agent or proxy traffic.
We closed all seven incidents as false positives. The key was scoping: tuning should be narrowly targeted so you reduce noise without blinding yourself to real exploit attempts.
What changed after tuning
We used what we learned to recommend targeted suppression and tuning rather than accepting repeated noise. The goal is to tune precisely: suppress the noisy pattern in the environment where it is known to be noisy, while keeping coverage in contexts where an actual exploit attempt would look different and would be worth investigating further.
- Scope suppression to the event network or known noisy segments where SPAN visibility and detection-mode outcomes are expected.
- Require corroborating benign context (for example, msftconnecttest, hotspot-detect, CaptiveNetworkSupport, iboss, Qualys) before deprioritizing.
Specifically, we flagged SigID 17536 and related PROTOCOL IMAP overflow signatures as candidates for tuning when two conditions are true: the ingress interface is SPAN or passive, and the Intrusion Prevention System (IPS) action is “Would block”. In other words, we scoped the tuning to a context where the signatures were producing high volume without supporting evidence of exploitation.
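As a sketch, that conditional suppression can be expressed as an exclusion clause in the correlation search. The index and sourcetype are assumptions; SigID 17536 and the IngressInterface and InlineResult values come from the investigation above:

```spl
index=firewall sourcetype=cisco:ftd:syslog signature="*overflow*"
    NOT (SigID=17536 IngressInterface="SPAN" InlineResult="Would block")
| stats count BY src_ip, dest_ip, SigID, IngressInterface, InlineResult
```

The exclusion fires only when all three conditions hold, so the same signature still alerts on an inline interface or with a blocking result, which is exactly the coverage worth keeping.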
That tuning immediately changed the operational picture. On the next day of the conference, we were able to quickly suppress an additional 17 similar incidents and collapse hundreds of related alerts into a small, understandable set. Instead of burning cycles on repetitive false positives, analysts could stay focused on higher confidence activity.
A repeatable checklist for the next alert storm
When high-severity IPS signatures suddenly cluster into multiple incidents, this checklist helps you move fast without guessing:
- Let Cisco XDR correlation provide the first draft of the story, then validate it with environment context. Multiple incidents often represent one root cause.
- Confirm sensor posture early by reviewing Cisco Secure Firewall IPS alerts coming into Splunk ES. SPAN or passive visibility plus "Would block" changes the question you are answering.
- Use Endace Zeek fields like uri and user_agent to quickly identify connectivity checks, proxies, and agent traffic.
- Decide with evidence, then capture the pattern as scoped tuning.
- Tune surgically. Condition suppression based on network topology and context, so you reduce noise without losing coverage.
In fast-moving environments, the best outcome is not just closing an incident. It is turning that closure into durable improvements that keep analysts in the signal lane.
Check out the other blogs from our SOC team in Amsterdam 2026.
We’d love to hear what you think! Ask a question and stay connected with Cisco Security on social media.