Most Enterprise SIEMs Can Only Detect 1 in 5 Known Attack Techniques. New Research Shows Why.

The Security Digest - News Team
Published May 7, 2026

The data is already in the SIEM, but the detection rules aren’t. A growing body of research is revealing a structural gap between what security teams collect and what they can actually see, and the causes go deeper than headcount or budget.

According to Mitiga, the average enterprise SIEM detects approximately 21% of MITRE ATT&CK techniques. Not 21% of all theoretical attack scenarios, but of all documented, cataloged, known techniques that real adversaries use in real intrusions, based on years of observed behavior mapped by MITRE across 14 tactics and hundreds of individual techniques and sub-techniques. The CardinalOps Fifth Annual State of SIEM Detection Risk report, analyzing more than 13,000 detection rules across hundreds of production SIEM environments, reached a similar conclusion.

While this finding is alarming in and of itself, it points to a larger, implicit problem: unused telemetry. The telemetry organizations need to detect those techniques is already flowing into the SIEM. The log data from endpoints, firewalls, identity systems, cloud infrastructure, and email gateways is there. It’s being collected, stored, and retained. It’s just that nobody has written the rules to use it across the entire attack surface.

What 21% coverage actually looks like during an attack

Consider a moderately sophisticated intrusion that follows a common pattern: initial access via a phishing email with a malicious attachment; execution via a macro that spawns PowerShell; persistence via a scheduled task; privilege escalation via credential dumping; lateral movement via Remote Desktop Protocol; and data exfiltration over an encrypted channel.

That chain involves at least six distinct MITRE ATT&CK techniques across six tactics. In a SIEM with 21% coverage, the math says roughly one of them gets detected. Maybe the phishing email triggers an alert because your email gateway feeds into the SIEM, and there is a rule for known malicious attachment types. Or maybe the PowerShell execution fires because someone wrote that detection last quarter after a tabletop exercise. But the scheduled task persistence? No rule. The credential dump? No rule. The lateral movement? No rule. The exfiltration? No rule.
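
A back-of-the-envelope model makes those odds concrete. The sketch below (Python) assumes each technique in the chain is independently covered with probability 0.21, which is a simplification, and uses one plausible ATT&CK mapping for the six steps:

```python
# Back-of-envelope: a six-technique intrusion chain vs. 21% per-technique
# coverage. Assumes independent coverage per technique, a simplification.
COVERAGE = 0.21
chain = [
    "T1566 Phishing",                                # initial access
    "T1059 Command and Scripting Interpreter",       # execution
    "T1053 Scheduled Task/Job",                      # persistence
    "T1003 OS Credential Dumping",                   # credential access
    "T1021.001 Remote Desktop Protocol",             # lateral movement
    "T1048 Exfiltration Over Alternative Protocol",  # one plausible exfil mapping
]

expected_detections = COVERAGE * len(chain)    # about 1.26 techniques
p_fully_silent = (1 - COVERAGE) ** len(chain)  # about 0.24

print(f"Expected detections across the chain: {expected_detections:.2f}")
print(f"Chance the whole intrusion fires zero alerts: {p_fully_silent:.0%}")
```

Under those assumptions, the chain produces about 1.3 alerts on average, and there is roughly a one-in-four chance it produces none at all.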

That is the operational reality described by the CardinalOps research, by the Mitiga analysis, and by multiple security leaders who spoke during recent platform evaluations. The SIEM is collecting evidence, but no one is building the necessary rules to apply it across all these scenarios.

Why the rules don’t exist: Five structural causes

The instinctive explanation is staffing. There aren’t enough detection engineers. That’s true. The global cybersecurity workforce gap sits at 3.5 million unfilled positions. But staffing is only one of five structural causes, and it may not be the most important one.

1. Detection engineering is still a manual and serial process.

The 2025 State of Detection Engineering Report from Anvilogic and ESG found that 86% of security professionals say it takes a week or more to go from identifying the need for detection to testing and deploying it in production. That timeline includes reading threat intelligence, researching the attack chain, writing detection logic in the SIEM's query language, testing against historical data, tuning for false positives, obtaining change management approval, and deploying. At one rule per week, an organization would need over three years of uninterrupted detection engineering to cover the full ATT&CK matrix. No organization has three uninterrupted years. The backlog grows every day.
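
A quick sketch shows why the backlog compounds. The counts are illustrative: ATT&CK Enterprise contains roughly 200 techniques plus several hundred sub-techniques, and the two-detections-per-week demand figure is a hypothetical.

```python
# Why a one-rule-per-week pipeline never catches up. Counts are illustrative.
TECHNIQUES = 200           # roughly the ATT&CK Enterprise technique count
RULES_PER_WEEK = 1         # Anvilogic/ESG: a week or more per detection
NEW_NEEDS_PER_WEEK = 2     # hypothetical: new variants, advisories, env changes

weeks_to_cover = TECHNIQUES / RULES_PER_WEEK
print(f"Years to cover the matrix at the current pace: {weeks_to_cover / 52:.1f}")

backlog = TECHNIQUES
for _ in range(52):
    backlog += NEW_NEEDS_PER_WEEK - RULES_PER_WEEK
print(f"Backlog after one year: {backlog} detections (started at {TECHNIQUES})")
```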

2. Organizations write detections for headlines and compliance rather than coverage.

Detection engineering effort is not distributed evenly across the ATT&CK framework. It clusters around two things: whatever threat made the news this quarter, and whatever the compliance framework requires. CrowdStrike reported that 81% of intrusions in the period from July 2024 to June 2025 were malware-free. The attacks that dominate the threat landscape right now (credential theft, lateral movement, and abuse of legitimate tools) are the ones that require the most carefully crafted behavioral detections. They’re also the ones most likely to be sitting in the uncovered 79%.

A ransomware campaign hits the industry, and suddenly everyone starts writing ransomware detections. A regulatory audit asks about data loss prevention, and suddenly DLP rules come into focus. Meanwhile, techniques like OS credential dumping (T1003), command and scripting interpreter abuse (T1059), and scheduled task persistence (T1053) sit uncovered because no auditor asked about them and no headline made them urgent.

3. The telemetry exists, but nobody maps it to techniques.

Most organizations have already solved the data collection problem. They’re ingesting endpoint logs, authentication events, firewall data, DNS queries, cloud API activity, and email metadata. That telemetry, properly mapped, covers a significant portion of the ATT&CK framework. But the mapping hasn’t happened. No one has sat down with the 191 tables in the SIEM (a real number from one enterprise environment reviewed during a recent evaluation) and systematically asked: Which ATT&CK techniques can I detect with the data I already have? What rules would I need to write? Where are my gaps, not because I’m missing data, but because I’m missing logic?
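
In code, the core of that exercise is a set difference. The sketch below is a minimal illustration; the source names and technique mappings are hypothetical stand-ins, not any real environment’s inventory.

```python
# Minimal sketch of the telemetry-to-ATT&CK mapping exercise.
# Source names and technique mappings are hypothetical stand-ins.

# Which techniques could each ingested data source support, if rules existed?
telemetry_supports = {
    "endpoint_process_events": {"T1059", "T1053", "T1003"},
    "authentication_logs":     {"T1110", "T1021"},
    "dns_queries":             {"T1071", "T1048"},
    "email_gateway_metadata":  {"T1566"},
}

# Which techniques actually have deployed detection rules?
rules_deployed = {"T1566", "T1059"}

detectable = set().union(*telemetry_supports.values())
logic_gap = detectable - rules_deployed  # data exists, rule does not

print(f"Techniques the data could support: {len(detectable)}")
print(f"Covered by a deployed rule:        {len(rules_deployed & detectable)}")
print(f"Gap (missing logic, not data):     {sorted(logic_gap)}")
```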

4. Detection rules decay, and nobody maintains them.

Industry analysis consistently highlights that SIEM maintenance is an ongoing burden. Environments change. New applications get deployed. Cloud migrations shift where data lives. API updates change log formats. An operating system update alters how a process behaves, and a detection rule that worked perfectly six months ago now produces false positives or, worse, stops matching entirely. Rule tuning, connector maintenance, and periodic revalidation of data quality are continuous requirements. The result is a library of detection rules in which an unknown percentage no longer function as intended. The dashboard says there are 400 active rules. How many of those actually detect what they were written to detect? In most organizations, nobody knows. Nobody has time to check.
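
One low-cost countermeasure is to treat detections like software and give every rule a regression fixture: a sample event that must always match. Here is a minimal sketch of the idea, with rules expressed as Python predicates for illustration; real SIEM rules would be saved queries revalidated on a schedule.

```python
# Regression-test sketch for detection rules: each rule ships with a fixture
# event that must always match. Predicates stand in for real SIEM queries.

rules = {
    "powershell_encoded_cmd": {
        "match": lambda e: e.get("process") == "powershell.exe"
                           and "-enc" in e.get("cmdline", "").lower(),
        "fixture": {"process": "powershell.exe",
                    "cmdline": "powershell -enc AAAA"},
    },
    "schtask_creation": {
        "match": lambda e: e.get("process") == "schtasks.exe"
                           and "/create" in e.get("cmdline", "").lower(),
        # Fixture reflects post-update telemetry: the observed process name
        # changed, so this rule has silently decayed and no longer matches.
        "fixture": {"process": "svchost.exe",
                    "cmdline": "schtasks /create taskname"},
    },
}

broken = [name for name, r in rules.items() if not r["match"](r["fixture"])]
print(f"{len(rules) - len(broken)}/{len(rules)} rules still match their fixtures")
print(f"Decayed rules needing attention: {broken}")
```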

5. There is no feedback loop between detection and coverage measurement.

Most security operations teams measure detection by alert volume, mean time to detect, mean time to respond, and false positive rate. These are useful operational metrics, but they tell analysts nothing about coverage. Volume, for example, says how many rules fired, not how many techniques are covered. As the CardinalOps research put it: until an organization has an honest, ATT&CK-mapped, continuously validated picture of its actual detection coverage, it is making decisions about security investment with a map that does not accurately represent the territory.
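
Measuring coverage instead of volume is mostly a bookkeeping change: tag every rule with the technique IDs it detects, then report the technique set rather than the alert count. A minimal sketch of the contrast, with hypothetical numbers:

```python
# Alert volume vs. technique coverage: two very different pictures.
# All numbers here are hypothetical, for illustration only.

alerts_fired = {"phishing_attachment": 4_812, "powershell_encoded": 397}
rule_to_techniques = {"phishing_attachment": {"T1566"},
                      "powershell_encoded": {"T1059"}}

relevant_techniques = 120  # hypothetical: techniques in the org's threat profile
covered = set().union(*rule_to_techniques.values())

print(f"Alerts last quarter: {sum(alerts_fired.values()):,}")  # looks busy
print(f"Technique coverage:  {len(covered)}/{relevant_techniques} "
      f"({len(covered) / relevant_techniques:.0%})")           # looks honest
```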

The tools that are starting to close the gap

A small number of organizations have begun approaching detection coverage as an engineering problem. Instead of asking a human engineer to read an advisory, research the attack chain, write rules, and test them, these organizations use AI agents to perform the engineering work while keeping humans in the approval loop.

During a recent enterprise evaluation of Strike48, the agentic SOC platform built on the Devo SIEM, the platform demonstrated a detection engineering workflow that compressed the traditional week-long process into minutes. A threat advisory URL was fed into the system. The platform’s engine crawled the advisory, found five to eight additional sources reporting on the same campaign, extracted every TTP and IOC from the combined intelligence, generated detection rules mapped to the SIEM’s query language, tested them against historical data, and flagged them for deployment.
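
Whatever engine executes it, that workflow reduces to a pipeline of stages. The skeleton below is an illustrative sketch of those stages, not Strike48’s implementation; the fetch and backtest steps are placeholders, and the extraction is a toy regex pass over advisory text.

```python
# Illustrative skeleton of an advisory-to-detection pipeline, mirroring the
# stages described above. A sketch, not any vendor's implementation.
import re

def fetch_advisory(url: str) -> str:
    """Crawl the advisory (and, in a real pipeline, corroborating sources)."""
    # Placeholder: a real implementation would fetch and parse the page.
    return "Simulated advisory: uses T1059.001 PowerShell, persists via T1053.005."

def extract_ttps(text: str) -> set[str]:
    """Toy extraction: pull ATT&CK technique IDs out of the text."""
    return set(re.findall(r"T\d{4}(?:\.\d{3})?", text))

def generate_rules(ttps: set[str]) -> list[dict]:
    """Draft one rule stub per technique, in the SIEM's query language."""
    return [{"technique": t, "query": f"-- TODO: detection query for {t}"}
            for t in sorted(ttps)]

def backtest(rule: dict) -> bool:
    """Run the rule against historical data; placeholder always passes."""
    return True

def pipeline(url: str) -> list[dict]:
    rules = generate_rules(extract_ttps(fetch_advisory(url)))
    # Humans stay in the loop: passing rules are queued for review, not deployed.
    return [r for r in rules if backtest(r)]

print(pipeline("https://example.com/advisory"))
```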

The platform also includes a SIEM administrator agent that does the mapping exercise described in cause number three above. It analyzes every telemetry source flowing into the SIEM, cross-references them against detection libraries, and produces a coverage analysis showing which data sources are underutilized, which technique categories are uncovered, and where the gaps are. One organization’s analysis identified 191 tables across 27 categories and calculated detection coverage percentages for each.

The same platform ran an automated assessment across 28 attack techniques from the Atomic Red Team library and produced a report showing detection rate, coverage gaps, noisy alert patterns, and specific recommendations. The security director, watching the demonstration, was asked how long the same exercise would take his team to complete using their current process. “Prohibitively long,” he said. “I just wouldn’t do it because it would take too much time.”

That answer is the 21% stat in human terms. The coverage gap isn’t a mystery. Security leaders know it exists. They just can’t close it fast enough with the tools and processes they have.

What 80% coverage would actually require

Reaching full ATT&CK coverage is neither realistic nor necessary. Not every technique is relevant to every organization, and some require telemetry sources that may not be worth the cost or complexity of collecting them. But the distance between 21% and even 60% or 70% represents a meaningful reduction in blind spots. Getting there would require three things most organizations currently lack.

First, a systematic mapping of existing telemetry to ATT&CK techniques. Most organizations don’t know what they can already detect with the data they’re collecting. That mapping exercise is the single highest-leverage activity a security team can perform, and it’s the one that gets deprioritized most consistently.

Second, an automated or semi-automated detection engineering pipeline. At one rule per week per engineer, the math doesn’t work. The backlog will always grow faster than the team can ship. Automation doesn’t remove the engineer from the process. It removes the mechanical work (query writing, syntax validation, and historical testing) and gives the engineer the role of reviewer and approver instead of builder.

Third, continuous measurement of coverage against the framework rather than operational metrics. The only honest metric is: for each ATT&CK technique relevant to my threat profile, do I have a functioning detection rule that will fire when that technique is executed in my environment? Most organizations cannot answer that question today. The ones that can are starting from a very different position than those still measuring success by how quickly they close tickets.

The 79% that remains

The coverage gap exists because the process for turning collected telemetry into detection logic was designed for a threat landscape that moved slowly enough for humans to keep up. With the evolution of AI and advanced automation, however, that landscape is gone. CrowdStrike’s data on malware-free intrusions, the Anvilogic finding on detection deployment timelines, and the CardinalOps research on unused telemetry all point to the same conclusion. The problem is less about data collection and more about data usage.

Twenty-one percent is a failure of process. And until security leaders start measuring coverage the way they measure response time, the 79% that remains undetected will keep growing alongside adversary tradecraft.