
Mean Time to Detect: What MTTD Actually Measures (and What It Misses)

Hagai Shapira
May 4, 2026
Insights

Your MDR vendor says they cut your mean time to detect from 48 hours to 12 minutes. The board slide shows MTTD trending down for three straight quarters. But when a credential-stuffing campaign sat in your identity provider for nine days before anyone noticed, the dashboard still showed green.

MTTD has become the headline metric for detection capability, and it is one of the most misunderstood numbers in security operations. Not because the math is wrong, but because the metric measures a narrow slice of detection and gets treated as the whole picture. This article breaks down what MTTD actually captures, where it is genuinely useful, and where it creates a false sense of coverage.

TL;DR:

  • MTTD measures the gap between an incident beginning and your team recognizing it, but "beginning" and "recognizing" are defined differently across nearly every organization that tracks the metric.
  • The metric is most useful as an internal trend indicator. Cross-organization and cross-vendor comparisons are nearly meaningless because of definitional variance.
  • MTTD says nothing about whether you detected the incident yourself, how much of the attack path you missed, or how severe the incident was before detection.
  • Better measurement pairs MTTD with detection source attribution, coverage mapping across crown-jewel assets, and investigation quality signals rather than treating any single number as the headline KPI.

What Is Mean Time to Detect?

MTTD is the average gap between when an attacker gains a foothold and when your team knows about it. The definition is easy to agree on. How organizations actually measure it is not.

MTTD sits in a family of overlapping SOC metrics that create their own confusion. MTTI (mean time to investigate, or mean time to identify, depending on who you ask) is sometimes used interchangeably with MTTD, which is itself a reporting problem. There is no industry standard for how any of these indicators are measured, so cross-organization comparisons quickly become apples-to-oranges exercises.

MTTD became a standard KPI because it has useful properties for board reporting: a single number, easy to trend over time, aligned with the intuition that faster detection means less damage. Mandiant's M-Trends 2025 report showed that global median dwell time was 10 days when organizations discovered compromise internally, versus 26 days when an external party notified them. The principle is sound; what rarely holds up is the implementation.

What MTTD Actually Measures in Practice

Every MTTD calculation requires two inputs: a start time (when the incident began) and an end time (when it was detected). Both are unreliable.

The start time is almost never known in real time. It is reconstructed during investigation. The earliest log entry a security team finds in post-incident review is the earliest logged action available to them, not necessarily the earliest malicious action. Different reviewers looking at the same incident will identify different events as the "start." One picks the phishing email delivery. Another picks the first credential use. Another picks the first lateral movement. These choices produce materially different MTTD values from identical underlying evidence.

The end time is equally variable. "Detection" can mean the moment an automated rule fires, the moment a security team member opens that alert, or the moment the team confirms it as a true positive. These three timestamps can differ by hours or days on the same incident. An organization that auto-populates "detection time" from the acknowledgment field is measuring queue management, not detection capability.

The SANS 2019 SOC Survey of 150 SOC organizations found that around half only partially automated data extraction and metric calculation. Most published MTTD figures are constructed from manual workflows that introduce selection bias before any calculation occurs.

A low MTTD can reflect genuine detection capability. It can also reflect that you are only measuring the easy detections. The number depends on several independent variables:

  • Monitoring coverage and which systems are actually watched
  • Detection rule quality and tuning
  • Log ingestion latency
  • Alert routing and prioritization
  • Security expert availability and shift coverage

Improvement in any one of those variables can move MTTD without changing actual security posture.

Shorter detection windows generally reduce attacker dwell time and limit damage scope. But a fast detection that sits in a queue for six hours is operationally equivalent to a slow detection that gets immediate attention. Detection speed matters only when it connects to investigation and response.

How to Calculate MTTD Correctly

The formula is simple. Sum of (detection time minus incident start time) across relevant incidents, divided by incident count. What breaks it is almost always the inputs.

For example: if your team detected three incidents this month, with dwell times of 2 hours, 6 hours, and 16 hours, your MTTD is (2 + 6 + 16) ÷ 3 = 8 hours. Simple enough. The problem is that "2 hours" assumes you know exactly when the incident started and exactly when detection occurred, and most teams do not.
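
In code, the same calculation is a few lines. Here is a minimal sketch using Python's standard library; the incident records and field names are illustrative, not a prescribed schema. It reports median and sample size alongside the mean, for reasons covered below.

```python
from datetime import datetime
from statistics import mean, median

# Illustrative incident records. In practice these timestamps come from a
# ticketing system or SIEM, and start times are usually forensic estimates
# rather than observed facts.
incidents = [
    {"started": datetime(2026, 4, 2, 9, 0), "detected": datetime(2026, 4, 2, 11, 0)},   # 2 h
    {"started": datetime(2026, 4, 9, 14, 0), "detected": datetime(2026, 4, 9, 20, 0)},  # 6 h
    {"started": datetime(2026, 4, 20, 1, 0), "detected": datetime(2026, 4, 20, 17, 0)}, # 16 h
]

dwell_hours = [(i["detected"] - i["started"]).total_seconds() / 3600 for i in incidents]

# Report mean, median, and sample size together. A large gap between mean
# and median signals that outliers are driving the headline number.
print(f"MTTD (mean):   {mean(dwell_hours):.1f} h")    # 8.0 h
print(f"MTTD (median): {median(dwell_hours):.1f} h")  # 6.0 h
print(f"Sample size:   {len(dwell_hours)} incidents")
```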

A meaningful calculation needs three things: a reliable incident start estimate, which often requires forensic reconstruction; a consistent definition of "time of detection" applied uniformly across all incident types; and clean incident logs with timestamps that reflect when events happened, not when they were entered into a ticketing system.

Most organizations have none of the three locked down. Even with clean inputs, several patterns regularly distort what the number tells you:

Mixing detection sources

The most common distortion is mixing detection sources. Incidents surfaced by law enforcement, threat intel vendors, or customer reports have a fundamentally different detection profile than ones your own tooling caught. Blending both into one average muddies the trend and hides how much of your detection capability you actually own. Track them separately, as the sketch below illustrates.
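
A minimal sketch of the split, with hypothetical dwell times and source labels:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical incidents: dwell time in hours, plus how each was first
# surfaced. Externally notified incidents typically carry far longer dwell
# times than ones your own tooling caught.
incidents = [
    {"dwell_h": 3, "source": "internal"},    # own detection rule
    {"dwell_h": 7, "source": "internal"},    # own detection rule
    {"dwell_h": 216, "source": "external"},  # third-party notification
]

by_source = defaultdict(list)
for i in incidents:
    by_source[i["source"]].append(i["dwell_h"])

# Per-source numbers expose what a single blended figure hides.
for source, dwells in by_source.items():
    print(f"{source}: MTTD {mean(dwells):.0f} h over {len(dwells)} incident(s)")
print(f"blended: MTTD {mean(i['dwell_h'] for i in incidents):.1f} h")
```

Here the blended 75-hour figure describes neither the 5-hour internal capability nor the 216-hour external reality.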

Timestamp inconsistency

When incident start time is entered manually at ticket closure rather than auto-populated at alert creation, the timestamp reflects post-hoc reconstruction. Pull your last 10 closed incidents and compare three timestamps for each: when the SIEM or EDR first fired, when someone opened the ticket, and when the team confirmed a true positive. If those three numbers differ substantially and your calculation uses only one, you need to audit which one and why.
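
A short audit script makes that comparison mechanical. This sketch assumes hypothetical field names (alert_fired, ticket_opened, verdict_confirmed); map them to whatever your ticketing export actually calls them.

```python
from datetime import datetime

def audit_detection_timestamps(incidents):
    """Print the gaps between the three candidate 'detection' timestamps."""
    for inc in incidents:
        queue_wait = (inc["ticket_opened"] - inc["alert_fired"]).total_seconds() / 3600
        triage = (inc["verdict_confirmed"] - inc["ticket_opened"]).total_seconds() / 3600
        print(f"{inc['id']}: fired->opened {queue_wait:.1f} h, opened->confirmed {triage:.1f} h")

# One hypothetical closed incident; run this over your last 10.
audit_detection_timestamps([{
    "id": "INC-1042",
    "alert_fired": datetime(2026, 4, 14, 2, 5),        # SIEM/EDR rule fired
    "ticket_opened": datetime(2026, 4, 14, 6, 40),     # analyst acknowledged
    "verdict_confirmed": datetime(2026, 4, 14, 9, 15), # true positive confirmed
}])
```

If your MTTD calculation uses only one of those timestamps and the gaps run to hours, the number is describing queue behavior, not detection.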

Noise dilution

High-volume, low-severity events that generate fast automated detections dominate the blended average. A team handling 200 EDR behavioral alerts per month at a two-minute average MTTD, 50 SIEM rule-based alerts at 45 minutes, and five complex multi-stage incidents at eight hours produces a blended MTTD around 20 minutes. That headline hides the eight-hour gap on the incidents that actually matter.
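
The blending effect is easy to reproduce. A minimal sketch with the volumes and dwell times from the example above:

```python
from statistics import mean

# Dwell times in minutes for the three alert classes described above.
edr = [2] * 200         # EDR behavioral alerts, ~2 min each
siem = [45] * 50        # SIEM rule-based alerts, ~45 min each
multistage = [480] * 5  # complex multi-stage incidents, ~8 h each

print(f"blended MTTD:     {mean(edr + siem + multistage):.0f} min")  # ~20 min
print(f"multi-stage MTTD: {mean(multistage) / 60:.0f} h")            # 8 h
```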

Small sample sizes

For organizations handling fewer than 30 confirmed incidents per reporting period, a single long-dwell outlier can double the average. Every MTTD report should include median alongside mean, plus sample size. A large gap between the two signals that outliers are driving the headline number.

Survivorship bias

You can only calculate MTTD for incidents you eventually detected. The ones you never found do not appear in the calculation; by definition, those are the incidents where your detection capability failed most completely. Threat hunting can surface incidents that other controls missed, and that increased visibility can make reported MTTD rise even as detection genuinely improves.

Where MTTD Works and Where It Misleads

MTTD is a reasonable directional indicator within the right boundaries. Outside them, it obscures more than it reveals.

Where MTTD Is Useful

MTTD earns its place in these specific situations:

  • Tracking internal trends quarter over quarter within the same environment, process definitions, and detection stack. Change any of those variables and the trend line breaks.
  • Evaluating specific changes, whether new monitoring coverage, detection rules, or automation improved detection speed for the incident types they targeted.
  • Exposing persistent gaps. Consistently high MTTD for a specific incident class (identity-based attacks, SaaS misconfigurations, cloud workload anomalies) tells you where to invest.

In each case, the value comes from using MTTD as a narrow, consistent measure rather than a program-wide headline. The moment it gets blended across environments, vendors, or incident types, the signal breaks down.

Where MTTD Misses

The two most consequential blind spots are detection source and attack path. MTTD does not distinguish between incidents your team caught and incidents reported by a third party. Mandiant's data is instructive: global median dwell time was 10 days when organizations discovered malicious activity internally and 26 days when external entities notified them. 

A four-hour MTTD from an FBI notification is not the same as a four-hour MTTD from your own detection rules. The metric treats them identically. On attack path, MTTD captures only a single point in an attack chain. An attacker may have been moving laterally for days before triggering the alert that started your clock.

Three further blind spots are worth stating plainly:

  • All detected incidents are treated equally. A 10-minute detection on a low-severity policy violation and a 10-minute detection on active data exfiltration produce the same number.
  • Definitional variance makes cross-vendor comparisons unreliable. One vendor counts first alert. Another counts validation. Another counts automated triage completion. The numbers look comparable. The realities they describe are not.
  • High alert volumes can artificially lower MTTD while degrading actual detection quality. A declining MTTD next to a rising false positive rate is a tuning problem disguised as an improvement. Alert fatigue compounds this: teams stop investigating and start pattern-matching, and attackers know it.

Any team using MTTD for vendor comparisons or program-level decisions should account for all of these before drawing conclusions from the number.

Putting MTTD in Context with Other Metrics

MTTD becomes meaningful when paired with measures that address its blind spots. The goal is a small set of signals where MTTD is one input among others, not the headline.

Detection source ratio

What percentage of incidents were detected internally versus reported by a third party? This answers the question MTTD cannot. Did we see it, or did someone tell us? Two providers both citing a two-hour MTTD can mean opposite things. One sources 90% of incidents from its own detection rules and investigation. The other relies heavily on customer-reported tickets and external notifications. Same metric, opposite signals.

Detection coverage mapping

What percentage of crown-jewel assets, identity providers, cloud workloads, and SaaS applications are under active monitoring with validated detection rules? The MITRE ATT&CK framework was designed for exactly this question. Without coverage context, MTTD has no denominator.
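
Once an asset inventory exists, the ratio itself is trivial to compute. A toy sketch with hypothetical asset classes:

```python
# Hypothetical crown-jewel inventory: asset class mapped to whether it has
# validated detection rules under active monitoring.
coverage = {
    "identity provider": True,
    "cloud workloads": True,
    "SaaS applications": False,
    "on-prem endpoints": True,
}

covered = sum(coverage.values()) / len(coverage)
print(f"validated detection coverage: {covered:.0%}")  # 75%
# A 12-minute MTTD over the covered 75% says nothing about incidents that
# begin in the unmonitored 25%.
```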

Investigation confidence

What percentage of detections reached a high-confidence verdict without escalation? This directly measures whether the detection-to-resolution pipeline is working, not just whether the first alert was fast. A team that resolves most alerts to a confident verdict autonomously has a fundamentally different detection posture than one that escalates a large share, even if both report the same MTTD.

Detection-to-response cohesion

Time from detection to completed investigation and response action, measured as a single pipeline rather than separate metrics that obscure the handoff gap. A persistently large gap between detection and response with stable MTTD signals that the intervention needed is response automation, not further detection investment.
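
Apart from coverage mapping, which needs an asset inventory, these companion metrics all fall out of a single incident list once it carries a few extra fields. A sketch, with hypothetical field names and values:

```python
from statistics import median

# Hypothetical incident records. detect_h and respond_h are hours from
# incident start to detection and to completed response, respectively.
incidents = [
    {"source": "internal", "escalated": False, "detect_h": 0.5, "respond_h": 2.0},
    {"source": "internal", "escalated": True, "detect_h": 1.0, "respond_h": 9.0},
    {"source": "external", "escalated": True, "detect_h": 96.0, "respond_h": 101.0},
]

internal_ratio = sum(i["source"] == "internal" for i in incidents) / len(incidents)
no_escalation = sum(not i["escalated"] for i in incidents) / len(incidents)
# Detection-to-response measured as one pipeline, not two disconnected metrics.
pipeline_gap_h = [i["respond_h"] - i["detect_h"] for i in incidents]

print(f"internally detected:                  {internal_ratio:.0%}")
print(f"confident verdict without escalation: {no_escalation:.0%}")
print(f"median detection-to-response gap:     {median(pipeline_gap_h):.1f} h")
```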

The practical reframe is straightforward. Shift from "lower MTTD overall" to "earlier detection for the incidents that matter most, with clear evidence and fewer escalations."

Where Detection Actually Ends (and Why That Changes MTTD)

MTTD is defined by where the detection process terminates. A two-minute MTTD in one setup and a two-minute MTTD in another can describe completely different realities, because the two processes stopped the clock at different points.

  • Alert firing: The clock stops when the detection rule triggers and an alert lands in a queue. This is the fastest possible MTTD reading. It also says the least. An alert firing is not detection in any operational sense. It is a signal that something may warrant investigation. Useful for tuning detection engineering. Misleading as a vendor claim.
  • Analyst acknowledgment: The clock stops when a human opens the alert. This introduces queue wait time, availability, and shift coverage. Teams running legacy MDR operations, including MSSPs offering MDR, typically terminate detection here. The MTTD value reflects operational throughput as much as detection speed. Adding staff lowers MTTD without changing detection quality. It is often a staffing claim, not a capability claim.
  • Classification verdict: The clock stops when an automated tool or human reviewer classifies the alert as benign, suspicious, or malicious. This is where many teams running AI SOC tools terminate their MTTD clock. Teams using these tools still own detection and response, but the tools are responsible for triage and some of the investigation. In these cases, the question shifts from how fast did we detect to how fast did we classify. A classification is progress, but it is not a confident verdict.
  • Confident investigation verdict: The clock stops when the full investigation concludes with evidence, context, and a verdict the team would defend to auditors. This is where detection and investigation converge. MTTD measured here reflects actual time from incident start to usable conclusion. It is the hardest number to produce and the most operationally meaningful.
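
To make the spread concrete, here is a sketch with invented timestamps showing the same incident producing four different MTTD readings depending on where the clock stops:

```python
from datetime import datetime

incident_start = datetime(2026, 4, 14, 2, 0)  # forensic estimate

# Four candidate termination points for the same incident.
clock_stops = [
    ("alert firing", datetime(2026, 4, 14, 2, 2)),
    ("analyst acknowledgment", datetime(2026, 4, 14, 5, 30)),
    ("classification verdict", datetime(2026, 4, 14, 6, 10)),
    ("confident investigation verdict", datetime(2026, 4, 14, 9, 45)),
]

for label, stop in clock_stops:
    minutes = (stop - incident_start).total_seconds() / 60
    print(f"{label}: MTTD {minutes:.0f} min")
```

Same incident, and the reported number runs from 2 minutes to nearly 8 hours.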

A provider quoting a fast MTTD should be asked where their clock stops. If the answer is "first alert" or "triage classification," the number measures an earlier step than a buyer typically assumes. The termination point a team optimizes for reveals their operating philosophy. A team optimizing for alert-firing speed will engineer for rule volume. A team optimizing for verdict speed will engineer for investigation depth, which typically runs through richer context.

Four questions surface where any provider's clock actually stops:

  • At what point in your process does the MTTD clock stop?
  • Can you show incident examples where detection improved for a specific incident class, not just in aggregate?
  • How do you handle detections originating from cloud, identity, and SaaS versus endpoint?
  • What percentage of investigations reach a confident verdict without customer escalation?

Together, these questions move the conversation from a single timestamp to the operating model behind it.

Investigation Quality Determines What MTTD Can Tell You

The bottleneck in most SOC environments is not how fast the first alert fires. It is how long it takes to reach a confident verdict on what that alert means. MTTD measures the time between when an incident begins and when it is first detected, but it tells you nothing about the gap between detection and verdict. That post-detection gap is where real security outcomes are determined.

When an investigation has access to the right context, the path from alert to verdict compresses. Not because the alert fired faster, but because the investigation did not stall waiting for information.

A detection that fires in two minutes but requires four hours of manual context assembly before a verdict is, from an operational standpoint, indistinguishable from discovering the incident four hours in. A detection that fires in 10 minutes but arrives with pre-assembled evidence and prior investigation history can reach a verdict in minutes. MTTD captures the first timestamp. It misses everything that follows.

MTTD is not a bad metric. It is an incomplete one. Used as a trend indicator within a consistent environment and process, it surfaces real signal about detection capability. Used as a headline KPI across vendors, organizations, or architectures, it obscures more than it reveals. The better question is not how fast did we detect, but how confidently did we reach a verdict, and how much of the attack surface were we actually watching?
