
What Are Managed Agentic Security Services (MASS), and How Do They Differ From MDR?

Hagai Shapira
March 27, 2026
Insights

You signed with an MDR provider expecting your team to stop drowning. Instead, you got a flood of escalations, a black box you can't audit, and junior staff on the night shift who know less about your environment than your newest hire. 

The coverage was supposed to free your team for strategic work. Instead, they spend half their day re-triaging what the MDR already triaged.

This is an architecture problem. Legacy MDR goes back roughly a decade, when dedicated providers carved out a niche by saying, "we are experts in SOC investigations." They were better than MSSPs, which were generalists doing 50 different things with no specialization. 

But the model still relied on junior staff running deterministic SOAR workflows: pre-coded investigation paths that break the moment an attack doesn't follow the script.

AI has changed this model. It supports non-deterministic reasoning, investigates at a greater scale, and adapts during an investigation rather than selecting from templates. But bolting AI onto the same legacy architecture doesn't solve the structural problem.

The architecture itself needs to change. That's the starting point for Managed Agentic Security Services (MASS): a new architecture for delivering modern security services. MASS combines three layers: a deep integration layer, an agentic platform, and senior security experts.

TL;DR:

  • MASS is a new architecture for delivering modern security services. It offers a combination of an agentic platform and senior security experts to deliver services like MDR, threat hunting, managed phishing, DLP, and more.
  • The bottleneck to automating security operations is context, not raw model capability. Dumping raw data on an LLM can cause hallucinations; curating telemetry, organizational, and historical context is what makes AI verdicts trustworthy.
  • MASS is not a list of services. It is an architecture that enables multiple security services to run on a shared foundation of integrations, agentic investigation, and expert-led context. Where legacy MDR providers were narrow specialists, MASS delivers broad coverage with consistent investigation quality across every service.
  • Security experts in MASS are system builders, not AI babysitters. Their primary role is building and scaling the context that makes AI accurate, not reviewing every AI output to catch mistakes.

What Is Managed Agentic Security Services (MASS)?

MASS is a managed security services model that uses an agentic platform and security experts to deliver multiple security operations services through an AI-native foundation.

Current MASS services include MDR, threat hunting, managed phishing, DLP, and more, all running on the same foundation of deep integrations, an agentic platform, and senior security experts with incident response and threat hunting backgrounds.

The "agentic" part is precise. It does not mean a single AI agent is trying to do everything. The way Daylight has built their agentic platform, they've created multiple specialized AI agents and an AI-infused orchestration system that syncs between the AI agents and between agents and humans.  This enables Daylight to build a customized playbook during investigation, as every step is determined based on the output of the previous step.

This is architecturally distinct from both "AI SOAR workflows" and AI capabilities bolted onto legacy platforms. Multi-agent architectures can resemble specialist security team structures, with narrow areas of expertise that can be maintained and improved independently.
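To make the step-by-step idea concrete, here is a minimal Python sketch of dynamic next-step selection. The `endpoint_agent` and `identity_agent` specialists and the verdict names are illustrative assumptions, not Daylight's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    """Output of one agent step; it drives the choice of the next step."""
    agent: str
    verdict: str                      # e.g. "benign", "needs_identity_check"
    details: dict = field(default_factory=dict)

# Hypothetical specialist agents, each with a narrow area of expertise.
def endpoint_agent(alert):
    if alert.get("process", "").endswith("powershell.exe"):
        return Finding("endpoint", "needs_identity_check")
    return Finding("endpoint", "benign")

def identity_agent(alert):
    if alert.get("user_role") == "service_account":
        return Finding("identity", "suspicious",
                       {"reason": "interactive use of a service account"})
    return Finding("identity", "benign")

def orchestrate(alert):
    """Build the playbook during the investigation: the next agent is chosen
    from the previous finding rather than from a pre-coded SOAR path."""
    steps = [endpoint_agent(alert)]
    if steps[-1].verdict == "needs_identity_check":
        steps.append(identity_agent(alert))
    return steps

trace = orchestrate({"process": "powershell.exe", "user_role": "service_account"})
```

The key property is that the identity check only runs because the endpoint finding asked for it; a different first finding produces a different playbook.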

Why Context Is the Bottleneck (Not AI Capability)

The bottleneck to automating security operations is not AI capability. It is context architecture: the ability to systematically build, maintain, and make context accessible to AI at scale.

A MASS architecture addresses this through three distinct context types:

1. Telemetry Context

Machine-readable data from security tools. Some AI SOC platforms ingest broad telemetry without curation. A single alert may include over 100 fields, but only five to seven may matter for that specific alert type. 

MASS curates which fields matter, tagging data so agents pull exactly what they need rather than dumping everything on the model.
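A minimal sketch of this kind of field curation, using hypothetical alert types and field names as the allowlist:

```python
# Hypothetical per-alert-type allowlists: of ~100 raw fields, only a handful
# matter for a given alert type, so curate before handing anything to the model.
RELEVANT_FIELDS = {
    "impossible_travel": ["user", "src_ip", "geo_country", "timestamp", "auth_result"],
    "malware_detected":  ["host", "file_hash", "file_path", "process", "severity"],
}

def curate(alert_type: str, raw_alert: dict) -> dict:
    """Return only the fields tagged as relevant for this alert type."""
    wanted = RELEVANT_FIELDS.get(alert_type, [])
    return {k: raw_alert[k] for k in wanted if k in raw_alert}

raw = {f"field_{i}": i for i in range(100)}            # noisy bulk telemetry
raw.update({"user": "alice", "src_ip": "203.0.113.7", "geo_country": "NL",
            "timestamp": "2026-03-27T02:14:00Z", "auth_result": "success"})

curated = curate("impossible_travel", raw)             # 5 fields, not 105
```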

2. Organizational Context

Policies, exceptions, and unwritten rules unique to the organization. This context typically requires deliberate human documentation through direct engagement with customer teams. It lives in Slack threads, stale wikis, or people's heads. Building it into a structured, AI-accessible format is intensive work.
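One way such organizational context could be represented is as a machine-readable knowledge item. The structure below and the `applies` helper are illustrative assumptions, not Daylight's schema:

```python
# Hypothetical structured knowledge item: an unwritten rule pulled out of
# Slack threads and interviews, made machine-readable so agents can apply it.
knowledge_item = {
    "id": "ki-0042",
    "title": "Finance team logs in through a shared VPN egress range",
    "applies_to": {"alert_types": ["impossible_travel"], "teams": ["finance"]},
    "rule": "Logins from 198.51.100.0/24 by finance users are expected",
    "source": "onboarding interview",
    "review_by": "2026-07-01",   # context decays, so schedule re-validation
}

def applies(item: dict, alert: dict) -> bool:
    """Check whether a knowledge item is relevant to a given alert."""
    scope = item["applies_to"]
    return (alert["alert_type"] in scope["alert_types"]
            and alert["team"] in scope["teams"])
```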

3. Historical Context

Collective memory from past investigations. In most SOCs, tickets say "Closed, benign" without capturing why. Every departure, every vacation, every shift change resets institutional knowledge. MASS captures investigation reasoning, so it compounds over time rather than walking out the door when someone leaves.
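A sketch of what capturing investigation reasoning could look like, assuming a hypothetical `InvestigationRecord` structure rather than Daylight's actual format:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class InvestigationRecord:
    """Capture the *why* behind a verdict, not just 'Closed, benign'."""
    alert_id: str
    verdict: str
    reasoning: str        # the evidence chain that justified the verdict
    context_used: list    # knowledge items that informed the decision

def close_with_reasoning(alert_id, verdict, reasoning, context_used):
    record = InvestigationRecord(alert_id, verdict, reasoning, context_used)
    return json.dumps(asdict(record))   # persisted so knowledge compounds

entry = close_with_reasoning(
    "ALRT-4412", "benign",
    "Login came from a VPN egress IP covered by a documented contractor exception",
    ["ki-vpn-egress-ranges", "ki-contractor-policy"],
)
```

Because the reasoning and the context it relied on are stored alongside the verdict, the next similar alert can reuse both instead of starting from zero.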

The Three-Part Architecture Behind MASS

MASS operates through three layers working together. Each is essential. None is sufficient on its own.

1. Integration Layer

This layer connects to security tools, identity providers, HR systems, IT asset management, and collaboration platforms. It pulls data into a dedicated data lake per customer, tags it for relevance, and makes it accessible to AI agents with precision.

Daylight's implementation has key distinctions here: deep, bi-directional API-based integrations across source tools. 

Bi-directional means the platform reads alerts from customer tools and writes resolution data back to close them at source. The result is zero alert backlog in customer dashboards, something most MDR providers cannot deliver because they use read-only integrations.
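A toy sketch of the read/write pattern, with a stand-in `FakeToolAPI` in place of any real vendor API:

```python
class BiDirectionalConnector:
    """Sketch of a read/write integration: pull open alerts from the source
    tool, then write the resolution back so its own backlog is cleared."""
    def __init__(self, tool_api):
        self.api = tool_api

    def pull_open_alerts(self):
        return self.api.list_alerts(status="open")

    def close_at_source(self, alert_id, verdict, summary):
        # The write-back step is what read-only MDR integrations lack.
        self.api.update_alert(alert_id, status="closed",
                              resolution={"verdict": verdict, "summary": summary})

class FakeToolAPI:
    """Stand-in for a vendor API, for illustration only."""
    def __init__(self):
        self.alerts = {"A-1": {"status": "open"}}
    def list_alerts(self, status):
        return [aid for aid, a in self.alerts.items() if a["status"] == status]
    def update_alert(self, alert_id, status, resolution):
        self.alerts[alert_id].update(status=status, resolution=resolution)

api = FakeToolAPI()
connector = BiDirectionalConnector(api)
for aid in connector.pull_open_alerts():
    connector.close_at_source(aid, "benign", "Known admin maintenance window")
```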

New integrations are built in days, not months, because the integration framework itself is AI-assisted. When a customer adopts a new security tool, coverage follows quickly rather than waiting for a contract negotiation and an 8-month development cycle.

2. Agentic Platform

Multiple specialized AI agents are coordinated by a centralized orchestration layer. In Daylight, that platform is AIR, short for Agentic Investigation and Response. AIR constructs response logic from evidence by querying the business context, invoking specialized agents, and adapting based on findings.

AIR pulls business context from Daylight Knowledge (a structured, per-tenant context repository that continuously learns about each customer's environment) and from the data lake (which collects all data from the integrations). The platform also includes the ChatOps model for automated employee verification, proprietary detection rules that trigger investigations from log data, and a management console that offers reporting, analytics, and auditing capabilities.

3. Security Experts

In Daylight's model, security experts have 10+ years of incident response and threat hunting experience. They operate in a follow-the-sun model, not legacy shift-based coverage. This means that they’re distributed across the globe, working standard hours in their regions from Singapore to California.

Their role is fundamentally different from legacy MDR teams: they build context that makes AI accurate rather than reviewing every AI output.

How MASS Compares Across the Market

The managed security market breaks into four models: MASS, AI MDR services, AI SOC platforms, and traditional/legacy MDRs.

MASS Comparison Table
| | MASS (Daylight) | AI MDR Services | AI SOC Platforms | Traditional MDRs |
| --- | --- | --- | --- | --- |
| Deployment model | Platform + Managed Service | Platform + Managed Service | Platform | MDR Service |
| Tier 1 triage | Yes | Yes | Yes | Yes |
| Security team profile | IR and TH professionals | MDR analysts (varies by provider) | No service | MDR analysts |
| 24/7 expert support | Yes | Yes | No | Yes |
| SIEM agnostic | Yes | Yes | Varies | Varies |
| Coverage | Unlimited | Unlimited | Unlimited | Limited |
| Built-in detection | Yes | Partial | No | Partial |
| Institutional knowledge-driven operations | Yes | Varies | Varies | No |
| Agentic investigation | Yes | Yes | Yes | Varies |
| Agentic response | Yes | Varies | Varies | No |
| Hypothesis-based hunts | Yes | No | No | Manual |
| IOC hunts | Yes | Varies | Varies | Varies |
| Managed phishing | Yes | Varies | Varies | Varies |
| Managed DLP | Yes | No | No | Manual |
| Agentic data lake (light SIEM) | Yes | No | No | No |

The MASS row reflects a different operating model, one where investigation, context, and response run on a single architecture rather than stitched-together point solutions.

The Role of Security Experts in MASS

In MASS, security experts have a fundamentally different role with four distinct responsibilities.

All personalization lives in the system rather than with individual experts. Any expert can pick up an investigation with full relevant context already available, because context building and refinement are continuous.

1. Context Building and Scaling (Primary Role)

During onboarding, experts work directly with customer teams to extract undocumented business knowledge: policies, exceptions, and how the business actually operates. They tag data, build knowledge items, and ensure AI agents pull the right context. This continues post-onboarding as the business evolves, but at a lower volume.

This is the key distinction from competitors whose experts merely validate AI outputs. Daylight's experts build the context that makes the AI accurate in the first place.

2. Low-Confidence Verdict Review

Every investigation receives a confidence score. When confidence is not high, an expert reviews the case, analyzes what the AI got wrong, identifies missing context, and improves the system. 

For mature deployments (customers with four or more months on the platform), this happens rarely. The goal is not just closing the case. It is understanding why the AI was uncertain and fixing the root cause.
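As a sketch, the routing logic might look like the snippet below; the 0.85 threshold is an assumption for illustration, not a documented Daylight value:

```python
CONFIDENCE_THRESHOLD = 0.85   # assumed cutoff, for illustration only

def route_verdict(confidence: float) -> str:
    """High-confidence verdicts close automatically; anything below the bar
    goes to a senior expert, whose fix feeds back into the system's context."""
    return "auto_close" if confidence >= CONFIDENCE_THRESHOLD else "expert_review"
```

The point of the `expert_review` branch is not the individual case: each review identifies the missing context that caused the uncertainty, so the same gap does not recur.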

3. Incident Response Leadership

When a real threat is confirmed, Daylight's security experts lead strategic resolution. These are senior researchers whose incident response experience at government agencies brings context most internal security teams can't replicate.

4. Glass Box Brainstorming

Experts work alongside customer teams to improve detection rules, surface findings, and raise the overall security bar. In Daylight's Glass Box model, customers can inspect what data was used, what conclusion was reached, and why. 

This is the most appreciated element in customer case studies. Teams finally feel like they have a partner who shows them what great looks like, rather than a black box vendor protecting their switching costs.

How to Evaluate a MASS Provider

MASS is an emerging category. Formal taxonomy has not yet fully caught up to it, so evaluation needs to focus on architecture and experience rather than quadrant placement.

Use these questions to pressure-test a provider:

  • If your environment is primarily cloud with modern identity, prioritize deep integration coverage across identity, cloud, and SaaS platforms, not just endpoints.
  • If you are replacing a legacy MDR and have experienced the black box problem, require Glass Box transparency as non-negotiable. Ask to see how the system arrived at a specific verdict, how much explainability you get into the decisions the AI makes, and how much visibility you get into coverage and performance.
  • If your team is drowning in escalations, evaluate context architecture. All three context types (telemetry, organizational, and historical) need to be addressed, not just raw telemetry ingestion. This will impact the quality of your AI SOC investigations.
  • If your alert backlogs are overflowing, ensure every alert is investigated and then closed at the source tool.
  • If you need coverage beyond MDR, evaluate whether additional services run on the same architecture or are bolted-on acquisitions with separate workflows.
  • If you are considering an AI SOC tool instead, be clear about the tradeoff: these tools require skilled operators 24/7, assume no liability, and cannot handle user-reported phishing. Most importantly, your team will be tasked with building the context infrastructure the AI depends on.
  • If you cannot validate claims through a POC with realistic incidents, treat the evaluation with caution. Architecture decks do not substitute for seeing the investigation quality firsthand.

Without established analyst frameworks for MASS, architecture and demonstrated investigation quality during a proof-of-concept remain the most reliable indicators of provider maturity.

What Agentic Investigation Changes

The shift from legacy MDR to MASS is not about adding AI to the same model. The model itself was built for a different era. AI-native architecture, combined with the right human expertise, enables what the market has needed for years: a single provider delivering deep, specialized security services across the full SecOps surface with accountability, transparency, and investigation quality that compounds over time.

Instead of managing separate vendors for MDR, threat hunting, phishing, and DLP, each with its own onboarding, its own data silos, and its own gaps, you get one architecture that shares context across every service. Your team stops firefighting and starts doing strategic work.

Daylight Security built its platform from day one as a managed service with security experts embedded by design. The same architecture powers MDR, threat hunting, managed phishing, and DLP.

For a deeper look at how agentic investigation works in practice, explore the Daylight blog.

Frequently Asked Questions About Managed Agentic Security Services

How Does MASS Handle the Hallucination Problem That Affects AI SOC Tools?

MASS approaches the hallucination problem very differently from typical AI SOC tools. In our view, hallucinations aren’t some mysterious failure mode. They’re usually the result of poor system design. When you dump massive amounts of raw telemetry into a general-purpose model and expect it to “figure out” what matters, you’re essentially forcing it to guess. That’s where hallucinations come from.

MASS is built to prevent that. Instead of relying on a single, undifferentiated model, we structure and index the data so the system knows what’s relevant before reasoning even begins. That grounding layer, paired with specialized AI components rather than a one-size-fits-all model, ensures the output is based on the right context, not probabilistic guesswork.

So rather than trying to fix hallucinations after the fact, MASS is designed to avoid creating the conditions that cause them in the first place.

Every investigation gets a confidence score. When confidence is low, a human expert reviews. Curated organizational and historical context grounds AI reasoning in verified knowledge rather than raw data dumps.

How Does MASS Deliver Services Like Managed Phishing, DLP Investigation and Response Without Separate Tools?

Every service runs on the same foundation: deep integrations, an agentic platform, and security experts. Managed phishing uses the same context and agent orchestration as MDR, correlating signals across email, identity, endpoint, and cloud in one investigation. 

The difference is that the triggers for investigation are email alerts and user-reported suspicious emails, a gap most AI SOC tools cannot fill. For DLP, the trigger is your incoming alerts, but we perform a high-quality investigation on every alert and respond accordingly.

Do I Need to Replace My Existing Security Stack to Use MASS?

No. MASS integrates with your existing tools rather than replacing them. For example, Daylight connects to security tools, identity providers, HR systems, and collaboration platforms through bi-directional integrations. 

New integrations are built in days, not months. Your current stack becomes the data source; MASS provides the investigation, context, and response layer on top.
