
It is 2:14 AM on a Tuesday. A Carbon Black alert fires -- suspicious PowerShell execution on a domain controller. Within thirty seconds, it is joined by four more alerts: a network connection to a known C2 domain, a credential access event, lateral movement detected on a file server, and a data staging operation flagged by DLP.

In the old model, those alerts would sit in a queue. An analyst would get to them sometime around 8 AM, after scrolling past hundreds of other notifications -- most of them noise. Microsoft demo data triggers. Product conflicts between Carbon Black and Defender. NinjaRMM scripts doing exactly what they are supposed to do. By the time a human reaches the real alerts, six hours have passed. In cybersecurity, six hours is a lifetime.

That is not how our SOC works. At CloudRaider, every alert gets processed in seconds. The AI layer handles the first pass, the noise gets suppressed automatically, and the real threats get routed to a human analyst with full context attached. That 2 AM alert chain? An analyst was looking at it within four minutes, with enrichment data already pulled from three sources. The customer was notified before sunrise.

Here is how we got here, what we learned, and why we believe this is the future of security operations.

The Alert Volume Problem

The numbers in this industry are staggering. The average SOC processes somewhere between 10,000 and 50,000 alerts per month, depending on the size of the environment and how many security tools are deployed. Industry research consistently shows that more than 80% of those alerts are false positives or benign true positives that require no action. They are noise.

But every one of them has to be looked at. Or at least, that is the theory. In practice, alert fatigue sets in fast. Studies from the Ponemon Institute have found that SOC analysts start experiencing fatigue within the first few months on the job. After the hundredth alert that turns out to be nothing, the brain starts skipping. The thousand-and-first alert that actually matters looks identical to the thousand that did not. Analysts miss things. Not because they are bad at their jobs, but because the job as designed is inhuman.

The traditional MSSP response to this problem is to hire more bodies. Scale linearly. If you need to cover 24 hours, seven days a week, you need a minimum of 8 to 12 analysts just for shift coverage -- before you account for vacations, sick days, turnover, and training. At industry rates of $150,000 to $250,000 per fully-loaded analyst, you are looking at $1.2 million to $3 million per year in staffing costs alone. Then add the SIEM licensing, the EDR licensing, the threat intel feeds, the training programs, the management overhead.
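The staffing math is easy to sanity-check. A minimal sketch, using only the analyst counts and loaded costs quoted above (the function name and defaults are illustrative, not independent data):

```python
# Back-of-the-envelope check of 24x7 staffing costs, using the figures
# quoted above: 8-12 analysts at $150k-$250k fully loaded.
def staffing_cost_range(min_analysts=8, max_analysts=12,
                        min_loaded_cost=150_000, max_loaded_cost=250_000):
    """Return the (low, high) annual staffing cost in dollars."""
    return min_analysts * min_loaded_cost, max_analysts * max_loaded_cost

low, high = staffing_cost_range()
print(f"${low:,} to ${high:,} per year")  # $1,200,000 to $3,000,000 per year
```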

For most mid-market organizations, this is simply not feasible. They end up with one of two outcomes: they outsource to a traditional MSSP that treats them as one ticket in a queue of ten thousand, or they go without dedicated security operations entirely. Both options are bad.

We believed there was a third option.

How We Built an AI-First Triage System

Our approach starts from a simple premise: the first pass on every alert should not require a human. Not because humans are unnecessary, but because most alerts do not deserve human attention. The machine should handle the sorting. The human should handle the thinking.

We built our triage system around pattern recognition at scale. Over the past year and a half of SOC operations, we have developed and refined over 90 active false positive patterns. Each one was learned from a real investigation, documented with specific conditions, and deployed across every customer environment we protect. The categories mirror the noise described earlier: vendor demo data tripping phishing detections, product conflicts between overlapping endpoint agents, and RMM scripts doing exactly what they are scheduled to do.

The critical point is this: each pattern is learned once and applied across every customer we protect. When we identify a new false positive pattern in one environment, we evaluate it against all environments and deploy the suppression globally where it applies. This is one of the structural advantages of an MSSP model -- but only if the MSSP is actually investing in systematic pattern development rather than just staffing more seats.
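To make the "learn once, deploy everywhere" step concrete, here is a minimal sketch of a deterministic false positive pattern and its cross-environment evaluation. Everything here (the `Pattern` class, the condition fields, the pattern IDs) is an illustrative assumption, not our production schema:

```python
from dataclasses import dataclass, field

@dataclass
class Pattern:
    """A documented false positive signature, learned from one investigation."""
    pattern_id: str
    description: str
    conditions: dict = field(default_factory=dict)  # alert field -> expected value

    def matches(self, alert: dict) -> bool:
        # Deterministic check: every documented condition must hold.
        return all(alert.get(k) == v for k, v in self.conditions.items())

def deployable_to(pattern: Pattern, recent_alerts_by_customer: dict) -> list:
    """Customers whose recent alert stream contains instances this pattern would suppress."""
    return [customer for customer, alerts in recent_alerts_by_customer.items()
            if any(pattern.matches(a) for a in alerts)]

demo_fp = Pattern("FP-042", "Vendor demo data tripping detections",
                  conditions={"rule": "demo_payload", "severity": "low"})
fleet = {
    "customer_a": [{"rule": "demo_payload", "severity": "low"}],
    "customer_b": [{"rule": "lateral_move", "severity": "high"}],
}
print(deployable_to(demo_fp, fleet))  # ['customer_a']
```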

The result of this approach is substantial. Roughly 85% of all alerts that enter our pipeline are resolved without a human analyst ever seeing them. They are classified, documented, and closed with full audit trails. If a pattern ever needs to be revisited -- if something we classified as benign turns out to have a malicious variant -- we can pull every historical instance and re-evaluate.

The Human Layer: Where Judgment Matters

The 15% of alerts that make it through the AI layer are the ones that actually matter. But here is what makes our model different from a traditional SOC: these are not raw alerts. By the time a human analyst sees a flagged event, it has already been enriched with context from multiple sources.

The analyst does not start from "suspicious process detected." They start from "suspicious process detected on a domain controller belonging to Customer X, by a user account that has not logged in from this location before, connecting to an IP address that appears in three threat intelligence feeds, during non-business hours for this organization." The difference between those two starting points is the difference between fifteen minutes of investigation and four hours of investigation.
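That enrichment step can be sketched as a single join of the raw alert with context from several lookups. The inputs here (login history, threat feeds, business hours) are hypothetical stand-ins for the real integrations:

```python
# Join the raw alert with context from several sources before a human
# sees it. All inputs and field names are illustrative.
def enrich(alert: dict, login_history: dict, threat_feeds: list,
           business_hours: range) -> dict:
    user = alert["user"]
    return {
        **alert,
        "new_location": alert["geo"] not in login_history.get(user, set()),
        "ti_hits": sum(alert["src_ip"] in feed for feed in threat_feeds),
        "off_hours": alert["hour"] not in business_hours,
    }

raw = {"user": "svc_backup", "src_ip": "203.0.113.7", "geo": "RO",
       "hour": 2, "detail": "suspicious process on domain controller"}
ctx = enrich(raw,
             login_history={"svc_backup": {"US"}},
             threat_feeds=[{"203.0.113.7"}, {"203.0.113.7"}, {"203.0.113.7"}],
             business_hours=range(8, 18))
# ctx now encodes the analyst's starting point: a new location,
# three threat-feed hits, and an off-hours timestamp.
```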

Our analysts spend their time on judgment calls. Is this lateral movement from an admin doing their job, or from a compromised account? Does this data exfiltration pattern match a known business process, or is it actually suspicious? Should we escalate to the customer's IT director at 3 AM, or can this wait until the morning standup?

These are the questions that require human intelligence, contextual understanding, and professional judgment. No AI system today can reliably make these calls. And frankly, we would not trust one that claimed it could. The stakes are too high. A false negative on a genuine intrusion can cost an organization millions. A false positive escalation at 3 AM erodes trust and credibility. Getting this balance right requires experienced practitioners who know the difference.

The math works out to something remarkable. Each of our analysts is roughly 10x more effective than they would be in a traditional SOC, because they spend zero time on noise. Every alert they look at has a meaningful probability of being a real threat. Their hit rate on genuine security incidents is dramatically higher than industry averages, not because they are better analysts (though they are good), but because the system is feeding them signal instead of noise.

The Compounding Effect

Here is where the model gets genuinely interesting from a strategic perspective. Every investigation we run makes the system smarter.

When an analyst closes an alert as a false positive, they do not just click a button. They document what made it a false positive, what conditions were present, and whether those conditions would apply to other environments. If the answer is yes, a new pattern gets created and deployed. The next time any customer generates the same alert profile, it gets handled automatically.

When an analyst identifies a true positive, the same process runs in reverse. We document the indicators, the attack path, the detection logic that caught it, and the enrichment data that confirmed it. Those become detection engineering inputs. Over time, we are building a library of what real threats look like across diverse environments, and that library makes every subsequent investigation faster and more accurate.
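Both directions of that feedback loop can be sketched as a single closure step. The verdict names and artifact shapes below are illustrative assumptions, not our production format:

```python
# Sketch: every closed investigation emits a learning artifact, in one
# of two directions depending on the verdict.
def close_investigation(verdict: str, findings: dict) -> dict:
    if verdict == "false_positive":
        # Documented conditions become a candidate suppression pattern,
        # evaluated against all environments before global deployment.
        return {"artifact": "suppression_pattern",
                "conditions": findings["conditions"]}
    if verdict == "true_positive":
        # Indicators and attack path become detection engineering inputs.
        return {"artifact": "detection_input",
                "indicators": findings["indicators"],
                "attack_path": findings["attack_path"]}
    raise ValueError(f"unexpected verdict: {verdict}")
```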

A pattern found in one customer environment protects every customer we serve. This is the compounding effect, and it is the structural advantage of an AI-first MSSP that invests in systematic learning.

We track analyst questions -- the things they ask during investigations, the enrichment data they wish they had, the context that would have saved them time. Those questions become automation targets. If an analyst asks "has this user logged in from this location before?" more than twice, we build an automated enrichment that answers that question before they have to ask it.
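The "asked more than twice" rule is simple enough to sketch directly; the names here are illustrative, and the real system of course persists its counts:

```python
from collections import Counter

# Track analyst questions; once one has been asked more than twice,
# flag it as a candidate for automated enrichment.
question_log = Counter()

def record_question(question: str, threshold: int = 2) -> bool:
    """Log an analyst question; True means it crossed the automation threshold."""
    question_log[question] += 1
    return question_log[question] > threshold

q = "has this user logged in from this location before?"
record_question(q)         # first ask: not yet
record_question(q)         # second ask: not yet
print(record_question(q))  # third ask -> True: build the enrichment
```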

This is the compounding effect in action. Our false positive pattern library started with a handful of rules. It is now over 90 active patterns, and we add new ones weekly. Our enrichment pipeline started with basic IP and domain lookups. It now pulls user behavior baselines, geographic anomaly detection, and cross-customer threat correlation automatically.

Competitors who staff their SOC with junior analysts on rotating shifts start from zero with every customer engagement. Their institutional knowledge walks out the door with every resignation. Ours is encoded in patterns and automation that persist and improve over time. This is the moat.

The Numbers

We believe in transparency about operational performance. Here is where we stand today:

~10,000 alerts processed per month
~85% auto-resolved via AI patterns
<15 min mean time to human triage
3 team members (vs. 30+ in a traditional SOC)

Those numbers are not aspirational targets. They are current operational metrics from our production SOC. The auto-resolution rate fluctuates between 82% and 88% depending on the week, the customer mix, and whether any new security tools have been deployed that introduce unfamiliar alert patterns. When new tools come online, the AI layer needs time to learn what is normal before it can identify what is not. That learning period is typically two to four weeks, during which human analysts handle a higher percentage of alerts while new patterns are developed.

The team size deserves context. Three people doing the work of thirty-plus is not about working harder. It is about working on the right things. Our analysts do not spend time on shift handoffs, alert queue management, or copy-pasting IOCs between tools. The system handles all of that. They spend their time on the 15% of alerts where human judgment is irreplaceable, on detection engineering to improve the system, and on customer communication when it matters.

For our customers, the cost implications are significant. They get SOC coverage that would traditionally require a seven-figure annual investment, delivered at a fraction of that cost. Not because we have cut corners, but because we have eliminated the structural inefficiencies that make traditional SOCs so expensive.

What We Have Learned

Building this system has taught us several things that were not obvious at the start.

First, the AI layer is only as good as your pattern engineering. Machine learning models are useful for anomaly detection, but the real leverage comes from deterministic pattern matching on known false positive signatures. These are not sophisticated algorithms. They are carefully documented rules based on deep operational experience. The sophistication is in knowing what to look for, not in the technology that looks for it.
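A minimal sketch of what that first pass looks like: cheap deterministic signature checks run before anything else, and only the remainder is routed onward. The rule format and every name here are illustrative:

```python
# Split incoming alerts into auto-closed (with audit trail) and
# human-routed, using deterministic suppression rules.
def first_pass(alerts: list, suppress_rules: list):
    routed, closed = [], []
    for alert in alerts:
        match = next((r for r in suppress_rules
                      if all(alert.get(k) == v for k, v in r["conditions"].items())),
                     None)
        if match:
            closed.append({"alert": alert, "pattern_id": match["id"]})
        else:
            routed.append(alert)  # goes on to enrichment, then a human
    return routed, closed

rules = [{"id": "FP-007", "conditions": {"source": "rmm", "action": "scheduled_script"}}]
alerts = [{"source": "rmm", "action": "scheduled_script"},
          {"source": "edr", "action": "credential_access"}]
humans, auto = first_pass(alerts, rules)
print(len(humans), len(auto))  # 1 1
```

The leverage is exactly what the paragraph describes: the rule itself is trivial, and all the sophistication lives in knowing which conditions to document.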

Second, you cannot automate judgment. We tried. Early on, we experimented with automated severity scoring and auto-escalation for certain alert types. It did not work well. The edge cases are where the real threats hide, and edge cases require context that no automated system currently handles reliably. The human layer is not a legacy component we are trying to eliminate. It is a permanent and essential part of the architecture.

Third, cross-customer intelligence is the most underappreciated advantage of the MSSP model. When we see a new attack technique against one customer, every other customer benefits from that detection within hours, not months. This is something that in-house SOCs simply cannot replicate, no matter how well-funded they are. Breadth of visibility is a structural advantage.

Fourth, transparency builds trust. We publish our SOC performance metrics to customers. Not cherry-picked numbers -- the real operational data. When our auto-resolution rate dips because a new tool deployment introduced unfamiliar patterns, customers see that. When mean time to triage spikes during a holiday weekend, they see that too. This transparency has been one of the strongest drivers of customer retention. People trust what they can verify.


The Future

The future of security operations is not replacing analysts with AI. Every vendor pitch that promises fully autonomous security operations is selling a fantasy that the current state of AI cannot deliver and that the threat landscape does not allow.

The future is making every analyst dramatically more effective. It is eliminating the noise so they can focus on the signal. It is giving them context before they have to ask for it. It is encoding institutional knowledge into systems that persist and improve, rather than losing that knowledge every time an analyst leaves for a higher-paying job.

That is what a learning security platform does. It gets better over time, not just from better algorithms, but from the accumulated operational experience of every investigation, every false positive pattern, every detection improvement. The system we have today is meaningfully better than the system we had six months ago, and six months from now it will be better still.

We built CloudRaider because we believed there was a better way to do security operations. The numbers suggest we were right. But the real proof is not in our metrics. It is in the fact that our customers sleep better at night knowing someone is watching -- and that the someone is not drowning in noise.

See These Metrics in Real Time

We publish our SOC performance data to every customer. No cherry-picked numbers. No marketing metrics. The real operational data from our production SOC. Contact us for a walkthrough.
