By Bobby Filar, Machine Learning

Walk the floor at any major security conference right now and you'll hear the same word everywhere: autonomous. In 2024, AI was a "force multiplier." By 2025, agentic AI had become its own category. Today, vendors are pushing the idea that the "autonomous SOC" is the foundation of modern security.

The language is evolving fast. The engineering, however, is not keeping pace.

The truth is that most products being sold as "autonomous" still require a human in the loop to function. They sit in what you'd call the "assisted" or "guided" zone: useful, but nowhere near the full autonomy the pitch decks promise. This gap between marketing and capability is where organizations get burned. And the simple fact is that no AI should be given full autonomy right out of the gate, regardless of what a vendor promises.

To help you evaluate security AI tools, we've developed a trust-based framework called Trust, then autonomy: A new framework for evaluating security AI. It maps capability to autonomy so vendors have to prove where they really stand. For a peek at what's inside, keep reading.


Autonomy is a spectrum, not a switch

The first thing to understand is that autonomy isn't binary. It runs along a spectrum, with each level corresponding to how much trust you've established in the AI:

| Level | Name | What it means |
| --- | --- | --- |
| 1 | Assisted | AI surfaces information. Humans decide and act. |
| 2 | Guided | AI recommends with reasoning shown. Human approval required before any action. |
| 3 | Supervised | AI acts within defined boundaries. Humans review asynchronously. |
| 4 | Conditional | Autonomous in well-tested scenarios. Oversight at the boundary, not each action. |
| 5 | Full autonomy | Fully autonomous across well-understood scenarios. Reached only through demonstrated trust at every prior level. |

Here's the critical insight: full autonomy cannot be purchased. It can only be earned. The difference between Level 4 and Level 5 isn't a product feature; it's trust accumulated through rigorous testing over time.

And crucially, none of the levels are wrong. Not every organization wants Level 5. A well-designed Level 2 system will always outperform a sloppy Level 4. The goal isn't to chase the highest number; it's to find the right level for your environment and build toward it honestly.
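
To make the spectrum concrete, here's a minimal sketch of how the levels might be encoded in a deployment policy. The level names mirror the table above, but the class and function names are our own illustration, not any vendor's API.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    # Hypothetical encoding of the five-level spectrum described above.
    ASSISTED = 1     # AI surfaces information; humans decide and act
    GUIDED = 2       # AI recommends; human approval required before any action
    SUPERVISED = 3   # AI acts within boundaries; humans review asynchronously
    CONDITIONAL = 4  # autonomous in well-tested scenarios; oversight at the boundary
    FULL = 5         # fully autonomous across well-understood scenarios

def requires_preapproval(level: AutonomyLevel) -> bool:
    """At Levels 1-2, a human must approve before any action executes."""
    return level <= AutonomyLevel.GUIDED

def reviewed_asynchronously(level: AutonomyLevel) -> bool:
    """At Level 3 and above, the AI may act first; review happens after the fact."""
    return level >= AutonomyLevel.SUPERVISED
```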

Evaluate the vendor before you evaluate the product

Security AI vendors are selling autonomy beyond what they've actually engineered. Before you spend time on POCs and proof-of-value exercises, screen vendors against a few simple maturity signals. Lower-maturity vendors operate on intuition, lean on demo-driven results, and offer no version-to-version comparisons. More mature vendors have defined benchmarks and a reproducible methodology. The most mature vendors will show you a performance curve over 18 months, benchmarked against human analysts.

Here are a few questions from our guide that can reveal vendor maturity:

  • Where does your product sit on the autonomy scale and what happens at the boundary?
  • What does your AI get wrong, and how will I know when it happens in my environment?
  • Has your AI been red-teamed internally and externally?
  • Can I see your evaluation methodology, not just the results?

A longer list with explanations for each question is available in the full framework.

The trust path to autonomy

Once you've selected a vendor, trust isn't simply granted; it's built in three stages:

  1. Prove the AI works in your environment, not just in demos. Start at Level 2, move to Level 3, and don't rush. Non-production environments won't expose the variability that comes with real-world attacks.
  2. Once it's live, demand operational evidence, not just benchmarks. What's the catch rate? The false positive rate? And critically: why is it working? A trustworthy security AI is transparent (decisions are visible), explainable (every decision has justified reasoning), and auditable (every action is reconstructable). A black box can't earn Level 5 trust, no matter how impressive its efficacy numbers look.
  3. Keep humans in control while expanding autonomy incrementally. When first deployed, require human approval for every decision. As the AI proves itself, shift to asynchronous review, then to continuous evaluation with fewer guardrails, until enough evidence has accumulated to justify operating autonomously. The sketch after this list shows one way to gate that expansion on evidence.
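
As one way to picture steps 2 and 3 together, the sketch below computes catch rate and false positive rate from human-audited decisions and gates any expansion of autonomy on accumulated evidence. The field names, thresholds, and promotion rule are illustrative assumptions, not a prescribed standard.

```python
from dataclasses import dataclass

@dataclass
class AuditedDecision:
    ai_flagged: bool  # did the AI flag (or act on) the event?
    was_threat: bool  # ground truth established by human review

def catch_rate(decisions: list[AuditedDecision]) -> float:
    """Fraction of true threats the AI actually caught."""
    threats = [d for d in decisions if d.was_threat]
    return sum(d.ai_flagged for d in threats) / len(threats) if threats else 0.0

def false_positive_rate(decisions: list[AuditedDecision]) -> float:
    """Fraction of benign events the AI wrongly flagged."""
    benign = [d for d in decisions if not d.was_threat]
    return sum(d.ai_flagged for d in benign) / len(benign) if benign else 0.0

def ready_to_expand(decisions: list[AuditedDecision],
                    min_sample: int = 500,
                    min_catch: float = 0.98,
                    max_fp: float = 0.01) -> bool:
    """Only widen autonomy once enough reviewed evidence has accumulated
    and operational metrics clear the bar (thresholds are hypothetical)."""
    return (len(decisions) >= min_sample
            and catch_rate(decisions) >= min_catch
            and false_positive_rate(decisions) <= max_fp)
```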

This is an intentionally slow process. Security is inherently adversarial; attackers constantly adapt their tactics to your environment. Trust doesn't come from model complexity or billions of training data points. It comes from rigorous, ongoing evaluation against the threats you're actually facing.

Read the full framework

If you're evaluating security AI right now, we recommend reviewing the full framework. It digs deeper into the topics covered above, and more, so you can move through the security AI sales cycle with confidence. You'll even see how to evaluate Sublime's agents within the framework. Get the full guide.
