Chapter 3: Cybersecurity Operations
The State of AI Cybersecurity 2026 | Chapter 3

How AI is reshaping cybersecurity operations

Cybersecurity systems and AI agents are still getting to know each other. But they need to work together while they’re building a relationship.

Some security professionals lack a nuanced understanding of the types of AI used in their security stack, and without this knowledge they can’t make the best possible decisions about which vendors to work with or how to integrate new solutions into everyday operations.

Generative AI is now playing a role in 77% of security stacks


AI adoption confusion

Some respondents may be misidentifying the types of AI in use. Executives and managers, for instance, report above-average adoption of every type of AI, which may suggest they’re more susceptible to “AI-washing” and vendor hype than hands-on practitioners are.

Vendor AI hype

Nearly all cybersecurity vendors are boasting about the inclusion of AI in their products. What’s less clear is whether decision-makers have a deep enough understanding of all AI types to be able to evaluate vendor claims objectively.

Which types of AI are currently being used in your organization’s cybersecurity stack?

| Type of AI (2026) | Being used | Not being used | Not sure |
| Generative AI / Large Language Models (LLMs) | 77% | 19% | 4% |
| Supervised machine learning (training on selected data sets) | 67% | 29% | 4% |
| Agentic AI (e.g., for autonomous security operations) | 67% | 26% | 7% |
| Natural language processing (NLP) | 58% | 36% | 6% |
| Deep learning and neural networks | 49% | 44% | 7% |
| Generative adversarial networks (GANs) | 42% | 49% | 9% |
| Unsupervised machine learning (trained on unlabeled data) | 35% | 57% | 8% |
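The survey distinguishes supervised machine learning (trained on labeled examples) from unsupervised machine learning (trained on unlabeled data). A minimal sketch of that difference on toy failed-login counts; the function names, data, and thresholds are all invented for illustration and describe no vendor's actual product:

```python
from statistics import mean, stdev

def train_supervised(samples, labels):
    """Supervised: learn a decision threshold from labeled examples.
    Picks the midpoint between the highest benign count and the
    lowest malicious count."""
    benign = [s for s, lab in zip(samples, labels) if lab == "benign"]
    malicious = [s for s, lab in zip(samples, labels) if lab == "malicious"]
    threshold = (max(benign) + min(malicious)) / 2
    return lambda x: "malicious" if x > threshold else "benign"

def train_unsupervised(samples, z=3.0):
    """Unsupervised: no labels at all; flag statistical outliers
    more than `z` standard deviations above the mean."""
    mu, sigma = mean(samples), stdev(samples)
    return lambda x: "anomalous" if x > mu + z * sigma else "normal"

# Hourly failed-login counts (toy data)
labeled = [3, 5, 4, 6, 120, 140]
labels = ["benign"] * 4 + ["malicious"] * 2
clf = train_supervised(labeled, labels)       # needs curated labels

unlabeled = [3, 5, 4, 6, 2, 5, 4, 3]
detector = train_unsupervised(unlabeled)      # learns "normal" on its own
```

The practical difference for a buyer: a supervised tool is only as good as its training labels, while an unsupervised tool flags deviations from a learned baseline, labels or not.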

A matter of trust

Although security professionals agree that they have good visibility into the logic and reasoning processes their AI solutions use, they are nonetheless waiting for output explainability to improve before allowing AI to take autonomous action.


89%

say they have good visibility into the reasoning behind the outputs generated by AI solutions

92%

say they need to understand how a defensive AI tool makes decisions before they can trust it

88%

say they are confident they can explain AI outputs to business decision-makers and regulators

74%

say they are limiting the autonomy of AI taking action in their SOC until explainability improves
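“Explainability” in these responses amounts to an AI output shipping with the evidence behind it, so an analyst can justify the alert to decision-makers and regulators. A hypothetical sketch of what that structure might look like; the field names and signal scores are invented for this example:

```python
def explain_detection(score, contributions):
    """Package a detection score with per-signal contributions so a
    human can see why the alert fired, ranked by weight."""
    ranked = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
    return {
        "score": score,
        "top_signals": ranked[:3],
        "summary": ", ".join(f"{name} ({weight:.0%})" for name, weight in ranked[:3]),
    }

# Illustrative alert: the signals and weights are made up
alert = explain_detection(0.91, {
    "rare_external_destination": 0.45,
    "unusual_data_volume": 0.35,
    "off_hours_activity": 0.20,
})
```

An output shaped like this is what lets the 88% above claim they can explain AI decisions to non-practitioners.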

Only 14% of security professionals allow AI to take independent remediation actions in their SOC today (no human in the loop).

Level of AI autonomy in your SOC today

| Autonomy level | Full Audience | CISOs | IT Managers | Security Architects | Threat Analysts |
| High: AI can act independently, including taking some remediation actions | 14% | 18% | 15% | 9% | 9% |
| Medium: AI can take action with human approval (“human in the loop”) | 70% | 75% | 71% | 61% | 58% |
| Low: AI only recommends, does not act | 13% | 5% | 12% | 25% | 27% |
| None: AI plays no role in decision-making | 2% | 2% | 2% | 3% | 2% |
| Not sure | 1% | 0% | 1% | 3% | 4% |
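The autonomy tiers the survey asks about (independent action, human-approved action, recommend-only, no role) amount to a policy gate sitting between an AI recommendation and its execution. A minimal sketch of such a gate; the tier names mirror the survey, but the action model and outcome labels are invented for illustration:

```python
# Outcome for an AI-proposed remediation action under each autonomy tier.
# "execute": act without a human; "queue_for_approval": human in the loop;
# "recommend_only": surface the suggestion, never act; "drop": AI has no role.
POLICY = {
    "high": "execute",
    "medium": "queue_for_approval",
    "low": "recommend_only",
    "none": "drop",
}

def gate(autonomy_level, action):
    """Decide what happens to an AI-proposed SOC action under the
    configured autonomy level."""
    return {"action": action, "outcome": POLICY[autonomy_level]}
```

Under this framing, the 70% of organizations at the medium tier are routing every proposed action through `queue_for_approval` rather than letting it execute.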

A disconnect between leadership perception and operational reality

Executives believe their organizations allow AI to act autonomously more often than respondents in other roles do: 18% of CISOs say AI has high autonomy, versus 9% of security architects and threat analysts. This may be another manifestation of the cybersecurity confidence gap, with leaders believing their organizations have deployed the most advanced new technologies while practitioners hold a more realistic view.

85% of security professionals prefer to obtain new SOC capabilities in the form of a managed service


Preference for managed SOC services vs. in-house, by industry:

Education: 61%
Energy: 56%
Finance: 51%
Government: 49%
Healthcare: 39%
Manufacturing: 34%
Professional Services: 23%
Retail: 19%
Telecoms: 19%
Tech

Next: How AI is reshaping cybersecurity tools

Discover defenders’ attitudes towards the impact defensive AI is having on the security stack.
