The State of AI Cybersecurity 2026 | Chapter 1

How AI Is Reshaping the Attack Surface

Organizations have been racing to implement generative and agentic AI tools. These technologies promise to transform workflows, revolutionize productivity, and create new value streams.

They also open a whole new attack surface. Within the past year, AI systems have joined the ranks of threat actors’ favorite targets.

The growing concern around rapid enterprise AI adoption


44% are extremely or very concerned about the security implications of third-party LLMs (like Copilot or ChatGPT)

92% are concerned about the use of AI agents across the workforce and their impact on security

How concerned are you about the security implications of the following AI technologies in your organization’s computing environment?

Full Audience 1: Extremely 14%, Very 30%, Moderately 32%, Not 14%
Full Audience 2: Extremely 10%, Very 24%, Moderately 37%, Not 18%
Full Audience 3: Extremely 15%, Very 30%, Moderately 32%, Not 16%

Number of respondents extremely or very concerned about the security impact of different kinds of AI use, by sector:

Public sector: 87% third-party Generative AI or LLM tools; 71% in-house/proprietary LLMs or custom AI models; 87% AI agents (autonomous or semi-autonomous)
Financial Services: 75% third-party tools; 63% in-house/proprietary models; 70% AI agents
Healthcare: 78% third-party tools; 74% in-house/proprietary models; 77% AI agents
Manufacturing: 78% third-party tools; 71% in-house/proprietary models; 76% AI agents
Education: 71% third-party tools; 63% in-house/proprietary models; 66% AI agents

AI is being embedded in SaaS applications, autonomous agents are taking actions in real time, and low- and no-code tools are letting teams build their own AI automations across the enterprise. But while adoption and innovation accelerate, security is still catching up. AI systems behave in ways that traditional defenses were never designed to monitor, and security leaders are increasingly expressing concern:


Top concerns with AI usage in the computing environment:

Exposure of sensitive data: 61%
Potential violations of data security and privacy: 56%
Misuse/abuse of AI tools: 51%
Software vulnerabilities in AI tools: 49%
Unauthorized access/use of AI tools: 39%
Model integrity risks (e.g., poisoning, adversarial attacks): 34%
Supply chain risks in AI components: 23%
Shadow AI adoption by employees: 19%

More than 70% of survey participants in the US, UK, and Brazil indicated that they are concerned about sensitive data exposure, while those in Italy and UAE were more concerned about regulatory compliance violations.

AI policy maturity is stagnant at best


What’s your organization’s policy stance for the safe and secure use of AI?

2026: 37% already have a formal policy; 52% are currently discussing a formal policy; 8% have no plans to create a formal policy; 3% don’t know their organization’s stance.

2025: 45% already have a formal policy; 50% are currently discussing a formal policy; 3% have no plans to create a formal policy; 2% don’t know their organization’s stance and plans.

IAM and DLP are the primary controls for tackling Gen AI risk


Which security controls has your organization implemented to protect against the risks of Generative AI and LLMs?

Identity/role-based controls for AI tool access: 60%
Data loss prevention (DLP) tools: 54%
Model monitoring or drift detection: 42%
Limiting use to self-hosted tools/models: 41%
Blocking the use of public AI tools: 39%
Prompt filtering or other input/output controls: 34%
Enterprise browser: 27%
We currently have no controls in place: 4%
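To make the "prompt filtering" control concrete: at its simplest, an input filter screens prompts for obvious sensitive-data patterns before they reach a third-party LLM. The sketch below is a minimal, hypothetical illustration under assumed pattern rules; it is not from any product named in this report, and a production DLP integration would use far richer, context-aware detection.

```python
import re

# Hypothetical detection patterns for a minimal prompt input filter.
# Real-world rules would cover many more data types and use contextual checks.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),               # US SSN format
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),         # common secret-key shape
    "credit_card": re.compile(r"\b(?:\d[ -]?){12}\d{3,4}\b"),  # 13-16 digit card numbers
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, names of rules that matched the prompt)."""
    hits = [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]
    return (not hits, hits)
```

A gateway sitting between users and an external LLM could call `screen_prompt` on every request and block, redact, or log any prompt that trips a rule, which is one way the identity and DLP controls above can be combined in practice.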

Next: How AI is reshaping the threat landscape

Discover defenders’ attitudes towards the impact AI is actually having on the threats they face.

Secure your AI with Darktrace

Defend intelligently

with AI that learns from – and adapts to – your unique environment

Defend at speed

isolating and stopping attacks faster, without disrupting the business

Defend across boundaries

increasing visibility beyond business silos and tracking threats across domains

Defend with ease

automating and prioritizing the security tasks that matter most

See what Darktrace finds in your environment