Blog
/
Compliance
/
September 8, 2025

Cyber Assessment Framework v4.0 Raises the Bar: 6 Questions every security team should ask about their security posture

A practical guide to the key detection and response updates in CAF v4.0, including anomaly-based detection, machine-led threat hunting, and proactive security posture requirements.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Mariana Pereira
VP, Field CISO
CAF v4.0 cyber assessment frameworkDefault blog imageDefault blog imageDefault blog imageDefault blog imageDefault blog imageDefault blog image
08
Sep 2025

What is the Cyber Assessment Framework?

The Cyber Assessment Framework (CAF) acts as guide for organizations, specifically across essential services, critical national infrastructure and regulated sectors, across the UK for assessing, managing and improving their cybersecurity, cyber resilience and cyber risk profile.

The guidance in the Cyber Assessment Framework aligns with regulations such as The Network and Information Systems Regulations (NIS), The Network and Information Security Directive (NIS2) and the Cyber Security and Resilience Bill.

What’s new with the Cyber Assessment Framework 4.0?

On 6 August 2025, the UK’s National Cyber Security Centre (NCSC) released Cyber Assessment Framework 4.0 (CAF v4.0) a pivotal update that reflects the increasingly complex threat landscape and the regulatory need for organisations to respond in smarter, more adaptive ways.

The Cyber Assessment Framework v4.0 introduces significant shifts in expectations, including, but not limited to:

  • Understanding threats in terms of the capabilities, methods and techniques of threat actors and the importance of maintaining a proactive security posture (A2.b)
  • The use of secure software development principles and practices (A4.b)
  • Ensuring threat intelligence is understood and utilised - with a focus on anomaly-based detection (C1.f)
  • Performance of proactive threat hunting with automation where appropriate (C2.a)

This blog post will focus on these components of the framework. However, we encourage readers to get the full scope of the framework by visiting the NCSC website where they can access the full framework here.

In summary, the changes to the framework send a clear signal: the UK’s technical authority now expects organisations to move beyond static rule-based systems and embrace more dynamic, automated defences. For those responsible for securing critical national infrastructure and essential services, these updates are not simply technical preferences, but operational mandates.

At Darktrace, this evolution comes as no surprise. In fact, it reflects the approach we've championed since our inception.

Why Darktrace? Leading the way since 2013

Darktrace was built on the principle that detecting cyber threats in real time requires more than signatures, thresholds, or retrospective analysis. Instead, we pioneered a self-learning approach powered by artificial intelligence, that understands the unique “normal” for every environment and uses this baseline to spot subtle deviations indicative of emerging threats.

From the beginning, Darktrace has understood that rules and lists will never keep pace with adversaries. That’s why we’ve spent over a decade developing AI that doesn't just alert, it learns, reasons, explains, and acts.

With Cyber Assessment Framework v4.0, the bar has been raised to meet this new reality. For technical practitioners tasked with evaluating their organisation’s readiness, there are five essential questions that should guide the selection or validation of anomaly detection capabilities.

6 Questions you should ask about your security posture to align with CAF v4

1. Can your tools detect threats by identifying anomalies?

Cyber Assessment Framework v4.0 principle C1.f has been added in this version and requires that, “Threats to the operation of network and information systems, and corresponding user and system behaviour, are sufficiently understood. These are used to detect cyber security incidents.”

This marks a significant shift from traditional signature-based approaches, which rely on known Indicators of Compromise (IOCs) or predefined rules to an expectation that normal user and system behaviour is understood to an extent enabling abnormality detection.

Why this shift?

An overemphasis on threat intelligence alone leaves defenders exposed to novel threats or new variations of existing threats. By including reference to “understanding user and system behaviour” the framework is broadening the methods of threat detection beyond the use of threat intelligence and historical attack data.

While CAF v4.0 places emphasis on understanding normal user and system behaviour and using that understanding to detect abnormalities and as a result, adverse activity. There is a further expectation that threats are understood in terms of industry specific issues and that monitoring is continually updated  

Darktrace uses an anomaly-based approach to threat detection which involves establishing a dynamic baseline of “normal” for your environment, then flagging deviations from that baseline — even when there’s no known IoCs to match against. This allows security teams to surface previously unseen tactics, techniques, and procedures in real time, whether it’s:

  • An unexpected outbound connection pattern (e.g., DNS tunnelling);
  • A first-time API call between critical services;
  • Unusual calls between services; or  
  • Sensitive data moving outside normal channels or timeframes.

The requirement that organisations must be equipped to monitor their environment, create an understanding of normal and detect anomalous behaviour aligns closely with Darktrace’s capabilities.

2. Is threat hunting structured, repeatable, and improving over time?

CAF v4.0 introduces a new focus on structured threat hunting to detect adverse activity that may evade standard security controls or when such controls are not deployable.  

Principle C2.a outlines the need for documented, repeatable threat hunting processes and stresses the importance of recording and reviewing hunts to improve future effectiveness. This inclusion acknowledges that reactive threat hunting is not sufficient. Instead, the framework calls for:

  • Pre-determined and documented methods to ensure threat hunts can be deployed at the requisite frequency;
  • Threat hunts to be converted  into automated detection and alerting, where appropriate;  
  • Maintenance of threat hunt  records and post-hunt analysis to drive improvements in the process and overall security posture;
  • Regular review of the threat hunting process to align with updated risks;
  • Leveraging automation for improvement, where appropriate;
  • Focus on threat tactics, techniques and procedures, rather than one-off indicators of compromise.

Traditionally, playbook creation has been a manual process — static, slow to amend, and limited by human foresight. Even automated SOAR playbooks tend to be stock templates that can’t cover the full spectrum of threats or reflect the specific context of your organisation.

CAF v4.0 sets the expectation that organisations should maintain documented, structured approaches to incident response. But Darktrace / Incident Readiness & Recovery goes further. Its AI-generated playbooks are bespoke to your environment and updated dynamically in real time as incidents unfold. This continuous refresh of “New Events” means responders always have the latest view of what’s happening, along with an updated understanding of the AI's interpretation based on real-time contextual awareness, and recommended next steps tailored to the current stage of the attack.

The result is far beyond checkbox compliance: a living, adaptive response capability that reduces investigation time, speeds containment, and ensures actions are always proportionate to the evolving threat.

3. Do you have a proactive security posture?

Cyber Assessment Framework v4.0 does not want organisations to detect threats, it expects them to anticipate and reduce cyber risk before an incident ever occurs. That is s why principle A2.b calls for a security posture that moves from reactive detection to predictive, preventative action.

A proactive security posture focuses on reducing the ease of the most likely attack paths in advance and reducing the number of opportunities an adversary has to succeed in an attack.

To meet this requirement, organisations could benefit in looking for solutions that can:

  • Continuously map the assets and users most critical to operations;
  • Identify vulnerabilities and misconfigurations in real time;
  • Model likely adversary behaviours and attack paths using frameworks like MITRE ATT&CK; and  
  • Prioritise remediation actions that will have the highest impact on reducing overall risk.

When done well, this approach creates a real-time picture of your security posture, one that reflects the dynamic nature and ongoing evolution of both your internal environment and the evolving external threat landscape. This enables security teams to focus their time in other areas such as  validating resilience through exercises such as red teaming or forecasting.

4. Can your team/tools customize detection rules and enable autonomous responses?

CAF v4.0 places greater emphasis on reducing false positives and acting decisively when genuine threats are detected.  

The framework highlights the need for customisable detection rules and, where appropriate, autonomous response actions that can contain threats before they escalate:

The following new requirements are included:  

  • C1.c.: Alerts and detection rules should be adjustable to reduce false positives and optimise responses. Custom tooling and rules are used in conjunction with off the shelf tooling and rules;
  • C1.d: You investigate and triage alerts from all security tools and take action – allowing for improvement and prioritization of activities;
  • C1.e: Monitoring and detection personnel have sufficient understanding of operational context and deal with workload effectively as well as identifying areas for improvement (alert or triage fatigue is not present);
  • C2.a: Threat hunts should be turned into automated detections and alerting where appropriate and automation should be leveraged to improve threat hunting.

Tailored detection rules improve accuracy, while automation accelerates response, both of which help satisfy regulatory expectations. Cyber AI Analyst allows for AI investigation of alerts and can dramatically reduce the time a security team spends on alerts, reducing alert fatigue, allowing more time for strategic initiatives and identifying improvements.

5. Is your software secure and supported?  

CAF v4.0 introduced a new principle which requires software suppliers to leverage an established secure software development framework. Software suppliers must be able to demonstrate:  

  • A thorough understanding of the composition and provenance of software provided;  
  • That the software development lifecycle is informed by a detailed and up to date understanding of threat; and  
  • They can attest to the authenticity and integrity of the software, including updates and patches.  

Darktrace is committed to secure software development and all Darktrace products and internally developed systems are developed with secure engineering principles and security by design methodologies in place. Darktrace commits to the inclusion of security requirements at all stages of the software development lifecycle. Darktrace is ISO 27001, ISO 27018 and ISO 42001 Certified – demonstrating an ongoing commitment to information security, data privacy and artificial intelligence management and compliance, throughout the organisation.  

6. Is your incident response plan built on a true understanding of your environment and does it adapt to changes over time?

CAF v4.0 raises the bar for incident response by making it clear that a plan is only as strong as the context behind it. Your response plan must be shaped by a detailed, up-to-date understanding of your organisation’s specific network, systems, and operational priorities.

The framework’s updates emphasise that:

  • Plans must explicitly cover the network and information systems that underpin your essential functions because every environment has different dependencies, choke points, and critical assets.
  • They must be readily accessible even when IT systems are disrupted ensuring critical steps and contact paths aren’t lost during an incident.
  • They should be reviewed regularly to keep pace with evolving risks, infrastructure changes, and lessons learned from testing.

From government expectation to strategic advantage

Cyber Assessment Framework v4.0 signals a powerful shift in cybersecurity best practice. The newest version sets a higher standard for detection performance, risk management, threat hunting software development and proactive security posture.

For Darktrace, this is validation of the approach we have taken since the beginning: to go beyond rules and signatures to deliver proactive cyber resilience in real-time.

-----

Disclaimer:

This document has been prepared on behalf of Darktrace Holdings Limited. It is provided for information purposes only to provide prospective readers with general information about the Cyber Assessment Framework (CAF) in a cyber security context. It does not constitute legal, regulatory, financial or any other kind of professional advice and it has not been prepared with the reader and/or its specific organisation’s requirements in mind. Darktrace offers no warranties, guarantees, undertakings or other assurances (whether express or implied)  that: (i) this document or its content are  accurate or complete; (ii) the steps outlined herein will guarantee compliance with CAF; (iii) any purchase of Darktrace’s products or services will guarantee compliance with CAF; (iv) the steps outlined herein are appropriate for all customers. Neither the reader nor any third party is entitled to rely on the contents of this document when making/taking any decisions or actions to achieve compliance with CAF. To the fullest extent permitted by applicable law or regulation, Darktrace has no liability for any actions or decisions taken or not taken by the reader to implement any suggestions contained herein, or for any third party products, links or materials referenced. Nothing in this document negates the responsibility of the reader to seek independent legal or other advice should it wish to rely on any of the statements, suggestions, or content set out herein.  

The cybersecurity landscape evolves rapidly, and blog content may become outdated or superseded. We reserve the right to update, modify, or remove any content without notice.

Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Mariana Pereira
VP, Field CISO

More in this series

No items found.

Blog

/

AI

/

December 23, 2025

How to Secure AI in the Enterprise: A Practical Framework for Models, Data, and Agents

How to secure AI in the enterprise: A practical framework for models, data, and agents Default blog imageDefault blog image

Introduction: Why securing AI is now a security priority

AI adoption is at the forefront of the digital movement in businesses, outpacing the rate at which IT and security professionals can set up governance models and security parameters. Adopting Generative AI chatbots, autonomous agents, and AI-enabled SaaS tools promises efficiency and speed but also introduces new forms of risk that traditional security controls were never designed to manage. For many organizations, the first challenge is not whether AI should be secured, but what “securing AI” actually means in practice. Is it about protecting models? Governing data? Monitoring outputs? Or controlling how AI agents behave once deployed?  

While demand for adoption increases, securing AI use in the enterprise is still an abstract concept to many and operationalizing its use goes far beyond just having visibility. Practitioners need to also consider how AI is sourced, built, deployed, used, and governed across the enterprise.

The goal for security teams: Implement a clear, lifecycle-based AI security framework. This blog will demonstrate the variety of AI use cases that should be considered when developing this framework and how to frame this conversation to non-technical audiences.  

What does “securing AI” actually mean?

Securing AI is often framed as an extension of existing security disciplines. In practice, this assumption can cause confusion.

Traditional security functions are built around relatively stable boundaries. Application security focuses on code and logic. Cloud security governs infrastructure and identity. Data security protects sensitive information at rest and in motion. Identity security controls who can access systems and services. Each function has clear ownership, established tooling, and well-understood failure modes.

AI does not fit neatly into any of these categories. An AI system is simultaneously:

  • An application that executes logic
  • A data processor that ingests and generates sensitive information
  • A decision-making layer that influences or automates actions
  • A dynamic system that changes behavior over time

As a result, the security risks introduced by AI cuts across multiple domains at once. A single AI interaction can involve identity misuse, data exposure, application logic abuse, and supply chain risk all within the same workflow. This is where the traditional lines between security functions begin to blur.

For example, a malicious prompt submitted by an authorized user is not a classic identity breach, yet it can trigger data leakage or unauthorized actions. An AI agent calling an external service may appear as legitimate application behavior, even as it violates data sovereignty or compliance requirements. AI-generated code may pass standard development checks while introducing subtle vulnerabilities or compromised dependencies.

In each case, no single security team “owns” the risk outright.

This is why securing AI cannot be reduced to model safety, governance policies, or perimeter controls alone. It requires a shared security lens that spans development, operations, data handling, and user interaction. Securing AI means understanding not just whether systems are accessed securely, but whether they are being used, trained, and allowed to act in ways that align with business intent and risk tolerance.

At its core, securing AI is about restoring clarity in environments where accountability can quickly blur. It is about knowing where AI exists, how it behaves, what it is allowed to do, and how its decisions affect the wider enterprise. Without this clarity, AI becomes a force multiplier for both productivity and risk.

The five categories of AI risk in the enterprise

A practical way to approach AI security is to organize risk around how AI is used and where it operates. The framework below defines five categories of AI risk, each aligned to a distinct layer of the enterprise AI ecosystem  

How to Secure AI in the Enterprise:

  • Defending against misuse and emergent behaviors
  • Monitoring and controlling AI in operation
  • Protecting AI development and infrastructure
  • Securing the AI supply chain
  • Strengthening readiness and oversight

Together, these categories provide a structured lens for understanding how AI risk manifests and where security teams should focus their efforts.

1. Defending against misuse and emergent AI behaviors

Generative AI systems and agents can be manipulated in ways that bypass traditional controls. Even when access is authorized, AI can be misused, repurposed, or influenced through carefully crafted prompts and interactions.

Key risks include:

  • Malicious prompt injection designed to coerce unwanted actions
  • Unauthorized or unintended use cases that bypass guardrails
  • Exposure of sensitive data through prompt histories
  • Hallucinated or malicious outputs that influence human behavior

Unlike traditional applications, AI systems can produce harmful outcomes without being explicitly compromised. Securing this layer requires monitoring intent, not just access. Security teams need visibility into how AI systems are being prompted, how outputs are consumed, and whether usage aligns with approved business purposes

2. Monitoring and controlling AI in operation

Once deployed, AI agents operate at machine speed and scale. They can initiate actions, exchange data, and interact with other systems with little human oversight. This makes runtime visibility critical.

Operational AI risks include:

  • Agents using permissions in unintended ways
  • Uncontrolled outbound connections to external services or agents
  • Loss of forensic visibility into ephemeral AI components
  • Non-compliant data transmission across jurisdictions

Securing AI in operation requires real-time monitoring of agent behavior, centralized control points such as AI gateways, and the ability to capture agent state for investigation. Without these capabilities, security teams may be blind to how AI systems behave once live, particularly in cloud-native or regulated environments.

3. Protecting AI development and infrastructure

Many AI risks are introduced long before deployment. Development pipelines, infrastructure configurations, and architectural decisions all influence the security posture of AI systems.

Common risks include:

  • Misconfigured permissions and guardrails
  • Insecure or overly complex agent architectures
  • Infrastructure-as-Code introducing silent misconfigurations
  • Vulnerabilities in AI-generated code and dependencies

AI-generated code adds a new dimension of risk, as hallucinated packages or insecure logic may be harder to detect and debug than human-written code. Securing AI development means applying security controls early, including static analysis, architectural review, and continuous configuration monitoring throughout the build process.

4. Securing the AI supply chain

AI supply chains are often opaque. Models, datasets, dependencies, and services may come from third parties with varying levels of transparency and assurance.

Key supply chain risks include:

  • Shadow AI tools used outside approved controls
  • External AI agents granted internal access
  • Suppliers applying AI to enterprise data without disclosure
  • Compromised models, training data, or dependencies

Securing the AI supply chain requires discovering where AI is used, validating the provenance and licensing of models and data, and assessing how suppliers process and protect enterprise information. Without this visibility, organizations risk data leakage, regulatory exposure, and downstream compromise through trusted integrations.

5. Strengthening readiness and oversight

Even with strong technical controls, AI security fails without governance, testing, and trained teams. AI introduces new incident scenarios that many security teams are not yet prepared to handle.

Oversight risks include:

  • Lack of meaningful AI risk reporting
  • Untested AI systems in production
  • Security teams untrained in AI-specific threats

Organizations need AI-aware reporting, red and purple team exercises that include AI systems, and ongoing training to build operational readiness. These capabilities ensure AI risks are understood, tested, and continuously improved, rather than discovered during a live incident.

Reframing AI security for the boardroom

AI security is not just a technical issue. It is a trust, accountability, and resilience issue. Boards want assurance that AI-driven decisions are reliable, explainable, and protected from tampering.

Effective communication with leadership focuses on:

  • Trust: confidence in data integrity, model behavior, and outputs
  • Accountability: clear ownership across teams and suppliers
  • Resilience: the ability to operate, audit, and adapt under attack or regulation

Mapping AI security efforts to recognized frameworks such as ISO/IEC 42001 and the NIST AI Risk Management Framework helps demonstrate maturity and aligns AI security with broader governance objectives.

Conclusion: Securing AI is a lifecycle challenge

The same characteristics that make AI transformative also make it difficult to secure. AI systems blur traditional boundaries between software, users, and decision-making, expanding the attack surface in subtle but significant ways.

Securing AI requires restoring clarity. Knowing where AI exists, how it behaves, who controls it, and how it is governed. A framework-based approach allows organizations to innovate with AI while maintaining trust, accountability, and control.

The journey to secure AI is ongoing, but it begins with understanding the risks across the full AI lifecycle and building security practices that evolve alongside the technology.

Continue reading
About the author
Brittany Woodsmall
Product Marketing Manager, AI & Attack Surface

Blog

/

AI

/

December 22, 2025

The Year Ahead: AI Cybersecurity Trends to Watch in 2026

2026 cyber threat trendsDefault blog imageDefault blog image

Introduction: 2026 cyber trends

Each year, we ask some of our experts to step back from the day-to-day pace of incidents, vulnerabilities, and headlines to reflect on the forces reshaping the threat landscape. The goal is simple:  to identify and share the trends we believe will matter most in the year ahead, based on the real-world challenges our customers are facing, the technology and issues our R&D teams are exploring, and our observations of how both attackers and defenders are adapting.  

In 2025, we saw generative AI and early agentic systems moving from limited pilots into more widespread adoption across enterprises. Generative AI tools became embedded in SaaS products and enterprise workflows we rely on every day, AI agents gained more access to data and systems, and we saw glimpses of how threat actors can manipulate commercial AI models for attacks. At the same time, expanding cloud and SaaS ecosystems and the increasing use of automation continued to stretch traditional security assumptions.

Looking ahead to 2026, we’re already seeing the security of AI models, agents, and the identities that power them becoming a key point of tension – and opportunity -- for both attackers and defenders. Long-standing challenges and risks such as identity, trust, data integrity, and human decision-making will not disappear, but AI and automation will increase the speed and scale of the cyber risk.  

Here's what a few of our experts believe are the trends that will shape this next phase of cybersecurity, and the realities organizations should prepare for.  

Agentic AI is the next big insider risk

In 2026, organizations may experience their first large-scale security incidents driven by agentic AI behaving in unintended ways—not necessarily due to malicious intent, but because of how easily agents can be influenced. AI agents are designed to be helpful, lack judgment, and operate without understanding context or consequence. This makes them highly efficient—and highly pliable. Unlike human insiders, agentic systems do not need to be socially engineered, coerced, or bribed. They only need to be prompted creatively, misinterpret legitimate prompts, or be vulnerable to indirect prompt injection. Without strong controls around access, scope, and behavior, agents may over-share data, misroute communications, or take actions that introduce real business risk. Securing AI adoption will increasingly depend on treating agents as first-class identities—monitored, constrained, and evaluated based on behavior, not intent.

-- Nicole Carignan, SVP of Security & AI Strategy

Prompt Injection moves from theory to front-page breach

We’ll see the first major story of an indirect prompt injection attack against companies adopting AI either through an accessible chatbot or an agentic system ingesting a hidden prompt. In practice, this may result in unauthorized data exposure or unintended malicious behavior by AI systems, such as over-sharing information, misrouting communications, or acting outside their intended scope. Recent attention on this risk—particularly in the context of AI-powered browsers and additional safety layers being introduced to guide agent behavior—highlights a growing industry awareness of the challenge.  

-- Collin Chapleau, Senior Director of Security & AI Strategy

Humans are even more outpaced, but not broken

When it comes to cyber, people aren’t failing; the system is moving faster than they can. Attackers exploit the gap between human judgment and machine-speed operations. The rise of deepfakes and emotion-driven scams that we’ve seen in the last few years reduce our ability to spot the familiar human cues we’ve been taught to look out for. Fraud now spans social platforms, encrypted chat, and instant payments in minutes. Expecting humans to be the last line of defense is unrealistic.

Defense must assume human fallibility and design accordingly. Automated provenance checks, cryptographic signatures, and dual-channel verification should precede human judgment. Training still matters, but it cannot close the gap alone. In the year ahead, we need to see more of a focus on partnership: systems that absorb risk so humans make decisions in context, not under pressure.

-- Margaret Cunningham, VP of Security & AI Strategy

AI removes the attacker bottleneck—smaller organizations feel the impact

One factor that is currently preventing more companies from breaches is a bottleneck on the attacker side: there’s not enough human hacker capital. The number of human hands on a keyboard is a rate-determining factor in the threat landscape. Further advancements of AI and automation will continue to open that bottleneck. We are already seeing that. The ostrich approach of hoping that one’s own company is too obscure to be noticed by attackers will no longer work as attacker capacity increases.  

-- Max Heinemeyer, Global Field CISO

SaaS platforms become the preferred supply chain target

Attackers have learned a simple lesson: compromising SaaS platforms can have big payouts. As a result, we’ll see more targeting of commercial off-the-shelf SaaS providers, which are often highly trusted and deeply integrated into business environments. Some of these attacks may involve software with unfamiliar brand names, but their downstream impact will be significant. In 2026, expect more breaches where attackers leverage valid credentials, APIs, or misconfigurations to bypass traditional defenses entirely.

-- Nathaniel Jones, VP of Security & AI Strategy

Increased commercialization of generative AI and AI assistants in cyber attacks

One trend we’re watching closely for 2026 is the commercialization of AI-assisted cybercrime. For example, cybercrime prompt playbooks sold on the dark web—essentially copy-and-paste frameworks that show attackers how to misuse or jailbreak AI models. It’s an evolution of what we saw in 2025, where AI lowered the barrier to entry. In 2026, those techniques become productized, scalable, and much easier to reuse.  

-- Toby Lewis, Global Head of Threat Analysis

Conclusion

Taken together, these trends underscore that the core challenges of cybersecurity are not changing dramatically -- identity, trust, data, and human decision-making still sit at the core of most incidents. What is changing quickly is the environment in which these challenges play out. AI and automation are accelerating everything: how quickly attackers can scale, how widely risk is distributed, and how easily unintended behavior can create real impact. And as technology like cloud services and SaaS platforms become even more deeply integrated into businesses, the potential attack surface continues to expand.  

Predictions are not guarantees. But the patterns emerging today suggest that 2026 will be a year where securing AI becomes inseparable from securing the business itself. The organizations that prepare now—by understanding how AI is used, how it behaves, and how it can be misused—will be best positioned to adopt these technologies with confidence in the year ahead.

Learn more about how to secure AI adoption in the enterprise without compromise by registering to join our live launch webinar on February 3, 2026.  

Continue reading
About the author
The Darktrace Community
Your data. Our AI.
Elevate your network security with Darktrace AI