September 18, 2024

FortiClient EMS Exploited: Attack Chain & Post Exploitation Tactics

Read about the methods used to exploit FortiClient EMS and the critical post-exploitation tactics that affect cybersecurity defenses.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Emily Megan Lim
Cyber Analyst

Cyber attacks on internet-facing systems

In the first half of 2024, the Darktrace Threat Research team observed multiple campaigns in which threat actors targeted vulnerabilities in internet-facing systems, including Ivanti Connect Secure and Policy Secure (CS/PS) appliances, Palo Alto Networks firewalls, and JetBrains TeamCity On-Premises servers.

These systems, which are exposed to the internet, are often targeted by threat actors to gain initial access to a network. They are constantly being scanned for vulnerabilities, known or unknown, by opportunistic actors hoping to exploit gaps in security. Unfortunately, this exposure remains a significant blind spot for many security teams, as monitoring edge infrastructure can be particularly challenging due to its distributed nature and the sheer volume of external traffic it processes.

In this blog, we discuss a vulnerability that was exploited in Fortinet’s FortiClient Endpoint Management Server (EMS) and the post-exploitation activity that Darktrace observed across multiple customer environments.

What is FortiClient EMS?

FortiClient is typically used for endpoint security, providing features such as virtual private network (VPN) access, malware protection, and web filtering. FortiClient EMS is a centralized platform that administrators use to enforce security policies and manage endpoint compliance. Because managed endpoints are remote and distributed across many locations, the EMS needs to be accessible over the internet.

However, being exposed to the internet presents significant security risks, and exploiting vulnerabilities in the system may give an attacker unauthorized access. From there, they could conduct further malicious activities such as reconnaissance, establishing command-and-control (C2), moving laterally across the network, and accessing sensitive data.

CVE-2023-48788

CVE-2023-48788 is a critical SQL injection vulnerability in FortiClient EMS that can allow an attacker to gain unauthorized access to the system. It stems from improper neutralization of special elements used in SQL commands, which allows attackers to exploit the system through specially crafted requests, potentially leading to Remote Code Execution (RCE) [1]. The vulnerability was assigned a CVSS score of 9.8 and can be exploited without authentication.

The affected versions of FortiClient EMS include:

  • FortiClient EMS 7.2.0 to 7.2.2 (fixed in 7.2.3)
  • FortiClient EMS 7.0.1 to 7.0.10 (fixed in 7.0.11)
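
Where automated asset inventories are available, these ranges can be checked programmatically. The snippet below is a minimal, illustrative version comparison, assuming simple x.y.z version strings; it is a sketch, not an official Fortinet utility.

```python
# Minimal sketch: check a FortiClient EMS version string against the
# vulnerable ranges for CVE-2023-48788. Illustrative only.
VULNERABLE_RANGES = [
    ((7, 2, 0), (7, 2, 2)),   # fixed in 7.2.3
    ((7, 0, 1), (7, 0, 10)),  # fixed in 7.0.11
]

def is_vulnerable(version: str) -> bool:
    """Return True if an x.y.z version string falls in a vulnerable range."""
    parsed = tuple(int(part) for part in version.split("."))
    return any(low <= parsed <= high for low, high in VULNERABLE_RANGES)

print(is_vulnerable("7.2.1"))   # True  -> upgrade to 7.2.3 or later
print(is_vulnerable("7.0.11"))  # False -> already patched
```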

The vulnerability was publicly disclosed on March 12, 2024, and Horizon3.ai released a proof-of-concept exploit on March 21 [2]. Beginning on March 24, less than two weeks after the initial disclosure, Darktrace observed at least six instances in which the FortiClient EMS vulnerability had likely been exploited on customer networks. Likely exploited devices in multiple customer environments were observed performing anomalous activities, including the installation of Remote Monitoring and Management (RMM) tools, activity that other security vendors also reported around the same time [3].

Darktrace’s Coverage

Initial Access

To understand how the vulnerability can be exploited to gain initial access, it helps to first outline the relevant components of FortiClient EMS:

  • FmcDaemon.exe handles communication between the EMS server and enrolled endpoint clients. By default, it listens on port 8013 for incoming client connections.
  • Incoming requests are passed to FCTDas.exe, which translates requests from other server components into SQL queries and interacts with the Microsoft SQL Server database.

An attacker can therefore perform SQL injection by crafting a malicious payload and sending it to the server over port 8013. To achieve RCE, the attacker can send further SQL statements that enable and invoke the xp_cmdshell functionality of the Microsoft SQL Server [2].
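
Because the published exploit chain relies on enabling xp_cmdshell, defenders with access to the EMS database server can audit its state. The following is a hedged sketch assuming the pyodbc package and a hypothetical server name; the sys.configurations query itself is standard T-SQL.

```python
# Sketch: audit whether xp_cmdshell is enabled on the Microsoft SQL Server
# backing FortiClient EMS. The connection string is a placeholder; adjust
# the driver, server name, and authentication for your environment.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=ems-sql.example.local;"  # hypothetical EMS database server
    "DATABASE=master;Trusted_Connection=yes;"
)
row = conn.cursor().execute(
    "SELECT value_in_use FROM sys.configurations WHERE name = 'xp_cmdshell';"
).fetchone()

if row and row[0] == 1:
    print("WARNING: xp_cmdshell is enabled -- investigate when and by whom.")
else:
    print("xp_cmdshell is disabled (the expected state).")
```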

Shortly before post-exploitation activity began, Darktrace observed incoming connections over port 8013 to some of the FortiClient EMS devices from the external IPs 77.246.103[.]110, 88.130.150[.]101, and 45.155.141[.]219, likely representing threat actors sending an SQL injection payload to the EMS devices to validate the exploit.
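
For teams with network telemetry, connections like these can be hunted retrospectively. Below is a minimal sketch assuming a Zeek conn.log in JSON format and a hypothetical internal EMS address; the field names follow Zeek's defaults.

```python
# Sketch: flag external sources connecting to the FortiClient EMS listener
# on TCP 8013 in a Zeek conn.log (JSON, one record per line).
import json
from ipaddress import ip_address

EMS_SERVER = "10.0.0.5"  # hypothetical internal EMS address

with open("conn.log") as f:
    for line in f:
        rec = json.loads(line)
        if (rec["id.resp_h"] == EMS_SERVER
                and rec["id.resp_p"] == 8013
                and ip_address(rec["id.orig_h"]).is_global):
            print(f"{rec['ts']} {rec['id.orig_h']} -> {EMS_SERVER}:8013 "
                  f"({rec.get('orig_bytes') or 0} bytes sent)")
```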

Establish C2

After the vulnerability was exploited to gain access to an EMS device on one customer network, two additional devices were observed making HTTP POST requests to 77.246.103[.]110 and 212.113.106[.]100 with a new PowerShell user agent.

Interestingly, the IP 212.113.106[.]100 has been observed in various other campaigns where threat actors have also targeted internet-facing systems and exploited other vulnerabilities. Open-source intelligence (OSINT) suggests that this indicator of compromise (IoC) is related to the Sliver C2 framework and has been used by threat actors such as APT28 (Fancy Bear) and APT29 (Cozy Bear) [4].
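
The combination seen here, HTTP POSTs from a PowerShell user agent straight to a bare IP address, is rare in normal traffic and simple to hunt for. The sketch below assumes a CSV-exported proxy or HTTP log with method, host, and user_agent columns; the column names are illustrative.

```python
# Sketch: flag HTTP POSTs that pair a PowerShell user agent with a bare-IP
# destination. PowerShell's default user agent contains "WindowsPowerShell".
import csv
import re

IP_ONLY = re.compile(r"^\d{1,3}(\.\d{1,3}){3}$")

with open("proxy.csv", newline="") as f:
    for row in csv.DictReader(f):
        if (row["method"] == "POST"
                and "WindowsPowerShell" in row["user_agent"]
                and IP_ONLY.match(row["host"])):
            print(f"Suspicious POST to {row['host']} (UA: {row['user_agent']})")
```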

Unusual file downloads were also observed on four devices, including:

  • “SETUP.MSI” from 212.32.243[.]25 and 89.149.200[.]91 with a cURL user agent
  • “setup.msi” from 212.113.106[.]100 with a Windows Installer user agent
  • “run.zip” from 95.181.173[.]172 with a PowerShell user agent

The .msi files typically contained the RMM tools Atera or ScreenConnect [5]. By using RMM tools for C2, attackers can leverage the tools' wide range of functionality, for example to transfer files, without needing to install additional tooling. And because RMM tools are designed to maintain stable connections to remote systems, they can also give attackers persistent access to compromised systems.

A scan of the endpoint 95.181.173[.]172 shows various other files such as “RunSchedulerTask.ps1” and “anydesk.exe” being hosted.

Figure 1: Screenshot of the endpoint 95.181.173[.]172 hosting various files [6].

Shortly after these unusual file downloads, many of the devices were also observed using RMM tools such as Splashtop, Atera, and AnyDesk, connecting to the following endpoints:

  • *[.]relay.splashtop[.]com
  • agent-api[.]atera[.]com
  • api[.]playanext[.]com with user agent AnyDesk/8.0.9

RMM tools have a wide range of legitimate capabilities that allow IT administrators to remotely manage endpoints. However, they can also be repurposed for malicious activities, allowing threat actors to maintain persistent access to systems, execute commands remotely, and even exfiltrate data. As the use of RMM tools can be legitimate, they offer threat actors a way to perform malicious activities while blending into normal business operations, which could evade detection by human analysts or traditional security tools.
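
One practical control is to treat RMM as an allowlist problem: flag any remote-access domain that does not belong to a product the organization has sanctioned. A minimal sketch, with the approved list and observed domains as assumptions:

```python
# Sketch: compare observed remote-access domains against the RMM products an
# organization has actually sanctioned. Both lists below are illustrative.
from fnmatch import fnmatch

APPROVED_RMM = ["*.relay.splashtop.com"]  # e.g. only Splashtop is sanctioned
OBSERVED = [
    "sg.relay.splashtop.com",
    "agent-api.atera.com",
    "api.playanext.com",
]

for domain in OBSERVED:
    if not any(fnmatch(domain, pattern) for pattern in APPROVED_RMM):
        print(f"Unsanctioned remote-access domain observed: {domain}")
```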

One device was also seen making repeated SSL connections to "azure-documents[.]com" (104.168.140[.]84), an endpoint presenting a self-signed certificate, as well as further HTTP POSTs to "serv1[.]api[.]9hits[.]com/we/session" (128.199.207[.]131). Although the contents of these connections were encrypted, they likely represented additional C2 infrastructure alongside the RMM tools. Attackers may also use self-signed certificates to encrypt their C2 communications.
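
A quick way to triage such an endpoint is to test whether its certificate is self-signed, i.e. the issuer and subject are identical. A sketch using Python's ssl module and the cryptography package (an assumed dependency):

```python
# Sketch: fetch a server certificate and test whether it is self-signed
# (issuer == subject). Requires the 'cryptography' package.
import ssl
from cryptography import x509

def is_self_signed(host: str, port: int = 443) -> bool:
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode())
    return cert.issuer == cert.subject

# Example usage (avoid probing live attacker infrastructure directly):
# print(is_self_signed("self-signed.badssl.com"))
```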

Internal Reconnaissance

Following the exploit, two of the compromised devices began conducting internal reconnaissance. The figure below shows a spike in the number of internal connections made by one of the compromised devices in the customer's environment, a pattern that typically indicates a network scan.

Figure 2: Advanced Search results of internal connections made by an affected device.

One of the devices was also seen using reconnaissance tools such as Advanced Port Scanner ("www[.]advanced-port-scanner[.]com") and Nmap to conduct scanning. Nmap is a network scanning tool commonly used by security teams for legitimate purposes such as network diagnostics and vulnerability scanning, but it can also be abused by threat actors for network reconnaissance, a technique known as Living off the Land (LotL). This not only reduces the need for custom or external tools but also lowers the risk of exposure, as the use of a legitimate tool on the network is unlikely to raise suspicion.
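
The same spike is straightforward to reproduce from raw network logs by measuring each device's internal fan-out. A sketch over a Zeek conn.log (JSON), with the threshold as an assumption to tune per environment:

```python
# Sketch: count distinct internal destinations per source in Zeek conn.log;
# an abrupt jump in fan-out is a common network-scan signal.
import json
from collections import defaultdict
from ipaddress import ip_address

FANOUT_THRESHOLD = 100  # distinct internal hosts; tune to your baseline

targets = defaultdict(set)
with open("conn.log") as f:
    for line in f:
        rec = json.loads(line)
        if ip_address(rec["id.resp_h"]).is_private:
            targets[rec["id.orig_h"]].add(rec["id.resp_h"])

for src, hosts in targets.items():
    if len(hosts) > FANOUT_THRESHOLD:
        print(f"Possible scan: {src} contacted {len(hosts)} internal hosts")
```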

Privilege Escalation

In another affected customer network, the threat actor was also observed attempting to escalate privileges: a FortiClient EMS device generated an unusually large number of SMB/NTLM login failures, indicative of brute-force activity. The attempt was successful, and the device was later seen authenticating with the credential "administrator".

Figure 3: Advanced Search results of NTLM (top) and SMB (bottom) login failures.
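
The telltale sequence, a burst of failures followed by a success for the same account and source, can be expressed as a simple stateful check. A sketch assuming authentication events have already been parsed into dicts (Windows Event IDs 4625 for failure, 4624 for success); the threshold is illustrative.

```python
# Sketch: flag sources that succeed in authenticating after a burst of
# failures for the same account, the brute-force pattern described above.
from collections import defaultdict

FAILURE_THRESHOLD = 50  # illustrative; tune to your environment

def find_brute_force(events):
    """events: iterable of {'id', 'source_ip', 'account'}, sorted by time."""
    failures = defaultdict(int)
    for ev in events:
        key = (ev["source_ip"], ev["account"])
        if ev["id"] == 4625:      # failed logon
            failures[key] += 1
        elif ev["id"] == 4624:    # successful logon
            if failures[key] >= FAILURE_THRESHOLD:
                yield key, failures[key]
            failures[key] = 0

for (src, account), count in find_brute_force(events=[]):  # supply real events
    print(f"{src} authenticated as '{account}' after {count} failures")
```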

Lateral Movement

After escalating privileges, the threat actor attempted to move laterally through the same network. One device was seen transferring the file "PSEXESVC.exe" to another device over SMB. This file is associated with PsExec, a command-line tool that allows commands to be executed remotely on other systems.

The threat actor was also observed leveraging the DCE-RPC protocol to move laterally within the network. Devices were seen with activity such as an increase in new RPC services, unusual requests to the SVCCTL endpoint, and the execution of WMI commands. The DCE-RPC protocol is typically used to facilitate communication between services on different systems and can allow one system to request services or execute commands on another.

These are further examples of LotL techniques used by threat actors exploiting CVE-2023-48788, as PsExec and the DCE-RPC protocol are often also used for legitimate administrative operations.
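
Both behaviors leave traces in standard network telemetry. The sketch below shows two hedged hunts over Zeek logs, one for PsExec's service binary appearing in SMB file transfers and one for service creation via the svcctl RPC endpoint; field names follow Zeek's defaults, and log paths are assumptions.

```python
# Sketch: two lateral-movement hunts over Zeek logs (JSON, one record/line).
import json

# 1) PsExec staging: PSEXESVC.exe transferred over SMB.
with open("smb_files.log") as f:
    for line in f:
        rec = json.loads(line)
        if "PSEXESVC" in (rec.get("name") or "").upper():
            print(f"PsExec artifact: {rec['id.orig_h']} -> {rec['id.resp_h']}")

# 2) Remote service creation via the svcctl endpoint over DCE-RPC.
with open("dce_rpc.log") as f:
    for line in f:
        rec = json.loads(line)
        if (rec.get("endpoint") == "svcctl"
                and "CreateService" in (rec.get("operation") or "")):
            print(f"Remote service creation: {rec['id.orig_h']} -> {rec['id.resp_h']}")
```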

Accomplish Mission

In most cases, the threat actor's end goal was not clearly observed. In one instance, however, Darktrace detected an unusually large volume of data being uploaded to "put[.]io", a cloud storage service, indicating that the threat actor's end goal was to exfiltrate potentially sensitive data.
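
Volume-based exfiltration like this can be approximated from connection logs by totaling outbound bytes per external destination. A sketch over a Zeek conn.log, with a static threshold standing in for the learned baseline an anomaly-detection tool would use:

```python
# Sketch: total outbound bytes per external destination and flag unusually
# large uploads. The threshold is illustrative; compare against a baseline.
import json
from collections import defaultdict
from ipaddress import ip_address

UPLOAD_THRESHOLD = 1 * 1024**3  # 1 GiB

sent = defaultdict(int)
with open("conn.log") as f:
    for line in f:
        rec = json.loads(line)
        if ip_address(rec["id.resp_h"]).is_global:
            sent[rec["id.resp_h"]] += rec.get("orig_bytes") or 0

for dst, total in sorted(sent.items(), key=lambda kv: -kv[1]):
    if total > UPLOAD_THRESHOLD:
        print(f"Large outbound transfer: {total / 1024**2:.0f} MiB to {dst}")
```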

In a recent investigation of a Medusa ransomware incident that took place in July 2024, Darktrace's Threat Research team found that initial access to the environment had likely been gained through a FortiClient EMS device: an incoming connection from 209.15.71[.]121 over port 8013 suggested that CVE-2023-48788 had been exploited. The device had been compromised almost three weeks before the ransomware was deployed and files were encrypted.

Mitigating risk with proactive exposure management and real-time detection

Threat actors have continued to exploit unpatched vulnerabilities in internet-facing systems to gain initial access to a network. This highlights the importance of addressing and patching vulnerabilities as soon as they are disclosed and a fix is released. However, due to the rapid nature of exploitation, this may not always be enough. Furthermore, threat actors may even be exploiting vulnerabilities that are not yet publicly known.

As the end goals of threat actors differ, from data exfiltration to deploying ransomware, post-exploitation behavior also varies from actor to actor. However, AI security tools such as Darktrace / NETWORK can help identify and alert on post-exploitation behavior based on abnormal activity in the network environment.

Despite CVE-2023-48788 having been publicly disclosed and fixed in March, multiple threat actors, such as the Medusa ransomware group, have continued to exploit the vulnerability on unpatched systems. With new vulnerabilities being disclosed almost every other day, security teams may find it challenging to patch their systems continuously.

As such, Darktrace / Proactive Exposure Management could also alleviate the workload of security teams by helping them identify and prioritize the most critical vulnerabilities in their network.

Insights from Darktrace’s First 6: Half-year threat report for 2024


Darktrace’s First 6: Half-Year Threat Report 2024 highlights the latest attack trends and key threats observed by the Darktrace Threat Research team in the first six months of 2024.

  • Focuses on anomaly detection and behavioral analysis to identify threats
  • Maps mitigated cases to known, publicly attributed threats for deeper context
  • Offers guidance on improving security posture to defend against persistent threats

Appendices

Credit to Emily Megan Lim (Cyber Security Analyst) and Ryan Traill (Threat Content Lead)

References

[1] https://nvd.nist.gov/vuln/detail/CVE-2023-48788

[2] https://www.horizon3.ai/attack-research/attack-blogs/cve-2023-48788-fortinet-forticlientems-sql-injection-deep-dive/

[3] https://redcanary.com/blog/threat-intelligence/cve-2023-48788/

[4] https://www.fortinet.com/blog/threat-research/teamcity-intrusion-saga-apt29-suspected-exploiting-cve-2023-42793

[5] https://redcanary.com/blog/threat-intelligence/cve-2023-48788/

[6] https://urlscan.io/result/3678b9e2-ad61-4719-bcef-b19cadcdd929/

List of IoCs

IoC - Type - Description

  • 212.32.243[.]25/SETUP.MSI - URL - Payload
  • 89.149.200[.]9/SETUP.MSI - URL - Payload
  • 212.113.106[.]100/setup.msi - URL - Payload
  • 95.181.173[.]172/run.zip - URL - Payload
  • serv1[.]api[.]9hits[.]com - Domain - Likely C2 endpoint
  • 128.199.207[.]131 - IP - Likely C2 endpoint
  • azure-documents[.]com - Domain - C2 endpoint
  • 104.168.140[.]84 - IP - C2 endpoint
  • 77.246.103[.]110 - IP - Likely C2 endpoint
  • 212.113.106[.]100 - IP - C2 endpoint

Darktrace Model Detections

  • Anomalous Connection / Callback on Web Facing Device
  • Anomalous Connection / Multiple HTTP POSTs to Rare Hostname
  • Anomalous Connection / New User Agent to IP Without Hostname
  • Anomalous Connection / Posting HTTP to IP Without Hostname
  • Anomalous Connection / Powershell to Rare External
  • Anomalous Connection / Rare External SSL Self-Signed
  • Anomalous Connection / Suspicious Self-Signed SSL
  • Anomalous Server Activity / Rare External from Server
  • Anomalous Server Activity / New User Agent from Internet Facing System
  • Anomalous Server Activity / Server Activity on New Non-Standard Port - External
  • Compliance / Remote Management Tool On Server
  • Device / New User Agent
  • Device / New PowerShell User Agent
  • Device / Attack and Recon Tools
  • Device / ICMP Address Scan
  • Device / Network Range Scan
  • Device / Network Scan
  • Device / RDP Scan
  • Device / Suspicious SMB Scanning Activity
  • Anomalous Connection / Multiple SMB Admin Session
  • Anomalous Connection / New or Uncommon Service Control
  • Anomalous Connection / Unusual Admin SMB Session
  • Device / Increase in New RPC Services
  • Device / Multiple Lateral Movement Breaches
  • Device / New or Uncommon WMI Activity
  • Device / New or Unusual Remote Command Execution
  • Device / SMB Lateral Movement
  • Device / Possible SMB/NTLM Brute Force
  • Unusual Activity / Successful Admin Brute-Force Activity
  • User / New Admin Credentials on Server
  • Unusual Activity / Enhanced Unusual External Data Transfer
  • Unusual Activity / Unusual External Data Transfer
  • Unusual Activity / Unusual External Data to New Endpoint
  • Device / Large Number of Model Breaches
  • Device / Large Number of Model Breaches from Critical Network Device

MITRE ATT&CK Mapping

Tactic – ID: Technique

  • Initial Access – T1190: Exploit Public-Facing Application
  • Resource Development – T1587.003: Develop Capabilities: Digital Certificates
  • Resource Development – T1608.003: Stage Capabilities: Install Digital Certificate
  • Command and Control – T1071.001: Application Layer Protocol: Web Protocols
  • Command and Control – T1219: Remote Access Software
  • Execution – T1059.001: Command and Scripting Interpreter: PowerShell
  • Reconnaissance – T1595: Active Scanning
  • Reconnaissance – T1590.005: Gather Victim Network Information: IP Addresses
  • Discovery – T1046: Network Service Discovery
  • Credential Access – T1110: Brute Force
  • Defense Evasion, Initial Access, Persistence, Privilege Escalation – T1078: Valid Accounts
  • Lateral Movement – T1021.002: Remote Services: SMB/Windows Admin Shares
  • Lateral Movement – T1021.003: Remote Services: Distributed Component Object Model
  • Execution – T1569.002: System Services: Service Execution
  • Execution – T1047: Windows Management Instrumentation
  • Exfiltration – T1041: Exfiltration Over C2 Channel
  • Exfiltration – T1567.002: Exfiltration Over Web Service: Exfiltration to Cloud Storage


