January 31, 2024

How Darktrace Defeated SmokeLoader Malware

Read how Darktrace's AI identified and neutralized SmokeLoader malware. Gain insights into their proactive approach to cybersecurity.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Patrick Anjos
Senior Cyber Analyst

What is Loader Malware?

Loader malware is a type of malicious software designed primarily to infiltrate a system and then download and execute additional malicious payloads.

In recent years, loader malware has emerged as a significant threat to organizations worldwide. This trend is expected to continue given the widespread availability of many loader strains within the Malware-as-a-Service (MaaS) marketplace. The MaaS marketplace contains a wide variety of strains that are both affordable, with toolkits ranging from USD 400 to USD 1,650 [1], and continuously improving in order to evade traditional detection mechanisms.

SmokeLoader is one such example of a MaaS strain that has been observed in the wild since 2011 and continues to pose a significant threat to organizations and their security teams.

How does SmokeLoader Malware work?

SmokeLoader’s ability to drop an array of different malware strains onto infected systems, including backdoors, ransomware, cryptominers, password stealers, point-of-sale malware, and banking trojans, makes it a highly versatile loader that has remained consistently popular among threat actors.

In addition to its versatility, SmokeLoader exhibits advanced evasion strategies that make it difficult for traditional security solutions to detect and remove, and it is easily distributed via methods like spam emails or malicious file downloads.

Between July and August 2023, Darktrace observed an increasing trend in SmokeLoader compromises across its customer base. The anomaly-based threat detection capabilities of Darktrace, coupled with the autonomous response technology, identified and contained the SmokeLoader infections in their initial stages, preventing attackers from causing further disruption by deploying other malicious software or ransomware.

SmokeLoader Malware Attack Details

PROPagate Injection Technique

SmokeLoader utilizes the PROPagate code injection technique, a less common method that inserts malicious code into existing processes in order to appear legitimate and bypass traditional signature-based security measures [2] [3]. In the case of SmokeLoader, this technique exploits the Windows SetWindowSubclass function, which is typically used to add or change the behavior of application windows in the Windows operating system. By manipulating this function, SmokeLoader can inject its code into other running processes, such as Internet Explorer. This not only helps to disguise the malware's activity but also allows attackers to leverage the permissions and capabilities of the infected process.

Obfuscation Methods

SmokeLoader is known to employ several obfuscation techniques to evade detection and analysis by security teams. These techniques, which include scrambling portable executable files, encrypting its malicious code, obfuscating API function calls, and packing, are intended to make the malware’s code appear harmless or unremarkable to antivirus software. This allows attackers to slip past defenses and execute their malicious activities while remaining undetected.

Infection Vector and Communication

SmokeLoader typically spreads via phishing emails that employ social engineering tactics to convince users to unknowingly download malicious payloads and execute the malware. Once installed on target networks, SmokeLoader acts as a backdoor, allowing attackers to control infected systems and download further malicious payloads from command-and-control (C2) servers. SmokeLoader uses fast flux, a DNS technique utilized by botnets whereby the IP addresses associated with C2 domains are rapidly changed, making it difficult to trace the source of the attack. This technique also boosts the resilience of the attack, as taking down one or two malicious IP addresses will not significantly impact the botnet's operation.
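To make the fast flux pattern concrete, the following is a minimal Python sketch, under illustrative assumptions, of how a defender might flag a domain whose resolved IP pool churns unusually quickly. The thresholds, polling cadence, and domain name are placeholders rather than values taken from this incident:

```python
import socket
import time

def observe_a_records(domain: str, rounds: int = 5, interval: float = 60.0) -> list[set[str]]:
    """Resolve a domain several times, recording the set of IPv4 addresses seen each round."""
    snapshots = []
    for _ in range(rounds):
        try:
            infos = socket.getaddrinfo(domain, None, family=socket.AF_INET)
            snapshots.append({info[4][0] for info in infos})
        except socket.gaierror:
            snapshots.append(set())  # resolution failed this round
        time.sleep(interval)
    return snapshots

def looks_like_fast_flux(snapshots: list[set[str]], churn_threshold: int = 10) -> bool:
    """Heuristic: a fast-flux domain rotates through a large pool of hosts, so the
    union of IPs observed over a short window grows far larger than it would for
    a normal domain backed by a small, stable server set."""
    all_ips: set[str] = set()
    for snap in snapshots:
        all_ips |= snap
    return len(all_ips) >= churn_threshold

# Hypothetical usage; 'suspicious-domain.test' is a placeholder, not an IOC:
# if looks_like_fast_flux(observe_a_records("suspicious-domain.test")):
#     print("DNS churn consistent with fast flux")
```

A production detector would also weigh TTL values and the geographic spread of the returned addresses, but the core signal is the same: the IP pool keeps changing faster than legitimate hosting normally would.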

Continuous Evolution

As with many MaaS strains, SmokeLoader is continuously evolving, with its developers regularly adding new features and techniques to increase its effectiveness and evasiveness. This includes new obfuscation methods, injection techniques, and communication protocols. This constant evolution makes SmokeLoader a significant threat and underscores the importance of advanced threat detection and response capabilities.

Darktrace’s Coverage of SmokeLoader Attack

Between July and August 2023, Darktrace detected one particular SmokeLoader infection at multiple stages of its kill chain on a customer network. This detection was made possible by Darktrace DETECT’s anomaly-based approach and Self-Learning AI that allows it to identify subtle deviations in device behavior.

One of the key components of this process is the classification of endpoint rarity: determining whether an endpoint is new or unusual for a given network. This classification is applied to various aspects of observed endpoints, such as domains, IP addresses, or hostnames within the network. It thereby plays a vital role in identifying SmokeLoader activity, such as the initial infection vector or C2 communication, which typically involves a device contacting a malicious endpoint associated with SmokeLoader.
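Darktrace's rarity scoring is proprietary, but the underlying intuition can be sketched in a few lines of Python: an endpoint's rarity rises the less often it has appeared in the network's own connection history. Everything below, including the counts and the scoring formula, is an illustrative assumption:

```python
from collections import Counter

class EndpointRarity:
    """Toy model of endpoint-rarity scoring: an endpoint never seen on this
    network scores 1.0 ("100% rare"), trending toward 0.0 as it becomes common."""

    def __init__(self) -> None:
        self.seen: Counter[str] = Counter()  # endpoint -> prior observations
        self.total = 0

    def observe(self, endpoint: str) -> None:
        self.seen[endpoint] += 1
        self.total += 1

    def rarity(self, endpoint: str) -> float:
        if self.total == 0 or endpoint not in self.seen:
            return 1.0
        return 1.0 - self.seen[endpoint] / self.total

model = EndpointRarity()
for domain in ["intranet.local"] * 500 + ["updates.vendor.example"] * 120:
    model.observe(domain)
print(model.rarity("humman[.]art"))  # 1.0 -> new to this network, hence "100% rare"
```

A real deployment would decay old observations over time and score domains, IPs, and hostnames separately, but the principle of anchoring suspicion to the network's own history is the same.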

The First Signs of SmokeLoader Infection

Beginning in July 2023, Darktrace observed a surge in suspicious activities that were assessed with moderate to high confidence to be associated with SmokeLoader malware.

For example, on July 30, a device was observed making a successful HTTPS request to humman[.]art, a domain that had never previously been seen on the network and was therefore classified as 100% rare by DETECT. During this connection, the device in question received a total of 6.0 KiB of data from the unusual endpoint. Open-source intelligence (OSINT) sources reported with high confidence that this domain was associated with the SmokeLoader C2 botnet.

The device was then detected making an HTTP request to another 100% rare external IP, namely 85.208.139[.]35, using a new user agent. This request contained the URI ‘/DefenUpdate.exe’, suggesting a possible download of an executable (.exe) file. This was corroborated by the total amount of data received in this connection, 4.3 MB. Both the file name and its size suggest that the offending device may have downloaded additional malicious tooling from the SmokeLoader C2 endpoint, such as a trojan or information stealer, as reported on OSINT platforms [4].

Figure 1: Device event log showing the moment when a device made its first connection to a SmokeLoader-associated domain, and the use of a new user agent. A few seconds later, the DETECT model “Anomalous Connection / New User Agent to IP Without Hostname” breached.

The observed new user agent, “Mozilla/5.0 (Windows NT 10.0; Win64; x64; Trident/7.0; rv:11.0) like Gecko”, was identified as suspicious by Darktrace, leading to the “Anomalous Connection / New User Agent to IP Without Hostname” DETECT model breach.

As this specific user agent was associated with the Internet Explorer browser running on Windows 10, it may not have appeared suspicious to traditional security tools. However, Darktrace’s anomaly-based detection allows it to identify and mitigate emerging threats, even those that utilize sophisticated evasion techniques.

This is particularly noteworthy in this case because, as discussed earlier, SmokeLoader is known to inject its malicious code into legitimate processes, like Internet Explorer.
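The logic implied by that model name can be approximated with a simple stateful check, sketched below. This is not Darktrace's implementation; the device name is invented and the destination uses a documentation IP rather than the real IOC:

```python
import ipaddress

def new_ua_to_bare_ip(ua_history: dict[str, set[str]], device: str,
                      host: str, user_agent: str) -> bool:
    """Flag an HTTP request that pairs a user agent never before seen from
    this device with a destination given as a raw IP rather than a hostname."""
    try:
        ipaddress.ip_address(host)  # raises ValueError if host is a hostname
        is_bare_ip = True
    except ValueError:
        is_bare_ip = False
    known = ua_history.setdefault(device, set())
    is_new_ua = user_agent not in known
    known.add(user_agent)
    return is_bare_ip and is_new_ua

history: dict[str, set[str]] = {}
print(new_ua_to_bare_ip(
    history, "desktop-01", "203.0.113.35",  # documentation IP, stand-in for the C2 address
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64; Trident/7.0; rv:11.0) like Gecko"))  # True
```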

Figure 2: Darktrace detecting the affected device leveraging a new user agent and establishing an anomalous HTTP connection with an external IP, which was 100% rare to the network.

C2 Communication

Darktrace continued to observe the device making repeated connections to the humman[.]art endpoint over the next few days. On August 7, the device was observed making unusual POST requests to the endpoint over port 80, breaching the ‘Anomalous Connection / Multiple HTTP POSTs to Rare Hostname’ DETECT model. These POST requests continued over a period of around 10 days and consisted of a pattern of 8 requests, each at ten-minute intervals.

Figure 3: Model Breach Event Log highlighting the Darktrace DETECT model breach ‘Anomalous Connection / Multiple HTTP POSTs to Rare Hostname’.
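Regular-cadence POSTs like these are a classic beaconing signature. A minimal sketch of the interval check, with illustrative thresholds, might look like this:

```python
from statistics import mean, pstdev

def looks_like_beaconing(timestamps: list[float], min_events: int = 6,
                         max_jitter_ratio: float = 0.1) -> bool:
    """Flag a connection series whose inter-arrival times are suspiciously
    regular: a low standard deviation relative to the mean gap suggests an
    automated beacon rather than human-driven browsing."""
    if len(timestamps) < min_events:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(gaps)
    return avg > 0 and pstdev(gaps) / avg <= max_jitter_ratio

# Eight requests roughly 600 seconds apart, as in the pattern described above:
ts = [0.0, 601.0, 1199.0, 1800.0, 2402.0, 2999.0, 3600.0, 4201.0]
print(looks_like_beaconing(ts))  # True
```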

Upon investigating the details of this activity identified by Darktrace DETECT, a particular pattern was observed in these requests: they used the same user agent, “Mozilla/5.0 (Windows NT 10.0; Win64; x64; Trident/7.0; rv:11.0) like Gecko”, which had previously been detected in the initial breach.

Additionally, the requests had a constantly changing referrer header, possibly using randomly generated domain names for each request. Further examination of the packet capture (PCAP) from these requests revealed that the payload in these POST requests contained an RC4 encrypted string, strongly indicating SmokeLoader C2 activity.

Figure 4: Advanced Search results display an unusual pattern in the requests made by the device to the hostname humman[.]art. This pattern shows a constant change in the referrer header for each request, indicating anomalous behavior.
Figure 5: The PCAP shows the payload seen in these POST requests contained an RC4 encrypted string strongly indicating SmokeLoader C2 activity.  
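RC4 is a standard stream cipher, so once an analyst recovers the key from a sample, a captured payload like this can be decrypted in a few lines of Python. The key below is a stand-in; real SmokeLoader keys must be extracted from the binary itself:

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Plain RC4 (KSA + PRGA). The cipher is symmetric, so the same
    function both encrypts and decrypts."""
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# Round trip with a hypothetical key:
ciphertext = rc4(b"example-key", b"beacon payload")
print(rc4(b"example-key", ciphertext))  # b'beacon payload'
```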

Unfortunately, in this case Darktrace RESPOND was not active on the network, meaning the attack was able to progress through its kill chain. Despite this, the timely alerts and detailed incident insights provided by Darktrace DETECT allowed the customer’s security team to begin their remediation process, implementing blocks on their firewall and thus preventing the SmokeLoader malware from continuing its communication with C2 infrastructure.

Darktrace RESPOND Halting Potential Threats from the Initial Stages of Detection

With Darktrace RESPOND, organizations can move beyond threat detection to proactive defense against emerging threats. RESPOND is designed to halt threats as soon as they are identified by DETECT, preventing them from escalating into full-blown compromises. This is achieved through advanced machine learning and Self-Learning AI that is able to understand the normal ‘pattern of life’ of customer networks, allowing for swift and accurate threat detection and response.

One pertinent example was seen on July 6, when Darktrace detected a separate SmokeLoader case on a customer network with RESPOND enabled in autonomous response mode. Darktrace DETECT initially identified a string of anomalous activity associated with the download of suspicious executable files, triggering a breach of the ‘Anomalous File / Multiple EXE from Rare External Locations’ model.

The device was observed downloading executable files (‘6523.exe’ and ‘g.exe’) via HTTP over port 80. These downloads originated from endpoints that had never previously been seen within the customer’s environment, namely ‘hugersi[.]com’ and ‘45.66.230[.]164’, both of which had been strongly linked to SmokeLoader by OSINT sources, likely indicating the initial infection stage of the attack [5].

Figure 6: This figure illustrates Darktrace DETECT observing a device downloading multiple .exe files from rare endpoints and the associated model breach, ‘Anomalous File / Multiple EXE from Rare External Locations’.

Around the same time, Darktrace also observed the same device downloading an unusual file with a numeric file name. Threat actors often employ this tactic in order to avoid using file name patterns that could easily be recognized and blocked by traditional security measures; by frequently changing file names, malicious executables are more likely to remain undetected.
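A simple name-based heuristic, sketched below under the assumption that request URIs are available from HTTP logs, can surface this pattern, even though name checks alone are easy to evade and are best paired with the anomaly-based signals described above:

```python
import re

# Purely numeric executable names (e.g. '/6523.exe') are a common way to
# evade static blocklists; flag them for closer inspection.
NUMERIC_EXE = re.compile(r"/\d+\.exe$", re.IGNORECASE)

def has_numeric_exe_name(uri: str) -> bool:
    return NUMERIC_EXE.search(uri) is not None

print(has_numeric_exe_name("/6523.exe"))       # True
print(has_numeric_exe_name("/installer.exe"))  # False
```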

Figure 7: Graph showing the unusually high number of executable files downloaded by the device during the initial infection stage of the attack. The orange and red circles represent the number of model breaches that the device made during the observed activity related to SmokeLoader infection.
Figure 8: This figure illustrates the moment when Darktrace DETECT identified a suspicious download with a numeric file name.

With Darktrace RESPOND active and enabled in autonomous response mode, the SmokeLoader infection was thwarted in the first instance. RESPOND took swift autonomous action by blocking connections to the suspicious endpoints identified by DETECT, blocking all outgoing traffic, and enforcing a pre-established “pattern of life” on offending devices. By enforcing a pattern of life on a device, Darktrace RESPOND ensures that it cannot deviate from its ‘normal’ activity to carry out potentially malicious actions, while allowing the device to continue expected business operations.
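RESPOND's actual enforcement logic is proprietary, but the idea of "enforcing a pattern of life" can be illustrated with a toy allowlist built from a device's learned history. The profile contents and helper below are invented for illustration:

```python
# Toy illustration: the device may only use (endpoint, port) pairs it has
# historically used; any deviation from that learned profile is blocked.
LearnedProfile = set[tuple[str, int]]

def action_for(profile: LearnedProfile, endpoint: str, port: int) -> str:
    return "allow" if (endpoint, port) in profile else "block"

profile: LearnedProfile = {("intranet.local", 443), ("updates.vendor.example", 80)}
print(action_for(profile, "intranet.local", 443))  # allow: expected business traffic
print(action_for(profile, "hugersi[.]com", 80))    # block: deviation from the learned norm
```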

Figure 9: A total of 8 RESPOND actions were applied, including blocking connections to suspicious endpoints and domains associated with SmokeLoader.

In addition to the autonomous mitigative actions taken by RESPOND, this customer also received a Proactive Threat Notification (PTN) informing them of potentially malicious activity on their network. This prompted the Darktrace Security Operations Center (SOC) to investigate and document the incident, allowing the customer’s security team to shift their focus to remediating and removing the threat of SmokeLoader.

Conclusion

Ultimately, Darktrace showcased its ability to detect and contain versatile and evasive strains of loader malware like SmokeLoader. Although the malware is adept at bypassing conventional security tools by frequently changing its C2 infrastructure, injecting malicious code into existing processes, and obfuscating malicious file and domain names, Darktrace’s anomaly-based approach allowed it to recognize such activity as deviations from expected network behavior, regardless of their apparent legitimacy.

Considering SmokeLoader’s wide array of functions, including C2 communication that could be used to facilitate additional attacks like data exfiltration or the deployment of information stealers and ransomware, Darktrace proved to be crucial in safeguarding customer networks. By identifying and mitigating SmokeLoader at the earliest possible stage, Darktrace effectively prevented these compromises from escalating into more damaging and disruptive incidents.

With the threat of loader malware expected to continue growing alongside the boom of the MaaS industry, it is paramount for organizations to adopt proactive security solutions, like Darktrace DETECT+RESPOND, that are able to make intelligent decisions to identify and neutralize sophisticated attacks.

Credit to Patrick Anjos, Senior Cyber Analyst, Justin Torres, Cyber Analyst

Appendices

Darktrace DETECT Model Detections

- Anomalous Connection / New User Agent to IP Without Hostname

- Anomalous Connection / Multiple HTTP POSTs to Rare Hostname

- Anomalous File / Multiple EXE from Rare External Locations

- Anomalous File / Numeric File Download

List of IOCs (IOC / Type / Description)

- 85.208.139[.]35 / IP / SmokeLoader C2 Endpoint

- 185.174.137[.]109 / IP / SmokeLoader C2 Endpoint

- 45.66.230[.]164 / IP / SmokeLoader C2 Endpoint

- 91.215.85[.]147 / IP / SmokeLoader C2 Endpoint

- tolilolihul[.]net / Hostname / SmokeLoader C2 Endpoint

- bulimu55t[.]net / Hostname / SmokeLoader C2 Endpoint

- potunulit[.]org / Hostname / SmokeLoader C2 Endpoint

- hugersi[.]com / Hostname / SmokeLoader C2 Endpoint

- humman[.]art / Hostname / SmokeLoader C2 Endpoint

- 371b0d5c867c2f33ae270faa14946c77f4b0953 / SHA1 / SmokeLoader Executable

References:

[1] https://bazaar.abuse.ch/sample/d7c395ab2b6ef69210221337ea292e204b0f73fef8840b6e64ab88595eda45b3/#intel

[2] https://malpedia.caad.fkie.fraunhofer.de/details/win.smokeloader

[3] https://www.darkreading.com/cyber-risk/breaking-down-the-propagate-code-injection-attack

[4] https://n1ght-w0lf.github.io/malware%20analysis/smokeloader/

[5] https://therecord.media/surge-in-smokeloader-malware-attacks-targeting-ukrainian-financial-gov-orgs

MITRE ATT&CK Mapping

Model: Anomalous Connection / New User Agent to IP Without Hostname

ID: T1071.001

Sub technique: T1071

Tactic: COMMAND AND CONTROL

Technique Name: Web Protocols

Model: Anomalous Connection / Multiple HTTP POSTs to Rare Hostname

ID: T1185

Sub technique: -

Tactic: COLLECTION

Technique Name: Man in the Browser

ID: T1071.001

Sub technique: T1071

Tactic: COMMAND AND CONTROL

Technique Name: Web Protocols

Model: Anomalous File / Multiple EXE from Rare External Locations

ID: T1189

Sub technique: -

Tactic: INITIAL ACCESS

Technique Name: Drive-by Compromise

ID: T1588.001

Sub technique: T1588

Tactic: RESOURCE DEVELOPMENT

Technique Name: Malware

Model: Anomalous File / Numeric File Download

ID: T1189

Sub technique: -

Tactic: INITIAL ACCESS

Technique Name: Drive-by Compromise

ID: T1588.001

Sub technique: T1588

Tactic: RESOURCE DEVELOPMENT

Technique Name: Malware



December 23, 2025

How to Secure AI in the Enterprise: A Practical Framework for Models, Data, and Agents


Introduction: Why securing AI is now a security priority

AI adoption is at the forefront of the digital movement in businesses, outpacing the rate at which IT and security professionals can set up governance models and security parameters. Adopting Generative AI chatbots, autonomous agents, and AI-enabled SaaS tools promises efficiency and speed but also introduces new forms of risk that traditional security controls were never designed to manage. For many organizations, the first challenge is not whether AI should be secured, but what “securing AI” actually means in practice. Is it about protecting models? Governing data? Monitoring outputs? Or controlling how AI agents behave once deployed?  

While demand for adoption increases, securing AI use in the enterprise is still an abstract concept to many, and operationalizing it goes far beyond just having visibility. Practitioners also need to consider how AI is sourced, built, deployed, used, and governed across the enterprise.

The goal for security teams: Implement a clear, lifecycle-based AI security framework. This blog will demonstrate the variety of AI use cases that should be considered when developing this framework and how to frame this conversation for non-technical audiences.

What does “securing AI” actually mean?

Securing AI is often framed as an extension of existing security disciplines. In practice, this assumption can cause confusion.

Traditional security functions are built around relatively stable boundaries. Application security focuses on code and logic. Cloud security governs infrastructure and identity. Data security protects sensitive information at rest and in motion. Identity security controls who can access systems and services. Each function has clear ownership, established tooling, and well-understood failure modes.

AI does not fit neatly into any of these categories. An AI system is simultaneously:

  • An application that executes logic
  • A data processor that ingests and generates sensitive information
  • A decision-making layer that influences or automates actions
  • A dynamic system that changes behavior over time

As a result, the security risks introduced by AI cut across multiple domains at once. A single AI interaction can involve identity misuse, data exposure, application logic abuse, and supply chain risk, all within the same workflow. This is where the traditional lines between security functions begin to blur.

For example, a malicious prompt submitted by an authorized user is not a classic identity breach, yet it can trigger data leakage or unauthorized actions. An AI agent calling an external service may appear as legitimate application behavior, even as it violates data sovereignty or compliance requirements. AI-generated code may pass standard development checks while introducing subtle vulnerabilities or compromised dependencies.

In each case, no single security team “owns” the risk outright.

This is why securing AI cannot be reduced to model safety, governance policies, or perimeter controls alone. It requires a shared security lens that spans development, operations, data handling, and user interaction. Securing AI means understanding not just whether systems are accessed securely, but whether they are being used, trained, and allowed to act in ways that align with business intent and risk tolerance.

At its core, securing AI is about restoring clarity in environments where accountability can quickly blur. It is about knowing where AI exists, how it behaves, what it is allowed to do, and how its decisions affect the wider enterprise. Without this clarity, AI becomes a force multiplier for both productivity and risk.

The five categories of AI risk in the enterprise

A practical way to approach AI security is to organize risk around how AI is used and where it operates. The framework below defines five categories of AI risk, each aligned to a distinct layer of the enterprise AI ecosystem.

How to Secure AI in the Enterprise:

  • Defending against misuse and emergent behaviors
  • Monitoring and controlling AI in operation
  • Protecting AI development and infrastructure
  • Securing the AI supply chain
  • Strengthening readiness and oversight

Together, these categories provide a structured lens for understanding how AI risk manifests and where security teams should focus their efforts.

1. Defending against misuse and emergent AI behaviors

Generative AI systems and agents can be manipulated in ways that bypass traditional controls. Even when access is authorized, AI can be misused, repurposed, or influenced through carefully crafted prompts and interactions.

Key risks include:

  • Malicious prompt injection designed to coerce unwanted actions
  • Unauthorized or unintended use cases that bypass guardrails
  • Exposure of sensitive data through prompt histories
  • Hallucinated or malicious outputs that influence human behavior

Unlike traditional applications, AI systems can produce harmful outcomes without being explicitly compromised. Securing this layer requires monitoring intent, not just access. Security teams need visibility into how AI systems are being prompted, how outputs are consumed, and whether usage aligns with approved business purposes.

2. Monitoring and controlling AI in operation

Once deployed, AI agents operate at machine speed and scale. They can initiate actions, exchange data, and interact with other systems with little human oversight. This makes runtime visibility critical.

Operational AI risks include:

  • Agents using permissions in unintended ways
  • Uncontrolled outbound connections to external services or agents
  • Loss of forensic visibility into ephemeral AI components
  • Non-compliant data transmission across jurisdictions

Securing AI in operation requires real-time monitoring of agent behavior, centralized control points such as AI gateways, and the ability to capture agent state for investigation. Without these capabilities, security teams may be blind to how AI systems behave once live, particularly in cloud-native or regulated environments.

3. Protecting AI development and infrastructure

Many AI risks are introduced long before deployment. Development pipelines, infrastructure configurations, and architectural decisions all influence the security posture of AI systems.

Common risks include:

  • Misconfigured permissions and guardrails
  • Insecure or overly complex agent architectures
  • Infrastructure-as-Code introducing silent misconfigurations
  • Vulnerabilities in AI-generated code and dependencies

AI-generated code adds a new dimension of risk, as hallucinated packages or insecure logic may be harder to detect and debug than human-written code. Securing AI development means applying security controls early, including static analysis, architectural review, and continuous configuration monitoring throughout the build process.

4. Securing the AI supply chain

AI supply chains are often opaque. Models, datasets, dependencies, and services may come from third parties with varying levels of transparency and assurance.

Key supply chain risks include:

  • Shadow AI tools used outside approved controls
  • External AI agents granted internal access
  • Suppliers applying AI to enterprise data without disclosure
  • Compromised models, training data, or dependencies

Securing the AI supply chain requires discovering where AI is used, validating the provenance and licensing of models and data, and assessing how suppliers process and protect enterprise information. Without this visibility, organizations risk data leakage, regulatory exposure, and downstream compromise through trusted integrations.

5. Strengthening readiness and oversight

Even with strong technical controls, AI security fails without governance, testing, and trained teams. AI introduces new incident scenarios that many security teams are not yet prepared to handle.

Oversight risks include:

  • Lack of meaningful AI risk reporting
  • Untested AI systems in production
  • Security teams untrained in AI-specific threats

Organizations need AI-aware reporting, red and purple team exercises that include AI systems, and ongoing training to build operational readiness. These capabilities ensure AI risks are understood, tested, and continuously improved, rather than discovered during a live incident.

Reframing AI security for the boardroom

AI security is not just a technical issue. It is a trust, accountability, and resilience issue. Boards want assurance that AI-driven decisions are reliable, explainable, and protected from tampering.

Effective communication with leadership focuses on:

  • Trust: confidence in data integrity, model behavior, and outputs
  • Accountability: clear ownership across teams and suppliers
  • Resilience: the ability to operate, audit, and adapt under attack or regulation

Mapping AI security efforts to recognized frameworks such as ISO/IEC 42001 and the NIST AI Risk Management Framework helps demonstrate maturity and aligns AI security with broader governance objectives.

Conclusion: Securing AI is a lifecycle challenge

The same characteristics that make AI transformative also make it difficult to secure. AI systems blur traditional boundaries between software, users, and decision-making, expanding the attack surface in subtle but significant ways.

Securing AI requires restoring clarity: knowing where AI exists, how it behaves, who controls it, and how it is governed. A framework-based approach allows organizations to innovate with AI while maintaining trust, accountability, and control.

The journey to secure AI is ongoing, but it begins with understanding the risks across the full AI lifecycle and building security practices that evolve alongside the technology.

About the author
Brittany Woodsmall
Product Marketing Manager, AI & Attack Surface


December 22, 2025

The Year Ahead: AI Cybersecurity Trends to Watch in 2026


Introduction: 2026 cyber trends

Each year, we ask some of our experts to step back from the day-to-day pace of incidents, vulnerabilities, and headlines to reflect on the forces reshaping the threat landscape. The goal is simple: to identify and share the trends we believe will matter most in the year ahead, based on the real-world challenges our customers are facing, the technology and issues our R&D teams are exploring, and our observations of how both attackers and defenders are adapting.

In 2025, we saw generative AI and early agentic systems moving from limited pilots into more widespread adoption across enterprises. Generative AI tools became embedded in SaaS products and enterprise workflows we rely on every day, AI agents gained more access to data and systems, and we saw glimpses of how threat actors can manipulate commercial AI models for attacks. At the same time, expanding cloud and SaaS ecosystems and the increasing use of automation continued to stretch traditional security assumptions.

Looking ahead to 2026, we’re already seeing the security of AI models, agents, and the identities that power them becoming a key point of tension, and of opportunity, for both attackers and defenders. Long-standing challenges and risks such as identity, trust, data integrity, and human decision-making will not disappear, but AI and automation will increase the speed and scale of cyber risk.

Here's what a few of our experts believe are the trends that will shape this next phase of cybersecurity, and the realities organizations should prepare for.  

Agentic AI is the next big insider risk

In 2026, organizations may experience their first large-scale security incidents driven by agentic AI behaving in unintended ways—not necessarily due to malicious intent, but because of how easily agents can be influenced. AI agents are designed to be helpful, lack judgment, and operate without understanding context or consequence. This makes them highly efficient—and highly pliable. Unlike human insiders, agentic systems do not need to be socially engineered, coerced, or bribed. They only need to be prompted creatively, misinterpret legitimate prompts, or be vulnerable to indirect prompt injection. Without strong controls around access, scope, and behavior, agents may over-share data, misroute communications, or take actions that introduce real business risk. Securing AI adoption will increasingly depend on treating agents as first-class identities—monitored, constrained, and evaluated based on behavior, not intent.

-- Nicole Carignan, SVP of Security & AI Strategy

Prompt Injection moves from theory to front-page breach

We’ll see the first major story of an indirect prompt injection attack against companies adopting AI either through an accessible chatbot or an agentic system ingesting a hidden prompt. In practice, this may result in unauthorized data exposure or unintended malicious behavior by AI systems, such as over-sharing information, misrouting communications, or acting outside their intended scope. Recent attention on this risk—particularly in the context of AI-powered browsers and additional safety layers being introduced to guide agent behavior—highlights a growing industry awareness of the challenge.  

-- Collin Chapleau, Senior Director of Security & AI Strategy

Humans are even more outpaced, but not broken

When it comes to cyber, people aren’t failing; the system is moving faster than they can. Attackers exploit the gap between human judgment and machine-speed operations. The rise of deepfakes and emotion-driven scams that we’ve seen in the last few years reduces our ability to spot the familiar human cues we’ve been taught to look out for. Fraud now spans social platforms, encrypted chat, and instant payments in minutes. Expecting humans to be the last line of defense is unrealistic.

Defense must assume human fallibility and design accordingly. Automated provenance checks, cryptographic signatures, and dual-channel verification should precede human judgment. Training still matters, but it cannot close the gap alone. In the year ahead, we need to see more of a focus on partnership: systems that absorb risk so humans make decisions in context, not under pressure.

-- Margaret Cunningham, VP of Security & AI Strategy

AI removes the attacker bottleneck—smaller organizations feel the impact

One factor that is currently preventing more companies from breaches is a bottleneck on the attacker side: there’s not enough human hacker capital. The number of human hands on a keyboard is a rate-determining factor in the threat landscape. Further advancements of AI and automation will continue to open that bottleneck. We are already seeing that. The ostrich approach of hoping that one’s own company is too obscure to be noticed by attackers will no longer work as attacker capacity increases.  

-- Max Heinemeyer, Global Field CISO

SaaS platforms become the preferred supply chain target

Attackers have learned a simple lesson: compromising SaaS platforms can have big payouts. As a result, we’ll see more targeting of commercial off-the-shelf SaaS providers, which are often highly trusted and deeply integrated into business environments. Some of these attacks may involve software with unfamiliar brand names, but their downstream impact will be significant. In 2026, expect more breaches where attackers leverage valid credentials, APIs, or misconfigurations to bypass traditional defenses entirely.

-- Nathaniel Jones, VP of Security & AI Strategy

Increased commercialization of generative AI and AI assistants in cyber attacks

One trend we’re watching closely for 2026 is the commercialization of AI-assisted cybercrime. For example, cybercrime prompt playbooks sold on the dark web—essentially copy-and-paste frameworks that show attackers how to misuse or jailbreak AI models. It’s an evolution of what we saw in 2025, where AI lowered the barrier to entry. In 2026, those techniques become productized, scalable, and much easier to reuse.  

-- Toby Lewis, Global Head of Threat Analysis

Conclusion

Taken together, these trends underscore that the core challenges of cybersecurity are not changing dramatically -- identity, trust, data, and human decision-making still sit at the core of most incidents. What is changing quickly is the environment in which these challenges play out. AI and automation are accelerating everything: how quickly attackers can scale, how widely risk is distributed, and how easily unintended behavior can create real impact. And as technology like cloud services and SaaS platforms become even more deeply integrated into businesses, the potential attack surface continues to expand.  

Predictions are not guarantees. But the patterns emerging today suggest that 2026 will be a year where securing AI becomes inseparable from securing the business itself. The organizations that prepare now—by understanding how AI is used, how it behaves, and how it can be misused—will be best positioned to adopt these technologies with confidence in the year ahead.

Learn more about how to secure AI adoption in the enterprise without compromise by registering to join our live launch webinar on February 3, 2026.  

About the author
The Darktrace Community