Blog
/
Network
/
May 25, 2022

Uncovering the Sysrv-Hello Crypto-Jacking Bonet

Discover the cyber kill chain of a Sysrv-hello botnet infection in France and gain insights into the latest TTPs of the botnet in March and April 2022.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Shuh Chin Goh
Default blog imageDefault blog imageDefault blog imageDefault blog imageDefault blog imageDefault blog image
25
May 2022

In recent years, the prevalence of crypto-jacking botnets has risen in tandem with the popularity and value of cryptocurrencies. Increasingly crypto-mining malware programs are distributed by botnets as they allow threat actors to harness the cumulative processing power of a large number of machines (discussed in our other Darktrace blogs.1 2 One of these botnets is Sysrv-hello, which in addition to crypto-mining, propagates aggressively across the Internet in a worm-like manner by trolling for Remote Code Execution (RCE) vulnerabilities and SSH worming from the compromised victim devices. This all has the purpose of expanding the botnet.

First identified in December 2020, Sysrv-hello’s operators constantly update and change the bots’ behavior to evolve and stay ahead of security researchers and law enforcement. As such, infected systems can easily go unnoticed by both users and organizations. This blog examines the cyber kill chain sequence of a Sysrv-hello botnet infection detected at the network level by Darktrace DETECT/Network, as well as the botnet’s tactics, techniques, and procedures (TTPs) in March and April 2022.

Figure 1: Timeline of the attack

Delivery and exploitation

The organization, which was trialing Darktrace, had deployed the technology on March 2, 2022. On the very same day, the initial host infection was seen through the download of a first-stage PowerShell loader script from a rare external endpoint by a device in the internal network. Although initial exploitation of the device happened prior to the installation and was not observed, this botnet is known to target RCE vulnerabilities in various applications such as MySQL, Tomcat, PHPUnit, Apache Solar, Confluence, Laravel, JBoss, Jira, Sonatype, Oracle WebLogic and Apache Struts to gain initial access to internal systems.3 Recent iterations have also been reported to have been deployed via drive-by-downloads from an empty HTML iframe pointing to a malicious executable that downloads to the device from a user visiting a compromised website.4

Initial intrusion

The Sysrv-hello botnet is distributed for both Linux and Windows environments, with the corresponding compatible script pulled based on the architecture of the system. In this incident, the Windows version was observed.

On March 2, 2022 at 15:15:28 UTC, the device made a successful HTTP GET request to a malicious IP address5 that had a rarity score of 100% in the network. It subsequently downloaded a malicious PowerShell script named ‘ldr.ps1'6 onto the system. The associated IP address ‘194.145.227[.]21’ belongs to ‘ASN AS48693 Rices Privately owned enterprise’ and had been identified as a Sysrv-hello botnet command and control (C2) server in April the previous year. 3

Looking at the URI ‘/ldr.ps1?b0f895_admin:admin_81.255.222.82:8443_https’, it appears some form of query was being executed onto the object. The question mark ‘?’ in this URI is used to delimit the boundary between the URI of the queryable object and the set of strings used to express a query onto that object. Conventionally, we see the set of strings contains a list of key/value pairs with equal signs ‘=’, which are separated by the ampersand symbol ‘&’ between each of those parameters (e.g. www.youtube[.]com/watch?v=RdcCjDS0s6s&ab_channel=SANSCyberDefense), though the exact structure of the query string is not standardized and different servers may parse it differently. Instead, this case saw a set of strings with the hexadecimal color code #b0f895 (a light shade of green), admin username and password login credentials, and the IP address ‘81.255.222[.]82’ being applied during the object query (via HTTPS protocol on port 8443). In recent months this French IP has also had reports of abuse from the OSINT community.7

On March 2, 2022 at 15:15:33 UTC, the PowerShell loader script further downloaded second-stage executables named ‘sys.exe’ and ‘xmrig.2 sver.8 9 These have been identified as the worm and cryptocurrency miner payloads respectively.

Establish foothold

On March 2, 2022 at 17:46:55 UTC, after the downloads of the worm and cryptocurrency miner payloads, the device initiated multiple SSL connections in a regular, automated manner to Pastebin – a text storage website. This technique was used as a vector to download/upload data and drop further malicious scripts onto the host. OSINT sources suggest the JA3 client SSL fingerprint (05af1f5ca1b87cc9cc9b25185115607d) is associated with PowerShell usage, corroborating with the observation that further tooling was initiated by the PowerShell script ‘ldr.ps1’.

Continual Pastebin C2 connections were still being made by the device almost two months since the initiation of such connections. These Pastebin C2 connections point to new tactics and techniques employed by Sysrv-hello — reports earlier than May do not appear to mention any usage of the file storage site. These new TTPs serve two purposes: defense evasion using a web service/protocol and persistence. Persistence was likely achieved through scheduling daemons downloaded from this web service and shellcode executions at set intervals to kill off other malware processes, as similarly seen in other botnets.10 Recent reports have seen other malware programs also switch to Pastebin C2 tunnels to deliver subsequent payloads, scrapping the need for traditional C2 servers and evading detection.11

Figure 2: A section of the constant SSL connections that the device was still making to ‘pastebin[.]com’ even in the month of April, which resembles beaconing scheduled activity

Throughout the months of March and April, suspicious SSL connections were made from a second potentially compromised device in the internal network to the infected breach device. The suspicious French IP address ‘81.255.222[.]82’ previously seen in the URI object query was revealed as the value of the Server Name Indicator (SNI) in these SSL connections where, typically, a hostname or domain name is indicated.

After an initial compromise, attackers usually aim to gain long-term remote shell access to continue the attack. As the breach device does not have a public IP address and is most certainly behind a firewall, for it to be directly accessible from the Internet a reverse shell would need to be established. Outgoing connections often succeed because firewalls generally filter only incoming traffic. Darktrace observed the device making continuous outgoing connections to an external host listening on an unusual port, 8443, indicating the presence of a reverse shell for pivoting and remote administration.

Figure 3: SSL connections to server name ‘81.255.222[.]8’ at end of March and start of April

Accomplish mission

On March 4, 2022 at 15:07:04 UTC, the device made a total of 16,029 failed connections to a large volume of external endpoints on the same port (8080). This behavior is consistent with address scanning. From the country codes, it appears that public IP addresses for various countries around the world were contacted (at least 99 unique addresses), with the US being the most targeted.

From 19:44:36 UTC onwards, the device performed cryptocurrency mining using the Minergate mining pool protocol to generate profits for the attacker. A login credential called ‘x’ was observed in the Minergate connections to ‘194.145.227[.]21’ via port 5443. JSON-RPC methods of ‘login’ and ‘submit’ were seen from the connection originator (the infected breach device) and ‘job’ was seen from the connection responder (the C2 server). A high volume of connections using the JSON-RPC application protocol to ‘pool-fr.supportxmr[.]com’ were also made on port 80.

When the botnet was first discovered in December 2020, mining pools MineXMR and F2Pool were used. In February 2021, MineXMR was removed and in March 2021, Nanopool mining pool was added,12 before switching to the present SupportXMR and Minergate mining pools. Threat actors utilize such proxy pools to help hide the actual crypto wallet address where the contributions are made by the crypto-mining activity. From April onwards, the device appears to download the ‘xmrig.exe’ executable from a rare IP address ‘61.103.177[.]229’ in Korea every few days – likely in an attempt to establish persistency and ensure the cryptocurrency mining payload continues to exist on the compromised system for continued mining.

On March 9, 2022 from 18:16:20 UTC onwards, trolling for various RCE vulnerabilities (including but not limited to these four) was observed over HTTP connections to public IP addresses:

  1. Through March, the device made around 5,417 HTTP POSTs with the URI ‘/vendor/phpunit/phpunit/src/Util/PHP/eval-stdin.php’ to at least 99 unique public IPs. This appears to be related to CVE-2017-9841, which in PHPUnit allows remote attackers to execute arbitrary PHP code via HTTP POST data beginning with a ‘13 PHPUnit is a common testing framework for PHP, used for performing unit tests during application development. It is used by a variety of popular Content Management Systems (CMS) such as WordPress, Drupal and Prestashop. This CVE has been called “one of the most exploitable CVEs of 2019,” with around seven million attack attempts being observed that year.14 This framework is not designed to be exposed on the critical paths serving web pages and should not be reachable by external HTTP requests. Looking at the status messages of the HTTP POSTs in this incident, some ‘Found’ and ‘OK’ messages were seen, suggesting the vulnerable path could be accessible on some of those endpoints.

Figure 4: PCAP of CVE-2017-9841 vulnerability trolling

Figure 5: The CVE-2017-9841 vulnerable path appears to be reachable on some endpoints

  1. Through March, the device also made around 5,500 HTTP POSTs with the URI ‘/_ignition/execute-solution’ to at least 99 unique public IPs. This appears related to CVE-2021-3129, which allows unauthenticated remote attackers to execute arbitrary code using debug mode with Laravel, a PHP web application framework in versions prior to 8.4.2.15 The POST request below makes the variable ‘username’ optional, and the ‘viewFile’ parameter is empty, as a test to see if the endpoint is vulnerable.16

Figure 6: PCAP of CVE-2021-3129 vulnerability trolling

  1. The device made approximately a further 252 HTTP GETs with URIs containing ‘invokefunction&function’ to another minimum of 99 unique public IPs. This appears related to a RCE vulnerability in ThinkPHP, an open-source web framework.17

Figure 7: Some of the URIs associated with ThinkPHP RCE vulnerability

  1. A HTTP header related to a RCE vulnerability for the Jakarta Multipart parser used by Apache struts2 in CVE-2017-563818 was also seen during the connection attempts. In this case the payload used a custom Content-Type header.

Figure 8: PCAP of CVE-2017-5638 vulnerability trolling

Two widely used methods of SSH authentication are public key authentication and password authentication. After gaining a foothold in the network, previous reports3 19 have mentioned that Sysrv-hello harvests private SSH keys from the compromised device, along with identifying known devices. Being a known device means the system can communicate with the other system without any further authentication checks after the initial key exchange. This technique was likely performed in conjunction with password brute-force attacks against the known devices. Starting from March 9, 2022 at 20:31:25 UTC, Darktrace observed the device making a large number of SSH connections and login failures to public IP ranges. For example, between 00:05:41 UTC on March 26 and 05:00:02 UTC on April 14, around 83,389 SSH connection attempts were made to 31 unique public IPs.

Figure 9: Darktrace’s Threat Visualizer shows large spikes in SSH connections by the breach device

Figure 10: Beaconing SSH connections to a single external endpoint, indicating a potential brute-force attack

Darktrace coverage

Cyber AI Analyst was able to connect the events and present them in a digestible, chronological order for the organization. In the aftermath of any security incidents, this is a convenient way for security users to conduct assisted investigations and reduce the workload on human analysts. However, it is good to note that this activity was also easily observed in real time from the model section on the Threat Visualizer due to the large number of escalating model breaches.

Figure 11: Cyber AI Analyst consolidating the events in the month of March into a summary

Figure 12: Cyber AI Analyst shows the progression of the attack through the month of March

As this incident occurred during a trial, Darktrace RESPOND was enabled in passive mode – with a valid license to display the actions that it would have taken, but with no active control performed. In this instance, no Antigena models breached for the initial compromised device as it was not tagged to be eligible for Antigena actions. Nonetheless, Darktrace was able to provide visibility into these anomalous connections.

Had Antigena been deployed in active mode, and the breach device appropriately tagged with Antigena All or Antigena External Threat, Darktrace would have been able to respond and neutralize different stages of the attack through network inhibitors Block Matching Connections and Enforce Group Pattern of Life, and relevant Antigena models such as Antigena Suspicious File Block, Antigena Suspicious File Pattern of Life Block, Antigena Pastebin Block and Antigena Crypto Currency Mining Block. The first of these inhibitors, Block Matching Connections, will block the specific connection and all future connections that matches the same criteria (e.g. all future outbound HTTP connections from the breach device to destination port 80) for a set period of time. Enforce Group Pattern of Life allows a device to only make connections and data transfers that it or any of its peer group typically make.

Conclusion

Resource hijacking results in unauthorized consumption of system resources and monetary loss for affected organizations. Compromised devices can potentially be rented out to other threat actors and botnet operators could switch from conducting crypto-mining to other more destructive illicit activities (e.g. DDoS or dropping of ransomware) whilst changing their TTPs in the future. Defenders are constantly playing catch-up to this continual evolution, and retrospective rules and signatures solutions or threat intelligence that relies on humans to spot future threats will not be able to keep up.

In this case, it appears the botnet operator has added an object query in the URL of the initial PowerShell loader script download, added Pastebin C2 for evasion and persistence, and utilized new cryptocurrency mining pools. Despite this, Darktrace’s Self-Learning AI was able to identify the threats the moment attackers changed their approach, detecting every step of the attack in the network without relying on known indicators of threat.

Appendix

Darktrace model detections

  • Anomalous File / Script from Rare Location
  • Anomalous File / EXE from Rare External Location
  • Compromise / Agent Beacon (Medium Period)
  • Compromise / Slow Beaconing Activity To External Rare
  • Compromise / Beaconing Activity To External Rare
  • Device / External Address Scan
  • Compromise / Crypto Currency Mining Activity
  • Compromise / High Priority Crypto Currency Mining
  • Compromise / High Volume of Connections with Beacon Score
  • Compromise / SSL Beaconing to Rare Destination
  • Anomalous Connection / Multiple HTTP POSTs to Rare Hostname
  • Device / Large Number of Model Breaches
  • Anomalous Connection / Multiple Failed Connections to Rare Endpoint
  • Anomalous Connection / SSH Brute Force
  • Compromise / SSH Beacon
  • Compliance / SSH to Rare External AWS
  • Compromise / High Frequency SSH Beacon
  • Compliance / SSH to Rare External Destination
  • Device / Multiple C2 Model Breaches
  • Anomalous Connection / POST to PHP on New External Host

MITRE ATT&CK techniques observed:

IoCs

Thanks to Victoria Baldie and Yung Ju Chua for their contributions.

Footnotes

1. https://www.darktrace.com/en/blog/crypto-botnets-moving-laterally

2. https://www.darktrace.com/en/blog/how-ai-uncovered-outlaws-secret-crypto-mining-operation

3. https://www.lacework.com/blog/sysrv-hello-expands-infrastructure

4. https://www.riskiq.com/blog/external-threat-management/sysrv-hello-cryptojacking-botnet

5. https://www.virustotal.com/gui/ip-address/194.145.227.21

6. https://www.virustotal.com/gui/url/c586845daa2aec275453659f287dcb302921321e04cb476b0d98d731d57c4e83?nocache=1

7. https://www.abuseipdb.com/check/81.255.222.82

8. https://www.virustotal.com/gui/file/586e271b5095068484446ee222a4bb0f885987a0b77e59eb24511f6d4a774c30

9. https://www.virustotal.com/gui/file/f5bef6ace91110289a2977cfc9f4dbec1e32fecdbe77326e8efe7b353c58e639

10. https://www.ironnet.com/blog/continued-exploitation-of-cve-2021-26084

11. https://www.zdnet.com/article/njrat-trojan-operators-are-now-using-pastebin-as-alternative-to-central-command-server

12. https://blogs.juniper.net/en-us/threat-research/sysrv-botnet-expands-and-gains-persistence

13. https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-9841

14. https://www.imperva.com/blog/the-resurrection-of-phpunit-rce-vulnerability

15. https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3129

16. https://isc.sans.edu/forums/diary/Laravel+v842+exploit+attempts+for+CVE20213129+debug+mode+Remote+code+execution/27758

17. https://securitynews.sonicwall.com/xmlpost/thinkphp-remote-code-execution-rce-bug-is-actively-being-exploited

18. https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-5638

19. https://sysdig.com/blog/crypto-sysrv-hello-wordpress

Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Shuh Chin Goh

More in this series

No items found.

Blog

/

AI

/

December 23, 2025

How to Secure AI in the Enterprise: A Practical Framework for Models, Data, and Agents

How to secure AI in the enterprise: A practical framework for models, data, and agents Default blog imageDefault blog image

Introduction: Why securing AI is now a security priority

AI adoption is at the forefront of the digital movement in businesses, outpacing the rate at which IT and security professionals can set up governance models and security parameters. Adopting Generative AI chatbots, autonomous agents, and AI-enabled SaaS tools promises efficiency and speed but also introduces new forms of risk that traditional security controls were never designed to manage. For many organizations, the first challenge is not whether AI should be secured, but what “securing AI” actually means in practice. Is it about protecting models? Governing data? Monitoring outputs? Or controlling how AI agents behave once deployed?  

While demand for adoption increases, securing AI use in the enterprise is still an abstract concept to many and operationalizing its use goes far beyond just having visibility. Practitioners need to also consider how AI is sourced, built, deployed, used, and governed across the enterprise.

The goal for security teams: Implement a clear, lifecycle-based AI security framework. This blog will demonstrate the variety of AI use cases that should be considered when developing this framework and how to frame this conversation to non-technical audiences.  

What does “securing AI” actually mean?

Securing AI is often framed as an extension of existing security disciplines. In practice, this assumption can cause confusion.

Traditional security functions are built around relatively stable boundaries. Application security focuses on code and logic. Cloud security governs infrastructure and identity. Data security protects sensitive information at rest and in motion. Identity security controls who can access systems and services. Each function has clear ownership, established tooling, and well-understood failure modes.

AI does not fit neatly into any of these categories. An AI system is simultaneously:

  • An application that executes logic
  • A data processor that ingests and generates sensitive information
  • A decision-making layer that influences or automates actions
  • A dynamic system that changes behavior over time

As a result, the security risks introduced by AI cuts across multiple domains at once. A single AI interaction can involve identity misuse, data exposure, application logic abuse, and supply chain risk all within the same workflow. This is where the traditional lines between security functions begin to blur.

For example, a malicious prompt submitted by an authorized user is not a classic identity breach, yet it can trigger data leakage or unauthorized actions. An AI agent calling an external service may appear as legitimate application behavior, even as it violates data sovereignty or compliance requirements. AI-generated code may pass standard development checks while introducing subtle vulnerabilities or compromised dependencies.

In each case, no single security team “owns” the risk outright.

This is why securing AI cannot be reduced to model safety, governance policies, or perimeter controls alone. It requires a shared security lens that spans development, operations, data handling, and user interaction. Securing AI means understanding not just whether systems are accessed securely, but whether they are being used, trained, and allowed to act in ways that align with business intent and risk tolerance.

At its core, securing AI is about restoring clarity in environments where accountability can quickly blur. It is about knowing where AI exists, how it behaves, what it is allowed to do, and how its decisions affect the wider enterprise. Without this clarity, AI becomes a force multiplier for both productivity and risk.

The five categories of AI risk in the enterprise

A practical way to approach AI security is to organize risk around how AI is used and where it operates. The framework below defines five categories of AI risk, each aligned to a distinct layer of the enterprise AI ecosystem  

How to Secure AI in the Enterprise:

  • Defending against misuse and emergent behaviors
  • Monitoring and controlling AI in operation
  • Protecting AI development and infrastructure
  • Securing the AI supply chain
  • Strengthening readiness and oversight

Together, these categories provide a structured lens for understanding how AI risk manifests and where security teams should focus their efforts.

1. Defending against misuse and emergent AI behaviors

Generative AI systems and agents can be manipulated in ways that bypass traditional controls. Even when access is authorized, AI can be misused, repurposed, or influenced through carefully crafted prompts and interactions.

Key risks include:

  • Malicious prompt injection designed to coerce unwanted actions
  • Unauthorized or unintended use cases that bypass guardrails
  • Exposure of sensitive data through prompt histories
  • Hallucinated or malicious outputs that influence human behavior

Unlike traditional applications, AI systems can produce harmful outcomes without being explicitly compromised. Securing this layer requires monitoring intent, not just access. Security teams need visibility into how AI systems are being prompted, how outputs are consumed, and whether usage aligns with approved business purposes

2. Monitoring and controlling AI in operation

Once deployed, AI agents operate at machine speed and scale. They can initiate actions, exchange data, and interact with other systems with little human oversight. This makes runtime visibility critical.

Operational AI risks include:

  • Agents using permissions in unintended ways
  • Uncontrolled outbound connections to external services or agents
  • Loss of forensic visibility into ephemeral AI components
  • Non-compliant data transmission across jurisdictions

Securing AI in operation requires real-time monitoring of agent behavior, centralized control points such as AI gateways, and the ability to capture agent state for investigation. Without these capabilities, security teams may be blind to how AI systems behave once live, particularly in cloud-native or regulated environments.

3. Protecting AI development and infrastructure

Many AI risks are introduced long before deployment. Development pipelines, infrastructure configurations, and architectural decisions all influence the security posture of AI systems.

Common risks include:

  • Misconfigured permissions and guardrails
  • Insecure or overly complex agent architectures
  • Infrastructure-as-Code introducing silent misconfigurations
  • Vulnerabilities in AI-generated code and dependencies

AI-generated code adds a new dimension of risk, as hallucinated packages or insecure logic may be harder to detect and debug than human-written code. Securing AI development means applying security controls early, including static analysis, architectural review, and continuous configuration monitoring throughout the build process.

4. Securing the AI supply chain

AI supply chains are often opaque. Models, datasets, dependencies, and services may come from third parties with varying levels of transparency and assurance.

Key supply chain risks include:

  • Shadow AI tools used outside approved controls
  • External AI agents granted internal access
  • Suppliers applying AI to enterprise data without disclosure
  • Compromised models, training data, or dependencies

Securing the AI supply chain requires discovering where AI is used, validating the provenance and licensing of models and data, and assessing how suppliers process and protect enterprise information. Without this visibility, organizations risk data leakage, regulatory exposure, and downstream compromise through trusted integrations.

5. Strengthening readiness and oversight

Even with strong technical controls, AI security fails without governance, testing, and trained teams. AI introduces new incident scenarios that many security teams are not yet prepared to handle.

Oversight risks include:

  • Lack of meaningful AI risk reporting
  • Untested AI systems in production
  • Security teams untrained in AI-specific threats

Organizations need AI-aware reporting, red and purple team exercises that include AI systems, and ongoing training to build operational readiness. These capabilities ensure AI risks are understood, tested, and continuously improved, rather than discovered during a live incident.

Reframing AI security for the boardroom

AI security is not just a technical issue. It is a trust, accountability, and resilience issue. Boards want assurance that AI-driven decisions are reliable, explainable, and protected from tampering.

Effective communication with leadership focuses on:

  • Trust: confidence in data integrity, model behavior, and outputs
  • Accountability: clear ownership across teams and suppliers
  • Resilience: the ability to operate, audit, and adapt under attack or regulation

Mapping AI security efforts to recognized frameworks such as ISO/IEC 42001 and the NIST AI Risk Management Framework helps demonstrate maturity and aligns AI security with broader governance objectives.

Conclusion: Securing AI is a lifecycle challenge

The same characteristics that make AI transformative also make it difficult to secure. AI systems blur traditional boundaries between software, users, and decision-making, expanding the attack surface in subtle but significant ways.

Securing AI requires restoring clarity. Knowing where AI exists, how it behaves, who controls it, and how it is governed. A framework-based approach allows organizations to innovate with AI while maintaining trust, accountability, and control.

The journey to secure AI is ongoing, but it begins with understanding the risks across the full AI lifecycle and building security practices that evolve alongside the technology.

Continue reading
About the author
Brittany Woodsmall
Product Marketing Manager, AI & Attack Surface

Blog

/

AI

/

December 22, 2025

The Year Ahead: AI Cybersecurity Trends to Watch in 2026

2026 cyber threat trendsDefault blog imageDefault blog image

Introduction: 2026 cyber trends

Each year, we ask some of our experts to step back from the day-to-day pace of incidents, vulnerabilities, and headlines to reflect on the forces reshaping the threat landscape. The goal is simple:  to identify and share the trends we believe will matter most in the year ahead, based on the real-world challenges our customers are facing, the technology and issues our R&D teams are exploring, and our observations of how both attackers and defenders are adapting.  

In 2025, we saw generative AI and early agentic systems moving from limited pilots into more widespread adoption across enterprises. Generative AI tools became embedded in SaaS products and enterprise workflows we rely on every day, AI agents gained more access to data and systems, and we saw glimpses of how threat actors can manipulate commercial AI models for attacks. At the same time, expanding cloud and SaaS ecosystems and the increasing use of automation continued to stretch traditional security assumptions.

Looking ahead to 2026, we’re already seeing the security of AI models, agents, and the identities that power them becoming a key point of tension – and opportunity -- for both attackers and defenders. Long-standing challenges and risks such as identity, trust, data integrity, and human decision-making will not disappear, but AI and automation will increase the speed and scale of the cyber risk.  

Here's what a few of our experts believe are the trends that will shape this next phase of cybersecurity, and the realities organizations should prepare for.  

Agentic AI is the next big insider risk

In 2026, organizations may experience their first large-scale security incidents driven by agentic AI behaving in unintended ways—not necessarily due to malicious intent, but because of how easily agents can be influenced. AI agents are designed to be helpful, lack judgment, and operate without understanding context or consequence. This makes them highly efficient—and highly pliable. Unlike human insiders, agentic systems do not need to be socially engineered, coerced, or bribed. They only need to be prompted creatively, misinterpret legitimate prompts, or be vulnerable to indirect prompt injection. Without strong controls around access, scope, and behavior, agents may over-share data, misroute communications, or take actions that introduce real business risk. Securing AI adoption will increasingly depend on treating agents as first-class identities—monitored, constrained, and evaluated based on behavior, not intent.

-- Nicole Carignan, SVP of Security & AI Strategy

Prompt Injection moves from theory to front-page breach

We’ll see the first major story of an indirect prompt injection attack against companies adopting AI either through an accessible chatbot or an agentic system ingesting a hidden prompt. In practice, this may result in unauthorized data exposure or unintended malicious behavior by AI systems, such as over-sharing information, misrouting communications, or acting outside their intended scope. Recent attention on this risk—particularly in the context of AI-powered browsers and additional safety layers being introduced to guide agent behavior—highlights a growing industry awareness of the challenge.  

-- Collin Chapleau, Senior Director of Security & AI Strategy

Humans are even more outpaced, but not broken

When it comes to cyber, people aren’t failing; the system is moving faster than they can. Attackers exploit the gap between human judgment and machine-speed operations. The rise of deepfakes and emotion-driven scams that we’ve seen in the last few years reduce our ability to spot the familiar human cues we’ve been taught to look out for. Fraud now spans social platforms, encrypted chat, and instant payments in minutes. Expecting humans to be the last line of defense is unrealistic.

Defense must assume human fallibility and design accordingly. Automated provenance checks, cryptographic signatures, and dual-channel verification should precede human judgment. Training still matters, but it cannot close the gap alone. In the year ahead, we need to see more of a focus on partnership: systems that absorb risk so humans make decisions in context, not under pressure.

-- Margaret Cunningham, VP of Security & AI Strategy

AI removes the attacker bottleneck—smaller organizations feel the impact

One factor that is currently preventing more companies from breaches is a bottleneck on the attacker side: there’s not enough human hacker capital. The number of human hands on a keyboard is a rate-determining factor in the threat landscape. Further advancements of AI and automation will continue to open that bottleneck. We are already seeing that. The ostrich approach of hoping that one’s own company is too obscure to be noticed by attackers will no longer work as attacker capacity increases.  

-- Max Heinemeyer, Global Field CISO

SaaS platforms become the preferred supply chain target

Attackers have learned a simple lesson: compromising SaaS platforms can have big payouts. As a result, we’ll see more targeting of commercial off-the-shelf SaaS providers, which are often highly trusted and deeply integrated into business environments. Some of these attacks may involve software with unfamiliar brand names, but their downstream impact will be significant. In 2026, expect more breaches where attackers leverage valid credentials, APIs, or misconfigurations to bypass traditional defenses entirely.

-- Nathaniel Jones, VP of Security & AI Strategy

Increased commercialization of generative AI and AI assistants in cyber attacks

One trend we’re watching closely for 2026 is the commercialization of AI-assisted cybercrime. For example, cybercrime prompt playbooks sold on the dark web—essentially copy-and-paste frameworks that show attackers how to misuse or jailbreak AI models. It’s an evolution of what we saw in 2025, where AI lowered the barrier to entry. In 2026, those techniques become productized, scalable, and much easier to reuse.  

-- Toby Lewis, Global Head of Threat Analysis

Conclusion

Taken together, these trends underscore that the core challenges of cybersecurity are not changing dramatically -- identity, trust, data, and human decision-making still sit at the core of most incidents. What is changing quickly is the environment in which these challenges play out. AI and automation are accelerating everything: how quickly attackers can scale, how widely risk is distributed, and how easily unintended behavior can create real impact. And as technology like cloud services and SaaS platforms become even more deeply integrated into businesses, the potential attack surface continues to expand.  

Predictions are not guarantees. But the patterns emerging today suggest that 2026 will be a year where securing AI becomes inseparable from securing the business itself. The organizations that prepare now—by understanding how AI is used, how it behaves, and how it can be misused—will be best positioned to adopt these technologies with confidence in the year ahead.

Learn more about how to secure AI adoption in the enterprise without compromise by registering to join our live launch webinar on February 3, 2026.  

Continue reading
About the author
The Darktrace Community
Your data. Our AI.
Elevate your network security with Darktrace AI