How to Secure AI in the Enterprise: A Practical Framework for Models, Data, and Agents
AI is accelerating faster than governance can keep up, expanding attack surfaces and creating unseen risks. From data and models to AI agents and integrations, security starts by knowing what to protect. Discover how to identify AI-driven risks, so you can establish governance frameworks and controls that secure innovation without exposing the enterprise to new attack surfaces.
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Brittany Woodsmall
Product Marketing Manager, AI
Written by
Simon Fellows
Senior Vice President, Product Strategy
Share
23
Dec 2025
Introduction: Why securing AI is now a security priority
AI adoption is at the forefront of the digital movement in businesses, outpacing the rate at which IT and security professionals can set up governance models and security parameters. Adopting Generative AI chatbots, autonomous agents, and AI-enabled SaaS tools promises efficiency and speed but also introduces new forms of risk that traditional security controls were never designed to manage. For many organizations, the first challenge is not whether AI should be secured, but what “securing AI” actually means in practice. Is it about protecting models? Governing data? Monitoring outputs? Or controlling how AI agents behave once deployed?
While demand for adoption increases, securing AI use in the enterprise is still an abstract concept to many and operationalizing its use goes far beyond just having visibility. Practitioners need to also consider how AI is sourced, built, deployed, used, and governed across the enterprise.
The goal for security teams: Implement a clear, lifecycle-based AI security framework. This blog will demonstrate the variety of AI use cases that should be considered when developing this framework and how to frame this conversation to non-technical audiences.
What does “securing AI” actually mean?
Securing AI is often framed as an extension of existing security disciplines. In practice, this assumption can cause confusion.
Traditional security functions are built around relatively stable boundaries. Application security focuses on code and logic. Cloud security governs infrastructure and identity. Data security protects sensitive information at rest and in motion. Identity security controls who can access systems and services. Each function has clear ownership, established tooling, and well-understood failure modes.
AI does not fit neatly into any of these categories. An AI system is simultaneously:
An application that executes logic
A data processor that ingests and generates sensitive information
A decision-making layer that influences or automates actions
A dynamic system that changes behavior over time
As a result, the security risks introduced by AI cuts across multiple domains at once. A single AI interaction can involve identity misuse, data exposure, application logic abuse, and supply chain risk all within the same workflow. This is where the traditional lines between security functions begin to blur.
For example, a malicious prompt submitted by an authorized user is not a classic identity breach, yet it can trigger data leakage or unauthorized actions. An AI agent calling an external service may appear as legitimate application behavior, even as it violates data sovereignty or compliance requirements. AI-generated code may pass standard development checks while introducing subtle vulnerabilities or compromised dependencies.
In each case, no single security team “owns” the risk outright.
This is why securing AI cannot be reduced to model safety, governance policies, or perimeter controls alone. It requires a shared security lens that spans development, operations, data handling, and user interaction. Securing AI means understanding not just whether systems are accessed securely, but whether they are being used, trained, and allowed to act in ways that align with business intent and risk tolerance.
At its core, securing AI is about restoring clarity in environments where accountability can quickly blur. It is about knowing where AI exists, how it behaves, what it is allowed to do, and how its decisions affect the wider enterprise. Without this clarity, AI becomes a force multiplier for both productivity and risk.
The five categories of AI risk in the enterprise
A practical way to approach AI security is to organize risk around how AI is used and where it operates. The framework below defines five categories of AI risk, each aligned to a distinct layer of the enterprise AI ecosystem
Together, these categories provide a structured lens for understanding how AI risk manifests and where security teams should focus their efforts.
1. Defending against misuse and emergent AI behaviors
Generative AI systems and agents can be manipulated in ways that bypass traditional controls. Even when access is authorized, AI can be misused, repurposed, or influenced through carefully crafted prompts and interactions.
Key risks include:
Malicious prompt injection designed to coerce unwanted actions
Unauthorized or unintended use cases that bypass guardrails
Exposure of sensitive data through prompt histories
Hallucinated or malicious outputs that influence human behavior
Unlike traditional applications, AI systems can produce harmful outcomes without being explicitly compromised. Securing this layer requires monitoring intent, not just access. Security teams need visibility into how AI systems are being prompted, how outputs are consumed, and whether usage aligns with approved business purposes
2. Monitoring and controlling AI in operation
Once deployed, AI agents operate at machine speed and scale. They can initiate actions, exchange data, and interact with other systems with little human oversight. This makes runtime visibility critical.
Operational AI risks include:
Agents using permissions in unintended ways
Uncontrolled outbound connections to external services or agents
Loss of forensic visibility into ephemeral AI components
Non-compliant data transmission across jurisdictions
Securing AI in operation requires real-time monitoring of agent behavior, centralized control points such as AI gateways, and the ability to capture agent state for investigation. Without these capabilities, security teams may be blind to how AI systems behave once live, particularly in cloud-native or regulated environments.
3. Protecting AI development and infrastructure
Many AI risks are introduced long before deployment. Development pipelines, infrastructure configurations, and architectural decisions all influence the security posture of AI systems.
Vulnerabilities in AI-generated code and dependencies
AI-generated code adds a new dimension of risk, as hallucinated packages or insecure logic may be harder to detect and debug than human-written code. Securing AI development means applying security controls early, including static analysis, architectural review, and continuous configuration monitoring throughout the build process.
4. Securing the AI supply chain
AI supply chains are often opaque. Models, datasets, dependencies, and services may come from third parties with varying levels of transparency and assurance.
Suppliers applying AI to enterprise data without disclosure
Compromised models, training data, or dependencies
Securing the AI supply chain requires discovering where AI is used, validating the provenance and licensing of models and data, and assessing how suppliers process and protect enterprise information. Without this visibility, organizations risk data leakage, regulatory exposure, and downstream compromise through trusted integrations.
5. Strengthening readiness and oversight
Even with strong technical controls, AI security fails without governance, testing, and trained teams. AI introduces new incident scenarios that many security teams are not yet prepared to handle.
Oversight risks include:
Lack of meaningful AI risk reporting
Untested AI systems in production
Security teams untrained in AI-specific threats
Organizations need AI-aware reporting, red and purple team exercises that include AI systems, and ongoing training to build operational readiness. These capabilities ensure AI risks are understood, tested, and continuously improved, rather than discovered during a live incident.
Reframing AI security for the boardroom
AI security is not just a technical issue. It is a trust, accountability, and resilience issue. Boards want assurance that AI-driven decisions are reliable, explainable, and protected from tampering.
Effective communication with leadership focuses on:
Trust: confidence in data integrity, model behavior, and outputs
Accountability: clear ownership across teams and suppliers
Resilience: the ability to operate, audit, and adapt under attack or regulation
Mapping AI security efforts to recognized frameworks such as ISO/IEC 42001 and the NIST AI Risk Management Framework helps demonstrate maturity and aligns AI security with broader governance objectives.
Conclusion: Securing AI is a lifecycle challenge
The same characteristics that make AI transformative also make it difficult to secure. AI systems blur traditional boundaries between software, users, and decision-making, expanding the attack surface in subtle but significant ways.
Securing AI requires restoring clarity. Knowing where AI exists, how it behaves, who controls it, and how it is governed. A framework-based approach allows organizations to innovate with AI while maintaining trust, accountability, and control.
The journey to secure AI is ongoing, but it begins with understanding the risks across the full AI lifecycle and building security practices that evolve alongside the technology.
Download the full framework
Discover how to identify AI-driven risks, so you can establish governance frameworks and controls at your organization.
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Darktrace Recognized as the Only Visionary in the 2026 Gartner® Magic Quadrant™ for CPS Protection Platforms
The Gartner® Magic Quadrant™ for CPS Protection Platforms provides an independent view of the vendors shaping this rapidly evolving market, evaluating how providers are helping organizations address the cybersecurity risks associated with increasingly connected operational technology and cyber-physical environments. Security and risk leaders use this research to better understand vendor positioning and to inform decisions as they modernize their Cyber Physical System (CPS) security strategies. We encourage organizations evaluating CPS security platforms to review the full report to gain a comprehensive view of the market.
Darktrace’s position as the only Visionary for the second consecutive year reflects the strength of our innovation, product execution, and long-term strategy for CPS security.
Darktrace / OT is built to address the realities of modern industrial defense, securing converged IT, OT, and IoT environments, applying Self-Learning AI to detect known, unknown, and novel threats, accelerating investigations and prioritizing risk based on operational impact. This unique approach supports the flexible deployment models required by complex critical infrastructure organizations.
Why Darktrace / OT stands apart in CPS security
As industrial environments continue to converge with enterprise infrastructure, security leaders are being asked to reduce cyber risk in systems where uptime, safety, and regulatory requirements limit traditional security approaches. Teams need to understand how risk develops across the environment, investigate threats with greater speed and clarity, and prioritize mitigation based on operational impact.
Darktrace / OT is built for that challenge. It combines cross-domain visibility, detection and investigation with Self-Learning AI, bespoke risk management beyond CVEs, and OT-relevant workflows that support both security outcomes and operational resilience.
Unified visibility across converged CPS environments
As critical infrastructure expands beyond traditional OT networks to include engineering workstations, HMIs, remote access pathways, enterprise systems, and cloud-linked infrastructure, teams need to understand how assets relate, where dependencies exist, and how exposure develops across domains.
Darktrace / OT provides unified visibility across OT, IT, IoT, and IoMT to help organizations understand cyber risk across connected industrial environments. By bringing telemetry together through capabilities such as Operational Overview, OT workflows, and deep protocol inspection, Darktrace helps engineers and security teams work from shared context and a more operationally relevant understanding of the environment they are defending.
Enhanced threat detection, investigation, and response powered by Self-Learning AI
While signatures can still provide value for known threats, they fail against insider misuse, zero-day exploits, and custom-built malware designed for targeted operations. Darktrace / OT uses Self-Learning AI to detect subtle deviations from normal behavior across industrial environments where threats often appear through abnormal communications, misuse of legitimate access, or suspicious device behavior rather than known malware. To improve incident investigation, Darktrace’s Cyber AI Analyst automatically correlates activity and produces contextual summaries to reduce manual triage effort and help teams move faster from alert to understanding.
Darktrace / OT further strengthens investigation and response through Expanded telemetry through NEXT for OT, extending visibility into operational endpoints such as engineering workstations and HMIs to support deeper root-cause analysis. Leveraging Self-Learning AI, Darktrace also enables autonomous response that can surgically contain anomalous activity while allowing industrial processes to continue operating normally. Organizations can customize response actions by device, device type, or network segment, with options ranging from fully autonomous enforcement to human confirmation workflows, helping security teams reduce operational disruption while maintaining control over response decisions.
Contextual risk prioritization based on operational relevance
Teams need the right tools to shift from reactive defense to proactively thinking about their security posture. However, most OT teams are stuck using IT-centric tools that don’t speak the language of industrial systems, are consistently overwhelmed with static CVE lists, and these tools offer no understanding of OT-specific protocols.
Darktrace / OT helps organizations move beyond static vulnerability lists by prioritizing cyber risk based on operational context. By incorporating asset criticality, network relationships, exploitability intelligence, behavioral telemetry, and attack path analysis, the platform helps teams understand which exposures could realistically impact operations. By correlating CVE severity, KEV data, MITRE techniques, and business impact, Darktrace enables more focused remediation decisions that support operational resilience, governance, and compliance initiatives such as IEC-62443.
Built for real-world deployment and enterprise alignment
Darktrace / OT is designed for the realities of industrial environments where flexible deployment across on-premises, hybrid, distributed, air-gapped and operationally sensitive networks is essential. The platform integrates with enterprise security ecosystems including SIEM, SOAR, CMDB, firewall, and governance tools to support broader security workflows. This enables OT security to align with enterprise programs while respecting operational constraints and improving collaboration between security and engineering teams.
Customer validation and platform recognition
In the last 12 months, Darktrace / OT has received a 4.8/5 rating on Gartner Peer Insights*. which we believe reflects strong customer confidence in the platform across critical infrastructure and industrial environments.
With this recognition, Darktrace is now positioned across multiple Gartner Magic Quadrants, including Leader positions in Network Detection and Response (NDR) and Email Security Platforms, reflecting the breadth of Darktrace’s ActiveAI Security Platform.
Continuing to advance the future of CPS security
We believe Darktrace’s recognition as the only Visionary for the second consecutive year reflects a clear direction: CPS security platforms need to help customers connect visibility to investigation, investigation to prioritization, and prioritization to real operational outcomes.
That remains our focus for Darktrace / OT.
As industrial environments continue to grow more connected, more complex, and more business-critical, we will continue to invest in the capabilities that help customers reduce uncertainty, strengthen resilience, and secure the systems that keep their operations running.
Download the full Gartner Magic Quadrant for CPS Protection Platforms
Gartner, Magic Quadrant for CPS Protection Platforms, Katell Thielemann, Ruggero Contu, Wam Voster, Sumit Rajput, 3 March 2026
Gartner does not endorse any vendor, product or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
GARTNER is a registered trademark and service mark of Gartner and Magic Quadrant and Peer Insights are a registered trademark, of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved.
Gartner Peer Insights content consists of the opinions of individual end users based on their own experiences with the vendors listed on the platform, should not be construed as statements of fact, nor do they represent the views of Gartner or its affiliates. Gartner does not endorse any vendor, product or service depicted in this content nor makes any warranties, expressed or implied, with respect to this content, about its accuracy or completeness, including any warranties of merchantability or fitness for a particular purpose.
Product Marketing Manager, OT Security & Compliance
Blog
/
Network
/
March 18, 2026
When Reality Diverges from the Playbook: Darktrace Identifies Encryption in a World Leaks Ransomware Attack
As-a-Service Cybercrime Models
As-a-Service cybercrime models reduce the barrier to entry for cyber criminals as they no longer need expertise in every domain. Threat actors can increasingly outsource or supplement missing skills through the broader cybercrime-as-a-service ecosystem, and thus these models continue to grow in popularity within the cybercriminal underground. This has led to multiple templates in this sphere, such as Phishing-as-a-Service, Botnet-as-a-Service, DDoS-as-a-Service, and notably Ransomware-as-a-Service (RaaS) [1].
What is Extortion-as-a-Service?
Extortion-as-a-Service (EaaS) businesses function as a formalized way for cyber threat actors to offer extortion services to others for a fee or profit share and represents an evolution of extortion operations from the double-extortion ransomware model. Advancing from the RaaS model, extortion has become a distinct profit stream, separate from the encryption payload. This separation of functions, data theft, negotiation, and publicity, sets the stage for EaaS [1].
The EaaS model reflects a broader trend in cybercriminal activity, in which threat actors increasingly prioritize data theft and public exposure over traditional ransomware encryption. This shift reduces operational complexity while increasing pressure on victims through reputational damage. This approach has become increasingly popular among threat actors as, unlike encryption-based attacks, these operations are more difficult to detect and remediate [2]. It reflects a trend of ‘hack-and-leak’ operations that prioritize stealth, speed, and reputational damage over traditional encryption-based ransomware attacks [3].
World Leaks Overview
World Leaks emerged in early 2024 as a direct rebrand of the Hunters International ransomware group, which was notorious for encrypting victims’ data and demanding payment for decryption keys. In mid-2025, Hunters International shifted to an extortion-only model due to law enforcement scrutiny and reduced profitability, rebranding itself as World Leaks.
World Leaks functions as an affiliate-based EaaS operation which provides proprietary Storage Software exfiltration tooling to affiliates while maintaining a four-platform infrastructure consisting of a main data leak site hosted on the Dark Web where victim data is published, a victim negotiation portal with live chat, an affiliate management panel, and an insider journalist platform granting media outlets 24-hour advance access to stolen data before public release [4]. Since its emergence, World Leaks has published data stolen from dozens of organizations globally on its data leak site, serving both as a pressure tactic and a means for building reputation among cyber criminals.
World Leaks (known associations include Hive Ransomware, Secp0 Ransomware, and UNC6148) have been known to target the industrial (manufacturing) sector, along with healthcare organizations, technology firms and more generally, industries with valuable intellectual property [4]. Victims targeted have spanned multiple countries, with most located in the US, as well as Canada and several countries across Europe [5].
World Leaks’ Tactics, Techniques, and Procedures (TTPs) [3][4]
World Leaks’ typical attack pattern involves the exploitation of credentials with inadequate access controls, e.g. lacking multi-factor authentication (MFA), moving through reconnaissance, lateral movement and data exfiltration, notably without an encryption element.
Initial Access:
Initial access is typically gained through the exploitation of compromised virtual private network (VPN) credentials lacking MFA through valid accounts, as well as phishing campaigns. The targeting of internet-facing VPN infrastructure, RDP, and public-facing applications also represent common attack vectors in World Leaks incidents.
Lateral Movement:
SMB, RDP, and SSH are used for lateral movement via remote services. Notably, the group is also known to use PsExec and Rclone as part of their lateral movement activities.
Data exfiltration is carried out through custom storage software tooling via TOR connections. Cloud storage services used for exfiltration particularly include MEGA. World Leaks also carry out direct data transfer through established command-and-control (C2) infrastructure.
Unlike Hunters International, which combined encryption with extortion, World Leaks claims to have abandoned the use of encryption. Some reports note that operations since January 2025 represent a pivot toward eliminating encryption entirely, instead relying on custom exfiltration tooling with SOCKSv5 proxy and TOR-based communications [4]. However, in early 2026, Darktrace detected an incident that directly contradicted this claim: World Leaks carried out an attack that involved both the exfiltration and encryption of customer data.
Darktrace’s Coverage of World Leaks Ransomware
Organizations today face a growing challenge: keeping pace with increasingly fast-moving threats. This incident highlights a common problem, when time-limited mitigations expire or human security teams cannot respond quickly enough, attackers are often able to regain the upper hand. A recent Darktrace detection of World Leaks ransomware provides a clear example of this challenge in practice.
In January 2026, Darktrace identified the presence of ransomware and data encryption linked to World Leaks within the network of an organization within the healthcare sector. Although Darktrace’s Autonomous Response capability was active in the customer’s environment and initially blocking suspicious connectivity, buying time for the customer to remediate, the attack continued once these mitigative actions expired. Darktrace continued to apply Autonomous Response actions as the attack progressed, working to inhibit the attackers at each stage of the intrusion.
Investigations carried out by Darktrace revealed that threat actors likely gained initial access via a Fortigate appliance in mid-October, indicating a three-month dwell time, before employing living-off-the-land (LOTL) techniques for lateral movement. C2 communications were established using Cloudflare Tunnel (formerly Argo Tunnel). As part of the Actions on Objectives attack phase, a significant volume of data was exfiltrated to the MEGA cloud storage platform, followed by the encryption of customer data.
Initial access/ Lateral movement
Darktrace analysts identified the likely patient-zero device within the network as a Fortigate appliance. In October 2025, this device was seen conducting brute-force activity using the compromised ‘administrator’ credential to gain a foothold deeper within the customer’s environment. Masquerading as a privileged user, the threat actor then went on to launch activity on remote devices via PsExec, a common administrative tool that allows users to execute processes on remote systems without manually installing client software, providing significant power to attackers when abused. Around the time, Darktrace detected an unknown device on the network attempting to authenticate via NTLM. As this device had not previously been seen on the network, it likely belonged to the attacker.
Reconnaissance
As part of the reconnaissance phase of the attack, port and network scanning was carried out in an attempt to identify open UDP and TCP ports within the network.
Lateral movement & C2
Around one month after entering the customer’s network, the World Leaks threat actors began tunnelling activity using Cloudflare Tunnel. Darktrace detected connections to several hostnames including: region2.v2.argotunnel[.]com; h2.cftunnel[.]com; region1.v2.argotunnel[.]com. This tunnelling activity continued until January of 2026, when encryption occurred. Cloudflare tunnels are known to be abused by attackers as they enable the use of temporary infrastructure to scale operations, allowing rapid deployment and teardown. Furthermore, leveraging of Cloudflare’s infrastructure to create these rate-limited tunnels (used to relay traffic from an attacker-controlled server to a local machine) makes such malicious activity harder to detect by both defenders and traditional security measures, particularly those that rely on static blocklists [6].
Further lateral movement was carried out using common remote management tools such as Windows Remote Management (WinRM) RDP, allowing the World Leaks threat actors to access local devices within the victim organization’s network.
As this attack progressed, Darktrace detected multiple files being written over SMB. These files included Windows\Temp\chromeremotedesktophost.msi, which was written from the patient-zero device to another internal device as part of lateral movement efforts. Following this transfer, and prior to subsequent data exfiltration activity, a network server was observed connecting to the hostname remotedesktop-pa[.]googleapis[.]com, an API endpoint required for Chrome Remote Desktop, indicating that Chrome RDP was used by the threat actor in this stage of the attack.
Other files written over SMB included the script programdata\syc\OpenSSHUtils.psm1 (which can be used legitimately to configure OpenSSH) and the executable programdata\syc\ssh‑sk‑helper.exe (a legitimate OpenSSH component used to support security keys). These files were written from the suspected patient‑zero device to an internal domain controller using the ‘administrator’ credential.
Thereafter, SSH connections to external IP address 51.15.109[.]222 were observed, providing another channel between the malicious actors and victim machines. Darktrace recognized that the use of SSH by the devices seen connecting to this IP address was highly anomalous, indicating that this suspicious activity formed part of the attack.
Writes of the script programdata\syc\OpenSSHUtils.psm1 were also observed into January, highlighting the continuation of the attack that had begun three months earlier.
On December 19 and 20, Darktrace detected a DNS server within the customer’s network making anomalous outgoing connections to an external IP address not previously seen in the environment: 193.161.193[.]99. This IP address has been reported by open-source-intelligence (OSINT) as being associated with C2 infrastructure, having been linked to several remote access trojans (RATs) and botnets in the past.
This activity a shift towards the infrastructure-as-a-service (IaaS) model, underscoring the growing trend around As-a-Service Cybercrime models and the increasing the industrialization of botnets. The presence of extensive digital botnets, often leased to other criminal organizations, means the group gaining initial access is not necessarily the same group conducting ransomware deployment or data theft; botnets now act as shared underlying infrastructure enabling multiple forms of cybercriminal activity [7].
Furthermore, connections to this IP address (193.161.193[.]99) were made over port 1194, which is associated with OpenVPN, suggesting that World Leaks may have leveraged it to obfuscate C2 communication with attacker-controlled infrastructure.
Figure 1: Darktrace’s detection of the IP address 193.161.193[.]99, noting that it was first seen within the customer’s network on December 19, 2025.
Data exfiltration
In November, Darktrace detected the threat actors carrying out one of their Attack on Objective tactics: data exfiltration. Multiple local devices within the compromised network began transferring data to Backblaze and MEGA domains, both of which provide cloud storage services; 80+GB of data was transferred to MEGA in late December 2025. Endpoints associated with this activity included: backblazeb2[.]com and gfs302n520[.]userstorage[.]mega[.]co[.]nz, as well as related user agents such as AS40401 BACKBLAZE) and MegaClient/10.3.0/64.
Notably, Darktrace researchers identified two known World Leaks TTPs in this attack: the use of MEGA, a known tool abused by the group, and Rclone, a command-line tool used to manage files on cloud storage, which was observed in the user agent of the MEGA data-transfer connections: rclone/v1.69.0 [4].
Figure 2: Cyber AI Analyst Incident highlighting data upload activity to backblaze[.]com endpoints.\
Ransomware deployment & encryption
The encryption stage of this attack was confirmed by the presence of a ransom note found on the network in a file with a seemingly randomized nine-character string preceding README.txt, attributing the incident to World Leaks, along with an extension with the same nine characters appended to encrypted files. Darktrace also observed SMB writes of files named world.exe and task.bat, with the compromised ‘localadmin’ credential used during the SMB logins. It is likely that these files served as the vector for the ransomware payload.
Figure 3: Packet Capture (PCAP) of the ransom note claiming that the attack was carried out by World Leaks.
Conclusion
Though traditional ransomware relies on encryption, recent trends show that cyber threat actors no longer need to rely on noisy encryption tools and can eliminate much of the risk and technical complexity associated with encrypting systems. This is the model reportedly preferred by World Leaks after their rebrand from Hunters International.
In addition to reducing noise around these attacks, extortion‑only operations may be favored by threat actors over encryption‑focused ones for several reasons, including the fact that traditional security tools may struggle to detect data theft compared to encryption, that attackers leave less evidence behind when encryption is avoided, and that the long‑term impacts of stolen data on organizations can be greater than the loss of systems caused by encryption processes, which can be restored [8]. This is supported by analysis of data leak sites suggesting that almost 1,500 incidents in 2025 relied on data theft alone. Attackers can simply steal victim data and attempt to extort a ransom by threatening to publish it, without needing to deploy ransomware at all [9]. Furthermore, although World Leaks aims to function as an affiliate‑based EaaS operation, security teams should remain aware that their affiliates may have different criminal objectives.
Contrary to reports that World Leaks’ typical attack style has an extortion‑only objective, Darktrace detected an incident in which a World Leaks attack did end with the encryption of customer data. This highlights the need for adaptive defenses and reinforces the importance of network defenders staying proactive in the face of attacks, particularly as they may progress in ways that are unexpected compared to previous trends associated with a given threat actor.
Credit to Tiana Kelly (Senior Cyber Analyst and Analyst Manager) and Emily Megan Lim (Senior Cyber Analyst)
Edited by Ryan Traill (Content Manager)
Appendices
IoCs
world.exe – Executable File – Possible Ransomware Payload
task.bat – Script File – Possible Ransomware Payload