Blog
/
AI
/
December 23, 2025

How to Secure AI in the Enterprise: A Practical Framework for Models, Data, and Agents

AI is accelerating faster than governance can keep up, expanding attack surfaces and creating unseen risks. From data and models to AI agents and integrations, security starts by knowing what to protect. Discover how to identify AI-driven risks, so you can establish governance frameworks and controls that secure innovation without exposing the enterprise to new attack surfaces. 
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Brittany Woodsmall
Product Marketing Manager, AI
Written by
Simon Fellows
Senior Vice President, Product Strategy
How to secure AI in the enterprise: A practical framework for models, data, and agents Default blog imageDefault blog imageDefault blog imageDefault blog imageDefault blog imageDefault blog image
23
Dec 2025

Introduction: Why securing AI is now a security priority

AI adoption is at the forefront of the digital movement in businesses, outpacing the rate at which IT and security professionals can set up governance models and security parameters. Adopting Generative AI chatbots, autonomous agents, and AI-enabled SaaS tools promises efficiency and speed but also introduces new forms of risk that traditional security controls were never designed to manage. For many organizations, the first challenge is not whether AI should be secured, but what “securing AI” actually means in practice. Is it about protecting models? Governing data? Monitoring outputs? Or controlling how AI agents behave once deployed?  

While demand for adoption increases, securing AI use in the enterprise is still an abstract concept to many and operationalizing its use goes far beyond just having visibility. Practitioners need to also consider how AI is sourced, built, deployed, used, and governed across the enterprise.

The goal for security teams: Implement a clear, lifecycle-based AI security framework. This blog will demonstrate the variety of AI use cases that should be considered when developing this framework and how to frame this conversation to non-technical audiences.  

What does “securing AI” actually mean?

Securing AI is often framed as an extension of existing security disciplines. In practice, this assumption can cause confusion.

Traditional security functions are built around relatively stable boundaries. Application security focuses on code and logic. Cloud security governs infrastructure and identity. Data security protects sensitive information at rest and in motion. Identity security controls who can access systems and services. Each function has clear ownership, established tooling, and well-understood failure modes.

AI does not fit neatly into any of these categories. An AI system is simultaneously:

  • An application that executes logic
  • A data processor that ingests and generates sensitive information
  • A decision-making layer that influences or automates actions
  • A dynamic system that changes behavior over time

As a result, the security risks introduced by AI cuts across multiple domains at once. A single AI interaction can involve identity misuse, data exposure, application logic abuse, and supply chain risk all within the same workflow. This is where the traditional lines between security functions begin to blur.

For example, a malicious prompt submitted by an authorized user is not a classic identity breach, yet it can trigger data leakage or unauthorized actions. An AI agent calling an external service may appear as legitimate application behavior, even as it violates data sovereignty or compliance requirements. AI-generated code may pass standard development checks while introducing subtle vulnerabilities or compromised dependencies.

In each case, no single security team “owns” the risk outright.

This is why securing AI cannot be reduced to model safety, governance policies, or perimeter controls alone. It requires a shared security lens that spans development, operations, data handling, and user interaction. Securing AI means understanding not just whether systems are accessed securely, but whether they are being used, trained, and allowed to act in ways that align with business intent and risk tolerance.

At its core, securing AI is about restoring clarity in environments where accountability can quickly blur. It is about knowing where AI exists, how it behaves, what it is allowed to do, and how its decisions affect the wider enterprise. Without this clarity, AI becomes a force multiplier for both productivity and risk.

The five categories of AI risk in the enterprise

A practical way to approach AI security is to organize risk around how AI is used and where it operates. The framework below defines five categories of AI risk, each aligned to a distinct layer of the enterprise AI ecosystem  

How to Secure AI in the Enterprise:

  • Defending against misuse and emergent behaviors
  • Monitoring and controlling AI in operation
  • Protecting AI development and infrastructure
  • Securing the AI supply chain
  • Strengthening readiness and oversight

Together, these categories provide a structured lens for understanding how AI risk manifests and where security teams should focus their efforts.

1. Defending against misuse and emergent AI behaviors

Generative AI systems and agents can be manipulated in ways that bypass traditional controls. Even when access is authorized, AI can be misused, repurposed, or influenced through carefully crafted prompts and interactions.

Key risks include:

  • Malicious prompt injection designed to coerce unwanted actions
  • Unauthorized or unintended use cases that bypass guardrails
  • Exposure of sensitive data through prompt histories
  • Hallucinated or malicious outputs that influence human behavior

Unlike traditional applications, AI systems can produce harmful outcomes without being explicitly compromised. Securing this layer requires monitoring intent, not just access. Security teams need visibility into how AI systems are being prompted, how outputs are consumed, and whether usage aligns with approved business purposes

2. Monitoring and controlling AI in operation

Once deployed, AI agents operate at machine speed and scale. They can initiate actions, exchange data, and interact with other systems with little human oversight. This makes runtime visibility critical.

Operational AI risks include:

  • Agents using permissions in unintended ways
  • Uncontrolled outbound connections to external services or agents
  • Loss of forensic visibility into ephemeral AI components
  • Non-compliant data transmission across jurisdictions

Securing AI in operation requires real-time monitoring of agent behavior, centralized control points such as AI gateways, and the ability to capture agent state for investigation. Without these capabilities, security teams may be blind to how AI systems behave once live, particularly in cloud-native or regulated environments.

3. Protecting AI development and infrastructure

Many AI risks are introduced long before deployment. Development pipelines, infrastructure configurations, and architectural decisions all influence the security posture of AI systems.

Common risks include:

  • Misconfigured permissions and guardrails
  • Insecure or overly complex agent architectures
  • Infrastructure-as-Code introducing silent misconfigurations
  • Vulnerabilities in AI-generated code and dependencies

AI-generated code adds a new dimension of risk, as hallucinated packages or insecure logic may be harder to detect and debug than human-written code. Securing AI development means applying security controls early, including static analysis, architectural review, and continuous configuration monitoring throughout the build process.

4. Securing the AI supply chain

AI supply chains are often opaque. Models, datasets, dependencies, and services may come from third parties with varying levels of transparency and assurance.

Key supply chain risks include:

  • Shadow AI tools used outside approved controls
  • External AI agents granted internal access
  • Suppliers applying AI to enterprise data without disclosure
  • Compromised models, training data, or dependencies

Securing the AI supply chain requires discovering where AI is used, validating the provenance and licensing of models and data, and assessing how suppliers process and protect enterprise information. Without this visibility, organizations risk data leakage, regulatory exposure, and downstream compromise through trusted integrations.

5. Strengthening readiness and oversight

Even with strong technical controls, AI security fails without governance, testing, and trained teams. AI introduces new incident scenarios that many security teams are not yet prepared to handle.

Oversight risks include:

  • Lack of meaningful AI risk reporting
  • Untested AI systems in production
  • Security teams untrained in AI-specific threats

Organizations need AI-aware reporting, red and purple team exercises that include AI systems, and ongoing training to build operational readiness. These capabilities ensure AI risks are understood, tested, and continuously improved, rather than discovered during a live incident.

Reframing AI security for the boardroom

AI security is not just a technical issue. It is a trust, accountability, and resilience issue. Boards want assurance that AI-driven decisions are reliable, explainable, and protected from tampering.

Effective communication with leadership focuses on:

  • Trust: confidence in data integrity, model behavior, and outputs
  • Accountability: clear ownership across teams and suppliers
  • Resilience: the ability to operate, audit, and adapt under attack or regulation

Mapping AI security efforts to recognized frameworks such as ISO/IEC 42001 and the NIST AI Risk Management Framework helps demonstrate maturity and aligns AI security with broader governance objectives.

Conclusion: Securing AI is a lifecycle challenge

The same characteristics that make AI transformative also make it difficult to secure. AI systems blur traditional boundaries between software, users, and decision-making, expanding the attack surface in subtle but significant ways.

Securing AI requires restoring clarity. Knowing where AI exists, how it behaves, who controls it, and how it is governed. A framework-based approach allows organizations to innovate with AI while maintaining trust, accountability, and control.

The journey to secure AI is ongoing, but it begins with understanding the risks across the full AI lifecycle and building security practices that evolve alongside the technology.

Download the full framework

Discover how to identify AI-driven risks, so you can establish governance frameworks and controls at your organization.

Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Brittany Woodsmall
Product Marketing Manager, AI
Written by
Simon Fellows
Senior Vice President, Product Strategy

More in this series

No items found.

Blog

/

Endpoint

/

February 1, 2026

ClearFake: From Fake CAPTCHAs to Blockchain-Driven Payload Retrieval

fake captcha to blockchain driven palyload retrievalDefault blog imageDefault blog image

What is ClearFake?

As threat actors evolve their techniques to exploit victims and breach target networks, the ClearFake campaign has emerged as a significant illustration of this continued adaptation. ClearFake is a campaign observed using a malicious JavaScript framework deployed on compromised websites, impacting sectors such as e‑commerce, travel, and automotive. First identified in mid‑2023, ClearFake is frequently leveraged to socially engineer victims into installing fake web browser updates.

In ClearFake compromises, victims are steered toward compromised WordPress sites, often positioned by attackers through search engine optimization (SEO) poisoning. Once on the site, users are presented with a fake CAPTCHA. This counterfeit challenge is designed to appear legitimate while enabling the execution of malicious code. When a victim interacts with the CAPTCHA, a PowerShell command containing a download string is retrieved and executed.

Attackers commonly abuse the legitimate Microsoft HTML Application Host (MSHTA) in these operations. Recent campaigns have also incorporated Smart Chain endpoints, such as “bsc-dataseed.binance[.]org,” to obtain configuration code. The primary payload delivered through ClearFake is typically an information stealer, such as Lumma Stealer, enabling credential theft, data exfiltration, and persistent access [1].

Darktrace’s Coverage of ClearFake

Darktrace / ENDPOINT first detected activity likely associated with ClearFake on a single device on over the course of one day on November 18, 2025. The system observed the execution of “mshta.exe,” the legitimate Microsoft HTML Application Host utility. It also noted a repeated process command referencing “weiss.neighb0rrol1[.]ru”, indicating suspicious external activity. Subsequent analysis of this endpoint using open‑source intelligence (OSINT) indicated that it was a malicious, domain generation algorithm (DGA) endpoint [2].

The process line referencing weiss.neighb0rrol1[.]ru, as observed by Darktrace / ENDPOINT.
Figure 1: The process line referencing weiss.neighb0rrol1[.]ru, as observed by Darktrace / ENDPOINT.

This activity indicates that mshta.exe was used to contact a remote server, “weiss.neighb0rrol1[.]ru/rpxacc64mshta,” and execute the associated HTA file to initiate the next stage of the attack. OSINT sources have since heavily flagged this server as potentially malicious [3].

The first argument in this process uses the MSHTA utility to execute the HTA file hosted on the remote server. If successful, MSHTA would then run JavaScript or VBScript to launch PowerShell commands used to retrieve malicious payloads, a technique observed in previous ClearFake campaigns. Darktrace also detected unusual activity involving additional Microsoft executables, including “winlogon.exe,” “userinit.exe,” and “explorer.exe.” Although these binaries are legitimate components of the Windows operating system, threat actors can abuse their normal behavior within the Windows login sequence to gain control over user sessions, similar to the misuse of mshta.exe.

EtherHiding cover

Darktrace also identified additional ClearFake‑related activity, specifically a connection to bsc-testnet.drpc[.]org, a legitimate BNB Smart Chain endpoint. This activity was triggered by injected JavaScript on the compromised site www.allstarsuae[.]com, where the script initiated an eth_call POST request to the Smart Chain endpoint.

Example of a fake CAPTCHA on the compromised site www.allstarsuae[.]com.
Figure 2: Example of a fake CAPTCHA on the compromised site www.allstarsuae[.]com.

EtherHiding is a technique in which threat actors leverage blockchain technology, specifically smart contracts, as part of their malicious infrastructure. Because blockchain is anonymous, decentralized, and highly persistent, it provides threat actors with advantages in evading defensive measures and traditional tracking [4].

In this case, when a user visits a compromised WordPress site, injected base64‑encoded JavaScript retrieved an ABI string, which was then used to load and execute a contract hosted on the BNB Smart Chain.

JavaScript hosted on the compromised site www.allstaruae[.]com.
Figure 3: JavaScript hosted on the compromised site www.allstaruae[.]com.

Conducting malware analysis on this instance, the Base64 decoded into a JavaScript loader. A POST request to bsc-testnet.drpc[.]org was then used to retrieve a hex‑encoded ABI string that loads and executes the contract. The JavaScript also contained hex and Base64‑encoded functions that decoded into additional JavaScript, which attempted to retrieve a payload hosted on GitHub at “github[.]com/PrivateC0de/obf/main/payload.txt.” However, this payload was unavailable at the time of analysis.

Darktrace’s detection of the POST request to bsc-testnet.drpc[.]org.
Figure 4: Darktrace’s detection of the POST request to bsc-testnet.drpc[.]org.
Figure 5: Darktrace’s detection of the executable file and the malicious hostname.

Autonomous Response

As Darktrace’s Autonomous Response capability was enabled on this customer’s network, Darktrace was able to take swift mitigative action to contain the ClearFake‑related activity early, before it could lead to potential payload delivery. The affected device was blocked from making external connections to a number of suspicious endpoints, including 188.114.96[.]6, *.neighb0rrol1[.]ru, and neighb0rrol1[.]ru, ensuring that no further malicious connections could be made and no payloads could be retrieved.

Autonomous Response also acted to prevent the executable mshta.exe from initiating HTA file execution over HTTPS from this endpoint by blocking the attempted connections. Had these files executed successfully, the attack would likely have resulted in the retrieval of an information stealer, such as Lumma Stealer.

Autonomous Response’s intervention against the suspicious connectivity observed.
Figure 6: Autonomous Response’s intervention against the suspicious connectivity observed.

Conclusion

ClearFake continues to be observed across multiple sectors, but Darktrace remains well‑positioned to counter such threats. Because ClearFake’s end goal is often to deliver malware such as information stealers and malware loaders, early disruption is critical to preventing compromise. Users should remain aware of this activity and vigilant regarding fake CAPTCHA pop‑ups. They should also monitor unusual usage of MSHTA and outbound connections to domains that mimic formats such as “bsc-dataseed.binance[.]org” [1].

In this case, Darktrace was able to contain the attack before it could successfully escalate and execute. The attempted execution of HTA files was detected early, allowing Autonomous Response to intervene, stopping the activity from progressing. As soon as the device began communicating with weiss.neighb0rrol1[.]ru, an Autonomous Response inhibitor triggered and interrupted the connections.

As ClearFake continues to rise, users should stay alert to social engineering techniques, including ClickFix, that rely on deceptive security prompts.

Credit to Vivek Rajan (Senior Cyber Analyst) and Tara Gould (Malware Research Lead)

Edited by Ryan Traill (Analyst Content Lead)

Appendices

Darktrace Model Detections

Process / New Executable Launched

Endpoint / Anomalous Use of Scripting Process

Endpoint / New Suspicious Executable Launched

Endpoint / Process Connection::Unusual Connection from New Process

Autonomous Response Models

Antigena / Network::Significant Anomaly::Antigena Significant Anomaly from Client Block

List of Indicators of Compromise (IoCs)

  • weiss.neighb0rrol1[.]ru – URL - Malicious Domain
  • 188.114.96[.]6 – IP – Suspicious Domain
  • *.neighb0rrol1[.]ru – URL – Malicious Domain

MITRE Tactics

Initial Access, Drive-by Compromise, T1189

User Execution, Execution, T1204

Software Deployment Tools, Execution and Lateral Movement, T1072

Command and Scripting Interpreter, T1059

System Binary Proxy Execution: MSHTA, T1218.005

References

1.        https://www.kroll.com/en/publications/cyber/rapid-evolution-of-clearfake-delivery

2.        https://www.virustotal.com/gui/domain/weiss.neighb0rrol1.ru

3.        https://www.virustotal.com/gui/file/1f1aabe87e5e93a8fff769bf3614dd559c51c80fc045e11868f3843d9a004d1e/community

4.        https://www.packetlabs.net/posts/etherhiding-a-new-tactic-for-hiding-malware-on-the-blockchain/

Continue reading
About the author
Vivek Rajan
Cyber Analyst

Blog

/

Network

/

January 30, 2026

The State of Cybersecurity in the Finance Sector: Six Trends to Watch

Default blog imageDefault blog image

The evolving cybersecurity threat landscape in finance

The financial sector, encompassing commercial banks, credit unions, financial services providers, and cryptocurrency platforms, faces an increasingly complex and aggressive cyber threat landscape. The financial sector’s reliance on digital infrastructure and its role in managing high-value transactions make it a prime target for both financially motivated and state-sponsored threat actors.

Darktrace’s latest threat research, The State of Cybersecurity in the Finance Sector, draws on a combination of Darktrace telemetry data from real-world customer environments, open-source intelligence, and direct interviews with financial-sector CISOs to provide perspective on how attacks are unfolding and how defenders in the sector need to adapt.  

Six cybersecurity trends in the finance sector for 2026

1. Credential-driven attacks are surging

Phishing continues to be a leading initial access vector for attacks targeting confidentiality. Financial institutions are frequently targeted with phishing emails designed to harvest login credentials. Techniques including Adversary-in-The-Middle (AiTM) to bypass Multi-factor Authentication (MFA) and QR code phishing (“quishing”) are surging and are capable of fooling even trained users. In the first half of 2025, Darktrace observed 2.4 million phishing emails within financial sector customer deployments, with almost 30% targeted towards VIP users.  

2. Data Loss Prevention is an increasing challenge

Compliance issues – particularly data loss prevention -- remain a persistent risk. In October 2025 alone, Darktrace observed over 214,000 emails across financial sector customers that contained unfamiliar attachments and were sent to suspected personal email addresses highlighting clear concerns around data loss prevention. Across the same set of customers within the same time frame, more than 351,000 emails containing unfamiliar attachments were sent to freemail addresses (e.g. gmail, yahoo, icloud), highlighting clear concerns around DLP.  

Confidentiality remains a primary concern for financial institutions as attackers increasingly target sensitive customer data, financial records, and internal communications.  

3. Ransomware is evolving toward data theft and extortion

Ransomware is no longer just about locking systems, it’s about stealing data first and encrypting second. Groups such as Cl0p and RansomHub now prioritize exploiting trusted file-transfer platforms to exfiltrate sensitive data before encryption, maximizing regulatory and reputational fallout for victims.  

Darktrace’s threat research identified routine scanning and malicious activity targeting internet-facing file-transfer systems used heavily by financial institutions. In one notable case involving Fortra GoAnywhere MFT, Darktrace detected malicious exploitation behavior six days before the CVE was publicly disclosed, demonstrating how attackers often operate ahead of patch cycles

This evolution underscores a critical reality: by the time a vulnerability is disclosed publicly, it may already be actively exploited.

4. Attackers are exploiting edge devices, often pre-disclosure.  

VPNs, firewalls, and remote access gateways have become high-value targets, and attackers are increasingly exploiting them before vulnerabilities are publicly disclosed. Darktrace observed pre-CVE exploitation activity affecting edge technologies including Citrix, Palo Alto, and Ivanti, enabling session hijacking, credential harvesting, and privileged lateral movement into core banking systems.  

Once compromised, these edge devices allow adversaries to blend into trusted network traffic, bypassing traditional perimeter defenses. CISOs interviewed for the report repeatedly described VPN infrastructure as a “concentrated focal point” for attackers, especially when patching and segmentation lag behind operational demands.

5. DPRK-linked activity is growing across crypto and fintech.  

State-sponsored activity, particularly from DPRK-linked groups affiliated with Lazarus, continues to intensify across cryptocurrency and fintech organizations. Darktrace identified coordinated campaigns leveraging malicious npm packages, previously undocumented BeaverTail and InvisibleFerret malware, and exploitation of React2Shell (CVE-2025-55182) for credential theft and persistent backdoor access.  

Targeting was observed across the United Kingdom, Spain, Portugal, Sweden, Chile, Nigeria, Kenya, and Qatar, highlighting the global scope of these operations.  

6. Cloud complexity and AI governance gaps are now systemic risks.  

Finally, CISOs consistently pointed to cloud complexity, insider risk from new hires, and ungoverned AI usage exposing sensitive data as systemic challenges. Leaders emphasized difficulty maintaining visibility across multi-cloud environments while managing sensitive data exposure through emerging AI tools.  

Rapid AI adoption without clear guardrails has introduced new confidentiality and compliance risks, turning governance into a board-level concern rather than a purely technical one.

Building cyber resilience in a shifting threat landscape

The financial sector remains a prime target for both financially motivated and state-sponsored adversaries. What this research makes clear is that yesterday’s security assumptions no longer hold. Identity attacks, pre-disclosure exploitation, and data-first ransomware require adaptive, behavior-based defenses that can detect threats as they emerge, often ahead of public disclosure.

As financial institutions continue to digitize, resilience will depend on visibility across identity, edge, cloud, and data, combined with AI-driven defense that learns at machine speed.  

Learn more about the threats facing the finance sector, and what your organization can do to keep up in The State of Cybersecurity in the Finance Sector report here.  

Acknowledgements:

The State of Cybersecurity in the Finance sector report was authored by Calum Hall, Hugh Turnbull, Parvatha Ananthakannan, Tiana Kelly, and Vivek Rajan, with contributions from Emma Foulger, Nicole Wong, Ryan Traill, Tara Gould, and the Darktrace Threat Research and Incident Management teams.

[related-resource]  

Continue reading
About the author
Nathaniel Jones
VP, Security & AI Strategy, Field CISO
Your data. Our AI.
Elevate your network security with Darktrace AI