Blog
/
AI
/
December 23, 2025

How to Secure AI in the Enterprise: A Practical Framework for Models, Data, and Agents

AI is accelerating faster than governance can keep up, expanding attack surfaces and creating unseen risks. From data and models to AI agents and integrations, security starts by knowing what to protect. Discover how to identify AI-driven risks, so you can establish governance frameworks and controls that secure innovation without exposing the enterprise to new attack surfaces. 
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Brittany Woodsmall
Product Marketing Manager, AI
Written by
Simon Fellows
Senior Vice President, Product Strategy
How to secure AI in the enterprise: A practical framework for models, data, and agents Default blog imageDefault blog imageDefault blog imageDefault blog imageDefault blog imageDefault blog image
23
Dec 2025

Introduction: Why securing AI is now a security priority

AI adoption is at the forefront of the digital movement in businesses, outpacing the rate at which IT and security professionals can set up governance models and security parameters. Adopting Generative AI chatbots, autonomous agents, and AI-enabled SaaS tools promises efficiency and speed but also introduces new forms of risk that traditional security controls were never designed to manage. For many organizations, the first challenge is not whether AI should be secured, but what “securing AI” actually means in practice. Is it about protecting models? Governing data? Monitoring outputs? Or controlling how AI agents behave once deployed?  

While demand for adoption increases, securing AI use in the enterprise is still an abstract concept to many and operationalizing its use goes far beyond just having visibility. Practitioners need to also consider how AI is sourced, built, deployed, used, and governed across the enterprise.

The goal for security teams: Implement a clear, lifecycle-based AI security framework. This blog will demonstrate the variety of AI use cases that should be considered when developing this framework and how to frame this conversation to non-technical audiences.  

What does “securing AI” actually mean?

Securing AI is often framed as an extension of existing security disciplines. In practice, this assumption can cause confusion.

Traditional security functions are built around relatively stable boundaries. Application security focuses on code and logic. Cloud security governs infrastructure and identity. Data security protects sensitive information at rest and in motion. Identity security controls who can access systems and services. Each function has clear ownership, established tooling, and well-understood failure modes.

AI does not fit neatly into any of these categories. An AI system is simultaneously:

  • An application that executes logic
  • A data processor that ingests and generates sensitive information
  • A decision-making layer that influences or automates actions
  • A dynamic system that changes behavior over time

As a result, the security risks introduced by AI cuts across multiple domains at once. A single AI interaction can involve identity misuse, data exposure, application logic abuse, and supply chain risk all within the same workflow. This is where the traditional lines between security functions begin to blur.

For example, a malicious prompt submitted by an authorized user is not a classic identity breach, yet it can trigger data leakage or unauthorized actions. An AI agent calling an external service may appear as legitimate application behavior, even as it violates data sovereignty or compliance requirements. AI-generated code may pass standard development checks while introducing subtle vulnerabilities or compromised dependencies.

In each case, no single security team “owns” the risk outright.

This is why securing AI cannot be reduced to model safety, governance policies, or perimeter controls alone. It requires a shared security lens that spans development, operations, data handling, and user interaction. Securing AI means understanding not just whether systems are accessed securely, but whether they are being used, trained, and allowed to act in ways that align with business intent and risk tolerance.

At its core, securing AI is about restoring clarity in environments where accountability can quickly blur. It is about knowing where AI exists, how it behaves, what it is allowed to do, and how its decisions affect the wider enterprise. Without this clarity, AI becomes a force multiplier for both productivity and risk.

The five categories of AI risk in the enterprise

A practical way to approach AI security is to organize risk around how AI is used and where it operates. The framework below defines five categories of AI risk, each aligned to a distinct layer of the enterprise AI ecosystem  

How to Secure AI in the Enterprise:

  • Defending against misuse and emergent behaviors
  • Monitoring and controlling AI in operation
  • Protecting AI development and infrastructure
  • Securing the AI supply chain
  • Strengthening readiness and oversight

Together, these categories provide a structured lens for understanding how AI risk manifests and where security teams should focus their efforts.

1. Defending against misuse and emergent AI behaviors

Generative AI systems and agents can be manipulated in ways that bypass traditional controls. Even when access is authorized, AI can be misused, repurposed, or influenced through carefully crafted prompts and interactions.

Key risks include:

  • Malicious prompt injection designed to coerce unwanted actions
  • Unauthorized or unintended use cases that bypass guardrails
  • Exposure of sensitive data through prompt histories
  • Hallucinated or malicious outputs that influence human behavior

Unlike traditional applications, AI systems can produce harmful outcomes without being explicitly compromised. Securing this layer requires monitoring intent, not just access. Security teams need visibility into how AI systems are being prompted, how outputs are consumed, and whether usage aligns with approved business purposes

2. Monitoring and controlling AI in operation

Once deployed, AI agents operate at machine speed and scale. They can initiate actions, exchange data, and interact with other systems with little human oversight. This makes runtime visibility critical.

Operational AI risks include:

  • Agents using permissions in unintended ways
  • Uncontrolled outbound connections to external services or agents
  • Loss of forensic visibility into ephemeral AI components
  • Non-compliant data transmission across jurisdictions

Securing AI in operation requires real-time monitoring of agent behavior, centralized control points such as AI gateways, and the ability to capture agent state for investigation. Without these capabilities, security teams may be blind to how AI systems behave once live, particularly in cloud-native or regulated environments.

3. Protecting AI development and infrastructure

Many AI risks are introduced long before deployment. Development pipelines, infrastructure configurations, and architectural decisions all influence the security posture of AI systems.

Common risks include:

  • Misconfigured permissions and guardrails
  • Insecure or overly complex agent architectures
  • Infrastructure-as-Code introducing silent misconfigurations
  • Vulnerabilities in AI-generated code and dependencies

AI-generated code adds a new dimension of risk, as hallucinated packages or insecure logic may be harder to detect and debug than human-written code. Securing AI development means applying security controls early, including static analysis, architectural review, and continuous configuration monitoring throughout the build process.

4. Securing the AI supply chain

AI supply chains are often opaque. Models, datasets, dependencies, and services may come from third parties with varying levels of transparency and assurance.

Key supply chain risks include:

  • Shadow AI tools used outside approved controls
  • External AI agents granted internal access
  • Suppliers applying AI to enterprise data without disclosure
  • Compromised models, training data, or dependencies

Securing the AI supply chain requires discovering where AI is used, validating the provenance and licensing of models and data, and assessing how suppliers process and protect enterprise information. Without this visibility, organizations risk data leakage, regulatory exposure, and downstream compromise through trusted integrations.

5. Strengthening readiness and oversight

Even with strong technical controls, AI security fails without governance, testing, and trained teams. AI introduces new incident scenarios that many security teams are not yet prepared to handle.

Oversight risks include:

  • Lack of meaningful AI risk reporting
  • Untested AI systems in production
  • Security teams untrained in AI-specific threats

Organizations need AI-aware reporting, red and purple team exercises that include AI systems, and ongoing training to build operational readiness. These capabilities ensure AI risks are understood, tested, and continuously improved, rather than discovered during a live incident.

Reframing AI security for the boardroom

AI security is not just a technical issue. It is a trust, accountability, and resilience issue. Boards want assurance that AI-driven decisions are reliable, explainable, and protected from tampering.

Effective communication with leadership focuses on:

  • Trust: confidence in data integrity, model behavior, and outputs
  • Accountability: clear ownership across teams and suppliers
  • Resilience: the ability to operate, audit, and adapt under attack or regulation

Mapping AI security efforts to recognized frameworks such as ISO/IEC 42001 and the NIST AI Risk Management Framework helps demonstrate maturity and aligns AI security with broader governance objectives.

Conclusion: Securing AI is a lifecycle challenge

The same characteristics that make AI transformative also make it difficult to secure. AI systems blur traditional boundaries between software, users, and decision-making, expanding the attack surface in subtle but significant ways.

Securing AI requires restoring clarity. Knowing where AI exists, how it behaves, who controls it, and how it is governed. A framework-based approach allows organizations to innovate with AI while maintaining trust, accountability, and control.

The journey to secure AI is ongoing, but it begins with understanding the risks across the full AI lifecycle and building security practices that evolve alongside the technology.

Download the full framework

Discover how to identify AI-driven risks, so you can establish governance frameworks and controls at your organization.

Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Brittany Woodsmall
Product Marketing Manager, AI
Written by
Simon Fellows
Senior Vice President, Product Strategy

More in this series

No items found.

Blog

/

Email

/

April 24, 2026

Email-Borne Cyber Risk: A Core Challenge for the CISO in the Age of Volume and Sophistication

Default blog imageDefault blog image

The challenge for CISOs

Despite continuous advances in security technologies, humans continue to be exploited by attackers. Credential abuse and social actions like phishing are major factors, accounting for around 60% of all breaches. These attacks rely less on technical vulnerabilities and more on exploiting human behavior and organizational processes. 

From my perspective as a former CISO, protecting humans concentrates three of today’s most pressing challenges: the sheer volume of email-based threats, their increasing sophistication, and the limitations of traditional employee awareness programs in moving the needle on risk. 

My personal experience of security awareness training as a CISO

With over 20 years’ experience as an ICT and Cybersecurity leader across various international organizations, I’ve seen security awareness training (SAT) in many guises. And while the cyber landscape is evolving in every direction, the effectiveness of SAT is reaching a plateau.  

Most programs I’ve seen follow a familiar pattern. Training is delivered through a combination of eLearning modules and internal sessions designed to reinforce IT policies. Employees are typically required to complete a slide deck or video, followed by a multiple-choice quiz. Occasional phishing simulations are distributed throughout the year.

The content is often static and unpersonalized, based on known threats that may already be outdated. Every employee regardless of role or risk exposure receives the same training and the same simulated phishing templates, from front-desk staff to the CEO.

The problem with traditional SAT programs

The issue with the approach to SAT outlined above is that the distribution of power is imbalanced. Humans will always be fallible, particularly when faced with increasingly sophisticated attacks. Providing generic, low-context training risks creating false confidence rather than genuine resilience. Let’s look at some of the problems in detail.

Timing and delivery

Employees today operate under constant cognitive load, making lots of rapid decisions every day to reduce their email volumes. Yet if employees are completing training annually, or on an ad hoc basis, it becomes a standalone occurrence rather than a continuous habit.  

As a result, retention is low. Employees often forget the lessons within weeks, a phenomenon known as the ‘Ebbinghaus Forgetting Curve.’

The graph illustrates that when you first learn something, the information disappears at an exponential rate without retention. In fact, according to the curve, you forget 50% of all new information within a day, and 90% of all new information within a week.  

Simultaneously, most training is conducted within a separate interface. Because it takes place away from the actual moment of decision-making, the "teachable moment" is lost. There is a cognitive disconnect between the action (clicking a link in Outlook) and the education (watching a video in a browser). 

People

In the context of professional risk management, the risks faced by different users are different. Static learning such as everyone receiving the same ‘Password Reset’ email doesn’t help users prepare for the specific threats they are likely to face. It also contributes to user fatigue, driven by repetitive training. And if users receive tests at the same time, news spreads among colleagues, hurting the efficacy of the test.  

Staff turnover introduces further risk. In many organizations, new employees gain access to systems before receiving meaningful training, reducing onboarding to little more than policy acknowledgment.

Measuring success

In my experience, solutions are standalone, without any correlation to other tools in the security stack. In some cases, the programs are delivered by HR rather than the security team, creating a complete silo.  

As a result, SAT is often perceived as a compliance exercise rather than a capability building function. The result is that poor-quality training does little to reduce the likelihood of compromise, regardless of completion rates or quiz performance.

What a modern SAT solution should look like

For today’s CISO, email represents the convergence point of high-volume, high-impact, and human-centric threats. Despite significant security investments, it remains one of the most difficult channels to secure effectively. Given these constraints, CISOs must evolve their approach to SAT.

Success lies in a balanced strategy one that combines advanced technology, attack surface reduction, and pragmatic user enablement, without over-relying on human vigilance as the final line of defense.

This means moving beyond traditional SAT toward continuous, contextual awareness, realistic simulations, and tight integration with security outcomes.

Three requirements for a modern SAT solution

  • Invisible protection: The optimum security solution is one that assists users without impeding their experience. The objective is to enhance human capabilities, rather than simply delivering a lecture. 
  • Real-time feedback: Rather than a monthly quiz, the ideal system would provide a prompt or warning when a user is about to engage with something suspicious. 
  • Positive culture: Shifting the focus away from a "gotcha" culture, which is a contributing factor to a resentment, and instead empowers employees to serve as "sensors" for the company. 

Discover how personalized security coaching can strengthen your human layer and make your email defenses more resilient. Explore Darktrace / Adaptive Human Defense.

Continue reading
About the author
Karim Benslimane
VP, Field CISO

Blog

/

Network

/

April 21, 2026

How a Compromised eScan Update Enabled Multi‑Stage Malware and Blockchain C2

multi-stage malwareDefault blog imageDefault blog image

The rise of supply chain attacks

In recent years, the abuse of trusted software has become increasingly common, with supply chain compromises emerging as one of the fastest growing vectors for cyber intrusions. As highlighted in Darktrace’s Annual Threat Report 2026, attackers and state-actors continue to find significant value in gaining access to networks through compromised trusted links, third-party tools, or legitimate software. In January 2026, a supply chain compromise affecting MicroWorld Technologies’ eScan antivirus product was reported, with malicious updates distributed to customers through the legitimate update infrastructure. This, in turn, resulted in a multi‑stage loader malware being deployed on compromised devices [1][2].

An overview of eScan exploitation

According to eScan’s official threat advisory, unauthorized access to a regional update server resulted in an “incorrect file placed in the update distribution path” [3]. Customers associated with the affected update servers who downloaded the update during a two-hour window on January 20 were impacted, with affected Windows devices subsequently have experiencing various errors related to update functions and notifications [3].

While eScan did not specify which regional update servers were affected by the malicious update, all impacted Darktrace customer environments were located in the Europe, Middle East, and Africa (EMEA) region.

External research reported that a malicious 32-bit executable file , “Reload.exe”, was first installed on affected devices, which then dropped the 64-bit downloader, “CONSCTLX.exe”. This downloader establishes persistence by creating scheduled tasks such as “CorelDefrag”, which are responsible for executing PowerShell scripts. Subsequently, it evades detection by tampering with the Windows HOSTS file and eScan registry to prevent future remote updates intended for remediation. Additional payloads are then downloaded from its command-and-control (C2) server [1].

Darktrace’s coverage of eScan exploitation

Initial Access and Blockchain as multi-distributed C2 Infrastructure

On January 20, the same day as the aforementioned two‑hour exploit window, Darktrace observed multiple devices across affected networks downloading .dlz package files from eScan update servers, followed by connections to an anomalous endpoint, vhs.delrosal[.]net, which belongs to the attackers’ C2 infrastructure.

The endpoint contained a self‑signed SSL certificate with the string “O=Internet Widgits Pty Ltd, ST=SomeState, C=AU”, a default placeholder commonly used in SSL/TLS certificates for testing and development environments, as well as in malicious C2 infrastructure [4].

Utilizing a multi‑distributed C2 infrastructure, the attackers also leveraged domains linked with the Solana open‑source blockchain for C2 purposes, namely “.sol”. These domains were human‑readable names that act as aliases for cryptocurrency wallet addresses. As browsers do not natively resolve .sol domains, the Solana Naming System (formerly known as Bonfida, an independent contributor within the Solana ecosystem) provides a proxy service, through endpoints such as sol-domain[.]org, to enable browser access.

Darktrace observed devices connecting to blackice.sol-domain[.]org, indicating that attackers were likely using this proxy to reach a .sol domain for C2 activity. Given this behavior, it is likely that the attackers leveraged .sol domains as a dead drop resolver, a C2 technique in which threat actors host information on a public and legitimate service, such as a blockchain. Additional proxy resolver endpoints, such as sns-resolver.bonfida.workers[.]dev, were also observed.

Solana transactions are transparent, allowing all activity to be viewed publicly. When Darktrace analysts examined the transactions associated with blackice[.]sol, they observed that the earliest records dated November 7, 2025, which coincides with the creation date of the known C2 endpoint vhs[.]delrosal[.]net as shown in WHOIS Lookup information [4][5].

WHOIS Look records of the C2 endpoint vhs[.]delrosal[.]net.
Figure 1: WHOIS Look records of the C2 endpoint vhs[.]delrosal[.]net.
 Earliest observed transaction record for blackice[.]sol on public ledgers.
Figure 2: Earliest observed transaction record for blackice[.]sol on public ledgers.

Subsequent instructions found within the transactions contained strings such as “CNAME= vhs[.]delrosal[.]net”, indicating attempts to direct the device toward the malicious endpoint. A more recent transaction recorded on January 28 included strings such as “hxxps://96.9.125[.]243/i;code=302”, suggesting an effort to change C2 endpoints. Darktrace observed multiple alerts triggered for these endpoints across affected devices.

Similar blockchain‑related endpoints, such as “tumama.hns[.]to”, were also observed in C2 activities. The hns[.]to service allows web browsers to access websites registered on Handshake, a decentralized blockchain‑based framework designed to replace centralized authorities and domain registries for top‑level domains. This shift toward decentralized, blockchain‑based infrastructure likely reflects increased efforts by attackers to evade detection.

In outgoing connections to these malicious endpoints across affected networks, Darktrace / NETWORK recognized that the activity was 100% rare and anomalous for both the devices and the wider networks, likely indicative of malicious beaconing, regardless of the underlying trusted infrastructure. In addition to generating multiple model alerts to capture this malicious activity across affected networks, Darktrace’s Cyber AI Analyst was able to compile these separate events into broader incidents that summarized the entire attack chain, allowing customers’ security teams to investigate and remediate more efficiently. Moreover, in customer environments where Darktrace’s Autonomous Response capability was enabled, Darktrace took swift action to contain the attack by blocking beaconing connections to the malicious endpoints, even when those endpoints were associated with seemingly trustworthy services.

Conclusion

Attacks targeting trusted relationships continue to be a popular strategy among threat actors. Activities linked to trusted or widely deployed software are often unintentionally whitelisted by existing security solutions and gateways. Darktrace observed multiple devices becoming impacted within a very short period, likely because tools such as antivirus software are typically mass‑deployed across numerous endpoints. As a result, a single compromised delivery mechanism can greatly expand the attack surface.

Attackers are also becoming increasingly creative in developing resilient C2 infrastructure and exploiting legitimate services to evade detection. Defenders are therefore encouraged to closely monitor anomalous connections and file downloads. Darktrace’s ability to detect unusual activity amidst ever‑changing tactics and indicators of compromise (IoCs) helps organizations maintain a proactive and resilient defense posture against emerging threats.

Credit to Joanna Ng (Associate Principal Cybersecurity Analyst) and Min Kim (Associate Principal Cybersecurity Analyst) and Tara Gould (Malware Researcher Lead)

Edited by Ryan Traill (Content Manager)

Appendices

Darktrace Model Detections

  • Anomalous File::Zip or Gzip from Rare External Location
  • Anomalous Connection / Suspicious Self-Signed SSL
  • Anomalous Connection / Rare External SSL Self-Signed
  • Anomalous Connection / Suspicious Expired SSL
  • Anomalous Server Activity / Anomalous External Activity from Critical Network Device

List of Indicators of Compromise (IoCs)

  • vhs[.]delrosal[.]net – C2 server
  • tumama[.]hns[.]to – C2 server
  • blackice.sol-domain[.]org – C2 server
  • 96.9.125[.]243 – C2 Server

MITRE ATT&CK Mapping

  • T1071.001 - Command and Control: Web Protocols
  • T1588.001 - Resource Development
  • T1102.001 - Web Service: Dead Drop Resolver
  • T1195 – Supple Chain Compromise

References

[1] https://www.morphisec.com/blog/critical-escan-threat-bulletin/

[2] https://www.bleepingcomputer.com/news/security/escan-confirms-update-server-breached-to-push-malicious-update/

[3] hxxps://download1.mwti.net/documents/Advisory/eScan_Security_Advisory_2026[.]pdf

[4] https://www.virustotal.com/gui/domain/delrosal.net

[5] hxxps://explorer.solana[.]com/address/2wFAbYHNw4ewBHBJzmDgDhCXYoFjJnpbdmeWjZvevaVv

Continue reading
About the author
Joanna Ng
Associate Principal Analyst
Your data. Our AI.
Elevate your network security with Darktrace AI