Blog
/
Email
/
January 14, 2025

Why AI-powered Email Protection Became Essential for this Global Financial Services Leader

Hear the cybersecurity transformation story of this leading money transmitter, who facilitates more than $9 billion in remittances via thousands of agent locations across the US serving more than two million active customers.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
The Darktrace Community
Default blog imageDefault blog imageDefault blog imageDefault blog imageDefault blog imageDefault blog image
14
Jan 2025

When agile cyber-attackers don’t stop, but pivot  

When he first joined this leading financial services provider, it was clear to the CISO that email security needed to be a top priority. The organization provides transfer services to millions of consumers via a network of thousands of agent locations across the US. Those agents are connected to hundreds of thousands of global payers to complete consumer transfers, ranging from leading financial institutions to small local businesses.

With this vast network of agents and payers, the provider relies on email as its primary communications channel. Transmitting billions of dollars every year, the organization is a prime target for cyber criminals looking to steal credentials, financial assets, and sensitive data.

Vulnerable to attacks with gaps in email security and visibility

The CISO discovered that employees were under constant attack by phishing emails impersonating his company’s own executives. The business email compromise (BEC) attacks were designed to deceive employees into sharing credentials or clicking on malicious links.

Upon discovering that their Microsoft 365 tenant lacked secure configuration, the CISO implemented necessary changes to strengthen the service, including enabling authentication controls. While his efforts significantly reduced BEC attacks, cyber criminals changed their tactics, sending employees malicious phishing emails from seemingly valid email accounts from trusted domains like Google and Yahoo. The emails passed through the organization’s native email filters without detection.

The CISO also sought to strengthen defenses against third-party supply chain attacks that could originate with any of the hundreds of thousands of third-party agents and payers the company works with around the world. While the larger institutions typically have sophisticated email security strategies in place, the smaller businesses may lack the cybersecurity expertise needed to effectively secure and manage their data, putting the organization at risk.

While the CISO knew the company was vulnerable to phishing and third-party threats, he didn’t have visibility across the flow of email. Without access to key metrics and valuable data, he couldn’t get the crucial insights needed to quickly identify possible threats and adjust security protocols.  

Skilled analysts bogged down with low-level tasks

Like many enterprise organizations, this leading financial services provider relied on a crew of highly skilled analysts to respond to alerts and analyze and triage emails most of their workday. “That shouldn’t be how we operate,” said the CISO. “My role and the role of my staff should be to focus on more strategic projects, support the business, and work on important new product development.”

Balancing user experience with mitigating threats

Enabling greater email security measures without negatively impacting the business, user experience, and customer satisfaction was a daunting challenge the CISO and his security team faced. Imposing restrictions that are too stringent could restrict communication, delay the delivery of important messages, or block legitimate emails – potentially slowing down money transfers, frustrating customers, affecting employee productivity, and impacting revenue. However, maintaining controls that are too permissive could result in serious outcomes like data theft, financial fraud, operational disruption, compliance penalties, and customer attrition.  

Self-Learning AI is a game changer

After conducting a thorough POC with several modern security solution providers, this global financial services provider chose the Darktrace / EMAIL an AI-driven email security platform. The CISO said they chose the solution for two key reasons:

First, Darktrace / EMAIL offers modern capabilities

  • Self-Learning AI uses business data to recognize anomalies in communication patterns and user behavior to stop known and unknown threats
  • Secures the organization’s entire mailflow across all inbound, outbound, and lateral email
  • Protects against account takeover attacks by identifying subtle anomalies in cloud SaaS
  • Catches sophisticated threats like impersonations, session token misuse, adversary-in-the-middle attacks, credential theft, and data exfiltration

Second, they pointed to Darktrace’s experience, innovation, and expertise

  • Deep cybersecurity and industry knowledge
  • Demonstrated customers successes worldwide
  • At the forefront of innovation and research, establishing new thresholds in cybersecurity, with technology advances backed by over 200 patents and pending applications

Moreover, and most importantly, this organization trusted Darktrace to deliver on its promises.  And according to the CISO, that’s just what happened.

Significantly reduced phishing threats and business risk

Since implementing Darktrace / EMAIL, the threat posed by BEC attacks has dropped sharply. “Phishing is not an issue that concerns me anymore. I estimate we are now identifying and blocking more than 85% of threats our previous solution was missing,” said the CISO. The biggest factor contributing to this success? The power of AI.

With Darktrace / EMAIL, this leadingglobal financial services provider is identifying and blocking more than 85% ofthe phishing email threats its previous solution missed.

AI wasn’t originally on the financial service provider’s list of criteria. But after seeing AI in action and understanding its potential to vastly scale their detection and response capabilities–without adding headcount, the CISO determined AI wasn’t an option but an imperative. “AI is essential when it comes to email security, it’s an absolute necessity,” he said.  

Darktrace / EMAIL’s Self-Learning AI is uniquely powerful because it learns the content and context of every internal and external user and can spot the subtle differences in behavioral patterns that point to possible social engineering attacks. Through patented behavioral anomaly detection, Darktrace / EMAIL continuously learns about the organization’s business and users, based on its own operations and data, adjusting security protocols accordingly.  

For example, when clients are transferring large amounts of money, they are required to send photos of their driver’s licenses and passports via email to the organization for verification – accounting for a large percentage of its’ inbound email. Darktrace / EMAIL recognizes that it’s normal for customers to send this sensitive information, and it also knows that it’s not normal for that same sensitive information to leave the organization via outbound mail. In addition, Darktrace identifies patterns in user behavior, including who employees communicate with and what kind of information they share. When user behavior falls outside of established norms, such as an email sent from the CFO to employees the CEO would not typically communicate with, Darktrace can take the appropriate action to remove the threat.  

“After the implementation, we gave the solution two weeks to ingest our data and learn the specifics of our business. After that, it was perfect, just amazing,” said the CISO.  

Boosted team productivity and elevated value to the business

With Darktrace / EMAIL, the organization has successfully scaled its detection and response efforts without scaling personnel. The security team has reduced the number of emails requiring manual investigation by 90%. And because analysts now have the benefit of Darktrace / EMAIL’s analytics and reporting, the investigation process is much easier and faster. “The impact of this solution on my team has been very positive,” said the CISO. “Darktrace / EMAIL essentially manages itself, freeing up time for our skilled analysts–and for myself–to focus on more important projects.”  

The security team has scaled its detection and response efforts without scaling personnel,reducing the number of emails it manually investigates by 90%

Increased visibility delivers business-critical insights

You can’t control what you can’t see, and with zero visibility into critical data and metrics, this financial services provider was at a serious disadvantage. That has all changed. “Something that I love about Darktrace / EMAIL is the visibility that it provides into key metrics from a single dashboard. We can now understand the behavior of our email flow and data traffic and can make insight-driven decisions to continuously optimize our email security. It’s awesome,” said the CISO.  

An efficient user interface also improves productivity and reduces mean time to action by enabling teams to easily visualize key data points and quickly evaluate what actions need to be taken. Darktrace / EMAIL was developed with that experience in mind, allowing users to access data and take quick action without having to constantly log into the solution.

Keeping the business focused on cybersecurity

The leadership of this global organization takes information security very seriously, understanding that cyber-attacks aren’t just an IT problem but a business problem. When it came to evaluating Darktrace, the CISO said numerous stakeholders were involved including C-level executives, infrastructure, and IT, which operates separately from information security. The CISO initially identified the need, conducted the market research, engaged the target vendors, and then brought the other decision makers into the process for the solution evaluation and final decision. “Our IT group, infrastructure team, CTO and CEO are all involved when it comes to making major cybersecurity investments. We always try to make these decisions jointly to ensure we are taking everything into consideration.”

The organization has reached a higher level of maturity when it comes to email cybersecurity. The ability to automate routine email detection and investigation tasks has both strengthened the organization’s cyber resilience and enabled the CISO and his team to contribute more to the business. His advice for other IT leaders facing the same email security and visibility challenges he once experienced: “For those companies that need greater insight and control over their email but have limited resources and people, AI is the answer.”  

Darktrace / Email solution brief screenshot

Secure Your Inbox with Cutting-Edge AI Email Protection

Discover the most advanced cloud-native AI email security solution to protect your domain and brand while preventing phishing, novel social engineering, business email compromise, account takeover, and data loss.

  • Gain up to 13 days of earlier threat detection and maximize ROI on your current email security
  • Experience 20-25% more threat blocking power with Darktrace / EMAIL
  • Stop the 58% of threats bypassing traditional email security

Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
The Darktrace Community

More in this series

No items found.

Blog

/

AI

/

December 23, 2025

How to Secure AI in the Enterprise: A Practical Framework for Models, Data, and Agents

How to secure AI in the enterprise: A practical framework for models, data, and agents Default blog imageDefault blog image

Introduction: Why securing AI is now a security priority

AI adoption is at the forefront of the digital movement in businesses, outpacing the rate at which IT and security professionals can set up governance models and security parameters. Adopting Generative AI chatbots, autonomous agents, and AI-enabled SaaS tools promises efficiency and speed but also introduces new forms of risk that traditional security controls were never designed to manage. For many organizations, the first challenge is not whether AI should be secured, but what “securing AI” actually means in practice. Is it about protecting models? Governing data? Monitoring outputs? Or controlling how AI agents behave once deployed?  

While demand for adoption increases, securing AI use in the enterprise is still an abstract concept to many and operationalizing its use goes far beyond just having visibility. Practitioners need to also consider how AI is sourced, built, deployed, used, and governed across the enterprise.

The goal for security teams: Implement a clear, lifecycle-based AI security framework. This blog will demonstrate the variety of AI use cases that should be considered when developing this framework and how to frame this conversation to non-technical audiences.  

What does “securing AI” actually mean?

Securing AI is often framed as an extension of existing security disciplines. In practice, this assumption can cause confusion.

Traditional security functions are built around relatively stable boundaries. Application security focuses on code and logic. Cloud security governs infrastructure and identity. Data security protects sensitive information at rest and in motion. Identity security controls who can access systems and services. Each function has clear ownership, established tooling, and well-understood failure modes.

AI does not fit neatly into any of these categories. An AI system is simultaneously:

  • An application that executes logic
  • A data processor that ingests and generates sensitive information
  • A decision-making layer that influences or automates actions
  • A dynamic system that changes behavior over time

As a result, the security risks introduced by AI cuts across multiple domains at once. A single AI interaction can involve identity misuse, data exposure, application logic abuse, and supply chain risk all within the same workflow. This is where the traditional lines between security functions begin to blur.

For example, a malicious prompt submitted by an authorized user is not a classic identity breach, yet it can trigger data leakage or unauthorized actions. An AI agent calling an external service may appear as legitimate application behavior, even as it violates data sovereignty or compliance requirements. AI-generated code may pass standard development checks while introducing subtle vulnerabilities or compromised dependencies.

In each case, no single security team “owns” the risk outright.

This is why securing AI cannot be reduced to model safety, governance policies, or perimeter controls alone. It requires a shared security lens that spans development, operations, data handling, and user interaction. Securing AI means understanding not just whether systems are accessed securely, but whether they are being used, trained, and allowed to act in ways that align with business intent and risk tolerance.

At its core, securing AI is about restoring clarity in environments where accountability can quickly blur. It is about knowing where AI exists, how it behaves, what it is allowed to do, and how its decisions affect the wider enterprise. Without this clarity, AI becomes a force multiplier for both productivity and risk.

The five categories of AI risk in the enterprise

A practical way to approach AI security is to organize risk around how AI is used and where it operates. The framework below defines five categories of AI risk, each aligned to a distinct layer of the enterprise AI ecosystem  

How to Secure AI in the Enterprise:

  • Defending against misuse and emergent behaviors
  • Monitoring and controlling AI in operation
  • Protecting AI development and infrastructure
  • Securing the AI supply chain
  • Strengthening readiness and oversight

Together, these categories provide a structured lens for understanding how AI risk manifests and where security teams should focus their efforts.

1. Defending against misuse and emergent AI behaviors

Generative AI systems and agents can be manipulated in ways that bypass traditional controls. Even when access is authorized, AI can be misused, repurposed, or influenced through carefully crafted prompts and interactions.

Key risks include:

  • Malicious prompt injection designed to coerce unwanted actions
  • Unauthorized or unintended use cases that bypass guardrails
  • Exposure of sensitive data through prompt histories
  • Hallucinated or malicious outputs that influence human behavior

Unlike traditional applications, AI systems can produce harmful outcomes without being explicitly compromised. Securing this layer requires monitoring intent, not just access. Security teams need visibility into how AI systems are being prompted, how outputs are consumed, and whether usage aligns with approved business purposes

2. Monitoring and controlling AI in operation

Once deployed, AI agents operate at machine speed and scale. They can initiate actions, exchange data, and interact with other systems with little human oversight. This makes runtime visibility critical.

Operational AI risks include:

  • Agents using permissions in unintended ways
  • Uncontrolled outbound connections to external services or agents
  • Loss of forensic visibility into ephemeral AI components
  • Non-compliant data transmission across jurisdictions

Securing AI in operation requires real-time monitoring of agent behavior, centralized control points such as AI gateways, and the ability to capture agent state for investigation. Without these capabilities, security teams may be blind to how AI systems behave once live, particularly in cloud-native or regulated environments.

3. Protecting AI development and infrastructure

Many AI risks are introduced long before deployment. Development pipelines, infrastructure configurations, and architectural decisions all influence the security posture of AI systems.

Common risks include:

  • Misconfigured permissions and guardrails
  • Insecure or overly complex agent architectures
  • Infrastructure-as-Code introducing silent misconfigurations
  • Vulnerabilities in AI-generated code and dependencies

AI-generated code adds a new dimension of risk, as hallucinated packages or insecure logic may be harder to detect and debug than human-written code. Securing AI development means applying security controls early, including static analysis, architectural review, and continuous configuration monitoring throughout the build process.

4. Securing the AI supply chain

AI supply chains are often opaque. Models, datasets, dependencies, and services may come from third parties with varying levels of transparency and assurance.

Key supply chain risks include:

  • Shadow AI tools used outside approved controls
  • External AI agents granted internal access
  • Suppliers applying AI to enterprise data without disclosure
  • Compromised models, training data, or dependencies

Securing the AI supply chain requires discovering where AI is used, validating the provenance and licensing of models and data, and assessing how suppliers process and protect enterprise information. Without this visibility, organizations risk data leakage, regulatory exposure, and downstream compromise through trusted integrations.

5. Strengthening readiness and oversight

Even with strong technical controls, AI security fails without governance, testing, and trained teams. AI introduces new incident scenarios that many security teams are not yet prepared to handle.

Oversight risks include:

  • Lack of meaningful AI risk reporting
  • Untested AI systems in production
  • Security teams untrained in AI-specific threats

Organizations need AI-aware reporting, red and purple team exercises that include AI systems, and ongoing training to build operational readiness. These capabilities ensure AI risks are understood, tested, and continuously improved, rather than discovered during a live incident.

Reframing AI security for the boardroom

AI security is not just a technical issue. It is a trust, accountability, and resilience issue. Boards want assurance that AI-driven decisions are reliable, explainable, and protected from tampering.

Effective communication with leadership focuses on:

  • Trust: confidence in data integrity, model behavior, and outputs
  • Accountability: clear ownership across teams and suppliers
  • Resilience: the ability to operate, audit, and adapt under attack or regulation

Mapping AI security efforts to recognized frameworks such as ISO/IEC 42001 and the NIST AI Risk Management Framework helps demonstrate maturity and aligns AI security with broader governance objectives.

Conclusion: Securing AI is a lifecycle challenge

The same characteristics that make AI transformative also make it difficult to secure. AI systems blur traditional boundaries between software, users, and decision-making, expanding the attack surface in subtle but significant ways.

Securing AI requires restoring clarity. Knowing where AI exists, how it behaves, who controls it, and how it is governed. A framework-based approach allows organizations to innovate with AI while maintaining trust, accountability, and control.

The journey to secure AI is ongoing, but it begins with understanding the risks across the full AI lifecycle and building security practices that evolve alongside the technology.

Continue reading
About the author
Brittany Woodsmall
Product Marketing Manager, AI & Attack Surface

Blog

/

AI

/

December 22, 2025

The Year Ahead: AI Cybersecurity Trends to Watch in 2026

2026 cyber threat trendsDefault blog imageDefault blog image

Introduction: 2026 cyber trends

Each year, we ask some of our experts to step back from the day-to-day pace of incidents, vulnerabilities, and headlines to reflect on the forces reshaping the threat landscape. The goal is simple:  to identify and share the trends we believe will matter most in the year ahead, based on the real-world challenges our customers are facing, the technology and issues our R&D teams are exploring, and our observations of how both attackers and defenders are adapting.  

In 2025, we saw generative AI and early agentic systems moving from limited pilots into more widespread adoption across enterprises. Generative AI tools became embedded in SaaS products and enterprise workflows we rely on every day, AI agents gained more access to data and systems, and we saw glimpses of how threat actors can manipulate commercial AI models for attacks. At the same time, expanding cloud and SaaS ecosystems and the increasing use of automation continued to stretch traditional security assumptions.

Looking ahead to 2026, we’re already seeing the security of AI models, agents, and the identities that power them becoming a key point of tension – and opportunity -- for both attackers and defenders. Long-standing challenges and risks such as identity, trust, data integrity, and human decision-making will not disappear, but AI and automation will increase the speed and scale of the cyber risk.  

Here's what a few of our experts believe are the trends that will shape this next phase of cybersecurity, and the realities organizations should prepare for.  

Agentic AI is the next big insider risk

In 2026, organizations may experience their first large-scale security incidents driven by agentic AI behaving in unintended ways—not necessarily due to malicious intent, but because of how easily agents can be influenced. AI agents are designed to be helpful, lack judgment, and operate without understanding context or consequence. This makes them highly efficient—and highly pliable. Unlike human insiders, agentic systems do not need to be socially engineered, coerced, or bribed. They only need to be prompted creatively, misinterpret legitimate prompts, or be vulnerable to indirect prompt injection. Without strong controls around access, scope, and behavior, agents may over-share data, misroute communications, or take actions that introduce real business risk. Securing AI adoption will increasingly depend on treating agents as first-class identities—monitored, constrained, and evaluated based on behavior, not intent.

-- Nicole Carignan, SVP of Security & AI Strategy

Prompt Injection moves from theory to front-page breach

We’ll see the first major story of an indirect prompt injection attack against companies adopting AI either through an accessible chatbot or an agentic system ingesting a hidden prompt. In practice, this may result in unauthorized data exposure or unintended malicious behavior by AI systems, such as over-sharing information, misrouting communications, or acting outside their intended scope. Recent attention on this risk—particularly in the context of AI-powered browsers and additional safety layers being introduced to guide agent behavior—highlights a growing industry awareness of the challenge.  

-- Collin Chapleau, Senior Director of Security & AI Strategy

Humans are even more outpaced, but not broken

When it comes to cyber, people aren’t failing; the system is moving faster than they can. Attackers exploit the gap between human judgment and machine-speed operations. The rise of deepfakes and emotion-driven scams that we’ve seen in the last few years reduce our ability to spot the familiar human cues we’ve been taught to look out for. Fraud now spans social platforms, encrypted chat, and instant payments in minutes. Expecting humans to be the last line of defense is unrealistic.

Defense must assume human fallibility and design accordingly. Automated provenance checks, cryptographic signatures, and dual-channel verification should precede human judgment. Training still matters, but it cannot close the gap alone. In the year ahead, we need to see more of a focus on partnership: systems that absorb risk so humans make decisions in context, not under pressure.

-- Margaret Cunningham, VP of Security & AI Strategy

AI removes the attacker bottleneck—smaller organizations feel the impact

One factor that is currently preventing more companies from breaches is a bottleneck on the attacker side: there’s not enough human hacker capital. The number of human hands on a keyboard is a rate-determining factor in the threat landscape. Further advancements of AI and automation will continue to open that bottleneck. We are already seeing that. The ostrich approach of hoping that one’s own company is too obscure to be noticed by attackers will no longer work as attacker capacity increases.  

-- Max Heinemeyer, Global Field CISO

SaaS platforms become the preferred supply chain target

Attackers have learned a simple lesson: compromising SaaS platforms can have big payouts. As a result, we’ll see more targeting of commercial off-the-shelf SaaS providers, which are often highly trusted and deeply integrated into business environments. Some of these attacks may involve software with unfamiliar brand names, but their downstream impact will be significant. In 2026, expect more breaches where attackers leverage valid credentials, APIs, or misconfigurations to bypass traditional defenses entirely.

-- Nathaniel Jones, VP of Security & AI Strategy

Increased commercialization of generative AI and AI assistants in cyber attacks

One trend we’re watching closely for 2026 is the commercialization of AI-assisted cybercrime. For example, cybercrime prompt playbooks sold on the dark web—essentially copy-and-paste frameworks that show attackers how to misuse or jailbreak AI models. It’s an evolution of what we saw in 2025, where AI lowered the barrier to entry. In 2026, those techniques become productized, scalable, and much easier to reuse.  

-- Toby Lewis, Global Head of Threat Analysis

Conclusion

Taken together, these trends underscore that the core challenges of cybersecurity are not changing dramatically -- identity, trust, data, and human decision-making still sit at the core of most incidents. What is changing quickly is the environment in which these challenges play out. AI and automation are accelerating everything: how quickly attackers can scale, how widely risk is distributed, and how easily unintended behavior can create real impact. And as technology like cloud services and SaaS platforms become even more deeply integrated into businesses, the potential attack surface continues to expand.  

Predictions are not guarantees. But the patterns emerging today suggest that 2026 will be a year where securing AI becomes inseparable from securing the business itself. The organizations that prepare now—by understanding how AI is used, how it behaves, and how it can be misused—will be best positioned to adopt these technologies with confidence in the year ahead.

Learn more about how to secure AI adoption in the enterprise without compromise by registering to join our live launch webinar on February 3, 2026.  

Continue reading
About the author
The Darktrace Community
Your data. Our AI.
Elevate your network security with Darktrace AI