February 1, 2021

Explore AI Email Security Approaches with Darktrace

Stay informed on the latest AI approaches to email security. Explore Darktrace's comparisons to find the best solution for your cybersecurity needs!
Written by
Dan Fein
VP, Product

Innovations in artificial intelligence (AI) have fundamentally changed the email security landscape in recent years, but it can often be hard to determine what makes one system different from the next. In reality, under that umbrella term lies a significant distinction in approach, one that may determine whether the technology provides genuine protection or merely the perception of it.

One backward-looking approach involves feeding a machine thousands of emails that have already been deemed to be malicious, and training it to look for patterns in these emails in order to spot future attacks. The second approach uses an AI system to analyze the entirety of an organization’s real-world data, enabling it to establish a notion of what is ‘normal’ and then spot subtle deviations indicative of an attack.

Below, we compare the relative merits of each approach, with special consideration to novel attacks that leverage the latest news headlines to bypass machine learning systems trained on historical data sets. Training a machine on previously identified ‘known bads’ is only advantageous in certain, specific contexts that don’t change over time: recognizing the intent behind an email, for example. However, an effective email security solution must also incorporate a self-learning approach that understands ‘normal’ in the context of an organization, in order to identify unusual and anomalous emails and catch even novel attacks.

Signatures – a backward-looking approach

Over the past few decades, cyber security technologies have looked to mitigate risk by preventing previously seen attacks from occurring again. In the early days, when the lifespan of a given strain of malware or the infrastructure of an attack was in the range of months and years, this method was satisfactory. But the approach inevitably results in playing catch-up with malicious actors: it always looks to the past to guide detection for the future. With decreasing lifetimes of attacks, where a domain could be used in a single email and never seen again, this historic-looking signature-based approach is now being widely replaced by more intelligent systems.

Training a machine on ‘bad’ emails

The first AI approach we often see in the wild involves harnessing an extremely large data set of thousands or millions of emails. The system is trained to look for common patterns in the emails already identified as malicious, and then updates its models, rule sets, and blacklists based on that data.
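To make that concrete, here is a minimal sketch of the idea; the tiny inline data set and the TF-IDF plus logistic regression pipeline are illustrative placeholders, not a description of any particular vendor’s system.

```python
# Minimal sketch of the 'train on known-bad emails' approach.
# The inline data and the model choice are illustrative placeholders only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your invoice is attached, please remit payment today",   # previously seen malicious
    "Password expired, click here to verify your account",    # previously seen malicious
    "Agenda for Thursday's project meeting attached",         # benign
    "Lunch on Friday to celebrate the product launch?",       # benign
]
labels = [1, 1, 0, 0]  # 1 = previously identified 'known bad', 0 = benign

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

# The model can only generalize from patterns present in past attacks:
new_email = ["COVID-19 relief grant: confirm your banking details"]
print(model.predict_proba(new_email))  # terms never seen in training carry little signal
```

The point is that such a model’s view of ‘bad’ is frozen at training time: anything it flags must resemble something it has already been shown.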

This method certainly represents an improvement over traditional rules and signatures, but it does not escape the fact that it is still reactive, and unable to stop new attack infrastructure and new types of email attacks. It simply automates that flawed, traditional approach: instead of a human updating the rules and signatures, a machine updates them.

Relying on this approach alone has one basic but critical flaw: it does not enable you to stop new types of attacks that it has never seen before. It accepts that there has to be a ‘patient zero’ – or first victim – in order to succeed.

The industry is beginning to acknowledge the challenges with this approach, and huge amounts of resources – both automated systems and security researchers – are being thrown into minimizing its limitations. This includes leveraging a technique called “data augmentation” that involves taking a malicious email that slipped through and generating many “training samples” using open-source text augmentation libraries to create “similar” emails – so that the machine learns not only the missed phish as ‘bad’, but several others like it – enabling it to detect future attacks that use similar wording, and fall into the same category.
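To illustrate what augmenting a missed phish might look like in principle, here is a simplified sketch; a hand-written synonym table stands in for the open-source text augmentation libraries mentioned above.

```python
# Minimal sketch of text data augmentation for a missed phishing email.
# A toy synonym table stands in for a real open-source augmentation library.
import random

SYNONYMS = {
    "verify": ["confirm", "validate"],
    "account": ["profile", "login"],
    "urgent": ["immediate", "time-sensitive"],
    "payment": ["invoice", "transfer"],
}

def augment(text: str, n_variants: int = 5) -> list[str]:
    """Generate paraphrased variants by randomly swapping known synonyms."""
    variants = []
    for _ in range(n_variants):
        words = []
        for word in text.lower().split():
            options = SYNONYMS.get(word.strip(".,!:"), [])
            words.append(random.choice(options) if options else word)
        variants.append(" ".join(words))
    return variants

missed_phish = "Urgent: verify your account to avoid payment suspension"
for variant in augment(missed_phish):
    print(variant)  # each variant is added to the training set as another 'bad' example
```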

But pouring all this time and effort into trying to fix an unsolvable problem is like putting all your eggs in the wrong basket. Why try to fix a flawed system rather than change the game altogether? To spell out the limitations of this approach, let us look at a situation where the nature of the attack is entirely new.

The rise of ‘fearware’

When the global pandemic hit, and governments began enforcing travel bans and imposing stringent restrictions, there was undoubtedly a collective sense of fear and uncertainty. As explained previously in this blog, cyber-criminals were quick to capitalize on this, taking advantage of people’s desire for information to send out topical emails related to COVID-19 containing malware or credential-grabbing links.

These emails often spoofed the Centers for Disease Control and Prevention (CDC), or later on, as the economic impact of the pandemic began to take hold, the Small Business Administration (SBA). As the global situation shifted, so did attackers’ tactics. And in the process, over 130,000 new domains related to COVID-19 were purchased.

Let’s now consider how the above approach to email security might fare when faced with these new email attacks. The question becomes: how can you train a model to look out for emails containing ‘COVID-19’ when, at the time of training, the term did not yet exist?

And while COVID-19 is the most salient example of this, the same reasoning follows for every single novel and unexpected news cycle that attackers are leveraging in their phishing emails to evade tools using this approach – and attracting the recipient’s attention as a bonus. Moreover, if an email attack is truly targeted to your organization, it might contain bespoke and tailored news referring to a very specific thing that supervised machine learning systems could never be trained on.

This isn’t to say there’s not a time and a place in email security for looking at past attacks to set yourself up for the future. It just isn’t here.

Spotting intention

Darktrace uses this approach for one specific purpose that is future-proof and not prone to change over time: analyzing the grammar and tone of an email in order to identify intention, asking questions like ‘Does this look like an attempt at inducement? Is the sender trying to solicit some sensitive information? Is this extortion?’ By training a system on an extremely large data set collected over a period of time, you can start to understand what, for instance, inducement looks like. This then enables you to easily spot future instances of inducement based on a common set of characteristics.
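As a heavily simplified illustration of intent analysis (not Darktrace’s implementation), the sketch below scores an email against a few hand-picked cue phrases for inducement, solicitation, and extortion; in practice such patterns are learned from a large labeled corpus rather than written by hand.

```python
# Simplified sketch of intent analysis: score an email against cue phrases
# for a few intent categories. The cue lists are illustrative only; the article
# describes learning these patterns from a large labeled corpus instead.
import re

INTENT_CUES = {
    "inducement":   [r"\bclick here\b", r"\bclaim your\b", r"\blimited time\b"],
    "solicitation": [r"\bpassword\b", r"\bverify your account\b", r"\bbank details\b"],
    "extortion":    [r"\bor else\b", r"\bwe have recorded\b", r"\bpay .* bitcoin\b"],
}

def intent_scores(body: str) -> dict[str, int]:
    """Count how many cue phrases from each intent category appear in the email."""
    text = body.lower()
    return {
        intent: sum(bool(re.search(pattern, text)) for pattern in patterns)
        for intent, patterns in INTENT_CUES.items()
    }

print(intent_scores("Please verify your account and send your bank details today."))
# {'inducement': 0, 'solicitation': 2, 'extortion': 0}
```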

Training a system in this way works because, unlike news cycles and the topics of phishing emails, fundamental patterns in tone and language don’t change over time. An attempt at solicitation is always an attempt at solicitation, and will always bear common characteristics.

For this reason, this approach plays only one small part in a much larger engine. It gives an additional indication about the nature of the threat, but is not in itself used to determine anomalous emails.

Detecting the unknown unknowns

In addition to using the above approach to identify intention, Darktrace uses unsupervised machine learning, which starts with extracting and extrapolating thousands of data points from every email. Some of these are taken directly from the email itself, while others are only ascertainable by the above intention-type analysis. Additional insights are also gained from observing emails in the wider context of all available data across email, network and the cloud environment of the organization.

Only with this significantly larger and more comprehensive set of indicators – a far more complete description of the email – can the data be fed into a topic-indifferent machine learning engine, which questions the data in millions of ways to understand whether it belongs, given the wider context of the organization’s typical ‘pattern of life’. Monitoring all emails in conjunction allows the machine to establish things like:

  • Does this person usually receive ZIP files?
  • Does this supplier usually send links to Dropbox?
  • Has this sender ever logged in from China?
  • Do these recipients usually get the same emails together?
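A heavily simplified sketch of this anomaly-scoring idea follows; the handful of invented features and the off-the-shelf IsolationForest model stand in for the thousands of data points and proprietary models described above.

```python
# Heavily simplified sketch of anomaly detection over per-email features.
# The three features and the IsolationForest model are illustrative stand-ins only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Feature vector per email: [has_zip_attachment, has_file_share_link, sender_geo_is_new]
history = np.array([
    [0, 0, 0],
    [0, 1, 0],
    [0, 0, 0],
    [1, 0, 0],
] * 50)  # emails previously observed for this organization (its 'pattern of life')

detector = IsolationForest(contamination=0.01, random_state=0).fit(history)

new_email = np.array([[1, 1, 1]])  # ZIP + file-share link + never-seen sender location
print(detector.decision_function(new_email))  # a low (negative) score marks it as anomalous
```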

The technology identifies patterns across an entire organization and gains a continuously evolving sense of ‘self’ as the organization grows and changes. It is this innate understanding of what is and isn’t ‘normal’ that allows AI to spot the truly ‘unknown unknowns’ instead of just ‘new variations of known bads.’

This type of analysis brings an additional advantage in that it is language and topic agnostic: because it focuses on anomaly detection rather than finding specific patterns that indicate threat, it is effective regardless of whether an organization typically communicates in English, Spanish, Japanese, or any other language.

By layering both of these approaches, you can understand the intention behind an email and understand whether that email belongs given the context of normal communication. And all of this is done without ever making an assumption or having the expectation that you’ve seen this threat before.

Years in the making

It’s well established now that the legacy approach to email security has failed – and this makes it easy to see why existing recommendation engines are being applied to the cyber security space. At first glance, these solutions may be appealing to a security team, but highly targeted, truly unique spear phishing emails easily skirt these systems. They can’t be relied on to stop email threats on the first encounter, as they depend on known attacks with previously seen topics, domains, and payloads.

An effective, layered AI approach takes years of research and development. There is no single mathematical model to solve the problem of determining malicious emails from benign communication. A layered approach accepts that competing mathematical models each have their own strengths and weaknesses. It autonomously determines the relative weight these models should have and weighs them against one another to produce an overall ‘anomaly score’ given as a percentage, indicating exactly how unusual a particular email is in comparison to the organization’s wider email traffic flow.
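As a toy illustration of weighing competing models against one another, the sketch below combines a few per-model scores into a single percentage; the scores, weights, and weighting logic here are invented for illustration, whereas in the approach described above the relative weights are determined autonomously.

```python
# Toy illustration of combining competing models into one anomaly percentage.
# Scores and weights are invented for illustration; the article describes the
# relative weights being determined autonomously by the system itself.
model_scores = {          # each model scores the email between 0.0 and 1.0
    "intent_analysis": 0.35,
    "link_rarity": 0.90,
    "sender_history": 0.80,
}
model_weights = {
    "intent_analysis": 0.2,
    "link_rarity": 0.5,
    "sender_history": 0.3,
}

anomaly_score = sum(model_scores[m] * model_weights[m] for m in model_scores)
print(f"Anomaly score: {anomaly_score:.0%}")  # 'Anomaly score: 76%'
```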

It is time for email security to well and truly drop the assumption that you can look at threats of the past to predict tomorrow’s attacks. An effective AI cyber security system can identify abnormalities with no reliance on historical attacks, enabling it to catch truly unique novel emails on the first encounter – before they land in the inbox.


May 12, 2026

Resilience at the Speed of AI: Defending the Modern Campus with Darktrace


Why higher education is a different cybersecurity battlefield

After four decades in IT, now serving as both CIO and CISO, I’ve learned one simple truth: cybersecurity is never “done.” It’s a constant game of cat and mouse. Criminals evolve. Technologies advance. Regulations expand. But in higher education, the challenge is uniquely complex.

Unlike a bank or a military installation, we can’t lock down networks to a narrow set of approved applications. Higher education environments are open by design. Students collaborate globally, faculty conduct cutting-edge research, and administrators manage critical operations, all of which require seamless access to the internet, global networks, cloud platforms, and connected systems.

Combine that openness with expanding regulatory mandates and tight budgets, and the balancing act becomes clear.

Threat actors don’t operate under the same constraints. Often well-funded and sponsored by nation-states with significant resources, they’re increasingly organized, strategic, and innovative.

That sophistication shows up in the tactics we face every day, from social engineering and ransomware to AI-driven impersonation attacks. We’re dealing with massive volumes of data, countless signals, and a very small window between detection and damage.

No human team, no matter how talented or how numerous, can manually sift through that noise at the speed required.

Discovering a force multiplier

Nothing in cybersecurity is 100% foolproof. I never “set it and forget it.” But for institutions balancing rising threats and finite resources, the Darktrace ActiveAI Security Platform™ offers something incredibly valuable: peace of mind through speed and scale.

It closes the gap between detection and response in a way humans can’t possibly match. At the speed of light, it can quarantine, investigate, and contain anomalous activity.

I’ve purchased and deployed Darktrace three separate times at three different institutions because I’ve seen firsthand what it can do and what it enables teams like mine to achieve.

I first encountered Darktrace while serving as CIO for a large multi-campus college system. What caught my attention was Darktrace's Self-Learning AI, and its ability to learn what "normal" looked like across our network. Instead of relying solely on static signatures or rigid rules, Darktrace built a behavioral baseline unique to our environment and alerted us in real time when something simply didn’t look right.

In higher education, where strict lockdowns aren’t realistic, that behavioral model made all the difference. We deployed it across five campuses, and the impact was immediate. Operating 24/7, Darktrace surfaced threats in ways our team couldn’t replicate manually.

Over time, the Darktrace platform evolved alongside the changing threat landscape, expanding into intrusion prevention, cloud visibility, and email security. At subsequent institutions, including Washington College, Darktrace was one of my first strategic investments.

Revealing the hidden threat other tools missed

One of the most surprising investigations of my career involved a data leak. Leadership suspected sensitive information from high-level meetings was being exposed, but our traditional tools couldn’t provide any answers.

Using Darktrace’s deep network visibility, down to packet-level data, we traced unusual connections to our CCTV camera system, which had been configured with a manufacturer’s default password. A small group of employees had hacked into the CCTV cameras, accessed audio-enabled recordings from boardroom meetings, and stored copies locally.

No other tool in our environment could have surfaced those connections the way Darktrace did. It was a clear example of why it matters to use AI to deeply understand how your organization, systems, and tools normally behave: threats and risks don’t always look the way we expect.

Elevating a D-rating into an A-level security program

When I arrived at my last CISO role, the institution had recently experienced a significant ransomware attack. Attackers had located sensitive data and used it to set ransom demands at an amount they knew would likely result in payment. It was a sobering example of how calculated and strategic modern cybercriminals have become.

Third-party cyber ratings reflected that reality: the institution carried a D rating.

To raise the bar, we implemented a comprehensive security program and integrated layered defenses, deploying state-of-the-art tools and methods across the environment, with Darktrace at its core.

After a 90-day learning period to establish our behavioral baseline, we transitioned the platform into fully autonomous mode. In a single 30-day span, Darktrace conducted more than 2,500 investigations and autonomously resolved 92% of all false positives.

For a small team, that’s transformative. Instead of drowning in alerts, my staff focused on fewer than 200 meaningful cases that warranted human review.

Today, we maintain a perfect A rating from third-party assessors and have remained cybersafe.

Peace of mind isn’t about complacency

The effect of Darktrace as a force multiplier has a real human impact.

With the time reclaimed through automation, we expanded community education programs and implemented simulated phishing exercises. Through sustained training and awareness efforts, we reduced social engineering susceptibility from nearly 45% to under 5%.

On a personal level, Darktrace allows me to sleep better at night and take time off knowing we have intelligent systems monitoring and responding around the clock. For any CIO or CISO carrying institutional risk on their shoulders, that matters.

The next era: AI vs. AI

A new chapter in cybersecurity is unfolding as adversaries leverage AI to enhance scale, speed, and believability. Phishing campaigns are more personalized, impersonation attempts are more precise, and deepfake video technology, including live video, is disturbingly authentic. At the same time, organizations are rapidly adopting AI across their own environments, from GenAI assistants to embedded tools to autonomous agents. These systems don’t operate within fixed rules. They act across email, cloud, SaaS, and identity systems, often with broad permissions, and their behavior can evolve over time in ways that are difficult to predict or control.

That creates a new kind of security challenge. It’s not just about defending against AI-powered threats but understanding and governing how AI behaves within your environment, including what it can access, how it acts, and where risk begins to emerge.

From my perspective, this is a natural next step for Darktrace.

Darktrace brings a level of maturity and behavioral understanding uniquely suited to the complexity of AI environments. Self-Learning AI learns the normal patterns of each business to interpret context, uncover subtle intent, and detect meaningful deviations without relying on predefined rules or signatures. Extending into securing AI – bringing real-time visibility and control to GenAI assistants, AI agents, development environments, and Shadow AI – feels like the logical evolution of what Darktrace already does so well.

Just as importantly, Darktrace is already built for dynamic, cross-domain environments where risk doesn’t sit in a single tool or control plane. In higher education, activity already spans multiple systems and, with AI, that interconnection only accelerates.

Having deployed Darktrace multiple times, I have confidence it’s uniquely positioned to lead in this space and help organizations adopt AI with greater visibility and control.

---

Since authoring this blog, Irving Bruckstein has transitioned to the role of Chief Executive Officer of CyberAIgroup.

About the author
Irving Bruckstein
CEO CyberAIgroup

May 11, 2026

The Next Step After Mythos: Defending in a World Where Compromise is Expected


Is Anthropic’s Mythos a turning point for cybersecurity?

Anthropic’s recent announcements around their Mythos model, alongside the launch of Project Glasswing, have generated significant interest across the cybersecurity industry.

The closed-source nature of the Mythos model has understandably attracted a degree of skepticism around some of the claims being made. Additionally, Project Glasswing was initially positioned as a way for software vendors to accelerate the proactive discovery of vulnerabilities in their own code; however, much of the attention has focused on the potential for AI to identify exploitable vulnerabilities for those with malicious intent.

Putting questions around the veracity of those claims to one side – which, for what it’s worth, do appear to be at least partially endorsed by independent bodies such as the UK’s AI Security Institute – this should not be viewed as a critical turning point for the industry. Rather, it reflects the natural direction of travel.

How Mythos affects cybersecurity teams  

At Darktrace, extolling the virtues of AI within cybersecurity is understandably close to our hearts. However, taking a step back from the hype, we’d like to consider what developments like this mean for security teams.

Whether it’s Mythos or another model yet to be released, it’s worth remembering that there is no fundamental difference between an AI-discovered vulnerability and one discovered by a human. The change is in the pace of discovery and, some may argue, a lower barrier to entry.

In the hands of a software developer, this is unquestionably positive. Faster discovery enables earlier remediation and more proactive security. But in the hands of an attacker, the same capability will likely lead to a greater number of exploitable vulnerabilities being used in the wild and, critically, vulnerabilities that are not yet known to either the vendor or the end user.

That said, attackers have always been able to find exploitable vulnerabilities and use them undetected for extended periods of time. The use of AI does not fundamentally change this reality, but it does make the process faster and, unfortunately, more likely to occur at scale.

While tools such as Darktrace / Attack Surface Management and Darktrace / Proactive Exposure Management can help security teams prioritize where to patch, the emergence of AI-driven vulnerability discovery reinforces an important point: patching alone is not a sufficient control against modern cyber-attacks.

Rethinking defense for a world where compromise is expected

Rather than assuming vulnerabilities can simply be patched away, defenders are better served by working from the assumption that their software is already vulnerable – and always will be – and building their security strategy accordingly.

Under that assumption, defenders should expect initial access, particularly across internet exposed assets, to become easier for attackers. What matters then is how quickly that foothold is detected, contained, and prevented from expanding.

For defenders, this places renewed emphasis on a few core capabilities:

  • Secure-by-design architectures and blast radius reduction, particularly around identity, MFA, segmentation, and Zero Trust principles
  • Early, scalable detection and containment, favoring behavioral and context-driven signals over signatures alone
  • Operational resilience, with the expectation of more frequent early-stage incidents that must be managed without burning out teams

How Darktrace helps organizations proactively defend against cyber threats

At Darktrace, we support security teams across all three of these critical capabilities through a multi-layered AI approach. Our Self-Learning AI learns what’s normal for your organization, enabling real-time threat detection, behavioral prediction, incident investigation, and autonomous response – all while empowering your security team with visibility and control.

To learn more about Darktrace’s application of AI to cybersecurity, download our White Paper here.

Reducing blast radius through visibility and control

Secure-by-design principles depend on understanding how users, devices, and systems behave. By learning the normal patterns of identity and network activity, Darktrace helps teams identify when access is being misused or when activity begins to move beyond expected boundaries. This makes it possible to detect and contain lateral movement early, limiting how far an attacker can progress even after initial access.
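As a minimal, hypothetical sketch of that baselining idea (not the product’s implementation): record where each identity normally connects during a learning window, then flag connections that fall outside that baseline.

```python
# Minimal sketch of behavioral baselining for identity and network activity.
# Data and logic are hypothetical, shown only to illustrate the idea.
from collections import defaultdict

baseline = defaultdict(set)  # identity -> destinations seen during the learning window

observed = [
    ("alice", "mail.internal"),
    ("alice", "fileserver.internal"),
    ("build-agent-3", "repo.internal"),
]
for identity, destination in observed:
    baseline[identity].add(destination)

def is_anomalous(identity: str, destination: str) -> bool:
    """Flag a connection that falls outside the identity's learned baseline."""
    return destination not in baseline[identity]

# A service account suddenly reaching a sensitive host stands out immediately,
# giving defenders a chance to contain lateral movement early:
print(is_anomalous("build-agent-3", "hr-database.internal"))  # True
```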

Detecting and containing threats at the earliest stage  

As AI accelerates vulnerability discovery, defenders need to identify exploitation before it is formally recognized. Darktrace’s behavioral understanding approach enables detection of subtle deviations from normal activity, including those linked to previously unknown vulnerabilities.

A key example of this is our research on identifying cyber threats before public CVE disclosures, demonstrating that assessing activity against what is normal for a specific environment, rather than relying on predefined indicators of compromise, enables detection of intrusions exploiting previously unknown vulnerabilities days or even weeks before details become publicly available.

Additionally, our Autonomous Response capability provides fast, targeted containment focused on the most concerning events, while allowing normal business operations to continue. This has consistently shown that even when attackers use techniques never seen before, Darktrace’s Autonomous Response can contain threats before they have a chance to escalate.

Scaling response without increasing operational burden

As early-stage incidents become more frequent, the ability to investigate and respond efficiently becomes critical. Darktrace’s Cyber AI Analyst automatically correlates activity across the environment, prioritizing the most significant threats and reducing the need for manual triage. This allows security teams to respond faster and more consistently, without increasing workload or burnout.

What effective defense looks like in an AI-accelerated landscape

Developments like Mythos highlight a reality that has been building for some time: the window between exposure and exploitation is shrinking, and in many cases, it may disappear entirely. In that environment, relying on patching alone becomes increasingly reactive, leaving little room to respond once access has been established.

The more durable approach is to assume that compromise will occur and focus on controlling what happens next. That means identifying early signs of misuse, containing threats before they spread, and maintaining visibility across the environment so that isolated signals can be understood in context.

AI plays a role on both sides of this equation. While it enables attackers to move faster, it also gives defenders the ability to detect subtle changes in behavior, prioritize what matters, and respond in real time. The advantage will not come from adopting AI in isolation, but from applying it in a way that reduces the gap between detection and action.

AI may be accelerating parts of the attack lifecycle, but the fundamentals of defense, detection, and containment still apply. If anything, they matter more than ever – and AI is just as powerful a tool for defenders as it is for attackers.

To learn more about Darktrace and Mythos, read more on our blog: Mythos vs Ethos: Defending in an Era of AI-Accelerated Vulnerability Discovery


About the author
Toby Lewis
Head of Threat Analysis