March 29, 2023

Email Security & Future Innovations: Educating Employees

As attackers shift to more targeted and sophisticated email attacks, Darktrace stresses the importance of behavioral, AI-driven protection and of educating employees.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Dan Fein
VP, Product

In an escalating threat landscape with email as the primary target, IT teams need to move far beyond traditional methods of email security that haven’t evolved fast enough – they’re trained on historical attack data, so only catch what they’ve seen before. By design, they are permanently playing catch-up to continually innovating attackers, taking an average of 13 days to recognize new attacks.[1]

Phishing attacks are getting more targeted and sophisticated as attackers innovate in two key areas: delivery tactics, and social engineering. On the malware delivery side, attackers are increasingly ‘piggybacking’ off the legitimate infrastructure and reputations of services like SharePoint and OneDrive, as well as legitimate email accounts, to evade security tools. 

To evade the human on the other end of the email, attackers are tapping into new social engineering tactics, exploiting fear, uncertainty, and doubt (FUD) and evoking a sense of urgency as ever, but now have tools at their disposal to enable tailored and personalized social engineering at scale. 

With the help of tools such as ChatGPT, threat actors can leverage AI technologies to impersonate trusted organizations and contacts – including damaging business email compromises, realistic spear phishing, spoofing, and social engineering. In fact, Darktrace found that the average linguistic complexity of phishing emails has jumped by 17% since the release of ChatGPT.  
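To make “linguistic complexity” concrete, here is a toy sketch in Python (our own illustration – Darktrace’s actual metric is not published here) using average sentence and word length as a crude proxy:

```python
# Hypothetical sketch: one simple proxy for "linguistic complexity" is
# average sentence length combined with average word length. The real
# measurement is more sophisticated; this only illustrates the idea.
import re

def complexity_score(text: str) -> float:
    """Crude complexity proxy: mean words/sentence plus mean chars/word."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    words_per_sentence = len(words) / len(sentences)
    chars_per_word = sum(len(w) for w in words) / len(words)
    return words_per_sentence + chars_per_word

plain = "Click here. Reset your password now."
crafted = ("Following our recent infrastructure migration, we kindly request "
           "that you re-validate your credentials via the secure portal below.")

# The AI-polished lure scores markedly higher than the crude one.
assert complexity_score(crafted) > complexity_score(plain)
```

Even this crude proxy separates a hastily written lure from an AI-polished one, which is exactly the shift observed at scale.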

This is just one example of accelerating attack sophistication – lowering the barrier to entry and improving outcomes for attackers. It forms part of a wider trend of the attack landscape moving from low-sophistication, low-impact, and generic phishing tactics - a 'spray and pray' approach - to more targeted, sophisticated, and higher impact attacks that fall outside of the typical detection remit for any tool relying on rules and signatures. Generative AI and other technologies in the attackers' toolkit will soon enable the launch of these attacks at scale, and only being able to catch known threats that have been seen before will no longer be enough.

Figure 1: The progression of attacks and relative coverage of email security tools

The vast majority of email security tools look to the past to try and predict the next attack – they are designed to catch today’s attacks tomorrow.

Organizations are increasingly moving towards AI systems, but not all AI is the same, and the application of that AI is crucial. IT and security teams need to move towards email security that is context-aware and leverages AI for deep behavioral analysis. It’s a proven approach, successfully catching attacks that slip by other tools across thousands of organizations. Email security today also needs to be about more than just protecting the inbox: it must address not only malicious emails but a full 360-degree view of a user across their email messages and accounts, as well as extended coverage where email bleeds into collaboration tools and SaaS. For many organizations, the question is not if they should upgrade their email security, but when – how much longer can they risk relying on email security that’s stuck looking to the past?

The Email Security Industry: Playing Catch-Up

Gateways and ICES (Integrated Cloud Email Security) providers have something in common: they look to past attacks in order to try to predict the future. They often rely on previous threat intelligence and on assembling ‘deny-lists’ of known bad elements of emails already identified as malicious – these tools fail to meet the reality of the contemporary threat landscape. Some of these tools attempt to use AI to improve this flawed approach, looking not only for direct matches, but using "data augmentation" to try and find similar-looking emails. But this approach is still inherently blind to novel threats. 
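To illustrate why deny-lists are inherently blind to novel threats, consider this minimal sketch (illustrative only – the URLs and lookup logic are invented for the example):

```python
# Illustrative sketch (not any vendor's code): a deny-list catches exact
# matches but is blind to a trivially mutated variant of the same link.
DENY_LIST = {"login-micros0ft.com/reset", "secure-paypa1.net/verify"}

def deny_list_verdict(url: str) -> str:
    """Exact-match lookup against known-bad URLs."""
    return "blocked" if url in DENY_LIST else "allowed"

# The exact known-bad URL is caught...
assert deny_list_verdict("login-micros0ft.com/reset") == "blocked"
# ...but a one-character mutation sails straight through.
assert deny_list_verdict("login-micros00ft.com/reset") == "allowed"
```

An attacker only needs to register a fresh domain or mutate a path to reset the defender’s knowledge to zero.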

These tools tend to be resource-intensive, requiring constant policy maintenance combined with the hand-to-hand combat of releasing held-but-legitimate emails and holding back malicious phishing emails. This burden of manually releasing individual emails typically falls on security teams, teams that are frequently small with multiple areas of responsibility. The solution is to deploy technology that autonomously stops the bad while allowing the good through, and adapts to changes in the organization – technology that actually fits the definition of ‘set and forget’.  

Becoming behavioral and context-aware  

There is a seismic shift underway in the industry, from “secure” email gateways to intelligent, AI-driven thinking. The right approach is to understand the behaviors of end users – how each person uses their inbox and what constitutes ‘normal’ for each user – in order to detect what’s not normal. It makes use of context – how and when people communicate, and with whom – to spot the unusual and to flag to the user when something doesn’t look quite right – and why. Basically, a system that understands you. Not past attacks.

Darktrace has developed a fundamentally different approach to AI, one that doesn’t learn what’s dangerous from historical data but from a deep continuous understanding of each organization and their users. Only a complex understanding of the normal day-to-day behavior of each employee can accurately determine whether or not an email actually belongs in that recipient’s inbox. 
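As a rough illustration of the idea (not Darktrace’s actual models, which are far richer), a per-user profile of normal correspondents can score how surprising a new sender is:

```python
# Hypothetical sketch of per-user behavioral profiling. A profile of who a
# user normally hears from lets never-seen senders be flagged as anomalous.
from collections import Counter

class UserProfile:
    """Tracks how often a user receives mail from each sender."""
    def __init__(self):
        self.sender_counts = Counter()
        self.total = 0

    def observe(self, sender: str) -> None:
        self.sender_counts[sender] += 1
        self.total += 1

    def anomaly_score(self, sender: str) -> float:
        """1.0 = never seen before; closer to 0.0 = routine contact."""
        if self.total == 0:
            return 1.0
        return 1.0 - (self.sender_counts[sender] / self.total)

profile = UserProfile()
for _ in range(3):
    profile.observe("colleague@example.com")
profile.observe("vendor@example.org")

assert profile.anomaly_score("colleague@example.com") == 0.25
assert profile.anomaly_score("never-seen@evil.example") == 1.0
```

The real system considers far more than sender frequency – timing, tone, links, and relationships across the organization – but the principle is the same: the baseline is the user, not a list of past attacks.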

Whether it’s phishing, ransomware, invoice fraud, executive impersonation, or a novel technique, leveraging AI for behavioral analysis allows for faster decision-making – it doesn’t need to wait for a Patient Zero to contain a new attack because it can stop malicious threats on first encounter. This increased confidence in detection allows for a more precise response – targeted action to remove only the riskiest parts of an email, rather than taking a broad blanket response out of caution – in order to reduce risk with minimal disruption to the business.

Returning to our attack spectrum, as the attack landscape moves increasingly towards highly sophisticated attacks that use novel or seemingly legitimate infrastructure to deliver malware and induce victims, it has never been more important to detect and issue an appropriate response to these high-impact and targeted attacks. 

Fig 2: How Darktrace combines with native email security to cover the full spectrum of attacks

Understanding you and a 360° view of the end user  

We know that modern email security isn’t limited to the inbox alone – it has to encompass a full understanding of a user’s normal behavior across email and beyond. Traditional email tools are focused solely on inbound email as the point of breach, which fails to protect against the potentially catastrophic damage caused by a successful email attack once an account has been compromised.    

Fig 3: A 360° understanding of a user reveals their digital touchpoints beyond Microsoft

In order to have complete context around what is normal for a user, it’s crucial to understand their activity within Microsoft 365, Google Workspace, Salesforce, Dropbox, and even their device on the network. Monitoring devices (as well as inboxes) for symptoms of infection is crucial to determining whether or not an email has been malicious, and whether similar emails need to be withheld in the future. Combining this with data from cloud apps enables a more holistic view of identity-based attacks.

Understanding a user in the context of the whole organization – which also means network, cloud, and endpoint data – brings additional context to light to improve decision-making. Connecting email security with external data on the attack surface can help proactively find malicious domains, so that defenses can be hardened before an attack is even launched.

Educating and Engaging Your Employees

Ultimately, it’s employees who interact with any given email. If organizations can successfully empower this user base, they will end up with a smarter workforce, fewer successful attacks, and a security team with more time on their hands for better, strategic work. 

The tools that succeed best will be those that can leverage AI to help employees become more security-conscious. While some emails are evidently malicious and should never enter an employee’s inbox, there is a significant grey area of emails that have potentially risky elements. The majority of security tools will either withhold these emails completely – even though they might be business critical – or let them through scot-free. But what if these grey-area emails could in fact be used as training opportunities?    

In contrast to phishing simulation tools, behavioral AI can improve security awareness holistically throughout organizations by training users with a light touch via their own inboxes – bringing the end user into the loop to harden defenses.

The new frontier of email security fights AI with AI, and organizations who lag behind might end up learning the hard way. Read on for our blog series about how these technologies can transform the employee experience, dynamize deployment, augment security teams and form part of an integrated defensive loop.    

[1] 13 days is the mean average of phishing payloads active in the wild between the response of Darktrace/Email compared to the earliest of 16 independent feeds submitted by other email security technologies.



April 21, 2025

Why Asset Visibility and Signature-Based Threat Detection Fall Short in ICS Security


In the realm of Industrial Control System (ICS) security, two concepts often dominate discussions:

  1. Asset visibility
  2. Signature-based threat detection

While these are undoubtedly important components of a cybersecurity strategy, many organizations focus on them as the primary means to enhance ICS security. However, this is a short-term approach, and these organizations often realize too late that such efforts do not translate into actually securing their environment.

To truly secure their environments, organizations should focus their efforts on anomaly detection across core network segments. This shift enables enhanced threat detection while also providing a more meaningful and dynamic view of asset communication.

By prioritizing anomaly detection, organizations can build a more resilient security posture, detecting and mitigating threats before they escalate into serious incidents.

The shortcomings of asset visibility and signature-based threat detection

Asset visibility is frequently touted as the foundation of ICS security. The idea is that you cannot protect what you cannot see.

However, organizations that invest heavily in asset discovery tools often end up with extensive inventories of connected devices but little actionable insight into their security posture or risk level, let alone any indication as to whether these assets have been compromised.

Simply knowing what assets exist does not equate to securing them.

Worse, asset discovery is often a time-consuming, static process. By the time practitioners complete their inventory, not only are there likely to have been changes to their assets, but the threat landscape may have already evolved, introducing new vulnerabilities and attack vectors that were not previously accounted for.

Signature-based detection is reactive, not proactive

Traditional signature-based threat detection relies on known attack patterns and predefined signatures to identify malicious activity. This approach is fundamentally reactive because it can only detect threats that have already been identified elsewhere.

In an ICS environment where cyber-attacks on OT systems have become more frequent, sophisticated, and destructive, signature-based detection provides a false sense of security while failing to detect sophisticated, previously unseen threats.

Additionally, adversaries often dwell within OT networks for extended periods, studying their specific conditions to identify the most effective way to cause disruption. This means that any attack within an OT network is unlikely to look the same as a previous attack.

Implementation effort vs. actual security gains

Many organizations spend considerable time and resources implementing asset visibility solutions and signature-based detection systems, only to find themselves constantly tuning and adjusting the sensitivity of those solutions.

Despite these efforts, these tools often fail to deliver the level of protection expected, leaving gaps in detection, an overwhelming amount of asset data, and a constant stream of false positives and false negatives from signature-based systems.

A more effective approach: Anomaly detection at core network segments

While it's important to understand the type of device involved during alert triage, organizations should shift their focus from static asset visibility and threat signatures to anomaly detection across critical network segments. This method provides a superior approach to ICS security for several reasons:

Proactive threat detection

Anomaly detection monitors network behavior in real time and identifies deviations from an established baseline of normal activity. This means that even novel or previously unseen threats can be detected based on unusual network activity, rather than relying on predefined signatures.
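A minimal sketch of the idea (an assumed illustration, not any vendor’s implementation) is to learn a statistical baseline for a segment’s traffic volume and flag large deviations:

```python
# Minimal sketch: learn a baseline of traffic volume per interval on a
# network segment, then flag intervals that deviate by more than a few
# standard deviations. Real systems model many more features than volume.
import statistics

def find_anomalies(baseline, observed, threshold=3.0):
    """Return indices of observations beyond `threshold` std-devs from the baseline mean."""
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline) or 1.0  # guard against zero variance
    return [i for i, x in enumerate(observed) if abs(x - mean) / stdev > threshold]

# A week of normal traffic volumes for a PLC segment (arbitrary units)...
baseline = [100, 102, 98, 101, 99, 100, 103]
# ...then one interval with a sudden, exfiltration-sized spike.
observed = [101, 99, 450, 100]
assert find_anomalies(baseline, observed) == [2]
```

Nothing here depends on a signature: the spike is flagged because it is abnormal for this segment, regardless of whether the technique behind it has ever been seen before.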

Granular security insights

By analyzing traffic patterns across key network segments, organizations can gain deeper insights into how assets interact. This not only improves threat detection but also organically enhances asset visibility. Instead of simply cataloging devices, organizations gain meaningful visibility into how they behave within the network, understanding their unique pattern of life, and making it easier to detect malicious activity.

Efficiency and scalability

Implementing anomaly detection allows security teams to focus on real threats rather than sifting through massive inventories of assets or managing signature updates. It scales better with evolving threats and provides continuous monitoring without requiring constant manual intervention.

Enhanced threat detection for critical infrastructure

Unlike traditional security approaches that rely on static baselines or threat intelligence that doesn't reflect the unique behaviors of your OT environment, Darktrace / OT uses multiple AI techniques to continuously learn and adapt to your organization’s real-world activity across IT, OT, and IoT.

By building a dynamic understanding of each device’s pattern of life, it detects threats at every stage of the kill chain — from known malware to zero-days and insider attacks — without overwhelming your team with false positives or unnecessary alerts. This ensures scalable protection as your environment evolves, without a significant increase in operational overhead.


About the author
Jeffrey Macre
Industrial Security Solutions Architect


April 16, 2025

Introducing Version 2 of Darktrace’s Embedding Model for Investigation of Security Threats (DEMIST-2)


DEMIST-2 is Darktrace’s latest embedding model, built to interpret and classify security data with precision. It performs highly specialized tasks and can be deployed in any environment. Unlike generative language models, DEMIST-2 focuses on providing reliable, high-accuracy detections for critical security use cases.

DEMIST-2 Core Capabilities:  

  • Enhances Cyber AI Analyst’s ability to triage and reason about security incidents by providing expert representation and classification of security data, as part of our broader multi-layered AI system
  • Classifies and interprets security data, in contrast to language models that generate unpredictable open-ended text responses  
  • Incorporates new innovations in language model development and architecture, optimized specifically for cybersecurity applications
  • Deployable across cloud, on-prem, and edge environments, DEMIST-2 delivers low-latency, high-accuracy results wherever it runs. It enables inference anywhere.

Cybersecurity is constantly evolving, but the need to build precise and reliable detections remains constant in the face of new and emerging threats. Darktrace’s Embedding Model for Investigation of Security Threats (DEMIST-2) addresses these critical needs and is designed to create stable, high-fidelity representations of security data while also serving as a powerful classifier. For security teams, this means faster, more accurate threat detection with reduced manual investigation. DEMIST-2's efficiency also reduces the need to invest in massive computational resources, enabling effective protection at scale without added complexity.  

As an embedding language model, DEMIST-2 classifies and creates meaning out of complex security data. This equips our Self-Learning AI with the insights to compare, correlate, and reason with consistency and precision. Classifications and embeddings power core capabilities across our products where accuracy is not optional, as a part of our multi-layered approach to AI architecture.

Perhaps most importantly, DEMIST-2 features a compact architecture that delivers analyst-level insights while meeting diverse deployment needs across cloud, on-prem, and edge environments. Trained on a mixture of general and domain-specific data and designed to support task specialization, DEMIST-2 provides privacy-preserving inference anywhere, while outperforming larger general-purpose models in key cybersecurity tasks.

This proprietary language model reflects Darktrace's ongoing commitment to continually innovate our AI solutions to meet the unique challenges of the security industry. We approach AI differently, integrating diverse insights to solve complex cybersecurity problems. DEMIST-2 shows that a refined, optimized, domain-specific language model can deliver outsized results in an efficient package. We are redefining possibilities for cybersecurity, but our methods transfer readily to other domains. We are eager to share our findings to accelerate innovation in the field.  

The evolution of DEMIST-2

Key concepts:  

  • Tokens: The smallest units processed by language models. Text is split into fragments based on frequency patterns, allowing models to handle unfamiliar words efficiently
  • Low-Rank Adaptors (LoRA): Small, trainable components added to a model that allow it to specialize in new tasks without retraining the full system. These components learn task-specific behavior while the original foundation model remains unchanged. This approach enables multiple specializations to coexist, and work simultaneously, without drastically increasing processing and memory requirements.
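The LoRA mechanism described above can be sketched in a few lines (a conceptual illustration in plain Python – DEMIST-2’s actual architecture is more sophisticated): the frozen base weight W is shared, while the small matrices A and B carry the task-specific behavior.

```python
# Conceptual sketch of a LoRA-adapted linear layer. The frozen base weight
# W is shared across tasks; only the small low-rank matrices A and B are
# task-specific, so many specializations can coexist cheaply.
def matvec(m, v):
    """Multiply matrix m (list of rows) by vector v."""
    return [sum(r * x for r, x in zip(row, v)) for row in m]

def lora_forward(W, A, B, x, scale=1.0):
    """y = W x + scale * B (A x); A is r x d, B is d_out x r, with small rank r."""
    base = matvec(W, x)
    low_rank = matvec(B, matvec(A, x))
    return [b + scale * l for b, l in zip(base, low_rank)]

W = [[1.0, 0.0], [0.0, 1.0]]  # frozen 2x2 base weight (identity here)
A = [[1.0, 1.0]]              # rank-1 down-projection (1 x 2)
B = [[0.5], [0.0]]            # rank-1 up-projection (2 x 1)

# With scale=0 the adapter is inert and the base model is unchanged;
# with scale=1 the rank-1 update shifts only the first output.
assert lora_forward(W, A, B, [2.0, 3.0], scale=0.0) == [2.0, 3.0]
assert lora_forward(W, A, B, [2.0, 3.0], scale=1.0) == [4.5, 3.0]
```

Because only A and B differ per task, swapping specializations costs a handful of parameters rather than a full model copy.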

Darktrace began using large language models in our products in 2022. DEMIST-2 reflects significant advancements in our continuous experimentation and adoption of innovations in the field to address the unique needs of the security industry.  

It is important to note that Darktrace uses a range of language models throughout its products, but each one is chosen for the task at hand. Many others in the artificial intelligence (AI) industry are focused on broad application of large language models (LLMs) for open-ended text generation tasks. Our research shows that using LLMs for classification and embedding offers better, more reliable results for core security use cases. We’ve found that using LLMs for open-ended outputs can introduce uncertainty through inaccurate and unreliable responses, which is detrimental for environments where precision matters. Generative AI should not be applied to use cases, such as investigation and threat detection, where the results deeply matter. Thoughtful applications of generative AI capabilities, such as drafting decoy phishing emails or crafting non-consequential summaries, are helpful but still require careful oversight.

Data is perhaps the most important factor for building language models. The data used to train DEMIST-2 balanced the need for general language understanding with security expertise. We used both publicly available and proprietary datasets.  Our proprietary dataset included privacy-preserving data such as URIs observed in customer alerts, anonymized at source to remove PII and gathered via the Call Home and aianalyst.darktrace.com services. For additional details, read our Technical Paper.  

DEMIST-2 is our way of addressing the unique challenges posed by security data. It recognizes that security data follows its own patterns that are distinct from natural language. For example, hostnames, HTTP headers, and certificate fields often appear in predictable ways, but not necessarily in a way that mirrors natural language. General-purpose LLMs tend to break down when used in these types of highly specialized domains. They struggle to interpret structure and context, fragmenting important patterns during tokenization in ways that can have a negative impact on performance.  

DEMIST-2 was built to understand the language and structure of security data using a custom tokenizer built around a security-specific vocabulary of over 16,000 words. This tokenizer allows the model to more accurately process inputs like encoded payloads, file paths, subdomain chains, and command-line arguments – types of data that are often misinterpreted by general-purpose models.

When the tokenizer encounters unfamiliar or irregular input, it breaks the data into smaller pieces so it can still be processed. The ability to fall back to individual bytes is critical in cybersecurity contexts where novel or obfuscated content is common. This approach combines precision with flexibility, supporting specialized understanding with resilience in the face of unpredictable data.  
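The fallback behavior can be illustrated with a toy tokenizer (the tiny vocabulary here is invented for the example – the real 16,000-word security vocabulary is proprietary):

```python
# Illustrative sketch of vocabulary tokenization with byte-level fallback.
# Known substrings become whole tokens; anything unfamiliar degrades
# gracefully to individual bytes instead of failing.
VOCAB = {"evil", "example", "com", "."}

def tokenize(text: str):
    """Greedy longest-match against the vocab; unknown spans fall back to raw bytes."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest match first
            if text[i:j] in VOCAB:
                tokens.append(text[i:j])
                i = j
                break
        else:                              # no vocab match: emit one byte token
            tokens.append(f"<0x{text[i].encode()[0]:02X}>")
            i += 1
    return tokens

# Known pieces become whole tokens; the obfuscated digit falls back to bytes.
assert tokenize("evil.com") == ["evil", ".", "com"]
assert tokenize("ev1l.com") == ["<0x65>", "<0x76>", "<0x31>", "<0x6C>", ".", "com"]
```

The second case is the important one: an obfuscated hostname still produces a usable token sequence rather than an out-of-vocabulary error, which is exactly the resilience needed against novel or deliberately mangled input.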

Along with our custom tokenizer, we made changes to support task specialization without increasing model size. To do this, DEMIST-2 uses LoRA, a technique that integrates lightweight components with the base model to allow it to perform specific tasks while keeping memory requirements low. By using LoRA, our proprietary representation of security knowledge can be shared and reused as a starting point for more highly specialized models – for example, understanding hostnames requires a different type of specialization than understanding sensitive filenames. DEMIST-2 dynamically adapts to these needs and performs them with purpose.

The result is that DEMIST-2 is like having a room of specialists working on difficult problems together, while sharing a basic core set of knowledge that does not need to be repeated or reintroduced to every situation. Sharing a consistent base model also improves its maintainability and allows efficient deployment across diverse environments without compromising speed or accuracy.  

Tokenization and task specialization represent only a portion of the updates we have made to our embedding model. In conjunction with the changes described above, DEMIST-2 integrates several updated modeling techniques that reduce latency and improve detections. To learn more about these details, our training data and methods, and a full write-up of our results, please read our scientific whitepaper.

DEMIST-2 in action

In this section, we highlight DEMIST-2's embeddings and performance. First, we show a visualization of how DEMIST-2 classifies and interprets hostnames, and second, we present its performance in a hostname classification task in comparison to other language models.  

Embeddings can often feel abstract, so let’s make them real. Figure 1 below is a 2D visualization of how DEMIST-2 classifies and understands hostnames. In reality, these hostnames exist across many more dimensions, capturing details like their relationships with other hostnames, usage patterns, and contextual data. The colors and positions in the diagram represent a simplified view of how DEMIST-2 organizes and interprets these hostnames, providing insights into their meaning and connections. Just like an experienced human analyst can quickly identify and group hostnames based on patterns and context, DEMIST-2 does the same at scale.  
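As a simplified illustration of how embeddings make such grouping possible (the 3-dimensional vectors below are hand-made for the example, not real DEMIST-2 output), cosine similarity places related hostnames closer together than unrelated ones:

```python
# Toy sketch: real embeddings are high-dimensional; these invented 3-D
# vectors just show how cosine similarity lets related hostnames cluster
# while unrelated ones stay apart.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

embeddings = {
    "shop-a.example": [0.9, 0.1, 0.0],  # small-business cluster
    "shop-b.example": [0.8, 0.2, 0.1],
    "cdn77.example":  [0.0, 0.1, 0.9],  # infrastructure cluster
}

same_cluster = cosine(embeddings["shop-a.example"], embeddings["shop-b.example"])
cross_cluster = cosine(embeddings["shop-a.example"], embeddings["cdn77.example"])
assert same_cluster > cross_cluster
```

A 2D visualization like Figure 1 is essentially a projection of exactly these distances down to two dimensions.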

DEMIST-2 visualization of hostname relationships from a large web dataset.
Figure 1: DEMIST-2 visualization of hostname relationships from a large web dataset.

Next, let’s zoom in on two distinct clusters that DEMIST-2 recognizes. One cluster represents small businesses (Figure 2) and the other, Russian and Polish sites with similar numerical formats (Figure 3). These clusters demonstrate how DEMIST-2 can identify specific groupings based on real-world attributes such as regional patterns in website structures, common formats used by small businesses, and other properties such as its understanding of how websites relate to each other on the internet.

Cluster of small businesses
Figure 2: Cluster of small businesses
Figure 3: Cluster of Russian and Polish sites with a similar numerical format

The previous figures provided a view of how DEMIST-2 works. Figure 4 highlights DEMIST-2’s performance in a security-related classification task. The chart shows how DEMIST-2, with just 95 million parameters, achieves nearly 94% accuracy – making it the highest-performing model in the chart, despite being the smallest. In comparison, the larger model with 2.78 billion parameters achieves only about 89% accuracy, showing that larger size doesn’t always mean better performance. For many security-related tasks, DEMIST-2 outperforms much larger models.

Hostname classification task performance comparison against comparable open source foundation models
Figure 4: Hostname classification task performance comparison against comparable open source foundation models

With these examples of DEMIST-2 in action, we’ve shown how it excels in embedding and classifying security data while delivering high performance on specialized security tasks.  

The DEMIST-2 advantage

DEMIST-2 was built for precision and reliability. Our primary goal was to create a high-performance model capable of tackling complex cybersecurity tasks. Optimizing for efficiency and scalability came second, but it is a natural outcome of our commitment to building a strong, effective solution that is available to security teams working across diverse environments. It is an enormous benefit that DEMIST-2 is orders of magnitude smaller than many general-purpose models. More importantly, however, it significantly outperforms larger models in capability and accuracy on security tasks.

Finding a product that fits into an environment’s unique constraints used to mean that some teams had to settle for less powerful or less performant products. With DEMIST-2, data can remain local to the environment, is entirely separate from the data of other customers, and can even operate in environments without network connectivity. The size of our model allows for flexible deployment options while at the same time providing measurable performance advantages for security-related tasks.  

As security threats continue to evolve, we believe that purpose-built AI systems like DEMIST-2 will be essential tools for defenders, combining the power of modern language modeling with the specificity and reliability that builds trust and partnership between security practitioners and AI systems.

Conclusion

DEMIST-2 has additional architectural and deployment updates that improve performance and stability. These innovations contribute to our ability to minimize model size and memory constraints and reflect our dedication to meeting the data handling and privacy needs of security environments. In addition, these choices reflect our dedication to responsible AI practices.

DEMIST-2 is available in Darktrace 6.3, along with a new DIGEST model that uses GNNs and RNNs to score and prioritize threats with expert-level precision.


About the author
Margaret Cunningham, PhD
Director, Security & AI Strategy, Field CISO