May 19, 2020

Understanding a SaaS Attack and How AI Can Investigate

The Cyber AI Platform recently detected and investigated two incidents of SaaS account takeover in real time.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Max Heinemeyer
Global Field CISO

Executive summary

  • Darktrace has observed a significant increase in attacks against SaaS platforms, including file storage, collaborative work, and email solutions.
  • This blog post details two example threats that are representative of the current threat landscape: an Office 365 business email compromise and a Box.com file sharing account compromise.
  • Organizations are recommended to enable multi-factor authentication to combat credential stuffing attacks and the re-use of stolen credentials from data dumps. It is further advised to actively monitor SaaS environments for in-progress cyber-attacks.
  • SaaS exacerbates the skill gap in security – identifying and investigating threats in SaaS environments requires a different skill set from traditional security operations.

Introduction

The digital transformation – whether long planned or forced by the global pandemic – has increased the use of Software-as-a-Service (SaaS) solutions in modern organizations. The annual growth rate of the SaaS market is currently 18%, and as the workforce becomes increasingly remote throughout 2020, this is set to skyrocket.

Attackers have been targeting SaaS solutions for a long time – but almost nobody talks about how the Tactics, Techniques & Procedures (TTPs) in SaaS attacks differ significantly from the traditional TTPs seen in network and endpoint attacks.

How do you create meaningful detections in SaaS environments that don’t have endpoint or network data? How can you investigate threats in a SaaS environment as an analyst? What does a ‘good’ SaaS event look like, and what does a threat look like? Finding skilled security analysts that can work in traditional IT environments is already hard – it gets even harder when trying to hire security people with SaaS domain knowledge.

SaaS consumers are left with only a few choices: either use the native security controls provided in each SaaS solution – and rely on the (non-)maturity of the SaaS provider – or go with a third-party SaaS security solution, often in the form of a Cloud Access Security Broker (CASB). Neither option is ideal.

This blog outlines two attacks we have recently observed in SaaS environments that are representative of the broader SaaS threat landscape: a Microsoft (Office) 365 business email compromise (BEC) and the compromise of a corporate Box.com account. The analysis serves to illuminate the sharp distinction between a traditional network attack and a SaaS compromise – demonstrating how using machine learning to detect anomalies in behavior offers crucial hope for defenders as SaaS applications define this new era of work.

Anonymized SaaS Threat 1: Office 365 Business Email Compromise

Figure 1: The timeline of attack for the Microsoft 365 Compromise

In this case of a classic BEC attack, a threat-actor infiltrated an employee’s Microsoft 365 account to access sensitive financial documents hosted in SharePoint, including pay slip and banking details. The attacker went on to make configuration changes to the hacked inbox, deleting items and making updates that may have allowed them to cover their tracks.

Darktrace first observed the employee’s account logging in from unusual IP ranges. The account had never logged in from Bulgaria before, nor had peer accounts in the same department exhibited similar behavior. This in itself was a low-level anomaly and not necessarily indicative of malicious activity – employees might change locations, after all.

The unusual login location was then accompanied by an unusual login time and a new user-agent. All of these anomalies triggered Cyber AI Analyst – Darktrace’s automated threat investigation technology – to launch a deeper analysis.
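To make these weak signals concrete, below is a minimal sketch – not Darktrace’s implementation – of how a per-user login baseline could flag the combination of a new country, a new user-agent, and an unusual login hour. The class, fields, and sample values are purely illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class LoginBaseline:
    """Illustrative per-user 'pattern of life' for SaaS logins."""
    countries: set = field(default_factory=set)
    user_agents: set = field(default_factory=set)
    login_hours: set = field(default_factory=set)

    def update(self, country: str, user_agent: str, hour: int) -> None:
        """Record a benign login event in the user's history."""
        self.countries.add(country)
        self.user_agents.add(user_agent)
        self.login_hours.add(hour)

    def anomalies(self, country: str, user_agent: str, hour: int) -> list:
        """Return the weak signals raised by a single login event."""
        signals = []
        if country not in self.countries:
            signals.append(f"login from new country: {country}")
        if user_agent not in self.user_agents:
            signals.append("new user-agent")
        if hour not in self.login_hours:
            signals.append(f"unusual login hour: {hour:02d}:00")
        return signals

# A user who normally logs in from the UK during office hours.
baseline = LoginBaseline()
baseline.update("GB", "Outlook/16.0", 9)
baseline.update("GB", "Outlook/16.0", 14)

# One low-level anomaly is weak evidence on its own...
print(baseline.anomalies("GB", "Outlook/16.0", 22))
# ...but several together are a trigger for deeper, automated investigation.
print(baseline.anomalies("BG", "python-requests/2.31", 3))
```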

Darktrace then identified that the account was starting to access highly sensitive information, including payroll information in SharePoint. Two examples that were highlighted by Cyber AI Analyst are shown below:

  • hxxps://anonymised[.]sharepoint[.]com/anonymised/pages/Understanding-my-payslip[.]aspx
  • hxxps://anonymised[.]sharepoint[.]com/anonymised/pages/Changing-my-bank-details[.]aspx

The attacker tried to gain insights about payment information and credit card details, with the likely intention of changing the payroll details to an attacker-controlled bank account. But with its ability to automatically analyze events to piece together attack narratives, Cyber AI Analyst was able to put together these weak signals of a threat and illuminate the likely account compromise. The security team was then able to lock the account and alert the user, who subsequently changed their credentials.

Anonymized SaaS Threat 2: Box.com Compromise

Figure 2: The timeline of attack for the Box.com Compromise

Darktrace observed a case of unauthorized access to a corporate Box.com file storage account belonging to an employee of a global supply company. The Box.com login took place in the US – the same country in which the organization operates – but from an unusual IP space and ASN. This low-level anomaly prompted Cyber AI Analyst to investigate the user’s activity further.

The actor behind the account logged in to Box.com successfully, and then proceeded to download expense reports, invoices, and other financial documents. The account began accessing files that were highly unusual for it to access: Darktrace recognized that neither the account itself nor its peer group had previously accessed the file ‘PASSWORD SHEET.xlsx’.

With Cyber AI’s bespoke knowledge of ‘self’ for every member of the organization’s workforce, the technology was able to identify the threat immediately. The Darktrace Cyber AI Platform detected that the activity occurred at a highly unusual time for the legitimate user, and that the location of the actor’s IP address was also anomalous compared to the employee’s previous access locations for this particular SaaS service.
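As an illustration only – assuming simple access logs rather than Darktrace’s actual modeling – a peer-group rarity check for file access might look like the sketch below. The users, files, and peer groups are invented.

```python
from collections import defaultdict

# Hypothetical access history: user -> files previously accessed.
history = defaultdict(set)
history["alice"].update({"Q3-expenses.xlsx", "travel-policy.pdf"})
history["bob"].update({"Q3-expenses.xlsx", "supplier-invoices.pdf"})
history["carol"].update({"travel-policy.pdf"})

# Peers are colleagues whose normal activity resembles the user's.
peer_groups = {
    "alice": {"bob", "carol"},
    "bob": {"alice", "carol"},
    "carol": {"alice", "bob"},
}

def file_access_is_rare(user: str, filename: str) -> bool:
    """Flag an access if neither the user nor their peer group has
    touched the file before (a deliberately simplified rarity test)."""
    if filename in history[user]:
        return False
    return not any(filename in history[peer] for peer in peer_groups[user])

print(file_access_is_rare("alice", "Q3-expenses.xlsx"))    # False: normal for this user
print(file_access_is_rare("alice", "PASSWORD SHEET.xlsx")) # True: new to user and peers
```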

While accessing these documents may have been normal for the employee in another context, Darktrace Cyber AI’s deep understanding of user behavior and granular visibility within the Box.com application allowed it to spot the subtle signs of account compromise. Moreover, when Darktrace’s Cyber AI Analyst automatically investigated the threat, it was able to illuminate the wider narrative, understanding that each unauthorized file exposure was part of a connected incident and highlighting the breach as a key concern for the security team.

Conclusion

Traditional detection approaches like ‘more than X failed logins from Y’ are not enough to ensure sufficient security across SaaS applications. Keeping threat intelligence lists up to date is even more difficult, as most SaaS attacks don’t involve any Command & Control – just indiscriminate logins from remote devices. Attackers may use VPN, Tor, other compromised devices, dynamic DNS, or virtual private servers to further mask their tracks.

A more intricate and effective approach to SaaS security requires understanding the dynamic individual behind the account. SaaS applications are fundamentally platforms for humans to communicate – allowing them to exchange and store ideas and information. Abnormal, threatening behavior is therefore impossible to detect without a nuanced understanding of those unique individuals: where and when do they typically access a SaaS account, which files are they likely to access, and who do they typically connect with?

Cyber AI asks these questions, continuously analyzing data not only across SaaS platforms, but from the unique ‘patterns of life’ of every user and device in the organization as a whole. With this context, it can chain together seemingly disparate anomalies – unusual login times, login locations, access of new or unusual files, and hundreds of other indicators of threat. These anomalies then act as a trigger for more in-depth investigations via Cyber AI Analyst that can link the anomalies together and create a coherent attack narrative.

Both of the above SaaS attacks were comprehensively but succinctly investigated and fully reported on by Darktrace’s Cyber AI Analyst, which then surfaced an easy-to-understand incident report, ready for executive review. For a more in-depth look at how Cyber AI Analyst investigated an emerging APT threat in the wild, read: Catching APT41 exploiting a zero-day vulnerability.


April 16, 2025

Introducing Version 2 of Darktrace’s Embedding Model for Investigation of Security Threats (DEMIST-2)


DEMIST-2 is Darktrace’s latest embedding model, built to interpret and classify security data with precision. It performs highly specialized tasks and can be deployed in any environment. Unlike generative language models, DEMIST-2 focuses on providing reliable, high-accuracy detections for critical security use cases.

DEMIST-2 Core Capabilities:  

  • Enhances Cyber AI Analyst’s ability to triage and reason about security incidents by providing expert representation and classification of security data, as part of our broader multi-layered AI system
  • Classifies and interprets security data, in contrast to language models that generate unpredictable open-ended text responses  
  • Incorporates new innovations in language model development and architecture, optimized specifically for cybersecurity applications
  • Deployable across cloud, on-prem, and edge environments, delivering low-latency, high-accuracy inference wherever it runs

Cybersecurity is constantly evolving, but the need to build precise and reliable detections remains constant in the face of new and emerging threats. Darktrace’s Embedding Model for Investigation of Security Threats (DEMIST-2) addresses these critical needs and is designed to create stable, high-fidelity representations of security data while also serving as a powerful classifier. For security teams, this means faster, more accurate threat detection with reduced manual investigation. DEMIST-2's efficiency also reduces the need to invest in massive computational resources, enabling effective protection at scale without added complexity.  

As an embedding language model, DEMIST-2 classifies and creates meaning out of complex security data. This equips our Self-Learning AI with the insights to compare, correlate, and reason with consistency and precision. Classifications and embeddings power core capabilities across our products where accuracy is not optional, as a part of our multi-layered approach to AI architecture.
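To give a rough sense of what comparing and correlating with embeddings means in practice, the sketch below uses hypothetical, hard-coded vectors in place of real DEMIST-2 outputs: similar security artifacts map to nearby vectors, so a simple cosine similarity can group them.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 3-dimensional embeddings for three hostnames; in a real
# system these would be produced by an embedding model, not hard-coded.
emb = {
    "mail.example-corp.com": np.array([0.91, 0.10, 0.05]),
    "smtp.example-corp.com": np.array([0.88, 0.14, 0.07]),
    "xj3-payload.badsite.example": np.array([0.05, 0.95, 0.30]),
}

# Related infrastructure lands close together; unrelated artifacts do not.
print(cosine_similarity(emb["mail.example-corp.com"], emb["smtp.example-corp.com"]))       # high
print(cosine_similarity(emb["mail.example-corp.com"], emb["xj3-payload.badsite.example"])) # low
```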

Perhaps most importantly, DEMIST-2 features a compact architecture that delivers analyst-level insights while meeting diverse deployment needs across cloud, on-prem, and edge environments. Trained on a mixture of general and domain-specific data and designed to support task specialization, DEMIST-2 provides privacy-preserving inference anywhere, while outperforming larger general-purpose models in key cybersecurity tasks.

This proprietary language model reflects Darktrace's ongoing commitment to continually innovate our AI solutions to meet the unique challenges of the security industry. We approach AI differently, integrating diverse insights to solve complex cybersecurity problems. DEMIST-2 shows that a refined, optimized, domain-specific language model can deliver outsized results in an efficient package. We are redefining possibilities for cybersecurity, but our methods transfer readily to other domains. We are eager to share our findings to accelerate innovation in the field.  

The evolution of DEMIST-2

Key concepts:  

  • Tokens: The smallest units processed by language models. Text is split into fragments based on frequency patterns, allowing models to handle unfamiliar words efficiently
  • Low-Rank Adaptors (LoRA): Small, trainable components added to a model that allow it to specialize in new tasks without retraining the full system. These components learn task-specific behavior while the original foundation model remains unchanged. This approach enables multiple specializations to coexist, and work simultaneously, without drastically increasing processing and memory requirements.

Darktrace began using large language models in our products in 2022. DEMIST-2 reflects significant advancements in our continuous experimentation and adoption of innovations in the field to address the unique needs of the security industry.  

It is important to note that Darktrace uses a range of language models throughout its products, each one chosen for the task at hand. Many others in the artificial intelligence (AI) industry are focused on the broad application of large language models (LLMs) for open-ended text generation tasks. Our research shows that using LLMs for classification and embedding offers better, more reliable results for core security use cases. We’ve found that using LLMs for open-ended outputs can introduce uncertainty through inaccurate and unreliable responses, which is detrimental in environments where precision matters. Generative AI should not be applied to use cases, such as investigation and threat detection, where the results deeply matter. Thoughtful application of generative AI capabilities, such as drafting decoy phishing emails or crafting non-consequential summaries, is helpful but still requires careful oversight.

Data is perhaps the most important factor for building language models. The data used to train DEMIST-2 balanced the need for general language understanding with security expertise. We used both publicly available and proprietary datasets.  Our proprietary dataset included privacy-preserving data such as URIs observed in customer alerts, anonymized at source to remove PII and gathered via the Call Home and aianalyst.darktrace.com services. For additional details, read our Technical Paper.  

DEMIST-2 is our way of addressing the unique challenges posed by security data. It recognizes that security data follows its own patterns that are distinct from natural language. For example, hostnames, HTTP headers, and certificate fields often appear in predictable ways, but not necessarily in a way that mirrors natural language. General-purpose LLMs tend to break down when used in these types of highly specialized domains. They struggle to interpret structure and context, fragmenting important patterns during tokenization in ways that can have a negative impact on performance.  

DEMIST-2 was built to understand the language and structure of security data using a custom tokenizer built around a security-specific vocabulary of over 16,000 words. This tokenizer allows the model to more accurately process inputs such as encoded payloads, file paths, subdomain chains, and command-line arguments, which are often misinterpreted by general-purpose models.

When the tokenizer encounters unfamiliar or irregular input, it breaks the data into smaller pieces so it can still be processed. The ability to fall back to individual bytes is critical in cybersecurity contexts where novel or obfuscated content is common. This approach combines precision with flexibility, supporting specialized understanding with resilience in the face of unpredictable data.  
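The byte-level fallback described above can be sketched as follows. The vocabulary, greedy matching, and byte tokens here are invented for illustration and are not DEMIST-2’s actual tokenizer.

```python
# Minimal sketch of tokenization with a byte-level fallback (illustrative only).
VOCAB = {"https://", "sharepoint", ".com", "/pages/", "payslip", "-"}

def tokenize(text: str) -> list:
    """Greedily match known vocabulary tokens; fall back to raw bytes for
    anything unfamiliar, so no input is ever un-processable."""
    tokens, i = [], 0
    candidates = sorted(VOCAB, key=len, reverse=True)  # longest match first
    while i < len(text):
        match = next((v for v in candidates if text.startswith(v, i)), None)
        if match:
            tokens.append(match)
            i += len(match)
        else:
            # Byte fallback, e.g. for obfuscated or never-before-seen content.
            for b in text[i].encode("utf-8"):
                tokens.append(f"<0x{b:02x}>")
            i += 1
    return tokens

print(tokenize("https://hr.sharepoint.com/pages/payslip"))
```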

Along with our custom tokenizer, we made changes to support task specialization without increasing model size. To do this, DEMIST-2 uses LoRA, a technique that integrates lightweight components with the base model to allow it to perform specific tasks while keeping memory requirements low. By using LoRA, our proprietary representation of security knowledge can be shared and reused as a starting point for more highly specialized models; for example, understanding hostnames requires a different kind of specialization than understanding sensitive filenames. DEMIST-2 dynamically adapts to these needs and performs them with purpose.
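A minimal sketch of the LoRA idea in PyTorch – not DEMIST-2’s architecture – is shown below: one frozen base layer is shared, while each specialization trains only a small pair of low-rank matrices.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen base linear layer plus a trainable low-rank update.
    Only the small matrices A and B are trained, so each specialization
    adds little memory on top of the shared foundation."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # foundation weights stay fixed
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scale

# One shared base layer, two task-specific adapters
# (e.g. one for hostnames, one for sensitive filenames).
base = nn.Linear(256, 256)
hostname_adapter = LoRALinear(base, rank=8)
filename_adapter = LoRALinear(base, rank=8)
x = torch.randn(4, 256)
print(hostname_adapter(x).shape, filename_adapter(x).shape)
```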

The result is that DEMIST-2 is like having a room of specialists working on difficult problems together, while sharing a basic core set of knowledge that does not need to be repeated or reintroduced to every situation. Sharing a consistent base model also improves its maintainability and allows efficient deployment across diverse environments without compromising speed or accuracy.  

Tokenization and task specialization represent only a portion of the updates we have made to our embedding model. In conjunction with the changes described above, DEMIST-2 integrates several updated modeling techniques that reduce latency and improve detections. To learn more about these details, our training data and methods, and a full write-up of our results, please read our scientific whitepaper.

DEMIST-2 in action

In this section, we highlight DEMIST-2's embeddings and performance. First, we show a visualization of how DEMIST-2 classifies and interprets hostnames, and second, we present its performance in a hostname classification task in comparison to other language models.  

Embeddings can often feel abstract, so let’s make them real. Figure 1 below is a 2D visualization of how DEMIST-2 classifies and understands hostnames. In reality, these hostnames exist across many more dimensions, capturing details like their relationships with other hostnames, usage patterns, and contextual data. The colors and positions in the diagram represent a simplified view of how DEMIST-2 organizes and interprets these hostnames, providing insights into their meaning and connections. Just like an experienced human analyst can quickly identify and group hostnames based on patterns and context, DEMIST-2 does the same at scale.  

Figure 1: DEMIST-2 visualization of hostname relationships from a large web dataset.
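For readers curious about how this kind of 2D view is typically produced, the sketch below projects hypothetical high-dimensional embeddings with PCA and groups them with k-means. It is a generic illustration; the figure above was produced from DEMIST-2’s own embeddings, not from this code.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Hypothetical 64-dimensional hostname embeddings: two synthetic clusters
# standing in for groups such as small businesses or regional sites.
rng = np.random.default_rng(0)
embeddings = np.vstack([
    rng.normal(loc=0.0, scale=0.1, size=(50, 64)),
    rng.normal(loc=1.0, scale=0.1, size=(50, 64)),
])

coords = PCA(n_components=2).fit_transform(embeddings)               # 2D positions for plotting
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)

print(coords.shape)          # (100, 2)
print(np.bincount(labels))   # roughly 50 hostnames per cluster
```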

Next, let’s zoom in on two distinct clusters that DEMIST-2 recognizes. One cluster represents small businesses (Figure 2) and the other, Russian and Polish sites with similar numerical formats (Figure 3). These clusters demonstrate how DEMIST-2 can identify specific groupings based on real-world attributes such as regional patterns in website structures, common formats used by small businesses, and other properties such as its understanding of how websites relate to each other on the internet.

Figure 2: Cluster of small businesses
Figure 3: Cluster of Russian and Polish sites with a similar numerical format

The previous figures provided a view of how DEMIST-2 works. Figure 4 highlights DEMIST-2’s performance in a security-related classification task. The chart shows how DEMIST-2, with just 95 million parameters, achieves nearly 94% accuracy, making it the highest-performing model in the chart despite being the smallest. In comparison, the larger model with 2.78 billion parameters achieves only about 89% accuracy, showing that bigger does not always mean better. For many security-related tasks, DEMIST-2 outperforms much larger models.

Figure 4: Hostname classification task performance comparison against comparable open source foundation models

With these examples of DEMIST-2 in action, we’ve shown how it excels in embedding and classifying security data while delivering high performance on specialized security tasks.  

The DEMIST-2 advantage

DEMIST-2 was built for precision and reliability. Our primary goal was to create a high-performance model capable of tackling complex cybersecurity tasks. Optimizing for efficiency and scalability came second, but it is a natural outcome of our commitment to building a strong, effective solution that is available to security teams working across diverse environments. It is an enormous benefit that DEMIST-2 is orders of magnitude smaller than many general-purpose models. More importantly, it significantly outperforms those models in capability and accuracy on security tasks.

Finding a product that fits into an environment’s unique constraints used to mean that some teams had to settle for less powerful or less performant products. With DEMIST-2, data can remain local to the environment, is entirely separate from the data of other customers, and can even operate in environments without network connectivity. The size of our model allows for flexible deployment options while at the same time providing measurable performance advantages for security-related tasks.  

As security threats continue to evolve, we believe that purpose-built AI systems like DEMIST-2 will be essential tools for defenders, combining the power of modern language modeling with the specificity and reliability that builds trust and partnership between security practitioners and AI systems.

Conclusion

DEMIST-2 has additional architectural and deployment updates that improve performance and stability. These innovations contribute to our ability to minimize model size and memory constraints and reflect our dedication to meeting the data handling and privacy needs of security environments. In addition, these choices reflect our dedication to responsible AI practices.

DEMIST-2 is available in Darktrace 6.3, along with a new DIGEST model that uses GNNs and RNNs to score and prioritize threats with expert-level precision.


About the author
Margaret Cunningham, PhD
Director, Security & AI Strategy, Field CISO

April 16, 2025

AI Uncovered: Introducing Darktrace Incident Graph Evaluation for Security Threats (DIGEST)


DIGEST advances how Cyber AI Analyst scores and prioritizes incidents. Trained on over a million anonymized incident graphs, our model brings deeper context to severity scoring by analyzing how threats are structured and how they evolve. DIGEST assesses threats as an expert, before damage is done. For more details beyond this overview, please read our Technical Research Paper.

Darktrace combines machine learning (ML) and artificial intelligence (AI) techniques in a multi-layered, multi-method approach. The result is an AI system that continuously ingests data from across an organization’s environment, learns from it, and adapts in real time. DIGEST adds a new layer to this system, specifically to our Cyber AI Analyst, the first and most experienced AI Analyst in cybersecurity, dedicated to refining how incidents are scored and prioritized. DIGEST sharpens prioritization so your team can focus first on what matters most.

To build DIGEST, we combined Graph Neural Networks (GNNs) to interpret incident structure with Recurrent Neural Networks (RNNs) to analyze how incidents evolve over time. This pairing allows DIGEST to reliably determine the potential severity of an incident even at an early stage, giving Cyber AI Analyst a critical edge in identifying high-risk threats early and recognizing when activity is unlikely to escalate.
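As a rough sketch of this GNN-plus-RNN pattern – assuming PyTorch and a toy mean-aggregation graph encoder rather than DIGEST’s actual architecture – each incident snapshot below is encoded into a vector, and a recurrent network reads the sequence of snapshots to produce an evolving severity score.

```python
import torch
import torch.nn as nn

class TinyGraphEncoder(nn.Module):
    """One round of mean-aggregation message passing, then a graph-level mean pool."""
    def __init__(self, node_dim: int, hidden: int):
        super().__init__()
        self.lin = nn.Linear(node_dim, hidden)

    def forward(self, node_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        neighbour_mean = (adj @ node_feats) / deg        # aggregate neighbour features
        h = torch.relu(self.lin(node_feats + neighbour_mean))
        return h.mean(dim=0)                             # one embedding per snapshot

class SeverityModel(nn.Module):
    """GRU over the sequence of snapshot embeddings -> severity score per timestep."""
    def __init__(self, node_dim: int = 16, hidden: int = 32):
        super().__init__()
        self.encoder = TinyGraphEncoder(node_dim, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, snapshots):
        embs = torch.stack([self.encoder(x, a) for x, a in snapshots]).unsqueeze(0)
        out, _ = self.rnn(embs)
        return torch.sigmoid(self.head(out)).squeeze()   # scores in [0, 1], one per timestep

# Toy incident: three snapshots of a graph that grows as activity is observed.
snapshots = []
for n_nodes in (3, 5, 8):
    adj = (torch.rand(n_nodes, n_nodes) > 0.6).float()
    snapshots.append((torch.randn(n_nodes, 16), adj))
print(SeverityModel()(snapshots))
```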

DIGEST works locally in real time regardless of whether your Darktrace deployment is on-prem or in the cloud, without requiring data to be sent externally for decisions to be made. It was built to support teams in all environments, including those with strict data controls and limited connectivity.

Our approach to AI is unique, drawing inspiration from multiple disciplines to tackle the toughest cybersecurity challenges. DIGEST demonstrates how a novel application of GNNs and RNNs improves the prioritization and triage of security incidents. By blending interdisciplinary expertise with innovative AI techniques, we are able to push the boundaries of what’s possible and deliver it where it is needed most. We are eager to share our findings to accelerate progress throughout the broader field of AI development.

DIGEST: Pattern, progression, and prioritization

Most security incidents start quietly. A device contacts an unusual domain. Credentials are used at unexpected hours. File access patterns shift. The fundamental challenge is not always detecting these anomalies but knowing what to address first. DIGEST gives us this capability.

To understand DIGEST, it helps to start with Cyber AI Analyst, a critical component of our Self-Learning AI system and a front-line triage partner in security investigations. It combines supervised and unsupervised machine learning (ML) techniques, natural language processing (NLP), and graph-based reasoning to investigate and summarize security incidents.

DIGEST was built as an additional layer of analysis within Cyber AI Analyst. It enhances its capabilities by refining how incidents are scored and prioritized, helping teams focus more quickly on what matters most. For a general view of the ML and AI methods that power Darktrace products, read our AI Arsenal whitepaper. This paper provides insights regarding the various approaches we use to detect, investigate, and prioritize threats.

Cyber AI Analyst is constantly investigating alerts and produces millions of critical incidents every year. The dynamic graphs produced by Cyber AI Analyst investigations represent an abstract understanding of security incidents that is fully anonymized and privacy preserving. This allowed us to use the Call Home and aianalyst.darktrace.com services to produce a dataset comprising the broad structure of millions of incidents that Cyber AI Analyst detected on customer deployments, without containing any sensitive data. (Read our technical research paper for more details about our dataset.)

The dynamic graphs from Cyber AI Analyst capture the structure of security incidents where nodes represent entities like users, devices or resources, and edges represent the multitude of relationships between them. As new activity is observed, the graph expands, capturing the progression of incidents over time. Our dataset contained everything from benign administrative behavior to full-scale ransomware attacks.
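To make the graph structure concrete, below is a minimal, hypothetical representation of an incident graph expanding as activity is observed; the graphs produced by Cyber AI Analyst are far richer than this.

```python
from dataclasses import dataclass, field

@dataclass
class IncidentGraph:
    """Nodes are entities (users, devices, resources); edges are relationships
    between them, each tagged with the kind of activity observed."""
    nodes: set = field(default_factory=set)
    edges: list = field(default_factory=list)

    def observe(self, src: str, dst: str, activity: str) -> None:
        """Expand the graph as new activity is observed."""
        self.nodes.update({src, dst})
        self.edges.append((src, dst, activity))

incident = IncidentGraph()
incident.observe("device-42", "rare-domain.example", "unusual external connection")
incident.observe("alice", "device-42", "credential use at unexpected hour")
incident.observe("device-42", "file-server-3", "anomalous file writes")
print(len(incident.nodes), "entities,", len(incident.edges), "relationships")
```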

Unique data, unmatched insights

Key terms

Graph Neural Networks (GNNs): A type of neural network designed to analyze and interpret data structured as graphs, capturing relationships between nodes.

Recurrent Neural Networks (RNNs): A type of neural network designed to model sequences where the order of events matters, like how activity unfolds in a security incident.

The Cyber AI Analyst dataset used to train DIGEST reflects over a decade of work in AI paired with unmatched expertise in cybersecurity. Prior to training DIGEST on our incident graph dataset, we performed rigorous data preprocessing to remove issues such as duplicate or ill-formed incidents. Additionally, to validate DIGEST’s outputs, expert security analysts assessed and verified the model’s scoring.

Transforming data into insights requires using the right strategies and techniques. Given the graphical nature of Cyber AI Analyst incident data, we used GNNs and RNNs to train DIGEST to understand incidents and how they are likely to change over time. Change does not always mean escalation: DIGEST’s enhanced scoring also keeps potentially legitimate or low-severity activity from being prioritized over threats that are more likely to get worse. At the outset, all incidents might look much the same to a person; to DIGEST, each looks like the beginning of a pattern.

As a result, DIGEST enhances our understanding of security incidents by evaluating the structure of the incident, probable next steps in an incident’s trajectory, and how likely it is to grow into a larger event.

To illustrate these capabilities in action, we are sharing two examples of DIGEST’s scoring adjustments from use cases within our customers’ environments.

First, Figure 1 shows the graphical representation of a ransomware attack, and Figure 2 shows how DIGEST scored the progression of that attack. At hour two, DIGEST’s score escalated to 95%, well before data encryption was observed. This means that prior to seeing malicious encryption behaviors, DIGEST understood the structure of the incident and flagged these early activities as high-likelihood precursors to a severe event. Early detection, especially when flagged prior to malicious encryption, gives security teams a valuable head start and can minimize the overall impact of the threat. Darktrace Autonomous Response can also be triggered by Cyber AI Analyst to take immediate action to stop the progression, giving the human security team time to investigate and implement next steps.

Figure 1: Graph representation of a ransomware attack
Figure 2:  Timeline of DIGEST incident score escalation. Note that timestep does not equate to hours, the spike in score to 95% occurred approximately 2 hours into the attack, prior to data encryption.

In contrast, our second example, shown in Figure 3 and Figure 4, illustrates how DIGEST’s analysis of an incident can help teams avoid wasting time on lower-risk scenarios. In this instance, Figure 3 illustrates a graph of unusual administrative activity, where we observed connections to a large group of devices. However, the incident score remained low because DIGEST determined that high-risk malicious activity was unlikely. This determination was based on what DIGEST observed in the incident’s structure, what it assessed as the probable next steps in the incident lifecycle, and how likely the incident was to grow into a larger adverse event.

Figure 3: Graph representation of unusual admin activity connecting to a large group of devices.
Figure 4: Timeline of DIGEST incident scoring, where the score remained low as the unusual event was determined to be low risk.

These examples show the value of enhanced scoring. DIGEST helps teams act sooner on the threats that count and spend less time chasing the ones that do not.

The next phase of advanced detection is here

Darktrace understands what incidents look like. We have seen, investigated, and learned from them at scale, including over 90 million investigations in 2024. With DIGEST, we can share our deep understanding of incidents and their behaviors with you and triage these incidents using Cyber AI Analyst.

Our ability to innovate in this space is grounded in the maturity of our team and the experience we have built over more than a decade of developing AI solutions for cybersecurity. This experience, along with our depth of understanding of our data, techniques, and strategic layering of AI/ML components, has shaped every one of our steps forward.

With DIGEST, we are entering a new phase: another line of defense that helps teams prioritize and reason over incidents and threats far earlier in an incident’s lifecycle. DIGEST understands your incidents when they start, making it easier for your team to act quickly and confidently.

DIGEST is available in Darktrace 6.3, along with a new embedding model – DEMIST-2 – designed to provide reliable, high-accuracy detections for critical security use cases.


About the author
Margaret Cunningham, PhD
Director, Security & AI Strategy, Field CISO