Blog
/
Email
/
February 24, 2025

Detecting and Containing Account Takeover with Darktrace

Account takeovers are rising with SaaS adoption. Learn how Darktrace detects deviations in user behavior and autonomously stops threats before they escalate.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Min Kim
Cyber Security Analyst
women on laptop in officeDefault blog imageDefault blog imageDefault blog imageDefault blog imageDefault blog imageDefault blog image
24
Feb 2025

Thanks to its accessibility from anywhere with an internet connection and a web browser, Software-as-a-Service (SaaS) platforms have become nearly universal across organizations worldwide. However, with this growing popularity comes greater responsibility. Increased attention attracts a larger audience, including those who may seek to exploit these widely used services. One crucial factor to be vigilant about in the SaaS landscape is safeguarding internal credentials. Minimal protection on accounts can lead to SaaS hijacking, which could allow further escalations within the network.

How does SaaS account takeover work?

SaaS hijacking occurs when a malicious actor takes control of a user’s active session with a SaaS application. Attackers can achieve this through various methods, including employees using company credentials on compromised or spoofed external websites, brute-force attacks, social engineering, and exploiting outdated software or applications.

After the hijack, attackers may escalate their actions by changing email rules and using internal addresses for additional social engineering attacks. The larger goal of these actions is often to steal internal data, damage reputations, and disrupt operations.

Account takeover protection

It has become essential to have security tools capable of outsmarting potential malicious actors. Traditional tools that rely on rules and signatures may not be able to identify new events, such as logins or activities from a rare endpoint, unless they come from a known malicious source.

Darktrace relies on analysis of user and network behavior, tailored to each customer, allowing it to identify anomalous events that the user typically does not engage in. In this way, unusual SaaS activities can be detected, and unwanted actions can be halted to allow time for remediation before further escalations.

The following cases, drawn from the global customer base, illustrate how Darktrace detects potential SaaS hijack attempts and further escalations, and applies appropriate actions when necessary.

Case 1: Unusual login after a phishing email

A customer in the US received a suspicious email that seemed to be from the legitimate file storage service, Dropbox. However, Darktrace identified that the reply-to email address, hremployeepyaroll@mail[.]com, was masquerading as one associated with the customer’s Human Resources (HR) department.

Further inspection of this sender address revealed that the attacker had intentionally misspelled ‘payroll’ to trick recipients into believing it was legitimate

Furthermore, the subject of the email indicated that the attackers were attempting a social engineering attack by sharing a file related to pay raises and benefits to capture the recipients' attention and increase the likelihood of their targets engaging with the email and its attachment.

Figure 1: Subject of the phishing email.
Figure 1: Subject of the phishing email.

Unknowingly, the recipient, who believed the email to be a legitimate HR communication, acted on it, allowing malicious attackers to gain access to the account. Following this, the recipient’s account was observed logging in from a rare location using multi-factor authentication (MFA) while also being active from another more commonly observed location, indicating that the SaaS account had been compromised.

Darktrace’s Autonomous Response action triggered by an anomalous email received by an internal user, followed by a failed login attempt from a rare external source.
Figure 2: Darktrace’s Autonomous Response action triggered by an anomalous email received by an internal user, followed by a failed login attempt from a rare external source.

Darktrace subsequently observed the SaaS actor creating new inbox rules on the account. These rules were intended to mark as read and move any emails mentioning the file storage company, whether in the subject or body, to the ‘Conversation History’ folder. This was likely an attempt by the threat actor to hide any outgoing phishing emails or related correspondence from the legitimate account user, as the ‘Conversation History’ folder typically goes unread by most users.

Typically, Darktrace / EMAIL would have instantly placed the phishing email in the junk folder before they reached user’s inbox, while also locking the links identified in the suspicious email, preventing them from being accessed. Due to specific configurations within the customer’s deployment, this did not happen, and the email remained accessible to the user.

Case 2: Login using unusual credentials followed by password change

In the latter half of 2024, Darktrace detected an unusual use of credentials when a SaaS actor attempted to sign into a customer’s Microsoft 365 application from an unfamiliar IP address in the US. Darktrace recognized that since the customer was located within the Europe, Middle East, and Africa (EMEA) region, a login from the US was unexpected and suspicious. Around the same time, the legitimate account owner logged into the customer’s SaaS environment from another location – this time from a South African IP, which was commonly seen within the environment and used by other internal SaaS accounts.

Darktrace understood that this activity was highly suspicious and unlikely to be legitimate, given one of the IPs was known and expected, while the other had never been seen before in the environment, and the simultaneous logins from two distant locations were geographically impossible.

Model alert in Darktrace / IDENTITY: Detecting a login from a different source while the user is already active from another source.
Figure 3: Model alert in Darktrace / IDENTITY: Detecting a login from a different source while the user is already active from another source.

Darktrace detected several unusual login attempts, including a successful login from an uncommon US source. Subsequently, Darktrace / NETWORK identified the device associated with this user making external connections to rare endpoints, some of which were only two weeks old. As this customer had integrated Darktrace with Microsoft Defender, the Darktrace detection was enriched by Defender, adding the additional context that the user had likely been compromised in an Adversary-in-the-Middle (AiTM) phishing attack. AiTM phishing attacks occur when a malicious attacker intercepts communications between a user and a legitimate authentication service, potentially leading to account hijacking. These attacks are harder to identify as they can bypass security measures like MFA.

Following this, Darktrace observed the attacker using the now compromised credentials to access password management and change the account's password. Such behavior is common in account takeover incidents, as attackers seek to maintain persistence within the SaaS environment.

While Darktrace’s Autonomous Response was not fully configured on the customer’s SaaS environment, they were subscribed to the Managed Threat Detection service offered by Darktrace’s Security Operations Center (SOC). This 24/7 service ensures that Darktrace’s analysts monitor and investigate emerging suspicious activity, informing customers in real-time. As such, the customer received notification of the compromise and were able to quickly take action to prevent further escalation.

Case 3: Unusual logins, new email rules and outbound spam

Recently, Darktrace has observed a trend in SaaS compromises involving unusual logins, followed by the creation of new email rules, and then outbound spam or phishing campaigns being launched from these accounts.

In October, Darktrace identified a SaaS user receiving an email with the subject line "Re: COMPANY NAME Request for Documents" from an unknown sender using a freemail  account. As freemail addresses require very little personal information to create, threat actors can easily create multiple accounts for malicious purposes while retaining their anonymity.

Within the identified email, Darktrace found file storage links that were likely intended to divert recipients to fraudulent or malicious websites upon interaction. A few minutes after the email was received, the recipient was seen logging in from three different sources located in the US, UK, and the Philippines, all around a similar time. As the customer was based in the Philippines, a login from there was expected and not unusual. However, Darktrace understood that the logins from the UK and US were highly unusual, and no other SaaS accounts had connected from these locations within the same week.

After successfully logging in from the UK, the actor was observed updating a mailbox rule, renaming it to ‘.’ and changing its parameters to move any inbound emails to the deleted items folder and mark them as read.

Figure 4: The updated email rule intended to move any inbound emails to the deleted items folder.

Malicious actors often use ambiguous names like punctuation marks, repetitive letters, and unreadable words to name resources, disguising their rules to avoid detection by legitimate users or administrators. Similarly, attackers have been known to adjust existing rule parameters rather than creating new rules to keep their footprints untracked. In this case, the rule was updated to override an existing email rule and delete all incoming emails. This ensured that any inbound emails, including responses to potential phishing emails sent by the account, would be deleted, allowing the attacker to remain undetected.

Over the next two days, additional login attempts, both successful and failed, were observed from locations in the UK and the Philippines. Darktrace noted multiple logins from the Philippines where the legitimate user was attempting to access their account using a password that had recently expired or been changed, indicating that the attacker had altered the user’s original password as well.

Following this chain of events, over 500 emails titled “Reminder For Document Signed Agreement.10/28/2024” were sent from the SaaS actor’s account to external recipients, all belonging to a different organization within the Philippines.

These emails contained rare attachments with a ‘.htm’ extension, which included programming language that could initiate harmful processes on devices. While inherently not malicious, if used inappropriately, these files could perform unwanted actions such as code execution, malware downloads, redirects to malicious webpages, or phishing upon opening.

Outbound spam seen from the hijacked SaaS account containing a ‘.htm’ attachment.
Figure 5: Outbound spam seen from the hijacked SaaS account containing a ‘.htm’ attachment.

As this customer did not have Autonomous Response enabled for Darktrace / IDENTITY, the unusual activity went unattended, and the compromise was able to escalate to the point of a spam email campaign being launched from the account.

In a similar example on a customer network in EMEA, Darktrace detected unusual logins and the creation of new email rules from a foreign location through a SaaS account. However, in this instance, Autonomous Response was enabled and automatically disabled the compromised account, preventing further malicious activity and giving the customer valuable time to implement their own remediation measures.

Conclusion

Whether it is an unexpected login or an unusual sequence of events – such as a login followed by a phishing email being sent – unauthorized or unexpected activities can pose a significant risk to an organization’s SaaS environment. The threat becomes even greater when these activities escalate to account hijacking, with the compromised account potentially providing attackers access to sensitive corporate data. Organizations, therefore, must have robust SaaS security measures in place to prevent data theft, ensure compliance and maintain continuity and trust.

The Darktrace suite of products is well placed to detect and contain SaaS hijack attempts at multiple stages of an attack. Darktrace / EMAIL identifies initial phishing emails that attackers use to gain access to customer SaaS environments, while Darktrace / IDENTITY detects anomalous SaaS behavior on user accounts which could indicate they have been taken over by a malicious actor.

By identifying these threats in a timely manner and taking proactive mitigative measures, such as logging or disabling compromised accounts, Darktrace prevents escalation and ensures customers have sufficient time to response effectively.

Credit to Min Kim (Cyber Analyst) and Ryan Traill (Analyst Content Lead)

[related-resource]

Appendices

Darktrace Model Detections Case 1

SaaS / Compromise / SaaS Anomaly Following Anomalous Login

SaaS / Compromise / Unusual Login and New Email Rule

SaaS / Compliance / Anomalous New Email Rule

SaaS / Unusual Activity / Multiple Unusual SaaS Activities

SaaS / Access / Unusual External Source for SaaS Credential Us

SaaS / Compromise / Login From Rare Endpoint While User is Active

SaaS / Email Nexus / Unusual Login Location Following Link to File Storage

Antigena / SaaS / Antigena Email Rule Block (Autonomous Response)

Antigena / SaaS / Antigena Suspicious SaaS Activity Block (Autonomous Response)

Antigena / SaaS / Antigena Enhanced Monitoring from SaaS User Block (Autonomous Response)

List of Indicators of Compromise (IoCs)

176.105.224[.]132 – IP address – Unusual SaaS Activity Source

hremployeepyaroll@mail[.]com – Email address – Reply-to email address

MITRE ATT&CK Mapping

Cloud Accounts – DEFENSE EVASION, PERSISTENCE, PRIVILEGE ESCALATION, INITIAL ACCESS – T1078

Outlook Rules – PERSISTENCE – T1137

Cloud Service Dashboard – DISCOVERY – T1538

Compromise Accounts – RESOURCE DEVELOPMENT – T1586

Steal Web Session Cookie – CREDENTIAL ACCESS – T1539

Darktrace Model Detections Case 2

SaaS / Compromise / SaaS Anomaly Following Anomalous Login

SaaS / Compromise / Unusual Login and Account Update

Security Integration / High Severity Integration Detection

SaaS / Access / Unusual External Source for SaaS Credential Use

SaaS / Compromise / Login From Rare Endpoint While User Is Active

SaaS / Compromise / Login from Rare High Risk Endpoint

SaaS / Access / M365 High Risk Level Login

Antigena / SaaS / Antigena Suspicious SaaS Activity Block (Autonomous Response)

Antigena / SaaS / Antigena Enhanced Monitoring from SaaS user Block (Autonomous Response)

List of IoCs

74.207.252[.]129 – IP Address – Suspicious SaaS Activity Source

MITRE ATT&CK Mapping

Cloud Accounts – DEFENSE EVASION, PERSISTENCE, PRIVILEGE ESCALATION, INITIAL ACCESS – T1078

Cloud Service Dashboard – DISCOVERY – T1538

Compromise Accounts – RESOURCE DEVELOPMENT – T1586

Steal Web Session Cookie – CREDENTIAL ACCESS – T1539

Darktrace Model Detections Case 3

SaaS / Compromise / Unusual Login and Outbound Email Spam

SaaS / Compromise / New Email Rule and Unusual Email Activity

SaaS / Compromise / Unusual Login and New Email Rule

SaaS / Email Nexus / Unusual Login Location Following Sender Spoof

SaaS / Email Nexus / Unusual Login Location Following Link to File Storage

SaaS / Email Nexus / Possible Outbound Email Spam

SaaS / Unusual Activity / Multiple Unusual SaaS Activities

SaaS / Email Nexus / Suspicious Internal Exchange Activity

SaaS / Compliance / Anomalous New Email Rule

List of IoCs

95.142.116[.]1 – IP Address – Suspicious SaaS Activity Source

154.12.242[.]58 – IP Address – Unusual Source

MITRE ATT&CK Mapping

Cloud Accounts – DEFENSE EVASION, PERSISTENCE, PRIVILEGE ESCALATION, INITIAL ACCESS – T1078

Compromise Accounts – RESOURCE DEVELOPMENT – T1586

Email Accounts – RESOURCE DEVELOPMENT – T1585

Phishing – INITIAL ACCESS – T1566

Outlook Rules – PERSISTENCE – T1137

Internal Spear phishing – LATERAL MOVEMENT - T1534

Get the latest insights on emerging cyber threats

This report explores the latest trends shaping the cybersecurity landscape and what defenders need to know in 2025.

Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Min Kim
Cyber Security Analyst

More in this series

No items found.

Blog

/

Email

/

May 1, 2026

How email-delivered prompt injection attacks can target enterprise AI – and why it matters

Default blog imageDefault blog image

What are email-delivered prompt injection attacks?

As organizations rapidly adopt AI assistants to improve productivity, a new class of cyber risk is emerging alongside them: email-delivered AI prompt injection. Unlike traditional attacks that target software vulnerabilities or rely on social engineering, this is the act of embedding malicious or manipulative instructions into content that an AI system will process as part of its normal workflow. Because modern AI tools are designed to ingest and reason over large volumes of data, including emails, documents, and chat histories, they can unintentionally treat hidden attacker-controlled text as legitimate input.  

At Darktrace, our analysis has shown an increase of 90% in the number of customer deployments showing signals associated with potential prompt injection attempts since we began monitoring for this type of activity in late 2025. While it is not always possible to definitively attribute each instance, internal scoring systems designed to identify characteristics consistent with prompt injection have recorded a growing number of high-confidence matches. The upward trend suggests that attackers are actively experimenting with these techniques.

Recent examples of prompt injection attacks

Two early examples of this evolving threat are HashJack and ShadowLeak, which illustrate prompt injection in practice.

HashJack is a novel prompt injection technique discovered in November 2025 that exploits AI-powered web browsers and agentic AI browser assistants. By hiding malicious instructions within the URL fragment (after the # symbol) of a legitimate, trusted website, attackers can trick AI web assistants into performing malicious actions – potentially inserting phishing links, fake contact details, or misleading guidance directly into what appears to be a trusted AI-generated output.

ShadowLeak is a prompt injection method to exfiltrate PII identified in September 2025. This was a flaw in ChatGPT (now patched by OpenAI) which worked via an agent connected to email. If attackers sent the target an email containing a hidden prompt, the agent was tricked into leaking sensitive information to the attacker with no user action or visible UI.

What’s the risk of email-delivered prompt injection attacks?

Enterprise AI assistants often have complete visibility across emails, documents, and internal platforms. This means an attacker does not need to compromise credentials or move laterally through an environment. If successful, they can influence the AI to retrieve relevant information seamlessly, without the labor of compromise and privilege escalation.

The first risk is data exfiltration. In a prompt injection scenario, malicious instructions may be embedded within an ordinary email. As in the ShadowLeak attack, when AI processes that content as part of a legitimate task, it may interpret the hidden text as an instruction. This could result in the AI disclosing sensitive data, summarizing confidential communications, or exposing internal context that would otherwise require significant effort to obtain.

The second risk is agentic workflow poisoning. As AI systems take on more active roles, prompt injection can influence how they behave over time. An attacker could embed instructions that persist across interactions, such as causing the AI to include malicious links in responses or redirect users to untrusted resources. In this way, the attacker inserts themselves into the workflow, effectively acting as a man-in-the-middle within the AI system.

Why can’t other solutions catch email-delivered prompt injection attacks?

AI prompt injection challenges many of the assumptions that traditional email security is built on. It does not fit the usual patterns of phishing, where the goal is to trick a user into clicking a link or opening an attachment.  

Most security solutions are designed to detect signals associated with user engagement: suspicious links, unusual attachments, or social engineering cues. Prompt injection avoids these indicators entirely, meaning there are fewer obvious red flags.

In this case, the intention is actually the opposite of user solicitation. The objective is simply for the email to be delivered and remain in the inbox, appearing benign and unremarkable. The malicious element is not something the recipient is expected to engage with, or even notice.

Detection is further complicated by the nature of the prompts themselves. Unlike known malware signatures or consistent phishing patterns, injected prompts can vary widely in structure and wording. This makes simple pattern-matching approaches, such as regex, unreliable. A broad rule set risks generating large numbers of false positives, while a narrow one is unlikely to capture the diversity of possible injections.

How does Darktrace catch these types of attacks?

The Darktrace approach to email security more generally is to look beyond individual indicators and assess context, which also applies here.  

For example, our prompt density score identifies clusters of prompt-like language within an email rather than just single occurrences. Instead of treating the presence of a phrase as a blocking signal, the focus is on whether there is an unusual concentration of these patterns in a way that suggests injection. Additional weighting can be applied where there are signs of obfuscation. For example, text that is hidden from the user – such as white font or font size zero – but still readable by AI systems can indicate an attempt to conceal malicious prompts.

This is combined with broader behavioral signals. The same communication context used to detect other threats remains relevant, such as whether the content is unusual for the recipient or deviates from normal patterns.

Ask your email provider about email-delivered AI prompt injection

Prompt injection targets not just employees, but the AI systems they rely on, so security approaches need to account for both.

Though there are clear indications of emerging activity, it remains to be seen how popular prompt injection will be with attackers going forward. Still, considering the potential impact of this attack type, it’s worth checking if this risk has been considered by your email security provider.

Questions to ask your email security provider

  • What safeguards are in place to prevent emails from influencing AI‑driven workflows over time?
  • How do you assess email content that’s benign for a human reader, but may carry hidden instructions intended for AI systems?
  • If an email contains no links, no attachments, and no social engineering cues, what signals would your platform use to identify malicious intent?

Visit the Darktrace / EMAIL product hub to discover how we detect and respond to advanced communication threats.  

Learn more about securing AI in your enterprise.

Continue reading
About the author
Kiri Addison
Senior Director of Product

Blog

/

AI

/

April 30, 2026

Mythos vs Ethos: Defending in an Era of AI‑Accelerated Vulnerability Discovery

mythos vulnerability discoveryDefault blog imageDefault blog image

Anthropic’s Mythos and what it means for security teams

Recent attention on systems such as Anthropic Mythos highlights a notable problem for defenders. Namely that disclosure’s role in coordinating defensive action is eroding.

As AI systems gain stronger reasoning and coding capability, their usefulness in analyzing complex software environments and identifying weaknesses naturally increases. What has changed is not attacker motivation, but the conditions under which defenders learn about and organize around risk. Vulnerability discovery and exploitation increasingly unfold in ways that turn disclosure into a retrospective signal rather than a reliable starting point for defense.

Faster discovery was inevitable and is already visible

The acceleration of vulnerability discovery was already observable across the ecosystem. Publicly disclosed vulnerabilities (CVEs) have grown at double-digit rates for the past two years, including a 32% increase in 2024 according to NIST, driven in part by AI even prior to Anthropic’s Mythos model. Most notably XBOW topped the HackerOne US bug bounty leaderboard, marking the first time an autonomous penetration tester had done so.  

The technical frontier for AI capabilities has been described elsewhere as jagged, and the implication is that Mythos is exceptional but not unique in this capability. While Mythos appears to make significant progress in complex vulnerability analysis, many other models are already able to find and exploit weaknesses to varying degrees.  

What matters here is not which model performs best, but the fact that vulnerability discovery is no longer a scarce or tightly bounded capability.

The consequence of this shift is not simply earlier discovery. It is a change in the defender-attacker race condition. Disclosure once acted as a rough synchronization point. While attackers sometimes had earlier knowledge, disclosure generally marked the moment when risk became visible and defensive action could be broadly coordinated. Increasingly, that coordination will no longer exist. Exploitation may be underway well before a CVE is published, if it is published at all.

Why patch velocity alone is not the answer

The instinctive response to this shift is to focus on patching faster, but treating patch velocity as the primary solution misunderstands the problem. Most organizations are already constrained in how quickly they can remediate vulnerabilities. Asset sprawl, operational risk, testing requirements, uptime commitments, and unclear ownership all limit response speed, even when vulnerabilities are well understood.

If discovery and exploitation now routinely precede disclosure, then patching cannot be the first line of defense. It becomes one necessary control applied within a timeline that has already shifted. This does not imply that organizations should patch less. It means that patching cannot serve as the organizing principle for defense.

Defense needs a more stable anchor

If disclosure no longer defines when defense begins, then defense needs a reference point that does not depend on knowing the vulnerability in advance.  

Every digital environment has a behavioral character. Systems authenticate, communicate, execute processes, and access resources in relatively consistent ways over time. These patterns are not static rules or signatures. They are learned behaviors that reflect how an organization operates.

When exploitation occurs, even via previously unknown vulnerabilities, those behavioral patterns change.

Attackers may use novel techniques, but they still need to gain access, create processes, move laterally, and will ultimately interact with systems in ways that diverge from what is expected. That deviation is observable regardless of whether the underlying weakness has been formally named.

In an environment where disclosure can no longer be relied on for timing or coordination, behavioral understanding is no longer an optional enhancement; it becomes the only consistently available defensive signal.

Detecting risk before disclosure

Darktrace’s threat research has consistently shown that malicious activity often becomes visible before public disclosure.

In multiple cases, including exploitation of Ivanti, SAP NetWeaver, and Trimble Cityworks, Darktrace detected anomalous behavior days or weeks ahead of CVE publication. These detections did not rely on signatures, threat intelligence feeds, or awareness of the vulnerability itself. They emerged because systems began behaving in ways that did not align with their established patterns.

This reflects a defensive approach grounded in ‘Ethos’, in contrast to the unbounded exploration represented by ‘Mythos’. Here, Mythos describes continuous vulnerability discovery at speed and scale. Ethos reflects an understanding of what is normal and expected within a specific environment, grounded in observed behavior.

Revisiting assume breach

These conditions reinforce a principle long embedded in Zero Trust thinking: assume breach.

If exploitation can occur before disclosure, patching vulnerabilities can no longer act as the organizing principle for defense. Instead, effective defense must focus on monitoring for misuse and constraining attacker activity once access is achieved. Behavioral monitoring allows organizations to identify early‑stage compromise and respond while uncertainty remains, rather than waiting for formal verification.

AI plays a critical role here, not by predicting every exploit, but by continuously learning what normal looks like within a specific environment and identifying meaningful deviation at machine speed. Identifying that deviation enables defenders to respond by constraining activity back towards normal patterns of behavior.

Not an arms race, but an asymmetry

AI is often framed as fueling an arms race between attackers and defenders. In practice, the more important dynamic is asymmetry.

Attackers operate broadly, scanning many environments for opportunities. Defenders operate deeply within their own systems, and it’s this business context which is so significant. Behavioral understanding gives defenders a durable advantage. Attackers may automate discovery, but they cannot easily reproduce what belonging looks like inside a particular organization.

A changed defensive model

AI‑accelerated vulnerability discovery does not mean defenders have lost. It does mean that disclosure‑driven, patch‑centric models no longer provide a sufficient foundation for resilience.

As vulnerability volumes grow and exploitation timelines compress, effective defense increasingly depends on continuous behavioral understanding, detection that does not rely on prior disclosure, and rapid containment to limit impact. In this model, CVEs confirm risk rather than define when defense begins.

The industry has already seen this approach work in practice. As AI continues to reshape both offense and defense, behavioral detection will move from being complementary to being essential.

Continue reading
About the author
Andrew Hollister
Principal Solutions Engineer, Cyber Technician
Your data. Our AI.
Elevate your network security with Darktrace AI