Blog
December 20, 2022

How to Select the Right Cybersecurity AI

Choosing the right cybersecurity AI is crucial. Darktrace's guide provides insights and tips to help you make an informed decision.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Germaine Tan
VP, Security & AI Strategy, Field CISO

AI has long been a buzzword. We first saw it utilized in the consumer space: in social media, e-commerce, and even our music recommendations. In the past few years it has made its way into the enterprise space, especially in cyber security.

Increasingly, we see threat actors utilizing AI in their attack techniques. This is inevitable given the advancements in AI technology, the lower barrier to entry into the cyber security industry, and the continued profitability of being a threat actor. In a survey of security decision makers across industries such as financial services and manufacturing, 77% of respondents expected weaponized AI to lead to an increase in the scale and speed of attacks.

Defenders are also ramping up their use of AI in cyber security – more than 80% of the respondents agreed that organizations require advanced defenses to combat offensive AI – resulting in a ‘cyber arms race’, with adversaries and security teams in constant pursuit of the latest technological advancements.

The rules and signatures approach is no longer sufficient in this evolving threat landscape. Because of this collective need, we will continue to see AI innovation pushed in this space as well. By 2025, cyber security technologies are expected to account for 25% of the AI software market.

Despite the intrigue surrounding AI, many people have a limited understanding of how it truly works. The mystery of AI technology is what piques the interest of many cyber security practitioners. As an industry we also know that AI is necessary for advancement, but there is so much noise around AI and machine learning that some teams struggle to understand it. The paradox of choice leaves security teams more frustrated and confused by all the options presented to them.

Identifying True AI

You first need to define what you want the AI technology to solve. This might seem trivial, but many security teams often forget to come back to the fundamentals: what problem are you addressing? What are you trying to improve? 

Not every process needs AI; some processes will simply need automation – these are the more straightforward parts of your business. More complex and bigger systems require AI. The crux is identifying these parts of your business, applying AI, and being clear about what you are going to achieve with these AI technologies.

For example, when it comes to factory floor operations or tracking leave days of employees, businesses employ automation technologies, but when it comes to business decisions like PR strategies or new business exploration, AI is used to predict trends and help business owners make these decisions. 

Similarly, in cyber security, automation is great at keeping track of known threats such as known malware and malicious hosting sites; workflows and playbooks are also best handled with automation tools. However, when it comes to unknown unknowns like zero-day attacks, insider threats, IoT threats and supply chain attacks, AI is needed to detect and respond to these threats as they emerge.

Automation is often communicated as AI, and it becomes difficult for security teams to differentiate. Automation helps you to quickly make a decision you already know you will make, whereas true AI helps you make a better decision.

Key ways to differentiate true AI from automation:

  • The Data Set: In automation, what you are looking for is very well-scoped. You already know what you are looking for – you are just accelerating the process with rules and signatures. True AI is dynamic: you no longer need to define which activities deserve your attention; the AI highlights and prioritizes them for you.
  • Bias: When you define what you are looking for, you inherently impose human biases on those decisions. You are also limited by your knowledge at that point in time – this leaves out the crucial unknown unknowns.
  • Real-time: Every organization is always changing, and it is important that AI takes all of that data into consideration. True AI that operates in real time and changes with your organization’s growth is hard to find.
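To make the contrast concrete, here is a minimal, hypothetical sketch (not Darktrace's implementation, and the hash value and traffic figures are invented): automation checks an observation against a predefined signature list, while an AI-style approach scores how far new behavior deviates from a learned baseline, with no rule defining in advance what "bad" looks like.

```python
import statistics

# Automation: a static signature check. The bad-hash list must be curated
# in advance -- it can only catch what you already know about.
KNOWN_BAD_HASHES = {"44d88612fea8a8f36de82e1278abb02f"}  # illustrative value

def signature_match(file_hash: str) -> bool:
    return file_hash in KNOWN_BAD_HASHES

# AI direction: score how far an observation deviates from a learned
# baseline of normal behavior, with no predefined rule.
def anomaly_score(history: list[float], observation: float) -> float:
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # guard against zero variance
    return abs(observation - mean) / stdev     # z-score-style deviation

# A device that usually transfers ~100 MB/day suddenly moves 900 MB.
daily_mb = [95, 102, 99, 104, 98, 101, 100]
print(signature_match("deadbeef"))       # unknown hash slips past the rule
print(anomaly_score(daily_mb, 900) > 3)  # flagged with no rule written for it
```

The signature check misses anything not already on the list; the deviation score surfaces the unusual transfer without anyone having defined "900 MB is bad".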

Our AI Research Centre has produced numerous papers on the applications of true AI in cyber security. The Centre comprises more than 150 members and holds more than 100 patents and patents pending. Some of the featured white papers include research on Attack Path Modeling and using AI as a preventative approach in your organization.

Integrating AI Outputs with People, Process, and Technology


Integrating AI with People

We are living in a time of trust deficit, and that applies to AI as well. As humans we can be skeptical of AI, so how do we build trust in AI such that it works for us? This applies not only to the users of the technology, but to the wider organization as well. Since this is the People pillar, the key factors in achieving trust in AI are education, culture, and exposure. In a culture where people are open to learning about and trying new AI technologies, we will naturally build trust in AI over time.

Integrating AI with Process

Next, consider the integration of AI and its outputs into your workflows and playbooks. To make decisions around that, security managers need to be clear about their security priorities, and about which security gaps a particular technology is meant to fill. Whether you have an outsourced MSSP/SOC team, a 50-strong in-house SOC team, or just a two-person team, it is about understanding your priorities and assigning the proper resources to them.

Integrating AI with Technology 

Finally, there is the integration of AI with your existing technology stack. Most security teams deploy different tools and services to help them achieve different goals – whether it is a tool like a SIEM, a firewall, or an endpoint agent, or services like pentesting or vulnerability assessment exercises. One of the biggest challenges is putting all of this information together and pulling actionable insights out of it. Integration on multiple levels is always challenging with complex technologies because the technologies can rate or interpret threats differently.

Security teams often find themselves spending the most time making sense of the output of different tools and services. For example, taking the outcomes from a pentesting report and trying to enhance SOAR configurations, or looking at SOC alerts to advise firewall configurations, or taking vulnerability assessment reports to scope third-party Incident Response teams.

These tools can have a strong mastery of large volumes of data, but ownership of the knowledge should ultimately lie with the human teams – and the way to achieve that is with continuous feedback and integration. Human teams alone can no longer carry this out efficiently at scale and at speed.

The Cyber AI Loop is Darktrace’s approach to cyber security. The four product families make up a key aspect of an organization’s cyber security posture. Darktrace PREVENT, DETECT, RESPOND and HEAL each feed back into a continuous, virtuous cycle, constantly strengthening each other’s abilities. 

This cycle augments humans at every stage of an incident lifecycle. For example, PREVENT may alert you to a vulnerability that poses a particularly high risk to your organization. It provides clear mitigation advice, and while you are working on this, PREVENT will feed into DETECT and RESPOND, which are immediately poised to kick in should an attack occur in the interim. Conversely, once an attack has been contained by RESPOND, it will feed information back into PREVENT, which will anticipate an attacker’s likely next move. Cyber AI Loop helps you harden security in a holistic way so that month on month, year on year, the organization continuously improves its defensive posture.

Explainable AI

Despite its complexity, AI needs to produce outputs that are clear and easy to understand in order to be useful. In the heat of the moment during a cyber incident, human teams need to quickly comprehend: What happened here? When did it happen? What devices are affected? What does it mean for my business? What should I deal with first?

To this end, Darktrace applies another level of AI on top of its initial findings that autonomously investigates in the background, reducing a mass of individual security events to just a few overall cyber incidents worthy of human review. It generates natural-language incident reports with all the relevant information for your team to make judgements in an instant. 

Figure 1: An example of how Darktrace filters individual model breaches into incidents and then critical incidents for the human to review 

Cyber AI Analyst takes into consideration not only network detections but also your endpoints, your cloud space, IoT devices, and OT devices. It also looks at your attack surface and the associated risks in order to triage and surface the highest-priority alerts: those that, left unaddressed, would cause maximum damage to your organization. These insights are not only delivered in real time but are also unique to your environment.
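As an illustration of the general idea – reducing a mass of individual events to a few prioritized incidents – the following sketch groups toy alerts by device and time window and surfaces the highest-severity incident first. The grouping logic, field names, and data are invented for illustration; this is not Darktrace's algorithm.

```python
from collections import defaultdict

# Toy alerts: (device, timestamp in minutes, severity 0..1). Hypothetical data.
alerts = [
    ("laptop-7", 10, 0.3), ("laptop-7", 12, 0.7), ("laptop-7", 14, 0.9),
    ("printer-2", 300, 0.2),
    ("server-1", 500, 0.8), ("server-1", 505, 0.6),
]

def group_into_incidents(alerts, window=30):
    """Fold alerts on the same device within `window` minutes into one incident."""
    by_device = defaultdict(list)
    for device, ts, sev in sorted(alerts, key=lambda a: (a[0], a[1])):
        incidents = by_device[device]
        if incidents and ts - incidents[-1]["end"] <= window:
            # Same device, close in time: merge into the open incident.
            incidents[-1]["end"] = ts
            incidents[-1]["severity"] = max(incidents[-1]["severity"], sev)
            incidents[-1]["alerts"] += 1
        else:
            incidents.append({"device": device, "end": ts,
                              "severity": sev, "alerts": 1})
    return [i for incs in by_device.values() for i in incs]

# Six raw alerts collapse into three incidents, worst first.
incidents = sorted(group_into_incidents(alerts), key=lambda i: -i["severity"])
print(len(alerts), "alerts ->", len(incidents), "incidents")
print(incidents[0]["device"])  # the device behind the top-priority incident
```

Six alerts become three incidents, and the analyst's attention goes first to the device with the most severe correlated activity.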

This also helps address another topic that frequently comes up in conversations around AI: false positives. This is of course a valid concern: what is the point of harvesting the value of AI if it means that a small team now must look at thousands of alerts? But we have to remember that while AI allows us to make more connections over the vastness of logs, its goal is not to create more work for security teams, but to augment them instead.

To ensure that your business can continue to own these AI outputs – and, more importantly, the knowledge – Explainable AI such as that used in Darktrace’s Cyber AI Analyst is needed to interpret the AI’s findings, ensuring human teams know what happened, what action (if any) the AI took, and why.

Conclusion

Every organization is different, and its security should reflect that. However, some fundamental challenges of AI in cyber security are shared amongst all security teams, regardless of size, resources, industry vertical, and culture. Their cyber strategy and maturity levels are what set them apart. Maturity is not defined by how many professional certifications or years of experience the team has. A mature team works together to solve problems. They understand that while AI is not the silver bullet, it is a powerful bullet that, if used right, will autonomously harden the security of the complete digital ecosystem, while augmenting the humans tasked with defending it.


Blog / Email

December 18, 2025

Why organizations are moving to label-free, behavioral DLP for outbound email


Why outbound email DLP needs reinventing

In 2025, the global average cost of a data breach fell slightly, but it remains substantial at USD 4.44 million (IBM Cost of a Data Breach Report 2025). The headline figure hides a painful reality: many of these breaches stem not from sophisticated hacks but from simple human error – mis-sent emails, accidental forwarding, or replying with the wrong attachment. Because outbound email is a common channel for sensitive data leaving an organization, the risk posed by everyday mistakes is enormous.

In 2025, 53% of data breaches involved customer PII, making it the most commonly compromised asset (IBM Cost of a Data Breach Report 2025). This makes “protection at the moment of send” essential. A single unintended disclosure can trigger compliance violations, regulatory scrutiny, and erosion of customer trust – consequences that are disproportionate to the marginal human errors that cause them.

Traditional DLP has long attempted to mitigate these impacts, but it relies heavily on perfect labelling and rigid pattern-matching. In reality, data loss rarely presents itself as a neat, well-structured pattern waiting to be caught – it looks like everyday communication, just slightly out of context.

How data loss actually happens

Most data loss comes from frustratingly familiar scenarios. A mistyped name in auto-complete sends sensitive data to the wrong “Alex.” A user forwards a document to a personal Gmail account “just this once.” Someone shares an attachment with a new or unknown correspondent without realizing how sensitive it is.

Traditional, content-centric DLP rarely catches these moments. Labels are missing or wrong. Regexes break the moment the data shifts formats. And static rules can’t interpret the context that actually matters – the sender-recipient relationship, the communication history, or whether this behavior is typical for the user.

It’s the everyday mistakes that hurt the most. The classic example: the Friday 5:58 p.m. mis-send, when auto-complete selects Martin, a former contractor, instead of Marta in Finance.

What traditional DLP approaches offer (and where gaps remain)

Most email DLP today follows two patterns, each useful but incomplete.

  • Policy- and label-centric DLP works when labels are correct — but content is often unlabeled or mislabeled, and maintaining classification adds friction. Gaps appear exactly where users move fastest
  • Rule and signature-based approaches catch known patterns but miss nuance: human error, new workflows, and “unknown unknowns” that don’t match a rule

The takeaway: Protection must combine content + behavior + explainability at send time, without depending on perfect labels.

Your technology primer: The three pillars that make outbound DLP effective

1) Label-free (vs. data classification)

Protects all content, not just what’s labeled. Label-free analysis removes classification overhead and closes gaps from missing or incorrect tags. By evaluating content and context at send time, it also catches misdelivery and other payload-free errors.

  • No labeling burden; no regex/rule maintenance
  • Works when tags are missing, wrong, or stale
  • Detects misdirected sends even when labels look right

2) Behavioral (vs. rules, signatures, threat intelligence)

Understands user behavior, not just static patterns. Behavioral analysis learns what’s normal for each person, surfacing human error and subtle exfiltration that rules can’t. It also incorporates account signals and inbound intel, extending across email and Teams.

  • Flags risk without predefined rules or IOCs
  • Catches misdelivery, unusual contacts, personal forwards, odd timing/volume
  • Blends identity and inbound context across channels
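A toy illustration of the behavioral idea described above: learn how familiar each recipient is to a given sender, and score a send as riskier the rarer the contact is for that particular user. The scoring formula, threshold, addresses, and counts below are all hypothetical and much simpler than any production model.

```python
from collections import Counter

# Learned per-user baseline: how often each sender has emailed each recipient.
# In practice this would be built continuously from observed mail flow;
# these counts are invented for illustration.
history = Counter({
    ("alice@corp.example", "marta@corp.example"): 180,
    ("alice@corp.example", "team@corp.example"): 95,
    ("alice@corp.example", "martin@oldvendor.example"): 2,  # stale contact
})

def recipient_risk(sender: str, recipient: str, has_attachment: bool) -> float:
    """Score 0..1: higher means more unusual for THIS sender."""
    familiarity = history[(sender, recipient)]     # 0 for never-seen contacts
    risk = 1.0 / (1.0 + familiarity)               # rare contact -> high risk
    if has_attachment:
        risk = min(1.0, risk * 2)                  # payload raises the stakes
    return risk

# Auto-complete picks "martin" (ex-contractor) instead of "marta" in Finance.
print(recipient_risk("alice@corp.example", "marta@corp.example", True))
print(recipient_risk("alice@corp.example", "martin@oldvendor.example", True))
```

No label or regex is involved: the same attachment is low-risk to a daily correspondent and high-risk to a near-stranger, which is exactly the misdelivery case static rules miss.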

3) Proprietary DSLM (vs. generic LLM)

Optimized for precise, fast, explainable on-send decisions. A DSLM understands email/DLP semantics, avoids generative risks, and stays auditable and privacy-controlled, delivering intelligence reliably without slowing mail flow.

  • Low-latency, on-send enforcement
  • Non-generative for predictable, explainable outcomes
  • Governed model with strong privacy and auditability

The Darktrace approach to DLP

Darktrace / EMAIL – DLP stops misdelivery and sensitive data loss at send time using hold/notify/justify/release actions. It blends behavioral insight with content understanding across 35+ PII categories, protecting both labeled and unlabeled data. Every action is paired with clear explainability: AI narratives show exactly why an email was flagged, supporting analysts and helping end-users learn. Deployment aligns cleanly with existing SOC workflows through mail-flow connectors and optional Microsoft Purview label ingestion, without forcing duplicate policy-building.

Deployment is simple: Microsoft 365 routes outbound mail to Darktrace for real-time, inline decisions without regex or rule-heavy setup.

A buyer’s checklist for DLP solutions

When choosing your DLP solution, you want to be sure that it can deliver precise, explainable protection at the moment it matters – on send – without operational drag.  

To finish, we’ve compiled a handy list of questions you can ask before choosing an outbound DLP solution:

  • Can it operate label-free when tags are missing or wrong?
  • Does it truly learn per-user behavior (no shortcuts)?
  • Is there a domain-specific model behind the content understanding (not a generic LLM)?
  • Does it explain decisions to both analysts and end users? 
  • Will it integrate with your label program and SOC workflows rather than duplicate them? 

For a deep dive into Darktrace’s DLP solution, check out the full solution brief.


About the author
Carlos Gray
Senior Product Marketing Manager, Email

Blog / Email

December 17, 2025

Beyond MFA: Detecting Adversary-in-the-Middle Attacks and Phishing with Darktrace


What is an Adversary-in-the-Middle (AiTM) attack?

Adversary-in-the-Middle (AiTM) attacks are a sophisticated technique often paired with phishing campaigns to steal user credentials. Unlike traditional phishing, which multi-factor authentication (MFA) increasingly mitigates, AiTM attacks leverage reverse proxy servers to intercept authentication tokens and session cookies. This allows attackers to bypass MFA entirely and hijack active sessions, stealthily maintaining access without repeated logins.

This blog examines a real-world incident detected during a Darktrace customer trial, highlighting how Darktrace / EMAIL™ and Darktrace / IDENTITY™ identified the emerging compromise in a customer’s email and software-as-a-service (SaaS) environment, tracked its progression, and could have intervened at critical moments to contain the threat had Darktrace’s Autonomous Response capability been enabled.

What does an AiTM attack look like?

Inbound phishing email

Attacks typically begin with a phishing email, often originating from the compromised account of a known contact like a vendor or business partner. These emails often contain malicious links or attachments leading to fake login pages that spoof legitimate platforms, like Microsoft 365, in order to harvest user credentials.

Proxy-based credential theft and session hijacking

When a user clicks on a malicious link, they are redirected through an attacker-controlled proxy that impersonates legitimate services.  This proxy forwards login requests to Microsoft, making the login page appear legitimate. After the user successfully completes MFA, the attacker captures credentials and session tokens, enabling full account takeover without the need for reauthentication.

Follow-on attacks

Once inside, attackers will typically establish persistence through the creation of email rules or registering OAuth applications. From there, they often act on their objectives, exfiltrating sensitive data and launching additional business email compromise (BEC) campaigns. These campaigns can include fraudulent payment requests to external contacts or internal phishing designed to compromise more accounts and enable lateral movement across the organization.

Darktrace’s detection of an AiTM attack

At the end of September 2025, Darktrace detected one such example of an AiTM attack on the network of a customer trialling Darktrace / EMAIL and Darktrace / IDENTITY.

In this instance, the first indicator of compromise observed by Darktrace was the creation of a malicious email rule on one of the customer’s Office 365 accounts, suggesting the account had likely already been compromised before Darktrace was deployed for the trial.

Darktrace / IDENTITY observed the account creating a new email rule with a randomly generated name, likely to hide its presence from the legitimate account owner. The rule marked all inbound emails as read and deleted them, while ignoring any existing mail rules on the account. This rule was likely intended to conceal any replies to malicious emails the attacker had sent from the legitimate account owner and to facilitate further phishing attempts.
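The traits described above – a randomly generated rule name combined with mark-as-read and delete actions applied to all inbound mail – can be illustrated with a simple heuristic sketch. The rule fields, entropy threshold, and example names below are invented for illustration; real behavioral detection learns what is normal per account rather than applying fixed heuristics like these.

```python
import math
from collections import Counter

def name_entropy(name: str) -> float:
    """Shannon entropy per character; random-looking strings score high."""
    counts = Counter(name)
    n = len(name)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def rule_is_suspicious(rule: dict) -> bool:
    # Heuristics mirroring the observed behavior: a random-looking name,
    # mark-as-read plus delete, and no condition scoping it to specific mail.
    random_name = name_entropy(rule["name"]) > 3.5 and len(rule["name"]) >= 10
    hides_mail = bool(rule.get("mark_as_read")) and bool(rule.get("delete"))
    applies_broadly = rule.get("conditions") in (None, {}, [])
    return random_name and hides_mail and applies_broadly

benign = {"name": "Move newsletters", "mark_as_read": True, "delete": False,
          "conditions": {"from": "news@vendor.example"}}
malicious = {"name": "sdQ8xKpL2vWm9tRz", "mark_as_read": True, "delete": True,
             "conditions": None}
print(rule_is_suspicious(benign))     # ordinary housekeeping rule
print(rule_is_suspicious(malicious))  # hide-all-inbound-mail rule
```

The benign rule is scoped to one sender and keeps the mail; the malicious one silently reads and deletes everything, which is useful only to someone hiding their activity from the mailbox owner.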

Figure 1: Darktrace’s detection of the anomalous email rule creation.

Internal and external phishing

Following the creation of the email rule, Darktrace / EMAIL observed a surge of suspicious activity on the user’s account. The account sent emails with subject lines referencing payment information to over 9,000 different external recipients within just one hour. Darktrace also identified that these emails contained a link to an unusual Google Drive endpoint, embedded in the text “download order and invoice”.

Figure 2: Darktrace’s detection of an unusual surge in outbound emails containing suspicious content, shortly following the creation of a new email rule.
Figure 3: Darktrace / EMAIL’s detection of the compromised account sending over 9,000 external phishing emails, containing an unusual Google Drive link.
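To illustrate why 9,000 recipients in one hour stands out, here is a minimal, hypothetical rate check against an account's recent outbound baseline. The baseline figures and the fixed multiplier are invented for illustration; a real behavioral model is far richer than a simple threshold.

```python
# Hypothetical baseline: outbound recipients per hour for this account
# over recent working days.
hourly_recipients = [12, 8, 15, 10, 9, 14, 11, 13]

def surge_detected(baseline: list[int], current: int, factor: int = 10) -> bool:
    """Flag when the current hour's recipient count dwarfs the usual rate."""
    typical_peak = max(baseline)          # busiest normal hour seen so far
    return current > factor * typical_peak

print(surge_detected(hourly_recipients, 9000))  # the observed phishing burst
print(surge_detected(hourly_recipients, 20))    # busy but plausible hour
```

Even this crude check separates the phishing burst from a merely busy hour; combining it with content signals (such as the unusual Google Drive link) is what raises confidence enough to act.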

As Darktrace / EMAIL flagged the message with the ‘Compromise Indicators’ tag (Figure 2), it would have been held automatically if the customer had enabled default Data Loss Prevention (DLP) Action Flows in their email environment, preventing any external phishing attempts.

Figure 4: Darktrace / EMAIL’s preview of the email sent by the offending account.

Darktrace analysis revealed that, after clicking the malicious link in the email, recipients would be redirected to a convincing landing page that closely mimicked the customer’s legitimate branding, including authentic imagery and logos, where they were prompted to download a PDF named “invoice”.

Figure 5: Download and login prompts presented to recipients after following the malicious email link, shown here in safe view.

After clicking the “Download” button, users would be prompted to enter their company credentials on a page that was likely a credential-harvesting tool, designed to steal corporate login details and enable further compromise of SaaS and email accounts.

Darktrace’s Response

In this case, Darktrace’s Autonomous Response was not fully enabled across the customer’s email or SaaS environments, allowing the compromise to progress as observed by Darktrace.

Despite this, Darktrace / EMAIL’s successful detection of the malicious Google Drive link in the internal phishing emails prompted it to suggest ‘Lock Link’ as a recommended action for the customer’s security team to apply manually. This action would have automatically placed the malicious link behind a warning or screening page, blocking users from visiting it.

Figure 6: Autonomous Response suggesting locking the malicious Google Drive link sent in internal phishing emails.

Furthermore, if active in the customer’s SaaS environment, Darktrace would likely have been able to mitigate the threat even earlier, at the point of the first unusual activity: the creation of a new email rule. Mitigative actions would have included forcing the user to log out, terminating any active sessions, and disabling the account.

Conclusion

AiTM attacks represent a significant evolution in credential theft techniques, enabling attackers to bypass MFA and hijack active sessions through reverse proxy infrastructure. In the real-world case we explored, Darktrace’s AI-driven detection identified multiple stages of the attack, from anomalous email rule creation to suspicious internal email activity, demonstrating how Autonomous Response could have contained the threat before escalation.

MFA is a critical security measure, but it is no longer a silver bullet. Attackers are increasingly targeting session tokens rather than passwords, exploiting trusted SaaS environments and internal communications to remain undetected. Behavioral AI provides a vital layer of defense by spotting subtle anomalies that traditional tools often miss.

Security teams must move beyond static defenses and embrace adaptive, AI-driven solutions that can detect and respond in real time. Regularly review SaaS configurations, enforce conditional access policies, and deploy technologies that understand “normal” behavior to stop attackers before they succeed.

Credit to David Ison (Cyber Analyst), Bertille Pierron (Solutions Engineer), Ryan Traill (Analyst Content Lead)

Appendices

Models

SaaS / Anomalous New Email Rule

Tactic – Technique – Sub-Technique  

Phishing - T1566

Adversary-in-the-Middle - T1557
