November 25, 2024

Why Artificial Intelligence is the Future of Cybersecurity

This blog explores the impact of AI on the threat landscape, the benefits of AI in cybersecurity, and the role it plays in enhancing security practices and tools.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Brittany Woodsmall
Product Marketing Manager, AI & Attack Surface

Introduction: AI & Cybersecurity

As artificial intelligence (AI) becomes more commonplace, it is no surprise that threat actors are also adopting AI in their attacks at an accelerated pace. AI augments complex tasks such as spear-phishing, deepfake creation, polymorphic malware generation, and advanced persistent threat (APT) campaigns, significantly enhancing the sophistication and scale of attackers’ operations. This has put security professionals in a reactive state, struggling to keep pace with the proliferation of threats.

As AI reshapes the future of cyber threats, defenders are also looking to integrate AI technologies into their security stack. Adopting AI-powered solutions enables security teams to detect and respond to these advanced threats more quickly and accurately, as well as automate traditionally manual and routine tasks. According to Darktrace’s 2024 State of AI Cybersecurity Report, improving threat detection, identifying exploitable vulnerabilities, and automating low-level security tasks were the top three ways practitioners saw AI enhancing their security team’s capabilities [1], underscoring the wide-ranging applications of AI in cyber defense.

In this blog, we will discuss how AI has impacted the threat landscape, the rise of generative AI and AI adoption in security tools, and the importance of using multiple types of AI in cybersecurity solutions for a holistic and proactive approach to keeping your organization safe.  

The impact of AI on the threat landscape

The integration of AI and cybersecurity has brought about significant advancements across industries. However, it also introduces new security risks that challenge traditional defenses. Three major concerns around adversarial misuse of AI are: (1) an increase in novel social engineering attacks that are harder to detect and able to bypass traditional security tools; (2) a lower barrier to entry, allowing less experienced threat actors to deliver advanced attacks at speed and scale; and (3) attacks on AI itself, including machine learning models, data corpuses, and APIs or interfaces.

In the context of social engineering, AI can be used to create more convincing phishing emails, conduct advanced reconnaissance, and simulate human-like interactions to deceive victims more effectively. Generative AI tools, such as ChatGPT, are already being used by adversaries to craft sophisticated phishing emails that more aptly mimic human semantics, free of spelling or grammatical errors, and include personal information pulled from internet sources such as social media profiles. And this can all be done at machine speed and scale. In fact, Darktrace researchers observed a 135% rise in ‘novel social engineering attacks’ across Darktrace / EMAIL customers in 2023, corresponding to the widespread adoption and use of ChatGPT [2].

Furthermore, these sophisticated social engineering attacks are now able to circumvent traditional security tools. Between December 21, 2023, and July 5, 2024, Darktrace / EMAIL detected 17.8 million phishing emails across the fleet, with 62% of these phishing emails successfully bypassing Domain-based Message Authentication, Reporting, and Conformance (DMARC) verification checks [2].

And while the proliferation of novel attacks fueled by AI persists, AI also lowers the barrier to entry for threat actors. Publicly available AI tools make it easy for adversaries to automate complex tasks that previously required advanced technical skills. Additionally, AI-driven platforms and phishing kits available on the dark web provide ready-made solutions, enabling even novice attackers to execute effective cyber campaigns with minimal effort.

The impact of adversarial use of AI on the ever-evolving threat landscape is important for organizations to understand as it fundamentally changes the way we must approach cybersecurity. However, while the intersection of cybersecurity and AI can have potentially negative implications, it is important to recognize that AI can also be used to help protect us.

A generation of generative AI in cybersecurity

When the topic of AI in cybersecurity comes up, it is typically in reference to generative AI, which rose to prominence in 2023. While generative AI does not encapsulate everything AI can do in this space, it is important to understand what it is and how it can be implemented to help organizations get ahead of today’s threats.

Generative AI (e.g., ChatGPT or Microsoft Copilot) is a type of AI that creates new or original content. It has the capability to generate images, videos, or text based on information it learns from large datasets. These systems use advanced algorithms and deep learning techniques to understand patterns and structures within the data they are trained on, enabling them to generate outputs that are coherent, contextually relevant, and often indistinguishable from human-created content.

For security professionals, generative AI offers some valuable applications. Primarily, it is used to transform complex security data into clear and concise summaries. By analyzing vast amounts of security logs, alerts, and technical data, it can contextualize critical information quickly and present findings in natural, comprehensible language, making it easier for security teams to act and improving communication with non-technical stakeholders. Generative AI can also automate the creation of realistic simulations for training purposes, helping security teams prepare for various cyberattack scenarios and improve their response strategies.
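
To make this concrete, the sketch below shows one way a security team might feed raw alert lines to a generative AI model and ask for a plain-language summary. It is a minimal illustration only: it assumes the OpenAI Python client, an illustrative model name, and invented alert data, and any real deployment would need to redact sensitive material before sending it to an external service.

```python
# Illustrative sketch: summarizing raw security alerts with a generative AI API.
# Assumes the OpenAI Python client (openai>=1.0); alert data below is invented.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

raw_alerts = [
    "2024-11-20T09:14:02Z deny tcp 10.1.4.22:49152 -> 203.0.113.7:445 (SMB outbound blocked)",
    "2024-11-20T09:14:05Z auth failure user=jsmith src=10.1.4.22 count=37 in 60s",
    "2024-11-20T09:15:11Z dns query for rare domain up-d4te-check[.]net from 10.1.4.22",
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You are a SOC assistant. Summarize these alerts for a "
                    "non-technical stakeholder in three sentences and flag "
                    "likely next steps."},
        {"role": "user", "content": "\n".join(raw_alerts)},
    ],
)
print(response.choices[0].message.content)
```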

Despite its advantages, generative AI also has limitations that organizations must consider. One challenge is the potential for generating false positives, where benign activities are mistakenly flagged as threats, which can overwhelm security teams with unnecessary alerts. Moreover, implementing generative AI requires significant computational resources and expertise, which may be a barrier for some organizations. It can also be susceptible to prompt injection attacks, and there are risks of intellectual property or sensitive data being leaked when using publicly available generative AI tools. In fact, according to the MIT AI Risk Repository, there are potentially over 700 risks that need to be mitigated with the use of generative AI.


For more information on generative AI's impact on the cyber threat landscape download the Darktrace Data Sheet

Beyond the Generative AI Glass Ceiling

Generative AI has a place in cybersecurity, but security professionals are starting to recognize that it is not the only AI organizations should have in their security toolkit. In fact, according to Darktrace’s State of AI Cybersecurity Report, “86% of survey participants believe generative AI alone is NOT enough to stop zero-day threats.” As we look toward the future of AI in cybersecurity, it is critical to understand that different types of AI have different strengths and use cases, and that choosing technologies based on your organization’s specific needs is paramount.

There are a few types of AI used in cybersecurity that serve different functions. These include:

Supervised Machine Learning: Widely used in cybersecurity due to its ability to learn from labeled datasets. These datasets include historical threat intelligence and known attack patterns, allowing the model to recognize and predict similar threats in the future. For example, supervised machine learning can be applied to email filtering systems to identify and block phishing attempts by learning from past phishing emails. This is human-led training facilitating automation based on known information.  

Large Language Models (LLMs): Deep learning models trained on extensive datasets to understand and generate human-like text. LLMs can analyze vast amounts of text data, such as security logs, incident reports, and threat intelligence feeds, to identify patterns and anomalies that may indicate a cyber threat. They can also generate detailed and coherent reports on security incidents, summarizing complex data into understandable formats.

Natural Language Processing (NLP): Involves the application of computational techniques to process and understand human language. In cybersecurity, NLP can be used to analyze and interpret text-based data, such as emails, chat logs, and social media posts, to identify potential threats. For instance, NLP can help detect phishing attempts by analyzing the language used in emails for signs of deception.

Unsupervised Machine Learning: Continuously learns from raw, unstructured data without predefined labels. It is particularly useful in identifying new and unknown threats by detecting anomalies that deviate from normal behavior. In cybersecurity, unsupervised learning can be applied to network traffic analysis to identify unusual patterns that may indicate a cyberattack. It can also be used in endpoint detection and response (EDR) systems to uncover previously unknown malware by recognizing deviations from typical system behavior.

Figure 1: Types of AI in cybersecurity

Employing multiple types of AI in cybersecurity is essential for creating a layered and adaptive defense strategy. Each type of AI, from supervised and unsupervised machine learning to large language models (LLMs) and natural language processing (NLP), brings distinct capabilities that address different aspects of cyber threats. Supervised learning excels at recognizing known threats, while unsupervised learning uncovers new anomalies. LLMs and NLP enhance the analysis of textual data for threat detection and response and aid in understanding and mitigating social engineering attacks. By integrating these diverse AI technologies, organizations can achieve a more holistic and resilient cybersecurity framework, capable of adapting to the ever-evolving threat landscape.
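
The toy sketch below illustrates the contrast between the first and last of these approaches: a supervised model that learns from labeled phishing examples versus an unsupervised model that flags deviations from normal behavior. It assumes scikit-learn, uses invented data, and is not a representation of how any particular product works.

```python
# Toy illustration (not any vendor's implementation): supervised classification of
# known phishing patterns vs. unsupervised anomaly detection. Data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import IsolationForest

# --- Supervised: learn from labeled examples of phishing vs. benign subjects ---
subjects = [
    "Urgent: verify your account now", "Invoice attached, payment overdue",
    "Team lunch on Friday?", "Minutes from yesterday's meeting",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = benign
vectorizer = TfidfVectorizer()
clf = LogisticRegression().fit(vectorizer.fit_transform(subjects), labels)
print(clf.predict(vectorizer.transform(["Action required: confirm your password"])))
# expected to lean toward the phishing class (1)

# --- Unsupervised: flag network behavior that deviates from the learned norm ---
# Each row is [bytes_out_mb, distinct_destinations] for one host-hour.
normal_traffic = [[1.2, 3], [0.9, 2], [1.5, 4], [1.1, 3], [1.3, 2]]
detector = IsolationForest(contamination=0.2, random_state=0).fit(normal_traffic)
print(detector.predict([[48.0, 97]]))  # -1 indicates an anomaly
```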

A Multi-Layered AI Approach with Darktrace

AI-powered security solutions are emerging as a crucial line of defense against an AI-powered threat landscape. In fact, “most security stakeholders (71%) are confident that AI-powered security solutions will be better able to block AI-powered threats than traditional tools,” and 96% agree that AI-powered solutions will level up their organization’s defenses. As organizations look to adopt these tools, it is imperative to understand how to evaluate AI vendors to find the right products and to build trust in these AI-powered solutions.

Darktrace, a leader in AI cybersecurity since 2013, emphasizes interpretability, explainability, and user control, ensuring that our AI is understandable, customizable and transparent. Darktrace’s approach to cyber defense is rooted in the belief that the right type of AI must be applied to the right use cases. Central to this approach is Self-Learning AI, which is crucial for identifying novel cyber threats that most other tools miss. This is complemented by various AI methods, including LLMs, generative AI, and supervised machine learning, to support the Self-Learning AI.  

Darktrace focuses on where AI can best augment the people in a security team and where it can be used responsibly to have the most positive impact on their work. With a combination of these AI techniques, applied to the right use cases, Darktrace enables organizations to tailor their AI defenses to unique risks, providing extended visibility across their entire digital estates with the Darktrace ActiveAI Security Platform™.

Credit to Ed Metcalf, Senior Director of Product Marketing, AI & Innovations, and Nicole Carignan, VP of Strategic Cyber AI, for their contributions to this blog.


To learn more about Darktrace and AI in cybersecurity download the CISO’s Guide to Cyber AI here.

Download the white paper to learn how buyers should approach purchasing AI-based solutions. It includes:

  • Key steps for selecting AI cybersecurity tools
  • Questions to ask and responses to expect from vendors
  • An overview of the tools available and how to find the right fit
  • Guidance on aligning AI investments with security goals and needs

September 30, 2025

Out of Character: Detecting Vendor Compromise and Trusted Relationship Abuse with Darktrace


What is Vendor Email Compromise?

Vendor Email Compromise (VEC) refers to an attack where actors breach a third-party provider to exploit their access, relationships, or systems for malicious purposes. The initially compromised entities are often the target’s existing partners, though this can extend to any organization or individual the target is likely to trust.

It sits at the intersection of supply chain attacks and business email compromise (BEC), blending technical exploitation with trust-based deception. Attackers often infiltrate existing conversations, leveraging AI to mimic tone and avoid common spelling and grammar pitfalls. Malicious content is typically hosted on otherwise reputable file sharing platforms, meaning any shared links initially seem harmless.

While techniques to achieve initial access may have evolved, the goals remain familiar. Threat actors harvest credentials, launch subsequent phishing campaigns, attempt to redirect invoice payments for financial gain, and exfiltrate sensitive corporate data.

Why traditional defenses fall short

These subtle and sophisticated email attacks pose unique challenges for defenders. Few busy people would treat an ongoing conversation with a trusted contact with the same level of suspicion as an email from the CEO requesting ‘URGENT ASSISTANCE!’ Unfortunately, many traditional secure email gateways (SEGs) struggle with this too. Detecting an out-of-character email, when it does not obviously appear out of character, is a complex challenge. It’s hardly surprising, then, that 83% of organizations have experienced a security incident involving third-party vendors [1].  

This article explores how Darktrace detected four different vendor compromise campaigns for a single customer within a two-week period in 2025. Darktrace / EMAIL successfully identified the subtle indicators that these seemingly benign emails from trusted senders were, in fact, malicious. Due to the configuration of Darktrace / EMAIL in this customer’s environment, it was unable to take action against the malicious emails; however, if fully enabled to take Autonomous Response, it would have held all of the offending emails it identified.

How does Darktrace detect vendor compromise?

The answer lies at the core of how Darktrace operates: anomaly detection. Rather than relying on known malicious rules or signatures, Darktrace learns what ‘normal’ looks like for an environment, then looks for anomalies across a wide range of metrics. Despite the resourcefulness of the threat actors involved in this case, Darktrace identified many anomalies across these campaigns.

Different campaigns, common traits

A wide variety of approaches was observed: individuals, shared mailboxes, and external contractors were all targeted. Two emails originated from compromised current vendors, while two came from unknown compromised organizations, one of which operated in an associated industry. The sender organizations were either familiar or, at the very least, professional in appearance, with no unusual alphanumeric strings or suspicious top-level domains (TLDs). Subject lines such as “New Approved Statement From [REDACTED]” and “[REDACTED] - Proposal Document” appeared unremarkable and were not designed to provoke heightened emotions like typical social engineering or BEC attempts.

All emails had been given a Microsoft Spam Confidence Level of 1, indicating Microsoft did not consider them to be spam or malicious [2]. They also passed authentication checks (including SPF, and in some cases DKIM and DMARC), meaning they appeared to originate from an authentic source for the sender domain and had not been tampered with in transit.  
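
For readers less familiar with these checks, the short sketch below shows how SPF, DKIM, and DMARC verdicts can be read from a message’s Authentication-Results header using only Python’s standard library. The sample message and host names are invented, and, as the campaigns described here demonstrate, a clean set of verdicts proves only that the mail genuinely came from the sender’s infrastructure, not that the sender’s mailbox is uncompromised.

```python
# Minimal sketch using only the Python standard library: reading SPF/DKIM/DMARC
# verdicts from an email's Authentication-Results header. Message is invented.
import re
from email import message_from_string

raw_message = """\
Authentication-Results: mx.example.com;
 spf=pass smtp.mailfrom=vendor.example;
 dkim=pass header.d=vendor.example;
 dmarc=pass header.from=vendor.example
From: accounts@vendor.example
To: finance@customer.example
Subject: New Approved Statement

Please review the attached statement.
"""

msg = message_from_string(raw_message)
auth_header = msg.get("Authentication-Results", "")
verdicts = dict(re.findall(r"\b(spf|dkim|dmarc)=(\w+)", auth_header))
print(verdicts)  # {'spf': 'pass', 'dkim': 'pass', 'dmarc': 'pass'}

# All three can pass and the message can still be malicious if it was sent
# from a genuinely compromised vendor mailbox.
```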

All observed phishing emails contained a link hosted on a legitimate and commonly used file-sharing site. These sites were often convincingly themed, frequently featuring the name of a trusted vendor either on the page or within the URL, to appear authentic and avoid raising suspicion. However, these links served only as the initial step in a more complex, multi-stage phishing process.

Figure 1: A legitimate file sharing site used in phishing emails to host a secondary malicious link.

Figure 2: Another example of a legitimate file sharing endpoint sent in a phishing email and used to host a malicious link.

If followed, the recipient would be redirected, sometimes via CAPTCHA, to fake Microsoft login pages designed to capture credentials, namely http://pub-ac94c05b39aa4f75ad1df88d384932b8.r2[.]dev/offline[.]html and https://s3.us-east-1.amazonaws[.]com/s3cure0line-0365cql0.19db86c3-b2b9-44cc-b339-36da233a3be2ml0qin/s3cccql0.19db86c3-b2b9-44cc-b339-36da233a3be2%26l0qn[.]html#.

The latter made use of homoglyphs to deceive the user, with a link referencing ‘s3cure0line’, rather than ‘secureonline’. Post-incident investigation using open-source intelligence (OSINT) confirmed that the domains were linked to malicious phishing endpoints [3] [4].
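
As an illustration of the general idea (not a description of Darktrace’s detection logic), the heuristic below normalizes common character substitutions such as ‘0’ for ‘o’ and ‘3’ for ‘e’, then fuzzy-matches URL tokens against a short, hypothetical list of trusted terms, flagging ‘s3cure0line’ as a lookalike of ‘secureonline’.

```python
# Illustrative heuristic only: spotting lookalike tokens such as 's3cure0line'
# posing as 'secureonline'. The trusted-term list and URL are hypothetical.
import re
from difflib import SequenceMatcher

SUBSTITUTIONS = str.maketrans({"0": "o", "1": "l", "3": "e", "5": "s", "@": "a"})
TRUSTED_TERMS = ["secureonline", "microsoft", "sharepoint"]

def lookalike_matches(url: str, threshold: float = 0.85) -> list[tuple[str, str]]:
    """Return (token, trusted_term) pairs where a URL token closely mimics a term."""
    tokens = re.split(r"[^a-z0-9]+", url.lower())
    hits = []
    for token in tokens:
        normalized = token.translate(SUBSTITUTIONS)
        for term in TRUSTED_TERMS:
            similarity = SequenceMatcher(None, normalized, term).ratio()
            # Flag near-matches that are not exact, i.e. deliberate obfuscation.
            if threshold <= similarity < 1.0 or (normalized == term != token):
                hits.append((token, term))
    return hits

print(lookalike_matches("https://s3.us-east-1.amazonaws.com/s3cure0line-0365/login.html"))
# -> [('s3cure0line', 'secureonline')]
```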

Figure 3: Fake Microsoft login page designed to harvest credentials.

Figure 4: Phishing kit with likely AI-generated image, designed to harvest user credentials. The URL uses ‘s3cure0line’ instead of ‘secureonline’, a subtle misspelling intended to deceive users.

Darktrace Anomaly Detection

Some senders were unknown to the network, with no previous outbound or inbound emails. Some had sent the email to multiple undisclosed recipients using BCC, an unusual behavior for a new sender.  

Where the sender organization was an existing vendor, Darktrace recognized out-of-character behavior; in this case, it was the first time a link to a particular file-sharing site had been shared. Often the links themselves exhibited anomalies, either being unusually prominent or hidden altogether, masked by text or a clickable image.
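
The following toy sketch illustrates that idea at its simplest: maintain a per-sender history of link domains and surface the first appearance of a new domain from an otherwise established correspondent. It is a deliberately naive illustration with invented addresses, not a reflection of how Darktrace scores anomalies.

```python
# Toy sketch of the general idea only: keep a per-sender history of link domains
# and flag the first appearance of a new domain. Addresses below are invented.
from collections import defaultdict
from urllib.parse import urlparse

sender_link_history: dict[str, set[str]] = defaultdict(set)

def check_links(sender: str, urls: list[str]) -> list[str]:
    """Return link domains this sender has never used before."""
    new_domains = []
    for url in urls:
        domain = urlparse(url).netloc.lower()
        if domain and domain not in sender_link_history[sender]:
            new_domains.append(domain)
        sender_link_history[sender].add(domain)
    return new_domains

# Months of normal mail establish the baseline...
check_links("accounts@vendor.example", ["https://portal.vendor.example/invoice/123"])
# ...then a compromised mailbox suddenly shares a file-sharing link.
print(check_links("accounts@vendor.example", ["https://files-share.example/doc/abc"]))
# -> ['files-share.example'] : out-of-character and worth scoring as anomalous
```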

Crucially, Darktrace / EMAIL is able to identify malicious links at the time of processing the emails, without needing to visit the URLs or analyze the destination endpoints, meaning even the most convincing phishing pages cannot evade detection. This sets it apart from many competitors that rely on crawling the endpoints present in emails, an approach that, among other things, risks disrupting the user experience, for example by inadvertently unsubscribing users from emails.

Darktrace was also able to determine that the malicious emails originated from a compromised mailbox, using a series of behavioral and contextual metrics to make the identification. Upon analysis of the emails, Darktrace autonomously assigned several contextual tags to highlight their concerning elements, indicating that the messages contained phishing links, were likely sent from a compromised account, and originated from a known correspondent exhibiting out-of-character behavior.

Figure 5: Tags assigned to offending emails by Darktrace / EMAIL.

Figure 6: A summary of the anomalous email, confirming that it contained a highly suspicious link.

Out-of-character behavior caught in real-time

In another customer environment around the same time, Darktrace / EMAIL detected multiple emails with carefully crafted, contextually appropriate subject lines sent from an established correspondent to 30 different recipients. In many cases, the attacker hijacked existing threads and inserted their malicious emails into an ongoing conversation in an effort to blend in and avoid detection. As in the previous case, the attacker leveraged a well-known service, this time ClickFunnels, to host a document containing another malicious link. Once again, the emails were assigned a Microsoft Spam Confidence Level of 1, indicating that they were not considered malicious.

Figure 7: The legitimate ClickFunnels page used to host a malicious phishing link.

This time, however, the customer had Darktrace / EMAIL fully enabled to take Autonomous Response against suspicious emails. As a result, when Darktrace detected the out-of-character behavior, specifically, the sharing of a link to a previously unused file-sharing domain, and identified the likely malicious intent of the message, it held the email, preventing it from reaching recipients’ inboxes and effectively shutting down the attack.

Figure 8: Darktrace / EMAIL’s detection of malicious emails inserted into an existing thread.*

*To preserve anonymity, all real customer names, email addresses, and other identifying details have been redacted and replaced with fictitious placeholders.

Legitimate messages in the conversation were assigned an Anomaly Score of 0, while the newly inserted malicious emails were identified and flagged with the maximum score of 100.

Key takeaways for defenders

Phishing remains big business, and as the landscape evolves, today’s campaigns often look very different from earlier versions. As with network-based attacks, threat actors are increasingly leveraging legitimate tools and exploiting trusted relationships to carry out their malicious goals, often staying under the radar of security teams and traditional email defenses.

As attackers continue to exploit trusted relationships between organizations and their third-party associates, security teams must remain vigilant to unexpected or suspicious email activity. Protecting the digital estate requires an email solution capable of identifying malicious characteristics, even when they originate from otherwise trusted senders.

Credit to Jennifer Beckett (Cyber Analyst), Patrick Anjos (Senior Cyber Analyst), Ryan Traill (Analyst Content Lead), Kiri Addison (Director of Product)

Appendices

IoC – Type – Description

- http://pub-ac94c05b39aa4f75ad1df88d384932b8.r2[.]dev/offline[.]html#p – URL – Fake Microsoft login page

- https://s3.us-east-1.amazonaws[.]com/s3cure0line-0365cql0.19db86c3-b2b9-44cc-b339-36da233a3be2ml0qin/s3cccql0.19db86c3-b2b9-44cc-b339-36da233a3be2%26l0qn[.]html# – URL – Link to domain used in homoglyph attack

MITRE ATT&CK Mapping  

Tactic – Technique – Sub-Technique  

Initial Access – Phishing (T1566)

References

1.     https://gitnux.org/third-party-risk-statistics/

2.     https://learn.microsoft.com/en-us/defender-office-365/anti-spam-spam-confidence-level-scl-about

3.     https://www.virustotal.com/gui/url/5df9aae8f78445a590f674d7b64c69630c1473c294ce5337d73732c03ab7fca2/detection

4.     https://www.virustotal.com/gui/url/695d0d173d1bd4755eb79952704e3f2f2b87d1a08e2ec660b98a4cc65f6b2577/details

The content provided in this blog is published by Darktrace for general informational purposes only and reflects our understanding of cybersecurity topics, trends, incidents, and developments at the time of publication. While we strive to ensure accuracy and relevance, the information is provided “as is” without any representations or warranties, express or implied. Darktrace makes no guarantees regarding the completeness, accuracy, reliability, or timeliness of any information presented and expressly disclaims all warranties.

Nothing in this blog constitutes legal, technical, or professional advice, and readers should consult qualified professionals before acting on any information contained herein. Any references to third-party organizations, technologies, threat actors, or incidents are for informational purposes only and do not imply affiliation, endorsement, or recommendation.

Darktrace, its affiliates, employees, or agents shall not be held liable for any loss, damage, or harm arising from the use of or reliance on the information in this blog.

The cybersecurity landscape evolves rapidly, and blog content may become outdated or superseded. We reserve the right to update, modify, or remove any content at any time.

October 1, 2025

Announcing Unified OT Security with Dedicated OT Workflows, Segmentation-Aware Risk Insights, and Next-Gen Endpoint Visibility for Industrial Teams


The challenge of convergence without clarity

Convergence is no longer a roadmap idea; it is the daily reality for industrial security teams. As Information Technology (IT) and Operational Technology (OT) environments merge, the line between a cyber incident and an operational disruption grows increasingly hard to define. A misconfigured firewall rule can lead to downtime. A protocol misuse might look like a glitch. And when a pump stalls but nothing appears in the Security Operations Center (SOC) dashboard, teams are left asking: is this operational, or is this a threat?

The lack of shared context slows down response, creates friction between SOC analysts and plant engineers, and leaves organizations vulnerable at exactly the points where IT and OT converge. Defenders need more than alerts; they need clarity that both sides can trust.

The breakthrough with Darktrace / OT

This latest Darktrace / OT release was built to deliver exactly that. It introduces shared context between Security, IT, and OT operations, helping reduce friction and close the security gaps at the intersection of these domains.

With a dedicated dashboard built for operations teams, extended visibility into endpoints for new forms of detection and CVE collection, expanded protocol coverage, and smarter risk modeling aligned to segmentation policies, teams can now operate from a shared source of truth. These enhancements are not just incremental upgrades; they are foundational improvements designed to bring clarity, efficiency, and trust to converged environments.

A dashboard built for OT engineers

The new Operational Overview provides OT engineers with a workspace designed for them, not for SOC analysts. It brings asset management, risk insights, and operational alerts into one place. Engineers can now see activity like firmware changes, controller reprograms, or the sudden appearance of a new workstation on the network, providing a tailored view for critical insights and productivity gains without navigating IT-centric workflows.

Each device view is now enriched with cross-linked intelligence: make, model, firmware version, and the roles inferred by Self-Learning AI, making it easier to understand how each asset behaves, what function it serves, and where it fits within the broader industrial process. By suppressing IT-centric noise, the dashboard highlights only the anomalies that matter to operations, accelerating triage, enabling smoother IT/OT collaboration, and reducing time to root cause without jumping between tools.

This is usability with purpose, a view that matches OT workflows and accelerates response.

Figure 1: The Operational Overview provides an intuitive dashboard summarizing all OT Assets, Alerts, and Risk.

Full-spectrum coverage across endpoints, sensors and protocols

The release also extends visibility into areas that have traditionally been blind spots. Engineering workstations, Human-Machine Interfaces (HMIs), contractor laptops and field devices are often the entry points for attackers, yet the hardest to monitor.

Darktrace introduces Network Endpoint eXtended Telemetry (NEXT) for OT, a lightweight collector built for segmented and resource-constrained environments. NEXT for OT uses Endpoint sensors to capture localized network telemetry and, now, process-level telemetry, placing it in context alongside other network and asset data to:

  1. Identify vulnerabilities and OS data, which is leveraged by OT Risk Management for risk scoring and patching prioritization, removing the need for third-party CVE collection.
  2. Surface novel threats using Self-Learning AI that standalone Endpoint Detection and Response (EDR) would miss.
  3. Extend Cyber AI Analyst investigations through to the endpoint root cause.

NEXT is part of our existing cSensor endpoint agent, can be deployed standalone or alongside existing EDR tools, and allows capabilities to be enabled or disabled depending on factors such as security or OT team objectives and resource utilization.

Figure 2: Darktrace / OT delivers CVE patch priority insights by combining threat intelligence with extended network and endpoint telemetry

The family of Darktrace Endpoint sensors also receives a boost in deployment flexibility, with on-prem server-based setups as well as a Windows driver tailored for zero-trust and high-security environments.

Protocol coverage has been extended where it matters most. Darktrace now performs protocol analysis of a wider range of GE and Mitsubishi protocols, giving operators real-time visibility into commands and state changes on Programmable Logic Controllers (PLCs), robots, and controllers. Backed by Self-Learning AI, this inspection does more than parse traffic; it understands what normal looks like and flags deviations that signal risk.

Integrated risk and governance workflows

Security data is only valuable when it drives action. Darktrace / OT delivers risk insights that go beyond patching, helping teams take meaningful steps even when remediation isn't possible. Risk is assessed not just by CVE presence, but by how network segmentation, firewall policies, and attack path logic neutralize or contain real-world exposure. This approach empowers defenders to deprioritize low-impact vulnerabilities and focus effort where risk truly exists. Building on the foundation introduced in release 6.3, such as KEV enrichment, endpoint OS data, and exploit mapping, this release introduces new integrations that bring Darktrace / OT intelligence directly into governance workflows.

  • Fortinet FortiGate firewall ingestion feeds segmentation rules into attack path modeling, revealing real exposure when policies fail and feeding into patching prioritization based on a policy-to-CVE exposure assessment.

  • ServiceNow Configuration Management Database (CMDB) sync ensures asset intelligence stays current across governance platforms, eliminating manual inventory work.

Risk modeling has also been made more operationally relevant. Scores are now contextualized by exploitability, asset criticality, firewall policy, and segmentation posture. Patch recommendations are modeled in terms of safety, uptime and compliance rather than just Common Vulnerability Scoring System (CVSS) numbers. And importantly, risk is prioritized across the Purdue Model, giving defenders visibility into whether vulnerabilities remain isolated to IT or extend into OT-critical layers.
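
To illustrate what “contextualized” can mean in practice, the hypothetical sketch below scales a raw CVSS score by exploitability, asset criticality, and reachability under current segmentation. Every field name and weight is invented for demonstration and does not represent Darktrace’s actual risk model.

```python
# Hypothetical weighting sketch (not Darktrace's model): contextualize a raw
# CVSS score with exploitability, asset criticality, and segmentation exposure.
from dataclasses import dataclass

@dataclass
class Finding:
    cvss: float               # base severity, 0-10
    known_exploited: bool     # e.g. present in the CISA KEV catalog
    asset_criticality: float  # 0-1, e.g. safety-critical PLC vs. spare HMI
    reachable: bool           # does segmentation/firewall policy expose the asset?

def contextual_risk(f: Finding) -> float:
    score = f.cvss / 10.0
    score *= 1.5 if f.known_exploited else 1.0
    score *= 0.5 + f.asset_criticality    # criticality scales the impact
    score *= 1.0 if f.reachable else 0.2  # contained exposure is deprioritized
    return round(min(score, 1.0) * 100, 1)

# A high-CVSS flaw behind strict segmentation can rank below a moderate one on
# an exposed, safety-critical controller.
print(contextual_risk(Finding(cvss=9.8, known_exploited=False, asset_criticality=0.4, reachable=False)))
print(contextual_risk(Finding(cvss=7.5, known_exploited=True, asset_criticality=0.9, reachable=True)))
```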

Figure 3: Attack Path Modeling based on NetFlow and network topology reveals high risk points of IT/OT convergence.

The real-world impact for defenders

In today’s environments, attackers move fluidly between IT and OT. Without unified visibility and shared context, incidents cascade faster than teams can respond.

With this release, Darktrace / OT changes that reality. The Operational Overview gives Engineers a dashboard they can use daily, tailored to their workflows. SOC analysts can seamlessly investigate telemetry across endpoints, sensors and protocols that were once blind spots. Operators gain transparency into PLCs and controllers. Governance teams benefit from automated integrations with platforms like Fortinet and ServiceNow. And all stakeholders work from risk models that reflect what truly matters: safety, uptime and compliance.

This release is not about creating more alerts. It is about providing more clarity. By unifying context across IT and OT, Darktrace / OT enables defenders to see more, understand more and act faster.

Because in environments where safety and uptime are non-negotiable, clarity is what matters most.

Join us for our live event, where we will discuss these product innovations in greater detail.

About the author
Pallavi Singh
Product Marketing Manager, OT Security & Compliance