Blog / Email / July 18, 2023

Understanding Email Security & the Psychology of Trust

We explore how psychological research into the nature of trust relates to our relationship with technology - and what that means for AI solutions.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Hanah Darley
Director of Threat Research
Photo showing woman logging into her laptop with username and password

When security teams discuss the possibility of phishing attacks targeting their organization, often the first reaction is to assume it is inevitable because of the users. Users are typically referenced in cyber security conversations as organizations’ greatest weaknesses, cited as the causes of many grave cyber-attacks because they click links, open attachments, or allow multi-factor authentication bypass without verifying the purpose.

While for many, the weakness of the user may feel like a fact rather than a theory, there is significant evidence to suggest that users are psychologically incapable of protecting themselves from exploitation by phishing attacks, with or without regular cyber awareness training. The psychology of trust and the nature of human reliance on technology make preparing users for the exploitation of that trust very difficult – if not impossible.

This Darktrace long read will highlight principles of psychological and sociological research regarding the nature of trust, the elements of trust that relate to technology, and how the human brain is wired to rely on implicit trust. These principles all point to the same outcome: humans cannot be relied upon to identify phishing. Email security driven by machine augmentation, such as AI anomaly detection, is the clearest solution to that challenge.

What is the psychology of trust?

Psychological and sociological theories on trust largely centre around the importance of dependence and a two-party system: the trustor and the trustee. Most research has studied the impacts of trust decisions on interpersonal relationships, and the characteristics which make those relationships more or less likely to succeed. In behavioural terms, the elements most frequently referenced in trust decisions are emotional characteristics such as benevolence, integrity, competence, and predictability.1

Most of the behavioural evaluations of trust decisions survey why someone chooses to trust another person, how they made that decision, and how quickly they arrived at their choice. However, these micro-choices about trust require the context that trust is essential to human survival. Trust decisions are rooted in many of the same survival instincts which require the brain to categorize information and determine possible dangers. More broadly, successful trust relationships are essential in maintaining the fabric of human society, critical to every element of human life.

Trust can be compared to dark matter (Rotenberg, 2018): the extensive but difficult-to-observe material that binds galaxies together. In the same way, trust is an integral but often silent component of human life, connecting people and enabling social functioning.2

Defining implicit and routine trust

As briefly mentioned earlier, dependence is an essential element of the trusting relationship. Being able to build a routine of trust, based on the maintenance rather than establishment of trust, becomes implicit within everyday life. For example, speaking to a friend about personal issues and life developments is often a subconscious reaction to the events occurring, rather than an explicit choice to trust said friend each time one has new experiences.

Active and passive levels of cognition are important to recognize in decision-making, including trust choices. Decision-making is often an active cognitive process requiring considerable resources from the brain. However, many decisions occur passively, especially if they are not new choices, e.g. habits or routines. The brain’s focus turns to immediate tasks while relegating habitual choices to subconscious thought processes – passive cognition. Passive cognition leaves the brain open to impacts from inattentional blindness, wherein the individual may be abstractly aware of a choice, but it is not the focus of their thought processes or actively acknowledged as a decision. These levels of cognition are mostly referenced as “attention” within the brain’s cognition and processing.3

This idea is essentially a concept of implicit trust, meaning trust which is occurring as background thought processes rather than active decision-making. This implicit trust extends to multiple areas of human life, including interpersonal relationships, but also habitual choice and lifestyle. When combined with the dependence on people and services, this implicit trust creates a haze of cognition where trust is implied and assumed, rather than actively chosen across a myriad of scenarios.

Trust and technology

As researchers at the University of Cambridge highlight in their research into trust and technology, ‘In a fundamental sense, all technology depends on trust.’4 The same implicit trust systems that allow us to navigate social interactions by subconsciously choosing to trust also shape our interactions with technology. The implied trust in technology and services is perhaps most easily explained by a metaphor.

Most people have a favourite brand of soda. People will routinely purchase that soda and drink it without testing it for chemicals or bacteria and without reading reviews to ensure the companies that produce it have not changed their quality standards. This is a helpful, representative example of routine trust, wherein the trust choice is implicit through habitual action and does not mean the person is actively thinking about the ramifications of continuing to use a product and trust it.

The principle of dependence is especially important in discussions of trust and technology, because the modern human is entirely reliant on technology and so has no way to avoid trusting it.5 This is especially apparent in workplace scenarios, where employees are given a mandatory set of technologies, from programs to devices and services, which they must interact with on a daily basis. Over time, the same implicit trust that would form between two people forms between the user and the technology. The key difference between interpersonal trust and technological trust is that deception is often much more difficult to identify in the latter.

The implicit trust in workplace technology

To provide a bit of workplace-specific context, organizations rely on technology providers for the operation (and often the security) of their devices. The organizations also rely on the employees (users) to use those technologies within the accepted policies and operational guidelines. The employees rely on the organization to determine which products and services are safe or unsafe.

Within this context, implicit trust is occurring at every layer of the organization and its technological holdings, yet the trust choice is often made only annually by a small security team rather than continually evaluated. Systems and programs remain in place for years and are used because “that’s the way it’s always been done.” In that context, the exploitation of that trust by threat actors impersonating or compromising those technologies or services is extremely difficult for a human to identify.

For example, many organizations use email communications to promote software updates for employees. Typically, these emails prompt employees to update to new versions directly from the vendors or from public marketplaces, such as the App Store on Mac or the Microsoft Store on Windows. If that kind of email were impersonated – spoofing an update and including a malicious link or attachment – there would be no reason for the employee to question it, given the implicit trust reinforced through habitual use of that service and program.

Inattentional blindness: How the brain ignores change

Users are psychologically predisposed to trust routinely used technologies and services, with most of those trust choices continuing subconsciously. Changes to these technologies would often be subject to inattentional blindness, a psychological phenomenon wherein the brain overwrites sensory information with what it expects to see rather than what is actually perceived.

A great example of inattentional blindness6 is the following experiment, which asks individuals to count the number of times a ball is passed between multiple people. While that is occurring, something else is going on in the background which, statistically, those tested will not see. The shocking part of this experiment comes after, when the researcher reveals that the event occurring in the background, not seen by participants, was a person in a gorilla suit moving back and forth between the group. This highlights how significant details can be overlooked by the brain and “overwritten” with other sensory information. When applied to technology, inattentional blindness and implicit trust make spotting changes in behaviour, or indicators that a trusted technology or service has been compromised, nearly impossible for most humans.

With all this in mind, how can you prepare users to correctly anticipate or identify a violation of that trust when their brains subconsciously make trust decisions and unintentionally ignore cues to suggest a change in behaviour? The short answer is, it’s difficult, if not impossible.

How threats exploit our implicit trust in technology

Most cyber threats are built around the idea of exploiting the implicit trust humans place in technology. Whether it’s techniques like “living off the land”, wherein programs normally associated with expected activities are leveraged to execute an attack, or through more overt psychological manipulation like phishing campaigns or scams, many cyber threats are predicated on the exploitation of human trust, rather than simply avoiding technological safeguards and building backdoors into programs.

In the case of phishing, it is easy to identify the attempts to leverage the trust of users in technology and services. A prime example of this is spoofing, one of the most common tactics observed by Darktrace/Email. Spoofing mimics a trusted user or service and can be accomplished through a variety of mechanisms, from the creation of a fake domain meant to mirror a trusted link to the creation of an email account which appears to belong to a Human Resources, Internal Technology or Security service.
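As an illustration of how lookalike domains exploit this trust, the sketch below flags domains that closely resemble, but do not exactly match, a list of trusted ones. The trusted-domain list and the 0.85 similarity threshold are invented for this example; this is a minimal demonstration of the idea, not how any particular product works.

```python
from difflib import SequenceMatcher

# Hypothetical list of domains the organization habitually trusts.
TRUSTED_DOMAINS = ["microsoft.com", "darktrace.com", "github.com"]

def lookalike_score(domain: str) -> tuple[str, float]:
    """Return the closest trusted domain and its similarity ratio (0..1)."""
    best = max(TRUSTED_DOMAINS,
               key=lambda t: SequenceMatcher(None, domain, t).ratio())
    return best, SequenceMatcher(None, domain, best).ratio()

def is_suspicious(domain: str, threshold: float = 0.85) -> bool:
    """Flag domains that closely resemble, but do not equal, a trusted one."""
    best, score = lookalike_score(domain)
    return domain != best and score >= threshold

# "rn" visually mimics "m", a classic spoofing trick.
print(is_suspicious("rnicrosoft.com"))  # True
print(is_suspicious("microsoft.com"))   # False: exact match, nothing to flag
```

Real lookalike detection also considers homoglyphs (Unicode characters that render identically) and newly registered domains, but the edit-distance idea above is the core of it.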

In the case of a falsified internal service, often dubbed a “Fake Support Spoof”, the user is exploited by following instructions from an accepted organizational authority figure and service provider, whose actions should normally be adhered to. These cases are often difficult to spot when studying the sender’s address or text of the email alone, but are made even more difficult to detect if an account from one of those services is compromised and the sender’s address is legitimate and expected for correspondence. Especially given the context of implicit trust, detecting deception in these cases would be extremely difficult.

How email security solutions can solve the problem of implicit trust

How can an organization prepare for this exploitation? How can it mitigate threats which are designed to exploit implicit trust? The answer is by using email security solutions that leverage behavioural analysis via anomaly detection, rather than traditional email gateways.

Expecting humans to identify the exploitation of their own trust is a high-risk low-reward endeavour, especially when it takes different forms, affects different users or portions of the organization differently, and doesn’t always have obvious red flags to identify it as suspicious. Cue email security using anomaly detection as the key answer to this evolving problem.

Anomaly detection enabled by machine learning and artificial intelligence (AI) removes the inattentional blindness that plagues human users and security teams, enabling the identification of departures from the norm, even those designed to mimic expected activity. Using anomaly detection mitigates multiple human cognitive biases which might prevent teams from identifying evolving threats. Of course, anomaly detection means that security teams may be alerted to benign anomalous activity, but it also means that no threat, no matter how novel or cleverly packaged, goes unidentified and unraised to the human security team.

Utilizing machine learning, especially unsupervised machine learning, mimics the benefits of human decision making and enables the identification of patterns and categorization of information without the framing and biases which allow trust to be leveraged and exploited.

For example, say a cleverly written email is sent from an address which appears to be a Microsoft affiliate, suggesting the user needs to patch their software due to the discovery of a new vulnerability. The sender’s address appears legitimate, and news stories are circulating on major media outlets that a new Microsoft vulnerability is causing organizations a lot of problems. The link, if clicked, forwards the user to a login page to verify their Microsoft credentials before downloading the new version of the software. After logging in, the program is available for download and only requires a few minutes to install. Whether this email was created by a service like ChatGPT (generative AI) or written by a person, if acted upon it would give the threat actor(s) the user’s credentials, as well as activate malware on the device, and possibly the broader network, if the software is downloaded.

If we rely on users to identify this as unusual, there are many evidence points reinforcing their implicit trust in Microsoft services that make them want to comply with the email rather than question it. Comparatively, anomaly detection-driven email security would flag the unusual source: the email would likely not come from a Microsoft-owned IP address, and the sender would be new for an organization that does not normally receive mail from them. The language might indicate solicitation, an attempt to entice the user to act, and the link could be flagged for containing a hidden redirect or tailored information the user cannot see, whether hidden beneath text like “Click Here” or obscured by link shortening. All of this information is present and discoverable in the phishing email, but often invisible to human users due to trust decisions made months or even years earlier for known products and services.
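The feature reasoning just described can be sketched as a toy scoring function. Every feature name and weight below is invented for illustration; a real anomaly detection system, such as the one described in this article, learns a per-organization baseline rather than applying fixed rules.

```python
# Illustrative only: a toy feature-based anomaly score for an inbound email.
# Weights are arbitrary; a learned baseline would replace all of this.

def anomaly_score(email: dict, known_senders: set) -> float:
    score = 0.0
    if email["sender_domain"] not in known_senders:
        score += 0.4                  # never-before-seen correspondent
    if email.get("link_text") and email.get("link_target") \
            and email["link_text"] != email["link_target"]:
        score += 0.3                  # displayed link hides the real target
    if email.get("solicits_action"):
        score += 0.3                  # urgent call to act (e.g. "update now")
    return min(score, 1.0)

email = {
    "sender_domain": "updates-rnicrosoft.com",   # hypothetical spoofed domain
    "link_text": "microsoft.com/patch",
    "link_target": "bit.ly/3xYz",
    "solicits_action": True,
}
print(anomaly_score(email, {"microsoft.com", "partner.example"}))  # 1.0
```

Each of these signals is exactly the kind of detail a human, anchored by implicit trust, tends to overlook.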

AI-driven Email Security: The Way Forward

Email security solutions employing anomaly detection are critical weapons for security teams in the fight to stay ahead of evolving threats and varied kill chains, which are growing more complex year on year. The intertwining nature of technology, coupled with massive social reliance on technology, guarantees that implicit trust will be exploited more and more, giving threat actors a variety of avenues to penetrate an organization. The changing nature of phishing and social engineering made possible by generative AI is just a drop in the ocean of the possible threats organizations face, and most will involve a trusted product or service being leveraged as an access point or attack vector. Anomaly detection and AI-driven email security are the most practical solution for security teams aiming to prevent, detect, and mitigate user and technology targeting using the exploitation of trust.

References

1https://www.kellogg.northwestern.edu/trust-project/videos/waytz-ep-1.aspx

2Rotenberg, K.J. (2018). The Psychology of Trust. Routledge.

3https://www.cognifit.com/gb/attention

4https://www.trusttech.cam.ac.uk/perspectives/technology-humanity-society-democracy/what-trust-technology-conceptual-bases-common

5Tyler, T.R. and Kramer, R.M. (2001). Trust in organizations : frontiers of theory and research. Thousand Oaks U.A.: Sage Publ, pp.39–49.

6https://link.springer.com/article/10.1007/s00426-006-0072-4



Blog / Network / October 30, 2025

WSUS Exploited: Darktrace’s Analysis of Post-Exploitation Activities Related to CVE-2025-59287


Introduction

On October 14, 2025, Microsoft disclosed a new critical vulnerability affecting the Windows Server Update Service (WSUS), CVE-2025-59287.  Exploitation of the vulnerability could allow an unauthenticated attacker to remotely execute code [1][6].

WSUS allows for centralized distribution of Microsoft product updates [3]; a server running WSUS is likely to have significant privileges within a network, making it a valuable target for threat actors. While WSUS servers are not necessarily expected to be open to the internet, open-source intelligence (OSINT) has reported thousands of publicly exposed instances that may be vulnerable to exploitation [2].

Microsoft’s initial ‘Patch Tuesday’ update for this vulnerability did not fully mitigate the risk, so an out-of-band update followed on October 23 [4][5]. Widespread exploitation of this vulnerability started to be observed shortly after the security update [6], prompting CISA to add CVE-2025-59287 to its Known Exploited Vulnerabilities (KEV) Catalog on October 24 [7].

Attack Overview

The Darktrace Threat Research team has recently identified multiple potential cases of CVE-2025-59287 exploitation, two of which are detailed here. While the likely initial access method is consistent across the cases, the follow-up activities differed, demonstrating the varied ways in which such a CVE can be exploited to fulfil each attacker’s specific goals.

The first signs of suspicious activity across both customers were detected by Darktrace on October 24, the same day this vulnerability was added to CISA’s KEV. Both cases discussed here involve customers based in the United States.

Case Study 1

The first case, involving a customer in the Information and Communication sector, began with an internet-facing device making an outbound connection to the hostname webhook[.]site. Observed network traffic indicates the device was a WSUS server.

OSINT has reported abuse of the webhook[.]site service in exploitation of CVE-2025-59287, with enumerated network information, gathered by running a script on the compromised device, exfiltrated using this service [8].

In this case, the majority of connectivity seen to webhook[.]site involved a PowerShell user agent; however, cURL user agents were also seen with some connections taking the form of HTTP POSTs. This connectivity appears to align closely with OSINT reports of CVE-2025-59287 post-exploitation behaviour [8][9].
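The “new user agent from this device” style of anomaly alerted on here can be illustrated with a minimal sketch. The device name and user agent string below are hypothetical, and real detection would also weigh rarity across the whole network.

```python
from collections import defaultdict

# Track which user agents each device has previously been seen using.
seen: dict = defaultdict(set)

def new_user_agent(device: str, user_agent: str) -> bool:
    """Return True the first time a device is observed using a user agent."""
    is_new = user_agent not in seen[device]
    seen[device].add(user_agent)
    return is_new

ua = "Mozilla/5.0 (Windows NT 10.0; WOW64) WindowsPowerShell/5.1"  # hypothetical
print(new_user_agent("wsus-server-01", ua))  # True: first sighting on this device
print(new_user_agent("wsus-server-01", ua))  # False: now part of the baseline
```

A WSUS server suddenly emitting PowerShell or cURL user agents it has never used before is precisely the kind of first-sighting event that warrants investigation.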

Connections to webhook[.]site continued until October 26. A single URI was seen consistently until October 25, after which the connections used a second URI with a similar format.

Later on October 26, an escalation in command-and-control (C2) communication appears to have occurred, with the device starting to make repeated connections to two rare workers[.]dev subdomains (royal-boat-bf05.qgtxtebl.workers[.]dev & chat.hcqhajfv.workers[.]dev), consistent with C2 beaconing. While workers[.]dev is associated with the legitimate Cloudflare Workers service, the service is commonly abused by malicious actors for C2 infrastructure. The anomalous nature of the connections to both webhook[.]site and workers[.]dev led to Darktrace generating multiple alerts including high-fidelity Enhanced Monitoring alerts and alerts for Darktrace’s Autonomous Response.
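Beaconing of the sort described, repeated connections at suspiciously regular intervals, can be illustrated by checking how consistent the gaps between connections are. The jitter threshold and minimum event count below are arbitrary choices for this sketch, not Darktrace's detection logic.

```python
from statistics import mean, pstdev

def looks_like_beaconing(timestamps: list,
                         max_jitter: float = 0.1,
                         min_events: int = 5) -> bool:
    """Flag connection streams whose inter-arrival times are suspiciously
    regular (low coefficient of variation), a classic beaconing trait."""
    if len(timestamps) < min_events:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    cv = pstdev(gaps) / mean(gaps)   # coefficient of variation of the gaps
    return cv < max_jitter

# A device phoning home every ~60 s is far more regular than normal browsing.
beacon = [0, 60.2, 119.8, 180.1, 240.0, 299.9]
browsing = [0, 5, 90, 95, 400, 402]
print(looks_like_beaconing(beacon))    # True
print(looks_like_beaconing(browsing))  # False
```

Real C2 implants often add deliberate jitter to defeat exactly this check, which is why behavioural detection also weighs endpoint rarity, as in the workers[.]dev connections above.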

Infrastructure insight

Hosted on royal-boat-bf05.qgtxtebl.workers[.]dev is a Microsoft Installer file (MSI) named v3.msi.

Figure 1: Screenshot of v3.msi content.

Contained in the MSI file are two Cabinet files named “Sample.cab” and “part2.cab”. Extracting the contents of the cab files reveals a file named “Config” and a binary named “ServiceEXE”. ServiceEXE is the legitimate DFIR tool Velociraptor, and “Config” contains the configuration details, which include chat.hcqhajfv.workers[.]dev as the server_url, suggesting that Velociraptor is being used as a tunnel to the C2. Additionally, the configuration points to version 0.73.4, a version of Velociraptor that is vulnerable to CVE-2025-6264, a privilege escalation vulnerability.

Figure 2: Screenshot of Config file.

Velociraptor, a legitimate security tool maintained by Rapid7, has recently been used in malicious campaigns. A vulnerable version of the tool has been used by threat actors for command execution and endpoint takeover, while other campaigns have used Velociraptor to create a tunnel to the C2, similar to what was observed in this case [10].

The workers[.]dev communication continued into the early hours of October 27. The most recent suspicious behaviour observed on the device was an outbound connection to an endpoint new to the network – 185.69.24[.]18/singapure – potentially indicating payload retrieval.

The payload retrieved from “/singapure” is a UPX-packed Windows binary. Unpacking it reveals an open-source Golang stealer named “Skuld Stealer”, capable of stealing crypto wallets, files, system information, browser data and tokens. It also contains anti-debugging and anti-VM logic, along with a UAC bypass [11].
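As a quick triage illustration: the stock UPX packer leaves a literal "UPX!" marker in packed binaries, so a simple byte scan gives a first hint. Attackers can strip the marker, so absence proves nothing, and the synthetic sample file below is invented for the demo.

```python
import os
import tempfile

def probably_upx_packed(path: str) -> bool:
    """Heuristic: the stock UPX packer embeds the literal marker b'UPX!'.
    A hit suggests packing; a miss is inconclusive (marker may be stripped)."""
    with open(path, "rb") as f:
        return b"UPX!" in f.read()

# Demo with a synthetic file standing in for a downloaded sample.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"MZ....UPX!....")
    sample = f.name
print(probably_upx_packed(sample))  # True
os.remove(sample)
```

If the marker is intact, `upx -d` can typically restore the original binary for further analysis, as was presumably done here before identifying Skuld Stealer.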

Figure 3: A timeline outlining suspicious activity on the device alerted by Darktrace.

Case Study 2

The second case involved a customer within the Education sector. The affected device was also internet-facing, with network traffic indicating it was a WSUS server.

Suspicious activity in this case once again began on October 24, notably only a few seconds after initial signs of compromise were observed in the first case. Initial anomalous behaviour also closely aligned, with outbound PowerShell connections to webhook[.]site, and then later connections, including HTTP POSTs, to the same endpoint with a cURL user agent.

While Darktrace did not observe any anomalous network activity on the device after October 24, the customer’s security integration resulted in an additional alert on October 27 for malicious activity, suggesting that the compromise may have continued locally.

By leveraging Darktrace’s security integrations, customers can investigate activity across different sources in a seamless manner, gaining additional insight and context to an attack.

Figure 4: A timeline outlining suspicious activity on the device alerted by Darktrace.

Conclusion

Exploitation of a CVE can lead to a wide range of outcomes. In some cases, it may be limited to just a single device with a focused objective, such as exfiltration of sensitive data. In others, it could lead to lateral movement and a full network compromise, including ransomware deployment. As the threat of internet-facing exploitation continues to grow, security teams must be prepared to defend against such a possibility, regardless of the attack type or scale.

By focussing on the detection of anomalous behaviour rather than relying on signatures associated with a specific CVE exploit, Darktrace is able to alert on post-exploitation activity regardless of the kind of behaviour seen. In addition, leveraging security integrations provides further context on activities beyond the visibility of Darktrace / NETWORK™, enabling defenders to investigate and respond to attacks more effectively.

With adversaries weaponizing even trusted incident response tools, maintaining broad visibility and rapid response capabilities becomes critical to mitigating post-exploitation risk.

Credit to Emma Foulger (Global Threat Research Operations Lead), Tara Gould (Threat Research Lead), Eugene Chua (Principal Cyber Analyst & Analyst Team Lead), and Nathaniel Jones (VP, Security & AI Strategy, Field CISO).

Edited by Ryan Traill (Analyst Content Lead)

Appendices

References

1. https://nvd.nist.gov/vuln/detail/CVE-2025-59287

2. https://www.bleepingcomputer.com/news/security/hackers-now-exploiting-critical-windows-server-wsus-flaw-in-attacks/

3. https://learn.microsoft.com/en-us/windows-server/administration/windows-server-update-services/get-started/windows-server-update-services-wsus

4. https://www.cisa.gov/news-events/alerts/2025/10/24/microsoft-releases-out-band-security-update-mitigate-windows-server-update-service-vulnerability-cve

5. https://msrc.microsoft.com/update-guide/vulnerability/CVE-2025-59287

6. https://thehackernews.com/2025/10/microsoft-issues-emergency-patch-for.html

7. https://www.cisa.gov/known-exploited-vulnerabilities-catalog

8. https://www.huntress.com/blog/exploitation-of-windows-server-update-services-remote-code-execution-vulnerability

9. https://unit42.paloaltonetworks.com/microsoft-cve-2025-59287/

10. https://blog.talosintelligence.com/velociraptor-leveraged-in-ransomware-attacks/

11. https://github.com/hackirby/skuld

Darktrace Model Detections

• Device / New PowerShell User Agent

• Anomalous Connection / Powershell to Rare External

• Compromise / Possible Tunnelling to Bin Services

• Compromise / High Priority Tunnelling to Bin Services

• Anomalous Server Activity / New User Agent from Internet Facing System

• Device / New User Agent

• Device / Internet Facing Device with High Priority Alert

• Anomalous Connection / Multiple HTTP POSTs to Rare Hostname

• Anomalous Server Activity / Rare External from Server

• Compromise / Agent Beacon (Long Period)

• Device / Large Number of Model Alerts

• Compromise / Agent Beacon (Medium Period)

• Device / Long Agent Connection to New Endpoint

• Compromise / Slow Beaconing Activity To External Rare

• Security Integration / Low Severity Integration Detection

• Antigena / Network / Significant Anomaly / Antigena Alerts Over Time Block

• Antigena / Network / Significant Anomaly / Antigena Enhanced Monitoring from Server Block

• Antigena / Network / External Threat / Antigena Suspicious Activity Block

• Antigena / Network / Significant Anomaly / Antigena Significant Server Anomaly Block

List of Indicators of Compromise (IoCs)

IoC - Type - Description + Confidence

• royal-boat-bf05.qgtxtebl.workers[.]dev – Hostname – Likely C2 infrastructure

• royal-boat-bf05.qgtxtebl.workers[.]dev/v3.msi – URI – Likely payload

• chat.hcqhajfv.workers[.]dev – Hostname – Possible C2 infrastructure

• 185.69.24[.]18 – IP address – Possible C2 infrastructure

• 185.69.24[.]18/bin.msi – URI – Likely payload

• 185.69.24[.]18/singapure – URI – Likely payload

The content provided in this blog is published by Darktrace for general informational purposes only and reflects our understanding of cybersecurity topics, trends, incidents, and developments at the time of publication. While we strive to ensure accuracy and relevance, the information is provided “as is” without any representations or warranties, express or implied. Darktrace makes no guarantees regarding the completeness, accuracy, reliability, or timeliness of any information presented and expressly disclaims all warranties.

Nothing in this blog constitutes legal, technical, or professional advice, and readers should consult qualified professionals before acting on any information contained herein. Any references to third-party organizations, technologies, threat actors, or incidents are for informational purposes only and do not imply affiliation, endorsement, or recommendation.

Darktrace, its affiliates, employees, or agents shall not be held liable for any loss, damage, or harm arising from the use of or reliance on the information in this blog.

The cybersecurity landscape evolves rapidly, and blog content may become outdated or superseded. We reserve the right to update, modify, or remove any content

About the author
Emma Foulger
Global Threat Research Operations Lead

Blog / October 24, 2025

Patch Smarter, Not Harder: Now Empowering Security Teams with Business-Aligned Threat Context Agents


Most risk management programs remain anchored in enumeration: scanning every asset, cataloging every CVE, and drowning in lists that rarely translate into action. Despite expensive scanners, annual pen tests, and countless spreadsheets, prioritization still falters at two critical points.

  • Context gaps at the device level: it is hard to know which vulnerabilities actually matter to your business, given a device’s existing privileges, the software it runs, and the controls that already reduce risk.
  • Business translation: even when the technical priority is clear, justifying effort and spend in financial terms, especially across many affected devices, can delay action – particularly if it means pausing work in areas of the business that directly generate revenue.

The result is familiar: alert fatigue, “too many highs,” and remediation that trails behind the threat landscape. Darktrace / Proactive Exposure Management addresses this by pairing precise, endpoint‑level context with clear, financial insight so teams can prioritize confidently and mobilize faster.

A powerful combination: No-Telemetry Endpoint Agent + Cost-Benefit Analysis

Darktrace / Proactive Exposure Management now combines technical precision with business clarity in a single workflow. With this release, it delivers a more holistic approach, uniting technical context and financial insight to drive proactive risk reduction. The result is a single solution that helps security teams stay ahead of threats while reducing noise, delays, and complexity.

  • No-Telemetry Endpoint: collects installed software data and maps it to known CVEs, without collecting network traffic, providing device-level vulnerability context and operational relevance.
  • Cost-Benefit Analysis for Patching: calculates ROI by comparing patching effort with potential exploit impact, factoring in headcount time, device count, patch difficulty, and automation availability.

Introducing the No-Telemetry Endpoint Agent

Darktrace’s new endpoint agent inventories installed software on devices and maps it to known CVEs without collecting network data, so you can prioritize using real device context and available security controls.

By grounding vulnerability findings in the reality of each endpoint, including its software footprint and existing controls, teams can cut through generic severity scores and focus on what matters most. The agent is ideal for remote devices, BYOD-adjacent fleets, or environments standardizing on Darktrace, and is available without additional licensing cost.
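To illustrate the inventory-to-CVE mapping idea in the abstract (this is not Darktrace’s implementation), a toy version might look like the following; the package names, versions, and CVE identifiers are all invented:

```python
# Invented advisory data standing in for a maintained vulnerability feed.
KNOWN_CVES = {
    ("examplelib", "1.2.0"): ["CVE-2025-0001"],
    ("otherapp", "4.0.1"): ["CVE-2025-0002", "CVE-2025-0003"],
}

def map_inventory_to_cves(inventory: list) -> dict:
    """Match (name, version) pairs from a software inventory against
    known-vulnerable versions, returning findings per package."""
    findings = {}
    for name, version in inventory:
        cves = KNOWN_CVES.get((name, version))
        if cves:
            findings[name] = cves
    return findings

print(map_inventory_to_cves([("examplelib", "1.2.0"), ("safeapp", "2.0")]))
# {'examplelib': ['CVE-2025-0001']}
```

Production matching is messier than exact-version lookup (version ranges, backported fixes, CPE naming), which is why grounding findings in real endpoint context matters.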

Figure 1: Darktrace / Proactive Exposure Management user interface

Built-In Cost-Benefit Analysis for Patching

Security teams often know what needs fixing, but stakeholders need to understand why now. Darktrace’s new cost-benefit calculator compares the total cost to patch against the potential cost of an exploit, producing an ROI for the patch action that expresses security work in clear financial terms.

Inputs like engineer time, number of affected devices, patch difficulty, and automation availability are factored in automatically. The result is a business-aligned justification for every patching decision, helping teams secure buy-in, accelerate approvals, and move work forward with one-click ticketing, CSV export, or risk acceptance.
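As a rough sketch of the cost-benefit idea: assume a simple model where automation cuts per-device effort and the avoided loss is exploit cost times likelihood. All figures, the formula, and the 0.2 automation factor are invented for illustration and are not Darktrace’s actual calculation.

```python
def patch_roi(engineer_hours: float, hourly_rate: float, device_count: int,
              difficulty_factor: float, automated: bool,
              expected_exploit_cost: float, exploit_likelihood: float) -> float:
    """ROI = (avoided expected loss - patching cost) / patching cost."""
    per_device_hours = engineer_hours * difficulty_factor
    if automated:
        per_device_hours *= 0.2          # assumed saving from automation
    cost_to_patch = per_device_hours * hourly_rate * device_count
    avoided_loss = expected_exploit_cost * exploit_likelihood
    return (avoided_loss - cost_to_patch) / cost_to_patch

# Hypothetical scenario: 200 devices, automated patch, 5% exploit likelihood.
roi = patch_roi(engineer_hours=0.5, hourly_rate=80, device_count=200,
                difficulty_factor=1.5, automated=True,
                expected_exploit_cost=2_000_000, exploit_likelihood=0.05)
print(f"{roi:.1f}x return on the patching spend")  # 40.7x
```

Even a crude model like this turns “patch now” from a technical demand into a financial argument, which is the point the feature addresses.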

Figure 2: Darktrace / Proactive Exposure Management Cost Benefit Analysis

A Smarter, Faster Approach to Exposure Management

Together, the no-telemetry endpoint and Cost-Benefit Analysis advance the CTEM motion from theory to practice. You gain higher-fidelity discovery and validation signals at the device level, paired with business-ready justification that accelerates mobilization. The result is fewer distractions, clearer priorities, and faster, measurable risk reduction – not from chasing every alert, but from focusing on what moves the needle now.

  • Smarter Prioritization: device-level context trims noise and spotlights the exposures that matter for your business.
  • Faster Decisions: built-in ROI turns technical urgency into executive clarity, speeding approvals and action.
  • Practical Execution: privacy-conscious endpoint collection and ticketing/export options fit neatly into existing workflows.
  • Better Outcomes: close the loop faster – discover, prioritize, validate, and mobilize – on the same operating surface.

Committed to innovation

These updates are part of the broader Darktrace release, which also included:

1. Major innovations in cloud security with the launch of the industry’s first fully automated cloud forensics solution, reinforcing Darktrace’s leadership in AI-native security.

2. Darktrace Network Endpoint eXtended Telemetry (NEXT) is revolutionizing NDR with the industry’s first mixed-telemetry agent using Self-Learning AI.

3. Improvements to our OT product, purpose built for industrial infrastructure, Darktrace / OT now brings dedicated OT dashboard, segmentation-aware risk modeling, and expanded visibility into edge assets and automation protocols.

Join our Live Launch Event

When? 

December 9, 2025

What will be covered?

Join our live broadcast to see how Darktrace is eliminating blind spots for detection and response across your complete enterprise with new innovations in Agentic AI across our ActiveAI Security platform. Industry leaders from IDC will join Darktrace customers to discuss challenges in cross-domain security, with a live walkthrough of innovations reshaping the future of Network Detection & Response, Endpoint Detection & Response, Email Security, and SecOps, spanning novel threat detection and autonomous investigations.

About the author
Kelland Goodin
Product Marketing Specialist