Blog / Endpoint / December 12, 2022

ML Integration for Third-Party EDR Alerts

The benefits of combining EDR technologies with Darktrace, and how this integration can enhance your cybersecurity strategy.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Max Heinemeyer
Global Field CISO

This blog demonstrates how we use EDR integration in Darktrace for detection and investigation. We’ll look at four key features, each summarized with an example below:

1) Contextualizing existing Darktrace information
‘There was a Microsoft Defender for Endpoint (MDE) alert 5 minutes after Darktrace saw the device beacon to an unusual destination on the internet. Let me pivot back into the Defender UI.’
2) Cross-data detection engineering
‘Darktrace, create an alert or trigger a response if you see a specific MDE alert and a native Darktrace detection on the same entity over a period of time.’
3) Applying unsupervised machine learning to third-party EDR alerts
‘Darktrace, create an alert or trigger a response if there is a specific MDE alert that is unusual for the entity, given the context.’
4) Using third-party EDR alerts to trigger AI Analyst
‘AI Analyst, this low-fidelity MDE alert flagged something on the endpoint. Please take a deep look at that device at the time of the Defender alert, conduct an investigation on Darktrace data, and share your conclusions about whether there is more to it.’

MDE is used as an example above, but Darktrace’s EDR integration capabilities extend beyond MDE to other EDRs as well, such as SentinelOne and CrowdStrike.

Darktrace brings its Self-Learning AI to your data, no matter where it resides. The data can be anywhere – in email environments, cloud, SaaS, OT, endpoints, or the network, for example. Usually, we want to get as close to the raw data as possible to get the maximum context for our machine learning. 

We will explain how we leverage high-value integrations from our technology partners to bring further context into Darktrace, and also how we apply our Self-Learning AI to third-party data. While a broad range of integrations and capabilities is available, this blog post will primarily look at Microsoft Defender for Endpoint, CrowdStrike, and SentinelOne, with a focus on detection.

The Nuts and Bolts – Setting up the Integration

Darktrace is an open platform – almost everything it does is API-driven. Our system and machine learning are flexible enough to ingest new types of data and combine them with existing information.

The EDR integrations mentioned here are part of our 1-click integrations. All they require is the right level of API access from the EDR solution and the ability for Darktrace to communicate with the EDR’s API. This type of integration can be set up within minutes and currently doesn’t require additional Darktrace licenses.

Figure 1: Set-up of Darktrace Graph Security API integration

As soon as the setup is complete, various additional capabilities become available.
Let’s look at some of the key detection- and investigation-focused capabilities step by step.

Contextualizing Existing Darktrace Information

The most basic, but still highly useful, integration is enriching existing Darktrace information with EDR alerts. For each entity observed, Darktrace shows a chronological history of associated telemetry and machine-learning output in the entity’s event log.

With an EDR integration enabled, we now start to see EDR alerts for the respective entities turn up in the entity’s event log at the correct point in time – with a ton of context and a 1-click pivot back to the native EDR console: 

Figure 2: A pivot from the Darktrace Threat Visualizer to Microsoft Defender

This context is extremely useful to have in a single screen during investigations. Context is king – it reduces time-to-meaning and skill required to understand alerts.
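To make the enrichment step concrete, here is a minimal, hypothetical sketch of normalizing an ingested EDR alert into an entity-timeline record. The input is shaped loosely after the Microsoft Graph Security alerts resource, but the field names and the function itself are illustrative only, not Darktrace’s actual pipeline.

```python
def to_event(alert):
    """Normalize a raw EDR alert dict into a simple entity-timeline record."""
    host_states = alert.get("hostStates") or [{}]
    return {
        "entity": host_states[0].get("netBiosName", "unknown"),
        "time": alert.get("createdDateTime"),
        "source": alert.get("vendorInformation", {}).get("provider", "EDR"),
        "summary": alert.get("title"),
        "severity": alert.get("severity"),
    }

# Illustrative alert payload (not real data)
sample = {
    "title": "Suspicious process launch",
    "severity": "medium",
    "createdDateTime": "2022-12-12T10:05:00Z",
    "vendorInformation": {"provider": "Microsoft Defender for Endpoint"},
    "hostStates": [{"netBiosName": "FINANCE-PC-07"}],
}
event = to_event(sample)
```

Records in this shape can then be sorted into the same chronological view as native telemetry, which is what makes a single-screen investigation possible.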

Cross-Data Detection Engineering

When an EDR integration is activated, Darktrace enables an additional set of detections that leverage the new EDR alerts. This comes out of the box and doesn’t require any further detection engineering. It is worth mentioning though that the new EDR information is being made available in the background for bespoke detection engineering, if advanced users want to leverage these as custom metrics.

The trick here is that the added context provided by the additional EDR alerts allows for more refined detections – primarily to detect malicious activity with higher confidence. A network detection showing us beaconing over an unusual protocol or port combination to a rare destination on the internet is great – but seeing within Darktrace that CrowdStrike detected a potentially hostile file or process three minutes prior to the beaconing detection on the same device will greatly help to prioritize the detections and aid a subsequent investigation.
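The core of such a cross-data rule can be sketched as a time-window join on the entity. This is an illustration of the idea only, not Darktrace’s implementation:

```python
from datetime import datetime, timedelta

def correlate(edr_alerts, network_detections, window=timedelta(minutes=10)):
    """Pair each network detection with EDR alerts that fired on the
    same device within `window` before the detection."""
    hits = []
    for det in network_detections:
        for alert in edr_alerts:
            gap = det["time"] - alert["time"]
            if alert["device"] == det["device"] and timedelta(0) <= gap <= window:
                hits.append((det["device"], alert["name"], det["name"]))
    return hits

# Illustrative data: an EDR file alert three minutes before beaconing
t0 = datetime(2022, 12, 12, 10, 0)
edr = [{"device": "host-a", "name": "Hostile file detected", "time": t0}]
net = [{"device": "host-a", "name": "Beaconing to rare endpoint",
        "time": t0 + timedelta(minutes=3)}]
combined = correlate(edr, net)
```

A hit from this join is exactly the high-confidence combination described above: two independent signals on the same device, close together in time.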

Here is an example of what this looks like in Darktrace:

Figure 3: A combined model breach in the Threat Visualizer

Applying Unsupervised Machine Learning to Third-Party EDR Alerts


Once we start seeing EDR alerts in Darktrace, we can treat them like any other data – by applying unsupervised machine learning to them. This means we can understand how unusual a given EDR detection is for each device in question. This is extremely powerful – it reduces noisy alerts without requiring ongoing EDR alert tuning and opens up a whole world of new detection capabilities.

As an example – let’s imagine a low-level malware alert keeps appearing from the EDR on a specific device. This might be a false positive in the EDR, or simply not of interest to the security team, but they may not have the resources or knowledge to tune their EDR further and get rid of this noisy alert.

While Darktrace keeps adding this as contextual information in the device’s event log, it could, depending on the context of the device, the EDR alert, and the overall environment, stop alerting on this particular EDR malware alert on this specific device if it stops being unusual. Over time, noise is reduced across the environment – but if that particular EDR alert appears on another device, or on the same device in a different context, it might get flagged again, as it now is unusual in the given context.
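The mechanism in this example can be illustrated with a toy frequency-based score. Darktrace’s actual models consider far more context than alert counts, so treat this purely as a sketch of the principle – repeated alerts stop being unusual on a given device, while the same alert elsewhere still stands out:

```python
from collections import Counter

class AlertRarity:
    """Toy per-device rarity score for EDR alert types."""

    def __init__(self):
        self.history = {}  # device -> Counter of alert names seen so far

    def score(self, device, alert_name):
        seen = self.history.setdefault(device, Counter())
        total = sum(seen.values())
        # 1.0 for a never-before-seen alert on this device; decays as the
        # same alert repeats, so chronic noise stops standing out
        rarity = 1.0 if total == 0 else 1.0 - seen[alert_name] / (total + 1)
        seen[alert_name] += 1
        return rarity
```

After the noisy alert repeats many times on one device its score approaches zero, but the first occurrence on a different device still scores 1.0.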

Darktrace then goes a step further, taking those unusual EDR alerts and combining them with unusual activity seen in other Darktrace coverage areas, like the network for example. Combining an unusual EDR alert with an unusual lateral movement attempt, for example, allows it to find these combined, high-precision, cross-data set anomalous events that are highly indicative of an active cyber-attack – without having to pre-define the exact nature of what ‘unusual’ looks like.

Figure 4: Combined EDR & network detection using unsupervised machine learning in Darktrace

Use Third-Party EDR Alerts to Trigger AI Analyst

Everything we discussed so far is great for improving precision in initial detections, adding context, and cutting through alert-noise. We don’t stop there though – we can also now use the third-party EDR alerts to trigger our investigation engine, the AI Analyst.

Cyber AI Analyst replicates and automates typical level 1 and level 2 Security Operations Centre (SOC) workflows, and is triggered by every native Darktrace detection. This is not a SOAR with statically defined playbooks – AI Analyst builds hypotheses, gathers data, evaluates that data, and reports on its findings based on the context of each individual scenario and investigation.

Darktrace can use EDR alerts as starting points for its investigation, with every EDR alert ingested now triggering AI Analyst. This is similar to giving a (low-level) EDR alert to a human analyst and telling them: ‘Go and take a look at information in Darktrace and try to conclude whether there is more to this EDR alert or not.’

AI Analyst subsequently looks at the entity that triggered the EDR alert and investigates all available Darktrace data on that entity, over a period of time, in light of that EDR alert. It does not pivot outside Darktrace for this investigation (e.g. back into the Microsoft console) but looks at all of the context natively available in Darktrace. If it concludes that there is more to this EDR alert – e.g. a bigger incident – it will report on that and clearly flag it. The report can of course be downloaded directly as a PDF to share with other stakeholders.

This comes in handy for a variety of reasons – primarily to further automate security operations and alleviate pressure from human teams. AI Analyst’s investigative capabilities sit on top of everything we discussed so far (combining EDR detections with detections from other coverage areas, applying unsupervised machine learning to EDR detections, …).

It also comes in handy for following up on low-severity EDR alerts that your team may not have the human resources to investigate.

The below screenshot shows an example of a concluded AI Analyst investigation that was triggered by an EDR alert:

Figure 5: An AI Analyst incident trained on third-party data

The Impact of EDR Integrations

The purpose behind all of this is to augment human teams, save them time and drive further security automation.

By ingesting third-party endpoint alerts, combining them with our existing intelligence, and applying unsupervised machine learning to them, we achieve that further security automation.

Analysts don’t have to switch between consoles for investigations. They can leverage our high-fidelity detections that look for unusual endpoint alerts, in combination with our already powerful detections across cloud and email systems, zero trust architecture, IT and OT networks, and more. 

In our experience, this pinpoints the needle in the haystack – it cuts through noise and reduces the mean-time-to-detect and mean-time-to-investigate drastically.

All of this is done out of the box in Darktrace once the endpoint integrations are enabled. It does not need a data scientist to make the machine learning work. Nor does it need a detection engineer or threat hunter to create bespoke, meaningful detections. We want to reduce the barrier to entry for using detection and investigation solutions – in terms of skill and experience required. The system is still flexible, transparent, and open, meaning that advanced users can create their own combined detections, leveraging unsupervised machine learning across different data sets with a few clicks.

There are of course more endpoint integration capabilities available than what we covered here, and we will explore these in future blog posts.



Blog / AI / April 30, 2026

Mythos vs Ethos: Defending in an Era of AI‑Accelerated Vulnerability Discovery


Anthropic’s Mythos and what it means for security teams

Recent attention on systems such as Anthropic’s Mythos highlights a notable problem for defenders: disclosure’s role in coordinating defensive action is eroding.

As AI systems gain stronger reasoning and coding capability, their usefulness in analyzing complex software environments and identifying weaknesses naturally increases. What has changed is not attacker motivation, but the conditions under which defenders learn about and organize around risk. Vulnerability discovery and exploitation increasingly unfold in ways that turn disclosure into a retrospective signal rather than a reliable starting point for defense.

Faster discovery was inevitable and is already visible

The acceleration of vulnerability discovery was already observable across the ecosystem. Publicly disclosed vulnerabilities (CVEs) have grown at double-digit rates for the past two years, including a 32% increase in 2024 according to NIST, driven in part by AI even prior to Anthropic’s Mythos model. Most notably, XBOW topped the HackerOne US bug bounty leaderboard, marking the first time an autonomous penetration tester had done so.

The technical frontier for AI capabilities has been described elsewhere as jagged, and the implication is that Mythos is exceptional but not unique in this capability. While Mythos appears to make significant progress in complex vulnerability analysis, many other models are already able to find and exploit weaknesses to varying degrees.  

What matters here is not which model performs best, but the fact that vulnerability discovery is no longer a scarce or tightly bounded capability.

The consequence of this shift is not simply earlier discovery. It is a change in the defender-attacker race condition. Disclosure once acted as a rough synchronization point. While attackers sometimes had earlier knowledge, disclosure generally marked the moment when risk became visible and defensive action could be broadly coordinated. Increasingly, that coordination will no longer exist. Exploitation may be underway well before a CVE is published, if it is published at all.

Why patch velocity alone is not the answer

The instinctive response to this shift is to focus on patching faster, but treating patch velocity as the primary solution misunderstands the problem. Most organizations are already constrained in how quickly they can remediate vulnerabilities. Asset sprawl, operational risk, testing requirements, uptime commitments, and unclear ownership all limit response speed, even when vulnerabilities are well understood.

If discovery and exploitation now routinely precede disclosure, then patching cannot be the first line of defense. It becomes one necessary control applied within a timeline that has already shifted. This does not imply that organizations should patch less. It means that patching cannot serve as the organizing principle for defense.

Defense needs a more stable anchor

If disclosure no longer defines when defense begins, then defense needs a reference point that does not depend on knowing the vulnerability in advance.  

Every digital environment has a behavioral character. Systems authenticate, communicate, execute processes, and access resources in relatively consistent ways over time. These patterns are not static rules or signatures. They are learned behaviors that reflect how an organization operates.

When exploitation occurs, even via previously unknown vulnerabilities, those behavioral patterns change.

Attackers may use novel techniques, but they still need to gain access, create processes, move laterally, and will ultimately interact with systems in ways that diverge from what is expected. That deviation is observable regardless of whether the underlying weakness has been formally named.
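A deliberately naive sketch of that principle: learn which peers each host normally talks to, and flag first-time pairings. Real behavioral systems model many more dimensions than peer sets (authentication, processes, volumes, timing), so this is illustrative only.

```python
class PeerBaseline:
    """Learns which peers each host talks to; flags first-time pairings."""

    def __init__(self):
        self.peers = {}  # host -> set of peers previously observed

    def observe(self, host, peer):
        known = self.peers.setdefault(host, set())
        is_new = peer not in known  # deviation from the learned baseline
        known.add(peer)
        return is_new
```

Note that nothing here names a vulnerability: the signal is purely that a host interacted with something it never had before.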

In an environment where disclosure can no longer be relied on for timing or coordination, behavioral understanding is no longer an optional enhancement; it becomes the only consistently available defensive signal.

Detecting risk before disclosure

Darktrace’s threat research has consistently shown that malicious activity often becomes visible before public disclosure.

In multiple cases, including exploitation of Ivanti, SAP NetWeaver, and Trimble Cityworks, Darktrace detected anomalous behavior days or weeks ahead of CVE publication. These detections did not rely on signatures, threat intelligence feeds, or awareness of the vulnerability itself. They emerged because systems began behaving in ways that did not align with their established patterns.

This reflects a defensive approach grounded in ‘Ethos’, in contrast to the unbounded exploration represented by ‘Mythos’. Here, Mythos describes continuous vulnerability discovery at speed and scale. Ethos reflects an understanding of what is normal and expected within a specific environment, grounded in observed behavior.

Revisiting assume breach

These conditions reinforce a principle long embedded in Zero Trust thinking: assume breach.

If exploitation can occur before disclosure, patching vulnerabilities can no longer act as the organizing principle for defense. Instead, effective defense must focus on monitoring for misuse and constraining attacker activity once access is achieved. Behavioral monitoring allows organizations to identify early‑stage compromise and respond while uncertainty remains, rather than waiting for formal verification.

AI plays a critical role here, not by predicting every exploit, but by continuously learning what normal looks like within a specific environment and identifying meaningful deviation at machine speed. Identifying that deviation enables defenders to respond by constraining activity back towards normal patterns of behavior.

Not an arms race, but an asymmetry

AI is often framed as fueling an arms race between attackers and defenders. In practice, the more important dynamic is asymmetry.

Attackers operate broadly, scanning many environments for opportunities. Defenders operate deeply within their own systems, and it is this business context that is so significant. Behavioral understanding gives defenders a durable advantage. Attackers may automate discovery, but they cannot easily reproduce what belonging looks like inside a particular organization.

A changed defensive model

AI‑accelerated vulnerability discovery does not mean defenders have lost. It does mean that disclosure‑driven, patch‑centric models no longer provide a sufficient foundation for resilience.

As vulnerability volumes grow and exploitation timelines compress, effective defense increasingly depends on continuous behavioral understanding, detection that does not rely on prior disclosure, and rapid containment to limit impact. In this model, CVEs confirm risk rather than define when defense begins.

The industry has already seen this approach work in practice. As AI continues to reshape both offense and defense, behavioral detection will move from being complementary to being essential.

About the author
Andrew Hollister
Principal Solutions Engineer, Cyber Technician

Blog / Network / April 29, 2026

Darktrace Malware Analysis: Jenkins Honeypot Reveals Emerging Botnet Targeting Online Games


DDoS Botnet discovery

To observe adversary behavior in real time, Darktrace operates a global honeypot network known as “CloudyPots”, designed to capture malicious activity across a wide range of services, protocols, and cloud platforms. These honeypots provide valuable insights into the techniques, tools, and malware actively targeting internet‑facing infrastructure.

How attackers used a Jenkins honeypot to deploy the botnet

One piece of software honeypotted by Darktrace is Jenkins, a continuous integration (CI) build system that allows developers to build code and run tests automatically. The Jenkins instance in Darktrace’s honeypot is intentionally configured with a weak password, allowing attackers to obtain remote code execution on the service.

In one instance observed by Darktrace on March 18, 2026, a threat actor targeted Darktrace’s Jenkins honeypot in an attempt to deploy a distributed denial-of-service (DDoS) botnet. Further analysis by Darktrace’s Threat Research team revealed the botnet was intended to specifically target video game servers.

How the Jenkins scriptText endpoint was used for remote code execution

The Jenkins build system features an endpoint named scriptText, which enables users to programmatically submit new jobs in the form of a Groovy script. Groovy is a programming language with a syntax similar to Java’s that runs on the Java Virtual Machine (JVM). An attacker can abuse the scriptText endpoint to run a malicious script, achieving code execution on the victim host.

Figure 1: Request sent to the scriptText endpoint containing the malicious script.

The malicious script is sent using the form-data content type, which results in the contents of the script being URL-encoded. This encoding can be reversed to recover the original script, as shown in Figure 2, where Darktrace analysts decoded it using CyberChef.

Figure 2: The malicious script decoded using CyberChef.
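The same decoding step can be reproduced with Python’s standard library. The payload below is a harmless stand-in; the real request body carried the attacker’s Groovy script instead.

```python
from urllib.parse import parse_qs

# Harmless stand-in for the URL-encoded form body seen in the request
body = "script=println%28%22hello+from+groovy%29%29"
body = "script=println%28%22hello+from+groovy%22%29"

# parse_qs splits the form fields and percent-decodes the values
decoded = parse_qs(body)["script"][0]
print(decoded)  # println("hello from groovy")
```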

What happens after Jenkins is compromised

As Jenkins can be deployed on both Microsoft Windows and Linux systems, the script includes separate branches to target each platform.

In the case of Windows, the script performs the following actions:

  • Downloads a payload from 103[.]177.110.202/w.exe and saves it to C:\Windows\Temp\update.dat.
  • Renames the “update.dat” file to “win_sys.exe” (within the same folder).
  • Runs the Unblock-File command to remove the security restrictions typically applied to files downloaded from the internet.
  • Adds a firewall allow rule for TCP port 5444, which the payload uses for command-and-control (C2) communications.

On Linux systems, the script will instead use a Bash one-liner to download the payload from 103[.]177.110.202/bot_x64.exe to /tmp/bot and execute it.

Why this botnet uses a single IP for delivery and command and control

The IP 103[.]177.110.202 belongs to Webico Company Limited (specifically its Tino brand), a Vietnamese company that offers domain registrar services and server hosting. Geolocation data indicates that the IP is located in Ho Chi Minh City. Open-source intelligence (OSINT) analysis revealed multiple malicious associations tied to the IP [1].

Darktrace’s analysis found that the IP 103[.]177.110.202 is used for multiple stages of an attack, including spreading and initial access, delivering payloads, and C2 communication. This is an unusual combination, as many malware families separate their spreading servers from their C2 infrastructure. Typically, malware distribution activity results in a high volume of abuse complaints, which may result in server takedowns or service suspension by internet providers. Separate C2 infrastructure ensures that existing infections remain controllable even if the spreading server is disrupted.

How the malware evades detection and maintains persistence

Analysis of the Linux payload (bot_x64)

The sample begins by setting the environment variables BUILD_ID and JENKINS_NODE_COOKIE to “dontKillMe”. By default, Jenkins terminates long-running scripts after a defined timeout period; however, setting these variables to “dontKillMe” bypasses this check, allowing the process to continue running uninterrupted.

The binary then performs several stealth behaviors to evade detection. First, it deletes the original executable from disk and renames itself to resemble the legitimate kernel processes “ksoftirqd/0” or “kworker”, which are present on Linux installations by default. It then uses a double fork to daemonize itself, enabling it to run in the background, before redirecting standard input, standard output, and standard error to /dev/null, discarding any output the malware produces. Finally, it installs a signal handler for signals such as SIGTERM, causing them to be ignored and making the process harder to stop.
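The double fork and signal handling described here are the classic Unix daemonization recipe rather than anything novel to this sample. A Python rendering of the same pattern is shown below for illustration (daemonize() is defined but deliberately not called):

```python
import os
import signal

def ignore_termination():
    """Ignore SIGTERM, as the sample does, so a plain kill has no effect."""
    signal.signal(signal.SIGTERM, signal.SIG_IGN)

def daemonize():
    """Classic Unix double fork: detach from the controlling terminal."""
    if os.fork() > 0:
        os._exit(0)           # first parent exits; child keeps running
    os.setsid()               # child becomes leader of a new session
    if os.fork() > 0:
        os._exit(0)           # session leader exits; the grandchild can
                              # never reacquire a controlling terminal
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):      # discard stdin/stdout/stderr, as the
        os.dup2(devnull, fd)  # sample does with /dev/null
```

Legitimate daemons use the identical sequence; what marks the sample as hostile is the combination with self-deletion and kernel-process name spoofing.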

Figure 3: Stealth component of the main function

How the botnet communicates with command and control (C2)

The sample then connects to the C2 server and sends the detected architecture of the system on which the agent was installed. The malware then enters a loop to handle incoming commands.

The sample features two types of commands: utility commands used to manage the malware, and commands that trigger attacks. Three special commands are defined: “PING” (which replies with “PONG” as a keep-alive mechanism), “!stop” (which causes the malware to exit), and “!update” (which triggers the malware to download a new version from the C2 server and restart itself).
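Keep-alive loops like this PING/PONG exchange leave a regular rhythm on the wire. As a deliberately naive illustration of how defenders can test for that regularity, assuming you have per-connection timestamps in seconds:

```python
from statistics import mean, pstdev

def looks_like_beacon(timestamps, max_jitter=0.1):
    """Flag connection times spaced at near-constant intervals, the
    traffic shape produced by periodic keep-alive loops."""
    if len(timestamps) < 4:
        return False  # too few samples to judge regularity
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(gaps)
    # low relative spread between gaps means machine-like periodicity
    return avg > 0 and pstdev(gaps) / avg < max_jitter
```

Real beacon detection must also handle deliberate jitter and long-tail intervals; this sketch only captures the simplest periodic case.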

Figure 4: Initial connection to the C2 server.

What DDoS attack techniques this botnet uses

The attack commands map onto the handler functions analyzed below: attack_udp, attack_udp_pps, attack_dayz, attack_tcp_push, attack_http, and attack_special.

Many of these commands invoke the same function despite appearing to be different attack techniques. For example, specialized attacks such as Cloudflare bypass (cfbypass, uam) use the exact same function as a standard HTTP attack. This may indicate the threat actor is attempting to make the botnet look more capable than it actually is, or it could suggest that these commands are placeholders for attack functionality that has yet to be implemented.

All of the commands take three arguments: the target IP, the port to attack, and the duration of the attack.

attack_udp and attack_udp_pps

The attack_udp and attack_udp_pps functions both use a basic loop and sendto system call to send UDP packets to the victim’s IP, either targeting a predetermined port or a random port. The attack_udp function sends packets with 1,450 bytes of data, aimed at bandwidth saturation, while the attack_udp_pps function sends smaller 64-byte packets. In both cases, the data body of the packet consists of entirely random data.

Figure 5: Code for the UDP attack method

attack_dayz

The attack_dayz function follows a similar structure to attack_udp; however, instead of sending random data, it sends a TSource Engine Query. This command is specific to Valve Source Engine servers and is designed to return a large volume of data about the targeted server. By repeatedly flooding this request, an attacker can exhaust a server’s resources using a comparatively small amount of data.

The Valve Source Engine server, also called the Source Dedicated Server, is server software developed by video game company Valve that enables multiplayer gameplay for titles built on the Source game engine, also developed by Valve. The Source engine is used in games such as Counter-Strike and Team Fortress 2. Curiously, the function attack_dayz appears to be named after another popular online multiplayer game, DayZ; however, DayZ does not use the Valve Source Engine, making it unclear why this name was chosen.

Figure 6: The code for the attack_dayz attack function.

attack_tcp_push

The attack_tcp_push function establishes a TCP socket with the non-blocking flag set, allowing it to rapidly call functions such as connect() and send() without waiting for their completion. For the duration of the attack, it enters a while loop in which it repeatedly connects to the victim, sends 1,024 bytes of random data, and then closes the connection. This process repeats until the attack duration ends. If the mode flag is set to 1, the function also configures the socket with TCP no-delay enabled, allowing for packets to be sent immediately without buffering, resulting in a higher packet rate and a more effective attack.

Figure 7: The code for the TCP attack function.

attack_http

Similar to attack_tcp_push, attack_http configures a socket with no-delay enabled and non-blocking set. After establishing the connection, it sends 64 HTTP GET requests before closing the socket.

Figure 8: The code for the HTTP attack function.

attack_special

The attack_special function creates a UDP socket and sets the port and payload based on the value of the mode flag:

  • Mode 0: Port 53 (DNS), sending a 10-byte malformed data packet.
  • Mode 1: Port 27015 (Valve Source Engine), sending the previously observed TSource Engine Query packet.
  • Mode 2: Port 123 (NTP), sending the start of an NTP control request.
Figure 9: The code for the attack_special function.

What this botnet reveals about opportunistic attacks on internet-facing systems

Jenkins is one of the less frequently exploited services honeypotted by Darktrace, with only a handful of campaigns observed. Nonetheless, the emergence of this new DDoS botnet demonstrates that attackers continue to opportunistically exploit internet-facing misconfigurations at scale to grow their botnets.

While the hosts most commonly affected by these opportunistic attacks are usually “lower-value” systems, this distinction is largely irrelevant for botnets, where sheer numbers matter more to overall effectiveness.

The presence of game-specific DoS techniques further highlights that the gaming industry continues to be extensively targeted by cyber attackers, with Cloudflare reporting it as the fourth most targeted industry [2]. This botnet has likely already been used against game servers, serving as a reminder for server operators to ensure appropriate mitigations are in place.

Credit to Nathaniel Bill (Malware Research Engineer)
Edited by Ryan Traill (Content Manager)

Indicators of Compromise (IoCs)

103[.]177.110.202 - Attacker and command-and-control IP

F79d05065a2ba7937b8781e69b5859d78d5f65f01fb291ae27d28277a5e37f9b – bot_x64

References

[1] https://www.virustotal.com/gui/url/86db2530298e6335d3ecc66c2818cfbd0a6b11fcdfcb75f575b9fcce1faa00f1/detection

[2] https://blog.cloudflare.com/ddos-threat-report-2025-q4/

About the author
Nathaniel Bill
Malware Research Engineer