Blog / Cloud / January 13, 2025

Agent vs. Agentless Cloud Security: Why Deployment Methods Matter

Cloud security solutions can be deployed with agentless or agent-based approaches or use a combination of methods. Organizations must weigh which method applies best to the assets and data the tool will protect.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Kellie Regan
Director, Product Marketing - Cloud Security

The rapid adoption of cloud technologies has brought significant security challenges for organizations of all sizes. According to recent studies, over 70% of enterprises now operate in hybrid or multi-cloud environments, with 93% employing a multi-cloud strategy[1]. This complexity requires robust security tools, but opinions vary on the best deployment method—agent-based, agentless, or a combination of both.

Agent-based and agentless cloud security approaches offer distinct benefits and limitations, and organizations often make deployment choices based on the function of the specific assets covered, the types of data stored, and their cloud architecture, such as hybrid or multi-cloud deployments.

For example, agentless solutions are increasingly favored for their ease of deployment and ability to provide broad visibility across dynamic cloud environments. These are especially useful for DevOps teams, with 64% of organizations citing faster deployment as a key reason for adopting agentless tools[2].

On the other hand, agent-based solutions remain the preferred choice for environments requiring deep monitoring and granular control, such as securing sensitive high-value workloads in industries like finance and healthcare. In fact, over 50% of enterprises with critical infrastructure report relying on agent-based solutions for their advanced protection capabilities[3].

As the debate continues, many organizations are turning to combined approaches, leveraging the strengths of both agent-based and agentless tools to address the full spectrum of their security needs for comprehensive coverage. Understanding the capabilities and limitations of these methods is critical to building an effective cloud security strategy that adapts to evolving threats and complex infrastructures.

Agent-based cloud security

Agent-based security solutions involve deploying software agents on each device or system that needs protection. Agent-based solutions are great choices when you need in-depth monitoring and protection capabilities. They are ideal for organizations that require deep security controls and real-time active response, particularly in hybrid and on-premises environments.

Key advantages include:

1. Real-time monitoring and protection: Agents detect and block threats like malware, ransomware, and anomalous behavior in real time, providing ongoing protection and enforcing compliance by continuously monitoring workload activity. Agents also enable full control over workloads for active response, such as blocking IP addresses, killing processes, disabling accounts, and isolating infected systems from the network to stop lateral movement.

2. Deep visibility for hybrid environments: Agent-based approaches allow for full visibility across on-premises, hybrid, and multi-cloud environments by deploying agents on physical and virtual machines. Agents offer detailed insights into system behavior, including processes, files, memory, network connections, and more, detecting subtle anomalies that might indicate security threats. Host-based monitoring tracks vulnerabilities at the system and application level, including unpatched software, rogue processes, and unauthorized network activity.

3. Comprehensive coverage: Agents are highly effective in hybrid environments (cloud and on-premises), as they can be installed on both physical and virtual machines. Agents function independently on each host device on which they are installed, which is especially helpful for endpoints that may operate without constant network connectivity.

Challenges:

1. Resource-intensive: Agents can consume CPU, memory, and network resources, which may affect performance, especially in environments with large numbers of workloads or ephemeral resources.

2. Challenging in dynamic environments: Managing hundreds or thousands of agents in highly dynamic or ephemeral environments (e.g., containers, serverless functions) can be complex and labor-intensive.

3. Slower deployment: Requires agent installation on each workload or instance, which can be time-consuming, particularly in large or complex environments.  

Agentless cloud security

Agentless security does not require software agents to be installed on each device; instead, it uses cloud infrastructure and APIs to perform security checks. Agentless solutions are highly scalable, have minimal impact on performance, and are ideal for cloud-native, highly dynamic environments such as serverless and containerized workloads. They are a strong choice for cloud-native and multi-cloud environments where rapid deployment, scalability, and minimal performance impact are critical, and where response actions can be handled through external tools or manual processes.

Key advantages include:

1. Scalability and ease of deployment: Because agentless security doesn’t require installation on each individual device, it is much easier to deploy and can quickly scale across a vast number of cloud assets. This approach is ideal for environments where resources are frequently created and destroyed (e.g., serverless, containerized workloads), as there is no need for agent installation or maintenance.

2. Reduced system overhead: Without the need to run local agents, agentless security minimizes the impact on system performance. This is crucial in high-performance environments.

3. Broad visibility: Agentless security connects via API to cloud service providers, offering near-instant visibility and threat detection. It provides a comprehensive view of your cloud environment, making it easier to manage and secure large and complex infrastructures.

Challenges

1. Infrastructure-level monitoring: Agentless solutions rely on cloud service provider logs and API calls, meaning that detection might not be as immediate as agent-based solutions. They collect configuration data and logs, focusing on infrastructure misconfigurations, identity risks, exposed resources, and network traffic, but lack visibility and access to detailed, system-level information such as running processes and host-level vulnerabilities.

2. Cloud-focused: Primarily for cloud environments, although some tools may integrate with on-premises systems through API-based data gathering. For organizations with hybrid cloud environments, this approach fragments visibility and security, leading to blind spots and increasing security risk.

3. Passive remediation: Typically provides alerts and recommendations, but lacks deep control over workloads, requiring manual intervention or orchestration tools (e.g., SOAR platforms) to execute responses. Some agentless tools trigger automated responses via cloud provider APIs (e.g., revoking permissions, adjusting security groups), but with limited scope.

Combined agent-based and agentless approaches

A combined approach leverages the strengths of both agent-based and agentless security for complete coverage. This hybrid strategy helps security teams achieve comprehensive coverage by:

  • Using agent-based solutions for deep, real-time protection and detailed monitoring of critical systems or sensitive workloads.
  • Employing agentless solutions for fast deployment, broader visibility, and easier scalability across all cloud assets, which is particularly useful in dynamic cloud environments where workloads frequently change.

The combined approach has distinct practical applications. For example, imagine a financial services company that deals with sensitive transactions. Its security team might use agent-based security for critical databases to ensure stringent protections are in place. Meanwhile, agentless solutions could be ideal for less critical, transient workloads in the cloud, where rapid scalability and minimal performance impact are priorities. For organizations with varied data types and infrastructures, the combined approach works best.

Best of both worlds: The benefits of a combined approach

The combined approach not only maximizes security efficacy but also aligns with diverse operational needs. This means that all parts of the cloud environment are secured according to their risk profile and functional requirements. Agent-based deployment provides in-depth monitoring and active protection against threats, suitable for environments requiring tight security controls, such as financial services or healthcare data processing systems. Agentless deployment complements agents by offering broader visibility and easier scalability across diverse and dynamic cloud environments, ideal for rapidly changing cloud resources.

There are three major benefits from combining agent-based and agentless approaches.

1. Building a holistic security posture: By integrating both agent-based and agentless technologies, organizations can ensure that all parts of their cloud environments are covered—from persistent, high-risk endpoints to transient cloud resources. This comprehensive coverage is crucial for detecting and responding to threats promptly and effectively.

2. Reducing overhead while boosting scalability: Agentless systems require no software installation on each device, reducing overhead and eliminating the need to update and maintain agents on a large number of endpoints. This makes it easier to scale security as the organization grows or as the cloud environment changes.

3. Applying targeted protection where needed: Agent-based solutions can be deployed on selected assets that handle sensitive information or are critical to business operations, thus providing focused protection without incurring the costs and complexity of universal deployment.

Use cases for a combined approach

A combined approach gives security teams the flexibility to deploy agent-based and agentless solutions based on the specific security requirements of different assets and environments. As a result, organizations can optimize their security expenditures and operational efforts, allowing for greater adaptability in cloud security use cases.

Let’s take a look at how this could practically play out. In the combined approach, agent-based security can perform the following:

1. Deep monitoring and real-time protection:

  • Workload threat detection: Agent-based solutions monitor individual workloads for suspicious activity, such as unauthorized file changes or unusual resource usage, providing high granularity for detecting threats within critical cloud applications.
  • Behavioral analysis of applications: By deploying agents on virtual machines or containers, organizations can monitor behavior patterns and flag anomalies indicative of insider threats, lateral movement, or Advanced Persistent Threats (APTs).
  • Protecting high-sensitivity environments: Agents provide continuous monitoring and advanced threat protection for environments processing sensitive data, such as payment processing systems or healthcare records, leveraging capabilities like memory protection and file integrity monitoring.

2. Cloud asset protection:

  • Securing critical infrastructure: Agent-based deployments are ideal for assets like databases or storage systems that require real-time defense against exploits and ransomware.
  • Advanced packet inspection: For high-value assets, agents offer deep packet inspection and in-depth logging to detect stealthy attacks such as data exfiltration.
  • Customizable threat response: Agents allow for tailored security rules and automated responses at the workload level, such as shutting down compromised instances or quarantining infected files.

At the same time, agentless cloud security provides complementary benefits such as:

1. Broad visibility and compliance:

  • Asset discovery and management: Agentless systems can quickly scan the entire cloud environment to identify and inventory all assets, a crucial capability for maintaining compliance with regulations like GDPR or HIPAA, which require up-to-date records of data locations and usage.
  • Regulatory compliance auditing and configuration management: Quickly identify gaps in compliance frameworks like PCI DSS or SOC 2 by scanning configurations, permissions, and audit trails without installing agents. Using APIs to check configurations across cloud services ensures that all instances comply with organizational and regulatory standards, an essential aspect for maintaining security hygiene and compliance.
  • Shadow IT Detection: Detect and map unauthorized cloud services or assets that are spun up without security oversight, ensuring full inventory coverage.

2. Rapid environmental assessment:

  • Vulnerability assessment of new deployments: In environments where new code is frequently deployed, agentless security can quickly assess new instances, containers, or workloads in CI/CD pipelines for vulnerabilities and misconfigurations, enabling secure deployments at DevOps speed.
  • Misconfiguration alerts: Detect and alert on common cloud configuration issues, such as exposed storage buckets or overly permissive IAM roles, across cloud providers like AWS, Azure, and GCP.
  • Policy enforcement: Validate that new resources adhere to established security baselines and organizational policies, preventing security drift during rapid cloud scaling.
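As a concrete illustration of this kind of API-driven configuration check, here is a minimal sketch: a pure function that flags bucket policies granting access to any principal. The policy shape mirrors AWS S3 bucket policy JSON, but the function name and sample data are hypothetical, not taken from any specific product.

```python
# Minimal sketch of an agentless-style misconfiguration check: a pure
# function over policy documents shaped like AWS S3 bucket policies.
# Names and sample data are illustrative only.

def find_public_buckets(policies):
    """Return names of buckets whose policy allows any principal ("*")."""
    flagged = []
    for name, policy in policies.items():
        for stmt in policy.get("Statement", []):
            if stmt.get("Effect") == "Allow" and stmt.get("Principal") in ("*", {"AWS": "*"}):
                flagged.append(name)
                break
    return flagged

policies = {
    "public-assets": {
        "Statement": [{"Effect": "Allow", "Principal": "*", "Action": "s3:GetObject"}]
    },
    "internal-logs": {
        "Statement": [{"Effect": "Allow",
                       "Principal": {"AWS": "arn:aws:iam::123456789012:root"},
                       "Action": "s3:*"}]
    },
}
print(find_public_buckets(policies))  # flags only the world-readable bucket
```

In a real agentless tool, the policy documents would be fetched via the cloud provider's API rather than supplied inline.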

Combining agent-based and agentless approaches in cloud security not only maximizes the protective capabilities, but also offers flexibility, efficiency, and comprehensive coverage tailored to the diverse and evolving needs of modern cloud environments. This integrated strategy ensures that organizations can protect their assets more effectively while also adapting quickly to new threats and regulatory requirements.

Darktrace offers complementary and flexible deployment options for holistic cloud security

Powered by multilayered AI, Darktrace / CLOUD is a Cloud Detection and Response (CDR) solution that is agentless by default, with optional lightweight, host-based server agents for enhanced real-time actioning and deep inspection. As such, it can deploy in cloud environments in minutes and provide unified visibility and security across hybrid, multi-cloud environments.

With any deployment method, Darktrace supports multi-tenant, hybrid, and serverless cloud environments. Its Self-Learning AI learns the normal behavior across architectures, assets, and users to identify unusual activity that may indicate a threat. With this approach, Darktrace / CLOUD quickly disarms threats, whether they are known, unknown, or completely novel. It then accelerates the investigation process and responds to threats at machine speed.

Learn more about how Darktrace / CLOUD secures multi and hybrid cloud environments in the Solution Brief.

References:

1. Flexera 2023 State of the Cloud Report

2. ESG Research 2023 Report on Cloud-Native Security

3. Gartner, Market Guide for Cloud Workload Protection Platforms, 2023


Blog / AI / April 30, 2026

Mythos vs Ethos: Defending in an Era of AI‑Accelerated Vulnerability Discovery


Anthropic’s Mythos and what it means for security teams

Recent attention on systems such as Anthropic’s Mythos highlights a notable problem for defenders: disclosure’s role in coordinating defensive action is eroding.

As AI systems gain stronger reasoning and coding capability, their usefulness in analyzing complex software environments and identifying weaknesses naturally increases. What has changed is not attacker motivation, but the conditions under which defenders learn about and organize around risk. Vulnerability discovery and exploitation increasingly unfold in ways that turn disclosure into a retrospective signal rather than a reliable starting point for defense.

Faster discovery was inevitable and is already visible

The acceleration of vulnerability discovery was already observable across the ecosystem before Anthropic’s Mythos model. Publicly disclosed vulnerabilities (CVEs) have grown at double-digit rates for the past two years, including a 32% increase in 2024 according to NIST, driven in part by AI. Most notably, XBOW topped the HackerOne US bug bounty leaderboard, marking the first time an autonomous penetration tester had done so.

The technical frontier for AI capabilities has been described elsewhere as jagged, and the implication is that Mythos is exceptional but not unique in this capability. While Mythos appears to make significant progress in complex vulnerability analysis, many other models can already find and exploit weaknesses to varying degrees.

What matters here is not which model performs best, but the fact that vulnerability discovery is no longer a scarce or tightly bounded capability.

The consequence of this shift is not simply earlier discovery. It is a change in the defender-attacker race condition. Disclosure once acted as a rough synchronization point. While attackers sometimes had earlier knowledge, disclosure generally marked the moment when risk became visible and defensive action could be broadly coordinated. Increasingly, that coordination will no longer exist. Exploitation may be underway well before a CVE is published, if it is published at all.

Why patch velocity alone is not the answer

The instinctive response to this shift is to focus on patching faster, but treating patch velocity as the primary solution misunderstands the problem. Most organizations are already constrained in how quickly they can remediate vulnerabilities. Asset sprawl, operational risk, testing requirements, uptime commitments, and unclear ownership all limit response speed, even when vulnerabilities are well understood.

If discovery and exploitation now routinely precede disclosure, then patching cannot be the first line of defense. It becomes one necessary control applied within a timeline that has already shifted. This does not imply that organizations should patch less. It means that patching cannot serve as the organizing principle for defense.

Defense needs a more stable anchor

If disclosure no longer defines when defense begins, then defense needs a reference point that does not depend on knowing the vulnerability in advance.  

Every digital environment has a behavioral character. Systems authenticate, communicate, execute processes, and access resources in relatively consistent ways over time. These patterns are not static rules or signatures. They are learned behaviors that reflect how an organization operates.

When exploitation occurs, even via previously unknown vulnerabilities, those behavioral patterns change.

Attackers may use novel techniques, but they still need to gain access, create processes, move laterally, and will ultimately interact with systems in ways that diverge from what is expected. That deviation is observable regardless of whether the underlying weakness has been formally named.
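The idea can be sketched with a toy baseline model (illustrative only, and not Darktrace's actual algorithm): learn a per-host activity baseline, then flag observations that deviate sharply from it.

```python
# Toy behavioral-deviation check: flag observations far outside a learned
# baseline. Real products learn far richer, multi-dimensional baselines;
# this sketch uses a simple z-score on one metric for illustration.
from statistics import mean, stdev

def is_anomalous(history, observation, threshold=3.0):
    """True if the observation is more than `threshold` std devs from baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return observation != mu
    return abs(observation - mu) / sigma > threshold

# A host that normally makes ~10 outbound connections per minute...
baseline = [9, 11, 10, 12, 8, 10, 11, 9]
print(is_anomalous(baseline, 10))   # within normal variation
print(is_anomalous(baseline, 250))  # sudden spike, e.g. scanning activity
```

The point is that the check needs no signature or CVE: any metric with a stable baseline can surface exploitation of a weakness that has never been named.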

In an environment where disclosure can no longer be relied on for timing or coordination, behavioral understanding is no longer an optional enhancement; it becomes the only consistently available defensive signal.

Detecting risk before disclosure

Darktrace’s threat research has consistently shown that malicious activity often becomes visible before public disclosure.

In multiple cases, including exploitation of Ivanti, SAP NetWeaver, and Trimble Cityworks, Darktrace detected anomalous behavior days or weeks ahead of CVE publication. These detections did not rely on signatures, threat intelligence feeds, or awareness of the vulnerability itself. They emerged because systems began behaving in ways that did not align with their established patterns.

This reflects a defensive approach grounded in ‘Ethos’, in contrast to the unbounded exploration represented by ‘Mythos’. Here, Mythos describes continuous vulnerability discovery at speed and scale. Ethos reflects an understanding of what is normal and expected within a specific environment, grounded in observed behavior.

Revisiting assume breach

These conditions reinforce a principle long embedded in Zero Trust thinking: assume breach.

If exploitation can occur before disclosure, patching vulnerabilities can no longer act as the organizing principle for defense. Instead, effective defense must focus on monitoring for misuse and constraining attacker activity once access is achieved. Behavioral monitoring allows organizations to identify early‑stage compromise and respond while uncertainty remains, rather than waiting for formal verification.

AI plays a critical role here, not by predicting every exploit, but by continuously learning what normal looks like within a specific environment and identifying meaningful deviation at machine speed. Identifying that deviation enables defenders to respond by constraining activity back towards normal patterns of behavior.

Not an arms race, but an asymmetry

AI is often framed as fueling an arms race between attackers and defenders. In practice, the more important dynamic is asymmetry.

Attackers operate broadly, scanning many environments for opportunities. Defenders operate deeply within their own systems, and it’s this business context which is so significant. Behavioral understanding gives defenders a durable advantage. Attackers may automate discovery, but they cannot easily reproduce what belonging looks like inside a particular organization.

A changed defensive model

AI‑accelerated vulnerability discovery does not mean defenders have lost. It does mean that disclosure‑driven, patch‑centric models no longer provide a sufficient foundation for resilience.

As vulnerability volumes grow and exploitation timelines compress, effective defense increasingly depends on continuous behavioral understanding, detection that does not rely on prior disclosure, and rapid containment to limit impact. In this model, CVEs confirm risk rather than define when defense begins.

The industry has already seen this approach work in practice. As AI continues to reshape both offense and defense, behavioral detection will move from being complementary to being essential.

About the author
Andrew Hollister
Principal Solutions Engineer, Cyber Technician

Blog / Network / April 29, 2026

Darktrace Malware Analysis: Jenkins Honeypot Reveals Emerging Botnet Targeting Online Games


DDoS Botnet discovery

To observe adversary behavior in real time, Darktrace operates a global honeypot network known as “CloudyPots”, designed to capture malicious activity across a wide range of services, protocols, and cloud platforms. These honeypots provide valuable insights into the techniques, tools, and malware actively targeting internet‑facing infrastructure.

How attackers used a Jenkins honeypot to deploy the botnet

One such service honeypotted by Darktrace is Jenkins, a continuous integration (CI) build system that allows developers to build code and run tests automatically. The instance of Jenkins in Darktrace’s honeypot is intentionally configured with a weak password, allowing attackers to obtain remote code execution on the service.

In one instance observed by Darktrace on March 18, 2026, a threat actor seemingly attempted to target Darktrace’s Jenkins honeypot to deploy a distributed denial-of-service (DDoS) botnet. Further analysis by Darktrace’s Threat Research team revealed the botnet was intended to specifically target video game servers.

How the Jenkins scriptText endpoint was used for remote code execution

The Jenkins build system features an endpoint named scriptText, which enables users to programmatically submit jobs in the form of a Groovy script. Groovy is a programming language with syntax similar to Java that runs on the Java Virtual Machine (JVM). An attacker can abuse the scriptText endpoint to run a malicious script, achieving code execution on the victim host.

Figure 1: Request sent to the scriptText endpoint containing the malicious script.

The malicious script is sent using the form-data content type, which results in the contents of the script being URL encoded. This encoding can be decoded to recover the original script, as shown in Figure 2, where Darktrace analysts decoded the script using CyberChef.

Figure 2: The malicious script decoded using CyberChef.
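The same decoding step can be reproduced with Python's standard library. The Groovy payload below is a harmless stand-in, not the script observed in the attack.

```python
# URL-encoding round trip, equivalent to the CyberChef decode step above.
# The Groovy snippet is a benign placeholder, not the attacker's script.
from urllib.parse import quote, unquote

groovy = 'println "hello from Jenkins"'
encoded = quote(groovy)      # how the script appears in the form-data body
decoded = unquote(encoded)   # recovering the original script

print(encoded)
print(decoded == groovy)  # True
```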

What happens after Jenkins is compromised

As Jenkins can be deployed on both Microsoft Windows and Linux systems, the script includes separate branches to target each platform.

In the case of Windows, the script performs the following actions:

  • Downloads a payload from 103[.]177.110.202/w.exe and saves it to C:\Windows\Temp\update.dat.
  • Renames the “update.dat” file to “win_sys.exe” within the same folder.
  • Runs the Unblock-File command to remove the security restrictions typically applied to files downloaded from the internet.
  • Adds a firewall allow rule for TCP port 5444, which the payload uses for command-and-control (C2) communications.

On Linux systems, the script will instead use a Bash one-liner to download the payload from 103[.]177.110.202/bot_x64.exe to /tmp/bot and execute it.

Why this botnet uses a single IP for delivery and command and control

The IP 103[.]177.110.202 belongs to Webico Company Limited, specifically its Tino brand, a Vietnamese company that offers domain registrar services and server hosting. Geolocation data indicates that the IP is located in Ho Chi Minh City. Open-source intelligence (OSINT) analysis revealed multiple malicious associations tied to the IP [1].

Darktrace’s analysis found that the IP 103[.]177.110.202 is used for multiple stages of an attack, including spreading and initial access, delivering payloads, and C2 communication. This is an unusual combination, as many malware families separate their spreading servers from their C2 infrastructure. Typically, malware distribution activity results in a high volume of abuse complaints, which may result in server takedowns or service suspension by internet providers. Separate C2 infrastructure ensures that existing infections remain controllable even if the spreading server is disrupted.

How the malware evades detection and maintains persistence

Analysis of the Linux payload (bot_x64)

The sample begins by setting the environment variables BUILD_ID and JENKINS_NODE_COOKIE to “dontKillMe”. By default, Jenkins terminates long-running scripts after a defined timeout period; however, setting these variables to “dontKillMe” bypasses this check, allowing the script to continue running uninterrupted.

The script then performs several stealth behaviors to evade detection. First, it deletes the original executable from disk and renames itself to resemble the legitimate kernel processes “ksoftirqd/0” or “kworker”, which are present on Linux installations by default. It then uses a double fork to daemonize itself, enabling it to run in the background, before redirecting standard input, standard output, and standard error to /dev/null to suppress any output from the malware. Finally, the script installs handlers for signals such as SIGTERM, causing them to be ignored and making the process harder to stop.

Figure 3: Stealth component of the main function.
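The signal-handling trick is standard POSIX behavior and can be sketched in Python (illustrative only, not the malware's code). Once SIG_IGN is installed, a plain kill no longer terminates the process; defenders must use an uncatchable signal such as SIGKILL instead.

```python
# Sketch of the SIGTERM-ignoring technique described above (standard POSIX
# behavior shown in Python for clarity; not the malware's actual code).
import os
import signal

signal.signal(signal.SIGTERM, signal.SIG_IGN)  # ignore polite termination requests

if os.name == "posix":
    os.kill(os.getpid(), signal.SIGTERM)       # would normally terminate the process
    print("still running")                     # reached only because SIGTERM is ignored
```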

How the botnet communicates with command and control (C2)

The sample then connects to the C2 server and sends the detected architecture of the system on which the agent was installed. The malware then enters a loop to handle incoming commands.

The sample features two types of commands: utility commands used to manage the malware, and commands that trigger attacks. Three special commands are defined: “PING”, which replies with PONG as a keep-alive mechanism; “!stop”, which causes the malware to exit; and “!update”, which triggers the malware to download a new version from the C2 server and restart itself.

Figure 4: Initial connection to the C2 server.
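The utility-command handling can be reconstructed as a small dispatcher. The command names come from the analysis above; the handler logic itself is an illustrative sketch, not decompiled code.

```python
# Illustrative reconstruction of the C2 command loop's dispatch logic.
# Command names are from the analysis; return values are descriptive labels.
def handle_command(cmd):
    if cmd == "PING":
        return "PONG"            # keep-alive reply to the C2 server
    if cmd == "!stop":
        return "exit"            # malware terminates
    if cmd == "!update":
        return "self-update"     # re-download payload from C2 and restart
    return "attack-dispatch"     # remaining commands map to attack functions

print(handle_command("PING"))        # PONG
print(handle_command("attack_udp"))  # attack-dispatch
```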

What DDoS attack techniques this botnet uses

The attack commands are detailed in the sections below.

Many of these commands invoke the same function despite appearing to be different attack techniques. For example, specialized attacks such as Cloudflare bypass (cfbypass, uam) use the exact same function as a standard HTTP attack. This may indicate the threat actor is attempting to make the botnet look more capable than it actually is, or that these commands are placeholders for attack functionality that has yet to be implemented.

All the commands take three arguments: IP, port to attack, and the duration of the attack.

attack_udp and attack_udp_pps

The attack_udp and attack_udp_pps functions both use a basic loop and sendto system call to send UDP packets to the victim’s IP, either targeting a predetermined port or a random port. The attack_udp function sends packets with 1,450 bytes of data, aimed at bandwidth saturation, while the attack_udp_pps function sends smaller 64-byte packets. In both cases, the data body of the packet consists of entirely random data.

Figure 5: Code for the UDP attack method.
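Some back-of-the-envelope arithmetic shows the trade-off between the two modes. Assuming a 1 Gbit/s link (the link speed is an assumption for illustration) and ignoring protocol headers, the larger packets saturate bandwidth at a modest packet rate, while 64-byte packets maximize the packets-per-second load on the victim's network stack.

```python
# Rough packet-rate arithmetic for the two UDP modes (payload sizes from the
# analysis; 1 Gbit/s link speed is assumed, and UDP/IP headers are ignored).
LINK_BPS = 1_000_000_000        # 1 Gbit/s
BYTES_PER_SEC = LINK_BPS // 8   # 125,000,000 bytes/s

pps_bandwidth = BYTES_PER_SEC // 1450  # attack_udp: ~86k packets/s
pps_flood = BYTES_PER_SEC // 64        # attack_udp_pps: ~1.95M packets/s

print(pps_bandwidth, pps_flood)
```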

attack_dayz

The attack_dayz function follows a similar structure to the attack_udp function; however, instead of sending random data, it will instead send a TSource Engine Query. This command is specific to Valve Source Engine servers and is designed to return a large volume of data about the targeted server. By repeatedly flooding this request, an attacker can exhaust the resources of a server using a comparatively small amount of data.

The Valve Source Engine server, also called the Source Dedicated Server, is a server developed by video game company Valve that enables multiplayer gameplay for titles built using the Source game engine, which is also developed by Valve. The Source engine is used in games such as Counter-Strike and Team Fortress 2. Curiously, the attack_dayz function appears to be named after another popular online multiplayer game, DayZ; however, DayZ does not use the Valve Source Engine, making it unclear why this name was chosen.

Figure 6: The code for the attack_dayz attack function.
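The TSource Engine Query is the A2S_INFO request from Valve's server query protocol: four 0xFF header bytes, the literal query string, and a null terminator. The small size of the request relative to the response it elicits is what makes the resource-exhaustion effect described above possible.

```python
# The A2S_INFO ("TSource Engine Query") request packet: a 0xFFFFFFFF header,
# the literal string, and a null terminator. Shown here for inspection only;
# nothing is sent on the network.
query = b"\xff\xff\xff\xff" + b"TSource Engine Query\x00"

print(len(query))  # 25 bytes on the wire, before UDP/IP headers
```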

attack_tcp_push

The attack_tcp_push function establishes a TCP socket with the non-blocking flag set, allowing it to rapidly call functions such as connect() and send() without waiting for their completion. For the duration of the attack, it enters a while loop in which it repeatedly connects to the victim, sends 1,024 bytes of random data, and then closes the connection. This process repeats until the attack duration ends. If the mode flag is set to 1, the function also configures the socket with TCP no-delay enabled, allowing for packets to be sent immediately without buffering, resulting in a higher packet rate and a more effective attack.

Figure 7: The code for the TCP attack function.
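The two socket options described above can be shown with Python's standard library. This sketch only configures a socket; it does not connect anywhere.

```python
# Demonstrating the two socket options from attack_tcp_push: non-blocking
# mode lets connect()/send() return immediately, and TCP_NODELAY disables
# Nagle buffering so small writes go out as individual packets.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setblocking(False)                                     # non-blocking flag
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # the mode == 1 case

nodelay = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
print(nodelay != 0)  # True: Nagle's algorithm is disabled
s.close()
```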

attack_http

Similar to attack_tcp_push, attack_http configures a socket with no-delay enabled and non-blocking set. After establishing the connection, it sends 64 HTTP GET requests before closing the socket.

Figure 8: The code for the HTTP attack function.

attack_special

The attack_special function creates a UDP socket and sets the port and payload based on the value of the mode flag:

  • Mode 0: Port 53 (DNS), sending a 10-byte malformed data packet.
  • Mode 1: Port 27015 (Valve Source Engine), sending the previously observed TSource Engine Query packet.
  • Mode 2: Port 123 (NTP), sending the start of an NTP control request.
Figure 9: The code for the attack_special function.
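The three modes can be summarized as data. The ports and payload descriptions come from the analysis above; the actual byte payloads are omitted.

```python
# Summary of the attack_special mode flag (ports and payload descriptions
# from the analysis; actual payload bytes are intentionally omitted).
SPECIAL_MODES = {
    0: {"port": 53,    "service": "DNS",
        "payload": "10-byte malformed data packet"},
    1: {"port": 27015, "service": "Valve Source Engine",
        "payload": "TSource Engine Query"},
    2: {"port": 123,   "service": "NTP",
        "payload": "start of an NTP control request"},
}

print(SPECIAL_MODES[1]["port"])  # 27015
```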

What this botnet reveals about opportunistic attacks on internet-facing systems

Jenkins is one of the less frequently exploited services honeypotted by Darktrace, with only a handful of campaigns observed. Nonetheless, the emergence of this new DDoS botnet demonstrates that attackers continue to opportunistically exploit internet-facing misconfigurations at scale to grow their botnets.

While the hosts most commonly affected by these opportunistic attacks are usually “lower-value” systems, this distinction is largely irrelevant for botnets, where numbers alone are more important to overall effectiveness.

The presence of game-specific DoS techniques further highlights that the gaming industry continues to be extensively targeted by cyber attackers, with Cloudflare reporting it as the fourth most targeted industry [2]. This botnet has likely already been used against game servers, serving as a reminder for server operators to ensure appropriate mitigations are in place.

Credit to Nathaniel Bill (Malware Research Engineer)
Edited by Ryan Traill (Content Manager)

Indicators of Compromise (IoCs)

103[.]177.110.202 - Attacker and command-and-control IP

F79d05065a2ba7937b8781e69b5859d78d5f65f01fb291ae27d28277a5e37f9b – bot_x64

References

[1] https://www.virustotal.com/gui/url/86db2530298e6335d3ecc66c2818cfbd0a6b11fcdfcb75f575b9fcce1faa00f1/detection

[2] https://blog.cloudflare.com/ddos-threat-report-2025-q4/

About the author
Nathaniel Bill
Malware Research Engineer