January 30, 2025

Reimagining Your SOC: Overcoming Alert Fatigue with AI-Led Investigations  

Reimagining your SOC Part 2/3: This blog explores how the challenges facing the modern SOC can be addressed by transforming the investigation process, unlocking efficiency and scalability in SOC operations with AI.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Brittany Woodsmall
Product Marketing Manager, AI

The efficiency of a Security Operations Center (SOC) hinges on its ability to detect, analyze and respond to threats effectively. With advancements in AI and automation, key early SOC team metrics such as Mean Time to Detect (MTTD) have seen significant improvements:

  • 96% of defenders believe that AI-powered solutions significantly boost the speed and efficiency of prevention, detection, response, and recovery.
  • Organizations leveraging AI and automation can shorten their breach lifecycle by an average of 108 days compared to those without these technologies.

While these advances have improved performance and effectiveness in the detection phase, they have been less beneficial to the next step of the process, in which initial alerts are investigated further to determine their relevance and how they relate to other activity. This step is typically measured with the metric Mean Time to Analysis (MTTA), although some SOC teams operate a two-level process: one team performs initial triage to filter out obviously uninteresting alerts, and another analyzes the remainder in more detail. SOC teams continue to grapple with alert fatigue, overwhelmed analysts, and inefficient triage processes, preventing them from achieving the operational efficiency necessary for a high-performing SOC.

Addressing this core inefficiency requires extending AI's capabilities beyond detection, streamlining and optimizing the investigative workflows that underpin effective analysis.

Challenges with SOC alert investigation

Detecting cyber threats is only the beginning of a much broader challenge of SOC efficiency. The real bottleneck often lies in the investigation process.

Detection tools and techniques have evolved significantly with the use of machine learning methods, improving early threat detection. However, after a detection pops up, human analysts still typically step in to evaluate the alert, gather context, and determine whether it’s a true threat or a false alarm and why. If it is a threat, further investigation must be performed to understand the full scope of what may be a much larger problem. This phase, measured by the mean time to analysis, is critical for swift incident response.

Challenges with manual alert investigation:

  • Too many alerts
  • Alerts lack context
  • Cognitive load sits with analysts
  • Insufficient talent in the industry
  • Fierce competition for experienced analysts

For many organizations, investigation is where the struggle of efficiency intensifies. Analysts face overwhelming volumes of alerts, a lack of consolidated context, and the mental strain of juggling multiple systems. With a worldwide shortage of 4 million experienced level two and three SOC analysts, the cognitive burden placed on teams is immense, often leading to alert fatigue and missed threats.

Even with advanced systems in place, not all potential detections are investigated. In many cases, only a quarter of initial alerts are triaged (or analyzed). However, the issue runs deeper: triaging occurs after detection engineering and alert tuning, which often disable alerts that could potentially reveal true threats but are not accurate enough to justify the security team's time and effort. This means some potential threats slip through unnoticed.

Understanding alerts in the SOC: Stopping cyber incidents is hard

Let’s take a look at the cyber-attack lifecycle and the steps involved in detecting and stopping an attack:

First we need a trace of an attack…

The attack will produce some sort of digital trace. Novel attacks, insider threats, and attacker techniques such as living-off-the-land can make attacker activities extremely hard to distinguish.

A detection is created…

Then we have to detect the trace, for example some beaconing to a rare domain. The raising of these initial detection alerts underpins MTTD (mean time to detection). Reducing this initial unseen duration is where we have seen significant improvement with modern threat detection tools.

When it comes to threat detection, the possibilities are vast. Your initial lead could come from anything: an alert about unusual network activity, a potential known malware detection, or an odd email. Once that lead comes in, it’s up to your security team to investigate further and determine whether it is a legitimate threat or a false alarm, and what context lies behind the alert.

Investigation begins…

It doesn’t just stop at a detection. Typically, humans also need to look at the alert, investigate, understand, analyze, and conclude whether this is a genuine threat that needs a response. We normally measure this as MTTA (mean time to analysis).

Conducting the investigation effectively requires a high degree of skill and efficiency, as every second counts in mitigating potential damage. Security teams must analyze the available data, correlate it across multiple sources, and piece together the timeline of events to understand the full scope of the incident. This process involves navigating through vast amounts of information, identifying patterns, and discerning relevant details. All while managing the pressure of minimizing downtime and preventing further escalation.

Containment begins…

Once we confirm something as a threat, and the human team determines a response is required and understands the scope, we need to contain the incident. This is normally measured as MTTC (mean time to containment) and can be further split into immediate and more permanent measures.
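To make these metrics concrete, here is a toy sketch with invented timestamps for a single incident, showing how MTTD, MTTA, and MTTC relate to the stages above:

```python
from datetime import datetime

# Invented timestamps for one hypothetical incident (illustration only).
first_trace = datetime(2025, 1, 1, 9, 0)   # attacker activity begins
detected    = datetime(2025, 1, 1, 9, 20)  # first alert raised
analyzed    = datetime(2025, 1, 1, 11, 0)  # triage/investigation concludes
contained   = datetime(2025, 1, 1, 11, 30) # immediate containment applied

mttd = detected - first_trace   # time to detection (for this incident)
mtta = analyzed - detected      # time to analysis
mttc = contained - analyzed     # time to containment
print(mttd, mtta, mttc)  # 0:20:00 1:40:00 0:30:00
```

In this example the bulk of the incident lifecycle sits in the analysis window, which is exactly the phase this blog argues AI should target.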

For more about how AI-led solutions can help in the containment stage read here: Autonomous Response: Streamlining Cybersecurity and Business Operations

The challenge is not only in 1) detecting threats quickly, but also 2) triaging and investigating them rapidly and with precision, and 3) prioritizing the most critical findings to avoid missed opportunities. Effective investigation demands a combination of advanced tools, robust workflows, and the expertise to interpret and act on the insights they generate. Without these, organizations risk delaying critical containment and response efforts, leaving them vulnerable to greater impacts.

While there are further steps (remediation and, of course, complete recovery), here we will focus on investigation.

Developing an AI analyst: How Darktrace replicates human investigation

Darktrace has been working on understanding the investigative process of a skilled analyst since 2017. By conducting internal research between Darktrace expert SOC analysts and machine learning engineers, we developed a formalized understanding of investigative processes. This understanding formed the basis of a multi-layered AI system that systematically investigates data, taking advantage of the speed and breadth afforded by machine systems.

With this research we found that the investigative process often revolves around iterating three key steps: hypothesis creation, data collection, and results evaluation.

All these details are crucial for an analyst to determine the nature of a potential threat. They are likewise core components of our Cyber AI Analyst, which is integral across our product suite. In this way, Darktrace has been able to replicate the human-driven approach to investigating alerts at machine speed and scale.

Here’s how it works:

  • When an initial or third-party alert is triggered, the Cyber AI Analyst initiates a forensic investigation by building multiple hypotheses and gathering relevant data to confirm or refute the nature of suspicious activity, iterating as necessary, and continuously refining the original hypothesis as new data emerges throughout the investigation.
  • Using a combination of machine learning including supervised and unsupervised methods, NLP and graph theory to assess activity, this investigation engine conducts a deep analysis with incidents raised to the human team only when the behavior is deemed sufficiently concerning.
  • After classification, the incident information is organized and processed to generate the analysis summary, including the most important descriptive details, and priority classification, ensuring that critical alerts are prioritized for further action by the human-analyst team.
  • If the alert is deemed unimportant, the complete analysis process is made available to the human team so that they can see what investigation was performed and why this conclusion was drawn.

To illustrate this via example, if a laptop is beaconing to a rare domain, the Cyber AI Analyst would create hypotheses including whether this could be command and control traffic, data exfiltration, or something else. The AI analyst then collects data, analyzes it, makes decisions, iterates, and ultimately raises a new high-level incident alert describing and detailing its findings for human analysts to review and follow up.
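The hypothesize-collect-evaluate loop described above can be sketched in a few lines of Python. This is an illustration of the pattern only, not Darktrace's implementation; the hypothesis names and the scoring function are invented:

```python
# Toy evidence scores for each hypothesis (invented for illustration).
scores = {"c2-beaconing": 0.9, "data-exfiltration": 0.05, "benign": 0.02}

def investigate(alert, collect, evaluate, max_rounds=3):
    """Iterate hypothesis creation, data collection, and evaluation."""
    hypotheses = ["c2-beaconing", "data-exfiltration", "benign"]
    for _ in range(max_rounds):
        data = collect(alert, hypotheses)                       # gather context
        hypotheses = [h for h in hypotheses if evaluate(h, data) > 0.1]
        if len(hypotheses) <= 1:                                # converged
            break
    return hypotheses

surviving = investigate("rare-domain-beacon",
                        collect=lambda alert, hyps: scores,
                        evaluate=lambda hyp, data: data[hyp])
print(surviving)  # only the command-and-control hypothesis survives
```

Real investigations are far richer, but the structure is the same: unsupported hypotheses are pruned each round, and only sufficiently concerning conclusions are escalated to humans.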

Learn more about Darktrace's Cyber AI Analyst

  • Cost savings: Equivalent to adding up to 30 full-time Level 2 analysts without increasing headcount
  • Minimize business risk: Takes on the busy work from human analysts and elevates a team’s overall decision making
  • Improve security outcomes: Identifies subtle, sophisticated threats through holistic investigations

Unlocking an efficient SOC

To create a mature and proactive SOC, addressing the inefficiencies in the alert investigation process is essential. By extending AI's capabilities beyond detection, SOC teams can streamline and optimize investigative workflows, reducing alert fatigue and enhancing analyst efficiency.

This holistic approach not only improves Mean Time to Analysis (MTTA) but also ensures that SOCs are well-equipped to handle the evolving threat landscape. Embracing AI augmentation and automation in every phase of threat management will pave the way for a more resilient and proactive security posture, ultimately leading to a high-performing SOC that can effectively safeguard organizational assets.

Every relevant alert is investigated

The Cyber AI Analyst is not a generative AI system, nor an XDR or SIEM aggregator that simply prompts you on what to do next. It uses a multi-layered combination of many different specialized AI methods to investigate every relevant alert from across your enterprise, whether raised natively, by a third-party tool, or manually, operating at machine speed and scale. This also positively affects detection engineering and alert tuning, because it does not suffer from fatigue when presented with low-accuracy but potentially valuable alerts.

Retain and improve analyst skills

Transferring most analysis processes to AI systems can erode a team's skills if analysts do not maintain or build them, and if the AI does not explain its process. This can reduce the team's ability to challenge or build on AI results, and causes problems if the AI is ever unavailable. The Cyber AI Analyst, by revealing its investigation process, data gathering, and decisions, promotes and improves these skills. Its deep understanding of cyber incidents can also be used for skills training and incident response practice, by simulating incidents for security teams to handle.

Create time for cyber risk reduction

Human cybersecurity professionals excel in areas that require critical thinking, strategic planning, and nuanced decision-making. With alert fatigue minimized and investigations streamlined, your analysts can avoid the tedious data collection and analysis stages and instead focus on critical decision-making tasks such as implementing recovery actions and performing threat hunting.

Stay tuned for part 3/3

Part 3/3 in the Reimagine your SOC series explores the preventative security solutions market and effective risk management strategies.

Coming soon!


April 29, 2026

Darktrace Malware Analysis: Jenkins Honeypot Reveals Emerging Botnet Targeting Online Games

DDoS Botnet discovery

To observe adversary behavior in real time, Darktrace operates a global honeypot network known as “CloudyPots”, designed to capture malicious activity across a wide range of services, protocols, and cloud platforms. These honeypots provide valuable insights into the techniques, tools, and malware actively targeting internet‑facing infrastructure.

How attackers used a Jenkins honeypot to deploy the botnet

One such service honeypotted by Darktrace is Jenkins, a CI build system that allows developers to build code and run tests automatically. The instance of Jenkins in Darktrace’s honeypot is intentionally configured with a weak password, allowing attackers to obtain remote code execution on the service.

In one instance observed by Darktrace on March 18, 2026, a threat actor seemingly attempted to target Darktrace’s Jenkins honeypot to deploy a distributed denial-of-service (DDoS) botnet. Further analysis by Darktrace’s Threat Research team revealed the botnet was intended to specifically target video game servers.

How the Jenkins scriptText endpoint was used for remote code execution

The Jenkins build system features an endpoint named scriptText, which enables users to programmatically send new jobs, in the form of a Groovy script. Groovy is a programming language with similar syntax to Java and runs using the Java Virtual Machine (JVM). An attacker can abuse the scriptText endpoint to run a malicious script, achieving code execution on the victim host.

Figure 1: Request sent to the scriptText endpoint containing the malicious script.

The malicious script is sent using the form-data content type, which means the contents of the script are URL-encoded. This encoding can be reversed to recover the original script, as shown in Figure 2, where Darktrace analysts decoded the script using CyberChef.

Figure 2: The malicious script decoded using CyberChef.
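As a rough illustration of the encoding involved (the Groovy one-liner below is invented, not the observed payload), the round trip can be reproduced with Python's standard library, much as an analyst would do with CyberChef:

```python
from urllib.parse import urlencode, unquote_plus

# An invented, harmless Groovy one-liner standing in for the real payload.
groovy = 'println "id".execute().text'

# Submitting a form-data request URL-encodes the script parameter...
body = urlencode({"script": groovy})
print(body)

# ...and decoding the parameter value recovers the original script.
decoded = unquote_plus(body.split("=", 1)[1])
print(decoded)
```

The quotes, parentheses, and spaces are what get percent-encoded, which is why the raw request body in Figure 1 looks scrambled at first glance.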

What happens after Jenkins is compromised

As Jenkins can be deployed on both Microsoft Windows and Linux systems, the script includes separate branches to target each platform.

In the case of Windows, the script performs the following actions:

  • Downloads a payload from 103[.]177.110.202/w.exe and saves it to C:\Windows\Temp\update.dat.
  • Renames the “update.dat” file to “win_sys.exe” (within the same folder).
  • Runs the Unblock-File command to remove security restrictions typically applied to files downloaded from the internet.
  • Adds a firewall allow rule for TCP port 5444, which the payload uses for command-and-control (C2) communications.

On Linux systems, the script will instead use a Bash one-liner to download the payload from 103[.]177.110.202/bot_x64.exe to /tmp/bot and execute it.

Why this botnet uses a single IP for delivery and command and control

The IP 103[.]177.110.202 belongs to Webico Company Limited, specifically its Tino brand, a Vietnamese company that offers domain registrar services and server hosting. Geolocation data indicates that the IP is located in Ho Chi Minh City. Open-source intelligence (OSINT) analysis revealed multiple malicious associations tied to the IP [1].

Darktrace’s analysis found that the IP 103[.]177.110.202 is used for multiple stages of an attack, including spreading and initial access, delivering payloads, and C2 communication. This is an unusual combination, as many malware families separate their spreading servers from their C2 infrastructure. Typically, malware distribution activity results in a high volume of abuse complaints, which may result in server takedowns or service suspension by internet providers. Separate C2 infrastructure ensures that existing infections remain controllable even if the spreading server is disrupted.

How the malware evades detection and maintains persistence

Analysis of the Linux payload (bot_x64)

The sample begins by setting the environment variables BUILD_ID and JENKINS_NODE_COOKIE to “dontKillMe”. By default, Jenkins terminates long-running scripts after a defined timeout period; however, setting these variables to “dontKillMe” bypasses this check, allowing the script to continue running uninterrupted.

The script then performs several stealth behaviors to evade detection. First, it deletes the original executable from disk and then renames itself to resemble the legitimate kernel processes “ksoftirqd/0” or “kworker”, which are found on Linux installations by default. It then uses a double fork to daemonize itself, enabling it to run in the background, before redirecting standard input, standard output, and standard error to /dev/null, hiding any logging from the malware. Finally, the script creates a signal handler for signals such as SIGTERM, causing them to be ignored and making it harder to stop the process.

Figure 3: Stealth component of the main function.
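Two of the evasion steps described, the “dontKillMe” variables and the ignored termination signal, can be reproduced harmlessly in a short Python sketch. This illustrates the technique only; it is not the malware's code:

```python
import os
import signal

# Jenkins normally kills long-running jobs; these variables disable that check.
os.environ["BUILD_ID"] = "dontKillMe"
os.environ["JENKINS_NODE_COOKIE"] = "dontKillMe"

# Ignore SIGTERM so an ordinary `kill <pid>` no longer stops the process.
signal.signal(signal.SIGTERM, signal.SIG_IGN)

print(os.environ["BUILD_ID"])
print(signal.getsignal(signal.SIGTERM) is signal.SIG_IGN)
```

The double fork and the redirection of standard streams to /dev/null follow the classic Unix daemonization recipe, detaching the process from its controlling terminal so it survives after the Jenkins job ends.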

How the botnet communicates with command and control (C2)

The sample then connects to the C2 server and sends the detected architecture of the system on which the agent was installed. The malware then enters a loop to handle incoming commands.

The sample features two types of commands: utility commands used to manage the malware, and commands that trigger attacks. Three special commands are defined: “PING” (which replies with PONG as a keep-alive mechanism), “!stop” (which causes the malware to exit), and “!update” (which triggers the malware to download a new version from the C2 server and restart itself).

Figure 4: Initial connection to the C2 server.
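The command-handling loop can be sketched as a simple dispatcher. This is a hypothetical reconstruction based on the behavior described above, not decompiled code:

```python
def handle_command(cmd: str) -> str:
    # Hypothetical dispatcher mirroring the described C2 commands.
    if cmd == "PING":
        return "PONG"                  # keep-alive reply
    if cmd == "!stop":
        return "exit"                  # malware terminates
    if cmd == "!update":
        return "download-and-restart"  # fetch a new build from the C2 server
    return "attack-dispatch"           # remaining commands launch attacks

print(handle_command("PING"))
print(handle_command("!attack_udp 203.0.113.5 80 60"))
```

Anything that is not one of the three utility commands falls through to the attack dispatch path described in the next section.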

What DDoS attack techniques this botnet uses

The attack commands are described in the sections that follow.

Many of these commands invoke the same function despite appearing to be different attack techniques. For example, specialized attacks such as Cloudflare bypass (cfbypass, uam) use the exact same function as a standard HTTP attack. This may indicate the threat actor is attempting to make the botnet look more capable than it actually is, or it could suggest that these commands are placeholders for attack functionality that has yet to be implemented.

All the commands take three arguments: IP, port to attack, and the duration of the attack.

attack_udp and attack_udp_pps

The attack_udp and attack_udp_pps functions both use a basic loop and sendto system call to send UDP packets to the victim’s IP, either targeting a predetermined port or a random port. The attack_udp function sends packets with 1,450 bytes of data, aimed at bandwidth saturation, while the attack_udp_pps function sends smaller 64-byte packets. In both cases, the data body of the packet consists of entirely random data.

Figure 5: Code for the UDP attack method.
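The difference between the two UDP modes comes down to payload size. A harmless sketch of the payload construction (no packets are built beyond these buffers, and nothing is sent):

```python
import os

# attack_udp: large packets of random data, aimed at saturating bandwidth.
bandwidth_payload = os.urandom(1450)

# attack_udp_pps: small packets, aimed at maximizing packets-per-second.
pps_payload = os.urandom(64)

print(len(bandwidth_payload), len(pps_payload))  # 1450 64
```

1,450 bytes keeps each datagram just under a typical 1,500-byte Ethernet MTU, so every packet carries close to the maximum payload without fragmenting.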

attack_dayz

The attack_dayz function follows a similar structure to the attack_udp function; however, instead of sending random data, it will instead send a TSource Engine Query. This command is specific to Valve Source Engine servers and is designed to return a large volume of data about the targeted server. By repeatedly flooding this request, an attacker can exhaust the resources of a server using a comparatively small amount of data.

The Valve Source Engine server, also called the Source Dedicated Server, is a server developed by video game company Valve that enables multiplayer gameplay for titles built using the Source game engine, which is also developed by Valve. The Source engine is used in games such as Counter-Strike and Team Fortress 2. Curiously, the function attack_dayz appears to be named after another popular online multiplayer game, DayZ; however, DayZ does not use the Source engine, making it unclear why this name was chosen.

Figure 6: The code for the attack_dayz attack function.

attack_tcp_push

The attack_tcp_push function establishes a TCP socket with the non-blocking flag set, allowing it to rapidly call functions such as connect() and send() without waiting for their completion. For the duration of the attack, it enters a while loop in which it repeatedly connects to the victim, sends 1,024 bytes of random data, and then closes the connection. This process repeats until the attack duration ends. If the mode flag is set to 1, the function also configures the socket with TCP no-delay enabled, allowing for packets to be sent immediately without buffering, resulting in a higher packet rate and a more effective attack.

Figure 7: The code for the TCP attack function.
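The two socket options described, non-blocking mode and TCP no-delay, can be set from any language. A minimal Python sketch (the socket is created and configured but never connected):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setblocking(False)  # connect()/send() return immediately, without waiting
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # disable Nagle buffering

blocking = s.getblocking()
nodelay = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
s.close()
print(blocking, nodelay)
```

Non-blocking calls let the loop spin at full speed instead of stalling on each connection, and disabling Nagle's algorithm means each small write goes onto the wire immediately rather than being coalesced, exactly the effect the mode flag controls here.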

attack_http

Similar to attack_tcp_push, attack_http configures a socket with no-delay enabled and non-blocking set. After establishing the connection, it sends 64 HTTP GET requests before closing the socket.

Figure 8: The code for the HTTP attack function.

attack_special

The attack_special function creates a UDP socket and sets the port and payload based on the value of the mode flag:

  • Mode 0: Port 53 (DNS), sending a 10-byte malformed data packet.
  • Mode 1: Port 27015 (Valve Source Engine), sending the previously observed TSource Engine Query packet.
  • Mode 2: Port 123 (NTP), sending the start of an NTP control request.
Figure 9: The code for the attack_special function.
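The mode table can be summarized in a small lookup structure. This is a hypothetical reconstruction; the exact payload bytes for modes 0 and 2 are placeholders, since the analysis above does not reproduce them:

```python
# (port, payload) per attack_special mode; bytes for modes 0 and 2 are
# illustrative placeholders, not the sample's actual payloads.
SPECIAL_MODES = {
    0: (53,    b"\x00" * 10),                                 # DNS: 10-byte malformed packet
    1: (27015, b"\xff\xff\xff\xffTSource Engine Query\x00"),  # Valve Source query
    2: (123,   b"\x16"),                                      # start of an NTP control request
}

port, payload = SPECIAL_MODES[1]
print(port, len(payload))  # 27015 25
```

Port 27015 is the default Source Dedicated Server port, which is why mode 1 reuses the same TSource Engine Query seen in attack_dayz.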

What this botnet reveals about opportunistic attacks on internet-facing systems

Jenkins is one of the less frequently exploited services honeypotted by Darktrace, with only a handful of campaigns observed. Nonetheless, the emergence of this new DDoS botnet demonstrates that attackers continue to opportunistically exploit any internet-facing misconfiguration at scale to grow their botnets.

While the hosts most commonly affected by these opportunistic attacks are usually “lower-value” systems, this distinction is largely irrelevant for botnets, where numbers alone matter most to overall effectiveness.

The presence of game-specific DoS techniques further highlights that the gaming industry continues to be extensively targeted by cyber attackers, with Cloudflare reporting it as the fourth most targeted industry [2]. This botnet has likely already been used against game servers, serving as a reminder for server operators to ensure appropriate mitigations are in place.

Credit to Nathaniel Bill (Malware Research Engineer)
Edited by Ryan Traill (Content Manager)

Indicators of Compromise (IoCs)

103[.]177.110.202 - Attacker and command-and-control IP

F79d05065a2ba7937b8781e69b5859d78d5f65f01fb291ae27d28277a5e37f9b – bot_x64

References

[1] https://www.virustotal.com/gui/url/86db2530298e6335d3ecc66c2818cfbd0a6b11fcdfcb75f575b9fcce1faa00f1/detection

[2] https://blog.cloudflare.com/ddos-threat-report-2025-q4/


April 28, 2026

State of AI Cybersecurity 2026: 87% of security professionals are seeing more AI-driven threats, but few feel ready to stop them

The findings in this blog are taken from Darktrace’s annual State of AI Cybersecurity Report 2026.

In part 1 of this blog series, we explored how AI is remaking the attack surface, with new tools, models, agents — and vulnerabilities — popping up just about everywhere. Now embedded in workflows across the enterprise, and often with far-reaching access to sensitive data, AI systems are quickly becoming a favorite target of cyber threat actors.

Among bad actors, though, AI is more often used as a tool than a target. Nearly 62% of organizations experienced a social engineering attack involving a deepfake, or an incident in which bad actors used AI-generated video or audio to try to trick a biometric authentication system, compared to 32% that reported an AI prompt injection attack.

In the hands of attackers, AI can do many things. It’s being used across the entire kill chain: to supercharge reconnaissance, personalize phishing, accelerate lateral movement, and automate data exfiltration. Evidence from Anthropic demonstrates that threat actors have harnessed AI to orchestrate an entire cyber espionage campaign from end to end, allegedly running it with minimal human involvement.

CISOs inhabit a world where these increasingly sophisticated attacks are ubiquitous. Naturally, combatting AI-powered threats is top of mind among security professionals, but many worry about whether their capabilities are up to the challenge.

AI-powered threats at scale: no longer hypothetical

AI-driven threats share signature characteristics. They operate at speed and scale. Automated tools can probe multiple attack paths, search for multiple vulnerabilities and send out a barrage of phishing emails, all within seconds. The ability to attack everywhere at once, at a pace that no human operator could sustain, is the hallmark of an AI-powered threat. AI-powered threats are also dynamic. They can adapt their behavior to spread across a network more efficiently or rewrite their own code to evade detection.

Security teams are seeing the signs that they’re fighting AI-powered threats at every stage of the kill chain, and the sophistication of these threats is testing their resolve and their resources.

  • 73% say that AI-powered cyber threats are having a significant impact on their organization
  • 92% agree that these threats are forcing them to upgrade their defenses
  • 87% agree that AI is significantly increasing the sophistication and success rate of malware
  • 87% say AI is significantly increasing the workload of their security operations team

These teams now confront a challenge unlike anything they’ve seen before in their careers, and the risks are compounding across workflows, tools, data, and identities. It’s no surprise that 66% of security professionals say their role is more stressful today than it was five years ago, or that 47% report feeling overwhelmed at work.

Up all night: Security professionals’ worry list is long

Traditional security methods were never built to handle the complexity and subtlety of AI-driven behavior. Working in the trenches, defenders have deep firsthand experience of how difficult it can be to detect and stop AI-assisted threats.

Increasingly effective social engineering attacks are among their top concerns. 50% of security leaders mentioned hyper-personalized phishing campaigns as one of their biggest worries, while 40% voiced apprehension about deepfake voice fraud. These concerns are legitimate: AI-generated phishing emails are increasingly tailored to individual organizations, business activities, or individuals. Gone are the telltale signs – like grammar or spelling mistakes – that once distinguished malicious communications. Notably, 33% of the malicious emails Darktrace observed in 2025 contained over 1,000 characters, indicating probable LLM usage.

Security leaders also worry about how bad actors can leverage AI to make attacks even faster and more dynamic. 45% listed automated vulnerability scanning and exploit chaining among their biggest concerns, while 40% mentioned adaptive malware.

Confidence is lacking

Protecting against AI demands capabilities that many organizations have not yet built. It requires interpreting new indicators, uncovering the subtle intent within interactions, and recognizing when AI behavior – human or machine – could be suspicious. Leaders know that their current tools aren’t prepared for this. Nearly half don’t feel confident in their ability to defend against AI-powered attacks.

We’ve asked participants in our survey about their confidence for the last three years. In 2024, 60% said their organizations were not adequately prepared to defend against AI-driven threats. Last year, that percentage shrank to 45%, a possible indicator that security programs were making progress. Since then, however, the progress has apparently stalled: 46% of security leaders now feel inadequately prepared to protect their organizations amidst the current threat landscape.

Some of these differences are accentuated across different cultures. Respondents in Japan are far less confident (77% say they are not adequately prepared) than respondents in Brazil (where only 21% don’t feel prepared).

Where security programs are falling short

It’s no longer the case that cybersecurity is overlooked or underfunded by executive leadership. Across industries, management recognizes that AI-powered threats are a growing problem, and insufficient budget is near the bottom of most CISOs’ lists of reasons that they struggle to defend against AI-powered threats.

It’s the things that money can’t buy – experience, knowledge, and confidence – that are holding programs back. Near the top of the list of inhibitors that survey participants mention is “insufficient knowledge or use of AI-driven countermeasures.” As bad actors embrace AI technologies en masse, this challenge is coming into clearer focus: attack-centric security tools, which rely on static rules, signatures, and historical attack patterns, were never designed to handle the complexity and subtlety of AI-driven attacks. These challenges feel new to security teams, but they are the core problems Darktrace was built to solve.  

Our Self-Learning AI develops a deep understanding of what “normal” looks like for your organization – including unique traffic patterns, end-user habits, and application and device profiles – so that it can detect and stop novel, dynamic threats at the first encounter. By focusing on learning the business, rather than the attack, our AI can keep pace with AI-powered threats as they evolve.

Explore the full State of AI Cybersecurity 2026 report for deeper insights into how security leaders are responding to AI-driven risks.

Learn more about securing AI in your enterprise.


About the author
The Darktrace Community