Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by Carlos Gray, Senior Product Marketing Manager, Email
21 May 2024
The problem: Microsoft Teams phishing attacks are on the rise
Around 83% of Fortune 500 companies rely on Microsoft Office products and services [1], with Microsoft Teams and Microsoft SharePoint in particular emerging as critical platforms for the business operations of the everyday workplace. Researchers across the threat landscape have begun to observe these legitimate services being increasingly leveraged by malicious actors as an initial access method.
As Teams becomes a more prominent feature of the workplace, many employees rely on it for daily internal and external communication, even surpassing email usage in some organizations. As Microsoft states [2], "Teams changes your relationship with email. When your whole group is working in Teams, it means you'll all get fewer emails. And you'll spend less time in your inbox, because you'll use Teams for more of your conversations."
However, Teams can be exploited to send targeted phishing messages to individuals either internally or externally, while appearing legitimate and safe. Users might receive an external message request from a Teams account claiming to be an IT support service or otherwise affiliated with the organization. Once a user has accepted, the threat actor can launch a social engineering campaign or deliver a malicious payload. Because Teams is primarily an internal tool, there is naturally less training and security awareness around it – the channel is assumed to be a trusted source, meaning that social engineering is already one step ahead.
Figure 1: Screenshot of a Microsoft Teams message request from a Midnight Blizzard-controlled account (courtesy of Microsoft)
Microsoft Teams Phishing Examples
Microsoft has identified several major phishing attacks using Teams within the past year.
In July 2023, Microsoft announced that the threat actor known as Midnight Blizzard – identified by the United States as a Russian state-sponsored group – had launched a series of phishing campaigns via Teams with the aim of stealing user credentials. These attacks used previously compromised Microsoft 365 accounts and set up new domain names that impersonated legitimate IT support organizations. The threat actors then used social engineering tactics to trick targeted users into sharing their credentials via Teams, enabling them to access sensitive data.
At a similar time, threat actor Storm-0324 was observed sending phishing lures via Teams containing links to malicious SharePoint-hosted files. The group targeted organizations that allow Teams users to interact and share files externally. Storm-0324’s goal is to gain initial access to hand over to other threat actors to pursue more dangerous follow-on attacks like ransomware.
The market: Existing Microsoft Teams security solutions are insufficient
Microsoft's native Teams security focuses on payloads, namely links and attachments, as the principal malicious component of any phishing attack. These payloads are relatively straightforward for Microsoft to detect given its experience in anti-virus, sandboxing, and IOCs. However, this approach cannot intervene before the stage at which payloads are delivered – before the user even gets the chance to accept or deny an external message request. At the same time, it risks missing more subtle threats that don't include attachments or links – like early-stage phishing, which is pure social engineering – or completely new payloads.
Equally, the market offering for Teams security is limited. Security solutions available on the market are invariably payload-focused, rather than taking into account the content and context in which a link or attachment is sent. They fail to answer questions like:
Does it make sense for these two accounts to speak to each other?
Are there any linguistic indicators of inducement?
Furthermore, they do not correlate with email to track threats across multiple communication environments, which could signal a wider campaign. Effectively, other market solutions aren't adding extra value – they are protecting against the same types of threats that Microsoft is already covering by default.
The other aspect of Teams security that native and market solutions fail to address is the account itself. As well as focusing on Teams threats, it’s important to analyze messages to understand the normal mode of communication for a user, and spot when a user’s Teams activity might signal account takeover.
The solution: How Darktrace protects Microsoft Teams against sophisticated threats
With its biggest update to Darktrace/Email ever, Darktrace now offers support for Microsoft Teams. With that, we are bringing the same AI philosophy that protects your email and accounts to your messaging environment.
Our Self-Learning AI looks at content and context for every communication, whether that’s sent in an email or Teams message. It looks at actual user behavior, including language patterns, relationship history of sender and recipient, tone and payloads, to understand if a message poses a threat. This approach allows Darktrace to detect threats such as social engineering and payloadless attacks using visibility and forensic capabilities that Microsoft security doesn’t currently offer, as well as early symptoms of account compromise.
Unlike market solutions, Darktrace doesn’t offer a siloed approach to Teams security. Data and signals from Teams are shared across email to inform detection, and also with the wider Darktrace ActiveAI security platform. By correlating information from email and Teams with network and apps security, Darktrace is able to better identify suspicious Teams activity and vice versa.
Interested in the other ways Darktrace/Email augments threat detection? Read our latest blog on how improving the quality of end-user reporting can decrease the burden on the SOC. To find out more about Darktrace's enduring partnership with Microsoft, click here.
Beyond MFA: Detecting Adversary-in-the-Middle Attacks and Phishing with Darktrace
What is an Adversary-in-the-Middle (AiTM) attack?
Adversary-in-the-Middle (AiTM) attacks are a sophisticated technique often paired with phishing campaigns to steal user credentials. Unlike traditional phishing, which multi-factor authentication (MFA) increasingly mitigates, AiTM attacks leverage reverse proxy servers to intercept authentication tokens and session cookies. This allows attackers to bypass MFA entirely and hijack active sessions, stealthily maintaining access without repeated logins.
This blog examines a real-world incident detected during a Darktrace customer trial, highlighting how Darktrace / EMAIL and Darktrace / IDENTITY identified the emerging compromise in a customer's email and software-as-a-service (SaaS) environment, tracked its progression, and could have intervened at critical moments to contain the threat had Darktrace's Autonomous Response capability been enabled.
What does an AiTM attack look like?
Inbound phishing email
Attacks typically begin with a phishing email, often originating from the compromised account of a known contact like a vendor or business partner. These emails often contain malicious links or attachments leading to fake login pages that spoof legitimate platforms, like Microsoft 365, and are designed to harvest user credentials.
Proxy-based credential theft and session hijacking
When a user clicks on a malicious link, they are redirected through an attacker-controlled proxy that impersonates legitimate services. This proxy forwards login requests to Microsoft, making the login page appear legitimate. After the user successfully completes MFA, the attacker captures credentials and session tokens, enabling full account takeover without the need for reauthentication.
Follow-on attacks
Once inside, attackers will typically establish persistence by creating email rules or registering OAuth applications. From there, they often act on their objectives, exfiltrating sensitive data and launching additional business email compromise (BEC) campaigns. These campaigns can include fraudulent payment requests to external contacts or internal phishing designed to compromise more accounts and enable lateral movement across the organization.
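As a complementary check outside of any specific vendor tooling, defenders can periodically review newly registered OAuth applications in their tenant for signs of this persistence technique. The following Python sketch is illustrative only: it assumes an app-only Microsoft Graph token with Application.Read.All is already available in a GRAPH_TOKEN environment variable, and the seven-day review window is an arbitrary example.

```python
import os
from datetime import datetime, timedelta, timezone

import requests

GRAPH = "https://graph.microsoft.com/v1.0"
# Assumes an app-only token (Application.Read.All) obtained elsewhere, e.g. via MSAL.
headers = {"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"}

cutoff = datetime.now(timezone.utc) - timedelta(days=7)  # arbitrary example review window

url = f"{GRAPH}/applications?$select=displayName,appId,createdDateTime"
while url:
    resp = requests.get(url, headers=headers, timeout=30)
    resp.raise_for_status()
    data = resp.json()
    for app in data.get("value", []):
        created_raw = app.get("createdDateTime")
        if not created_raw:
            continue
        created = datetime.fromisoformat(created_raw.replace("Z", "+00:00"))
        if created >= cutoff:
            # Recently registered applications warrant manual review during an incident.
            print(f"Recently registered app: {app.get('displayName')} "
                  f"({app.get('appId')}) created {created_raw}")
    url = data.get("@odata.nextLink")  # follow pagination if more pages exist
```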
Darktrace’s detection of an AiTM attack
At the end of September 2025, Darktrace detected one such example of an AiTM attack on the network of a customer trialling Darktrace / EMAIL and Darktrace / IDENTITY.
In this instance, the first indicator of compromise observed by Darktrace was the creation of a malicious email rule on one of the customer’s Office 365 accounts, suggesting the account had likely already been compromised before Darktrace was deployed for the trial.
Darktrace / IDENTITY observed the account creating a new email rule with a randomly generated name, likely intended to hide its presence from the legitimate account owner. The rule marked all inbound emails as read and deleted them, while ignoring any existing mail rules on the account. This rule was likely intended to conceal from the legitimate account owner any replies to the malicious emails the attacker had sent, and to facilitate further phishing attempts.
Figure 1: Darktrace’s detection of the anomalous email rule creation.
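Hidden inbox rules of this kind can also be reviewed manually via the Microsoft Graph messageRules endpoint. The sketch below is a minimal illustration, not how Darktrace performs this detection; it assumes a Graph token with MailboxSettings.Read is available in GRAPH_TOKEN and flags rules that both mark messages as read and delete them, the combination described above.

```python
import os

import requests

GRAPH = "https://graph.microsoft.com/v1.0"
# Assumes a Graph token with MailboxSettings.Read, e.g. obtained via MSAL.
headers = {"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"}

def suspicious_inbox_rules(user_principal_name: str) -> list:
    """Return inbox rules that silently hide mail (mark as read + delete)."""
    url = f"{GRAPH}/users/{user_principal_name}/mailFolders/inbox/messageRules"
    resp = requests.get(url, headers=headers, timeout=30)
    resp.raise_for_status()
    flagged = []
    for rule in resp.json().get("value", []):
        actions = rule.get("actions") or {}
        if actions.get("markAsRead") and actions.get("delete"):
            flagged.append({"name": rule.get("displayName"), "id": rule.get("id")})
    return flagged

# Example: review a single mailbox (hypothetical user principal name).
for rule in suspicious_inbox_rules("jane.doe@example.com"):
    print(f"Suspicious rule '{rule['name']}' ({rule['id']}) - review and remove if unauthorized")
```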
Internal and external phishing
Following the creation of the email rule, Darktrace / EMAIL observed a surge of suspicious activity on the user’s account. The account sent emails with subject lines referencing payment information to over 9,000 different external recipients within just one hour. Darktrace also identified that these emails contained a link to an unusual Google Drive endpoint, embedded in the text “download order and invoice”.
Figure 2: Darktrace's detection of an unusual surge in outbound emails containing suspicious content, shortly following the creation of a new email rule.
Figure 3: Darktrace / EMAIL’s detection of the compromised account sending over 9,000 external phishing emails, containing an unusual Google Drive link.
As Darktrace / EMAIL flagged the message with the ‘Compromise Indicators’ tag (Figure 2), it would have been held automatically if the customer had enabled default Data Loss Prevention (DLP) Action Flows in their email environment, preventing any external phishing attempts.
Figure 4: Darktrace / EMAIL’s preview of the email sent by the offending account.
Darktrace analysis revealed that, after clicking the malicious link in the email, recipients would be redirected to a convincing landing page that closely mimicked the customer's legitimate branding, including authentic imagery and logos, where they were prompted to download a PDF named "invoice".
Figure 5: Download and login prompts presented to recipients after following the malicious email link, shown here in safe view.
After clicking the “Download” button, users would be prompted to enter their company credentials on a page that was likely a credential-harvesting tool, designed to steal corporate login details and enable further compromise of SaaS and email accounts.
Darktrace’s Response
In this case, Darktrace's Autonomous Response was not fully enabled across the customer's email or SaaS environments, allowing the compromise to progress as observed here.
Despite this, Darktrace / EMAIL's successful detection of the malicious Google Drive link in the internal phishing emails prompted it to suggest 'Lock Link' as a recommended action for the customer's security team to apply manually. This action would have placed the malicious link behind a warning or screening page, blocking users from visiting it.
Figure 6: Autonomous Response suggesting locking the malicious Google Drive link sent in internal phishing emails.
Furthermore, if active in the customer’s SaaS environment, Darktrace would likely have been able to mitigate the threat even earlier, at the point of the first unusual activity: the creation of a new email rule. Mitigative actions would have included forcing the user to log out, terminating any active sessions, and disabling the account.
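For teams without Autonomous Response, the equivalent manual containment steps can be scripted against Microsoft Graph. The sketch below is a hedged illustration rather than Darktrace's own mechanism: it assumes an app-only token with suitable permissions (e.g., User.RevokeSessions.All and User.ReadWrite.All) in GRAPH_TOKEN, revokes the user's sessions, and disables the account.

```python
import os

import requests

GRAPH = "https://graph.microsoft.com/v1.0"
# Assumes an app-only token with e.g. User.RevokeSessions.All and User.ReadWrite.All.
headers = {"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"}

def contain_account(user_principal_name: str) -> None:
    """Revoke active sessions and disable a compromised account."""
    # Invalidate refresh tokens and browser session cookies so stolen tokens stop working.
    revoke = requests.post(
        f"{GRAPH}/users/{user_principal_name}/revokeSignInSessions",
        headers=headers,
        timeout=30,
    )
    revoke.raise_for_status()

    # Block further sign-ins entirely until the incident is resolved.
    disable = requests.patch(
        f"{GRAPH}/users/{user_principal_name}",
        headers={**headers, "Content-Type": "application/json"},
        json={"accountEnabled": False},
        timeout=30,
    )
    disable.raise_for_status()
    print(f"Sessions revoked and account disabled for {user_principal_name}")

# Example with a hypothetical user principal name:
contain_account("compromised.user@example.com")
```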
Conclusion
AiTM attacks represent a significant evolution in credential theft techniques, enabling attackers to bypass MFA and hijack active sessions through reverse proxy infrastructure. In the real-world case we explored, Darktrace’s AI-driven detection identified multiple stages of the attack, from anomalous email rule creation to suspicious internal email activity, demonstrating how Autonomous Response could have contained the threat before escalation.
MFA is a critical security measure, but it is no longer a silver bullet. Attackers are increasingly targeting session tokens rather than passwords, exploiting trusted SaaS environments and internal communications to remain undetected. Behavioral AI provides a vital layer of defense by spotting subtle anomalies that traditional tools often miss.
Security teams must move beyond static defenses and embrace adaptive, AI-driven solutions that can detect and respond in real time. Regularly review SaaS configurations, enforce conditional access policies, and deploy technologies that understand “normal” behavior to stop attackers before they succeed.
Credit to David Ison (Cyber Analyst), Bertille Pierron (Solutions Engineer), Ryan Traill (Analyst Content Lead)
React2Shell: How Opportunist Attackers Exploited CVE-2025-55182 Within Hours
What is React2Shell?
CVE-2025-55182, also known as React2Shell, is a vulnerability in React Server Components that allows an unauthenticated attacker to gain remote code execution with a single request. The severity of this vulnerability and its ease of exploitation led threat actors to opportunistically exploit it within days of its public disclosure.
Darktrace security researchers rapidly deployed a new honeypot using the Cloudypots system, allowing them to monitor exploitation of the vulnerability in the wild.
Cloudypots is a system that enables virtual instances of vulnerable applications to be deployed in the cloud and monitored for attack. This approach allows Darktrace to deploy high-interaction, realistic honeypots that appear to attackers as genuine deployments of vulnerable software.
This blog will explore one such campaign, nicknamed “Nuts & Bolts” based on the naming used in payloads.
Analysis of the React2Shell exploit
The React2Shell exploit relies on an insecure deserialization vulnerability within React Server Components’ “Flight” protocol. This protocol uses a custom serialization scheme that security researchers discovered could be abused to run arbitrary JavaScript by crafting the serialized data in a specific way. This is possible because the framework did not perform proper type checking, allowing an attacker to reference types that can be abused to craft a chain that resolves to an anonymous function, and then invoke it with the desired JavaScript as a promise chain.
This code execution can then be used to load the ‘child_process’ node module and execute any command on the target server.
The vulnerability was discovered on December 3, 2025, with a patch made available on the same day [1]. Within 30 hours of the patch, a publicly available proof of concept emerged that could be used to exploit any vulnerable server. This rapid timeline meant many servers remained unpatched by the time attackers began actively exploiting the vulnerability.
Initial access
The threat actor behind the "Nuts & Bolts" campaign uses a spreader server with IP 95.214.52[.]170 to infect victims. The IP appears to be located in Poland and is associated with a hosting provider known as MEVSPACE. The spreader is highly aggressive, launching exploitation attempts roughly every hour.
When scanning, the spreader primarily targets port 3000, the default port for a Next.js server in a default or development configuration. It is possible the attacker is avoiding ports 80 and 443, as these are more likely to sit behind reverse proxies or WAFs, which could disrupt exploitation attempts.
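Defenders can quickly audit their own estate for this exposure. The sketch below is a simple, illustrative Python check against hosts you own and are authorized to test; the hostnames are placeholders and the port list mirrors the defaults discussed above.

```python
import socket

# Hosts you own and are authorized to check (placeholder names).
ASSETS = ["app1.example.com", "app2.example.com"]
# Default Next.js development port plus the neighbouring port seen in this campaign.
PORTS = [3000, 3001]

def is_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in ASSETS:
    for port in PORTS:
        if is_open(host, port):
            print(f"WARNING: {host}:{port} is reachable - confirm it is patched or firewalled")
```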
When the spreader finds a new host with port 3000 open, it begins by testing if it is vulnerable to React2Shell by sending a crafted request to run the ‘whoami’ command and store the output in an error digest that is returned to the attacker.
The above snippet is the core part of the crafted request that performs the execution. This allows the attacker to confirm that the server is vulnerable and fetch the user account under which the Next.js process is running, which is useful information for determining if a target is worth attacking.
From here, the attacker then sends an additional request to run the actual payload on the victim server.
This snippet attempts to download several payloads into the /dev directory using wget (or curl if wget fails) and execute them. The x86 binary is a Mirai variant that does not appear to have any major alterations from regular Mirai. The 'nuts/bolts' endpoint returns a bash script, which is then executed. The script includes several log statements throughout its execution to provide visibility into which parts ran successfully. Similar to the 'whoami' request, the output is placed in an error digest for the attacker to review.
In this case, the command-and-control (C2) IP, 89[.]144.31.18, is hosted on a different server operated by a German hosting provider named myPrepaidServer, which offers virtual private server (VPS) services and accepts cryptocurrency payments [2].
Figure 1: Logs observed in the Next.js console as a result of exploitation. In this case, the honeypot was attacked just two minutes after being deployed.
Nuts & Bolts script
This script’s primary purpose is to prepare the box for a cryptocurrency miner.
The script starts by attempting to terminate any competing cryptocurrency miner processes using 'pkill', matching on specific names. It will check for and terminate:
xmrig
softirq (this also matches a system process, which it will fail to kill on each invocation)
watcher
/tmp/a.sh
health.sh
Following this, the script checks for a process named "fghgf". If it is not running, it retrieves hxxp://89[.]144.31.18/nuts/lc and writes it to /dev/ijnegrrinje.json, then retrieves hxxp://89[.]144.31.18/nuts/x and writes it to /dev/fghgf. The script then executes /dev/fghgf -c /dev/ijnegrrinje.json -B in the background, which launches an XMRig miner.
Figure 2: The XMRig deployment script.
The miner is configured to connect to two private pools at 37[.]114.37.94 and 37[.]114.37.82, using "poop" as both the username and password. The use of a private pool conceals the associated wallet address. From here, a short bash script is dropped to /dev/stink.sh. This script continuously crawls all running processes on the system and reads their /proc/<pid>/exe path, which points to the original executable that was run. The 'strings' utility is run against that binary to output all valid ASCII strings, and the script checks whether the output contains "xmrig", "rondo" or "UPX 5". If so, it sends a SIGKILL to the process to terminate it.
Additionally, it will run 'ls -l' on the exe path in case it is symlinked to a specific path or has been deleted. If the output contains any of the following strings, the script sends a SIGKILL to terminate the program:
(deleted) - Indicates that the original executable was deleted from the disk, a common tactic used by malware to evade detection.
xmrig
hash
watcher
/dev/a
softirq
rondo
UPX 5.02
Figure 3: The killer loop and the dropper. In this case ${R}/${K} resolves to /dev/stink.sh.
Darktrace observations in customer environments
Following the public disclosure of CVE-2025-55182 on December 3, Darktrace observed multiple exploitation attempts across customer environments beginning around December 4. Darktrace triage identified a series of consistent indicators of compromise (IoCs). By consolidating indicators across multiple deployments and repeat infrastructure clusters, Darktrace identified a consistent kill chain involving shell-script downloads and HTTP beaconing.
In one example, on December 5, Darktrace observed external connections to malicious IoC endpoints (172.245.5[.]61:38085, 5.255.121[.]141, 193.34.213[.]15), followed by additional connections to other potentially malicious endpoints. These appeared related to the IoCs detailed above, as one suspicious IP address shared the same ASN. After this suspicious external connectivity, Darktrace observed cryptomining-related activity. A few hours later, the device initiated potential lateral movement activity, attempting SMB and RDP sessions with other internal devices on the network. This chain of events suggests the activity was related to the malicious campaign exploiting the React2Shell vulnerability.
Generally, outbound HTTP traffic was observed to ports in the range of 3000–3011, most notably port 3001. Requests frequently originated from scripted tools, with user agents such as curl/7.76.1, curl/8.5.0, Wget/1.21.4, and other generic HTTP signatures. The URIs associated with these requests included paths like /nuts/x86 and /n2/x86, as well as long, randomized shell script names such as /gfdsgsdfhfsd_ghsfdgsfdgsdfg.sh. In some cases, parameterized loaders were observed, using query strings like: /?h=<ip>&p=<port>&t=<proto>&a=l64&stage=true.
Infrastructure analysis revealed repeated callbacks to IP-only hosts linked to ASN AS200593 (Prospero OOO), a well-known “bulletproof” hosting provider often utilized by cyber criminals [3], including addresses such as 193.24.123[.]68:3001 and 91.215.85[.]42:3000, alongside other nodes hosting payloads and staging content.
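These indicators lend themselves to straightforward retrospective hunting over HTTP or proxy logs. The Python sketch below is illustrative only: it assumes logs exported as JSON lines with dest_ip, dest_port, user_agent, and uri fields (field names are placeholders, not a specific product's schema) and matches them against the indicators reported above.

```python
import json
import sys

# Indicators drawn from the activity described above (defanged in the prose, plain here for matching).
IOC_IPS = {
    "95.214.52.170", "89.144.31.18",                      # spreader and C2 infrastructure
    "172.245.5.61", "5.255.121.141", "193.34.213.15",     # endpoints observed on December 5
    "193.24.123.68", "91.215.85.42",                      # AS200593 (Prospero OOO) callbacks
}
SUSPECT_PORTS = set(range(3000, 3012))                    # 3000-3011, most notably 3001
SUSPECT_URI_PARTS = ("/nuts/", "/n2/")
SUSPECT_AGENTS = ("curl/", "Wget/")

def is_suspicious(event: dict) -> bool:
    """Flag HTTP events matching the React2Shell campaign indicators."""
    if event.get("dest_ip") in IOC_IPS:
        return True
    port_match = event.get("dest_port") in SUSPECT_PORTS
    scripted = str(event.get("user_agent", "")).startswith(SUSPECT_AGENTS)
    uri_match = any(part in str(event.get("uri", "")) for part in SUSPECT_URI_PARTS)
    # Require the unusual port plus at least one behavioural indicator to reduce noise.
    return port_match and (scripted or uri_match)

# Usage: python hunt.py < http_events.jsonl
for line in sys.stdin:
    line = line.strip()
    if not line:
        continue
    event = json.loads(line)
    if is_suspicious(event):
        print(json.dumps(event))
```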
Darktrace model coverage
Darktrace model coverage consistently highlighted behaviors indicative of exploitation. Among the most frequent detections were anomalous server activity on new, non-standard ports and HTTP requests posted to IP addresses without hostnames, often using uncommon application protocols. Models also flagged the appearance of new user agents such as curl and wget originating from internet-facing systems, representing an unusual deviation from baseline behavior.
Additionally, observed activity included the download of scripts and executable files from rare external sources, with Darktrace’s Autonomous Response capability intervening to block suspicious transfers, when enabled. Beaconing patterns were another strong signal, with detections for HTTP beaconing to new or rare IP addresses, sustained SSL or HTTP increases, and long-running compromise indicators such as “Beacon for 4 Days” and “Slow Beaconing.”
Conclusion
While this opportunistic campaign exploiting the React2Shell vulnerability is not particularly sophisticated, it demonstrates that attackers can rapidly prototype new methods to take advantage of novel vulnerabilities before widespread patching occurs. With a time to infection of only two minutes from the initial deployment of the honeypot, this serves as a clear reminder that applying patches as soon as they are released is paramount.
Credit to Nathaniel Bill (Malware Research Engineer), George Kim (Analyst Consulting Lead – AMS), Calum Hall (Technical Content Researcher), Tara Gould (Malware Research Lead), and Signe Zaharka (Principal Cyber Analyst).