Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
George Kim
Analyst Consulting Lead – AMS
23 Oct 2022
Two of the most popular ways threat actors send malicious emails are spoofing and impersonation. While spoofed emails are sent on behalf of a trusted domain and obscure the true source of the sender, impersonation emails come from a fake domain, but one that may be visually mistaken for an authentic one. To identify impersonation tactics in a suspicious email, we should first ask why an attacker might choose an impersonation approach over spoofing.
In contrast to domain spoofing, which lacks validation and can be readily detected by email security gateway software, impersonation with a lookalike domain allows attackers to send emails with full SPF and DKIM validation, making them appear legitimate to many security gateways. This blog will explore impersonation tactics and how Darktrace/Email protects against them.
There are two distinct ways to leverage impersonation tactics:
1. Impersonating the domain
2. Impersonating a real user from that domain
Domain impersonation is often implemented with the use of ‘confusable characters’. This involves misspelling through character substitutions that make the domain look as visually similar to the original as possible (e.g. ‘m’ and ‘rn’, ‘o’ and ‘0’, ‘l’ and ‘I’). Threat actors can then also impersonate a real user by attaching the personal field of that user’s email to the new, malicious domain. Comparing impersonation emails with legitimate emails highlights how similar these malicious email addresses are to the real thing (Figure 1).
Figure 1- Email log that highlights the impersonated emails from “Mike Lewis” from the domain “smartercornmerce[.]net”. Along with the impersonated domain, the attackers attempt to impersonate the known user, “Mike Lewis”, as well. The use of both distinct types of impersonation categorizes the email as what Darktrace/Email refers to as a Double Impersonation email.
Figure 2- Email Summary details of one of the malicious double impersonation emails that was sent by the impersonated sender, “Mike Lewis” from “smartercornmerce[.]net”, that highlights the various anomaly indicators that Darktrace/Email detected, as well the various tags and actions it applied.
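The ‘confusable character’ substitutions described above can be approximated in code by mapping lookalike sequences back to a canonical form and comparing the results. A minimal, illustrative sketch follows; the substitution table and helper names are invented for this post, not part of any Darktrace product, and real systems use far larger confusables tables (e.g. Unicode TS #39):

```python
# Illustrative sketch of confusable-character normalisation for lookalike
# domains. The CONFUSABLES table is a tiny invented subset; production
# systems use comprehensive Unicode confusables data.

# Map common visual substitutions back to a canonical form.
CONFUSABLES = {
    "rn": "m",   # 'r' + 'n' renders like 'm'
    "0": "o",
    "1": "l",
    "vv": "w",
}

def skeleton(domain: str) -> str:
    """Reduce a domain to a canonical 'visual skeleton'."""
    s = domain.lower()
    for lookalike, canonical in CONFUSABLES.items():
        s = s.replace(lookalike, canonical)
    return s

def looks_like(candidate: str, trusted: str) -> bool:
    """True if candidate renders visually like the trusted domain
    but is not actually that domain."""
    return skeleton(candidate) == skeleton(trusted) and candidate != trusted

print(looks_like("smartercornmerce.net", "smartercommerce.net"))  # True
```

Under this scheme the impersonation domain from Figure 1 collapses to the same visual skeleton as the legitimate one, which is precisely why it fools human readers.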
Darktrace/Email uses AI to analyse potential impersonation emails, comparing the ‘From’ header domains against known external domains and generating a percentage score for how likely each domain is to be an imitation of a known one (Figure 3).
Figure 3- Darktrace compares the external sender, “mike.lewis@smartercornmerce[.]net”, with similar external names and domains that have been observed in different inbound emails on the network.
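Domain-similarity scoring of the kind shown in Figure 3 can be roughly approximated with a string-similarity ratio. A toy sketch using Python’s standard difflib follows; Darktrace’s actual scoring model is proprietary and considerably more sophisticated:

```python
# Toy approximation of a lookalike-domain similarity score using the
# standard library. Not Darktrace's model; for illustration only.
from difflib import SequenceMatcher

def similarity_pct(observed: str, known: str) -> int:
    """Percentage similarity between two domain strings (0-100)."""
    ratio = SequenceMatcher(None, observed.lower(), known.lower()).ratio()
    return round(ratio * 100)

# The impersonation domain from this incident scores very close to the
# legitimate domain it imitates.
print(similarity_pct("smartercornmerce.net", "smartercommerce.net"))
```

A score in the 90s for a domain the organization has never corresponded with is a strong signal that the sender is imitating a known correspondent rather than coincidentally similar.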
Impersonation emails are also detected via spoof score metrics such as Domain External Spoof Score and Domain Internal Spoof Score (Figure 4).
Figure 4- Darktrace AI analyzed the malicious double impersonation email from Figure 2 and generated a high Domain External Spoof Score (100) and Spoof Score External (94)
Double Impersonation emails such as the one highlighted in Figure 2 are utilized by threat actors to gain the trust of the recipient and convince them to access malicious payloads such as phishing links and attachments. For example, the malicious double impersonation email from Figure 2 contained a suspicious hidden link to a WordPress site which could have redirected the user to a phishing endpoint and tricked them into divulging sensitive information (Figure 5). The endpoint itself appears to lead unsuspecting recipients to a false share link posing as a payment-themed Excel file.
Figure 5- Details of the WordPress link embedded in the suspicious email, which was hidden beneath display text to convince a user to click it without knowledge of where it would lead. The domain has a 100% rarity according to Darktrace AI.
Figure 6- WordPress webpage that highlights another link for the user to click in order to be redirected to the invoice statement in a Microsoft Excel document.
Various indicators highlighted the webpage as suspicious and potentially malicious. Firstly, the use of ‘SmarterCORNmerce’ in the link to the webpage was at odds with the use of ‘SmarterCOMMERCE’ throughout the page itself. The link also showed the invoice statement to be a Microsoft Excel file, despite the email suggesting it was a PDF document. Further investigation revealed the link to be associated with a Fleek hosting service and CDN (Figure 7), and that it redirected users to a fake Microsoft page.
Figure 7 - Source code from the WordPress webpage shows that the fake Microsoft link redirects users to a Fleek-hosted page. This page may contain additional JavaScript content to download malware onto the user’s device.
As well as the domain spoof score metrics highlighted in Figure 4, Darktrace/Email analyses the suspicious payloads embedded in emails and generates scores to indicate the likelihood that a payload may be a phishing attempt.
Figure 8- Additional metrics for the double impersonation email that highlight the high phishing inducement score (96) for the email.
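Per-signal scores like these are typically combined into a single actionable verdict. The following toy sketch shows one way such a combination could work; the signal names and weights are invented for illustration and are not Darktrace’s:

```python
# Toy sketch of combining several 0-100 anomaly signals into one score.
# Signal names and weights are invented; Darktrace's scoring is proprietary.

WEIGHTS = {
    "domain_external_spoof": 0.4,
    "sender_similarity": 0.3,
    "link_rarity": 0.3,
}

def phishing_score(signals: dict) -> int:
    """Weighted combination of 0-100 signals into a single 0-100 score.
    Missing signals contribute zero."""
    total = sum(WEIGHTS[name] * signals.get(name, 0) for name in WEIGHTS)
    return round(total)

score = phishing_score({
    "domain_external_spoof": 100,  # values echoing Figures 4 and 5
    "sender_similarity": 94,
    "link_rarity": 100,
})
print(score)  # 98
```

The point of the sketch is that no single signal needs to be conclusive: several moderately anomalous indicators compound into a high overall score that can drive an automated action.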
As the DETECT functionality of Darktrace/Email generates high scores for metrics such as Domain External Spoof Score and Phishing Inducement, the RESPOND function fires complementary models which then trigger relevant actions on the various payloads embedded in these emails, and even on the delivery of the emails themselves. As the impersonation email highlighted in Figure 2 impersonated not only the trusted domain but also the known and trusted sender, Darktrace AI triggered the Double Impersonation model. Additional spoofing models such as ‘Basic Known Entity Similarities + Suspicious Content’ and ‘External Domain Similarities + Maximum Similarity’ were also triggered, indicating the high likelihood that the suspicious email was a domain and user impersonation email sent by a malicious attacker.
Figure 9- The Email console highlights the different models the email triggered, including the Basic Known Entity Similarities + Suspicious Content and External Domain Similarities + Maximum Similarity model breaches and the various models that triggered significant actions in response to the potentially malicious impersonation email.
When Darktrace/Email detects a malicious double impersonation email, it responds by triggering a Hold action, preventing the email from appearing in the recipient’s inbox. Darktrace/Email’s RESPOND functionality can also take action against the suspicious link payloads embedded in the email with a Double Lock Link action, preventing users from clicking on malicious phishing links. Such actions highlight how Darktrace/Email excels in using AI to detect and take action against potentially malicious impersonation emails that may be prevalent in any user’s inbox.
Though impersonation is becoming increasingly targeted and efficient, Darktrace/Email has both detection and response capabilities that can ensure customers have secure coverage for their email environments.
Thanks to Ben Atkins for his contributions to this blog.
The Next Step After Mythos: Defending in a World Where Compromise is Expected
Is Anthropic’s Mythos a turning point for cybersecurity?
Anthropic’s recent announcements around their Mythos model, alongside the launch of Project Glasswing, have generated significant interest across the cybersecurity industry.
The closed-source nature of the Mythos model has understandably attracted a degree of skepticism around some of the claims being made. Additionally, Project Glasswing was initially positioned as a way for software vendors to accelerate the proactive discovery of vulnerabilities in their own code; however, much of the attention has focused on the potential for AI to identify exploitable vulnerabilities for those with malicious intent.
Putting questions around the veracity of those claims to one side – which, for what it’s worth, do appear to be at least partially endorsed by independent bodies such as the UK’s AI Security Institute – this should not be viewed as a critical turning point for the industry. Rather, it reflects the natural direction of travel.
How Mythos affects cybersecurity teams
At Darktrace, extolling the virtues of AI within cybersecurity is understandably close to our hearts. However, taking a step back from the hype, we’d like to consider what developments like this mean for security teams.
Whether it’s Mythos or another model yet to be released, it’s worth remembering that there is no fundamental difference between an AI-discovered vulnerability and one discovered by a human. The change is in the pace of discovery and, some may argue, a lowering of the barrier to entry.
In the hands of a software developer, this is unquestionably positive. Faster discovery enables earlier remediation and more proactive security. But in the hands of an attacker, the same capability will likely lead to a greater number of exploitable vulnerabilities being used in the wild and, critically, vulnerabilities that are not yet known to either the vendor or the end user.
That said, attackers have always been able to find exploitable vulnerabilities and use them undetected for extended periods of time. The use of AI does not fundamentally change this reality, but it does make the process faster and, unfortunately, more likely to occur at scale.
While tools such as Darktrace / Attack Surface Management and Darktrace / Proactive Exposure Management can help security teams prioritize where to patch, the emergence of AI-driven vulnerability discovery reinforces an important point: patching alone is not a sufficient control against modern cyber-attacks.
Rethinking defense for a world where compromise is expected
Rather than assuming vulnerabilities can simply be patched away, defenders are better served by working from the assumption that their software is already vulnerable – and always will be – and building their security strategy accordingly.
Under that assumption, defenders should expect initial access, particularly across internet exposed assets, to become easier for attackers. What matters then is how quickly that foothold is detected, contained, and prevented from expanding.
For defenders, this places renewed emphasis on a few core capabilities:
Secure-by-design architectures and blast radius reduction, particularly around identity, MFA, segmentation, and Zero Trust principles
Early, scalable detection and containment, favoring behavioral and context-driven signals over signatures alone
Operational resilience, with the expectation of more frequent early-stage incidents that must be managed without burning out teams
How Darktrace helps organizations proactively defend against cyber threats
At Darktrace, we support security teams across all three of these critical capabilities through a multi-layered AI approach. Our Self-Learning AI learns what’s normal for your organization, enabling real-time threat detection, behavioural prediction, incident investigation and autonomous response, all while empowering your security team with visibility and control. To learn more about Darktrace’s application of AI to cybersecurity, download our White Paper here.
Reducing blast radius through visibility and control
Secure-by-design principles depend on understanding how users, devices, and systems behave. By learning the normal patterns of identity and network activity, Darktrace helps teams identify when access is being misused or when activity begins to move beyond expected boundaries. This makes it possible to detect and contain lateral movement early, limiting how far an attacker can progress even after initial access.
Detecting and containing threats at the earliest stage
As AI accelerates vulnerability discovery, defenders need to identify exploitation before it is formally recognized. Darktrace’s behavioral understanding approach enables detection of subtle deviations from normal activity, including those linked to previously unknown vulnerabilities.
A key example of this is our research on identifying cyber threats before public CVE disclosures, demonstrating that assessing activity against what is normal for a specific environment, rather than relying on predefined indicators of compromise, enables detection of intrusions exploiting previously unknown vulnerabilities days or even weeks before details become publicly available.
Additionally, our Autonomous Response capability provides fast, targeted containment focused on the most concerning events, while allowing normal business operations to continue. This has consistently shown that even when attackers use techniques never seen before, Darktrace’s Autonomous Response can contain threats before they have a chance to escalate.
Scaling response without increasing operational burden
As early-stage incidents become more frequent, the ability to investigate and respond efficiently becomes critical. Darktrace’s Cyber AI Analyst automatically correlates activity across the environment, prioritizing the most significant threats and reducing the need for manual triage. This allows security teams to respond faster and more consistently, without increasing workload or burnout.
What effective defense looks like in an AI-accelerated landscape
Developments like Mythos highlight a reality that has been building for some time: the window between exposure and exploitation is shrinking, and in many cases, it may disappear entirely. In that environment, relying on patching alone becomes increasingly reactive, leaving little room to respond once access has been established.
The more durable approach is to assume that compromise will occur and focus on controlling what happens next. That means identifying early signs of misuse, containing threats before they spread, and maintaining visibility across the environment so that isolated signals can be understood in context.
AI plays a role on both sides of this equation. While it enables attackers to move faster, it also gives defenders the ability to detect subtle changes in behavior, prioritize what matters, and respond in real time. The advantage will not come from adopting AI in isolation, but from applying it in a way that reduces the gap between detection and action.
AI may be accelerating parts of the attack lifecycle, but the fundamentals of defense, detection, and containment still apply. If anything, they matter more than ever – and AI is just as powerful a tool for defenders as it is for attackers.
When Trust Becomes the Attack Surface: Supply-Chain Attacks in an Era of Automation and Implicit Trust
Software supply-chain attacks in 2026
Software supply-chain attacks now represent the primary threat shaping the 2026 security landscape. Rather than relying on exploits at the perimeter, attackers are targeting the connective tissue of modern engineering environments: package managers, CI/CD automation, developer systems, and even the security tools organizations inherently trust.
These incidents are not isolated cases of poisoned code. They reflect a structural shift toward abusing trusted automation and identity at ecosystem scale, where compromise propagates through systems designed for speed, not scrutiny. Ephemeral build runners, regardless of provider, represent high‑trust, low‑visibility execution zones.
The Axios compromise and the cascading Trivy campaign illustrate how quickly this abuse can move once attacker activity enters build and delivery workflows. This blog provides an overview of the latest supply-chain and security-tool incidents, with Darktrace telemetry and defensive actions to improve organizations’ defensive cyber posture.
1. Why the Axios Compromise Scaled
On 31 March 2026, attackers hijacked the npm account of Axios’s lead maintainer, publishing malicious versions 1.14.1 and 0.30.4 that silently pulled in a malicious dependency, plain‑crypto‑[email protected]. Axios is a popular HTTP client for Node.js with around 100 million weekly downloads, appearing in around 80% of cloud and application environments, making this a high‑leverage breach [1].
The attack chain was simple yet effective:
A compromised maintainer account enabled legitimate‑looking malicious releases.
The poisoned dependency executed Remote Access Trojans (RATs) across Linux, macOS and Windows systems.
The malware beaconed to a remote command-and-control (C2) server every 60 seconds in a loop, awaiting further instructions.
The installer self‑cleaned by deleting malicious artifacts.
All of this matters because a single maintainer compromise was enough to project attacker access into thousands of trusted production environments without exploiting a single vulnerability.
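The fixed 60-second check-in loop described in the attack chain above is exactly the kind of pattern a simple beaconing heuristic can surface from connection logs: near-constant inter-arrival times with very low jitter. A minimal, illustrative sketch follows; the jitter threshold is an assumption, and this is not a description of any particular product’s implementation:

```python
# Sketch of a simple beaconing heuristic: repeated connections to one
# endpoint at a near-constant interval have a very low coefficient of
# variation in their inter-arrival times. Threshold is illustrative.
from statistics import mean, pstdev

def looks_like_beacon(timestamps, max_jitter=0.1):
    """True if inter-arrival times are nearly constant (low coefficient
    of variation), a common signature of C2 check-in loops."""
    if len(timestamps) < 3:
        return False  # too few observations to judge periodicity
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(gaps)
    return avg > 0 and pstdev(gaps) / avg < max_jitter

# Connections roughly every 60 s, as in the RAT's check-in loop.
beacon = [0, 60.2, 119.8, 180.1, 240.0]
print(looks_like_beacon(beacon))  # True
```

Real detections must also tolerate deliberate randomised jitter and missed check-ins, but even this crude test separates machine-driven periodicity from human browsing patterns.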
A view from Darktrace
Multiple cases linked with the Axios compromise were identified across Darktrace’s customer base in March 2026, across both Darktrace / NETWORK and Darktrace / CLOUD deployments.
In one Darktrace / CLOUD deployment, an Azure Cloud Asset was observed establishing new external HTTP connectivity to the IP 142.11.206[.]73 on port 8000. Darktrace deemed this activity highly anomalous for the device based on several factors, including the rarity of the endpoint across the network and the unusual combination of protocol and port for this asset. As a result, the "Anomalous Connection / Application Protocol on Uncommon Port" model was triggered in Darktrace / CLOUD. Detection was driven by environmental context rather than a known indicator at the time. Subsequent reporting later classified the destination as malicious in relation to the Axios supply‑chain compromise, reinforcing the gap that often exists between initial attacker activity and the availability of actionable intelligence [5].
Additionally, shortly before this C2 connection, the device was observed communicating with various endpoints associated with the NPM package manager, further reinforcing the association with this attack.
Figure 1: Darktrace’s detection of the unusual external connection to 142.11.206[.]73 via port 8000.
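Rarity-of-endpoint scoring of the sort that drove this detection can be approximated by tracking how many devices have historically contacted each destination. The following is a hypothetical sketch; the class and method names are invented and this is not Darktrace’s model:

```python
# Hypothetical sketch of endpoint-rarity scoring: a destination contacted
# by few (or no) devices historically is rarer, and new connections to it
# are correspondingly more anomalous.
from collections import defaultdict

class RarityModel:
    def __init__(self):
        # endpoint -> set of devices that have contacted it
        self._seen = defaultdict(set)

    def observe(self, device, endpoint):
        self._seen[endpoint].add(device)

    def rarity(self, endpoint, total_devices):
        """1.0 = never seen on the network, 0.0 = contacted by every device."""
        return 1.0 - len(self._seen[endpoint]) / total_devices

model = RarityModel()
for dev in ("host-a", "host-b", "host-c"):
    model.observe(dev, "update.microsoft.com")   # common, benign endpoint
model.observe("host-a", "142.11.206[.]73")       # seen on one device only

print(model.rarity("update.microsoft.com", total_devices=3))  # 0.0
print(model.rarity("142.11.206[.]73", total_devices=3))
```

Combined with signals like the unusual protocol/port pairing, a high rarity value lets a detection fire without any prior threat-intelligence listing of the destination.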
Within Axios cases observed within Darktrace / NETWORK customer environments, activity generally focused on the use of newly observed cURL user agents in outbound connections to the C2 URL sfrclak[.]com/6202033, alongside the download of malicious files.
In other cases, Darktrace / NETWORK customers with Microsoft Defender for Endpoint integration received alerts flagging newly observed system executables and process launches associated with C2 communication.
Figure 2: A Security Integration Alert from Microsoft Defender for Endpoint associated with the Axios supply chain attack.
2. Why Trivy bypassed security tooling trust
Between late February and March 22, 2026, the threat group TeamPCP leveraged credentials from a previous incident to insert malicious artifacts across Trivy’s distribution ecosystem, including its CI automation, release binaries, Visual Studio Code extensions, and Docker container images [2].
While public reporting has emphasized GitHub Actions, Darktrace telemetry highlights attacker execution within CI/CD runner environments, including ephemeral build runners. These execution contexts are typically granted broad trust and limited visibility, allowing malicious activity within build automation to blend into expected operational workflows, regardless of provider.
This was a coordinated multi‑phase attack:
75 of 76 trivy-action tags and all setup‑trivy tags were force‑pushed to deliver a malicious payload.
A malicious binary (v0.69.4) was distributed across all major distribution channels.
Developer machines were compromised, receiving a persistent backdoor and a self-propagating worm.
Secrets were exfiltrated at scale, including SSH keys, Kubernetes tokens, database passwords, and cloud credentials across Amazon Web Services (AWS), Azure, and Google Cloud Platform (GCP).
Within Darktrace’s customer base, an AWS EC2 instance monitored by Darktrace / CLOUD appeared to have been impacted by the Trivy attack. On March 19, the device was seen connecting to the attacker-controlled C2 server scan[.]aquasecurtiy[.]org (45.148.10[.]212), triggering the model 'Anomalous Server Activity / Outgoing from Server’ in Darktrace / CLOUD.
Despite limited historical context for this device, Darktrace assessed the activity as suspicious due to the rarity of the destination endpoint across the wider deployment. This resulted in the triggering of a model alert and the generation of a Cyber AI Analyst incident to further analyze and correlate the attack activity.
TeamPCP’s continued abuse of GitHub Actions against security and IT tooling has also been observed more recently in Darktrace’s customer base. On April 22, an AWS asset was seen connecting to the C2 endpoint audit.checkmarx[.]cx (94.154.172[.]43). The timing of this activity suggests a potential link to a malicious Bitwarden package distributed by the threat actor, which was only available for a short timeframe on April 22 [4][3].
Figure 3: A model alert flagging unusual external connectivity from the AWS asset, as seen in Darktrace / CLOUD.
While the Trivy activity originated within build automation, the underlying failure mode mirrors later intrusions observed via management tooling. In both cases, attackers leveraged platforms designed for scale and trust to execute actions that blended into normal operational noise until downstream effects became visible.
Quest KACE: Legacy Risk, Real Impact
The Quest KACE System Management Appliance (SMA) incident reinforces that software risk is not confined to development pipelines alone. High‑trust infrastructure and management platforms are increasingly leveraged by adversaries when left unpatched or exposed to the internet.
Throughout March 2026, attackers exploited CVE-2025-32975 to bypass authentication on outdated, internet-facing KACE appliances, gaining administrative control and pushing remote payloads into enterprise environments. Organizations still running pre-patch versions effectively handed adversaries a turnkey foothold, reaffirming a simple strategic truth: legacy management systems are now part of the supply-chain threat surface, and treating them as “low-risk utilities” is no longer defensible [3].
Within the Darktrace customer base, a potential case was identified in mid-March involving an internet-facing server that exhibited the use of a new user agent alongside unusual file downloads and unexpected external connectivity. Darktrace identified the device downloading files from "216.126.225[.]156/x", "216.126.225[.]156/ct.py" and "216.126.225[.]156/n", using the user agents "curl/8.5.0" and "Python-urllib/3.9".
The timeframe and IoCs observed point towards likely exploitation of CVE‑2025‑32975. As with earlier incidents, the activity became visible through deviations in expected system behavior rather than through advance knowledge of exploitation or attacker infrastructure. The delay between observed exploitation and its addition to the Known Exploited Vulnerabilities (KEV) catalogue underscores a recurring failure: retrospective validation cannot keep pace with adversaries operating at automation speed.
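Indicators like those above are published in defanged form ("[.]" in place of dots) so they cannot be clicked accidentally, which means they must be "refanged" before being matched against raw telemetry. A small illustrative helper follows; the matching logic is deliberately simplified to exact URL matches:

```python
# Illustrative helper for matching defanged IoCs against observed URLs in
# telemetry. The IoC list is taken from the incident described above; the
# exact-match logic is a simplified sketch, not a production matcher.

def refang(indicator):
    """Convert defanged notation back to a matchable form."""
    return indicator.replace("[.]", ".").replace("hxxp", "http")

IOCS = {refang(i) for i in (
    "216.126.225[.]156/x",
    "216.126.225[.]156/ct.py",
    "216.126.225[.]156/n",
)}

def is_known_ioc(observed_url):
    """True if an observed URL exactly matches a refanged indicator."""
    return observed_url in IOCS

print(is_known_ioc("216.126.225.156/ct.py"))  # True
```

Retrospective matching of this kind is useful for scoping an incident, but as the surrounding text notes, it arrives after the fact: behavioral anomalies flagged this server before these indicators were catalogued.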
The strategic pattern: Ecosystem‑scale adversaries
The Axios and Trivy compromises are not anomalies; they are signals of a structural shift in the threat landscape. In this post-trust era, the compromise of a single maintainer, repository token, or CI/CD tag can produce a large blast radius, with downstream victims numbering in the thousands. Attackers are no longer just exploiting vulnerabilities; they are exploiting infrastructure privileges, developer trust relationships, and automated build systems that the industry has generally under-secured.
Supply‑chain compromise should now be treated as an assumed breach scenario, not a specialized threat class, particularly across build, integration, and management infrastructure. Organizations must operate under the assumption that compromise will occur within trusted software and automation layers, not solely at the network edge or user endpoint. Defenders should therefore expect compromise to emerge from trusted automation layers before it is labelled, validated, or widely understood.
The future of supply‑chain defense lies in continuous behavioral visibility, autonomous detection across developer and build environments, and real‑time anomaly identification.
As AI increasingly shapes software development and security operations, defenders must assume adversaries will also operate with AI in the loop. The defensive edge will come not from predicting specific compromises, but from continuously interrogating behavior across environments humans can no longer feasibly monitor at scale.
Credit to Nathaniel Jones (VP, Security & AI Strategy, FCISCO), Emma Foulger (Global Threat Research Operations Lead), Justin Torres (Senior Cyber Analyst), Tara Gould (Malware Research Lead)