March 20, 2019

The Invisible Threat: How AI Catches the Ursnif Trojan

The cyber AI approach successfully detected the Ursnif infections even though the new variant of this malware was unknown to security vendors at the time.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Max Heinemeyer
Global Field CISO

Over the past few months, I’ve analyzed some of the world’s stealthiest trojan attacks like Emotet, which employ deception to bypass traditional security tools that rely on rules and signatures. Guest contributor Keith Siepel also explained how cyber AI defenses managed to catch a zero-day trojan on his firm’s network for which no such rules or signatures yet exist. Indeed, with the incidence of banking trojans having increased by 239% among our customer base last year, it appears that this kind of subterfuge is the new normal.

However, one particularly sophisticated trojan, Ursnif, takes deception a step further, and evidence of just how far is still emerging. Rather than writing executable files that contain malicious code, some of its variants instead exploit vulnerabilities inherent to a user’s own applications, essentially turning the victim’s computer against them. The result of this increasingly common technique is that — once the victim has been tricked into clicking a malicious link or duped into opening an attachment via a phishing email — Ursnif begins to ‘live off the land’, blending into the victim’s environment. And by exploiting Microsoft Office and Windows features, such as document macros, PsExec, and PowerShell scripts, Ursnif can execute commands directly from the computer’s RAM.

One of the most prevalent and destructive strains of the Gozi banking malware, Ursnif was recently placed at the center of a new campaign that saw it dramatically expand its functionality. Originally created to infect hosts with spyware in order to steal sensitive banking information and user credentials, it can now also deploy advanced ransomware like GandCrab. These new functions are aided by the elusive trojan’s aforementioned file-less capabilities, which render it invisible to many security tools and allow it to hide in plain sight within legitimate, albeit corrupted, applications. Shining a light on Ursnif therefore requires AI tools that can learn to spot when these applications act abnormally:

Cyber AI detects Ursnif on multiple client networks

First campaign: February 4, 2019

Darktrace detected the initial Ursnif compromise on a customer’s network when it caught several devices connecting to a highly unusual endpoint and subsequently downloading masqueraded files, causing Darktrace’s “Anomalous File / Masqueraded File Transfer” model to breach. Such files are often masqueraded as other file types not only to bypass traditional security measures but also to deceive users — for instance, with the intention of tricking a user into executing a file received in a malicious email by disguising it as a document.

As it happens, this Ursnif variant was a zero-day at the time Darktrace detected it, meaning that its files were unknown to antivirus vendors. But while the never-before-seen files bypassed the customer’s endpoint tools, Darktrace AI leveraged its understanding of the unique ‘pattern of life’ for every user and device in the customer’s network to flag these file downloads as threatening anomalies — without relying on signatures.

A sample of the masqueraded files initially downloaded:

File: xtex13.gas
File MIME type: application/x-dosexec
Size: 549.38 KB
Connection UID: C8SlueG1mT7VdcJ00

File: zyteb17.gas
File MIME type: application/x-dosexec
SHA-1 hash: 4ed60393575d6b47bd82eeb03629bdcb8876a73f
Size: 276.48 KB

File: adnaz2.gas
File MIME type: application/x-dosexec
Size: 380.93 KB
Connection UID: CmPOzP1AC4tzuuuW00
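Masquerading of this kind is often detectable by comparing a file’s claimed extension against its leading magic bytes — here, files with a `.gas` extension carry the `application/x-dosexec` MIME type of a Windows executable. The sketch below illustrates the idea; the signature table and extension lists are illustrative stand-ins, not a description of how Darktrace’s models actually work:

```python
# Heuristic check for masqueraded files: compare a file's claimed
# extension against its magic bytes. Illustrative and deliberately
# small -- real detectors use full MIME-sniffing libraries.

# Magic-byte signatures for a few common formats (example subset).
SIGNATURES = {
    b"MZ": "application/x-dosexec",      # Windows PE executable
    b"%PDF": "application/pdf",
    b"PK\x03\x04": "application/zip",    # also docx/xlsx containers
}

# Extensions we would *expect* for each sniffed type.
EXPECTED_EXTENSIONS = {
    "application/x-dosexec": {".exe", ".dll", ".scr"},
    "application/pdf": {".pdf"},
    "application/zip": {".zip", ".docx", ".xlsx"},
}

def sniff_mime(data: bytes):
    """Return a MIME type guessed from leading magic bytes, or None."""
    for magic, mime in SIGNATURES.items():
        if data.startswith(magic):
            return mime
    return None

def is_masqueraded(filename: str, data: bytes) -> bool:
    """True if the sniffed type disagrees with the claimed extension."""
    mime = sniff_mime(data)
    if mime is None:
        return False  # unknown type: no verdict
    ext = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    return ext not in EXPECTED_EXTENSIONS[mime]

# A '.gas' file that is really a DOS/Windows executable, as seen above.
print(is_masqueraded("xtex13.gas", b"MZ\x90\x00"))  # True
print(is_masqueraded("setup.exe", b"MZ\x90\x00"))   # False
```

Any file whose content identifies as an executable but whose name claims otherwise is a strong candidate for quarantine.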

A sample of the endpoints detected:

kieacsangelita[.]city · 209.141.60[.]214
muikarellep[.]band · 46.29.167[.]73
cjasminedison[.]com · 185.120.58[.]13

Following the initial suspicious downloads, the compromised devices were further observed making regular connections to multiple rare destinations not previously seen on the affected network, in a pattern of beaconing connectivity. In some cases, Darktrace marked these external destinations as suspicious when it recognized the hostnames they queried as algorithm-generated domains. High volumes of DNS requests for such domains are a common characteristic of malware infections, which use this tactic to maintain communication with C2 servers in spite of domain blacklisting. In other cases, the endpoints were deemed suspicious because of their use of self-signed SSL certificates, which cyber-criminals often use because they do not require verification by a trusted authority.
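A common first-pass heuristic for spotting algorithmically generated hostnames is character-level entropy. The sketch below illustrates the idea only — the thresholds are arbitrary, and dictionary-flavored, name-like domains such as those in this campaign would require richer features (n-gram frequencies, wordlist hits) than a toy example can show:

```python
import math

def shannon_entropy(s: str) -> float:
    """Shannon entropy of a string, in bits per character."""
    n = len(s)
    return -sum((s.count(c) / n) * math.log2(s.count(c) / n)
                for c in set(s))

def looks_generated(domain: str, entropy_threshold: float = 3.2) -> bool:
    """Crude DGA heuristic: a long, high-entropy, vowel-poor label.
    Thresholds are illustrative; production detectors score many
    features (n-gram frequency, dictionary hits, TLD priors, volume)."""
    label = domain.split(".")[0].lower()
    if len(label) < 10:
        return False
    vowel_ratio = sum(label.count(v) for v in "aeiou") / len(label)
    return shannon_entropy(label) > entropy_threshold and vowel_ratio < 0.35

print(looks_generated("q7x2mz9kvp4w.band"))   # True: random-looking label
print(looks_generated("documentation.com"))   # False: natural vowel ratio
```

In practice a single flagged lookup means little; it is the high volume of such queries, combined with the rarity of the destinations, that builds the case for compromise.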

In fact, the large volume of anomalous connections triggered a number of Darktrace’s behavioral models, including:

Compromise / DGA Beacon
Anomalous Connection / Suspicious Self-Signed SSL
Compromise / High Volume of Connections with Beacon Score
Compromise / Beaconing Activity To Rare External Endpoint

Beaconing is a method of communication frequently seen when a compromised device attempts to relay information to its control infrastructure in order to receive further instructions. This behavior is characterized by persistent external connections to one or multiple endpoints, a pattern that was repeatedly observed for those devices that had previously downloaded malicious files from the endpoints later associated with the Ursnif campaign. While beaconing behavior to unusual destinations is not necessarily always indicative of infection, Darktrace AI concluded that, in combination with the suspicious file downloads, this type of activity represented a clear indication of compromise.
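One simple way to quantify the regularity that distinguishes beaconing from ordinary browsing is the spread of the gaps between successive connections. The scoring function below is an illustrative sketch under that assumption, not Darktrace’s actual model:

```python
from statistics import mean, stdev

def beacon_score(timestamps: list[float]) -> float:
    """Score the regularity of connection timestamps.
    Returns a value in [0, 1]; near 1 means near-constant intervals,
    the classic beaconing signature. Purely illustrative -- real
    systems also weight endpoint rarity and tolerate deliberate jitter."""
    if len(timestamps) < 3:
        return 0.0
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if mean(gaps) == 0:
        return 0.0
    cv = stdev(gaps) / mean(gaps)  # coefficient of variation
    return 1.0 / (1.0 + cv)       # low jitter -> score near 1

# A device calling out every ~60 s scores far higher than ad hoc browsing.
regular = [0, 60, 120, 181, 240, 300]
adhoc = [0, 5, 200, 210, 900, 905]
print(round(beacon_score(regular), 2))  # close to 1
print(round(beacon_score(adhoc), 2))    # much lower
```

A high score alone is not proof of infection — software updaters beacon too — which is why it is weighed alongside the rarity of the destination and preceding events like the suspicious downloads.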

Figure 1: A device event log that shows the device had connected to internal mail servers shortly before downloading the malicious files.

Lateral movement and file-less capabilities

In the wake of the initial compromise, Darktrace AI also detected Ursnif’s lateral movement and file-less capabilities in real time. In the case of one infected device, an “Anomalous Connection / High Volume of New Service Control” model breach was triggered following the aforementioned suspicious activities. The device in question was flagged after making anomalous SMB connections to at least 47 other internal devices, and after accessing file shares to which it had not previously connected. Subsequently, the device was observed writing to the other devices’ service control pipe – a channel used for the remote control of services. The anomalous use of these remote-control channels represents a compelling example of how Ursnif leverages its file-less capabilities to facilitate lateral movement.

Figure 2: Volume of SMB writes made to the service control pipe on internal devices by one of the infected devices, as shown on the Darktrace UI.

Although network administrators often use remote-control channels for legitimate purposes, Darktrace AI considered this particular usage highly suspicious, particularly as the devices involved had previously breached a number of behavioral models as a result of infection.

Second campaign: March 18, 2019

A second Ursnif campaign was detected just this week. At the time of detection, no OSINT was available for the C2 servers nor the malware samples.

On a US manufacturer’s network, the initial malware download took place from xqzuua1594[.]com/loq91/10x.php?l=mow1.jad, hosted on IP 94.154.10[.]62. Every malware download was unique, indicating auto-patching or a malware factory working in the background. Darktrace immediately identified this as another Anomalous File / Masqueraded File Transfer.

Directly after this, initial C2 was observed with the following parameters:

HTTP GET to: vwdlpknpsierra[.]email
Destination IP: 162.248.225[.]14
URI: /images/CKicJCsNNNfaJwX6CJ/0Ohp3OUfj/pI_2FszUK7ybqh33Qdwz/bOUeatCG2Qfks5DTzzO/H6SeiL8YozEYXKfornjfVt/hBgfcPVPCOf1H/2qo12IGl/L3B18ld4ZSx37TbdTUpALih/A5dl8FVHel/jMPIKnQfd/H.avi
User Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko

What’s interesting here is that the C2 server presents a Sufee Admin login page.

This C2 appears to have poor operational security (OPSEC): browsing random URIs on the server reveals some of the dashboard’s contents.

The initial C2 communication was followed by sustained TCP beaconing to ksylviauudaren[.]band on 185.180.198[.]245 over port 443 with SSL encryption using a self-signed certificate. Darktrace highlighted this C2 behavior as Compromise / Sustained TCP Beaconing Activity To Rare Endpoint and Anomalous Connection / Repeated Rare External SSL Self-Signed IP.

As of the writing of this article, the domain ksylviauudaren[.]band was still not recognized in OSINT as malicious – highlighting again that Darktrace does not depend on signatures and rules to catch previously unknown threats.

Conclusion

The cyber AI approach successfully detected the Ursnif infections even though the new variant of this malware was unknown to security vendors at the time. Moreover, it even managed to catch Ursnif’s file-less capabilities for lateral movement through its modelling of expected patterns of connectivity. In terms of the wider security context, the ease with which cyber AI flagged such sophisticated malware — malware which takes action by corrupting a computer’s own applications — further demonstrates that AI anomaly detection is the only way to navigate a threat landscape increasingly populated by near-invisible trojans.

IoCs

kieacsangelita[.]city · 209.141.60[.]214
muikarellep[.]band · 46.29.167[.]73
cjasminedison[.]com · 185.120.58[.]13
xqzuua1594[.]com · 94.154.10[.]62
vwdlpknpsierra[.]email · 162.248.225[.]14
ksylviauudaren[.]band · 185.180.198[.]245




December 18, 2025

Why organizations are moving to label-free, behavioral DLP for outbound email


Why outbound email DLP needs reinventing

In 2025, the global average cost of a data breach fell slightly — but remains substantial at USD 4.44 million (IBM Cost of a Data Breach Report 2025). The headline figure hides a painful reality. Many of these breaches stem not from sophisticated hacks but from simple human error: mis-sent emails, accidental forwarding, or replying with the wrong attachment. Because outbound email is a common channel for sensitive data leaving an organization, the risk posed by everyday mistakes is enormous.

In 2025, 53% of data breaches involved customer PII, making it the most commonly compromised asset (IBM Cost of a Data Breach Report 2025). This makes “protection at the moment of send” essential. A single unintended disclosure can trigger compliance violations, regulatory scrutiny, and erosion of customer trust – consequences that are disproportionate to the marginal human errors that cause them.

Traditional DLP has long attempted to mitigate these impacts, but it relies heavily on perfect labelling and rigid pattern-matching. In reality, data loss rarely presents itself as a neat, well-structured pattern waiting to be caught – it looks like everyday communication, just slightly out of context.

How data loss actually happens

Most data loss comes from frustratingly familiar scenarios. A mistyped name in auto-complete sends sensitive data to the wrong “Alex.” A user forwards a document to a personal Gmail account “just this once.” Someone shares an attachment with a new or unknown correspondent without realizing how sensitive it is.

Traditional, content-centric DLP rarely catches these moments. Labels are missing or wrong. Regexes break the moment the data shifts formats. And static rules can’t interpret the context that actually matters – the sender-recipient relationship, the communication history, or whether this behavior is typical for the user.

It’s the everyday mistakes that hurt the most. The classic example: the Friday 5:58 p.m. mis-send, when auto-complete selects Martin, a former contractor, instead of Marta in Finance.

What traditional DLP approaches offer (and where gaps remain)

Most email DLP today follows two patterns, each useful but incomplete.

  • Policy- and label-centric DLP works when labels are correct — but content is often unlabeled or mislabeled, and maintaining classification adds friction. Gaps appear exactly where users move fastest
  • Rule and signature-based approaches catch known patterns but miss nuance: human error, new workflows, and “unknown unknowns” that don’t match a rule

The takeaway: Protection must combine content + behavior + explainability at send time, without depending on perfect labels.

Your technology primer: The three pillars that make outbound DLP effective

1) Label-free (vs. data classification)

Protects all content, not just what’s labeled. Label-free analysis removes classification overhead and closes gaps from missing or incorrect tags. By evaluating content and context at send time, it also catches misdelivery and other payload-free errors.

  • No labeling burden; no regex/rule maintenance
  • Works when tags are missing, wrong, or stale
  • Detects misdirected sends even when labels look right

2) Behavioral (vs. rules, signatures, threat intelligence)

Understands user behavior, not just static patterns. Behavioral analysis learns what’s normal for each person, surfacing human error and subtle exfiltration that rules can’t. It also incorporates account signals and inbound intel, extending across email and Teams.

  • Flags risk without predefined rules or IOCs
  • Catches misdelivery, unusual contacts, personal forwards, odd timing/volume
  • Blends identity and inbound context across channels
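As a toy illustration of the per-user baselining idea, the sketch below flags outbound mail to recipients a sender has never contacted before, with an extra flag for personal webmail domains. The class, method names, and domain list are hypothetical simplifications of what a production behavioral engine would model:

```python
from collections import defaultdict

class RecipientBaseline:
    """Toy per-user baseline of outbound email recipients.
    Flags sends to addresses (or personal domains) the user has never
    mailed before. Illustrative only -- a real behavioral engine models
    far more signals (timing, volume, content, peer group)."""

    PERSONAL_DOMAINS = {"gmail.com", "outlook.com", "yahoo.com"}  # example list

    def __init__(self):
        self.history = defaultdict(set)  # sender -> known recipients

    def observe(self, sender: str, recipient: str) -> None:
        self.history[sender].add(recipient.lower())

    def flags(self, sender: str, recipient: str) -> list[str]:
        recipient = recipient.lower()
        reasons = []
        if recipient not in self.history[sender]:
            reasons.append("new recipient for this sender")
            domain = recipient.split("@")[-1]
            if domain in self.PERSONAL_DOMAINS:
                reasons.append("personal webmail domain")
        return reasons

baseline = RecipientBaseline()
baseline.observe("marta@corp.example", "alex@corp.example")
print(baseline.flags("marta@corp.example", "alex@corp.example"))  # []
print(baseline.flags("marta@corp.example", "alex@gmail.com"))
# ['new recipient for this sender', 'personal webmail domain']
```

Note that nothing here depends on a label or a content rule: the “Friday 5:58 p.m. mis-send” is caught purely because the recipient falls outside the sender’s learned pattern.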

3) Proprietary DSLM (vs. generic LLM)

Optimized for precise, fast, explainable on-send decisions. A DSLM understands email/DLP semantics, avoids generative risks, and stays auditable and privacy-controlled, delivering intelligence reliably without slowing mail flow.

  • Low-latency, on-send enforcement
  • Non-generative for predictable, explainable outcomes
  • Governed model with strong privacy and auditability

The Darktrace approach to DLP

Darktrace / EMAIL – DLP stops misdelivery and sensitive data loss at send time using hold/notify/justify/release actions. It blends behavioral insight with content understanding across 35+ PII categories, protecting both labeled and unlabeled data. Every action is paired with clear explainability: AI narratives show exactly why an email was flagged, supporting analysts and helping end-users learn. Deployment aligns cleanly with existing SOC workflows through mail-flow connectors and optional Microsoft Purview label ingestion, without forcing duplicate policy-building.

Deployment is simple: Microsoft 365 routes outbound mail to Darktrace for real-time, inline decisions without regex or rule-heavy setup.

A buyer’s checklist for DLP solutions

When choosing your DLP solution, you want to be sure that it can deliver precise, explainable protection at the moment it matters – on send – without operational drag.  

To finish, we’ve compiled a handy list of questions you can ask before choosing an outbound DLP solution:

  • Can it operate label-free when tags are missing or wrong?
  • Does it truly learn per-user behavior (no shortcuts)?
  • Is there a domain-specific model behind the content understanding (not a generic LLM)?
  • Does it explain decisions to both analysts and end users?
  • Will it integrate with your label program and SOC workflows rather than duplicate them?

For a deep dive into Darktrace’s DLP solution, check out the full solution brief.


About the author
Carlos Gray
Senior Product Marketing Manager, Email


December 17, 2025

Beyond MFA: Detecting Adversary-in-the-Middle Attacks and Phishing with Darktrace


What is an Adversary-in-the-middle (AiTM) attack?

Adversary-in-the-Middle (AiTM) attacks are a sophisticated technique often paired with phishing campaigns to steal user credentials. Unlike traditional phishing, which multi-factor authentication (MFA) increasingly mitigates, AiTM attacks leverage reverse proxy servers to intercept authentication tokens and session cookies. This allows attackers to bypass MFA entirely and hijack active sessions, stealthily maintaining access without repeated logins.

This blog examines a real-world incident detected during a Darktrace customer trial, highlighting how Darktrace / EMAIL™ and Darktrace / IDENTITY™ identified the emerging compromise in a customer’s email and software-as-a-service (SaaS) environment, tracked its progression, and could have intervened at critical moments to contain the threat had Darktrace’s Autonomous Response capability been enabled.

What does an AiTM attack look like?

Inbound phishing email

Attacks typically begin with a phishing email, often originating from the compromised account of a known contact like a vendor or business partner. These emails often contain malicious links or attachments leading to fake login pages that spoof legitimate platforms, like Microsoft 365, in order to harvest user credentials.

Proxy-based credential theft and session hijacking

When a user clicks on a malicious link, they are redirected through an attacker-controlled proxy that impersonates legitimate services.  This proxy forwards login requests to Microsoft, making the login page appear legitimate. After the user successfully completes MFA, the attacker captures credentials and session tokens, enabling full account takeover without the need for reauthentication.
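The crux of the technique is that, once MFA has completed, the session token alone authenticates every subsequent request. The toy sketch below shows one common countermeasure — binding a token to the context it was issued in and rejecting replay from elsewhere — though this mitigation has known limits, since an AiTM proxy may itself be the context at issuance. All names and fields here are hypothetical, not any real identity provider’s API:

```python
# Toy illustration of session-token theft and context binding.
# Real implementations use signed, expiring tokens and richer
# device fingerprints; this is a sketch of the principle only.

import secrets

SESSIONS = {}  # token -> context recorded at login

def issue_token(user: str, ip: str, user_agent: str) -> str:
    """Issue a session token after (simulated) successful MFA."""
    token = secrets.token_hex(16)
    SESSIONS[token] = {"user": user, "ip": ip, "ua": user_agent}
    return token

def validate(token: str, ip: str, user_agent: str) -> bool:
    """Accept the token only from the context that earned it.
    An attacker replaying a stolen token from their own server
    presents a different IP/UA pair and is rejected."""
    ctx = SESSIONS.get(token)
    if ctx is None:
        return False
    return ctx["ip"] == ip and ctx["ua"] == user_agent

tok = issue_token("marta", "203.0.113.7", "Mozilla/5.0")
print(validate(tok, "203.0.113.7", "Mozilla/5.0"))    # True: original context
print(validate(tok, "198.51.100.9", "python-requests"))  # False: replayed token
```

Without the context check, the second call would succeed — which is precisely why a captured session token lets an attacker in without ever re-entering credentials or MFA codes.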

Follow-on attacks

Once inside, attackers will typically establish persistence through the creation of email rules or registering OAuth applications. From there, they often act on their objectives, exfiltrating sensitive data and launching additional business email compromise (BEC) campaigns. These campaigns can include fraudulent payment requests to external contacts or internal phishing designed to compromise more accounts and enable lateral movement across the organization.

Darktrace’s detection of an AiTM attack

At the end of September 2025, Darktrace detected one such example of an AiTM attack on the network of a customer trialling Darktrace / EMAIL and Darktrace / IDENTITY.

In this instance, the first indicator of compromise observed by Darktrace was the creation of a malicious email rule on one of the customer’s Office 365 accounts, suggesting the account had likely already been compromised before Darktrace was deployed for the trial.

Darktrace / IDENTITY observed the account creating a new email rule with a randomly generated name, likely to hide its presence from the legitimate account owner. The rule marked all inbound emails as read and deleted them, while ignoring any existing mail rules on the account. This rule was likely intended to conceal any replies to malicious emails the attacker had sent from the legitimate account owner and to facilitate further phishing attempts.
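Rules like this tend to share a few tell-tale traits: random-looking names, no conditions, and actions that silently dispose of mail. A hypothetical heuristic check might look like the sketch below; the rule schema is a simplified stand-in, not any vendor’s actual API shape:

```python
import math

def name_entropy(name: str) -> float:
    """Shannon entropy of a string, in bits per character."""
    if not name:
        return 0.0
    n = len(name)
    return -sum((name.count(c) / n) * math.log2(name.count(c) / n)
                for c in set(name))

def rule_is_suspicious(rule: dict) -> list[str]:
    """Heuristics for attacker-style inbox rules. The rule schema here
    is a simplified, hypothetical stand-in; thresholds are illustrative."""
    reasons = []
    actions = rule.get("actions", {})
    if actions.get("delete") and actions.get("mark_as_read"):
        reasons.append("silently deletes incoming mail")
    name = rule.get("name", "")
    if len(name) >= 8 and name_entropy(name) > 3.0:
        reasons.append("random-looking rule name")
    if not rule.get("conditions"):
        reasons.append("applies to all inbound mail")
    return reasons

attacker_rule = {"name": "qx7kfp2vw9", "conditions": {},
                 "actions": {"mark_as_read": True, "delete": True}}
print(rule_is_suspicious(attacker_rule))  # all three heuristics fire
```

Static heuristics like these are only a starting point; the stronger signal, as in this incident, is that rule creation of any kind was anomalous for this account at this time.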

Figure 1: Darktrace’s detection of the anomalous email rule creation.

Internal and external phishing

Following the creation of the email rule, Darktrace / EMAIL observed a surge of suspicious activity on the user’s account. The account sent emails with subject lines referencing payment information to over 9,000 different external recipients within just one hour. Darktrace also identified that these emails contained a link to an unusual Google Drive endpoint, embedded in the text “download order and invoice”.

Figure 2: Darktrace’s detection of an unusual surge in outbound emails containing suspicious content, shortly following the creation of a new email rule.
Figure 3: Darktrace / EMAIL’s detection of the compromised account sending over 9,000 external phishing emails, containing an unusual Google Drive link.

As Darktrace / EMAIL flagged the message with the ‘Compromise Indicators’ tag (Figure 2), it would have been held automatically if the customer had enabled default Data Loss Prevention (DLP) Action Flows in their email environment, preventing any external phishing attempts.

Figure 4: Darktrace / EMAIL’s preview of the email sent by the offending account.

Darktrace analysis revealed that, after clicking the malicious link in the email, recipients were redirected to a convincing landing page that closely mimicked the customer’s legitimate branding, including authentic imagery and logos, where they were prompted to download a PDF named “invoice”.

Figure 5: Download and login prompts presented to recipients after following the malicious email link, shown here in safe view.

After clicking the “Download” button, users would be prompted to enter their company credentials on a page that was likely a credential-harvesting tool, designed to steal corporate login details and enable further compromise of SaaS and email accounts.

Darktrace’s Response

In this case, Darktrace’s Autonomous Response was not fully enabled across the customer’s email or SaaS environments, allowing the compromise to progress as described above.

Despite this, Darktrace / EMAIL’s successful detection of the malicious Google Drive link in the internal phishing emails prompted it to suggest ‘Lock Link’ as a recommended action for the customer’s security team to manually apply. This action would have placed the malicious link behind a warning or screening page, blocking users from visiting it.

Figure 6: Autonomous Response suggesting locking the malicious Google Drive link sent in internal phishing emails.

Furthermore, if active in the customer’s SaaS environment, Darktrace would likely have been able to mitigate the threat even earlier, at the point of the first unusual activity: the creation of a new email rule. Mitigative actions would have included forcing the user to log out, terminating any active sessions, and disabling the account.

Conclusion

AiTM attacks represent a significant evolution in credential theft techniques, enabling attackers to bypass MFA and hijack active sessions through reverse proxy infrastructure. In the real-world case we explored, Darktrace’s AI-driven detection identified multiple stages of the attack, from anomalous email rule creation to suspicious internal email activity, demonstrating how Autonomous Response could have contained the threat before escalation.

MFA is a critical security measure, but it is no longer a silver bullet. Attackers are increasingly targeting session tokens rather than passwords, exploiting trusted SaaS environments and internal communications to remain undetected. Behavioral AI provides a vital layer of defense by spotting subtle anomalies that traditional tools often miss.

Security teams must move beyond static defenses and embrace adaptive, AI-driven solutions that can detect and respond in real time. Regularly review SaaS configurations, enforce conditional access policies, and deploy technologies that understand “normal” behavior to stop attackers before they succeed.

Credit to David Ison (Cyber Analyst), Bertille Pierron (Solutions Engineer), Ryan Traill (Analyst Content Lead)

Appendices

Models

SaaS / Anomalous New Email Rule

Tactic – Technique – Sub-Technique  

Phishing - T1566

Adversary-in-the-Middle - T1557

About the author
David Ison
Cyber Analyst