February 1, 2021

Explore AI Email Security Approaches with Darktrace

Stay informed on the latest AI approaches to email security. Explore Darktrace's comparisons to find the best solution for your cybersecurity needs!
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Dan Fein
VP, Product

Innovations in artificial intelligence (AI) have fundamentally changed the email security landscape in recent years, but it can often be hard to determine what makes one system different from the next. In reality, under that umbrella term there is a significant distinction in approach which may determine whether the technology provides genuine protection or merely the perception of defense.

One backward-looking approach involves feeding a machine thousands of emails that have already been deemed to be malicious, and training it to look for patterns in these emails in order to spot future attacks. The second approach uses an AI system to analyze the entirety of an organization’s real-world data, enabling it to establish a notion of what is ‘normal’ and then spot subtle deviations indicative of an attack.

Below, we compare the relative merits of each approach, with special consideration to novel attacks that leverage the latest news headlines to bypass machine learning systems trained on historical data sets. Training a machine on previously identified ‘known bads’ is advantageous only in specific contexts that don’t change over time: recognizing the intent behind an email, for example. However, an effective email security solution must also incorporate a self-learning approach that understands ‘normal’ in the context of an organization, in order to identify unusual and anomalous emails and catch even novel attacks.

Signatures – a backward-looking approach

Over the past few decades, cyber security technologies have looked to mitigate risk by preventing previously seen attacks from occurring again. In the early days, when the lifespan of a given strain of malware or the infrastructure of an attack was in the range of months and years, this method was satisfactory. But the approach inevitably results in playing catch-up with malicious actors: it always looks to the past to guide detection for the future. With decreasing lifetimes of attacks, where a domain could be used in a single email and never seen again, this historic-looking signature-based approach is now being widely replaced by more intelligent systems.

Training a machine on ‘bad’ emails

The first AI approach we often see in the wild involves harnessing an extremely large data set containing thousands or millions of emails. Once these emails have come through, an AI is trained to look for common patterns in the malicious ones. The system then updates its models, rule sets, and blacklists based on that data.

This method certainly represents an improvement over traditional rules and signatures, but it does not escape the fact that it is still reactive, and unable to stop new attack infrastructure or new types of email attacks. It simply automates that flawed, traditional approach: instead of a human updating the rules and signatures, a machine does it.

Relying on this approach alone has one basic but critical flaw: it does not enable you to stop new types of attacks that it has never seen before. It accepts that there has to be a ‘patient zero’ – or first victim – in order to succeed.

The industry is beginning to acknowledge the challenges with this approach, and huge amounts of resources – both automated systems and security researchers – are being thrown into minimizing its limitations. This includes a technique called “data augmentation”, which involves taking a malicious email that slipped through and generating many “training samples” using open-source text augmentation libraries to create “similar” emails. The machine then learns not only the missed phish as ‘bad’, but several others like it, enabling it to detect future attacks that use similar wording and fall into the same category.
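As a rough illustration of what that augmentation step can look like, the following is a minimal sketch using simple synonym substitution; the synonym map, sample phish, and parameters are illustrative assumptions rather than any particular vendor’s pipeline, and real augmentation libraries use far richer techniques.

import random

# Illustrative synonym map; real augmentation libraries draw on word
# embeddings or thesauri to propose substitutions.
SYNONYMS = {
    "urgent": ["immediate", "pressing", "time-sensitive"],
    "verify": ["confirm", "validate", "re-enter"],
    "payment": ["transfer", "remittance", "invoice"],
    "account": ["profile", "login", "credentials"],
}

def augment(text, n_variants=5, swap_prob=0.3):
    """Generate paraphrased variants of a missed phish so a supervised model
    learns the neighbourhood of the attack, not just its exact wording."""
    variants = []
    for _ in range(n_variants):
        words = []
        for word in text.split():
            key = word.lower().strip(".,:!?")
            if key in SYNONYMS and random.random() < swap_prob:
                words.append(random.choice(SYNONYMS[key]))
            else:
                words.append(word)
        variants.append(" ".join(words))
    return variants

# The missed phish plus its variants are then labelled 'bad' and fed back in.
for variant in augment("Urgent: verify your account to release the pending payment"):
    print(variant)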

But pouring all this time and effort into trying to fix an unsolvable problem is like putting all your eggs in the wrong basket. Why try to fix a flawed system rather than change the game altogether? To spell out the limitations of this approach, let us look at a situation where the nature of the attack is entirely new.

The rise of ‘fearware’

When the global pandemic hit, and governments began enforcing travel bans and imposing stringent restrictions, there was undoubtedly a collective sense of fear and uncertainty. As explained previously in this blog, cyber-criminals were quick to capitalize on this, taking advantage of people’s desire for information to send out topical emails related to COVID-19 containing malware or credential-grabbing links.

These emails often spoofed the Centers for Disease Control and Prevention (CDC), or later on, as the economic impact of the pandemic began to take hold, the Small Business Administration (SBA). As the global situation shifted, so did attackers’ tactics. And in the process, over 130,000 new domains related to COVID-19 were purchased.

Let’s now consider how the above approach to email security might fare when faced with these new email attacks. The question becomes: how can you train a model to look out for emails containing ‘COVID-19’ when the term had not yet been coined?

And while COVID-19 is the most salient example of this, the same reasoning follows for every single novel and unexpected news cycle that attackers are leveraging in their phishing emails to evade tools using this approach – and attracting the recipient’s attention as a bonus. Moreover, if an email attack is truly targeted to your organization, it might contain bespoke and tailored news referring to a very specific thing that supervised machine learning systems could never be trained on.

This isn’t to say there’s not a time and a place in email security for looking at past attacks to set yourself up for the future. It just isn’t here.

Spotting intention

Darktrace uses this approach for one specific purpose that is future-proof and not prone to change over time: analyzing grammar and tone in an email to identify intention, asking questions like ‘Does this look like an attempt at inducement? Is the sender trying to solicit sensitive information? Is this extortion?’ By training a system on an extremely large data set collected over a period of time, you can start to understand what, for instance, inducement looks like. This then enables you to easily spot future scenarios of inducement based on a common set of characteristics.

Training a system in this way works because, unlike news cycles and the topics of phishing emails, fundamental patterns in tone and language don’t change over time. An attempt at solicitation is always an attempt at solicitation, and will always bear common characteristics.

For this reason, this approach plays only one small part in a very large engine. It gives an additional indication about the nature of the threat, but is not in itself used to determine anomalous emails.
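As a loose sketch of training on intent rather than topic, the toy example below fits a classifier on character-level n-grams, which capture tone and phrasing rather than subject matter; the labels, corpus, and scikit-learn pipeline are illustrative assumptions, not a description of Darktrace’s actual models.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled corpus; in practice this is an extremely large data set
# collected over time and labelled with intent.
emails = [
    "Please send me the login details for the finance portal today",
    "Wire the outstanding balance or we publish the files we took",
    "Click here to claim the reward you are owed",
    "Attached are the meeting notes from this morning",
]
labels = ["solicitation", "extortion", "inducement", "benign"]

# Character n-grams pick up on phrasing and tone, which stay stable over
# time, rather than topics, which change with every news cycle.
intent_model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
    LogisticRegression(max_iter=1000),
)
intent_model.fit(emails, labels)

print(intent_model.predict(["Kindly share your password so I can process the refund"]))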

Detecting the unknown unknowns

In addition to using the above approach to identify intention, Darktrace uses unsupervised machine learning, which starts with extracting and extrapolating thousands of data points from every email. Some of these are taken directly from the email itself, while others are only ascertainable by the above intention-type analysis. Additional insights are also gained from observing emails in the wider context of all available data across email, network and the cloud environment of the organization.

Only once this significantly larger and more comprehensive set of indicators is available, providing a far more complete description of the email, can the data be fed into a topic-indifferent machine learning engine that questions the data in millions of ways to understand whether it belongs, given the wider context of the typical ‘pattern of life’ for the organization. Monitoring all emails in conjunction allows the machine to establish things like:

  • Does this person usually receive ZIP files?
  • Does this supplier usually send links to Dropbox?
  • Has this sender ever logged in from China?
  • Do these recipients usually get the same emails together?

The technology identifies patterns across an entire organization and gains a continuously evolving sense of ‘self’ as the organization grows and changes. It is this innate understanding of what is and isn’t ‘normal’ that allows AI to spot the truly ‘unknown unknowns’ instead of just ‘new variations of known bads.’
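To make the ‘pattern of life’ idea concrete, here is a minimal sketch of unsupervised anomaly detection over a handful of per-email indicators; the features, values, and use of an Isolation Forest are illustrative stand-ins for the thousands of data points described above.

import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative per-email features learned from the organization's own traffic:
# [sender seen before, recipient usually gets ZIPs, link domain age in days,
#  sender country matches history, deviation from usual send time]
normal_traffic = np.array([
    [1, 1, 2400, 1, 0.10],
    [1, 0, 1800, 1, 0.30],
    [1, 1, 3100, 1, 0.20],
    [1, 0,  900, 1, 0.40],
] * 50)  # repeated to mimic a larger body of routine email

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

# A new email: unknown sender, ZIP attachment, two-day-old domain,
# unusual geography and send time.
incoming = np.array([[0, 1, 2, 0, 0.95]])
score = detector.decision_function(incoming)[0]  # lower means more anomalous
print(f"anomaly score {score:.3f}:", "hold for review" if score < 0 else "deliver")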

This type of analysis brings an additional advantage in that it is language and topic agnostic: because it focuses on anomaly detection rather than finding specific patterns that indicate threat, it is effective regardless of whether an organization typically communicates in English, Spanish, Japanese, or any other language.

By layering both of these approaches, you can understand the intention behind an email and understand whether that email belongs given the context of normal communication. And all of this is done without ever making an assumption or having the expectation that you’ve seen this threat before.

Years in the making

It’s well established now that the legacy approach to email security has failed – and this makes it easy to see why existing recommendation engines are being applied to the cyber security space. At first glance, these solutions may be appealing to a security team, but highly targeted, truly unique spear phishing emails easily skirt these systems. They can’t be relied on to stop email threats on the first encounter, as they depend on known attacks with previously seen topics, domains, and payloads.

An effective, layered AI approach takes years of research and development. There is no single mathematical model to solve the problem of determining malicious emails from benign communication. A layered approach accepts that competing mathematical models each have their own strengths and weaknesses. It autonomously determines the relative weight these models should have and weighs them against one another to produce an overall ‘anomaly score’ given as a percentage, indicating exactly how unusual a particular email is in comparison to the organization’s wider email traffic flow.
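The weighting idea can be sketched as a simple weighted combination; the component models, scores, and weights below are illustrative assumptions, and the real engine determines the relative weights autonomously rather than by hand.

def overall_anomaly_score(model_scores, weights):
    """Combine per-model anomaly scores (each normalised to the 0-1 range)
    into a single percentage, weighting each model by its reliability."""
    total = sum(weights[name] for name in model_scores)
    combined = sum(model_scores[name] * weights[name] for name in model_scores)
    return 100.0 * combined / total

# One email scored by three competing models with different strengths.
scores = {"intent": 0.82, "sender_anomaly": 0.65, "link_rarity": 0.91}
weights = {"intent": 0.5, "sender_anomaly": 1.0, "link_rarity": 1.5}
print(f"{overall_anomaly_score(scores, weights):.1f}% unusual for this organization")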

It is time for email security to well and truly drop the assumption that you can look at threats of the past to predict tomorrow’s attacks. An effective AI cyber security system can identify abnormalities with no reliance on historical attacks, enabling it to catch truly unique novel emails on the first encounter – before they land in the inbox.



December 5, 2025

Simplifying Cross Domain Investigations


Cross-domain gaps mean cross-domain attacks  

Organizations are built on increasingly complex digital estates. Nowadays, the average IT ecosystem spans a large web of interconnected domains like identity, network, cloud, and email.

While these domain-specific technologies may boost business efficiency and scalability, they also create blind spots where attackers can shelter undetected. Threat actors can slip past defenses because security teams often use different detection tools in each realm of their digital infrastructure. Adversaries will purposefully execute different stages of an attack across different domains, ensuring no single tool picks up too many traces of their malicious activity. Identifying and investigating this type of threat, known as a cross-domain attack, requires mastery of event correlation.

For example, one isolated network scan detected on your network may seem harmless at first glance. Only when it is stitched together with a rare O365 login, a new email rule and anomalous remote connections to an S3 bucket in AWS does it begin to manifest as an actual intrusion.  
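A minimal sketch of that stitching problem is shown below: alerts from different domains are grouped into one incident when they share an entity within a time window. The event schema, entities, and window are illustrative assumptions, not how any particular product implements correlation.

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Alert:
    time: datetime
    domain: str    # "network", "saas", "email", "cloud"
    entity: str    # the user, device, or credential the alert is anchored to
    summary: str

alerts = [
    Alert(datetime(2025, 4, 1, 9, 5),  "network", "user.a", "internal network scan"),
    Alert(datetime(2025, 4, 1, 9, 40), "saas",    "user.a", "rare O365 login location"),
    Alert(datetime(2025, 4, 1, 9, 42), "email",   "user.a", "new inbox rule created"),
    Alert(datetime(2025, 4, 1, 11, 0), "cloud",   "user.a", "anomalous S3 bucket access"),
    Alert(datetime(2025, 4, 2, 8, 0),  "network", "printer-01", "failed DNS lookups"),
]

def correlate(alerts, window=timedelta(hours=6)):
    """Group alerts that share an entity and arrive within a rolling window."""
    incidents = []
    for alert in sorted(alerts, key=lambda a: a.time):
        for incident in incidents:
            if incident[-1].entity == alert.entity and alert.time - incident[-1].time <= window:
                incident.append(alert)
                break
        else:
            incidents.append([alert])
    return incidents

for incident in correlate(alerts):
    if len({a.domain for a in incident}) > 1:  # only multi-domain chains
        print("possible cross-domain incident:", [a.summary for a in incident])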

However, there are a whole host of other challenges that arise with detecting this type of attack. Accessing those alerts in the respective on-premise network, SaaS and IaaS environments, understanding them and identifying which ones are related to each other takes significant experience, skill and time. And time favours no one but the threat actor.  

Figure 1: Anatomy of a cross domain attack

Diverse domains and empty grocery shelves

In April 2025, the UK faced a throwback to pandemic-era shortages when the supermarket giant Marks & Spencer (M&S) was crippled by a cyberattack, leaving empty shelves across its stores and massive disruptions to its online service.  

The threat actors, a group called Scattered Spider, exploited multiple layers of the organization’s digital infrastructure. Notably, the group was able to bypass the perimeter not by exploiting a technical vulnerability, but by exploiting an identity. They used social engineering tactics to impersonate an M&S employee and successfully request a password reset.

Once authenticated on the network, they accessed the Windows domain controller and exfiltrated the NTDS.dit file – a critical file containing hashed passwords for all users in the domain. After cracking those hashes offline, they returned to the network with escalated privileges and set their sights on the M&S cloud infrastructure. They then launched the encryption payload on the company’s ESXi virtual machines.

To wrap up, the threat actors used a compromised employee’s email account to send an “abuse-filled” email to the M&S CEO, bragging about the hack and demanding payment. This was possibly more of a psychological attack on the CEO than a technically integral part of the cyber kill chain. However, it revealed yet another one of M&S’s domains had been compromised.  

In summary, the group’s attack spanned four different domains:

  • Identity: Social engineering and user impersonation
  • Network: Exfiltration of the NTDS.dit file
  • Cloud: Ransomware deployed on ESXi VMs
  • Email: Compromise of a user account to contact the CEO

Adept at exploiting nuance

This year alone, several high-profile cyber-attacks have been attributed to the same group, Scattered Spider, including the hacks on Victoria’s Secret, Adidas, Hawaiian Airlines, WestJet, the Co-op and Harrods. This raises the question: what has made this group so successful?

In the M&S attack, they showcased their advanced proficiency in social engineering, which they use to bypass identity controls and gain initial access. They demonstrated deep knowledge of cloud environments by deploying ransomware onto virtualised infrastructure. However, this does not exemplify a cookie-cutter template of attack methods that brings them success every time.

According to CISA, Scattered Spider typically use a remarkable variety of TTPs (tactics, techniques and procedures) across multiple domains to carry out their campaigns. From leveraging legitimate remote access tools in the network, to manipulating AWS EC2 cloud instances or spoofing email domains, the list of TTPs used by the group is eye-wateringly long. Additionally, the group reportedly evades detection by “frequently modifying their TTPs”.  

If only they had better intentions. Any security director would be proud of a red team who not only has this depth and breadth of domain-centric knowledge but is also consistently upskilling.  

Yet, staying ahead of adversaries who seamlessly move across domains and fluently exploit every system they encounter is just one of many hurdles security teams face when investigating cross-domain attacks.  

Resource-heavy investigations

There was a significant delay in time to detection of the M&S intrusion. News outlet BleepingComputer reported that attackers infiltrated the M&S network as early as February 2025. They maintained persistence for weeks before launching the attack in late April 2025, indicating that early signs of compromise were missed or not correlated across domains.

While it’s unclear exactly why M&S missed the initial intrusion, one can speculate about the unique challenges that investigating cross-domain attacks presents.

Challenges of cross-domain investigation

First and foremost, correlation work is arduous because the string of malicious behaviour doesn’t always stem from the same device.  

A hypothetical attack could begin with an O365 credential creating a new email rule. Weeks later, that same credential authenticates anomalously on two different devices. One device downloads an .exe file from a strange website, while the other starts beaconing every minute to a rare external IP address that no one else in the organisation has ever connected to. A month later, a third device downloads 1.3 GiB of data from a recently spun up S3 bucket and gradually transfers a similar amount of data to that same rare IP.

Amid a sea of alerts and false positives, connecting the dots of a malicious attack like this takes time and meticulous correlation. Factor in the nuanced telemetry data related to each domain and things get even more complex.  

An analyst who specialises in network security may not understand the unique logging formats or API calls in the cloud environment. Perhaps they are proficient in protecting the Windows Active Directory but are unfamiliar with cloud IAM.  

Cloud is also an inherently more difficult domain to investigate. With 89% of organizations now operating in multi-cloud environments, time must be spent collecting logs, snapshots and access records. Coupled with the threat of an ephemeral asset disappearing, the risk of missing a threat is high. These are some of the reasons why research shows that 65% of organizations spend 3-5 extra days investigating cloud incidents.

Helpdesk teams handling user requests over the phone require a different set of skills altogether. Imagine a threat actor posing as an employee and articulately requesting an urgent password reset or a temporary MFA deactivation. The junior helpdesk agent, unfamiliar with the exception criteria, eager to help and feeling pressure from the persuasive manipulator at the end of the phone line, could easily fall victim to this type of social engineering.

Empowering analysts through intelligent automation

Even the most skilled analysts can’t manually piece together every strand of malicious activity stretching across domains, and skill alone isn’t enough. The biggest hurdle in investigating these attacks often comes down to whether the team has the time, context, and connected visibility needed to see the full picture.

Many organizations attempt to bridge the gap by stitching together a patchwork of security tools: one platform for email, another for endpoint, another for cloud, and so on. But this fragmentation reinforces the very silos that cross-domain attacks exploit. Logs must be exported, normalized, and parsed across tools, a process that is not only error-prone but slow. By the time indicators are correlated, the intrusion has often already deepened.
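A minimal sketch of that normalization step is shown below, mapping each tool’s export into one shared schema before any correlation happens; the source names and field names are assumptions about what a given tool might export, not real product formats.

def normalize(event, source):
    """Map tool-specific fields into a shared schema so email, cloud, and
    endpoint events can sit on one timeline."""
    if source == "email_gateway":   # assumed export format
        return {"time": event["received_at"], "entity": event["recipient"],
                "domain": "email", "action": event["verdict"]}
    if source == "cloud_audit":     # assumed export format
        return {"time": event["eventTime"], "entity": event["userIdentity"],
                "domain": "cloud", "action": event["eventName"]}
    if source == "edr":             # assumed export format
        return {"time": event["ts"], "entity": event["hostname"],
                "domain": "endpoint", "action": event["alert_name"]}
    raise ValueError(f"unknown source: {source}")

raw_events = [
    ({"received_at": "2025-04-01T09:42:00Z", "recipient": "user.a",
      "verdict": "new inbox rule"}, "email_gateway"),
    ({"eventTime": "2025-04-01T11:00:00Z", "userIdentity": "user.a",
      "eventName": "GetObject"}, "cloud_audit"),
]
timeline = sorted((normalize(e, s) for e, s in raw_events), key=lambda r: r["time"])
for row in timeline:
    print(row)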

That’s why automation and AI are becoming indispensable. The future of cross-domain investigation lies in systems that can:

  • Automatically correlate activity across domains and data sources, turning disjointed alerts into a single, interpretable incident.
  • Generate and test hypotheses autonomously, identifying likely chains of malicious behaviour without waiting for human triage.
  • Explain findings in human terms, reducing the knowledge gap between junior and senior analysts.
  • Operate within and across hybrid environments, from on-premise networks to SaaS, IaaS, and identity systems.

This is where Darktrace transforms alerting and investigations. Darktrace’s Cyber AI Analyst automates the process of correlation, hypothesis testing, and narrative building, not just within one domain, but across many. An anomalous O365 login, a new S3 bucket, and a suspicious beaconing host are stitched together automatically, surfacing the story behind the alerts rather than leaving it buried in telemetry.

Figure 2: How threat activity is correlated in Cyber AI Analyst

By analyzing events from disparate tools and sources, AI Analyst constructs a unified timeline of activity showing what happened, how it spread, and where to focus next. For analysts, it means investigation time is measured in minutes, not days. For security leaders, it means every member of the SOC, regardless of experience, can contribute meaningfully to a cross-domain response.

Figure 3: Correlation showcasing cross domains (SaaS and IaaS) in Cyber AI Analyst

Until now, forensic investigations were slow, manual, and reserved for only the largest organizations with specialized DFIR expertise. Darktrace / Forensic Acquisition & Investigation changes that by leveraging the scale and elasticity of the cloud itself to automate the entire investigation process. From capturing full disk and memory at detection to reconstructing attacker timelines in minutes, the solution turns fragmented workflows into streamlined investigations available to every team.

What once took days now takes minutes. Now, forensic investigations in the cloud are faster, more scalable, and finally accessible to every security team, no matter their size or expertise.

About the author
Benjamin Druttman
Cyber Security AI Technical Instructor

December 5, 2025

Atomic Stealer: Darktrace’s Investigation of a Growing macOS Threat


The Rise of Infostealers Targeting Apple Users

In a threat landscape historically dominated by Windows-based threats, the growing prevalence of macOS information stealers targeting Apple users is becoming an increasing concern for organizations. Infostealers are a type of malware designed to steal sensitive data from target devices, often enabling attackers to extract credentials and financial data for resale or further exploitation. Recent research identified infostealers as the largest category of new macOS malware, with an alarming 101% increase in the last two quarters of 2024 [1].

What is Atomic Stealer?

Among the most notorious is Atomic macOS Stealer (or AMOS), first observed in 2023. Known for its sophisticated build, Atomic Stealer can exfiltrate a wide range of sensitive information including keychain passwords, cookies, browser data and cryptocurrency wallets.

Originally marketed on Telegram as a Malware-as-a-Service (MaaS) offering, Atomic Stealer has become popular due to its ability to target macOS. Like other MaaS offerings, it includes services like a web panel for managing victims, with reports indicating a monthly subscription cost between $1,000 and $3,000 [2]. Although Atomic Stealer’s original intent was as a standalone MaaS product, its unique capability to target macOS has led to new variants emerging at an unprecedented rate.

Even more concerning, the most recent variant has now added a backdoor for persistent access [3]. This backdoor presents a significant threat, as Atomic Stealer campaigns are believed to have reached around 120 countries. The addition of a backdoor elevates Atomic Stealer to the rare category of backdoors potentially deployed at a global scale, something previously attributed only to nation-state threat actors [4].

This level of sophistication is also evident in the wide range of distribution methods observed since its first appearance, including fake application installers, malvertising and terminal command execution via the ClickFix technique. The ClickFix technique is particularly noteworthy: once the malware is downloaded onto the device, users are presented with what appears to be a legitimate macOS installation prompt. In reality, however, the user unknowingly initiates the execution of the Atomic Stealer malware.

This blog will focus on activity observed across multiple Darktrace customer environments where Atomic Stealer was detected, along with several indicators of compromise (IoCs). These included devices that successfully connected to endpoints associated with Atomic Stealer, those that attempted but failed to establish connections, and instances suggesting potential data exfiltration activity.

Darktrace’s Coverage of Atomic Stealer

As this evolving threat began to spread across the internet in June 2025, Darktrace observed a surge in Atomic Stealer activity, impacting numerous customers in 24 different countries worldwide. Initially, most of the cases detected in 2025 affected Darktrace customers within the Europe, Middle East, and Africa (EMEA) region. However, later in the year, Darktrace began to observe a more even distribution of cases across EMEA, the Americas (AMS), and Asia Pacific (APAC). While multiple sectors were impacted by Atomic Stealer, Darktrace customers in the education sector were the most affected, particularly during September and October, coinciding with the return to school and universities after summer closures. This spike likely reflects increased device usage as students returned and reconnected potentially compromised devices to school and campus environments.

Starting in June, Darktrace detected multiple instances of suspicious HTTP activity to external IPs in the range 45.94.47.0/24. Investigation by Darktrace’s Threat Research team revealed several distinct patterns: HTTP POST requests to the URI “/contact”, identical cURL user agents, and HTTP requests to “/api/tasks/[base64 string]” URIs.

Within one observed customer’s environment in July, Darktrace detected two devices making repeated HTTP connections over port 80 to IPs within the same range. The first, Device A, was observed making GET requests to the IP 45.94.47[.]158 (AS60781 LeaseWeb Netherlands B.V.), targeting the URI “/api/tasks/[base64string]” using the “curl/8.7.2” user agent. This pattern suggested beaconing activity and triggered the ‘Beaconing Activity to External Rare' model alert in Darktrace / NETWORK, with Device A’s Model Event Log showing repeated connections. The IP associated with this endpoint has since been flagged by multiple open-source intelligence (OSINT) vendors as being associated with Atomic Stealer [5].

Figure 1: Darktrace’s detection of Device A showing repeated connections to the suspicious IP address over port 80, indicative of beaconing behavior.
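The regularity of that traffic is what makes it stand out. Below is a minimal sketch of one way to flag it, measuring how constant the gaps between connections to a single rare endpoint are; the sample log, thresholds, and logic are illustrative and far simpler than a production detection.

import statistics
from datetime import datetime

# Illustrative connection log for one device:
# (timestamp, destination IP, URI, user agent)
connections = [
    (datetime(2025, 7, 1, 10, 0, 0), "45.94.47.158", "/api/tasks/aGVsbG8=", "curl/8.7.2"),
    (datetime(2025, 7, 1, 10, 1, 1), "45.94.47.158", "/api/tasks/aGVsbG8=", "curl/8.7.2"),
    (datetime(2025, 7, 1, 10, 2, 0), "45.94.47.158", "/api/tasks/aGVsbG8=", "curl/8.7.2"),
    (datetime(2025, 7, 1, 10, 3, 2), "45.94.47.158", "/api/tasks/aGVsbG8=", "curl/8.7.2"),
]

def looks_like_beaconing(conns, min_events=4, max_jitter=0.2):
    """Flag hosts calling the same destination at suspiciously regular intervals."""
    if len(conns) < min_events:
        return False
    times = sorted(c[0] for c in conns)
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    jitter = statistics.pstdev(gaps) / statistics.mean(gaps)  # coefficient of variation
    return jitter < max_jitter

print("beaconing suspected" if looks_like_beaconing(connections) else "no regular pattern")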

Darktrace’s Cyber AI Analyst subsequently launched an investigation into the activity, uncovering that the GET requests resulted in a ‘503 Service Unavailable’ response, likely indicating that the server was temporarily unable to process the requests.

Figure 2: Cyber AI Analyst Incident showing the 503 Status Code, indicating that the server was temporarily unavailable.

This unusual activity prompted Darktrace’s Autonomous Response capability to recommend several blocking actions for the device in an attempt to stop the malicious activity. However, as the customer’s Autonomous Response configuration was set to Human Confirmation Mode, Darktrace was unable to automatically apply these actions. Had Autonomous Response been fully enabled, these connections would have been blocked, likely rendering the malware ineffective at reaching its malicious command-and-control (C2) infrastructure.

Figure 3: Autonomous Response’s suggested actions to block suspicious connectivity on Device A in the first customer environment.

In another customer environment in August, Darktrace detected similar IoCs, noting a device establishing a connection to the external endpoint 45.94.47[.]149 (ASN: AS57043 Hostkey B.V.). Shortly after the initial connections, the device was observed making repeated requests to the same destination IP, targeting the URI /api/tasks/[base64string] with the user agent curl/8.7.1, again suggesting beaconing activity. Further analysis of this endpoint after the fact revealed links to Atomic Stealer in OSINT reporting [6].

Figure 4:  Cyber AI Analyst investigation finding a suspicious URI and user agent for the offending device within the second customer environment.

As with the customer in the first case, had Darktrace’s Autonomous Response been properly configured on the customer’s network, it would have been able to block connectivity with 45.94.47[.]149. Instead, Darktrace suggested recommended actions that the customer’s security team could manually apply to help contain the attack.

Figure 5: Autonomous Response’s suggested actions to block suspicious connectivity to IP 45.94.47[.]149 for the device within the second customer environment.

In the most recent case observed by Darktrace in October, multiple instances of Atomic Stealer activity were seen across one customer’s environment, with two devices communicating with Atomic Stealer C2 infrastructure. During this incident, one device was observed making an HTTP GET request to the IP 45.94.47[.]149 (ASN: AS60781 LeaseWeb Netherlands B.V.). These connections targeted the URI /api/tasks/[base64string], using the user agent curl/8.7.1.

Shortly afterward, the device began making repeated connections over port 80 to the same external IP, 45.94.47[.]149. This activity continued for several days until Darktrace detected the device making an HTTP POST request to a new IP, 45.94.47[.]211 (ASN: AS57043 Hostkey B.V.), this time targeting the URI /contact, again using the curl/8.7.1 user agent. Similar to the other IPs observed in beaconing activity, OSINT reporting later linked this one to information stealer C2 infrastructure [7].

Figure 6: Darktrace’s detection of suspicious beaconing connectivity with the suspicious IP 45.94.47.211.

Further investigation into this customer’s network revealed that similar activity had been occurring as far back as August, when Darktrace detected data exfiltration on a second device. Cyber AI Analyst identified this device making a single HTTP POST connection to the external IP 45.94.47[.]144, another IP with malicious links [8], using the user agent curl/8.7.1 and targeting the URI /contact.

Figure 7:  Cyber AI Analyst investigation finding a successful POST request to 45.94.47[.]144 for the device within the third customer environment.

A deeper investigation into the technical details within the POST request revealed the presence of a file named “out.zip”, suggesting potential data exfiltration.

Figure 8: Advanced Search log in Darktrace / NETWORK showing “out.zip”, indicating potential data exfiltration for a device within the third customer environment.
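For teams wanting to hunt for the same pattern in their own telemetry, the sketch below scans exported HTTP records for POSTs to the IoC range that carry a suspicious archive name; the record structure and field names are illustrative assumptions about an export, not a Darktrace query language.

import ipaddress

IOC_RANGE = ipaddress.ip_network("45.94.47.0/24")   # range observed in this campaign
SUSPICIOUS_FILENAMES = {"out.zip"}

# Illustrative HTTP records exported from network monitoring.
http_records = [
    {"src": "10.1.2.3", "dst": "45.94.47.144", "method": "POST",
     "uri": "/contact", "user_agent": "curl/8.7.1", "filenames": ["out.zip"]},
    {"src": "10.1.2.9", "dst": "203.0.113.10", "method": "GET",
     "uri": "/index.html", "user_agent": "Mozilla/5.0", "filenames": []},
]

def exfil_candidates(records):
    """Yield POST requests to the IoC range that upload a suspicious archive."""
    for rec in records:
        if (rec["method"] == "POST"
                and ipaddress.ip_address(rec["dst"]) in IOC_RANGE
                and SUSPICIOUS_FILENAMES.intersection(rec["filenames"])):
            yield rec

for hit in exfil_candidates(http_records):
    print(f"possible exfiltration: {hit['src']} -> {hit['dst']}{hit['uri']} ({hit['filenames']})")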

Similarly, in another environment, Darktrace was able to collect a packet capture (PCAP) of suspected Atomic Stealer activity, which revealed potential indicators of data exfiltration. This included the presence of the “out.zip” file being exfiltrated via an HTTP POST request, along with data that appeared to contain details of an Electrum cryptocurrency wallet and possible passwords.

Read more about Darktrace’s full deep dive into a similar case where this tactic was leveraged by malware as part of an elaborate cryptocurrency scam.

Figure 9: PCAP of an HTTP POST request showing the file “out.zip” and details of Electrum Cryptocurrency wallet.

Although recent research attributes the “out.zip” file to a new variant named SHAMOS [9], it has also been linked more broadly to Atomic Stealer [10]. Indeed, this is not the first time Darktrace has seen the “out.zip” file in cases involving Atomic Stealer. In a previous blog detailing a social engineering campaign that targeted cryptocurrency users with the Realst Stealer, the macOS version of Realst contained a binary that was found to be Atomic Stealer, and similar IoCs were identified, including artifacts of data exfiltration such as the “out.zip” file.

Conclusion

The rapid rise of Atomic Stealer and its ability to target macOS marks a significant shift in the threat landscape and should serve as a clear warning to Apple users who were traditionally perceived as more secure in a malware ecosystem historically dominated by Windows-based threats.

Atomic Stealer’s growing popularity is now challenging that perception, expanding its reach and accessibility to a broader range of victims. Even more concerning is the emergence of a variant embedded with a backdoor, which is likely to increase its appeal among a diverse range of threat actors. Darktrace’s ability to adapt and detect new tactics and IoCs in real time delivers the proactive defense organizations need to protect themselves against emerging threats before they can gain momentum.

Credit to Isabel Evans (Cyber Analyst), Dylan Hinz (Associate Principal Cyber Analyst)
Edited by Ryan Traill (Analyst Content Lead)

Appendices

References

1. https://www.scworld.com/news/infostealers-targeting-macos-jumped-by-101-in-second-half-of-2024
2. https://www.kandji.io/blog/amos-macos-stealer-analysis
3. https://www.broadcom.com/support/security-center/protection-bulletin/amos-stealer-adds-backdoor
4. https://moonlock.com/amos-backdoor-persistent-access
5. https://www.virustotal.com/gui/ip-address/45.94.47.158/detection
6. https://www.trendmicro.com/en_us/research/25/i/an-mdr-analysis-of-the-amos-stealer-campaign.html
7. https://www.virustotal.com/gui/ip-address/45.94.47.211/detection
8. https://www.virustotal.com/gui/ip-address/45.94.47.144/detection
9. https://securityaffairs.com/181441/malware/over-300-entities-hit-by-a-variant-of-atomic-macos-stealer-in-recent-campaign.html
10. https://binhex.ninja/malware-analysis-blogs/amos-stealer-atomic-stealer-malware.html

Darktrace Model Detections

Darktrace / NETWORK

  • Compromise / Beaconing Activity To External Rare
  • Compromise / HTTP Beaconing to New IP
  • Compromise / HTTP Beaconing to Rare Destination
  • Anomalous Connection / New User Agent to IP Without Hostname
  • Device / New User Agent
  • Compromise / Sustained TCP Beaconing Activity To Rare Endpoint
  • Compromise / Slow Beaconing Activity To External Rare
  • Anomalous Connection / Posting HTTP to IP Without Hostname
  • Compromise / Quick and Regular Windows HTTP Beaconing

Autonomous Response

  • Antigena / Network / Significant Anomaly::Antigena Alerts Over Time Block
  • Antigena / Network / Significant Anomaly::Antigena Significant Anomaly from Client Block
  • Antigena / Network / External Threat::Antigena Suspicious Activity Block

List of IoCs

  • 45.94.47[.]149 – IP – Atomic C2 Endpoint
  • 45.94.47[.]144 – IP – Atomic C2 Endpoint
  • 45.94.47[.]158 – IP – Atomic C2 Endpoint
  • 45.94.47[.]211 – IP – Atomic C2 Endpoint
  • out.zip – File Output – Possible ZIP file for Data Exfiltration

MITRE ATT&CK Mapping:

Tactic - Technique - Sub-Technique

Execution - T1204.002 - User Execution: Malicious File

Credential Access - T1555.001 - Credentials from Password Stores: Keychain

Credential Access - T1555.003 - Credentials from Web Browsers

Command & Control - T1071 - Application Layer Protocol

Exfiltration - T1041 - Exfiltration Over C2 Channel

About the author
Isabel Evans
Cyber Analyst