Blog / Email / April 10, 2023

Employee-Conscious Email Security Solutions in the Workforce

Email threats commonly affect organizations. Read Darktrace's expert insights on how to safeguard your business by educating employees about email security.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Dan Fein
VP, Product
Written by
Carlos Gray
Senior Product Marketing Manager, Email

When considering email security, IT teams have historically had to choose between excluding employees entirely, or including them but giving them too much power and implementing unenforceable, trust-based policies that try to make up for it. 

However, just because email security should not rely on employees, this does not mean they should be excluded entirely. Employees are the ones interacting with emails daily, and their experiences and behaviors can provide valuable security insights and even influence productivity. 

AI technology can support employee engagement in a non-intrusive, nuanced way that not only maintains email security but enhances it. 

Finding a Balance of Employee Involvement in Security Strategies

Historically, security solutions offered ‘all or nothing’ approaches to employee engagement. On one hand, when employees are involved, they are unreliable. Employees cannot all be experts in security on top of their actual job responsibilities, and mistakes are bound to happen in fast-paced environments.  

Although there have been attempts to raise security awareness, they often fall short: training emails lack context and realism, leaving employees with a poor understanding that often leads them to report emails that are actually safe. Having users constantly triage their inboxes and report safe emails wastes time and drains both their own productivity and that of the security team.

Other historic forms of employee involvement also put security at risk. For example, users could create blanket rules through feedback, which could lead to common problems like safe-listing every email that comes from the gmail.com domain. Other times, employees could choose for themselves to release emails without context or limitations, introducing major risks to the organization. While these types of actions allow employees to participate in security, they do so at the cost of security. 

Even lower stakes employee involvement can prove ineffective. For example, excessive warnings when sending emails to external contacts can lead to banner fatigue. When employees see the same warning message or alert at the top of every message, it’s human nature that they soon become accustomed and ultimately immune to it.

On the other hand, when employees are fully excluded from security, an opportunity is missed to fine-tune security according to the actual users and to gain feedback on how well the email security solution is working. 

So, both of the historically conventional options, including or excluding employees, prove incapable of leveraging employees effectively. The best email security practice strikes a balance between these two extremes, allowing more nuanced interactions that maintain security without interrupting daily business operations. This can be achieved with AI that tailors the interactions specifically to each employee to add to security instead of detracting from it. 

Reducing False Reports While Improving Security Awareness Training 

Humans and AI-powered email security can simultaneously level up by working together. AI can inform employees and employees can inform AI in an employee-AI feedback loop.  

By understanding ‘normal’ behavior for every email user, AI can identify unusual, risky components of an email and take precise action based on the nature of the email to neutralize them, such as rewriting links, flattening attachments, and moving emails to junk. AI can go one step further and explain in non-technical language why it has taken a specific action, which educates users. In contrast to point-in-time simulated phishing email campaigns, this means AI can share its analysis in context and in real time at the moment a user is questioning an email. 
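To make this concrete, below is a minimal Python sketch, not Darktrace's implementation, of how per-component anomaly indicators might be mapped to the proportionate actions described above. The indicator names, thresholds and explanation wording are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class EmailAnalysis:
    """Illustrative per-email anomaly indicators, each scored 0.0-1.0."""
    link_anomaly: float = 0.0        # e.g. rare destination domain, mismatched display text
    attachment_anomaly: float = 0.0  # e.g. file type this sender has never used
    sender_anomaly: float = 0.0      # e.g. first-time contact, lookalike display name
    actions: list = field(default_factory=list)
    explanations: list = field(default_factory=list)

def apply_proportionate_actions(email: EmailAnalysis, threshold: float = 0.7) -> EmailAnalysis:
    """Take the least intrusive action that neutralises each risky component,
    and record a plain-language reason the user can read in context."""
    if email.link_anomaly >= threshold:
        email.actions.append("rewrite_links")
        email.explanations.append("Links were locked because they point to a site "
                                  "this sender has never used before.")
    if email.attachment_anomaly >= threshold:
        email.actions.append("flatten_attachments")
        email.explanations.append("Attachments were converted to a safe preview because "
                                  "the file type is unusual for this sender.")
    # Only quarantine when several independent signals agree.
    if (email.link_anomaly + email.attachment_anomaly + email.sender_anomaly) / 3 >= threshold:
        email.actions.append("move_to_junk")
        email.explanations.append("The message was held because multiple properties "
                                  "do not match this sender's normal behaviour.")
    return email
```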

The employee-AI feedback loop educates employees so that they can serve as additional enrichment data. The AI determines the appropriate level at which to inform and teach users, while not relying on them for threat detection.

In the other direction, the AI learns from users’ activity in the inbox and gradually factors this into its decision-making. This is not a ‘one size fits all’ mechanism – one employee marking an email as safe will never result in blanket approval across the business – but over time, patterns can be observed and autonomous decision-making enhanced.  

Figure 1: The employee-AI feedback loop increases employee understanding without putting security at risk.
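As a rough illustration of how user feedback can be weighted gradually and scoped to the individual rather than becoming a blanket rule, consider the sketch below. The class, its learning rate and the trust-score representation are assumptions for illustration, not Darktrace's model.

```python
from collections import defaultdict

class FeedbackModel:
    """Sketch of per-user feedback weighting: a single 'mark as safe' barely
    nudges that one user's prior and never creates an org-wide allow rule."""

    def __init__(self, learning_rate: float = 0.05):
        self.learning_rate = learning_rate
        # (user, sender) -> score in [0, 1]; 0.5 means "no opinion yet"
        self.user_sender_trust = defaultdict(lambda: 0.5)

    def record_feedback(self, user: str, sender: str, marked_safe: bool) -> None:
        key = (user, sender)
        target = 1.0 if marked_safe else 0.0
        current = self.user_sender_trust[key]
        # Exponential moving average: repeated, consistent feedback is needed
        # before the score moves far from its neutral starting point.
        self.user_sender_trust[key] = current + self.learning_rate * (target - current)

    def trust(self, user: str, sender: str) -> float:
        """Scoped to this user only, so one employee's click never
        safe-lists a sender for the whole business."""
        return self.user_sender_trust[(user, sender)]
```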

The employee-AI feedback loop draws out the maximum potential benefits of employee involvement in email security. Other email security solutions only consider the security team, enhancing its workflow but overlooking the employees who report suspicious emails. Employees who try to do the right thing but blindly report emails never learn or improve and end up wasting their own time. By considering employees and improving security awareness training, the employee-AI feedback loop can level up users. They learn from the AI explanations how to identify malicious components, and then report fewer emails with greater accuracy. 

While AI programs have classically acted like black boxes, Darktrace trains its AI on the best data, the organization’s actual employees, and invites both the security team and employees to see the reasoning behind its conclusions. Over time, employees will trust themselves more as they better learn how to discern unsafe emails. 

Leveraging AI to Generate Productivity Gains

Uniquely, AI-powered email security can have effects outside of security-related areas. It can save time by managing non-productive email. As the AI constantly learns employee behavior in the inbox, it becomes extremely effective at detecting spam and graymail – emails that aren't necessarily malicious, but clutter inboxes and hamper productivity. It does this on a per-user basis, specific to how each employee treats spam, graymail, and newsletters. The AI learns to detect this clutter and eventually learns which to pull from the inbox, saving time for the employees. This highlights how security solutions can go even further than merely protecting the email environment with a light touch, to the point where AI can promote productivity gains by automating tasks like inbox sorting.
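A toy sketch of what per-user graymail handling could look like is shown below; the observation counts and ignore-ratio threshold are invented for illustration and do not reflect Darktrace's logic.

```python
from collections import defaultdict

class GraymailProfile:
    """Per-user graymail sketch: the same newsletter can be clutter for one
    employee and useful reading for another."""

    def __init__(self, min_observations: int = 20, ignore_ratio: float = 0.9):
        self.min_observations = min_observations
        self.ignore_ratio = ignore_ratio
        # (user, sender) -> [emails_delivered, emails_ignored]
        self.stats = defaultdict(lambda: [0, 0])

    def observe(self, user: str, sender: str, was_ignored: bool) -> None:
        counts = self.stats[(user, sender)]
        counts[0] += 1
        counts[1] += 1 if was_ignored else 0

    def should_divert(self, user: str, sender: str) -> bool:
        delivered, ignored = self.stats[(user, sender)]
        if delivered < self.min_observations:
            return False          # not enough evidence about this user's habits yet
        return ignored / delivered >= self.ignore_ratio
```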

Preventing Email Mishaps: How to Deal with Human Error

Improved user understanding and decision making cannot stop natural human error. Employees are bound to make mistakes and can easily send emails to the wrong people, especially when Outlook auto-fills the wrong recipient. This can have effects ranging anywhere from embarrassing to critical, with major implications on compliance, customer trust, confidential intellectual property, and data loss. 

However, AI can help reduce instances of accidentally sending emails to the wrong people. When a user goes to send an email in Outlook, the AI will analyze the recipients. It considers the contextual relationship between the sender and recipients, the relationships the recipients have with each other, how similar each recipient’s name and history is to other known contacts, and the names of attached files.  

If the AI determines that the email is outside of a user’s typical behavior, it may alert the user. Security teams can customize what the AI does next: it can block the email, block the email but allow the user to override it, or do nothing but invite the user to think twice. Since the AI analyzes each email, these alerts are more effective than consistent, blanket alerts warning about external recipients, which often go ignored. With this targeted approach, the AI prevents data leakage and reduces cyber risk. 
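The sketch below illustrates the kind of per-recipient checks described above, assuming hypothetical inputs such as the sender's correspondence history, known contacts and attachment names. The scoring weights, keyword list and policy options are illustrative, not Darktrace's algorithm.

```python
import difflib

def recipient_risk(sender_history: set,
                   known_contacts: set,
                   recipients: list,
                   attachment_names: list) -> dict:
    """Score each recipient: never-before-seen addresses that closely resemble
    a known contact (the classic autocomplete slip) score highest, and
    sensitive-looking attachments raise the stakes further."""
    sensitive_hints = ("contract", "payroll", "confidential", "financial")
    risk = {}
    for rcpt in recipients:
        score = 0.0
        if rcpt not in sender_history:
            score += 0.5                                   # sender has never emailed this address
            close = difflib.get_close_matches(rcpt, known_contacts, n=1, cutoff=0.8)
            if close and close[0] != rcpt:
                score += 0.3                               # looks like a typo of a real contact
        if any(h in name.lower() for name in attachment_names for h in sensitive_hints):
            score += 0.2                                   # sensitive attachment increases impact
        risk[rcpt] = min(score, 1.0)
    return risk

def respond(score: float, policy: str = "warn") -> str:
    """Security-team policy: 'block', 'warn' (block but allow override) or 'prompt'."""
    if score < 0.6:
        return "send"
    return {"block": "block", "warn": "warn_with_override", "prompt": "ask_user"}[policy]
```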

Since the AI is always on and continuously learning, it can adapt autonomously to employee changes. If the role of an employee evolves, the AI will learn the new normal, including common behaviors, recipients, attached file names, and more. This allows the AI to continue effectively flagging potential instances of human error, without needing manual rule changes or disrupting the employee’s workflow. 

Email Security Informed by Employee Experience

As the practical users of email, employees should be considered when designing email security. This employee-conscious lens to security can strengthen defenses, improve productivity, and prevent data loss.  

In these ways, email security can benefit both employees and security teams. Employees can become another layer of defense with improved security awareness training that cuts down on false reports of safe emails. This insight into employee email behavior can also enhance employee productivity by learning and sorting graymail. Finally, viewing security in relation to employees can help security teams deploy tools that reduce data loss by flagging misdirected emails. With these capabilities, Darktrace/Email™ enables security teams to optimize the balance of employee involvement in email security.


Blog / AI / December 5, 2025

Simplifying Cross Domain Investigations


Cross-domain gaps mean cross-domain attacks  

Organizations are built on increasingly complex digital estates. Nowadays, the average IT ecosystem spans a large web of interconnected domains like identity, network, cloud, and email.  

While these domain-specific technologies may boost business efficiency and scalability, they also provide blind spots where attackers can shelter undetected. Threat actors can slip past defenses because security teams often use different detection tools in each realm of their digital infrastructure. Adversaries will purposefully execute different stages of an attack across different domains, ensuring no single tool picks up too many traces of their malicious activity. Identifying and investigating this type of threat, known as a cross-domain attack, requires mastery in event correlation.  

For example, one isolated network scan detected on your network may seem harmless at first glance. Only when it is stitched together with a rare O365 login, a new email rule and anomalous remote connections to an S3 bucket in AWS does it begin to manifest as an actual intrusion.  
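As a simplified illustration of the correlation problem, the sketch below stitches alerts into incidents whenever they share an entity (a user, device or IP) within a time window. Real correlation engines use far richer scoring; the Alert fields and window size here are assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Alert:
    domain: str          # "network", "email", "saas", "cloud", "identity"
    timestamp: datetime
    entities: set        # users, devices and IPs involved in the alert
    summary: str

def correlate(alerts, window=timedelta(days=30)):
    """Greedy stitching: an alert joins an existing incident when it shares at
    least one entity with it and falls inside the time window; only incidents
    spanning more than one domain are returned."""
    incidents = []       # each: {"entities": set, "alerts": list, "last_seen": datetime}
    for alert in sorted(alerts, key=lambda a: a.timestamp):
        for inc in incidents:
            if alert.entities & inc["entities"] and alert.timestamp - inc["last_seen"] <= window:
                inc["alerts"].append(alert)
                inc["entities"] |= alert.entities
                inc["last_seen"] = alert.timestamp
                break
        else:
            incidents.append({"entities": set(alert.entities),
                              "alerts": [alert],
                              "last_seen": alert.timestamp})
    # e.g. the isolated network scan only becomes interesting once an alert
    # sharing the same credential or device arrives from O365, email or AWS.
    return [i for i in incidents if len({a.domain for a in i["alerts"]}) > 1]
```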

However, there are a whole host of other challenges that arise with detecting this type of attack. Accessing those alerts in the respective on-premise network, SaaS and IaaS environments, understanding them and identifying which ones are related to each other takes significant experience, skill and time. And time favours no one but the threat actor.  

Figure 1: Anatomy of a cross domain attack

Diverse domains and empty grocery shelves

In April 2025, the UK faced a throwback to pandemic-era shortages when the supermarket giant Marks & Spencer (M&S) was crippled by a cyberattack, leaving empty shelves across its stores and massive disruptions to its online service.  

The threat actors, a group called Scattered Spider, exploited multiple layers of the organization’s digital infrastructure. Notably, the group were able to bypass the perimeter not by exploiting a technical vulnerability, but by exploiting an identity. They used social engineering tactics to impersonate an M&S employee and successfully request a password reset.  

Once authenticated on the network, they accessed the Windows domain controller and exfiltrated the NTDS.dit file – a critical file containing hashed passwords for all users in the domain. After cracking those hashes offline, they returned to the network with escalated privileges and set their sights on the M&S cloud infrastructure. They then launched the encryption payload on the company’s ESXi virtual machines.

To wrap up, the threat actors used a compromised employee’s email account to send an “abuse-filled” email to the M&S CEO, bragging about the hack and demanding payment. This was possibly more of a psychological attack on the CEO than a technically integral part of the cyber kill chain. However, it revealed yet another one of M&S’s domains had been compromised.  

In summary, the group’s attack spanned four different domains:

Identity: Social engineering user impersonation

Network: Exfiltration of NTDS.dit file

Cloud: Ransomware deployed on ESXi VMs

Email: Compromise of user account to contact the CEO

Adept at exploiting nuance

This year alone, several high-profile cyber-attacks have been attributed to the same group, Scattered Spider, including the hacks on Victoria’s Secret, Adidas, Hawaiian Airlines, WestJet, the Co-op and Harrods. This raises the question: what has made this group so successful?

In the M&S attack, they showcased their advanced proficiency in social engineering, which they use to bypass identity controls and gain initial access. They demonstrated deep knowledge of cloud environments by deploying ransomware onto virtualised infrastructure. However, this is not a cookie-cutter template of attack methods that brings them success every time.

According to CISA, Scattered Spider typically use a remarkable variety of TTPs (tactics, techniques and procedures) across multiple domains to carry out their campaigns. From leveraging legitimate remote access tools in the network, to manipulating AWS EC2 cloud instances or spoofing email domains, the list of TTPs used by the group is eye-wateringly long. Additionally, the group reportedly evades detection by “frequently modifying their TTPs”.  

If only they had better intentions. Any security director would be proud of a red team who not only has this depth and breadth of domain-centric knowledge but is also consistently upskilling.  

Yet, staying ahead of adversaries who seamlessly move across domains and fluently exploit every system they encounter is just one of many hurdles security teams face when investigating cross-domain attacks.  

Resource-heavy investigations

There was a significant delay in time to detection of the M&S intrusion. News outlet BleepingComputer reported that attackers infiltrated the M&S network as early as February 2025. They maintained persistence for weeks before launching the attack in late April 2025, indicating that early signs of compromise were missed or not correlated across domains.

While it’s unclear exactly why M&S missed the initial intrusion, one can speculate about the unique challenges investigating cross-domain attacks present.  

Challenges of cross-domain investigation

First and foremost, correlation work is arduous because the string of malicious behaviour doesn’t always stem from the same device.  

A hypothetical attack could begin with an O365 credential creating a new email rule. Weeks later, that same credential authenticates anomalously on two different devices. One device downloads an .exe file from a strange website, while the other starts beaconing every minute to a rare external IP address that no one else in the organisation has ever connected to. A month later, a third device downloads 1.3 GiB of data from a recently spun up S3 bucket and gradually transfers a similar amount of data to that same rare IP.

Amid a sea of alerts and false positives, connecting the dots of a malicious attack like this takes time and meticulous correlation. Factor in the nuanced telemetry data related to each domain and things get even more complex.  

An analyst who specialises in network security may not understand the unique logging formats or API calls in the cloud environment. Perhaps they are proficient in protecting the Windows Active Directory but are unfamiliar with cloud IAM.  

Cloud is also an inherently more difficult domain to investigate. With 89% of organizations now operating in multi-cloud environments, time must be spent collecting logs, snapshots and access records. Coupled with the possibility of an ephemeral asset disappearing, the risk of missing a threat is high. These are some of the reasons why research shows that 65% of organizations spend 3-5 extra days investigating cloud incidents.  

Helpdesk teams handling user requests over the phone require a different set of skills altogether. Imagine a threat actor posing as an employee and articulately requesting an urgent password reset or a temporary MFA deactivation. The junior helpdesk agent, unfamiliar with the exception criteria, eager to help and feeling pressure from the persuasive manipulator at the end of the phone line, could easily fall victim to this type of social engineering.  

Empowering analysts through intelligent automation

Even the most skilled analysts can’t manually piece together every strand of malicious activity stretching across domains. But skill alone isn’t enough. The biggest hurdle in investigating these attacks often comes down to whether the team have the time, context, and connected visibility needed to see the full picture.

Many organizations attempt to bridge the gap by stitching together a patchwork of security tools. One platform for email, another for endpoint, another for cloud, and so on. But this fragmentation reinforces the very silos that cross-domain attacks exploit. Logs must be exported, normalized, and parsed across tools, a process that is not only error-prone but slow. By the time indicators are correlated, the intrusion has often already deepened.

That’s why automation and AI are becoming indispensable. The future of cross-domain investigation lies in systems that can:

  • Automatically correlate activity across domains and data sources, turning disjointed alerts into a single, interpretable incident.
  • Generate and test hypotheses autonomously, identifying likely chains of malicious behaviour without waiting for human triage.
  • Explain findings in human terms, reducing the knowledge gap between junior and senior analysts (a toy sketch of this idea follows the list).
  • Operate within and across hybrid environments, from on-premise networks to SaaS, IaaS, and identity systems.
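As a toy illustration of the hypothesis and explanation points above, and emphatically not Darktrace's Cyber AI Analyst, correlated events could be mapped to kill-chain stages by keyword and emitted as a plain-language timeline. The stage names and keywords are assumptions.

```python
# Toy narrative builder: assign each correlated event to a kill-chain stage by
# keyword, then emit a readable timeline. Stage names and keywords are invented.
STAGE_KEYWORDS = {
    "initial access": ("login", "password reset", "mfa"),
    "persistence": ("email rule", "scheduled task"),
    "command and control": ("beacon", "rare external ip"),
    "exfiltration": ("upload", "s3", "data transfer"),
}

def build_narrative(events):
    """events: list of (iso_timestamp, description) tuples."""
    lines = []
    for ts, desc in sorted(events):
        stage = next((s for s, kws in STAGE_KEYWORDS.items()
                      if any(k in desc.lower() for k in kws)), "unclassified")
        lines.append(f"{ts}  [{stage}]  {desc}")
    return "\n".join(lines)

print(build_narrative([
    ("2025-02-03T09:12", "Anomalous O365 login from a new ASN"),
    ("2025-02-03T09:40", "New inbox email rule created"),
    ("2025-03-01T14:05", "Device beaconing to a rare external IP"),
    ("2025-03-02T02:30", "1.3 GiB data transfer to a new S3 bucket"),
]))
```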

This is where Darktrace transforms alerting and investigations. Darktrace’s Cyber AI Analyst automates the process of correlation, hypothesis testing, and narrative building, not just within one domain, but across many. An anomalous O365 login, a new S3 bucket, and a suspicious beaconing host are stitched together automatically, surfacing the story behind the alerts rather than leaving it buried in telemetry.

Figure 2: How threat activity is correlated in Cyber AI Analyst

By analyzing events from disparate tools and sources, AI Analyst constructs a unified timeline of activity showing what happened, how it spread, and where to focus next. For analysts, it means investigation time is measured in minutes, not days. For security leaders, it means every member of the SOC, regardless of experience, can contribute meaningfully to a cross-domain response.

Figure 3: Correlation across domains (SaaS and IaaS) in Cyber AI Analyst

Until now, forensic investigations were slow, manual, and reserved for only the largest organizations with specialized DFIR expertise. Darktrace / Forensic Acquisition & Investigation changes that by leveraging the scale and elasticity of the cloud itself to automate the entire investigation process. From capturing full disk and memory at detection to reconstructing attacker timelines in minutes, the solution turns fragmented workflows into streamlined investigations available to every team.

What once took days now takes minutes. Now, forensic investigations in the cloud are faster, more scalable, and finally accessible to every security team, no matter their size or expertise.

About the author
Benjamin Druttman
Cyber Security AI Technical Instructor

Blog / Network / December 5, 2025

Atomic Stealer: Darktrace’s Investigation of a Growing macOS Threat


The Rise of Infostealers Targeting Apple Users

In a threat landscape historically dominated by Windows-based threats, the growing prevalence of macOS information stealers targeting Apple users is becoming an increasing concern for organizations. Infostealers are a type of malware designed to steal sensitive data from target devices, often enabling attackers to extract credentials and financial data for resale or further exploitation. Recent research identified infostealers as the largest category of new macOS malware, with an alarming 101% increase in the last two quarters of 2024 [1].

What is Atomic Stealer?

Among the most notorious is Atomic macOS Stealer (or AMOS), first observed in 2023. Known for its sophisticated build, Atomic Stealer can exfiltrate a wide range of sensitive information including keychain passwords, cookies, browser data and cryptocurrency wallets.

Originally marketed on Telegram as Malware-as-a-Service (MaaS), Atomic Stealer has become popular due to its ability to target macOS. Like other MaaS offerings, it includes services like a web panel for managing victims, with reports indicating a monthly subscription cost between $1,000 and $3,000 [2]. Although Atomic Stealer’s original intent was as a standalone MaaS product, its unique capability to target macOS has led to new variants emerging at an unprecedented rate.

Even more concerning, the most recent variant has now added a backdoor for persistent access [3]. This backdoor presents a significant threat, as Atomic Stealer campaigns are believed to have reached around 120 countries. The addition of a backdoor elevates Atomic Stealer into the rare category of backdoors potentially deployed at a global scale, something previously attributed only to nation-state threat actors [4].

This level of sophistication is also evident in the wide range of distribution methods observed since its first appearance, including fake application installers, malvertising and terminal command execution via the ClickFix technique. The ClickFix technique is particularly noteworthy: once the malware is downloaded onto the device, users are presented with what appears to be a legitimate macOS installation prompt. In reality, however, the user unknowingly initiates the execution of the Atomic Stealer malware.

This blog will focus on activity observed across multiple Darktrace customer environments where Atomic Stealer was detected, along with several indicators of compromise (IoCs). These included devices that successfully connected to endpoints associated with Atomic Stealer, those that attempted but failed to establish connections, and instances suggesting potential data exfiltration activity.

Darktrace’s Coverage of Atomic Stealer

As this evolving threat began to spread across the internet in June 2025, Darktrace observed a surge in Atomic Stealer activity, impacting numerous customers in 24 different countries worldwide. Initially, most of the cases detected in 2025 affected Darktrace customers within the Europe, Middle East, and Africa (EMEA) region. However, later in the year, Darktrace began to observe a more even distribution of cases across EMEA, the Americas (AMS), and Asia Pacific (APAC). While multiple sectors were impacted by Atomic Stealer, Darktrace customers in the education sector were the most affected, particularly during September and October, coinciding with the return to school and universities after summer closures. This spike likely reflects increased device usage as students returned and reconnected potentially compromised devices to school and campus environments.

Starting in June, Darktrace detected multiple instances of suspicious HTTP activity involving external connections to IPs in the range 45.94.47.0/24. Investigation by Darktrace’s Threat Research team revealed several distinct patterns: HTTP POST requests to the URI “/contact”, identical cURL user agents, and HTTP requests to “/api/tasks/[base64 string]” URIs.
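As an illustration only, and not Darktrace's detection logic, these patterns could be screened for in ordinary web-proxy or firewall logs. The dictionary keys below assume a generic log schema rather than any specific product's field names.

```python
import re

# Patterns described above: cURL user agents, POSTs to /contact and requests to
# /api/tasks/<base64> on hosts in 45.94.47.0/24.
TASK_URI = re.compile(r"^/api/tasks/[A-Za-z0-9+/=]+$")
SUSPECT_PREFIX = "45.94.47."

def matches_atomic_pattern(event: dict) -> bool:
    """Return True when a single proxy-log entry matches the traffic patterns
    reported in this post; field names are assumed, adapt to your schema."""
    if not event.get("dest_ip", "").startswith(SUSPECT_PREFIX):
        return False
    uri = event.get("uri", "")
    ua = event.get("user_agent", "")
    contact_post = event.get("method") == "POST" and uri == "/contact"
    task_polling = event.get("method") == "GET" and bool(TASK_URI.match(uri))
    return ua.startswith("curl/") and (contact_post or task_polling)
```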

Within one observed customer’s environment in July, Darktrace detected two devices making repeated HTTP connections over port 80 to IPs within the same range. The first, Device A, was observed making GET requests to the IP 45.94.47[.]158 (AS60781 LeaseWeb Netherlands B.V.), targeting the URI “/api/tasks/[base64string]” using the “curl/8.7.2” user agent. This pattern suggested beaconing activity and triggered the ‘Beaconing Activity to External Rare’ model alert in Darktrace / NETWORK, with Device A’s Model Event Log showing repeated connections. The IP associated with this endpoint has since been flagged by multiple open-source intelligence (OSINT) vendors as being associated with Atomic Stealer [5].

Figure 1: Darktrace’s detection of Device A showing repeated connections to the suspicious IP address over port 80, indicative of beaconing behavior.

Darktrace’s Cyber AI Analyst subsequently launched an investigation into the activity, uncovering that the GET requests resulted in a ‘503 Service Unavailable’ response, likely indicating that the server was temporarily unable to process the requests.

Figure 2: Cyber AI Analyst Incident showing the 503 Status Code, indicating that the server was temporarily unavailable.

This unusual activity prompted Darktrace’s Autonomous Response capability to recommend several blocking actions for the device in an attempt to stop the malicious activity. However, as the customer’s Autonomous Response configuration was set to Human Confirmation Mode, Darktrace was unable to automatically apply these actions. Had Autonomous Response been fully enabled, these connections would have been blocked, likely rendering the malware ineffective at reaching its malicious command-and-control (C2) infrastructure.

Figure 3: Autonomous Response’s suggested actions to block suspicious connectivity on Device A in the first customer environment.

In another customer environment in August, Darktrace detected similar IoCs, noting a device establishing a connection to the external endpoint 45.94.47[.]149 (ASN: AS57043 Hostkey B.V.). Shortly after the initial connections, the device was observed making repeated requests to the same destination IP, targeting the URI /api/tasks/[base64string] with the user agent curl/8.7.1, again suggesting beaconing activity. Further analysis of this endpoint after the fact revealed links to Atomic Stealer in OSINT reporting [6].

Figure 4:  Cyber AI Analyst investigation finding a suspicious URI and user agent for the offending device within the second customer environment.

As with the customer in the first case, had Darktrace’s Autonomous Response been properly configured on the customer’s network, it would have been able to block connectivity with 45.94.47[.]149. Instead, Darktrace suggested recommended actions that the customer’s security team could manually apply to help contain the attack.

Figure 5: Autonomous Response’s suggested actions to block suspicious connectivity to IP 45.94.47[.]149 for the device within the second customer environment.

In the most recent case observed by Darktrace in October, multiple instances of Atomic Stealer activity were seen across one customer’s environment, with two devices communicating with Atomic Stealer C2 infrastructure. During this incident, one device was observed making an HTTP GET request to the IP 45.94.47[.]149 (ASN: AS60781 LeaseWeb Netherlands B.V.). These connections targeted the URI /api/tasks/[base64string], using the user agent curl/8.7.1.  

Shortly afterward, the device began making repeated connections over port 80 to the same external IP, 45.94.47[.]149. This activity continued for several days until Darktrace detected the device making an HTTP POST request to a new IP, 45.94.47[.]211 (ASN: AS57043 Hostkey B.V.), this time targeting the URI /contact, again using the curl/8.7.1 user agent. Similar to the other IPs observed in beaconing activity, OSINT reporting later linked this one to information stealer C2 infrastructure [7].

Figure 6: Darktrace’s detection of beaconing connectivity with the suspicious IP 45.94.47[.]211.
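For readers who want to experiment, a crude regularity check over connection timestamps, the kind of signal beaconing detection builds on, might look like the sketch below. The thresholds are illustrative and this is not how Darktrace models beaconing.

```python
from statistics import median

def looks_like_beaconing(timestamps, min_events=20, max_jitter_ratio=0.2):
    """Crude check over Unix timestamps of connections from one device to one
    external IP: many events whose gaps cluster tightly around the median
    interval suggest automated check-ins rather than human browsing."""
    if len(timestamps) < min_events:
        return False
    ts = sorted(timestamps)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    mid = median(gaps)
    if mid <= 0:
        return False
    regular = sum(1 for g in gaps if abs(g - mid) <= max_jitter_ratio * mid)
    return regular / len(gaps) >= 0.8
```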

Further investigation into this customer’s network revealed that similar activity had been occurring as far back as August, when Darktrace detected data exfiltration on a second device. Cyber AI Analyst identified this device making a single HTTP POST connection to the external IP 45.94.47[.]144, another IP with malicious links [8], using the user agent curl/8.7.1 and targeting the URI /contact.

Figure 7:  Cyber AI Analyst investigation finding a successful POST request to 45.94.47[.]144 for the device within the third customer environment.

A deeper investigation into the technical details within the POST request revealed the presence of a file named “out.zip”, suggesting potential data exfiltration.

Figure 8: Advanced Search log in Darktrace / NETWORK showing “out.zip”, indicating potential data exfiltration for a device within the third customer environment.

Similarly, in another environment, Darktrace was able to collect a packet capture (PCAP) of suspected Atomic Stealer activity, which revealed potential indicators of data exfiltration. This included the presence of the “out.zip” file being exfiltrated via an HTTP POST request, along with data that appeared to contain details of an Electrum cryptocurrency wallet and possible passwords.
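As a rough sketch of how such an artifact could be hunted for in captured traffic, the snippet below pulls client-supplied file names out of a raw HTTP POST body and checks them against a small watchlist. The payload is assumed to come from your own PCAP extraction tooling, and the example body is fabricated for illustration.

```python
import re

# Multipart uploads carry the client-supplied file name in a Content-Disposition
# header, so a POST uploading "out.zip" to a rare external host is worth escalating.
FILENAME = re.compile(rb'Content-Disposition:[^\r\n]*filename="([^"]+)"', re.IGNORECASE)

def suspicious_upload(http_body: bytes, watchlist=("out.zip",)):
    """Return any uploaded file names in a raw HTTP POST body that match the
    watchlist of exfiltration artifacts seen in this campaign."""
    names = [m.decode(errors="replace") for m in FILENAME.findall(http_body)]
    return [n for n in names if n.lower() in watchlist]

example = (b'POST /contact HTTP/1.1\r\nHost: example-c2.invalid\r\n'
           b'Content-Type: multipart/form-data; boundary=x\r\n\r\n'
           b'--x\r\nContent-Disposition: form-data; name="file"; filename="out.zip"\r\n\r\n')
print(suspicious_upload(example))   # -> ['out.zip']
```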

Read more about Darktrace’s full deep dive into a similar case where this tactic was leveraged by malware as part of an elaborate cryptocurrency scam.

Figure 9: PCAP of an HTTP POST request showing the file “out.zip” and details of Electrum Cryptocurrency wallet.

Although recent research attributes the “out.zip” file to a new variant named SHAMOS [9], it has also been linked more broadly to Atomic Stealer [10]. This is not the first instance in which Darktrace has seen the “out.zip” file in cases involving Atomic Stealer. In a previous blog detailing a social engineering campaign that targeted cryptocurrency users with the Realst Stealer, the macOS version of Realst contained a binary that was found to be Atomic Stealer, and similar IoCs were identified, including artifacts of data exfiltration such as the “out.zip” file.

Conclusion

The rapid rise of Atomic Stealer and its ability to target macOS marks a significant shift in the threat landscape and should serve as a clear warning to Apple users, whose platform was traditionally perceived as more secure in a malware ecosystem historically dominated by Windows-based threats.

Atomic Stealer’s growing popularity is now challenging that perception, expanding its reach and accessibility to a broader range of victims. Even more concerning is the emergence of a variant embedded with a backdoor, which is likely to increase its appeal among a diverse range of threat actors. Darktrace’s ability to adapt and detect new tactics and IoCs in real time delivers the proactive defense organizations need to protect themselves against emerging threats before they can gain momentum.

Credit to Isabel Evans (Cyber Analyst), Dylan Hinz (Associate Principal Cyber Analyst)
Edited by Ryan Traill (Analyst Content Lead)

Appendices

References

1.     https://www.scworld.com/news/infostealers-targeting-macos-jumped-by-101-in-second-half-of-2024

2.     https://www.kandji.io/blog/amos-macos-stealer-analysis

3.     https://www.broadcom.com/support/security-center/protection-bulletin/amos-stealer-adds-backdoor

4.     https://moonlock.com/amos-backdoor-persistent-access

5.     https://www.virustotal.com/gui/ip-address/45.94.47.158/detection

6.     https://www.trendmicro.com/en_us/research/25/i/an-mdr-analysis-of-the-amos-stealer-campaign.html

7.     https://www.virustotal.com/gui/ip-address/45.94.47.211/detection

8.     https://www.virustotal.com/gui/ip-address/45.94.47.144/detection

9.     https://securityaffairs.com/181441/malware/over-300-entities-hit-by-a-variant-of-atomic-macos-stealer-in-recent-campaign.html

10.   https://binhex.ninja/malware-analysis-blogs/amos-stealer-atomic-stealer-malware.html

Darktrace Model Detections

Darktrace / NETWORK

  • Compromise / Beaconing Activity To External Rare
  • Compromise / HTTP Beaconing to New IP
  • Compromise / HTTP Beaconing to Rare Destination
  • Anomalous Connection / New User Agent to IP Without Hostname
  • Device / New User Agent
  • Compromise / Sustained TCP Beaconing Activity To Rare Endpoint
  • Compromise / Slow Beaconing Activity To External Rare
  • Anomalous Connection / Posting HTTP to IP Without Hostname
  • Compromise / Quick and Regular Windows HTTP Beaconing

Autonomous Response

  • Antigena / Network / Significant Anomaly::Antigena Alerts Over Time Block
  • Antigena / Network / Significant Anomaly::Antigena Significant Anomaly from Client Block
  • Antigena / Network / External Threat::Antigena Suspicious Activity Block

List of IoCs

  • 45.94.47[.]149 – IP – Atomic C2 Endpoint
  • 45.94.47[.]144 – IP – Atomic C2 Endpoint
  • 45.94.47[.]158 – IP – Atomic C2 Endpoint
  • 45.94.47[.]211 – IP – Atomic C2 Endpoint
  • out.zip - File Output – Possible ZIP file for Data Exfiltration

MITRE ATT&CK Mapping:

Tactic – Technique – Sub-Technique

Execution - T1204.002 - User Execution: Malicious File

Credential Access - T1555.001 - Credentials from Password Stores: Keychain

Credential Access - T1555.003 - Credentials from Web Browsers

Command & Control - T1071 - Application Layer Protocol

Exfiltration - T1041 - Exfiltration Over C2 Channel

About the author
Isabel Evans
Cyber Analyst