February 10, 2025

From Hype to Reality: How AI is Transforming Cybersecurity Practices

AI hype is everywhere, but not many vendors are getting specific. Darktrace’s multi-layered AI combines various machine learning techniques for behavioral analytics, real-time threat detection, investigation, and autonomous response.

AI is everywhere, predominantly because it has changed the way humans interact with data. AI is a powerful tool for data analytics, predictions, and recommendations, but accuracy, safety, and security are paramount for operationalization.

In cybersecurity, AI-powered solutions are becoming increasingly necessary to keep up with modern business complexity and this new age of cyber-threat, marked by attacker innovation, use of AI, speed, and scale. The emergence of these new threats calls for a varied and layered approach in AI security technology to anticipate asymmetric threats.

While many cybersecurity vendors are adding AI to their products, they are not always communicating clearly about the capabilities or data used. This is especially the case with Large Language Models (LLMs). Many products are adding interactive and generative capabilities that do not necessarily increase detection and response efficacy, but instead aim to enhance the analyst and security team experience and data retrieval.

Consequently, many people erroneously conflate generative AI with other types of AI. Similarly, only 31% of security professionals report that they are “very familiar” with supervised machine learning, the type of AI most often applied in today’s cybersecurity solutions to identify threats using attack artifacts and facilitate automated responses. This confusion around AI and its capabilities can result in suboptimal cybersecurity measures, overfitting, inaccuracies due to ineffective methods or data, inefficient use of resources, and heightened exposure to advanced cyber threats.

Vendors must cut through the AI market and demystify the technology in their products for safe, secure, and accurate adoption. To that end, let’s discuss common AI techniques in cybersecurity as well as how Darktrace applies them.

Modernizing cybersecurity with AI

Machine learning has presented a significant opportunity to the cybersecurity industry, and many vendors have been using it for years. Despite the high potential benefit of applying machine learning to cybersecurity, not every AI tool or machine learning model is equally effective; effectiveness depends on the technique used, how it is applied, and the data it was trained on.

Supervised machine learning and cybersecurity

Supervised machine learning models are trained on labeled, structured data to automate tasks that humans have defined and trained them to perform. Some cybersecurity vendors have been experimenting with supervised machine learning for years, with most automating threat detection based on reported attack data using big data science, shared cyber-threat intelligence, known or reported attack behavior, and classifiers.

In the last several years, however, more vendors have expanded into the behavior analytics and anomaly detection side. In many applications, this method separates the learning, when the behavioral profile is created (baselining), from the subsequent anomaly detection. As such, it does not learn continuously and requires periodic updating and re-training to try to stay up to date with dynamic business operations and new attack techniques. Unfortunately, this opens the door for a high rate of daily false positives and false negatives.
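
To make the supervised pattern concrete, here is a minimal, purely illustrative sketch (it is not Darktrace’s implementation, and the feature names and numbers are invented): a classifier is trained on labeled connection features and can only recognize attacks that resemble its training data, which is why periodic retraining is needed.

```python
# Minimal sketch of supervised threat classification (illustrative only).
# A model trained on labeled attack artifacts can only recognize what it
# was shown at training time, so it must be retrained as threats evolve.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Hypothetical per-connection features: [bytes_out, duration_s, dst_port, conn_per_min]
benign = rng.normal(loc=[2_000, 30, 443, 2], scale=[500, 10, 1, 1], size=(500, 4))
malicious = rng.normal(loc=[50_000, 5, 4444, 40], scale=[10_000, 2, 1, 10], size=(50, 4))

X = np.vstack([benign, malicious])
y = np.array([0] * len(benign) + [1] * len(malicious))  # 1 = known-bad label

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))

# A novel attack whose features differ from anything labeled above
# (e.g. slow, low-volume exfiltration) may be scored as benign until
# the model is retrained on fresh labels -- the limitation noted above.
```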

Unsupervised machine learning and cybersecurity

Unlike supervised approaches, unsupervised machine learning does not require labeled training data or human-led training. Instead, it independently analyzes data to detect compelling patterns without relying on knowledge of past threats. This removes the dependency on human input or involvement to guide learning.

However, it is constrained by its input parameters, requiring thoughtful technique and feature selection to ensure the accuracy of the outputs. Additionally, while these anomaly-focused techniques can discover patterns in data, some of those patterns may be irrelevant and distracting.

When using models for behavior analytics and anomaly detection, the outputs come in the form of anomalies rather than classified threats, requiring additional modeling for threat behavior context and prioritization. Anomaly detection performed in isolation can produce false positives that waste resources.
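
For contrast, here is a minimal sketch of the unsupervised pattern, assuming scikit-learn’s IsolationForest and invented per-device features (again, not a description of Darktrace’s models): the detector is fit on unlabeled data and flags outliers, but its output is only “anomalous”, not “malicious”, so the downstream context described above is still required.

```python
# Minimal sketch of unsupervised anomaly detection (illustrative only).
# The detector never sees labels; it flags points that deviate from the
# bulk of the data. The output is "anomalous", not "malicious", which is
# why additional threat-behavior context is needed downstream.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Hypothetical per-device features: [logins_per_day, distinct_dst_ips, bytes_out_mb]
normal_days = rng.normal(loc=[20, 15, 50], scale=[5, 4, 15], size=(1000, 3))

detector = IsolationForest(contamination=0.01, random_state=1).fit(normal_days)

today = np.array([
    [22, 14, 55],     # ordinary day
    [21, 16, 48],     # ordinary day
    [23, 180, 2300],  # talks to many new IPs and uploads a lot -- anomalous
])

scores = detector.decision_function(today)   # lower = more anomalous
flags = detector.predict(today)              # -1 = anomaly, 1 = normal
for row, score, flag in zip(today, scores, flags):
    print(row, round(float(score), 3), "ANOMALY" if flag == -1 else "ok")
```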

LLMs and cybersecurity

LLMs are a major aspect of mainstream generative AI, and they can be used in both supervised and unsupervised ways. They are pre-trained on massive volumes of data and can be applied to human language, machine language, and more.

With the recent explosion of LLMs in the market, many vendors are rushing to add generative AI to their products, using it for chatbots, Retrieval-Augmented Generation (RAG) systems, agents, and embeddings. Generative AI in cybersecurity can optimize data retrieval for defenders, summarize reporting, or emulate sophisticated phishing attacks for preventative security.

But because LLMs perform semantic analysis, they can struggle to apply the reasoning necessary for consistent security analysis and detection. If not applied responsibly, generative AI can cause confusion by “hallucinating” (referencing invented data) when no additional post-processing is in place to reduce the impact, or by providing conflicting responses due to confirmation bias in the prompts written by different security team members.
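
To make the retrieval side of generative AI concrete, the sketch below shows the RAG pattern in its simplest form: rank internal documents against an analyst’s question and prepend the best matches to the prompt, so the model answers from supplied evidence rather than from memory. The documents, the question, and the call_llm placeholder are all hypothetical, and TF-IDF stands in for the embedding model a real deployment would use.

```python
# Minimal sketch of Retrieval-Augmented Generation (RAG) for analyst queries.
# The generation step is left as a placeholder; the point is that grounding
# the prompt in retrieved internal data reduces reliance on the model's
# memory and therefore the scope for hallucination.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [  # hypothetical internal knowledge-base entries
    "Device LAPTOP-42 beaconed to 45.94.47.158 every 60 seconds on 2025-07-03.",
    "Email rule 'forward-all' was created on the CFO mailbox on 2025-07-01.",
    "Quarterly patch window for ESXi hosts is the second weekend of each month.",
]
question = "What suspicious activity involved LAPTOP-42 in July?"

# Rank documents by lexical similarity to the question (a real system would
# use embeddings; TF-IDF keeps the example dependency-free).
vectorizer = TfidfVectorizer().fit(documents + [question])
doc_vecs = vectorizer.transform(documents)
q_vec = vectorizer.transform([question])
ranked = cosine_similarity(q_vec, doc_vecs)[0].argsort()[::-1]

context = "\n".join(documents[i] for i in ranked[:2])
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# call_llm() is a placeholder for whatever model endpoint is in use.
# answer = call_llm(prompt)
print(prompt)
```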

Combining techniques in a multi-layered AI approach

Each type of machine learning technique has its own set of strengths and weaknesses, so a multi-layered, multi-method approach is ideal to enhance functionality while overcoming the shortcomings of any one method.

Darktrace’s Self-Learning AI is a multi-layered engine powered by multiple machine learning approaches that operate in combination for cyber defense. This allows Darktrace to protect the entire digital estates of the organizations it secures, including corporate networks, cloud computing services, SaaS applications, IoT, Industrial Control Systems (ICS), and email systems.

Plugged into the organization’s infrastructure and services, our AI engine ingests and analyzes the raw data and its interactions within the environment and forms an understanding of normal behavior, right down to the granular details of specific users and devices. The system continually revises its understanding of what is normal based on evolving evidence, learning continuously rather than relying on static baselining.

This dynamic understanding of normal, paired with dozens of anomaly detection models, means that the AI engine can identify, with a high degree of precision, events or behaviors that are both anomalous and unlikely to be benign. Understanding anomalies through the lens of many models, and autonomously fine-tuning those models’ performance, gives us greater confidence in anomaly detection.
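
As an illustration of how scores from multiple anomaly models might be combined into a single confidence (a simplified stand-in, not Darktrace’s meta-classifier), the sketch below weights each model by how reliable it has proven and raises only events that are both rare and unlikely to be benign. The model names, weights, and thresholds are invented for the example.

```python
# Minimal sketch of multi-model anomaly scoring (illustrative only).
# Several detectors each score an event; a weighted combination and a
# benign-likelihood check decide whether it is worth raising.
from dataclasses import dataclass

@dataclass
class ModelOutput:
    name: str
    anomaly_score: float   # 0 = completely normal, 1 = extremely rare
    weight: float          # trust in this model, tuned over time

def combined_score(outputs: list[ModelOutput]) -> float:
    """Weighted average of per-model anomaly scores."""
    total_weight = sum(o.weight for o in outputs)
    return sum(o.anomaly_score * o.weight for o in outputs) / total_weight

event_models = [
    ModelOutput("rare_external_ip", 0.95, weight=1.0),
    ModelOutput("unusual_time_of_day", 0.60, weight=0.5),
    ModelOutput("new_user_agent", 0.80, weight=0.8),
]

score = combined_score(event_models)
benign_likelihood = 0.05  # e.g. fraction of peer devices showing the same behaviour

if score > 0.7 and benign_likelihood < 0.1:
    print(f"raise anomaly: combined score {score:.2f}")
else:
    print(f"suppress: combined score {score:.2f}")
```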

The next layer provides event correlation and threat behavior context to understand the risk level of anomalous events. Every anomalous event is investigated by Cyber AI Analyst, which combines unsupervised machine learning models that analyze logs with supervised machine learning trained on how to investigate. This provides anomaly and risk context along with explainable investigation outcomes.

The ability to identify activity that represents the first footprints of an attacker, without any prior knowledge or intelligence, lies at the heart of the AI system’s efficacy in keeping pace with threat actor innovations and changes in tactics and techniques. It helps the human team detect subtle indicators that can be hard to spot amid the immense noise of legitimate, day-to-day digital interactions. This enables advanced threat detection with full domain visibility.

Digging deeper into AI: Mapping specific machine learning techniques to cybersecurity functions

Visibility and control are vital for the practical adoption of AI solutions, as they build trust between human security teams and their AI tools. That is why we want to share some specific applications of AI across our solutions, moving beyond hype and buzzwords to provide grounded, technical explanations.

Darktrace’s technology helps security teams cover every stage of the incident lifecycle with a range of comprehensive analysis and autonomous investigation and response capabilities.

  1. Behavioral prediction: Our AI understands your unique organization by learning normal patterns of life. It accomplishes this with multiple clustering algorithms, anomaly detection models, a Bayesian meta-classifier for autonomous fine-tuning, graph theory, and more.
  2. Real-time threat detection: With a true understanding of normal, our AI engine connects anomalous events to risky behavior using probabilistic models. 
  3. Investigation: Darktrace performs in-depth analysis and investigation of anomalies, in particular automating Level 1 of a SOC team and augmenting the rest of the SOC team through prioritization for human-led investigations. Some of these methods include supervised and unsupervised machine learning models, semantic analysis models, and graph theory.
  4. Response: Darktrace calculates the proportional action to take in order to neutralize in-progress attacks at machine speed. As a result, organizations are protected 24/7, even when the human team is out of the office. Through understanding the normal pattern of life of an asset or peer group, the autonomous response engine can isolate the anomalous or risky behavior and surgically block it (see the simplified sketch after this list). The autonomous response engine can also enforce the peer group’s pattern of life when rare and risky behavior continues.
  5. Customizable model editor: This layer of customizable logic models tailors our AI’s processing to give security teams more visibility as well as the opportunity to adapt outputs, therefore increasing explainability, interpretability, control, and the ability to modify the operationalization of the AI output with auditing.
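
The sketch below is a much-simplified illustration of the proportional-response idea from item 4, not Darktrace’s response logic: compare a connection against the device’s learned pattern of life and block only what falls outside it, rather than quarantining the whole device. The pattern-of-life data, field names, and thresholds are invented for the example.

```python
# Minimal sketch of proportional autonomous response (illustrative only).
# Instead of isolating the whole device, only connections that fall outside
# the device's learned pattern of life are blocked.

# Hypothetical learned pattern of life for one device
pattern_of_life = {
    "allowed_ports": {80, 443, 53},
    "usual_destinations": {"intranet.example.com", "update.vendor.example"},
    "max_bytes_out_per_hour": 50_000_000,
}

def decide_action(connection: dict, profile: dict) -> str:
    """Return the narrowest action that neutralises the deviation."""
    if connection["bytes_out"] > profile["max_bytes_out_per_hour"]:
        return f"block outbound data to {connection['dst']} (volume outside pattern of life)"
    if connection["dst_port"] not in profile["allowed_ports"]:
        return f"block port {connection['dst_port']} to {connection['dst']} only"
    if connection["dst"] not in profile["usual_destinations"]:
        return f"block new destination {connection['dst']} only"
    return "allow (matches pattern of life)"

print(decide_action({"dst": "45.94.47.158", "dst_port": 80, "bytes_out": 1_000}, pattern_of_life))
print(decide_action({"dst": "intranet.example.com", "dst_port": 443, "bytes_out": 2_000}, pattern_of_life))
```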

See the complete AI architecture in the paper “The AI Arsenal: Understanding the Tools Shaping Cybersecurity.”

Figure 1: Alerts can be customized in the model editor in many ways, such as editing the thresholds for rarity and unusualness scores.

Machine learning is the fundamental ally in cyber defense

Traditional security methods, even those that use a small subset of machine learning, are no longer sufficient: these tools can neither keep up with all possible attack vectors nor respond fast enough to the variety of machine-speed attacks, which are far more complex than the known and expected patterns they were built around.

Security teams require advanced detection capabilities, using multiple machine learning techniques to understand the environment, filter the noise, and take action where threats are identified.

Darktrace’s Self-Learning AI comes together to achieve behavioral prediction, real-time threat detection and response, and incident investigation, all while empowering your security team with visibility and control.

Learn how AI is Applied in Cybersecurity

Discover specifically how Darktrace applies different types of AI to improve cybersecurity efficacy and operations in this technical paper.


December 8, 2025

Simplifying Cross Domain Investigations

Cross-domain gaps mean cross-domain attacks  

Organizations are built on increasingly complex digital estates. Nowadays, the average IT ecosystem spans a large web of interconnected domains like identity, network, cloud, and email.

While these domain-specific technologies may boost business efficiency and scalability, they also provide blind spots where attackers can shelter undetected. Threat actors can slip past defenses because security teams often use different detection tools in each realm of their digital infrastructure. Adversaries will purposefully execute different stages of an attack across different domains, ensuring no single tool picks up too many traces of their malicious activity. Identifying and investigating this type of threat, known as a cross-domain attack, requires mastery in event correlation.  

For example, one isolated network scan detected on your network may seem harmless at first glance. Only when it is stitched together with a rare O365 login, a new email rule and anomalous remote connections to an S3 bucket in AWS does it begin to manifest as an actual intrusion.  

However, there are a whole host of other challenges that arise with detecting this type of attack. Accessing those alerts in the respective on-premise network, SaaS and IaaS environments, understanding them and identifying which ones are related to each other takes significant experience, skill and time. And time favours no one but the threat actor.  
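
A toy illustration of the correlation problem described above (not a description of how Cyber AI Analyst works): events from different domains only become an incident once they are grouped by the entities they share, such as a user account, device, or destination. The alerts, entity names, and grouping rule below are invented; real correlation has to cope with far messier telemetry.

```python
# Minimal sketch of cross-domain event correlation (illustrative only).
# Each alert lists the entities it involves (users, devices, IPs). Alerts
# whose entities overlap are merged into the same candidate incident via
# connected components -- the "stitching together" described above.
from collections import defaultdict

alerts = [  # hypothetical normalized alerts from separate tools
    {"domain": "network",  "entities": {"laptop-42"},          "detail": "internal port scan"},
    {"domain": "saas",     "entities": {"j.doe"},              "detail": "rare O365 login from new ASN"},
    {"domain": "email",    "entities": {"j.doe"},              "detail": "new auto-forward rule created"},
    {"domain": "identity", "entities": {"j.doe", "laptop-42"}, "detail": "j.doe credential used on laptop-42"},
    {"domain": "cloud",    "entities": {"laptop-42"},          "detail": "anomalous access to S3 bucket"},
]

# Union-find over entities: alerts sharing an entity end up in one incident.
parent: dict[str, str] = {}

def find(x: str) -> str:
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def union(a: str, b: str) -> None:
    parent[find(a)] = find(b)

for alert in alerts:
    first, *rest = alert["entities"]
    for other in rest:
        union(first, other)

incidents = defaultdict(list)
for alert in alerts:
    incidents[find(next(iter(alert["entities"])))].append(alert)

for root, grouped in incidents.items():
    print(f"incident around '{root}' spans: {sorted({a['domain'] for a in grouped})}")
```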

Figure 1: Anatomy of a cross domain attack

Diverse domains and empty grocery shelves

In April 2025, the UK faced a throwback to pandemic-era shortages when the supermarket giant Marks & Spencer (M&S) was crippled by a cyberattack, leaving empty shelves across its stores and massive disruptions to its online service.  

The threat actors, a group called Scattered Spider, exploited multiple layers of the organization’s digital infrastructure. Notably, the group were able to bypass the perimeter not by exploiting a technical vulnerability, but an identity. They used social engineering tactics to impersonate an M&S employee and successfully request a password reset.  

Once authenticated on the network, they accessed the Windows domain controller and exfiltrated the NTDS.dit file – a critical file containing hashed passwords for all users in the domain. After cracking those hashes offline, they returned to the network with escalated privileges and set their sights on the M&S cloud infrastructure. They then launched the encryption payload on the company’s ESXi virtual machines.

To wrap up, the threat actors used a compromised employee’s email account to send an “abuse-filled” email to the M&S CEO, bragging about the hack and demanding payment. This was possibly more of a psychological attack on the CEO than a technically integral part of the cyber kill chain. However, it revealed yet another one of M&S’s domains had been compromised.  

In summary, the group’s attack spanned four different domains:

Identity: Social engineering user impersonation

Network: Exfiltration of NTDS.dit file

Cloud: Ransomware deployed on ESXI VMs

Email: Compromise of user account to contact the CEO

Adept at exploiting nuance

This year alone, several high-profile cyber-attacks have been attributed to the same group, Scattered Spider, including the hacks on Victoria’s Secret, Adidas, Hawaiian Airlines, WestJet, the Co-op and Harrods. It raises the question: what has made this group so successful?

In the M&S attack, they showcased their advanced proficiency in social engineering, which they use to bypass identity controls and gain initial access. They demonstrated deep knowledge of cloud environments by deploying ransomware onto virtualised infrastructure. However, this does not exemplify a cookie-cutter template of attack methods that brings them success every time.

According to CISA, Scattered Spider typically use a remarkable variety of TTPs (tactics, techniques and procedures) across multiple domains to carry out their campaigns. From leveraging legitimate remote access tools in the network, to manipulating AWS EC2 cloud instances or spoofing email domains, the list of TTPs used by the group is eye-wateringly long. Additionally, the group reportedly evades detection by “frequently modifying their TTPs”.  

If only they had better intentions. Any security director would be proud of a red team who not only has this depth and breadth of domain-centric knowledge but is also consistently upskilling.  

Yet, staying ahead of adversaries who seamlessly move across domains and fluently exploit every system they encounter is just one of many hurdles security teams face when investigating cross-domain attacks.  

Resource-heavy investigations

There was a significant delay in time to detection of the M&S intrusion. News outlet BleepingComputer reported that attackers infiltrated the M&S network as early as February 2025. They maintained persistence for weeks before launching the attack in late April 2025, indicating that early signs of compromise were missed or not correlated across domains.

While it’s unclear exactly why M&S missed the initial intrusion, one can speculate about the unique challenges investigating cross-domain attacks present.  

Challenges of cross-domain investigation

First and foremost, correlation work is arduous because the string of malicious behaviour doesn’t always stem from the same device.  

A hypothetical attack could begin with an O365 credential creating a new email rule. Weeks later, that same credential authenticates anomalously on two different devices. One device downloads an .exe file from a strange website, while the other starts beaconing every minute to a rare external IP address that no one else in the organisation has ever connected to. A month later, a third device downloads 1.3 GiB of data from a recently spun up S3 bucket and gradually transfers a similar amount of data to that same rare IP.

Amid a sea of alerts and false positives, connecting the dots of a malicious attack like this takes time and meticulous correlation. Factor in the nuanced telemetry data related to each domain and things get even more complex.  

An analyst who specialises in network security may not understand the unique logging formats or API calls in the cloud environment. Perhaps they are proficient in protecting the Windows Active Directory but are unfamiliar with cloud IAM.  

Cloud is also an inherently more difficult domain to investigate. With 89% of organizations now operating in multi-cloud environments, time must be spent collecting logs, snapshots and access records. Coupled with the threat of an ephemeral asset disappearing, the risk of missing a threat is high. These are some of the reasons why research shows that 65% of organisations spend 3-5 extra days investigating cloud incidents.

Helpdesk teams handling user requests over the phone require a different set of skills altogether. Imagine a threat actor posing as an employee and articulately requesting an urgent password reset or a temporary MFA deactivation. The junior helpdesk agent, unfamiliar with the exception criteria, eager to help and feeling pressure from the persuasive manipulator at the end of the phone line, could easily fall victim to this type of social engineering.

Empowering analysts through intelligent automation

Even the most skilled analysts can’t manually piece together every strand of malicious activity stretching across domains. But skill alone isn’t enough. The biggest hurdle in investigating these attacks often comes down to whether the team has the time, context, and connected visibility needed to see the full picture.

Many organizations attempt to bridge the gap by stitching together a patchwork of security tools. One platform for email, another for endpoint, another for cloud, and so on. But this fragmentation reinforces the very silos that cross-domain attacks exploit. Logs must be exported, normalized, and parsed across tools, a process that is not only error-prone but slow. By the time indicators are correlated, the intrusion has often already deepened.

That’s why automation and AI are becoming indispensable. The future of cross-domain investigation lies in systems that can:

  • Automatically correlate activity across domains and data sources, turning disjointed alerts into a single, interpretable incident.
  • Generate and test hypotheses autonomously, identifying likely chains of malicious behaviour without waiting for human triage.
  • Explain findings in human terms, reducing the knowledge gap between junior and senior analysts.
  • Operate within and across hybrid environments, from on-premise networks to SaaS, IaaS, and identity systems.

This is where Darktrace transforms alerting and investigations. Darktrace’s Cyber AI Analyst automates the process of correlation, hypothesis testing, and narrative building, not just within one domain, but across many. An anomalous O365 login, a new S3 bucket, and a suspicious beaconing host are stitched together automatically, surfacing the story behind the alerts rather than leaving it buried in telemetry.

Figure 2: How threat activity is correlated in Cyber AI Analyst

By analyzing events from disparate tools and sources, AI Analyst constructs a unified timeline of activity showing what happened, how it spread, and where to focus next. For analysts, it means investigation time is measured in minutes, not days. For security leaders, it means every member of the SOC, regardless of experience, can contribute meaningfully to a cross-domain response.

Figure 3: Correlation showcasing cross domains (SaaS and IaaS) in Cyber AI Analyst

Until now, forensic investigations were slow, manual, and reserved for only the largest organizations with specialized DFIR expertise. Darktrace / Forensic Acquisition & Investigation changes that by leveraging the scale and elasticity of the cloud itself to automate the entire investigation process. From capturing full disk and memory at detection to reconstructing attacker timelines in minutes, the solution turns fragmented workflows into streamlined investigations available to every team.

What once took days now takes minutes. Now, forensic investigations in the cloud are faster, more scalable, and finally accessible to every security team, no matter their size or expertise.

About the author
Benjamin Druttman
Cyber Security AI Technical Instructor

December 5, 2025

Atomic Stealer: Darktrace’s Investigation of a Growing macOS Threat

The Rise of Infostealers Targeting Apple Users

In a threat landscape historically dominated by Windows-based threats, the growing prevalence of macOS information stealers targeting Apple users is becoming an increasing concern for organizations. Infostealers are a type of malware designed to steal sensitive data from target devices, often enabling attackers to extract credentials and financial data for resale or further exploitation. Recent research identified infostealers as the largest category of new macOS malware, with an alarming 101% increase in the last two quarters of 2024 [1].

What is Atomic Stealer?

Among the most notorious is Atomic macOS Stealer (or AMOS), first observed in 2023. Known for its sophisticated build, Atomic Stealer can exfiltrate a wide range of sensitive information including keychain passwords, cookies, browser data and cryptocurrency wallets.

Originally marketed on Telegram as a Malware-as-a-Service (MaaS), Atomic Stealer has become popular due to its ability to target macOS. Like other MaaS offerings, it includes services like a web panel for managing victims, with reports indicating a monthly subscription cost between $1,000 and $3,000 [2]. Although Atomic Stealer was originally intended as a standalone MaaS product, its unique capability to target macOS has led to new variants emerging at an unprecedented rate.

Even more concerning, the most recent variant has now added a backdoor for persistent access [3]. This backdoor presents a significant threat, as Atomic Stealer campaigns are believed to have reached around 120 countries. The addition of a backdoor elevates Atomic Stealer into the rare category of potentially global-scale backdoor deployments, something previously attributed only to nation-state threat actors [4].

This level of sophistication is also evident in the wide range of distribution methods observed since its first appearance, including fake application installers, malvertising, and terminal command execution via the ClickFix technique. The ClickFix technique is particularly noteworthy: once the malware is downloaded onto the device, users are presented with what appears to be a legitimate macOS installation prompt. In reality, however, the user unknowingly initiates the execution of the Atomic Stealer malware.

This blog will focus on activity observed across multiple Darktrace customer environments where Atomic Stealer was detected, along with several indicators of compromise (IoCs). These included devices that successfully connected to endpoints associated with Atomic Stealer, those that attempted but failed to establish connections, and instances suggesting potential data exfiltration activity.

Darktrace’s Coverage of Atomic Stealer

As this evolving threat began to spread across the internet in June 2025, Darktrace observed a surge in Atomic Stealer activity, impacting numerous customers in 24 different countries worldwide. Initially, most of the cases detected in 2025 affected Darktrace customers within the Europe, Middle East, and Africa (EMEA) region. However, later in the year, Darktrace began to observe a more even distribution of cases across EMEA, the Americas (AMS), and Asia Pacific (APAC). While multiple sectors were impacted by Atomic Stealer, Darktrace customers in the education sector were the most affected, particularly during September and October, coinciding with the return to school and universities after summer closures. This spike likely reflects increased device usage as students returned and reconnected potentially compromised devices to school and campus environments.

Starting in June, Darktrace detected multiple instances of suspicious HTTP activity to external IPs in the range 45.94.47.0/24. Investigation by Darktrace’s Threat Research team revealed several distinct patterns: HTTP POST requests to the URI “/contact”, identical cURL user agents, and HTTP requests to “/api/tasks/[base64 string]” URIs.

Within one customer’s environment in July, Darktrace detected two devices repeatedly initiating HTTP connections over port 80 to IPs within the same range. The first, Device A, was observed making GET requests to the IP 45.94.47[.]158 (AS60781 LeaseWeb Netherlands B.V.), targeting the URI “/api/tasks/[base64string]” using the “curl/8.7.2” user agent. This pattern suggested beaconing activity and triggered the ‘Beaconing Activity to External Rare’ model alert in Darktrace / NETWORK, with Device A’s Model Event Log showing repeated connections. The IP associated with this endpoint has since been flagged by multiple open-source intelligence (OSINT) vendors as being associated with Atomic Stealer [5].
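
As a rough illustration of why this traffic looks like beaconing (not a description of Darktrace’s detection logic), the sketch below measures how regular the gaps between connections to each destination are; near-constant intervals to a rare external host are a classic beaconing signal. The timestamps and threshold are invented for the example.

```python
# Minimal sketch of beaconing detection from connection logs (illustrative only).
# Beacons tend to call home on a near-fixed interval, so the spread of the
# inter-connection gaps to a destination is small relative to their mean.
from statistics import mean, stdev

# Hypothetical (destination, timestamps-in-seconds) connection records for one device
connections = {
    "45.94.47.158": [0, 61, 119, 181, 240, 302, 359, 421],  # roughly 60 s apart
    "update.vendor.example": [15, 400, 2900, 3100, 9000],   # irregular
}

def beaconing_score(timestamps: list[float]) -> float:
    """Coefficient of variation of the gaps: near 0 means very regular."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 3:
        return float("inf")
    return stdev(gaps) / mean(gaps)

for dst, times in connections.items():
    score = beaconing_score(sorted(times))
    verdict = "regular (possible beacon)" if score < 0.2 else "irregular"
    print(f"{dst}: cv={score:.2f} -> {verdict}")
```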

Figure 1: Darktrace’s detection of Device A showing repeated connections to the suspicious IP address over port 80, indicative of beaconing behavior.

Darktrace’s Cyber AI Analyst subsequently launched an investigation into the activity, uncovering that the GET requests resulted in a ‘503 Service Unavailable’ response, likely indicating that the server was temporarily unable to process the requests.

Figure 2: Cyber AI Analyst Incident showing the 503 Status Code, indicating that the server was temporarily unavailable.

This unusual activity prompted Darktrace’s Autonomous Response capability to recommend several blocking actions for the device in an attempt to stop the malicious activity. However, as the customer’s Autonomous Response configuration was set to Human Confirmation Mode, Darktrace was unable to automatically apply these actions. Had Autonomous Response been fully enabled, these connections would have been blocked, likely rendering the malware ineffective at reaching its malicious command-and-control (C2) infrastructure.

Figure 3: Autonomous Response’s suggested actions to block suspicious connectivity on Device A in the first customer environment.

In another customer environment in August, Darktrace detected similar IoCs, noting a device establishing a connection to the external endpoint 45.94.47[.]149 (ASN: AS57043 Hostkey B.V.). Shortly after the initial connections, the device was observed making repeated requests to the same destination IP, targeting the URI /api/tasks/[base64string] with the user agent curl/8.7.1, again suggesting beaconing activity. Further analysis of this endpoint after the fact revealed links to Atomic Stealer in OSINT reporting [6].

Figure 4: Cyber AI Analyst investigation finding a suspicious URI and user agent for the offending device within the second customer environment.

As with the customer in the first case, had Darktrace’s Autonomous Response been properly configured on the customer’s network, it would have been able to block connectivity with 45.94.47[.]149. Instead, Darktrace suggested recommended actions that the customer’s security team could manually apply to help contain the attack.

Figure 5: Autonomous Response’s suggested actions to block suspicious connectivity to IP 45.94.47[.]149 for the device within the second customer environment.

In the most recent case observed by Darktrace in October, multiple instances of Atomic Stealer activity were seen across one customer’s environment, with two devices communicating with Atomic Stealer C2 infrastructure. During this incident, one device was observed making an HTTP GET request to the IP 45.94.47[.]149 (ASN: AS60781 LeaseWeb Netherlands B.V.). These connections targeted the URI /api/tasks/[base64string], using the user agent curl/8.7.1.

Shortly afterward, the device began making repeated connections over port 80 to the same external IP, 45.94.47[.]149. This activity continued for several days until Darktrace detected the device making an HTTP POST request to a new IP, 45.94.47[.]211 (ASN: AS57043 Hostkey B.V.), this time targeting the URI /contact, again using the curl/8.7.1 user agent. Similar to the other IPs observed in beaconing activity, OSINT reporting later linked this one to information stealer C2 infrastructure [7].

Figure 6: Darktrace’s detection of beaconing connectivity with the suspicious IP 45.94.47[.]211.

Further investigation into this customer’s network revealed that similar activity had been occurring as far back as August, when Darktrace detected data exfiltration on a second device. Cyber AI Analyst identified this device making a single HTTP POST connection to the external IP 45.94.47[.]144, another IP with malicious links [8], using the user agent curl/8.7.1 and targeting the URI /contact.

Figure 7: Cyber AI Analyst investigation finding a successful POST request to 45.94.47[.]144 for the device within the third customer environment.

A deeper investigation into the technical details within the POST request revealed the presence of a file named “out.zip”, suggesting potential data exfiltration.

Figure 8: Advanced Search log in Darktrace / NETWORK showing “out.zip”, indicating potential data exfiltration for a device within the third customer environment.

Similarly, in another environment, Darktrace was able to collect a packet capture (PCAP) of suspected Atomic Stealer activity, which revealed potential indicators of data exfiltration. This included the presence of the “out.zip” file being exfiltrated via an HTTP POST request, along with data that appeared to contain details of an Electrum cryptocurrency wallet and possible passwords.
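
The check below is a simplified, hypothetical way to hunt for this specific artifact in a packet capture: it scans TCP payloads for an HTTP POST to /contact carrying an “out.zip” filename. It assumes the third-party scapy library, a plaintext HTTP capture named capture.pcap, and that the POST headers and body share a packet; it is not the workflow Darktrace used.

```python
# Minimal sketch: hunt a PCAP for HTTP POSTs carrying the "out.zip" artifact
# seen in Atomic Stealer exfiltration (illustrative only; assumes plaintext
# HTTP and that headers/body are in one packet -- real analysis would
# reassemble TCP streams first).
from scapy.all import rdpcap, IP, TCP, Raw  # pip install scapy

packets = rdpcap("capture.pcap")  # hypothetical capture file
for pkt in packets:
    if not (pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt.haslayer(Raw)):
        continue
    payload = bytes(pkt[Raw].load)
    if payload.startswith(b"POST /contact") and b"out.zip" in payload:
        print(f"possible exfiltration: {pkt[IP].src} -> {pkt[IP].dst}")
```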

Read more about Darktrace’s full deep dive into a similar case where this tactic was leveraged by malware as part of an elaborate cryptocurrency scam.

Figure 9: PCAP of an HTTP POST request showing the file “out.zip” and details of Electrum Cryptocurrency wallet.

Although recent research attributes the “out.zip” file to a new variant named SHAMOS [9], it has also been linked more broadly to Atomic Stealer [10]. Indeed, this is not the first instance in which Darktrace has observed the “out.zip” file in cases involving Atomic Stealer. In a previous blog detailing a social engineering campaign that targeted cryptocurrency users with the Realst Stealer, the macOS version of Realst contained a binary that was found to be Atomic Stealer, and similar IoCs were identified, including artifacts of data exfiltration such as the “out.zip” file.

Conclusion

The rapid rise of Atomic Stealer and its ability to target macOS marks a significant shift in the threat landscape and should serve as a clear warning to Apple users, who have traditionally been perceived as safer in a malware ecosystem historically dominated by Windows-based threats.

Atomic Stealer’s growing popularity is now challenging that perception, expanding its reach and accessibility to a broader range of victims. Even more concerning is the emergence of a variant embedded with a backdoor, which is likely to increase its appeal among a diverse range of threat actors. Darktrace’s ability to adapt and detect new tactics and IoCs in real time delivers the proactive defense organizations need to protect themselves against emerging threats before they can gain momentum.

Credit to Isabel Evans (Cyber Analyst), Dylan Hinz (Associate Principal Cyber Analyst)
Edited by Ryan Traill (Analyst Content Lead)

Appendices

References

1. https://www.scworld.com/news/infostealers-targeting-macos-jumped-by-101-in-second-half-of-2024
2. https://www.kandji.io/blog/amos-macos-stealer-analysis
3. https://www.broadcom.com/support/security-center/protection-bulletin/amos-stealer-adds-backdoor
4. https://moonlock.com/amos-backdoor-persistent-access
5. https://www.virustotal.com/gui/ip-address/45.94.47.158/detection
6. https://www.trendmicro.com/en_us/research/25/i/an-mdr-analysis-of-the-amos-stealer-campaign.html
7. https://www.virustotal.com/gui/ip-address/45.94.47.211/detection
8. https://www.virustotal.com/gui/ip-address/45.94.47.144/detection
9. https://securityaffairs.com/181441/malware/over-300-entities-hit-by-a-variant-of-atomic-macos-stealer-in-recent-campaign.html
10. https://binhex.ninja/malware-analysis-blogs/amos-stealer-atomic-stealer-malware.html

Darktrace Model Detections

Darktrace / NETWORK

  • Compromise / Beaconing Activity To External Rare
  • Compromise / HTTP Beaconing to New IP
  • Compromise / HTTP Beaconing to Rare Destination
  • Anomalous Connection / New User Agent to IP Without Hostname
  • Device / New User Agent
  • Compromise / Sustained TCP Beaconing Activity To Rare Endpoint
  • Compromise / Slow Beaconing Activity To External Rare
  • Anomalous Connection / Posting HTTP to IP Without Hostname
  • Compromise / Quick and Regular Windows HTTP Beaconing

Autonomous Response

  • Antigena / Network / Significant Anomaly::Antigena Alerts Over Time Block
  • Antigena / Network / Significant Anomaly::Antigena Significant Anomaly from Client Block
  • Antigena / Network / External Threat::Antigena Suspicious Activity Block

List of IoCs

  • 45.94.47[.]149 – IP – Atomic C2 Endpoint
  • 45.94.47[.]144 – IP – Atomic C2 Endpoint
  • 45.94.47[.]158 – IP – Atomic C2 Endpoint
  • 45.94.47[.]211 – IP – Atomic C2 Endpoint
  • out.zip – File Output – Possible ZIP file for Data Exfiltration
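
As a small usage example (not part of the original investigation), the defanged IoC IPs above can be re-fanged and checked against an exported connection log with something like the following; the CSV filename and column names are hypothetical.

```python
# Minimal sketch: check connection logs against the defanged IoC IPs listed above.
import csv

DEFANGED_IOCS = ["45.94.47[.]149", "45.94.47[.]144", "45.94.47[.]158", "45.94.47[.]211"]
IOC_IPS = {ioc.replace("[.]", ".") for ioc in DEFANGED_IOCS}  # re-fang for matching

# Hypothetical CSV export with columns: timestamp, src_ip, dst_ip, dst_port
with open("connections.csv", newline="") as f:
    for row in csv.DictReader(f):
        if row["dst_ip"] in IOC_IPS:
            print(f"{row['timestamp']}: {row['src_ip']} contacted IoC {row['dst_ip']}:{row['dst_port']}")
```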

MITRE ATT&CK Mapping:

Tactic – Technique – Sub-Technique

Execution - T1204.002 - User Execution: Malicious File

Credential Access - T1555.001 - Credentials from Password Stores: Keychain

Credential Access - T1555.003 - Credentials from Web Browsers

Command & Control - T1071 - Application Layer Protocol

Exfiltration - T1041 - Exfiltration Over C2 Channel

About the author
Isabel Evans
Cyber Analyst