Blog / April 17, 2024

Cerber Ransomware: Dissecting the three heads

Cerber ransomware's Linux variant is actively exploiting CVE-2023-22518 in Confluence servers. It uses three UPX-packed C++ payloads: a primary stager, a log checker for environment assessment, and an encryptor that renames files with a .L0CK3D extension.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Nate Bill
Threat Researcher

Introduction: Cerber ransomware

Researchers at Cado Security Labs (now part of Darktrace) received reports of the Cerber ransomware being deployed onto servers running the Confluence application via the CVE-2023-22518 exploit. [1] The Windows variant has received a large amount of coverage, but very little has been written about the Linux variant. This blog presents an analysis of the Linux variant.

Cerber emerged and reached the peak of its activity around 2016, and has since seen only occasional campaigns, most recently targeting the aforementioned Confluence vulnerability. It consists of three highly obfuscated C++ payloads, compiled as 64-bit Executable and Linkable Format (ELF, the format for executable binary files on Linux) binaries and packed with UPX. UPX is a very common packer used by many threat actors. It allows the actual program code to be stored encoded in the binary and, at runtime, extracted into memory and executed (“unpacked”). This is done to prevent security software from scanning the payload and detecting the malware.

Pure C++ payloads are becoming less common on Linux, with many threat actors now employing newer programming languages such as Rust or Go. [2] This is likely because the Cerber payload was first released almost eight years ago; while it has certainly received updates, the original language and tooling choices appear to have stuck around for the lifetime of the payload.

Initial access

Cado researchers observed instances of the Cerber ransomware being deployed after a threat actor leveraged CVE-2023-22518 to gain access to vulnerable instances of Confluence [3]. CVE-2023-22518 is an improper authorization vulnerability that allows an attacker to reset the Confluence application and create a new administrator account using an unprotected configuration-restore endpoint intended for the setup wizard.

[19/Mar/2024:15:57:24 +0000] - http-nio-8090-exec-10 13.40.171.234 POST /json/setup-restore.action?synchronous=true HTTP/1.1 302 81796ms - - python-requests/2.31.0 
[19/Mar/2024:15:57:24 +0000] - http-nio-8090-exec-3 13.40.171.234 GET /json/setup-restore-progress.action?taskId= HTTP/1.1 200 108ms 283 - python-requests/2.31.0 
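Confluence access log entries showing the exploitation requests

Defenders can hunt for this activity retrospectively by searching Confluence access logs for requests to the setup-restore endpoints seen above. The short Python sketch below illustrates one way to do so; the log location is an assumption and should be adjusted to the environment.

import re
import sys

# Endpoints abused during CVE-2023-22518 exploitation, as seen in the access log above.
SUSPICIOUS = re.compile(r"/json/setup-restore(?:-progress)?\.action")

def scan(log_path):
    # Print any access-log lines that touch the setup-restore endpoints.
    with open(log_path, "r", errors="replace") as handle:
        for line in handle:
            if SUSPICIOUS.search(line):
                print(line.rstrip())

if __name__ == "__main__":
    # Example (hypothetical log path): python3 scan_confluence_logs.py conf_access_log.2024-03-19.log
    scan(sys.argv[1])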

Once an administrator account is created, it can be used to gain code execution by uploading and installing a malicious module via the admin panel. In this case, the Effluence web shell plugin is uploaded and installed directly, providing a web UI for executing arbitrary commands on the host.

Figure 1: Recreation of installing a web shell on a Confluence instance

The threat actor uses this web shell to download and run the primary Cerber payload. In a default install, the Confluence application runs as the “confluence” user, a low-privilege user. As such, the data the ransomware is able to encrypt is limited to files owned by the confluence user. It will of course succeed in encrypting the datastore for the Confluence application, which can store important information. If it were running as a higher-privilege user, it would be able to encrypt more files, as it attempts to encrypt all files on the system.

Primary payload

Summary of payload:

  • Written in C++, highly obfuscated, and packed with UPX
  • Serves as a stager for further payloads
  • Uses a C2 server at 45[.]145[.]6[.]112 to download and unpack further payloads
  • Deletes itself off disk upon execution

The primary payload is packed with UPX, just like the other payloads. Its main purpose is to set up the environment and fetch further payloads to run.

Upon execution it unpacks itself and tries to create a file at /var/lock/0init-ld.lo. It is speculated that this was meant to serve as a lock file and prevent duplicate execution of the ransomware; however, if the lock file already exists, the result is discarded and execution continues as normal anyway.

It then connects to the (now defunct) C2 server at 45[.]145[.]6[.]112 and pulls down the secondary payload, a log checker known internally as agttydck. It does this by issuing a simple HTTP GET /agttydcki64 request to the server and writing the response body out to /tmp/agttydck.bat. It then executes it with /tmp and ck.log passed as arguments. The execution of this payload is detailed in the next section.

Once the secondary payload has finished executing, the primary payload checks whether the log file at /tmp/ck.log, written by agttydck, exists. If it does, the primary payload proceeds to delete itself and agttydcki64 from the disk. As it is still running in memory, it then downloads the encryptor payload, known internally as agttydcb, and drops it at /tmp/agttydcb.bat.

The packing on this payload is more complex. The file command reports it as a DOS executable, and the .bat extension would imply this as well. However, it does not have the correct magic bytes, and the high entropy of the file suggests that it is potentially encoded or encrypted. Indeed, the primary payload reads it in and then writes a decoded ELF file back using the same stream, overwriting the content. The exact mechanism used to decode agttydcb is unclear. The primary payload then executes the decoded agttydcb, the behavior of which is documented in a later section.

2283  openat(AT_FDCWD, "/tmp/agttydcb.bat", O_RDWR) = 4 
2283  read(4, "\353[\254R\333\372\22,\1\251\f\235 'A>\234\33\25E3g\335\0252\344vBg\177\356\321"..., 450560) = 450560 
2283  lseek(4, 0, SEEK_SET)             = 0 
2283  write(4, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\2\0>\0\1\0\0\0X\334F\0\0\0\0\0"..., 450560) = 450560 
2283  close(4)                          = 0 

Truncated strace output for the decoding process
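The decoding can be verified by checking the file’s magic bytes: before decoding, /tmp/agttydcb.bat lacks the ELF header, and afterwards it begins with \x7fELF, as shown in the write() call above. A minimal Python sketch of this check, using the path taken from the strace output:

# Check whether the dropped payload has been decoded into an ELF executable
# by inspecting its first four bytes (\x7fELF), as seen in the write() above.
def is_elf(path):
    with open(path, "rb") as handle:
        return handle.read(4) == b"\x7fELF"

print(is_elf("/tmp/agttydcb.bat"))  # True once the primary payload has decoded it in place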

Log check payload - agttydck

Summary of payload:

  • Written in C++, highly obfuscated, and packed with UPX
  • Tries to write the phrase “success” to a given file passed in arguments
  • Likely a check for sandboxing, or to check the permission level of the malware on the system

The log checker payload, agttydck, likely serves as a permission checker. It is a very simple payload and was easy to analyze statically despite the obfuscation. Like the other payloads, it is UPX packed.

When run, it joins the arguments passed to it with forward slashes to obtain a full path. In this case, it is passed /tmp and ck.log, which becomes /tmp/ck.log. It then tries to open this file in write mode; if it succeeds, it writes the word “success” and returns 0. If it does not succeed, it returns 1.

Figure 2: Cleaned-up routine that writes out the success phrase
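For clarity, the observed behavior of this routine can be approximated in a few lines of Python. This is an illustrative reimplementation of the decompiled logic, not the original code:

import sys

# Approximation of agttydck's check: join the arguments into a path
# (e.g. /tmp + ck.log -> /tmp/ck.log), try to write "success" to it,
# and return 0 on success or 1 on failure.
def main(args):
    path = "/".join(args)
    try:
        with open(path, "w") as handle:
            handle.write("success")
        return 0
    except OSError:
        return 1

if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))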

The purpose of this check isn’t exactly clear. It could be verifying that the /tmp directory is writable, which would indicate whether the system is too locked down for the encryptor to work. Given that the check runs in a process separate from the primary payload, it could also be an attempt to detect sandboxes that do not handle files correctly, in which case the primary payload would never see the file created by the child.

Encryptor - agttydcb

Summary of payload:

  • Written in C++, highly obfuscated, and packed with UPX
  • Writes log file /tmp/log.0 on start and /tmp/log.1 on completion, likely for debugging
  • Walks the root directory looking for directories it can encrypt
  • Writes a ransom note to each directory
  • Overwrites all files in directory with their encrypted content and adds a .L0CK3D extension

The encryptor, agttydcb, achieves the goal of the ransomware, which is to encrypt files on the filesystem. Like the other payloads, it is UPX packed and written in heavily obfuscated C++. Upon launch, it deletes itself off disk so as not to leave any artefacts. It then creates a file at /tmp/log.0, but with no content. As it creates a second file at /tmp/log.1 (also with no content) after encryption finishes, it is possible these were debug markers that the attacker mistakenly left in.

The encryptor then spawns a new thread to do the actual encryption. It selects directories to encrypt by walking the root file system; for example, it will try to encrypt /usr, then /var, and so on. For each directory, the payload attempts to write a ransom note at /<directory>/read-me3.txt. If it succeeds, it walks all files in the directory and attempts to encrypt them; if it fails, it moves on to the next directory.

Figure 3: Ransom note left by Cerber

When it has identified a file to encrypt, it opens a read-write file stream to the file and reads in the entire file. The content is encrypted in memory before the encryptor seeks to the start of the stream and writes the encrypted data, overwriting the file content and rendering the file fully encrypted. It then renames the file to have the .L0CK3D extension. Rewriting the same file, instead of creating a new file and deleting the old one, is useful on Linux because directories may be set to append-only, preventing the outright deletion of files. Rewriting the file may also overwrite the data on the underlying storage, making recovery even with advanced forensics far more difficult.

2290  openat(AT_FDCWD, "/home/ubuntu/example", O_RDWR) = 6 
2290  read(6, "file content"..., 3691) = 3691 
2290  write(6, "\241\253\270'\10\365?\2\300\304\275=\30B\34\230\254\357\317\242\337UD\266\362\\\210\215\245!\255f"
..., 3691) = 3691 
2290  close(6)                          = 0 
2290  rename("/home/ubuntu/example", "/home/ubuntu/example.L0CK3D") = 0 

Truncated strace of the encryption process

Once this finishes, it tries to delete itself again (which fails, as it has already deleted itself) and creates /tmp/log.1. It then gracefully exits. Despite the ransom note claiming the files were exfiltrated, Cado researchers did not observe any behavior indicating exfiltration.
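For responders scoping an infection, the artefacts described above (the .L0CK3D extension, the read-me3.txt ransom notes, and the marker files in /tmp and /var/lock) can be enumerated with a short sweep such as the Python sketch below; the starting path is an assumption, while the file names are taken from this analysis.

import os

# Sweep the filesystem for artefacts left by the Cerber Linux variant:
# encrypted files (.L0CK3D), ransom notes (read-me3.txt) and marker files.
MARKERS = ["/tmp/log.0", "/tmp/log.1", "/tmp/ck.log", "/var/lock/0init-ld.lo"]

def sweep(root="/"):
    findings = {"encrypted": [], "notes": []}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".L0CK3D"):
                findings["encrypted"].append(os.path.join(dirpath, name))
            elif name == "read-me3.txt":
                findings["notes"].append(os.path.join(dirpath, name))
    findings["markers"] = [m for m in MARKERS if os.path.exists(m)]
    return findings

if __name__ == "__main__":
    for category, paths in sweep("/").items():
        print(category, len(paths))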

Conclusion

Cerber is a relatively sophisticated, albeit aging, ransomware payload. While the use of the Confluence vulnerability allows it to compromise a large number of likely high-value systems, the data it is able to encrypt is often limited to just the Confluence data, and in well-configured systems this will be backed up. This greatly limits the efficacy of the ransomware in extracting money from victims, as there is much less incentive to pay up.

IoCs

The payloads are packed with UPX, so they will match existing UPX YARA rules.

Hashes (sha256)

cerber_primary 4ed46b98d047f5ed26553c6f4fded7209933ca9632b998d265870e3557a5cdfe

agttydcb 1849bc76e4f9f09fc6c88d5de1a7cb304f9bc9d338f5a823b7431694457345bd

agttydck ce51278578b1a24c0fc5f8a739265e88f6f8b32632cf31bf7c142571eb22e243
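Recovered binaries can be checked against the hashes above; a minimal Python sketch, using a hypothetical sample path:

import hashlib

# SHA-256 hashes of the three Cerber payloads listed above.
KNOWN = {
    "4ed46b98d047f5ed26553c6f4fded7209933ca9632b998d265870e3557a5cdfe": "cerber_primary",
    "1849bc76e4f9f09fc6c88d5de1a7cb304f9bc9d338f5a823b7431694457345bd": "agttydcb",
    "ce51278578b1a24c0fc5f8a739265e88f6f8b32632cf31bf7c142571eb22e243": "agttydck",
}

def identify(path):
    with open(path, "rb") as handle:
        digest = hashlib.sha256(handle.read()).hexdigest()
    return KNOWN.get(digest, "no match")

# Example usage with a hypothetical recovered sample:
# print(identify("/tmp/agttydcb.bat"))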

IPs

C2 (Defunct) 45[.]145[.]6[.]112

References

  1. https://confluence.atlassian.com/security/cve-2023-22518-improper-authorization-vulnerability-in-confluence-data-center-and-server-1311473907.html
  2. https://www.proofpoint.com/uk/threat-reference/cerber-ransomware
  3. https://nvd.nist.gov/vuln/detail/CVE-2023-22518


Blog / AI / December 22, 2025

The Year Ahead: AI Cybersecurity Trends to Watch in 2026


Introduction: 2026 cyber trends

Each year, we ask some of our experts to step back from the day-to-day pace of incidents, vulnerabilities, and headlines to reflect on the forces reshaping the threat landscape. The goal is simple:  to identify and share the trends we believe will matter most in the year ahead, based on the real-world challenges our customers are facing, the technology and issues our R&D teams are exploring, and our observations of how both attackers and defenders are adapting.  

In 2025, we saw generative AI and early agentic systems moving from limited pilots into more widespread adoption across enterprises. Generative AI tools became embedded in SaaS products and enterprise workflows we rely on every day, AI agents gained more access to data and systems, and we saw glimpses of how threat actors can manipulate commercial AI models for attacks. At the same time, expanding cloud and SaaS ecosystems and the increasing use of automation continued to stretch traditional security assumptions.

Looking ahead to 2026, we’re already seeing the security of AI models, agents, and the identities that power them becoming a key point of tension, and of opportunity, for both attackers and defenders. Long-standing challenges and risks such as identity, trust, data integrity, and human decision-making will not disappear, but AI and automation will increase the speed and scale of cyber risk.

Here's what a few of our experts believe are the trends that will shape this next phase of cybersecurity, and the realities organizations should prepare for.  

Agentic AI is the next big insider risk

In 2026, organizations may experience their first large-scale security incidents driven by agentic AI behaving in unintended ways—not necessarily due to malicious intent, but because of how easily agents can be influenced. AI agents are designed to be helpful, lack judgment, and operate without understanding context or consequence. This makes them highly efficient—and highly pliable. Unlike human insiders, agentic systems do not need to be socially engineered, coerced, or bribed. They only need to be prompted creatively, misinterpret legitimate prompts, or be vulnerable to indirect prompt injection. Without strong controls around access, scope, and behavior, agents may over-share data, misroute communications, or take actions that introduce real business risk. Securing AI adoption will increasingly depend on treating agents as first-class identities—monitored, constrained, and evaluated based on behavior, not intent.

-- Nicole Carignan, SVP of Security & AI Strategy

Prompt Injection moves from theory to front-page breach

We’ll see the first major story of an indirect prompt injection attack against companies adopting AI either through an accessible chatbot or an agentic system ingesting a hidden prompt. In practice, this may result in unauthorized data exposure or unintended malicious behavior by AI systems, such as over-sharing information, misrouting communications, or acting outside their intended scope. Recent attention on this risk—particularly in the context of AI-powered browsers and additional safety layers being introduced to guide agent behavior—highlights a growing industry awareness of the challenge.  

-- Collin Chapleau, Senior Director of Security & AI Strategy

Humans are even more outpaced, but not broken

When it comes to cyber, people aren’t failing; the system is moving faster than they can. Attackers exploit the gap between human judgment and machine-speed operations. The rise of deepfakes and emotion-driven scams that we’ve seen in the last few years reduces our ability to spot the familiar human cues we’ve been taught to look out for. Fraud now spans social platforms, encrypted chat, and instant payments in minutes. Expecting humans to be the last line of defense is unrealistic.

Defense must assume human fallibility and design accordingly. Automated provenance checks, cryptographic signatures, and dual-channel verification should precede human judgment. Training still matters, but it cannot close the gap alone. In the year ahead, we need to see more of a focus on partnership: systems that absorb risk so humans make decisions in context, not under pressure.

-- Margaret Cunningham, VP of Security & AI Strategy

AI removes the attacker bottleneck—smaller organizations feel the impact

One factor that is currently preventing more companies from breaches is a bottleneck on the attacker side: there’s not enough human hacker capital. The number of human hands on a keyboard is a rate-determining factor in the threat landscape. Further advancements of AI and automation will continue to open that bottleneck. We are already seeing that. The ostrich approach of hoping that one’s own company is too obscure to be noticed by attackers will no longer work as attacker capacity increases.  

-- Max Heinemeyer, Global Field CISO

SaaS platforms become the preferred supply chain target

Attackers have learned a simple lesson: compromising SaaS platforms can have big payouts. As a result, we’ll see more targeting of commercial off-the-shelf SaaS providers, which are often highly trusted and deeply integrated into business environments. Some of these attacks may involve software with unfamiliar brand names, but their downstream impact will be significant. In 2026, expect more breaches where attackers leverage valid credentials, APIs, or misconfigurations to bypass traditional defenses entirely.

-- Nathaniel Jones, VP of Security & AI Strategy

Increased commercialization of generative AI and AI assistants in cyber attacks

One trend we’re watching closely for 2026 is the commercialization of AI-assisted cybercrime. For example, cybercrime prompt playbooks sold on the dark web—essentially copy-and-paste frameworks that show attackers how to misuse or jailbreak AI models. It’s an evolution of what we saw in 2025, where AI lowered the barrier to entry. In 2026, those techniques become productized, scalable, and much easier to reuse.  

-- Toby Lewis, Global Head of Threat Analysis

Conclusion

Taken together, these trends underscore that the core challenges of cybersecurity are not changing dramatically -- identity, trust, data, and human decision-making still sit at the core of most incidents. What is changing quickly is the environment in which these challenges play out. AI and automation are accelerating everything: how quickly attackers can scale, how widely risk is distributed, and how easily unintended behavior can create real impact. And as technology like cloud services and SaaS platforms become even more deeply integrated into businesses, the potential attack surface continues to expand.  

Predictions are not guarantees. But the patterns emerging today suggest that 2026 will be a year where securing AI becomes inseparable from securing the business itself. The organizations that prepare now—by understanding how AI is used, how it behaves, and how it can be misused—will be best positioned to adopt these technologies with confidence in the year ahead.

Learn more about how to secure AI adoption in the enterprise without compromise by registering to join our live launch webinar on February 3, 2026.  

About the author
The Darktrace Community

Blog / Email / December 22, 2025

Why Organizations are Moving to Label-free, Behavioral DLP for Outbound Email


Why outbound email DLP needs reinventing

In 2025, the global average cost of a data breach fell slightly but remains substantial at USD 4.44 million (IBM Cost of a Data Breach Report 2025). The headline figure hides a painful reality: many of these breaches stem not from sophisticated hacks but from simple human error, such as mis-sent emails, accidental forwarding, or replying with the wrong attachment. Because outbound email is a common channel for sensitive data leaving an organization, the risk posed by everyday mistakes is enormous.

In 2025, 53% of data breaches involved customer PII, making it the most commonly compromised asset (IBM Cost of a Data Breach Report 2025). This makes “protection at the moment of send” essential. A single unintended disclosure can trigger compliance violations, regulatory scrutiny, and erosion of customer trust – consequences that are disproportionate to the marginal human errors that cause them.

Traditional DLP has long attempted to mitigate these impacts, but it relies heavily on perfect labelling and rigid pattern-matching. In reality, data loss rarely presents itself as a neat, well-structured pattern waiting to be caught – it looks like everyday communication, just slightly out of context.

How data loss actually happens

Most data loss comes from frustratingly familiar scenarios. A mistyped name in auto-complete sends sensitive data to the wrong “Alex.” A user forwards a document to a personal Gmail account “just this once.” Someone shares an attachment with a new or unknown correspondent without realizing how sensitive it is.

Traditional, content-centric DLP rarely catches these moments. Labels are missing or wrong. Regexes break the moment the data shifts formats. And static rules can’t interpret the context that actually matters – the sender-recipient relationship, the communication history, or whether this behavior is typical for the user.

It’s the everyday mistakes that hurt the most. The classic example: the Friday 5:58 p.m. mis-send, when auto-complete selects Martin, a former contractor, instead of Marta in Finance.

What traditional DLP approaches offer (and where gaps remain)

Most email DLP today follows two patterns, each useful but incomplete.

  • Policy- and label-centric DLP works when labels are correct — but content is often unlabeled or mislabeled, and maintaining classification adds friction. Gaps appear exactly where users move fastest
  • Rule and signature-based approaches catch known patterns but miss nuance: human error, new workflows, and “unknown unknowns” that don’t match a rule

The takeaway: Protection must combine content + behavior + explainability at send time, without depending on perfect labels.

Your technology primer: The three pillars that make outbound DLP effective

1) Label-free (vs. data classification)

Protects all content, not just what’s labeled. Label-free analysis removes classification overhead and closes gaps from missing or incorrect tags. By evaluating content and context at send time, it also catches misdelivery and other payload-free errors.

  • No labeling burden; no regex/rule maintenance
  • Works when tags are missing, wrong, or stale
  • Detects misdirected sends even when labels look right

2) Behavioral (vs. rules, signatures, threat intelligence)

Understands user behavior, not just static patterns. Behavioral analysis learns what’s normal for each person, surfacing human error and subtle exfiltration that rules can’t. It also incorporates account signals and inbound intel, extending across email and Teams.

  • Flags risk without predefined rules or IOCs
  • Catches misdelivery, unusual contacts, personal forwards, odd timing/volume
  • Blends identity and inbound context across channels

3) Proprietary DSLM (vs. generic LLM)

Optimized for precise, fast, explainable on-send decisions. A DSLM understands email/DLP semantics, avoids generative risks, and stays auditable and privacy-controlled, delivering intelligence reliably without slowing mail flow.

  • Low-latency, on-send enforcement
  • Non-generative for predictable, explainable outcomes
  • Governed model with strong privacy and auditability

The Darktrace approach to DLP

Darktrace / EMAIL – DLP stops misdelivery and sensitive data loss at send time using hold/notify/justify/release actions. It blends behavioral insight with content understanding across 35+ PII categories, protecting both labeled and unlabeled data. Every action is paired with clear explainability: AI narratives show exactly why an email was flagged, supporting analysts and helping end-users learn. Deployment aligns cleanly with existing SOC workflows through mail-flow connectors and optional Microsoft Purview label ingestion, without forcing duplicate policy-building.

Deployment is simple: Microsoft 365 routes outbound mail to Darktrace for real-time, inline decisions without regex or rule-heavy setup.

A buyer’s checklist for DLP solutions

When choosing your DLP solution, you want to be sure that it can deliver precise, explainable protection at the moment it matters – on send – without operational drag.  

To finish, we’ve compiled a handy list of questions you can ask before choosing an outbound DLP solution:

  • Can it operate label-free when tags are missing or wrong?
  • Does it truly learn per-user behavior (no shortcuts)?
  • Is there a domain-specific model behind the content understanding (not a generic LLM)?
  • Does it explain decisions to both analysts and end users?
  • Will it integrate with your label program and SOC workflows rather than duplicate them?

For a deep dive into Darktrace’s DLP solution, check out the full solution brief.


About the author
Carlos Gray
Senior Product Marketing Manager, Email