February 10, 2025

From Hype to Reality: How AI is Transforming Cybersecurity Practices

AI hype is everywhere, but not many vendors are getting specific. Darktrace’s multi-layered AI combines various machine learning techniques for behavioral analytics, real-time threat detection, investigation, and autonomous response.

AI is everywhere, predominantly because it has changed the way humans interact with data. AI is a powerful tool for data analytics, predictions, and recommendations, but accuracy, safety, and security are paramount for operationalization.

In cybersecurity, AI-powered solutions are becoming increasingly necessary to keep up with modern business complexity and this new age of cyber-threat, marked by attacker innovation, use of AI, speed, and scale. The emergence of these new threats calls for a varied and layered approach in AI security technology to anticipate asymmetric threats.

While many cybersecurity vendors are adding AI to their products, they are not always communicating the capabilities or data used clearly. This is especially the case with Large Language Models (LLMs). Many products are adding interactive and generative capabilities which do not necessarily increase the efficacy of detection and response but rather are aligned with enhancing the analyst and security team experience and data retrieval.

Consequently, many people erroneously conflate generative AI with other types of AI. Similarly, only 31% of security professionals report that they are “very familiar” with supervised machine learning, the type of AI most often applied in today’s cybersecurity solutions to identify threats using attack artifacts and facilitate automated responses. This confusion around AI and its capabilities can result in suboptimal cybersecurity measures, overfitting, inaccuracies due to ineffective methods/data, inefficient use of resources, and heightened exposure to advanced cyber threats.

Vendors must cut through the AI market and demystify the technology in their products for safe, secure, and accurate adoption. To that end, let’s discuss common AI techniques in cybersecurity as well as how Darktrace applies them.

Modernizing cybersecurity with AI

Machine learning has presented a significant opportunity to the cybersecurity industry, and many vendors have been using it for years. Despite the high potential benefit of applying machine learning to cybersecurity, not every AI tool or machine learning model is equally effective: results depend on the technique chosen, how it is applied, and the data it was trained on.

Supervised machine learning and cybersecurity

Supervised machine learning models are trained on labeled, structured data to automate tasks that humans have defined and taught. Some cybersecurity vendors have been experimenting with supervised machine learning for years, with most automating threat detection based on reported attack data using big data science, shared cyber-threat intelligence, known or reported attack behavior, and classifiers.

In the last several years, however, more vendors have expanded into behavioral analytics and anomaly detection. In many applications, this method separates the learning phase, in which the behavioral profile is created (baselining), from the subsequent anomaly detection. As such, it does not learn continuously and requires periodic updating and re-training to keep up with dynamic business operations and new attack techniques. Unfortunately, this opens the door to a high rate of daily false positives and false negatives.
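To make the distinction concrete, here is a minimal sketch of the label-driven approach described above, using scikit-learn; the dataset, feature names, and labels are invented for illustration, and the model is far simpler than anything used in a production product.

```python
# Minimal sketch of label-driven (supervised) threat classification.
# The dataset, feature names, and labels below are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-event features: [bytes_out, unique_ports, failed_logins]
X = rng.normal(size=(1000, 3))
y = (X[:, 2] > 1.0).astype(int)  # stand-in labels: 1 = "malicious", 0 = "benign"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Training happens offline on a fixed, labeled snapshot of past activity.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

print("holdout accuracy:", clf.score(X_test, y_test))

# The model only knows what its labels describe: new attack behavior or a change
# in normal business activity requires fresh labels and periodic re-training.
```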

Unsupervised machine learning and cybersecurity

Unlike supervised approaches, unsupervised machine learning does not require labeled training data or human-led training. Instead, it independently analyzes data to detect compelling patterns without relying on knowledge of past threats. This removes the dependency on human input or involvement to guide learning.

However, it is constrained by its input parameters, requiring thoughtful consideration of technique and feature selection to ensure the accuracy of the outputs. Additionally, while it can surface anomalous patterns in the data, some of those patterns may be irrelevant and distracting.

When such models are used for behavioral analytics and anomaly detection, the outputs come in the form of anomalies rather than classified threats, requiring additional modeling for threat behavior context and prioritization. Anomaly detection performed in isolation can produce resource-wasting false positives.
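For contrast, the following is a minimal unsupervised sketch, again using scikit-learn; the data and the contamination rate are assumptions, and the point is that the output is an anomaly ranking rather than a classified threat.

```python
# Minimal sketch of unsupervised anomaly detection on unlabeled observations.
# The data and contamination rate are assumptions made for this example.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Unlabeled observations, e.g. per-device connection statistics.
normal = rng.normal(loc=0.0, scale=1.0, size=(2000, 4))
outliers = rng.normal(loc=6.0, scale=1.0, size=(5, 4))
X = np.vstack([normal, outliers])

model = IsolationForest(contamination=0.01, random_state=1).fit(X)
scores = -model.score_samples(X)  # higher score = more anomalous

# The output is an anomaly ranking, not a verdict: without further modeling of
# threat behavior and context, top-ranked items may be benign oddities.
top_rows = np.argsort(scores)[-5:]
print("most anomalous rows:", top_rows)
```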

LLMs and cybersecurity

LLMs are a major aspect of mainstream generative AI, and they can be used in both supervised and unsupervised ways. They are pre-trained on massive volumes of data and can be applied to human language, machine language, and more.

With the recent explosion of LLMs in the market, many vendors are rushing to add generative AI to their products, using it for chatbots, Retrieval-Augmented Generation (RAG) systems, agents, and embeddings. Generative AI in cybersecurity can optimize data retrieval for defenders, summarize reporting, or emulate sophisticated phishing attacks for preventative security.

But because LLMs perform semantic analysis, they can struggle to reason consistently enough for security analysis and detection. If not applied responsibly, generative AI can cause confusion by “hallucinating,” that is, referencing invented data, unless additional post-processing reduces the impact, or by providing conflicting responses due to confirmation bias in the prompts written by different security team members.
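To illustrate the retrieval-and-grounding pattern in general terms (not any vendor’s implementation), here is a toy sketch in which TF-IDF similarity stands in for learned embeddings; the knowledge-base records and the query are invented.

```python
# Toy sketch of the retrieval step behind a security-focused RAG assistant.
# TF-IDF similarity stands in for learned embeddings; the records and query are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

knowledge_base = [
    "Device laptop-42 made repeated SMB connections to an internal file server.",
    "Alert: unusual outbound TLS beaconing from host web-01 to a rare external IP.",
    "Email campaign blocked: credential-harvesting link impersonating a SaaS login page.",
]
query = "Which host showed beaconing to an external IP?"

vectorizer = TfidfVectorizer().fit(knowledge_base + [query])
doc_vectors = vectorizer.transform(knowledge_base)
query_vector = vectorizer.transform([query])

# Retrieve the most relevant record and pass it to the LLM as grounding context,
# so the generated answer cites retrieved data instead of invented details.
best_match = cosine_similarity(query_vector, doc_vectors).argmax()
print("context passed to the LLM:", knowledge_base[best_match])
```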

Combining techniques in a multi-layered AI approach

Each type of machine learning technique has its own set of strengths and weaknesses, so a multi-layered, multi-method approach is ideal to enhance functionality while overcoming the shortcomings of any one method.

Darktrace’s Self-Learning AI is a multi-layered engine powered by multiple machine learning approaches, which operate in combination for cyber defense. This allows Darktrace to protect the entire digital estates of the organizations it secures, including corporate networks, cloud computing services, SaaS applications, IoT, Industrial Control Systems (ICS), and email systems.

Plugged into the organization’s infrastructure and services, our AI engine ingests and analyzes the raw data and its interactions within the environment and forms an understanding of normal behavior, right down to the granular details of specific users and devices. The system continually revises its understanding of what is normal based on evolving evidence, learning continuously rather than relying on periodic baselining.

This dynamic understanding of normal, partnered with dozens of anomaly detection models, means that the AI engine can identify, with a high degree of precision, events or behaviors that are both anomalous and unlikely to be benign. Viewing anomalies through the lens of many models, and autonomously fine-tuning each model’s performance, yields a deeper understanding of, and higher confidence in, anomaly detection.
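As a rough illustration of how scores from several independent detectors can reinforce one another, the sketch below uses a simple weighted noisy-OR fusion; the weighting scheme and score values are assumptions for the example, not a description of Darktrace’s actual models.

```python
# Illustrative fusion of scores from several independent anomaly detectors.
# The noisy-OR weighting scheme and the values below are assumptions for the example.
import numpy as np

def fuse_scores(model_scores: dict, weights: dict) -> float:
    """An event flagged independently by several detectors ends up with a higher
    combined score than any single detector would give it on its own."""
    prob_not_anomalous = 1.0
    for name, score in model_scores.items():
        weighted = np.clip(weights.get(name, 1.0) * score, 0.0, 1.0)
        prob_not_anomalous *= (1.0 - weighted)
    return 1.0 - prob_not_anomalous

scores = {"rare_endpoint": 0.60, "unusual_data_volume": 0.55, "new_credential_use": 0.30}
weights = {"rare_endpoint": 0.9, "unusual_data_volume": 0.8, "new_credential_use": 0.7}
print(round(fuse_scores(scores, weights), 3))  # combined score exceeds any single input
```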

The next layer provides event correlation and threat behavior context to understand the risk level of an anomalous event or series of events. Every anomalous event is investigated by Cyber AI Analyst, which combines unsupervised machine learning models that analyze logs with supervised machine learning trained on how to investigate. This provides anomaly and risk context along with explainable investigation outcomes.

The ability to identify activity that represents the first footprints of an attacker, without any prior knowledge or intelligence, lies at the heart of the AI system’s efficacy in keeping pace with threat actor innovations and changes in tactics and techniques. It helps the human team detect subtle indicators that can be hard to spot amid the immense noise of legitimate, day-to-day digital interactions. This enables advanced threat detection with full domain visibility.

Digging deeper into AI: Mapping specific machine learning techniques to cybersecurity functions

Visibility and control are vital for the practical adoption of AI solutions, as they build trust between human security teams and their AI tools. That is why we want to share some specific applications of AI across our solutions, moving beyond hype and buzzwords to provide grounded, technical explanations.

Darktrace’s technology helps security teams cover every stage of the incident lifecycle with a range of comprehensive analysis and autonomous investigation and response capabilities.

  1. Behavioral prediction: Our AI understands your unique organization by learning normal patterns of life. It accomplishes this with multiple clustering algorithms, anomaly detection models, a Bayesian meta-classifier for autonomous fine-tuning, graph theory, and more (a simplified clustering sketch follows this list).
  2. Real-time threat detection: With a true understanding of normal, our AI engine connects anomalous events to risky behavior using probabilistic models. 
  3. Investigation: Darktrace performs in-depth analysis and investigation of anomalies, automating Level 1 SOC analysis and augmenting the rest of the SOC team by prioritizing events for human-led investigations. The methods used include supervised and unsupervised machine learning models, semantic analysis models, and graph theory.
  4. Response: Darktrace calculates the proportional action to take in order to neutralize in-progress attacks at machine speed. As a result, organizations are protected 24/7, even when the human team is out of the office. Through understanding the normal pattern of life of an asset or peer group, the autonomous response engine can isolate the anomalous or risky behavior and surgically block it. The autonomous response engine can also enforce the peer group’s pattern of life when rare and risky behavior continues.
  5. Customizable model editor: This layer of customizable logic models tailors our AI’s processing to give security teams more visibility as well as the opportunity to adapt outputs, therefore increasing explainability, interpretability, control, and the ability to modify the operationalization of the AI output with auditing.
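The clustering sketch referenced in item 1 above is shown here in simplified form: devices are grouped into peer groups by behavioral features so that each device is judged against its own peers. The features, cluster count, and data are hypothetical and not a description of Darktrace’s actual models.

```python
# Simplified peer-group clustering sketch (referenced in item 1 above).
# Features, cluster count, and data are hypothetical, not Darktrace's actual models.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)

# Per-device behavior: [avg daily connections, distinct peers, GB transferred out]
devices = np.vstack([
    rng.normal([50, 10, 1.0], 5, size=(40, 3)),     # workstation-like behavior
    rng.normal([500, 80, 20.0], 30, size=(10, 3)),  # server-like behavior
])

features = StandardScaler().fit_transform(devices)
peer_groups = KMeans(n_clusters=2, n_init=10, random_state=2).fit_predict(features)

# Each device is then judged against the pattern of life of its own peer group,
# so "normal" for a busy server is never applied to a quiet workstation.
print("peer group of first device:", peer_groups[0])
```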

See the complete AI architecture in the paper “The AI Arsenal: Understanding the Tools Shaping Cybersecurity.”

Figure 1: Alerts can be customized in the model editor in many ways, such as by editing the thresholds for the rarity and unusualness scores shown above.

Machine learning is the fundamental ally in cyber defense

Traditional security methods, even those that use a small subset of machine learning, are no longer sufficient: these tools can neither keep up with all possible attack vectors nor respond fast enough to machine-speed attacks, whose variety and complexity go well beyond known and expected patterns.

Security teams require advanced detection capabilities, using multiple machine learning techniques to understand the environment, filter the noise, and take action where threats are identified.

Darktrace’s Self-Learning AI comes together to achieve behavioral prediction, real-time threat detection and response, and incident investigation, all while empowering your security team with visibility and control.

Learn how AI is applied in cybersecurity

Discover specifically how Darktrace applies different types of AI to improve cybersecurity efficacy and operations in this technical paper.


January 13, 2026

Runtime Is Where Cloud Security Really Counts: The Importance of Detection, Forensics and Real-Time Architecture Awareness


Introduction: Shifting focus from prevention to runtime

Cloud security has spent the last decade focused on prevention: tightening configurations, scanning for vulnerabilities, and enforcing best practices through Cloud Native Application Protection Platforms (CNAPP). These capabilities remain essential, but they are not where cloud attacks happen.

Attacks happen at runtime: the dynamic, ephemeral, constantly changing execution layer where applications run, permissions are granted, identities act, and workloads communicate. This is also the layer where defenders traditionally have the least visibility and the least time to respond.

Today’s threat landscape demands a fundamental shift. Reducing cloud risk now requires moving beyond static posture and CNAPP-only approaches and embracing real-time behavioral detection across workloads and identities, paired with the ability to automatically preserve forensic evidence. Defenders need a continuous, real-time understanding of what “normal” looks like in their cloud environments, and AI capable of processing massive data streams to surface deviations that signal emerging attacker behavior.

Runtime: The layer where attacks happen

Runtime is the cloud in motion: containers starting and stopping, serverless functions being called, IAM roles being assumed, workloads auto-scaling, and data flowing across hundreds of services. It’s also where attackers:

  • Weaponize stolen credentials
  • Escalate privileges
  • Pivot programmatically
  • Deploy malicious compute
  • Manipulate or exfiltrate data

The challenge is complex: runtime evidence is ephemeral. Containers vanish; critical process data disappears in seconds. By the time a human analyst begins investigating, the detail required to understand and respond to the alert is often already gone. This volatility makes runtime the hardest layer to monitor, and the most important one to secure.

What Darktrace / CLOUD brings to runtime defense

Darktrace / CLOUD is purpose-built for the cloud execution layer. It unifies the capabilities required to detect, contain, and understand attacks as they unfold, not hours or days later. Four elements define its value:

1. Behavioral, real-time detection

The platform learns normal activity across cloud services, identities, workloads, and data flows, then surfaces anomalies that signify real attacker behavior, even when no signature exists.

2. Automated forensic-level artifact collection

The moment Darktrace detects a threat, it can automatically capture volatile forensic evidence: disk state, memory, logs, and process context, including from ephemeral resources. This preserves the truth of what happened before workloads terminate and evidence disappears.
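As a simplified illustration of the general idea (not how Darktrace / CLOUD implements capture), the sketch below uses the AWS SDK for Python (boto3) to snapshot the EBS volumes attached to a flagged instance before it can terminate; the instance ID and region are hypothetical.

```python
# Simplified illustration of evidence preservation on alert using boto3.
# This is a sketch of the general idea, not how Darktrace / CLOUD implements capture;
# the instance ID and region used below are hypothetical.
from datetime import datetime, timezone

import boto3

def snapshot_instance_volumes(instance_id: str, region: str = "us-east-1") -> list:
    """Snapshot every EBS volume attached to a flagged instance before it terminates."""
    ec2 = boto3.client("ec2", region_name=region)
    reservations = ec2.describe_instances(InstanceIds=[instance_id])["Reservations"]
    snapshot_ids = []
    for reservation in reservations:
        for instance in reservation["Instances"]:
            for mapping in instance.get("BlockDeviceMappings", []):
                volume_id = mapping["Ebs"]["VolumeId"]
                snapshot = ec2.create_snapshot(
                    VolumeId=volume_id,
                    Description=f"forensic capture of {instance_id} at "
                                f"{datetime.now(timezone.utc).isoformat()}",
                )
                snapshot_ids.append(snapshot["SnapshotId"])
    return snapshot_ids

# Example call (hypothetical instance ID):
# print(snapshot_instance_volumes("i-0123456789abcdef0"))
```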

3. AI-led investigation

Cyber AI Analyst assembles cloud behaviors into a coherent incident story, correlating identity activity, network flows, and Cloud workload behavior. Analysts no longer need to pivot across dashboards or reconstruct timelines manually.

4. Live architectural awareness

Darktrace continuously maps your cloud environment as it operates, including services, identities, connectivity, and data pathways. This real-time visibility makes anomalies clearer and investigations dramatically faster.

Together, these capabilities form a runtime-first security model.

Why CNAPP alone isn’t enough

CNAPPs excel at pre-deployment checks all the way down to developer workstations, identifying misconfigurations, concerning permission combinations, vulnerable images, and risky infrastructure choices. But CNAPP’s breadth is also its limitation. CNAPP is about posture. Runtime defense is about behavior.

CNAPP tells you what could go wrong; runtime detection highlights what is going wrong right now.

CNAPP cannot preserve ephemeral evidence, correlate active behaviors across domains, or contain unfolding attacks with the precision and speed required during a real incident. Prevention remains essential, but prevention alone cannot stop an attacker who is already operating inside your cloud environment.

Real-world AWS Scenario: Why Runtime Monitoring Wins

A recent incident detected by Darktrace / CLOUD highlights how cloud compromises unfold, and why runtime visibility is non-negotiable. Each step below reflects detections that occur only when monitoring behavior in real time.

1. External Credential Use

Detection: Unusual external source for credential use: An attacker logs into a cloud account from a never-before-seen location, the earliest sign of account takeover.

2. AWS CLI Pivot

Detection: Unusual CLI activity: The attacker switches to programmatic access, issuing commands from a suspicious host to gain automation and stealth.

3. Credential Manipulation

Detection: Rare password reset: They reset or assign new passwords to establish persistence and bypass existing security controls.

4. Cloud Reconnaissance

Detection: Burst of resource discovery: The attacker enumerates buckets, roles, and services to map high value assets and plan next steps.

5. Privilege Escalation

Detection: Anomalous IAM update: Unauthorized policy updates or role changes grant the attacker elevated access or a backdoor.

6. Malicious Compute Deployment

Detection: Unusual EC2/Lambda/ECS creation: The attacker deploys compute resources for mining, lateral movement, or staging further tools.

7. Data Access or Tampering

Detection: Unusual S3 modifications: They alter S3 permissions or objects, often a prelude to data exfiltration or corruption.

Only some of these actions would appear in a posture scan, and, crucially, only after the fact. Every one of these runtime detections is visible only through real-time behavioral monitoring while the attack is in progress.
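As a hedged sketch of what step 1 might look like when built by hand, the snippet below uses boto3 to pull recent ConsoleLogin events from CloudTrail and flag source IPs outside a known set; the baseline set and the alert logic are simplified placeholders, not Darktrace’s detection models.

```python
# Hedged sketch of step 1: flag console logins from never-before-seen source IPs
# using CloudTrail lookups via boto3. The baseline set and alert logic are simplified
# placeholders, not Darktrace's detection models.
import json

import boto3

def unusual_console_logins(known_ips: set, region: str = "us-east-1") -> list:
    cloudtrail = boto3.client("cloudtrail", region_name=region)
    events = cloudtrail.lookup_events(
        LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}],
        MaxResults=50,
    )["Events"]

    suspicious = []
    for event in events:
        detail = json.loads(event["CloudTrailEvent"])
        source_ip = detail.get("sourceIPAddress", "")
        if source_ip and source_ip not in known_ips:
            suspicious.append({"time": str(event["EventTime"]), "source_ip": source_ip})
    return suspicious

# Example call (baseline IPs are hypothetical):
# print(unusual_console_logins({"203.0.113.10", "198.51.100.7"}))
```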

The future of cloud security is runtime-first

Cloud defense can no longer revolve solely around prevention. Modern attacks unfold in runtime, across a fast-changing mesh of workloads, services, and, critically, identities. To reduce risk, organizations must be able to detect, understand, and contain malicious activity as it happens, before ephemeral evidence disappears and before attackers pivot across identity layers.

Darktrace / CLOUD delivers this shift by turning runtime, the most volatile and consequential layer in the cloud, into a fully defensible control point through unified visibility across behavior, workloads, and identities. It does this by providing:

  • Real-time behavior detection across workloads and identity activity
  • Autonomous response actions for rapid containment
  • Automated forensic-level artifact preservation the moment events occur
  • AI-driven investigation that separates weak signals from true attacker patterns
  • Live cloud environment insight to understand context and impact instantly

Cloud security must evolve from securing what might go wrong to continuously understanding what is happening: in runtime, across identities, and at the speed attackers operate. Unifying runtime and identity visibility is how defenders regain the advantage.


About the author
Adam Stevens
Senior Director of Product, Cloud | Darktrace

January 12, 2026

Maduro Arrest Used as a Lure to Deliver Backdoor


Introduction

Threat actors frequently exploit ongoing world events to trick users into opening and executing malicious files. Darktrace security researchers recently identified a threat group using reports around the arrest of Venezuelan President Nicolás Maduro on January 3, 2025, as a lure to deliver backdoor malware.

Technical Analysis

While the exact initial access method is unknown, it is likely that a spear-phishing email was sent to victims, containing a zip archive titled “US now deciding what’s next for Venezuela.zip”. This file included an executable named “Maduro to be taken to New York.exe” and a dynamic-link library (DLL), “kugou.dll”.  

The binary “Maduro to be taken to New York.exe” is a legitimate binary (albeit with an expired signature) related to KuGou, a Chinese streaming platform. Its function is to load the DLL “kugou.dll” via the DLL search order. In this instance, the expected DLL has been replaced with a malicious one of the same name, which the legitimate binary loads instead.

Figure 1: DLL called with LoadLibraryW.

Once the DLL is executed, the directory C:\ProgramData\Technology360NB is created, and the DLL is copied into it along with the executable, which is renamed “DataTechnology.exe”. A registry key is created for persistence at “HKCU\Software\Microsoft\Windows\CurrentVersion\Run\Lite360” to run “DataTechnology.exe --DATA” at logon.

Figure 2: Registry key added for persistence.
Figure 3: Folder “Technology360NB” created.
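For defenders who want to hunt for this specific persistence mechanism, a minimal Windows-only check of the current user’s Run key might look like the following; the value name and binary name come from the analysis above, but the script itself is an illustrative sketch, not a Darktrace detection.

```python
# Illustrative Windows-only check for the persistence mechanism described above:
# enumerate the current user's Run key and flag the "Lite360" value. This is a
# hunting sketch, not a Darktrace detection.
import winreg

RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

def find_suspicious_run_entries(suspicious_names=("Lite360",)):
    hits = []
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY) as key:
        index = 0
        while True:
            try:
                name, value, _type = winreg.EnumValue(key, index)
            except OSError:  # no more values under the key
                break
            if name in suspicious_names or "DataTechnology.exe" in str(value):
                hits.append((name, value))
            index += 1
    return hits

if __name__ == "__main__":
    for name, value in find_suspicious_run_entries():
        print(f"persistence entry found: {name} -> {value}")
```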

During execution, a dialog box appears with the caption “Please restart your computer and try again, or contact the original author.”

Figure 4: Message box prompting the user to restart.

If the user restarts, the malware is launched from the registry key with the --DATA argument; if they do not, a forced restart is triggered. Once the system restarts, the malware begins periodic TLS connections to the command-and-control (C2) server 172.81.60[.]97 on port 443. While the encrypted traffic prevents direct inspection of commands or data, the regular beaconing and response traffic strongly imply that the malware can poll a remote server for instructions, configuration, or tasking.
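A rough way to reason about the regular beaconing observation is to measure how evenly spaced the connections to a single destination are; the sketch below flags low jitter in inter-connection gaps, with invented timestamps standing in for real flow logs.

```python
# Rough sketch of spotting regular beaconing: near-constant gaps between connections
# to one destination suggest automated polling. Timestamps below are invented;
# real data would come from network or flow logs.
import statistics

def looks_like_beaconing(timestamps, max_jitter_ratio=0.1):
    """True if inter-connection gaps are near-constant (periodic, C2-style traffic)."""
    if len(timestamps) < 4:
        return False
    gaps = [later - earlier for earlier, later in zip(timestamps, timestamps[1:])]
    mean_gap = statistics.mean(gaps)
    return mean_gap > 0 and statistics.pstdev(gaps) / mean_gap < max_jitter_ratio

# Connections to 172.81.60[.]97:443 roughly every 60 seconds (hypothetical times).
connection_times = [0.0, 60.2, 119.9, 180.1, 240.0, 300.3]
print(looks_like_beaconing(connection_times))  # True: highly regular intervals
```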

Conclusion

Threat groups have long used geopolitical issues and other high-profile events to make malicious content appear more credible or urgent. Since the onset of the war in Ukraine, organizations have been repeatedly targeted with spear-phishing emails using subject lines related to the ongoing conflict, including references to prisoners of war [1]. Similarly, the Chinese threat group Mustang Panda frequently uses this tactic to deploy backdoors, using lures related to the Ukrainian war, conventions on Tibet [2], the South China Sea [3], and Taiwan [4].  

The activity described in this blog shares similarities with previous Mustang Panda campaigns, including the use of a current-events-themed archive, a directory created in ProgramData containing a legitimate executable used to load a malicious DLL, and Run registry keys used for persistence. While there is an overlap of tactics, techniques and procedures (TTPs), there is insufficient information available to confidently attribute this activity to a specific threat group. Users should remain vigilant, especially when opening email attachments.

Credit to Tara Gould (Malware Research Lead)
Edited by Ryan Traill (Analyst Content Lead)

Indicators of Compromise (IoCs)

172.81.60[.]97
8f81ce8ca6cdbc7d7eb10f4da5f470c6 - US now deciding what's next for Venezuela.zip
722bcd4b14aac3395f8a073050b9a578 - Maduro to be taken to New York.exe
aea6f6edbbbb0ab0f22568dcb503d731 - kugou.dll

References

[1] https://cert.gov.ua/article/6280422  

[2] https://www.ibm.com/think/x-force/hive0154-mustang-panda-shifts-focus-tibetan-community-deploy-pubload-backdoor

[3] https://www.ibm.com/think/x-force/hive0154-targeting-us-philippines-pakistan-taiwan

[4] https://www.ibm.com/think/x-force/hive0154-targeting-us-philippines-pakistan-taiwan

About the author
Tara Gould
Malware Research Lead