November 25, 2024

Why Artificial Intelligence is the Future of Cybersecurity

This blog explores the impact of AI on the threat landscape, the benefits of AI in cybersecurity, and the role it plays in enhancing security practices and tools.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Brittany Woodsmall
Product Marketing Manager, AI

Introduction: AI & Cybersecurity

In the wake of artificial intelligence (AI) becoming more commonplace, it’s no surprise that threat actors are adopting AI in their attacks at an accelerated pace. AI augments complex tasks such as spear-phishing, deepfakes, polymorphic malware generation, and advanced persistent threat (APT) campaigns, significantly enhancing the sophistication and scale of attackers’ operations. This has put security professionals in a reactive state, struggling to keep pace with the proliferation of threats.

As AI reshapes the future of cyber threats, defenders are also looking to integrate AI technologies into their security stack. Adopting AI-powered solutions in cybersecurity enables security teams to detect and respond to these advanced threats more quickly and accurately, as well as automate traditionally manual and routine tasks. According to research from Darktrace’s 2024 State of AI Cybersecurity Report, improving threat detection, identifying exploitable vulnerabilities, and automating low-level security tasks were the top three ways practitioners saw AI enhancing their security teams’ capabilities [1], underscoring the wide-ranging capabilities of AI in cyber.

In this blog, we will discuss how AI has impacted the threat landscape, the rise of generative AI and AI adoption in security tools, and the importance of using multiple types of AI in cybersecurity solutions for a holistic and proactive approach to keeping your organization safe.  

The impact of AI on the threat landscape

The integration of AI and cybersecurity has brought about significant advancements across industries. However, it also introduces new security risks that challenge traditional defenses. Three major concerns with adversaries’ misuse of AI are: (1) an increase in novel social engineering attacks that are harder to detect and able to bypass traditional security tools, (2) easier access for less experienced threat actors to deliver advanced attacks at speed and scale, and (3) attacks on AI itself, including machine learning models, data corpora, and APIs or interfaces.

In the context of social engineering, AI can be used to create more convincing phishing emails, conduct advanced reconnaissance, and simulate human-like interactions to deceive victims more effectively. Generative AI tools, such as ChatGPT, are already being used by adversaries to craft sophisticated phishing emails that more aptly mimic human semantics, without spelling or grammatical errors, and include personal information pulled from internet sources such as social media profiles. And this can all be done at machine speed and scale. In fact, Darktrace researchers observed a 135% rise in ‘novel social engineering attacks’ across Darktrace / EMAIL customers in 2023, corresponding to the widespread adoption and use of ChatGPT [2].

Furthermore, these sophisticated social engineering attacks are now able to circumvent traditional security tools. Between December 21, 2023, and July 5, 2024, Darktrace / EMAIL detected 17.8 million phishing emails across the fleet, with 62% of these phishing emails successfully bypassing Domain-based Message Authentication, Reporting, and Conformance (DMARC) verification checks [2].

And while the proliferation of novel attacks fueled by AI is persisting, AI also lowers the barrier to entry for threat actors. Publicly available AI tools make it easy for adversaries to automate complex tasks that previously required advanced technical skills. Additionally, AI-driven platforms and phishing kits available on the dark web provide ready-made solutions, enabling even novice attackers to execute effective cyber campaigns with minimal effort.

The impact of adversarial use of AI on the ever-evolving threat landscape is important for organizations to understand as it fundamentally changes the way we must approach cybersecurity. However, while the intersection of cybersecurity and AI can have potentially negative implications, it is important to recognize that AI can also be used to help protect us.

A generation of generative AI in cybersecurity

When the topic of AI in cybersecurity comes up, it’s typically in reference to generative AI, which became popularized in 2023. While it does not solely encapsulate what AI cybersecurity is or what AI can do in this space, it’s important to understand what generative AI is and how it can be implemented to help organizations get ahead of today’s threats.  

Generative AI (e.g., ChatGPT or Microsoft Copilot) is a type of AI that creates new or original content. It has the capability to generate images, videos, or text based on information it learns from large datasets. These systems use advanced algorithms and deep learning techniques to understand patterns and structures within the data they are trained on, enabling them to generate outputs that are coherent, contextually relevant, and often indistinguishable from human-created content.

For security professionals, generative AI offers some valuable applications. Primarily, it’s used to transform complex security data into clear and concise summaries. By analyzing vast amounts of security logs, alerts, and technical data, it can contextualize critical information quickly and present findings in natural, comprehensible language. This makes it easier for security teams to understand critical information quickly and improves communication with non-technical stakeholders. Generative AI can also automate the creation of realistic simulations for training purposes, helping security teams prepare for various cyberattack scenarios and improve their response strategies.  

Despite its advantages, generative AI also has limitations that organizations must consider. One challenge is the potential for generating false positives, where benign activities are mistakenly flagged as threats, which can overwhelm security teams with unnecessary alerts. Moreover, implementing generative AI requires significant computational resources and expertise, which may be a barrier for some organizations. It can also be susceptible to prompt injection attacks, and there are risks of intellectual property or sensitive data being leaked when using publicly available generative AI tools. In fact, according to the MIT AI Risk Repository, there are potentially over 700 risks that need to be mitigated with the use of generative AI.


For more information on generative AI's impact on the cyber threat landscape download the Darktrace Data Sheet

Beyond the Generative AI Glass Ceiling

Generative AI has a place in cybersecurity, but security professionals are starting to recognize that it’s not the only AI organizations should be using in their security toolkit. In fact, according to Darktrace’s State of AI Cybersecurity Report, “86% of survey participants believe generative AI alone is NOT enough to stop zero-day threats.” As we look toward the future of AI in cybersecurity, it’s critical to understand that different types of AI have different strengths and use cases, and choosing technologies based on your organization’s specific needs is paramount.

There are a few types of AI used in cybersecurity that serve different functions. These include:

Supervised Machine Learning: Widely used in cybersecurity due to its ability to learn from labeled datasets. These datasets include historical threat intelligence and known attack patterns, allowing the model to recognize and predict similar threats in the future. For example, supervised machine learning can be applied to email filtering systems to identify and block phishing attempts by learning from past phishing emails. This is human-led training facilitating automation based on known information.  
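To make the supervised approach concrete, here is a toy Naive Bayes email filter in pure Python. This is a sketch for illustration only, not any vendor's implementation; the training messages and labels are invented.

```python
import math
from collections import Counter

def train(samples):
    """Train a multinomial Naive Bayes model on (text, label) pairs.

    Labels are "phish" or "ham"; this mirrors how a supervised email
    filter learns from historically labeled messages.
    """
    word_counts = {"phish": Counter(), "ham": Counter()}
    doc_counts = Counter()
    for text, label in samples:
        doc_counts[label] += 1
        word_counts[label].update(text.lower().split())
    vocab = set(word_counts["phish"]) | set(word_counts["ham"])
    return word_counts, doc_counts, vocab

def classify(model, text):
    """Return the most likely label for a new message."""
    word_counts, doc_counts, vocab = model
    total_docs = sum(doc_counts.values())
    scores = {}
    for label in doc_counts:
        # Log prior plus Laplace-smoothed log likelihood of each token.
        score = math.log(doc_counts[label] / total_docs)
        total_words = sum(word_counts[label].values())
        for word in text.lower().split():
            count = word_counts[label][word]
            score += math.log((count + 1) / (total_words + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

training_data = [
    ("verify your account password urgently", "phish"),
    ("your invoice is overdue click here", "phish"),
    ("meeting notes attached for review", "ham"),
    ("lunch on thursday works for me", "ham"),
]
model = train(training_data)
print(classify(model, "please verify your password"))  # prints "phish"
```

The key property is that the model can only recognize what resembles its labeled history, which is exactly why supervised learning excels at known threats but struggles with novel ones.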

Large Language Models (LLMs): Deep learning models trained on extensive datasets to understand and generate human-like text. LLMs can analyze vast amounts of text data, such as security logs, incident reports, and threat intelligence feeds, to identify patterns and anomalies that may indicate a cyber threat. They can also generate detailed and coherent reports on security incidents, summarizing complex data into understandable formats.

Natural Language Processing (NLP): Involves the application of computational techniques to process and understand human language. In cybersecurity, NLP can be used to analyze and interpret text-based data, such as emails, chat logs, and social media posts, to identify potential threats. For instance, NLP can help detect phishing attempts by analyzing the language used in emails for signs of deception.
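A heavily simplified sketch of this idea follows. Real NLP systems use trained language models rather than hand-written patterns; the cue lists below are purely illustrative.

```python
import re

# Hypothetical cue lists; a production NLP system would learn these
# signals from data rather than rely on hand-picked patterns.
URGENCY_CUES = [r"\burgent(ly)?\b", r"\bimmediately\b", r"\bwithin 24 hours\b"]
CREDENTIAL_CUES = [
    r"\bverify your (account|password)\b",
    r"\blog ?in\b",
    r"\bconfirm your identity\b",
]

def phishing_cues(text):
    """Return the deception cue patterns found in an email body."""
    found = []
    lowered = text.lower()
    for pattern in URGENCY_CUES + CREDENTIAL_CUES:
        if re.search(pattern, lowered):
            found.append(pattern)
    return found

msg = "Urgent: verify your account immediately or access will be suspended."
print(phishing_cues(msg))  # three cues: urgency x2 plus a credential lure
```

A message that trips several cues at once would be escalated for closer inspection, while ordinary mail ("see you at lunch") matches none.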

Unsupervised Machine Learning: Continuously learns from raw, unstructured data without predefined labels. It is particularly useful in identifying new and unknown threats by detecting anomalies that deviate from normal behavior. In cybersecurity, unsupervised learning can be applied to network traffic analysis to identify unusual patterns that may indicate a cyberattack. It can also be used in endpoint detection and response (EDR) systems to uncover previously unknown malware by recognizing deviations from typical system behavior.
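The unsupervised idea can be sketched with a simple statistical baseline: learn the normal range of a metric from unlabeled history, then flag large deviations. Production systems model many features jointly; this single-metric z-score check and its traffic numbers are purely illustrative.

```python
import statistics

def fit_baseline(byte_counts):
    """Learn mean/stdev of per-host outbound bytes from unlabeled history."""
    return statistics.mean(byte_counts), statistics.stdev(byte_counts)

def is_anomalous(baseline, value, threshold=3.0):
    """Flag values more than `threshold` standard deviations from normal."""
    mean, stdev = baseline
    return abs(value - mean) / stdev > threshold

# Unlabeled observations of hourly outbound bytes for one workstation.
history = [1200, 1350, 1100, 1280, 1400, 1150, 1320, 1240]
baseline = fit_baseline(history)
print(is_anomalous(baseline, 1300))    # False: typical traffic
print(is_anomalous(baseline, 250000))  # True: possible exfiltration
```

Because nothing here depends on labeled attack data, the same mechanism can flag threats no one has seen before, which is the core strength of the unsupervised approach.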

Figure 1: Types of AI in cybersecurity

Employing multiple types of AI in cybersecurity is essential for creating a layered and adaptive defense strategy. Each type of AI, from supervised and unsupervised machine learning to large language models (LLMs) and natural language processing (NLP), brings distinct capabilities that address different aspects of cyber threats. Supervised learning excels at recognizing known threats, while unsupervised learning uncovers new anomalies. LLMs and NLP enhance the analysis of textual data for threat detection and response and aid in understanding and mitigating social engineering attacks. By integrating these diverse AI technologies, organizations can achieve a more holistic and resilient cybersecurity framework, capable of adapting to the ever-evolving threat landscape.

A Multi-Layered AI Approach with Darktrace

AI-powered security solutions are emerging as a crucial line of defense against an AI-powered threat landscape. In fact, “Most security stakeholders (71%) are confident that AI-powered security solutions will be better able to block AI-powered threats than traditional tools.” And 96% agree that AI-powered solutions will level up their organization’s defenses.  As organizations look to adopt these tools for cybersecurity, it’s imperative to understand how to evaluate AI vendors to find the right products as well as build trust with these AI-powered solutions.  

Darktrace, a leader in AI cybersecurity since 2013, emphasizes interpretability, explainability, and user control, ensuring that our AI is understandable, customizable and transparent. Darktrace’s approach to cyber defense is rooted in the belief that the right type of AI must be applied to the right use cases. Central to this approach is Self-Learning AI, which is crucial for identifying novel cyber threats that most other tools miss. This is complemented by various AI methods, including LLMs, generative AI, and supervised machine learning, to support the Self-Learning AI.  

Darktrace focuses on where AI can best augment the people in a security team and where it can be used responsibly to have the most positive impact on their work. With a combination of these AI techniques, applied to the right use cases, Darktrace enables organizations to tailor their AI defenses to unique risks, providing extended visibility across their entire digital estates with the Darktrace ActiveAI Security Platform™.

Credit to Ed Metcalf, Senior Director of Product Marketing, AI & Innovations, and Nicole Carignan, VP of Strategic Cyber AI, for their contributions to this blog.


To learn more about Darktrace and AI in cybersecurity download the CISO’s Guide to Cyber AI here.

Download the white paper to learn how buyers should approach purchasing AI-based solutions. It includes:

  • Key steps for selecting AI cybersecurity tools
  • Questions to ask vendors and responses to expect
  • An overview of the tools available and how to find the right fit
  • Guidance on aligning AI investments with security goals and needs


January 13, 2026

Runtime Is Where Cloud Security Really Counts: The Importance of Detection, Forensics and Real-Time Architecture Awareness


Introduction: Shifting focus from prevention to runtime

Cloud security has spent the last decade focused on prevention: tightening configurations, scanning for vulnerabilities, and enforcing best practices through Cloud Native Application Protection Platforms (CNAPP). These capabilities remain essential, but they are not where cloud attacks happen.

Attacks happen at runtime: the dynamic, ephemeral, constantly changing execution layer where applications run, permissions are granted, identities act, and workloads communicate. This is also the layer where defenders traditionally have the least visibility and the least time to respond.

Today’s threat landscape demands a fundamental shift. Reducing cloud risk now requires moving beyond static posture and CNAPP-only approaches and embracing real-time behavioral detection across workloads and identities, paired with the ability to automatically preserve forensic evidence. Defenders need a continuous, real-time understanding of what “normal” looks like in their cloud environments, and AI capable of processing massive data streams to surface deviations that signal emerging attacker behavior.

Runtime: The layer where attacks happen

Runtime is the cloud in motion: containers starting and stopping, serverless functions being called, IAM roles being assumed, workloads auto-scaling, and data flowing across hundreds of services. It’s also where attackers:

  • Weaponize stolen credentials
  • Escalate privileges
  • Pivot programmatically
  • Deploy malicious compute
  • Manipulate or exfiltrate data

The challenge is complex: runtime evidence is ephemeral. Containers vanish; critical process data disappears in seconds. By the time a human analyst begins investigating, the detail required to understand and respond to the alert is often already gone. This volatility makes runtime the hardest layer to monitor, and the most important one to secure.

What Darktrace / CLOUD Brings to Runtime Defense

Darktrace / CLOUD is purpose-built for the cloud execution layer. It unifies the capabilities required to detect, contain, and understand attacks as they unfold, not hours or days later. Four elements define its value:

1. Behavioral, real-time detection

The platform learns normal activity across cloud services, identities, workloads, and data flows, then surfaces anomalies that signify real attacker behavior, even when no signature exists.

2. Automated forensic-level artifact collection

The moment Darktrace detects a threat, it can automatically capture volatile forensic evidence: disk state, memory, logs, and process context, including from ephemeral resources. This preserves the truth of what happened before workloads terminate and evidence disappears.

3. AI-led investigation

Cyber AI Analyst assembles cloud behaviors into a coherent incident story, correlating identity activity, network flows, and cloud workload behavior. Analysts no longer need to pivot across dashboards or reconstruct timelines manually.

4. Live architectural awareness

Darktrace continuously maps your cloud environment as it operates, including services, identities, connectivity, and data pathways. This real-time visibility makes anomalies clearer and investigations dramatically faster.

Together, these capabilities form a runtime-first security model.

Why CNAPP alone isn’t enough

CNAPP platforms excel at pre-deployment checks, all the way down to developer workstations: identifying misconfigurations, concerning permission combinations, vulnerable images, and risky infrastructure choices. But CNAPP’s breadth is also its limitation. CNAPP is about posture. Runtime defense is about behavior.

CNAPP tells you what could go wrong; runtime detection highlights what is going wrong right now.

CNAPP cannot preserve ephemeral evidence, correlate active behaviors across domains, or contain unfolding attacks with the precision and speed required during a real incident. Prevention remains essential, but prevention alone cannot stop an attacker who is already operating inside your cloud environment.

Real-world AWS Scenario: Why Runtime Monitoring Wins

A recent incident detected by Darktrace / CLOUD highlights how cloud compromises unfold, and why runtime visibility is non-negotiable. Each step below reflects detections that occur only when monitoring behavior in real time.

1. External Credential Use

Detection: Unusual external source for credential use: An attacker logs into a cloud account from a never-before-seen location, the earliest sign of account takeover.

2. AWS CLI Pivot

Detection: Unusual CLI activity: The attacker switches to programmatic access, issuing commands from a suspicious host to gain automation and stealth.

3. Credential Manipulation

Detection: Rare password reset: They reset or assign new passwords to establish persistence and bypass existing security controls.

4. Cloud Reconnaissance

Detection: Burst of resource discovery: The attacker enumerates buckets, roles, and services to map high value assets and plan next steps.

5. Privilege Escalation

Detection: Anomalous IAM update: Unauthorized policy updates or role changes grant the attacker elevated access or a backdoor.

6. Malicious Compute Deployment

Detection: Unusual EC2/Lambda/ECS creation: The attacker deploys compute resources for mining, lateral movement, or staging further tools.

7. Data Access or Tampering

Detection: Unusual S3 modifications: They alter S3 permissions or objects, often a prelude to data exfiltration or corruption.

Only some of these actions would appear in a posture scan, and then only after the fact. Every one of these runtime detections is possible only through real-time behavioral monitoring while the attack is in progress.
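As a rough illustration of the first detection above (credential use from a never-before-seen source), a defender can baseline which sources each principal normally uses and flag novelty. The principal and IP values below are illustrative, not a real CloudTrail schema binding, and this is a sketch of the general technique rather than Darktrace's implementation.

```python
from collections import defaultdict

class CredentialBaseline:
    """Track which source IPs each cloud principal normally uses."""

    def __init__(self):
        self.seen = defaultdict(set)

    def observe(self, principal, source_ip):
        """Record an event; return True if the source is new for this principal."""
        novel = source_ip not in self.seen[principal]
        self.seen[principal].add(source_ip)
        return novel

baseline = CredentialBaseline()
# Warm-up period: learn normal access patterns from historical events.
for _ in range(30):
    baseline.observe("role/deploy", "203.0.113.10")

print(baseline.observe("role/deploy", "203.0.113.10"))   # False: known source
print(baseline.observe("role/deploy", "198.51.100.77"))  # True: never-before-seen
```

The same observe-and-compare pattern generalizes to the later steps: unusual CLI user agents, rare IAM policy updates, or bursts of discovery calls are all deviations from a learned per-entity baseline.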

The future of cloud security Is runtime-first

Cloud defense can no longer revolve solely around prevention. Modern attacks unfold in runtime, across a fast-changing mesh of workloads, services, and, critically, identities. To reduce risk, organizations must be able to detect, understand, and contain malicious activity as it happens, before ephemeral evidence disappears and before attackers pivot across identity layers.

Darktrace / CLOUD delivers this shift by turning runtime, the most volatile and consequential layer in the cloud, into a fully defensible control point through unified visibility across behavior, workloads, and identities. It does this by providing:

  • Real-time behavior detection across workloads and identity activity
  • Autonomous response actions for rapid containment
  • Automated forensic-level artifact preservation the moment events occur
  • AI-driven investigation that separates weak signals from true attacker patterns
  • Live cloud environment insight to understand context and impact instantly

Cloud security must evolve from securing what might go wrong to continuously understanding what is happening: in runtime, across identities, and at the speed attackers operate. Unifying runtime and identity visibility is how defenders regain the advantage.

[related-resource]

About the author
Adam Stevens
Senior Director of Product, Cloud | Darktrace


January 12, 2026

Maduro Arrest Used as a Lure to Deliver Backdoor


Introduction

Threat actors frequently exploit ongoing world events to trick users into opening and executing malicious files. Darktrace security researchers recently identified a threat group using reports around the arrest of Venezuelan President Nicolás Maduro on January 3, 2026, as a lure to deliver backdoor malware.

Technical Analysis

While the exact initial access method is unknown, it is likely that a spear-phishing email was sent to victims, containing a zip archive titled “US now deciding what’s next for Venezuela.zip”. This file included an executable named “Maduro to be taken to New York.exe” and a dynamic-link library (DLL), “kugou.dll”.  

The binary “Maduro to be taken to New York.exe” is a legitimate binary (albeit with an expired signature) related to KuGou, a Chinese streaming platform. Its function is to load the DLL “kugou.dll” via the DLL search order. In this instance, the expected DLL has been replaced with a malicious one of the same name, which the legitimate binary then loads.

Figure 1: DLL called with LoadLibraryW.

Once the DLL is executed, the directory C:\ProgramData\Technology360NB is created, and the DLL is copied into it along with the executable, which is renamed “DataTechnology.exe”. A registry key, “HKCU\Software\Microsoft\Windows\CurrentVersion\Run\Lite360”, is created for persistence, running “DataTechnology.exe --DATA” at logon.

Figure 2. Registry key added for persistence.
Figure 3: Folder “Technology360NB” created.
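Defenders can hunt for this style of persistence by reviewing Run-key entries whose targets live in commonly abused directories. Below is a minimal sketch over exported registry data; the entries and directory list are illustrative (the Lite360 entry mirrors the mechanism described above), not output from a real triage tool.

```python
# Hypothetical exported Run-key entries, e.g. from a triage collection;
# the Lite360 entry mirrors the persistence mechanism described above.
run_entries = {
    r"HKCU\Software\Microsoft\Windows\CurrentVersion\Run\OneDrive":
        r'"C:\Users\user\AppData\Local\Microsoft\OneDrive\OneDrive.exe"',
    r"HKCU\Software\Microsoft\Windows\CurrentVersion\Run\Lite360":
        r'"C:\ProgramData\Technology360NB\DataTechnology.exe" --DATA',
}

# Directories frequently abused for malware staging (illustrative list).
SUSPICIOUS_DIRS = (r"c:\programdata", r"c:\users\public", r"\appdata\local\temp")

def flag_suspicious(entries):
    """Return Run-key paths whose target lives in a commonly abused directory."""
    flagged = []
    for key, command in entries.items():
        if any(d in command.lower() for d in SUSPICIOUS_DIRS):
            flagged.append(key)
    return flagged

print(flag_suspicious(run_entries))  # flags only the Lite360 entry
```

On a live Windows host the same check could be driven by the `winreg` module, but operating on exported data keeps the triage logic portable and auditable.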

During execution, a dialog box appears with the caption “Please restart your computer and try again, or contact the original author.”

Figure 4. Message box prompting user to restart.

If the user restarts as prompted, the malware runs from the registry key with the --DATA argument; if the user does not, a forced restart is triggered. Once the system restarts, the malware begins periodic TLS connections to the command-and-control (C2) server 172.81.60[.]97 on port 443. While the encrypted traffic prevents direct inspection of commands or data, the regular beaconing and response traffic strongly imply that the malware can poll a remote server for instructions, configuration, or tasking.
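Regular beaconing like this can be surfaced from network metadata alone, even when payloads are encrypted, by measuring how regular the connection intervals are. The sketch below is illustrative; the timestamps and jitter threshold are invented, and real detection systems combine many such weak signals.

```python
import statistics

def looks_like_beaconing(timestamps, max_jitter=0.1):
    """Heuristic: near-constant gaps between connections suggest C2 beaconing.

    `max_jitter` is the allowed coefficient of variation (stdev / mean)
    of the inter-connection gaps; ordinary traffic is far burstier.
    """
    if len(timestamps) < 4:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean = statistics.mean(gaps)
    if mean <= 0:
        return False
    return statistics.stdev(gaps) / mean < max_jitter

# Connections every ~60 s with slight jitter vs. ordinary browsing.
beacon = [0, 60, 121, 180, 241, 300]
browsing = [0, 3, 45, 47, 300, 310]
print(looks_like_beaconing(beacon))    # True
print(looks_like_beaconing(browsing))  # False
```

This is why the blog can characterize the traffic as polling behavior despite TLS: the timing pattern itself is the indicator.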

Conclusion

Threat groups have long used geopolitical issues and other high-profile events to make malicious content appear more credible or urgent. Since the onset of the war in Ukraine, organizations have been repeatedly targeted with spear-phishing emails using subject lines related to the ongoing conflict, including references to prisoners of war [1]. Similarly, the Chinese threat group Mustang Panda frequently uses this tactic to deploy backdoors, using lures related to the Ukrainian war, conventions on Tibet [2], the South China Sea [3], and Taiwan [4].  

The activity described in this blog shares similarities with previous Mustang Panda campaigns, including the use of a current-events-themed archive, a directory created in ProgramData containing a legitimate executable used to side-load a malicious DLL, and Run registry keys used for persistence. While there is an overlap in tactics, techniques, and procedures (TTPs), there is insufficient information available to confidently attribute this activity to a specific threat group. Users should remain vigilant, especially when opening email attachments.

Credit to Tara Gould (Malware Research Lead)
Edited by Ryan Traill (Analyst Content Lead)

Indicators of Compromise (IoCs)

172.81.60[.]97 - C2 server
8f81ce8ca6cdbc7d7eb10f4da5f470c6 - US now deciding what's next for Venezuela.zip
722bcd4b14aac3395f8a073050b9a578 - Maduro to be taken to New York.exe
aea6f6edbbbb0ab0f22568dcb503d731 - kugou.dll

References

[1] https://cert.gov.ua/article/6280422  

[2] https://www.ibm.com/think/x-force/hive0154-mustang-panda-shifts-focus-tibetan-community-deploy-pubload-backdoor

[3] https://www.ibm.com/think/x-force/hive0154-targeting-us-philippines-pakistan-taiwan

[4] https://www.ibm.com/think/x-force/hive0154-targeting-us-philippines-pakistan-taiwan

About the author
Tara Gould
Malware Research Lead