November 25, 2024

Why Artificial Intelligence is the Future of Cybersecurity

This blog explores the impact of AI on the threat landscape, the benefits of AI in cybersecurity, and the role it plays in enhancing security practices and tools.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Brittany Woodsmall
Product Marketing Manager, AI

Introduction: AI & Cybersecurity

As artificial intelligence (AI) becomes more commonplace, it's no surprise that threat actors are adopting AI in their attacks at an accelerated pace. AI augments complex tasks such as spear phishing, deepfakes, polymorphic malware generation, and advanced persistent threat (APT) campaigns, significantly enhancing the sophistication and scale of their operations. This has put security professionals in a reactive state, struggling to keep pace with the proliferation of threats.

As AI reshapes the future of cyber threats, defenders are also looking to integrate AI technologies into their security stack. Adopting AI-powered solutions in cybersecurity enables security teams to detect and respond to these advanced threats more quickly and accurately, as well as automate traditionally manual and routine tasks. According to research done by Darktrace in the 2024 State of AI Cybersecurity Report, improving threat detection, identifying exploitable vulnerabilities, and automating low-level security tasks were the top three ways practitioners saw AI enhancing their security team's capabilities [1], underscoring the wide-ranging capabilities of AI in cyber.

In this blog, we will discuss how AI has impacted the threat landscape, the rise of generative AI and AI adoption in security tools, and the importance of using multiple types of AI in cybersecurity solutions for a holistic and proactive approach to keeping your organization safe.  

The impact of AI on the threat landscape

The integration of AI and cybersecurity has brought about significant advancements across industries. However, it also introduces new security risks that challenge traditional defenses. Three major concerns with the misuse of AI by adversaries are: (1) an increase in novel social engineering attacks that are harder to detect and able to bypass traditional security tools, (2) easier access for less experienced threat actors, who can now deliver advanced attacks at speed and scale, and (3) attacks on AI itself, including machine learning models, data corpora, and APIs or interfaces.

In the context of social engineering, AI can be used to create more convincing phishing emails, conduct advanced reconnaissance, and simulate human-like interactions to deceive victims more effectively. Generative AI tools such as ChatGPT are already being used by adversaries to craft sophisticated phishing emails that more aptly mimic human semantics, free of spelling and grammatical errors, and include personal information pulled from internet sources such as social media profiles. And this can all be done at machine speed and scale. In fact, Darktrace researchers observed a 135% rise in 'novel social engineering attacks' across Darktrace / EMAIL customers in 2023, corresponding to the widespread adoption and use of ChatGPT [2].

Furthermore, these sophisticated social engineering attacks are now able to circumvent traditional security tools. Between December 21, 2023 and July 5, 2024, Darktrace / EMAIL detected 17.8 million phishing emails across the fleet, with 62% of these phishing emails successfully bypassing Domain-based Message Authentication, Reporting, and Conformance (DMARC) verification checks [2].
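As background on what those verification checks consult: a domain publishes its DMARC policy as a DNS TXT record at `_dmarc.<domain>`, and receivers compare a message's authenticated domains against that policy. The sketch below parses such a record into its tags; the record string and domain are hypothetical examples, not drawn from the incident data.

```python
def parse_dmarc(record: str) -> dict:
    """Parse a DMARC TXT record (RFC 7489) into a dict of tag/value pairs."""
    if not record.strip().lower().startswith("v=dmarc1"):
        raise ValueError("not a DMARC record")
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if not part:
            continue
        key, _, value = part.partition("=")
        tags[key.strip().lower()] = value.strip()
    return tags

# Hypothetical record as it might be published at _dmarc.example.com
policy = parse_dmarc("v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com; pct=100")
print(policy["p"])  # quarantine
```

The `p` tag is the receiver's instruction for failing mail (`none`, `quarantine`, or `reject`); a bypassed check means a message was delivered despite, or in the absence of, such a policy.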

And while the proliferation of novel attacks fueled by AI is persisting, AI also lowers the barrier to entry for threat actors. Publicly available AI tools make it easy for adversaries to automate complex tasks that previously required advanced technical skills. Additionally, AI-driven platforms and phishing kits available on the dark web provide ready-made solutions, enabling even novice attackers to execute effective cyber campaigns with minimal effort.

The impact of adversarial use of AI on the ever-evolving threat landscape is important for organizations to understand as it fundamentally changes the way we must approach cybersecurity. However, while the intersection of cybersecurity and AI can have potentially negative implications, it is important to recognize that AI can also be used to help protect us.

A generation of generative AI in cybersecurity

When the topic of AI in cybersecurity comes up, it's typically in reference to generative AI, which was popularized in 2023. While generative AI does not encapsulate everything AI is or can do in this space, it's important to understand what it is and how it can be implemented to help organizations get ahead of today's threats.

Generative AI (e.g., ChatGPT or Microsoft Copilot) is a type of AI that creates new or original content. It has the capability to generate images, videos, or text based on information it learns from large datasets. These systems use advanced algorithms and deep learning techniques to understand patterns and structures within the data they are trained on, enabling them to generate outputs that are coherent, contextually relevant, and often indistinguishable from human-created content.

For security professionals, generative AI offers some valuable applications. Primarily, it’s used to transform complex security data into clear and concise summaries. By analyzing vast amounts of security logs, alerts, and technical data, it can contextualize critical information quickly and present findings in natural, comprehensible language. This makes it easier for security teams to understand critical information quickly and improves communication with non-technical stakeholders. Generative AI can also automate the creation of realistic simulations for training purposes, helping security teams prepare for various cyberattack scenarios and improve their response strategies.  

Despite its advantages, generative AI also has limitations that organizations must consider. One challenge is the potential for generating false positives, where benign activities are mistakenly flagged as threats, which can overwhelm security teams with unnecessary alerts. Moreover, implementing generative AI requires significant computational resources and expertise, which may be a barrier for some organizations. It can also be susceptible to prompt injection attacks, and there are risks of intellectual property or sensitive data being leaked when using publicly available generative AI tools. In fact, according to the MIT AI Risk Repository, there are potentially over 700 risks that need to be mitigated with the use of generative AI.


For more information on generative AI's impact on the cyber threat landscape download the Darktrace Data Sheet

Beyond the Generative AI Glass Ceiling

Generative AI has a place in cybersecurity, but security professionals are starting to recognize that it's not the only AI organizations should be using in their security toolkit. In fact, according to Darktrace's State of AI Cybersecurity Report, "86% of survey participants believe generative AI alone is NOT enough to stop zero-day threats." As we look toward the future of AI in cybersecurity, it's critical to understand that different types of AI have different strengths and use cases, and choosing technologies based on your organization's specific needs is paramount.

There are a few types of AI used in cybersecurity that serve different functions. These include:

Supervised Machine Learning: Widely used in cybersecurity due to its ability to learn from labeled datasets. These datasets include historical threat intelligence and known attack patterns, allowing the model to recognize and predict similar threats in the future. For example, supervised machine learning can be applied to email filtering systems to identify and block phishing attempts by learning from past phishing emails. This is human-led training facilitating automation based on known information.  

Large Language Models (LLMs): Deep learning models trained on extensive datasets to understand and generate human-like text. LLMs can analyze vast amounts of text data, such as security logs, incident reports, and threat intelligence feeds, to identify patterns and anomalies that may indicate a cyber threat. They can also generate detailed and coherent reports on security incidents, summarizing complex data into understandable formats.

Natural Language Processing (NLP): Involves the application of computational techniques to process and understand human language. In cybersecurity, NLP can be used to analyze and interpret text-based data, such as emails, chat logs, and social media posts, to identify potential threats. For instance, NLP can help detect phishing attempts by analyzing the language used in emails for signs of deception.

Unsupervised Machine Learning: Continuously learns from raw, unstructured data without predefined labels. It is particularly useful in identifying new and unknown threats by detecting anomalies that deviate from normal behavior. In cybersecurity, unsupervised learning can be applied to network traffic analysis to identify unusual patterns that may indicate a cyberattack. It can also be used in endpoint detection and response (EDR) systems to uncover previously unknown malware by recognizing deviations from typical system behavior.
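To make the baseline-of-normal idea behind unsupervised detection concrete, the sketch below flags values that deviate sharply from a device's typical behavior using a median/MAD (median absolute deviation) baseline. The traffic figures are synthetic, and a real system would model many more features than a single byte count; this is only an illustration of the principle.

```python
from statistics import median

def robust_anomalies(samples, k=10.0):
    """Flag values whose deviation from the median exceeds k times the
    median absolute deviation (MAD) -- a simple, robust 'baseline of normal'
    that is not skewed by the outliers it is trying to find."""
    med = median(samples)
    mad = median(abs(x - med) for x in samples)
    if mad == 0:  # no spread at all: nothing can be called anomalous
        return []
    return [x for x in samples if abs(x - med) / mad > k]

# Hypothetical hourly outbound-byte counts for one device; the final value
# simulates an exfiltration-like spike that no labeled dataset predicted.
traffic = [980, 985, 990, 995, 1000, 1005, 1010, 1015, 50000]
print(robust_anomalies(traffic))  # [50000]
```

Note that nothing here required a labeled example of "attack traffic": the spike is flagged purely because it departs from the device's own learned norm, which is the property that lets unsupervised methods surface previously unseen threats.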

Figure 1: Types of AI in cybersecurity

Employing multiple types of AI in cybersecurity is essential for creating a layered and adaptive defense strategy. Each type of AI, from supervised and unsupervised machine learning to large language models (LLMs) and natural language processing (NLP), brings distinct capabilities that address different aspects of cyber threats. Supervised learning excels at recognizing known threats, while unsupervised learning uncovers new anomalies. LLMs and NLP enhance the analysis of textual data for threat detection and response and aid in understanding and mitigating social engineering attacks. By integrating these diverse AI technologies, organizations can achieve a more holistic and resilient cybersecurity framework, capable of adapting to the ever-evolving threat landscape.

A Multi-Layered AI Approach with Darktrace

AI-powered security solutions are emerging as a crucial line of defense against an AI-powered threat landscape. In fact, “Most security stakeholders (71%) are confident that AI-powered security solutions will be better able to block AI-powered threats than traditional tools.” And 96% agree that AI-powered solutions will level up their organization’s defenses.  As organizations look to adopt these tools for cybersecurity, it’s imperative to understand how to evaluate AI vendors to find the right products as well as build trust with these AI-powered solutions.  

Darktrace, a leader in AI cybersecurity since 2013, emphasizes interpretability, explainability, and user control, ensuring that our AI is understandable, customizable and transparent. Darktrace’s approach to cyber defense is rooted in the belief that the right type of AI must be applied to the right use cases. Central to this approach is Self-Learning AI, which is crucial for identifying novel cyber threats that most other tools miss. This is complemented by various AI methods, including LLMs, generative AI, and supervised machine learning, to support the Self-Learning AI.  

Darktrace focuses on where AI can best augment the people in a security team and where it can be used responsibly to have the most positive impact on their work. With a combination of these AI techniques, applied to the right use cases, Darktrace enables organizations to tailor their AI defenses to unique risks, providing extended visibility across their entire digital estates with the Darktrace ActiveAI Security Platform™.

Credit to Ed Metcalf (Senior Director of Product Marketing, AI & Innovations) and Nicole Carignan (VP of Strategic Cyber AI) for their contributions to this blog.


To learn more about Darktrace and AI in cybersecurity download the CISO’s Guide to Cyber AI here.

Download the white paper to learn how buyers should approach purchasing AI-based solutions. It includes:

  • Key steps for selecting AI cybersecurity tools
  • Questions to ask vendors and the responses to expect
  • An overview of the tools available and how to find the right fit
  • Guidance on aligning AI investments with security goals and needs

Network
April 21, 2026

How a Compromised eScan Update Enabled Multi‑Stage Malware and Blockchain C2


The rise of supply chain attacks

In recent years, the abuse of trusted software has become increasingly common, with supply chain compromises emerging as one of the fastest growing vectors for cyber intrusions. As highlighted in Darktrace's Annual Threat Report 2026, attackers and state actors continue to find significant value in gaining access to networks through compromised trusted links, third-party tools, or legitimate software. In January 2026, a supply chain compromise affecting MicroWorld Technologies' eScan antivirus product was reported, with malicious updates distributed to customers through the legitimate update infrastructure. This, in turn, resulted in multi-stage loader malware being deployed on compromised devices [1][2].

An overview of eScan exploitation

According to eScan's official threat advisory, unauthorized access to a regional update server resulted in an "incorrect file placed in the update distribution path" [3]. Customers associated with the affected update servers who downloaded the update during a two-hour window on January 20 were impacted, with affected Windows devices subsequently experiencing various errors related to update functions and notifications [3].

While eScan did not specify which regional update servers were affected by the malicious update, all impacted Darktrace customer environments were located in the Europe, Middle East, and Africa (EMEA) region.

External research reported that a malicious 32-bit executable file, "Reload.exe", was first installed on affected devices, which then dropped a 64-bit downloader, "CONSCTLX.exe". This downloader establishes persistence by creating scheduled tasks such as "CorelDefrag", which are responsible for executing PowerShell scripts. It then evades detection by tampering with the Windows HOSTS file and the eScan registry to block future remote updates intended for remediation. Additional payloads are then downloaded from its command-and-control (C2) server [1].
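As a simple illustration of the HOSTS-tampering technique described above, the sketch below scans hosts-file content for entries that override DNS resolution for update domains a defender wants to protect. The domain names here are hypothetical placeholders, not the actual eScan update hosts.

```python
def suspicious_hosts_entries(hosts_text, protected_domains):
    """Return (ip, hostname) pairs in a Windows HOSTS file that override
    DNS for domains an attacker might want to sinkhole (e.g. AV update hosts).
    Pinning such a host to 127.0.0.1 silently breaks remediation updates."""
    findings = []
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments
        if not line:
            continue
        parts = line.split()
        ip, names = parts[0], parts[1:]
        for name in names:
            if any(name == d or name.endswith("." + d) for d in protected_domains):
                findings.append((ip, name))
    return findings

# Hypothetical HOSTS content with a malware-added override of an update host
hosts = """
127.0.0.1 localhost
127.0.0.1 update.escanav.example   # added by malware
"""
print(suspicious_hosts_entries(hosts, ["escanav.example"]))
```

On a real Windows endpoint the file to inspect is `C:\Windows\System32\drivers\etc\hosts`; the list of protected domains would come from your own software inventory.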

Darktrace’s coverage of eScan exploitation

Initial Access and Blockchain as multi-distributed C2 Infrastructure

On January 20, the same day as the aforementioned two‑hour exploit window, Darktrace observed multiple devices across affected networks downloading .dlz package files from eScan update servers, followed by connections to an anomalous endpoint, vhs.delrosal[.]net, which belongs to the attackers’ C2 infrastructure.

The endpoint contained a self‑signed SSL certificate with the string “O=Internet Widgits Pty Ltd, ST=SomeState, C=AU”, a default placeholder commonly used in SSL/TLS certificates for testing and development environments, as well as in malicious C2 infrastructure [4].
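Those subject fields are the defaults OpenSSL offers when a certificate request is generated without filling in the prompts, so their appearance in a live server certificate can be flagged heuristically. The sketch below is an illustrative check on a subject string, not Darktrace's detection logic.

```python
# Default values left behind when `openssl req` prompts are not answered;
# seeing them on a production endpoint is a weak but useful anomaly signal.
PLACEHOLDER_MARKERS = ("Internet Widgits Pty Ltd", "SomeState", "Some-State")

def is_placeholder_subject(subject: str) -> bool:
    """Heuristic: does a certificate subject reuse OpenSSL's interactive
    defaults, as observed on the campaign's C2 endpoint?"""
    return any(marker in subject for marker in PLACEHOLDER_MARKERS)

# Subject string reported on the C2 endpoint's self-signed certificate
subject = "O=Internet Widgits Pty Ltd, ST=SomeState, C=AU"
print(is_placeholder_subject(subject))  # True
```

On its own this would over-alert (test servers legitimately use these defaults), which is why the article treats it as one signal among several rather than a verdict.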

Utilizing a multi‑distributed C2 infrastructure, the attackers also leveraged domains linked with the Solana open‑source blockchain for C2 purposes, namely “.sol”. These domains were human‑readable names that act as aliases for cryptocurrency wallet addresses. As browsers do not natively resolve .sol domains, the Solana Naming System (formerly known as Bonfida, an independent contributor within the Solana ecosystem) provides a proxy service, through endpoints such as sol-domain[.]org, to enable browser access.

Darktrace observed devices connecting to blackice.sol-domain[.]org, indicating that attackers were likely using this proxy to reach a .sol domain for C2 activity. Given this behavior, it is likely that the attackers leveraged .sol domains as a dead drop resolver, a C2 technique in which threat actors host information on a public and legitimate service, such as a blockchain. Additional proxy resolver endpoints, such as sns-resolver.bonfida.workers[.]dev, were also observed.

Solana transactions are transparent, allowing all activity to be viewed publicly. When Darktrace analysts examined the transactions associated with blackice[.]sol, they observed that the earliest records dated November 7, 2025, which coincides with the creation date of the known C2 endpoint vhs[.]delrosal[.]net as shown in WHOIS Lookup information [4][5].

Figure 1: WHOIS Lookup records for the C2 endpoint vhs[.]delrosal[.]net.
Figure 2: Earliest observed transaction record for blackice[.]sol on public ledgers.

Subsequent instructions found within the transactions contained strings such as “CNAME= vhs[.]delrosal[.]net”, indicating attempts to direct the device toward the malicious endpoint. A more recent transaction recorded on January 28 included strings such as “hxxps://96.9.125[.]243/i;code=302”, suggesting an effort to change C2 endpoints. Darktrace observed multiple alerts triggered for these endpoints across affected devices.
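To illustrate how such dead-drop payloads can be triaged, the sketch below extracts CNAME redirect targets and defanged URLs from a transaction instruction string modeled on the ones quoted above. The parsing rules are illustrative assumptions, not a documented payload format.

```python
import re

# Defanged indicators keep the [.] notation so they stay non-clickable
CNAME_RE = re.compile(r"CNAME=\s*([A-Za-z0-9\[\]\.\-]+)")
URL_RE = re.compile(r"hxxps?://[^\s;]+")

def extract_iocs(memo: str) -> dict:
    """Pull CNAME redirect targets and (defanged) URLs out of a
    dead-drop-resolver instruction string."""
    return {
        "cnames": CNAME_RE.findall(memo),
        "urls": URL_RE.findall(memo),
    }

# String modeled on the transaction instructions described in the article
memo = "CNAME= vhs[.]delrosal[.]net;hxxps://96.9.125[.]243/i;code=302"
print(extract_iocs(memo))
```

Because the ledger is public and append-only, re-running an extractor like this over new transactions is one way defenders can track the attackers' C2 rotations after the fact.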

Similar blockchain‑related endpoints, such as “tumama.hns[.]to”, were also observed in C2 activities. The hns[.]to service allows web browsers to access websites registered on Handshake, a decentralized blockchain‑based framework designed to replace centralized authorities and domain registries for top‑level domains. This shift toward decentralized, blockchain‑based infrastructure likely reflects increased efforts by attackers to evade detection.

In outgoing connections to these malicious endpoints across affected networks, Darktrace / NETWORK recognized that the activity was 100% rare and anomalous for both the devices and the wider networks, likely indicative of malicious beaconing, regardless of the underlying trusted infrastructure. In addition to generating multiple model alerts to capture this malicious activity across affected networks, Darktrace’s Cyber AI Analyst was able to compile these separate events into broader incidents that summarized the entire attack chain, allowing customers’ security teams to investigate and remediate more efficiently. Moreover, in customer environments where Darktrace’s Autonomous Response capability was enabled, Darktrace took swift action to contain the attack by blocking beaconing connections to the malicious endpoints, even when those endpoints were associated with seemingly trustworthy services.
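The rarity signal described here can be sketched in miniature: score each external endpoint by the fraction of internal devices that have ever contacted it, so that single-device destinations stand out regardless of how trustworthy the underlying service appears. This is a toy model of the idea with a synthetic connection log; Darktrace's actual scoring is far richer.

```python
from collections import defaultdict

def endpoint_rarity(connections):
    """Score each external endpoint by the fraction of observed devices
    that have contacted it; scores near zero mark network-wide rarities."""
    seen = defaultdict(set)
    devices = set()
    for device, endpoint in connections:
        devices.add(device)
        seen[endpoint].add(device)
    return {ep: len(devs) / len(devices) for ep, devs in seen.items()}

# Hypothetical log: three devices hit a common CDN, one hits the C2 proxy
log = [
    ("host-a", "cdn.example.net"),
    ("host-b", "cdn.example.net"),
    ("host-c", "cdn.example.net"),
    ("host-c", "blackice.sol-domain.org"),
]
scores = endpoint_rarity(log)
print(scores["blackice.sol-domain.org"])  # one device in three
```

The point is that rarity is a property of the network's own behavior, not of any threat feed, which is why it still fires when the C2 hides behind a legitimate blockchain proxy.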

Conclusion

Attacks targeting trusted relationships continue to be a popular strategy among threat actors. Activities linked to trusted or widely deployed software are often unintentionally whitelisted by existing security solutions and gateways. Darktrace observed multiple devices becoming impacted within a very short period, likely because tools such as antivirus software are typically mass‑deployed across numerous endpoints. As a result, a single compromised delivery mechanism can greatly expand the attack surface.

Attackers are also becoming increasingly creative in developing resilient C2 infrastructure and exploiting legitimate services to evade detection. Defenders are therefore encouraged to closely monitor anomalous connections and file downloads. Darktrace’s ability to detect unusual activity amidst ever‑changing tactics and indicators of compromise (IoCs) helps organizations maintain a proactive and resilient defense posture against emerging threats.

Credit to Joanna Ng (Associate Principal Cybersecurity Analyst), Min Kim (Associate Principal Cybersecurity Analyst), and Tara Gould (Malware Researcher Lead).

Edited by Ryan Traill (Content Manager)

Appendices

Darktrace Model Detections

  • Anomalous File / Zip or Gzip from Rare External Location
  • Anomalous Connection / Suspicious Self-Signed SSL
  • Anomalous Connection / Rare External SSL Self-Signed
  • Anomalous Connection / Suspicious Expired SSL
  • Anomalous Server Activity / Anomalous External Activity from Critical Network Device

List of Indicators of Compromise (IoCs)

  • vhs[.]delrosal[.]net – C2 server
  • tumama[.]hns[.]to – C2 server
  • blackice.sol-domain[.]org – C2 server
  • 96.9.125[.]243 – C2 Server

MITRE ATT&CK Mapping

  • T1071.001 – Application Layer Protocol: Web Protocols
  • T1588.001 – Obtain Capabilities: Malware
  • T1102.001 – Web Service: Dead Drop Resolver
  • T1195 – Supply Chain Compromise

References

[1] https://www.morphisec.com/blog/critical-escan-threat-bulletin/

[2] https://www.bleepingcomputer.com/news/security/escan-confirms-update-server-breached-to-push-malicious-update/

[3] hxxps://download1.mwti.net/documents/Advisory/eScan_Security_Advisory_2026[.]pdf

[4] https://www.virustotal.com/gui/domain/delrosal.net

[5] hxxps://explorer.solana[.]com/address/2wFAbYHNw4ewBHBJzmDgDhCXYoFjJnpbdmeWjZvevaVv

About the author
Joanna Ng
Associate Principal Analyst

April 17, 2026

Why Behavioral AI Is the Answer to Mythos


How AI is breaking the patch-and-prevent security model

The business world was upended last week by the news that Anthropic has developed a powerful new AI model, Claude Mythos, which poses unprecedented risk because of its ability to expose flaws in IT systems.  

Whether it’s Mythos or OpenAI’s GPT-5.4-Cyber, which was just announced on Tuesday, supercharged AI models in the hands of hackers will allow them to carry out attacks at machine speed, much faster than most businesses can stop them.  

This news underscores a stark reality for all leaders: Patching holes alone is not a sufficient control against modern cyberattacks. You must assume that your software is already vulnerable right now. And while LLMs are very good at spotting vulnerabilities, they’re pretty bad at reliably patching them.

Project Glasswing members say it could take months or years for patches to be applied. While that work is done, enterprises must be protected against zero-day attacks: security holes that are still undiscovered.

Most cybersecurity strategies today are built like a daily multivitamin: broad, preventative, and designed to keep the system generally healthy over time. Patch regularly. Update software. Reduce known vulnerabilities. It’s necessary, disciplined, and foundational. But it’s also built for a world where the risks are well known and defined, cycles are predictable, and exposure unfolds at a manageable pace.

What happens when that model no longer holds?

The AI cyber advantage: Behavioral AI

The vulnerabilities exposed by AI systems like Mythos aren’t the well-understood risks your “multivitamin” was designed to address. They are transient, fast-emerging entry points that exist just long enough to be exploited.

In that environment, prevention alone isn’t enough. You don’t need more vitamins—you need a painkiller. The future of cybersecurity won’t be defined by how well you maintain baseline health. It will be defined by how quickly you respond when something breaks and every second counts.

That’s why behavioral AI gives businesses a durable cyber advantage. Rather than trying to figure out what the attacker looks like, it learns what “normal” looks like across the digital ecosystem of each individual business.  

By understanding the self, or what's normal for the organization, it can then spot deviations from normal that are actually early-stage attacks.

The Darktrace approach to cybersecurity

At Darktrace, we’ve been defending our 10,000 customers using behavioral AI cybersecurity developed in our AI Research Centre in Cambridge, U.K.

Darktrace was built on the understanding that attacks do not arrive neatly labeled, and that the most damaging threats often emerge before signatures, indicators, or public disclosures can catch up.  

Our AI algorithms learn in real time from your business data what's normal for every person, every asset, and the flows of data within your organization. By continuously understanding "normal" across your entire digital ecosystem, Darktrace identifies and contains threats emerging from unknown vulnerabilities and compromised supply chain dependencies, autonomously curtailing attacks at machine speed.

Security for novel threats

Darktrace is built for a world where AI is not just accelerating attacks, but fundamentally reshaping how they originate. What makes our AI so unique is that it's proven time and again to identify cyber threats before public vulnerability disclosures, such as critical Ivanti vulnerabilities in 2025 and SAP NetWeaver exploitations tied to nation-state threat actors.  

As AI reshapes how vulnerabilities are found and exploited, cybersecurity must be anchored in something more durable than a list of known flaws. It requires a real-time understanding of the business itself: what belongs, what does not, and what must be stopped immediately.

What leaders should do right now

The leadership priority must shift accordingly.

First, stop treating unknown vulnerabilities as an edge case. AI‑driven discovery makes them the norm. Security programs built primarily around known flaws, signatures, and threat intelligence will always lag behind an attacker that is operating in real time.

Second, insist on an understanding of what is actually normal across the business. When threats are novel, labels are useless. The earliest and most reliable signal of danger is abnormal behavior—systems, users, or data flows that suddenly depart from what is expected. If you cannot see that deviation as it happens, you are effectively blind during the most critical window.

Finally, assume that the next serious incident will occur before remediation guidance is available. Ask what happens in those first minutes and hours. The organizations that maintain resilience are not the ones waiting for disclosure cycles to catch up—they are the ones that can autonomously identify and contain emerging threats as they unfold.

This is the reality of cybersecurity in an AI‑shaped world. Patching and prevention remain important foundations, but the advantage now belongs to those who can respond instantly when the unpredictable occurs.

Behavioral AI is security designed not just for known threats, but for the ones that AI will discover next.


About the author
Ed Jennings
President and CEO