November 25, 2024

Why Artificial Intelligence is the Future of Cybersecurity

This blog explores the impact of AI on the threat landscape, the benefits of AI in cybersecurity, and the role it plays in enhancing security practices and tools.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Brittany Woodsmall
Product Marketing Manager, AI

Introduction: AI & Cybersecurity

As artificial intelligence (AI) becomes more commonplace, it is no surprise that threat actors are also adopting AI in their attacks at an accelerated pace. AI augments complex tasks such as spear-phishing, deepfakes, polymorphic malware generation, and advanced persistent threat (APT) campaigns, significantly enhancing the sophistication and scale of these operations. This has put security professionals in a reactive state, struggling to keep pace with the proliferation of threats.

As AI reshapes the future of cyber threats, defenders are also looking to integrate AI technologies into their security stack. Adopting AI-powered solutions in cybersecurity enables security teams to detect and respond to these advanced threats more quickly and accurately, as well as automate traditionally manual and routine tasks. According to research done by Darktrace in the 2024 State of AI Cybersecurity Report, improving threat detection, identifying exploitable vulnerabilities, and automating low-level security tasks were the top three ways practitioners saw AI enhancing their security team’s capabilities [1], underscoring the wide-ranging capabilities of AI in cyber.

In this blog, we will discuss how AI has impacted the threat landscape, the rise of generative AI and AI adoption in security tools, and the importance of using multiple types of AI in cybersecurity solutions for a holistic and proactive approach to keeping your organization safe.  

The impact of AI on the threat landscape

The integration of AI and cybersecurity has brought about significant advancements across industries. However, it also introduces new security risks that challenge traditional defenses. Three major concerns with the misuse of AI by adversaries are: (1) an increase in novel social engineering attacks that are harder to detect and able to bypass traditional security tools, (2) easier access for less experienced threat actors, who can now deliver advanced attacks at speed and scale, and (3) attacks on AI itself, including machine learning models, data corpora, and APIs or interfaces.

In the context of social engineering, AI can be used to create more convincing phishing emails, conduct advanced reconnaissance, and simulate human-like interactions to deceive victims more effectively. Generative AI tools, such as ChatGPT, are already being used by adversaries to craft these sophisticated phishing emails, which can more aptly mimic human semantics without spelling or grammatical errors and include personal information pulled from internet sources such as social media profiles. And this can all be done at machine speed and scale. In fact, Darktrace researchers observed a 135% rise in ‘novel social engineering attacks’ across Darktrace / EMAIL customers in 2023, corresponding to the widespread adoption and use of ChatGPT [2].

Furthermore, these sophisticated social engineering attacks are now able to circumvent traditional security tools. Between December 21, 2023 and July 5, 2024, Darktrace / EMAIL detected 17.8 million phishing emails across the fleet, with 62% of these phishing emails successfully bypassing Domain-based Message Authentication, Reporting, and Conformance (DMARC) verification checks [2].

And while the proliferation of novel attacks fueled by AI is persisting, AI also lowers the barrier to entry for threat actors. Publicly available AI tools make it easy for adversaries to automate complex tasks that previously required advanced technical skills. Additionally, AI-driven platforms and phishing kits available on the dark web provide ready-made solutions, enabling even novice attackers to execute effective cyber campaigns with minimal effort.

The impact of adversarial use of AI on the ever-evolving threat landscape is important for organizations to understand as it fundamentally changes the way we must approach cybersecurity. However, while the intersection of cybersecurity and AI can have potentially negative implications, it is important to recognize that AI can also be used to help protect us.

A generation of generative AI in cybersecurity

When the topic of AI in cybersecurity comes up, it’s typically in reference to generative AI, which became popularized in 2023. While it does not solely encapsulate what AI cybersecurity is or what AI can do in this space, it’s important to understand what generative AI is and how it can be implemented to help organizations get ahead of today’s threats.  

Generative AI (e.g., ChatGPT or Microsoft Copilot) is a type of AI that creates new or original content. It has the capability to generate images, videos, or text based on information it learns from large datasets. These systems use advanced algorithms and deep learning techniques to understand patterns and structures within the data they are trained on, enabling them to generate outputs that are coherent, contextually relevant, and often indistinguishable from human-created content.

For security professionals, generative AI offers some valuable applications. Primarily, it’s used to transform complex security data into clear and concise summaries. By analyzing vast amounts of security logs, alerts, and technical data, it can contextualize critical information quickly and present findings in natural, comprehensible language. This makes it easier for security teams to grasp what matters and improves communication with non-technical stakeholders. Generative AI can also automate the creation of realistic simulations for training purposes, helping security teams prepare for various cyberattack scenarios and improve their response strategies.

Despite its advantages, generative AI also has limitations that organizations must consider. One challenge is the potential for generating false positives, where benign activities are mistakenly flagged as threats, which can overwhelm security teams with unnecessary alerts. Moreover, implementing generative AI requires significant computational resources and expertise, which may be a barrier for some organizations. It can also be susceptible to prompt injection attacks, and there are risks of intellectual property or sensitive data being leaked when using publicly available generative AI tools. In fact, according to the MIT AI Risk Repository, there are potentially over 700 risks that need to be mitigated with the use of generative AI.


For more information on generative AI's impact on the cyber threat landscape download the Darktrace Data Sheet

Beyond the Generative AI Glass Ceiling

Generative AI has a place in cybersecurity, but security professionals are starting to recognize that it’s not the only AI organizations should be using in their security toolkit. In fact, according to Darktrace’s State of AI Cybersecurity Report, “86% of survey participants believe generative AI alone is NOT enough to stop zero-day threats.” As we look toward the future of AI in cybersecurity, it’s critical to understand that different types of AI have different strengths and use cases, and choosing technologies based on your organization’s specific needs is paramount.

There are a few types of AI used in cybersecurity that serve different functions. These include:

Supervised Machine Learning: Widely used in cybersecurity due to its ability to learn from labeled datasets. These datasets include historical threat intelligence and known attack patterns, allowing the model to recognize and predict similar threats in the future. For example, supervised machine learning can be applied to email filtering systems to identify and block phishing attempts by learning from past phishing emails. This is human-led training facilitating automation based on known information.  

Large Language Models (LLMs): Deep learning models trained on extensive datasets to understand and generate human-like text. LLMs can analyze vast amounts of text data, such as security logs, incident reports, and threat intelligence feeds, to identify patterns and anomalies that may indicate a cyber threat. They can also generate detailed and coherent reports on security incidents, summarizing complex data into understandable formats.

Natural Language Processing (NLP): Involves the application of computational techniques to process and understand human language. In cybersecurity, NLP can be used to analyze and interpret text-based data, such as emails, chat logs, and social media posts, to identify potential threats. For instance, NLP can help detect phishing attempts by analyzing the language used in emails for signs of deception.

Unsupervised Machine Learning: Continuously learns from raw, unstructured data without predefined labels. It is particularly useful in identifying new and unknown threats by detecting anomalies that deviate from normal behavior. In cybersecurity, unsupervised learning can be applied to network traffic analysis to identify unusual patterns that may indicate a cyberattack. It can also be used in endpoint detection and response (EDR) systems to uncover previously unknown malware by recognizing deviations from typical system behavior.
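The unsupervised approach described above can be sketched in a few lines: learn a per-device baseline of "normal" behavior, then flag observations that deviate from it. The example below is illustrative only (the data, function names, and three-sigma threshold are invented for the sketch, not Darktrace's actual detection logic):

```python
from statistics import mean, stdev

def build_baseline(samples):
    """Learn a per-device baseline (mean and standard deviation)
    from observed outbound-connection counts."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag a new observation that deviates more than `threshold`
    standard deviations from the learned baseline."""
    mu, sigma = baseline
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold

# Hourly outbound connection counts observed for one device
history = [42, 39, 45, 41, 40, 44, 38, 43]
baseline = build_baseline(history)

print(is_anomalous(41, baseline))   # typical hour -> False
print(is_anomalous(400, baseline))  # sudden spike worth investigating -> True
```

Note that no labels are required: the "training data" is simply whatever the device normally does, which is what makes this approach suited to previously unseen threats.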

Figure 1: Types of AI in cybersecurity

Employing multiple types of AI in cybersecurity is essential for creating a layered and adaptive defense strategy. Each type of AI, from supervised and unsupervised machine learning to large language models (LLMs) and natural language processing (NLP), brings distinct capabilities that address different aspects of cyber threats. Supervised learning excels at recognizing known threats, while unsupervised learning uncovers new anomalies. LLMs and NLP enhance the analysis of textual data for threat detection and response and aid in understanding and mitigating social engineering attacks. By integrating these diverse AI technologies, organizations can achieve a more holistic and resilient cybersecurity framework, capable of adapting to the ever-evolving threat landscape.

A Multi-Layered AI Approach with Darktrace

AI-powered security solutions are emerging as a crucial line of defense against an AI-powered threat landscape. In fact, “Most security stakeholders (71%) are confident that AI-powered security solutions will be better able to block AI-powered threats than traditional tools.” And 96% agree that AI-powered solutions will level up their organization’s defenses.  As organizations look to adopt these tools for cybersecurity, it’s imperative to understand how to evaluate AI vendors to find the right products as well as build trust with these AI-powered solutions.  

Darktrace, a leader in AI cybersecurity since 2013, emphasizes interpretability, explainability, and user control, ensuring that our AI is understandable, customizable and transparent. Darktrace’s approach to cyber defense is rooted in the belief that the right type of AI must be applied to the right use cases. Central to this approach is Self-Learning AI, which is crucial for identifying novel cyber threats that most other tools miss. This is complemented by various AI methods, including LLMs, generative AI, and supervised machine learning, to support the Self-Learning AI.  

Darktrace focuses on where AI can best augment the people in a security team and where it can be used responsibly to have the most positive impact on their work. With a combination of these AI techniques, applied to the right use cases, Darktrace enables organizations to tailor their AI defenses to unique risks, providing extended visibility across their entire digital estates with the Darktrace ActiveAI Security Platform™.

Credit to Ed Metcalf, Senior Director of Product Marketing, AI & Innovations, and Nicole Carignan, VP of Strategic Cyber AI, for their contributions to this blog.


To learn more about Darktrace and AI in cybersecurity download the CISO’s Guide to Cyber AI here.

Download the white paper to learn how buyers should approach purchasing AI-based solutions. It includes:

  • Key steps for selecting AI cybersecurity tools
  • Questions to ask vendors and responses to expect
  • An overview of available tools and how to find the right fit
  • Guidance on aligning AI investments with security goals and needs

May 14, 2026

Chinese APT Campaign Targets Entities with Updated FDMTP Backdoor


Darktrace has identified activity consistent with Chinese-nexus operations: a Twill Typhoon-linked campaign targeting customer environments, primarily within the Asia-Pacific and Japan (APJ) region.

Beginning in late September 2025, multiple affected hosts were observed making requests to domains impersonating content delivery networks (CDNs), including infrastructure masquerading as Yahoo- and Apple-affiliated services. Across these cases, Darktrace identified a consistent behavioral execution pattern: the retrieval of legitimate binaries alongside malicious Dynamic Link Libraries (DLLs), enabling sideloading and execution of a modular .NET-based Remote Access Trojan (RAT) framework.

The activity aligns with patterns described in Darktrace’s previous Chinese-nexus operations report, Crimson Echo. In this case, Darktrace observed modular intrusion chains built on legitimate software and staged payload delivery: threat actors retrieve legitimate binaries alongside configuration files and malicious DLLs to enable sideloading of a .NET-based RAT.

Observed Campaign

Across cases, the same ordered sequence appears: (1) retrieval of a legitimate executable, (2) retrieval of a matching .config file, (3) retrieval of the malicious DLL, (4) repeated DLL downloads over time, and (5) command-and-control (C2) communication. The .config file retrieves the malicious binary, while the legitimate binary provides a trusted process to run it in.
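A staged sequence like this lends itself to behavioral detection. The sketch below is a simplification (the repeated-download stage is collapsed, and the URI patterns are illustrative, not Darktrace's detection logic): it checks whether one host's HTTP requests contain the stages in order, allowing unrelated requests to be interleaved:

```python
import re

# Ordered behavioral stages; each entry is a regex matched against
# the requested URI (patterns are illustrative).
STAGES = [
    re.compile(r"\.exe$"),          # 1. legitimate executable
    re.compile(r"\.exe\.config$"),  # 2. matching .config file
    re.compile(r"\.dll$"),          # 3. DLL for sideloading
    re.compile(r"/GetCluster\?"),   # 4. C2 registration
]

def matches_chain(requests):
    """Return True if a host's requests contain the staged sequence
    in order; other requests may be interleaved between stages."""
    stage = 0
    for uri in requests:
        if stage < len(STAGES) and STAGES[stage].search(uri):
            stage += 1
    return stage == len(STAGES)

host_requests = [
    "/dfsvc.exe",
    "/favicon.ico",
    "/dfsvc.exe.config",
    "/dnscfg.dll",
    "/GetCluster?protocol=DotNet-TcpDmtp&tag=0&uid=1",
]
print(matches_chain(host_requests))  # True
```

Matching on the ordered sequence rather than on any single filename or domain is what makes this style of detection survive infrastructure rotation.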

Darktrace assesses with moderate confidence that this activity aligns with publicly reported Twill Typhoon tradecraft. The observed use of FDMTP, DLL sideloading, and overlapping infrastructure is consistent with previously observed operations, though not unique to a single actor. While initial access was not directly observed, previous Twill Typhoon campaigns have typically involved spear-phishing.

What Darktrace Observed

Since late September 2025, Darktrace has observed multiple customer environments making HTTP GET requests to infrastructure presenting as “CDN” endpoints for well-known platforms (including Yahoo and Apple lookalikes). Across cases, the affected hosts retrieved legitimate executables, then matching .config files (same base filename), then DLLs intended for sideloading. The sequencing of legitimate binary + configuration + DLL has been previously observed in campaigns linked to China-nexus threat actors.

In several cases, affected hosts also issued outbound requests to a /GetCluster endpoint, including the protocol=Dotnet-Tcpdmtp parameter. This activity was repeatedly followed by retrieval of DLL content that was subsequently used for search-order hijacking within legitimate processes.

In the September–October 2025 cases, Darktrace alerting commonly surfaced early-stage registration and C2 setup behaviors, followed by retrieval of a DLL (e.g., Client.dll) from the same external host, sometimes repeatedly over multiple days, consistent with establishing and maintaining the execution chain.

In April 2026, a finance-sector endpoint initiated a series of GET requests to yahoo-cdn[.]it[.]com, first fetching legitimate binaries (including vshost.exe and dfsvc.exe), then repeatedly retrieving associated configuration and DLL components (including dfsvc.exe.config and dnscfg.dll) over an 11-day window. Both the Visual Studio hosting (vshost.exe) and ClickOnce (dfsvc.exe) paths are used to ensure the malware can run in the targeted environment.

Technical Analysis

Initial staging and execution

While the initial access method is unknown, Darktrace security researchers identified multiple archives containing the malware.

A representative example includes a ZIP archive (“test.zip”) containing:

  • A legitimate executable: biz_render.exe (Sogou Pinyin IME)
  • A malicious DLL: browser_host.dll

The legitimate binary, biz_render.exe, is part of Sogou Pinyin, a popular Chinese Input Method Editor (IME). At runtime it loads a DLL named browser_host.dll via LoadLibraryExW. By supplying a malicious DLL with that same name alongside the executable, the actor hijacks the execution flow, sideloading the payload into the trusted biz_render.exe process.

Figure 1: Biz_render.exe loading browser_host.dll.

The legitimate binary invokes the function GetBrowserManagerInstance from the sideloaded “browser_host.dll”, which then performs XOR-based decryption of embedded strings (key 0x90) to resolve and dynamically load mscoree.dll.
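Single-byte XOR of this kind is trivially reversible, since XOR is its own inverse. A minimal sketch of the decoding step (the helper name is ours; 0x90 is the key reported in the analysis above):

```python
def xor_decode(data: bytes, key: int = 0x90) -> str:
    """Single-byte XOR decoding as used for the embedded strings
    (key 0x90 per the analysis above)."""
    return bytes(b ^ key for b in data).decode("ascii")

# Encoding "mscoree.dll" with the same key round-trips:
encoded = bytes(ord(c) ^ 0x90 for c in "mscoree.dll")
print(xor_decode(encoded))  # mscoree.dll
```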

The DLL uses the Windows Common Language Runtime (CLR) to execute managed .NET code inside the process rather than relying solely on native binaries. During execution, the loader loads a payload directly into memory as .NET assemblies, enabling an in-memory execution.

C2 Registration

A GET request is made to:

GET /GetCluster?protocol=DotNet-TcpDmtp&tag={0}&uid={1}

with the custom header:

Verify_Token: Dmtp

This returns Base64-encoded and gzip-compressed IP addresses used for subsequent communication.

Figure 2: Decoded IPs.
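Reversing that encoding is straightforward with standard-library tools. The sketch below assumes one address per line in the decompressed data, which matches the decoded output shown but is an assumption, and the addresses used here are documentation dummies:

```python
import base64
import gzip

def decode_cluster_response(body: bytes) -> list:
    """Reverse the encoding described above: Base64-decode, then
    gzip-decompress, yielding the C2 IP list (one per line assumed)."""
    raw = gzip.decompress(base64.b64decode(body))
    return raw.decode().splitlines()

# Simulated server response built the same way (dummy addresses)
payload = base64.b64encode(gzip.compress(b"203.0.113.10\n203.0.113.11"))
print(decode_cluster_response(payload))  # ['203.0.113.10', '203.0.113.11']
```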

Staged payload retrieval

Subsequent activity includes retrieval of multiple components from yahoo-cdn[.]it[.]com via the following GET requests:

  • /dfsvc.exe
  • /dnscfg.dll
  • /dfsvc.exe.config
  • /vhost.exe
  • /Microsoft.VisualStudio.HostingProcess.Utilities.Sync.dll
  • /config.etl

ClickOnce and AppDomain hijacking

dfsvc.exe is the legitimate Windows ClickOnce engine, part of the .NET Framework used for deploying and updating ClickOnce applications. It is accompanied by a legitimate dfsvc.exe.config file that stores configuration data for the application. In this instance, however, the malware replaces the legitimate dfsvc.exe.config in C:\Windows\Microsoft.NET\Framework64\v4.0.30319 with the one retrieved from the server.

Additionally, vhost.exe, the legitimate Visual Studio hosting process, is retrieved from the server, along with “Microsoft.VisualStudio.HostingProcess.Utilities.Sync.dll” and “config.etl”. The DLL decrypts the AES-encrypted payload in config.etl and loads it. The encrypted payload is dnscfg.dll, which can be loaded into vshost.exe instead of dfsvc.exe and may be used if the environment does not support .NET.

Figure 3: ClickOnce configuration.

The malicious configuration disables logging, forces the application to load dnscfg.dll from the remote server, and uses a custom AppDomainManager to ensure the DLL is executed during initialization of dfsvc.exe. To ensure persistence, a scheduled task is added for %APPDATA%\Local\Microsoft\WindowsApps\dfsvc.exe.

Core payload

The DLL dnscfg.dll is a .NET binary named Client.TcpDmtp.dll. The payload is a heavily obfuscated backdoor that generates its logic at runtime and communicates with its command-and-control (C2) server over DMTP (Duplex Message Transport Protocol), a custom TCP-based protocol. It appears to be an updated version of FDMTP, now at version 3.2.5.1.

Figure 4: InitializeNewDomain.

The payload:

  • Uses cluster-based resolution (GetHostFromCluster)
  • Implements token validation
  • Enters a persistent execution loop (LoopMessage)
  • Supports structured remote tasking over DMTP

Once connected, the malware enters a persistent loop (LoopMessage), enabling it to receive commands from the remote server.

Figure 5: DMTP Connect function.

Rather than referencing values directly, they are retrieved through containers that are resolved at runtime. String values are stored in an encrypted byte array (_0) and decrypted by a custom XOR-based string decryption routine (dcsoft). The lower 16 bits of the provided key are XORed with 0xA61D (42525) to derive the initial XOR key, while subsequent bits define the string length and offset into the encrypted byte array. Each character is reconstructed from two encrypted bytes and XORed with the incrementing key value, producing the plaintext string used by the payload.

Figure 6: Decrypted strings.
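The routine described above can be approximated as follows. This is a sketch based on the analysis: the 0xA61D key derivation and the two-bytes-per-character reconstruction follow the text, but the exact bit layout of the length and offset fields in the key is an assumption, and dcsoft_encrypt exists only to demonstrate a round trip:

```python
def dcsoft_decrypt(blob: bytes, key: int) -> str:
    """Sketch of the described routine. Assumed bit layout of `key`:
    bits 0-15 derive the XOR key (via 0xA61D), bits 16-23 hold the
    string length, bits 24+ the offset into the encrypted array."""
    xor_key = (key & 0xFFFF) ^ 0xA61D
    length = (key >> 16) & 0xFF
    offset = key >> 24
    chars = []
    for i in range(length):
        lo, hi = blob[offset + 2 * i], blob[offset + 2 * i + 1]
        # Each character is rebuilt from two encrypted bytes, then
        # XORed with the incrementing key value.
        chars.append(chr(((hi << 8) | lo) ^ ((xor_key + i) & 0xFFFF)))
    return "".join(chars)

def dcsoft_encrypt(text: str, key: int) -> bytes:
    """Matching encryptor, used here only to round-trip the example."""
    xor_key = (key & 0xFFFF) ^ 0xA61D
    out = bytearray()
    for i, c in enumerate(text):
        v = ord(c) ^ ((xor_key + i) & 0xFFFF)
        out += bytes([v & 0xFF, v >> 8])
    return bytes(out)

key = 0x1234 | (len("GetHostFromCluster") << 16)  # length in bits 16-23
blob = dcsoft_encrypt("GetHostFromCluster", key)
print(dcsoft_decrypt(blob, key))  # GetHostFromCluster
```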

Embedded in the resources section are multiple compressed binaries, the majority of which are library files. The only exceptions are client.core.dll and client.dmtpframe.dll.

Figure 7: Resources.

Modular framework and plugins

The payload embeds multiple compressed libraries, notably:

  • client.core.dll
  • client.dmtpframe.dll

Client.core.dll is a core library used for system profiling, C2 communication and plugin execution. The implant has the functionality to retrieve information including antivirus products, domain name, HWID, CLR version, administrator status, hardware details, network details, operating system, and user.

Figure 8: Client.Core.Info functions.

Additionally, the component is responsible for loading plugins, with support for both binary and JSON-based plugin execution. This allows plugins to receive commands and parameters in different formats depending on the task being performed.

The framework handles details such as plugin hashes, method names, task identifiers, caller tracking, and argument processing, allowing plugins to be executed consistently within the environment. In addition to execution management, the library also provides plugins with access to common runtime functionality such as logging, communication, and process handling.

Figure 9: Client.core functions.

client.dmtpframe.dll handles:

  • DMTP communication
  • Heartbeats and reconnection
  • Plugin persistence via registry:

HKCU\Software\Microsoft\IME\{id}

Client.dmtpframe.dll is built on the TouchSocket DMTP networking library and manages the remote plugins. The DLL implements remote communication features including heartbeat maintenance, reconnection handling, RPC-style messaging, SSL support, and token-based verification. It can also add plugins to the registry under HKCU\Software\Microsoft\IME\{id} for persistence.

Plugins observed

While the full set of plugins remains unknown, researchers were able to identify four plugins, including:

  • Persist.WpTask.dll - used to create, remove and trigger scheduled Windows tasks remotely.
  • Persist.registry.dll - used to manage registry persistence, with the ability to create and delete registry values, along with hidden persistence keys.
  • Persist.extra.dll - used to load and persist the main framework.
  • Assist.dll - used to remotely retrieve files or commands, as well as manipulate system processes.
Figure 10: Plugins stored in IME registry.
Figure 11: Obfuscated script in plugin resources.

Persist.extra.dll loads a script, “setup.log”, which loads and persists the main framework. Stored within the resources section of the binary is an obfuscated script that creates a .NET COM object, added to the registry key HKCU\Software\Classes\TypeLib\{9E175B61-F52A-11D8-B9A5-505054503030}\1.0\1\Win64 for persistence. Deobfuscating this script reveals another DLL, named “WindowsBase.dll”.

Figure 12: Registry entry for script.

The binary checks in with icloud-cdn[.]net every five minutes, retrieves a version string, downloads an encrypted payload named checksum.bin, saves it locally as C:\ProgramData\USOShared\Logs\checksum.etl, decrypts it with AES using the hardcoded key POt_L[Bsh0=+@0a., and loads the decrypted assembly directly from memory via Assembly.Load(byte[]). The version.txt file acts as an update marker so it only re-downloads when the remote version changes, while the mutex prevents duplicate instances.

Figure 13: USOShared/Logs.

Checksum.etl is decrypted with AES and loaded into memory, loading another .NET DLL named “Client.dll”. This binary is the same as “dnscfg.dll” mentioned at the start and allows the threat actors to update the main framework based on the version.

Conclusion

Across cases, Darktrace consistently observed the following sequence:

  • Retrieval of legitimate executables
  • Retrieval of DLLs for sideloading
  • C2 registration via /GetCluster

This approach is consistent with broader China-nexus tradecraft. As outlined in Darktrace’s Crimson Echo report, the stable feature of this activity is behavioral. Infrastructure rotates and payloads can change, but the execution model persists. For defenders, the implication is straightforward: detection anchored to individual indicators will degrade quickly, while detection anchored to a behavioral sequence offers a far more durable approach.

Credit to Tara Gould (Malware Research Lead), Adam Potter (Senior Cyber Analyst), Emma Foulger (Global Threat Research Operations Lead), and Nathaniel Jones (VP, Security & AI Strategy).

Edited by Ryan Traill (Content Manager)


Appendices

A detailed list of detection models and triggered indicators is provided alongside IoCs.

Indicators of Compromise (IoCs)

Test.zip - fc3959ebd35286a82c662dc81ca658cb

Dnscfg.dll - b2c8f1402d336963478f4c5bc36c961a

Client.TcpDmtp.dll - c52b4a16d93a44376f0407f1c06e0b

Browser_host.dll - c17f39d25def01d5c87615388925f45a

Client.DmtpFrame.dll - 482cc72e01dfa54f30efe4fefde5422d

Persist.Extra - 162F69FE29EB7DE12B684E979A446131

Persist.Registry - 067FBAD4D6905D6E13FDC19964C1EA52

Assist - 2CD781AB63A00CE5302ED844CFBECC27

Persist.WpTask - DF3437C88866C060B00468055E6FA146

Microsoft.VisualStudio.HostingProcess.Utilities.Sync.dll - c650a624455c5222906b60aac7e57d48

www.icloud-cdn[.]net

www.yahoo-cdn.it[.]com

154.223.58[.]142

MITRE ATT&CK Techniques

T1106 – Native API

T1053.005 - Scheduled Task

T1546.015 - Component Object Model Hijacking

T1547.001 - Registry Run Keys

T1055.001 - Dynamic-link Library Injection

T1622 – Debugger Evasion

T1140 – Deobfuscate/Decode Files or Information

T1574.001 - Hijack Execution Flow: DLL

T1620 – Reflective Code Loading

T1082 – System Information Discovery

T1007 – System Service Discovery

T1033 – System Owner/User Discovery

T1071.001 - Web Protocols

T1027.007 - Dynamic API Resolution

T1095 – Non-Application Layer Protocol

Darktrace Model Alerts

  • Compromise / Beaconing Activity To External Rare
  • Compromise / HTTP Beaconing to Rare Destination
  • Anomalous File / Script from Rare External Location
  • Compromise / Sustained SSL or HTTP Increase
  • Compromise / Agent Beacon to New Endpoint
  • Anomalous File / EXE from Rare External Location
  • Anomalous File / Multiple EXE from Rare External Locations
  • Compromise / Quick and Regular Windows HTTP Beaconing
  • Compromise / High Volume of Connections with Beacon Score
  • Anomalous File / Anomalous Octet Stream (No User Agent)
  • Compromise / Repeating Connections Over 4 Days
  • Device / Large Number of Model Alerts
  • Anomalous Connection / Multiple Connections to New External TCP Port
  • Compromise / Large Number of Suspicious Failed Connections
  • Anomalous Connection / Multiple Failed Connections to Rare Endpoint
  • Device / Increased External Connectivity

About the author
Tara Gould
Malware Research Lead

May 12, 2026

Resilience at the Speed of AI: Defending the Modern Campus with Darktrace


Why higher education is a different cybersecurity battlefield

After four decades in IT, now serving as both CIO and CISO, I’ve learned one simple truth: cybersecurity is never “done.” It’s a constant game of cat and mouse. Criminals evolve. Technologies advance. Regulations expand. But in higher education, the challenge is uniquely complex.

Unlike a bank or a military installation, we can’t lock down networks to a narrow set of approved applications. Higher education environments are open by design. Students collaborate globally, faculty conduct cutting-edge research, and administrators manage critical operations, all of which require seamless access to the internet, global networks, cloud platforms, and connected systems.

Combine that openness with expanding regulatory mandates and tight budgets, and the balancing act becomes clear.

Threat actors don’t operate under the same constraints. Often well-funded and sponsored by nation-states with significant resources, they’re increasingly organized, strategic, and innovative.

That sophistication shows up in the tactics we face every day, from social engineering and ransomware to AI-driven impersonation attacks. We’re dealing with massive volumes of data, countless signals, and a very small window between detection and damage.

No human team, no matter how talented or how numerous, can manually sift through that noise at the speed required.

Discovering a force multiplier

Nothing in cybersecurity is 100% foolproof. I never “set it and forget it.” But for institutions balancing rising threats and finite resources, the Darktrace ActiveAI Security Platform™ offers something incredibly valuable: peace of mind through speed and scale.

It closes the gap between detection and response in a way humans can’t possibly match. At the speed of light, it can quarantine, investigate, and contain anomalous activity.

I’ve purchased and deployed Darktrace three separate times at three different institutions because I’ve seen firsthand what it can do and what it enables teams like mine to achieve.

I first encountered Darktrace while serving as CIO for a large multi-campus college system. What caught my attention was Darktrace's Self-Learning AI, and its ability to learn what "normal" looked like across our network. Instead of relying solely on static signatures or rigid rules, Darktrace built a behavioral baseline unique to our environment and alerted us in real time when something simply didn’t look right.

In higher education, where strict lockdowns aren’t realistic, that behavioral model made all the difference. We deployed it across five campuses, and the impact was immediate. Operating 24/7, Darktrace surfaced threats in ways our team couldn’t replicate manually.

Over time, the Darktrace platform evolved alongside the changing threat landscape, expanding into intrusion prevention, cloud visibility, and email security. At subsequent institutions, including Washington College, Darktrace was one of my first strategic investments.

Revealing the hidden threat other tools missed

One of the most surprising investigations of my career involved a data leak. Leadership suspected sensitive information from high-level meetings was being exposed, but our traditional tools couldn’t provide any answers.

Using Darktrace’s deep network visibility, down to packet-level data, we traced unusual connections to our CCTV camera system, which had been configured with a manufacturer’s default password. A small group of employees had hacked into the CCTV cameras, accessed audio-enabled recordings from boardroom meetings, and stored copies locally.

No other tool in our environment could have surfaced those connections the way Darktrace did. It was a clear example of why using AI to deeply understand how your organization, systems, and tools normally behave matters: threats and risks don’t always look the way we expect.

Elevating a D rating into an A-level security program

When I arrived at my last CISO role, the institution had recently experienced a significant ransomware attack. The attackers had located sensitive data and used it to set a ransom demand they knew would likely be paid. It was a sobering example of how calculated and strategic modern cybercriminals have become.

Third-party cyber ratings reflected that reality: the institution held a D rating.

To raise the bar, we implemented a comprehensive security program with layered defenses, deploying state-of-the-art tools and methods across the environment, with Darktrace at its core.

After a 90-day learning period to establish our behavioral baseline, we transitioned the platform into fully autonomous mode. In a single 30-day span, Darktrace conducted more than 2,500 investigations and autonomously resolved 92% of all false positives.

For a small team, that’s transformative. Instead of drowning in alerts, my staff focused on the fewer than 200 meaningful cases that warranted human review.

Today, we maintain a perfect A rating from third-party assessors and have remained cybersafe.

Peace of mind isn’t about complacency

The effect of Darktrace as a force multiplier has a real human impact.

With the time reclaimed through automation, we expanded community education programs and implemented simulated phishing exercises. Through sustained training and awareness efforts, we reduced social engineering susceptibility from nearly 45% to under 5%.

On a personal level, Darktrace allows me to sleep better at night and take time off knowing we have intelligent systems monitoring and responding around the clock. For any CIO or CISO carrying institutional risk on their shoulders, that matters.

The next era: AI vs. AI

A new chapter in cybersecurity is unfolding as adversaries leverage AI to enhance scale, speed, and believability. Phishing campaigns are more personalized, impersonation attempts are more precise, and deepfake video technology, including live video, is disturbingly authentic. At the same time, organizations are rapidly adopting AI across their own environments —from GenAI assistants to embedded tools to autonomous agents. These systems don’t operate within fixed rules. They act across email, cloud, SaaS, and identity systems, often with broad permissions, and their behavior can evolve over time in ways that are difficult to predict or control.

That creates a new kind of security challenge. It’s not just about defending against AI-powered threats but understanding and governing how AI behaves within your environment, including what it can access, how it acts, and where risk begins to emerge.

From my perspective, this is a natural next step for Darktrace.

Darktrace brings a level of maturity and behavioral understanding uniquely suited to the complexity of AI environments. Self-Learning AI learns the normal patterns of each business to interpret context, uncover subtle intent, and detect meaningful deviations without relying on predefined rules or signatures. Extending into securing AI by bringing real-time visibility and control to GenAI assistants, AI agents, development environments and Shadow AI, feels like the logical evolution of what Darktrace already does so well.

Just as importantly, Darktrace is already built for dynamic, cross-domain environments where risk doesn’t sit in a single tool or control plane. In higher education, activity already spans multiple systems and, with AI, that interconnection only accelerates.

Having deployed Darktrace multiple times, I have confidence it’s uniquely positioned to lead in this space and help organizations adopt AI with greater visibility and control.

---

Since authoring this blog, Irving Bruckstein has transitioned to the role of Chief Executive Officer of CyberAIgroup.

About the author
Irving Bruckstein
CEO CyberAIgroup