October 30, 2023

Exploring AI Threats: Package Hallucination Attacks

Learn how malicious actors exploit errors in generative AI tools to launch package hallucination attacks. Read how Darktrace products detect and prevent these threats!
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Charlotte Thompson
Cyber Analyst
Written by
Tiana Kelly
Senior Cyber Analyst & Team Lead

AI tools open doors for threat actors

On November 30, 2022, OpenAI, an artificial intelligence (AI) research and development company, launched ChatGPT, a free conversational language generation model. The launch was the culmination of development ongoing since 2018; it represented the latest innovation in the generative AI boom and made generative AI tools accessible to the general population for the first time.

ChatGPT is estimated to currently have at least 100 million users, and in August 2023 the site reached 1.43 billion visits [1]. Darktrace data indicated that, as of March 2023, 74% of active customer environments have employees using generative AI tools in the workplace [2].

However, with new tools come new opportunities for threat actors to exploit and use them maliciously, expanding their arsenal.

Much consideration has been given to mitigating the impacts of the increased linguistic complexity in social engineering and phishing attacks resulting from generative AI tool use, with Darktrace observing a 135% increase in ‘novel social engineering attacks’ across thousands of active Darktrace/Email™ customers from January to February 2023, corresponding with the widespread adoption of ChatGPT and its peers [3].

Less overall consideration, however, has been given to impacts stemming from errors intrinsic to generative AI tools. One of these errors is AI hallucinations.

What is an AI hallucination?

AI “hallucination” is a term referring to cases in which the predictive element of a generative AI or large language model (LLM) produces an unexpected or factually incorrect response that does not align with its machine learning training data [4]. This differs from the regular and intended behavior of an AI model, which should provide a response grounded in the data it was trained upon.

Why are AI hallucinations a problem?

Despite the term suggesting a rare phenomenon, hallucinations occur far more often than users might expect, because the models underlying LLMs are merely predictive: they generate the most probable text or outcome rather than optimizing for factual accuracy.

Given the widespread use of generative AI tools in the workplace, employees are becoming significantly more likely to encounter an AI hallucination. Furthermore, if these fabricated responses are taken at face value, they could cause significant issues for an organization.

Use of generative AI in software development

Software developers may use generative AI for recommendations on how to optimize their scripts or code, or to find packages to import for various uses. Asking an LLM how to solve a specific coding problem will often lead to a recommendation of a third-party package. Such recommendations can themselves be AI hallucinations: the suggested package may never have been published, or, more precisely, may not have been published prior to the cut-off date of the model's training data. If a non-existent package is suggested consistently and developers copy the code snippet wholesale, the resulting code may leave their organizations vulnerable to attack.

Research conducted by Vulcan revealed the prevalence of AI hallucinations when ChatGPT is asked questions related to coding. After sourcing a sample of commonly asked coding questions from Stack Overflow, a question-and-answer website for programmers, researchers queried ChatGPT (in the context of Node.js and Python) and reviewed its responses. In 20% of the responses pertaining to Node.js, at least one unpublished package was included, whilst the figure sat at around 35% for Python [4].

Hallucinations can be unpredictable, but would-be attackers are able to find packages to create by asking generative AI tools generic questions and checking whether the suggested packages exist already. As such, attacks using this vector are unlikely to target specific organizations, instead posing more of a widespread threat to users of generative AI tools.

Malicious packages as attack vectors

Although AI hallucinations can be unpredictable, and responses given by generative AI tools may not always be consistent, malicious actors are able to discover AI hallucinations by adopting the approach used by Vulcan. This allows hallucinated packages to be used as attack vectors. Once a malicious actor has discovered a hallucination of an un-published package, they are able to create a package with the same name and include a malicious payload, before publishing it. This is known as a malicious package.
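The pivotal check in this workflow, for attackers hunting for squattable names and for defenders vetting recommendations alike, is whether a suggested package name is actually registered. As an illustrative sketch (not tooling from this article), the check can be scripted against PyPI's public JSON API, which returns 200 for a registered project and 404 for an unclaimed name:

```python
import urllib.error
import urllib.request


def interpret_status(status: int) -> bool:
    """Map an HTTP status from PyPI's JSON API onto 'package name is registered'."""
    if status == 200:
        return True
    if status == 404:
        return False
    raise ValueError(f"unexpected PyPI status: {status}")


def package_exists(name: str) -> bool:
    """Query PyPI's public project endpoint (https://pypi.org/pypi/<name>/json).

    A 404 means the name is unclaimed: anyone, including an attacker squatting
    on a hallucinated recommendation, could publish a package under it.
    """
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return interpret_status(resp.status)
    except urllib.error.HTTPError as err:
        return interpret_status(err.code)
```

A defender would treat an unregistered recommendation as a red flag rather than a relief: the name remains available for a malicious actor to claim.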

Malicious packages could also be recommended by generative AI tools in the form of pre-existing packages. A user may be recommended a package that had previously been confirmed to contain malicious content, or a package that is no longer maintained and, therefore, is more vulnerable to hijack by malicious actors.

In such scenarios it is not necessary to manipulate the training data (data poisoning) to achieve the desired outcome for the malicious actor, thus a complex and time-consuming attack phase can easily be bypassed.

An unsuspecting software developer may incorporate a malicious package into their code, rendering it harmful. Deployment of this code could then result in compromise and escalation into a full-blown cyber-attack.

Figure 1: Flow diagram depicting the initial stages of an AI Package Hallucination Attack.

For providers of Software-as-a-Service (SaaS) products, this attack vector may represent an even greater risk. Such organizations may have a higher proportion of employed software developers than other organizations of comparable size. A threat actor, therefore, could utilize this attack vector as part of a supply chain attack, whereby a malicious payload becomes incorporated into trusted software and is then distributed to multiple customers. This type of attack could have severe consequences including data loss, the downtime of critical systems, and reputational damage.

How could Darktrace detect an AI Package Hallucination Attack?

In June 2023, Darktrace introduced a range of DETECT™ and RESPOND™ models designed to identify the use of generative AI tools within customer environments, and to autonomously perform inhibitive actions in response to such detections. These models will trigger based on connections to endpoints associated with generative AI tools, as such, Darktrace’s detection of an AI Package Hallucination Attack would likely begin with the breaching of one of the following DETECT models:

  • Compliance / Anomalous Upload to Generative AI
  • Compliance / Beaconing to Rare Generative AI and Generative AI
  • Compliance / Generative AI

Should generative AI tool use not be permitted by an organization, the Darktrace RESPOND model ‘Antigena / Network / Compliance / Antigena Generative AI Block’ can be activated to autonomously block connections to endpoints associated with generative AI, thus preventing an AI Package Hallucination attack before it can take hold.

Once a malicious package has been recommended, it may be downloaded from GitHub, a platform and cloud-based service used to store and manage code. Darktrace DETECT is able to identify when a device has performed a download from an open-source repository such as GitHub using the following models:

  • Device / Anomalous GitHub Download
  • Device / Anomalous Script Download Followed By Additional Packages

The goal the malicious package was designed to fulfil will determine the next stages of the attack. Due to their highly flexible nature, AI package hallucinations could be used as an attack vector to deliver a wide variety of malware types.

As GitHub is a commonly used service by software developers and IT professionals alike, traditional security tools may not alert customer security teams to such GitHub downloads, meaning malicious downloads may go undetected. Darktrace’s anomaly-based approach to threat detection, however, enables it to recognize subtle deviations in a device’s pre-established pattern of life which may be indicative of an emerging attack.

Subsequent anomalous activity representing the possible progression of the kill chain as part of an AI Package Hallucination Attack could then trigger an Enhanced Monitoring model. Enhanced Monitoring models are high-fidelity indicators of potential malicious activity that are investigated by the Darktrace analyst team as part of the Proactive Threat Notification (PTN) service offered by the Darktrace Security Operation Center (SOC).

Conclusion

Employees are often considered the first line of defense in cyber security; this is particularly true in the face of an AI Package Hallucination Attack.

As the use of generative AI becomes more accessible and an increasingly prevalent tool in an attacker’s toolbox, organizations will benefit from implementing company-wide policies to define expectations surrounding the use of such tools. It is simple, yet critical, for example, for employees to fact check responses provided to them by generative AI tools. All packages recommended by generative AI should also be checked by reviewing non-generated data from either external third-party or internal sources. It is also good practice to adopt caution when downloading packages with very few downloads as it could indicate the package is untrustworthy or malicious.
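The download-count advice above can be folded into a pre-install review step. The sketch below is a hypothetical heuristic, with illustrative thresholds rather than recommendations; the metadata values would come from a registry such as PyPI:

```python
def looks_risky(downloads_last_month: int,
                days_since_first_release: int,
                min_downloads: int = 1000,
                min_age_days: int = 90) -> bool:
    """Flag a package for manual review if it is very new or rarely downloaded.

    Thresholds are illustrative assumptions; tune them to your own risk appetite.
    Low download counts and young packages are weak signals on their own, but
    both are common properties of freshly squatted hallucinated names.
    """
    return (downloads_last_month < min_downloads
            or days_since_first_release < min_age_days)
```

A flagged result should trigger human review against non-generated sources, not an automatic block, since many legitimate niche packages are also small and new.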

As of September 2023, ChatGPT Plus and Enterprise users were able to use the tool to browse the internet, expanding the data ChatGPT can access beyond the previous training data cut-off of September 2021 [5]. This feature will be expanded to all users soon [6]. ChatGPT providing up-to-date responses could prompt the evolution of this attack vector, allowing attackers to publish malicious packages which could subsequently be recommended by ChatGPT.

It is inevitable that AI tools will be embraced more widely in the workplace in the coming years as the technology advances and existing tools become less novel and more familiar. By fighting fire with fire, using AI technology to identify AI usage, Darktrace is uniquely placed to detect and take preventative action against malicious actors capitalizing on the AI boom.

Credit to Charlotte Thompson, Cyber Analyst, and Tiana Kelly, Analyst Team Lead, London.

References

[1] https://seo.ai/blog/chatgpt-user-statistics-facts

[2] https://darktrace.com/news/darktrace-addresses-generative-ai-concerns

[3] https://darktrace.com/news/darktrace-email-defends-organizations-against-evolving-cyber-threat-landscape

[4] https://vulcan.io/blog/ai-hallucinations-package-risk?nab=1&utm_referrer=https%3A%2F%2Fwww.google.com%2F

[5] https://twitter.com/OpenAI/status/1707077710047216095

[6] https://www.reuters.com/technology/openai-says-chatgpt-can-now-browse-internet-2023-09-27/




May 14, 2026

Chinese APT Campaign Targets Entities with Updated FDMTP Backdoor


Darktrace has identified activity consistent with Chinese-nexus operations: a Twill Typhoon-linked campaign targeting customer environments, primarily within the Asia-Pacific & Japan (APJ) region.

Beginning in late September 2025, multiple affected hosts were observed making requests to domains impersonating content delivery networks (CDNs), including infrastructure masquerading as Yahoo- and Apple-affiliated services. Across these cases, Darktrace identified a consistent behavioral execution pattern: the retrieval of legitimate binaries alongside malicious Dynamic Link Libraries (DLLs), enabling sideloading and execution of a modular .NET-based Remote Access Trojan (RAT) framework.

The activity aligns with patterns described in Darktrace’s previous Chinese-nexus operations report, Crimson Echo: modular intrusion chains built on legitimate software and staged payload delivery. Threat actors retrieve legitimate binaries alongside configuration files and malicious DLLs to enable sideloading of a .NET-based RAT.

Observed Campaign

Across cases, the same ordered sequence appears: (1) retrieval of a legitimate executable, (2) retrieval of a matching .config file, (3) retrieval of the malicious DLL, (4) repeated DLL downloads over time, and (5) command-and-control (C2) communication. The .config file retrieves a malicious binary, while the legitimate binary provides a legitimate process to run it in.
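For defenders mining proxy or network logs, the ordered sequence above lends itself to a simple state-machine check. The sketch below is a hypothetical illustration, not Darktrace detection logic; the event fields and file-extension stages are assumptions drawn from the sequence as described:

```python
from collections import defaultdict

# Ordered stages of the staging sequence: legitimate EXE, matching
# .config, then the DLL intended for sideloading.
STAGES = (".exe", ".exe.config", ".dll")


def flag_staging_sequences(events):
    """Flag (device, server) pairs that walk through all stages in order.

    `events` is an iterable of (timestamp, device, server, uri) tuples,
    pre-sorted by timestamp. Requests that do not match the next expected
    stage for that pair are simply ignored.
    """
    progress = defaultdict(int)  # (device, server) -> index of next stage
    flagged = set()
    for _ts, device, server, uri in events:
        key = (device, server)
        stage = progress[key]
        if stage < len(STAGES) and uri.lower().endswith(STAGES[stage]):
            progress[key] += 1
            if progress[key] == len(STAGES):
                flagged.add(key)
    return flagged
```

Anchoring detection to the sequence rather than to any single filename keeps the check useful as infrastructure and payload names rotate.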

Darktrace assesses with moderate confidence that this activity aligns with publicly reported Twill Typhoon tradecraft. The observed use of FDMTP, DLL sideloading, and overlapping infrastructure is consistent with previously observed operations, though not unique to a single actor. While initial access was not directly observed, previous Twill Typhoon campaigns have typically involved spear-phishing.

What Darktrace Observed

Since late September 2025, Darktrace has observed multiple customer environments making HTTP GET requests to infrastructure presenting as “CDN” endpoints for well-known platforms (including Yahoo and Apple lookalikes). Across cases, the affected hosts retrieved legitimate executables, then matching .config files (same base filename), then DLLs intended for sideloading. This sequencing of a legitimate binary, configuration file, and DLL has previously been observed in campaigns linked to China-nexus threat actors.

In several cases, affected hosts also issued outbound requests to a /GetCluster endpoint, including the protocol=Dotnet-Tcpdmtp parameter. This activity was repeatedly followed by retrieval of DLL content that was subsequently used for search-order hijacking within legitimate processes.

In the September–October 2025 cases, Darktrace alerting commonly surfaced early-stage registration and C2 setup behaviors, followed by retrieval of a DLL (e.g., Client.dll) from the same external host, sometimes repeatedly over multiple days, consistent with establishing and maintaining the execution chain.

In April 2026, a finance-sector endpoint initiated a series of GET requests to yahoo-cdn[.]it[.]com, first fetching legitimate binaries (including vshost.exe and dfsvc.exe), then repeatedly retrieving associated configuration and DLL components (including dfsvc.exe.config and dnscfg.dll) over an 11-day window. Both the Visual Studio hosting (vshost.exe) and ClickOnce (dfsvc.exe) paths are used to ensure the malware can run in the targeted environment.

Technical Analysis

Initial staging and execution

While the initial access method is unknown, Darktrace security researchers identified multiple archives containing the malware.

A representative example includes a ZIP archive (“test.zip”) containing:

  • A legitimate executable: biz_render.exe (Sogou Pinyin IME)
  • A malicious DLL: browser_host.dll

The legitimate binary loads a DLL named “browser_host.dll” via LoadLibraryExW. By placing a malicious DLL with an identical name alongside the executable, the actor hijacks the execution flow, causing the payload to be sideloaded into biz_render.exe and executed within a trusted process.

Figure 1: Biz_render.exe loading browser_host.dll.

The legitimate binary invokes the function GetBrowserManagerInstance from the sideloaded “browser_host.dll”, which then performs XOR-based decryption of embedded strings (key 0x90) to resolve and dynamically load mscoree.dll.

The DLL uses the Windows Common Language Runtime (CLR) to execute managed .NET code inside the process rather than relying solely on native binaries. During execution, the loader loads a payload directly into memory as .NET assemblies, enabling an in-memory execution.

C2 Registration

A GET request is made to:

GET /GetCluster?protocol=DotNet-TcpDmtp&tag={0}&uid={1}

with the custom header:

Verify_Token: Dmtp

This returns Base64-encoded and gzip-compressed IP addresses used for subsequent communication.

Figure 2: Decoded IPs.
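Based on that description, the response body can be decoded with Python's standard library. This is a sketch from the stated encoding (Base64 wrapping a gzip stream); treating the plaintext as newline-delimited IPs is an assumption, as the report does not state the delimiter:

```python
import base64
import gzip


def decode_cluster_response(body: bytes) -> list[str]:
    """Decode a /GetCluster response: Base64 text wrapping a gzip stream.

    The decompressed plaintext is assumed to hold one C2 IP per line;
    blank lines and surrounding whitespace are discarded.
    """
    compressed = base64.b64decode(body)
    plaintext = gzip.decompress(compressed).decode("utf-8")
    return [line.strip() for line in plaintext.splitlines() if line.strip()]
```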

Staged payload retrieval

Subsequent activity includes retrieval of multiple components from yahoo-cdn.it[.]com. The following GET requests are made:

/dfsvc.exe

/dnscfg.dll

/dfsvc.exe.config

/vhost.exe

/Microsoft.VisualStudio.HostingProcess.Utilities.Sync.dll

/config.etl

ClickOnce and AppDomain hijacking

Dfsvc.exe is the legitimate Windows ClickOnce Engine, part of the .NET framework used for updating ClickOnce Applications. Accompanying dfsvc.exe is a legitimate dfsvc.exe.config file that is used to store configuration data for the application. However, in this instance the malware has replaced the legitimate dfsvc.exe.config with the one retrieved from the server in: C:\Windows\Microsoft.NET\Framework64\v4.0.30319.

Additionally, vshost.exe, the legitimate Visual Studio hosting process, is retrieved from the server, along with “Microsoft.VisualStudio.HostingProcess.Utilities.Sync.dll” and “config.etl”. The DLL is used to decrypt the AES-encrypted payload in config.etl and load it. The encrypted payload is dnscfg.dll, which can be loaded into vshost.exe instead of dfsvc.exe, and may be used if the environment does not support .NET.

Figure 3: ClickOnce configuration.

The malicious configuration disables logging, forces the application to load dnscfg.dll from the remote server, and uses a custom AppDomainManager to ensure the DLL is executed during initialization of dfsvc.exe. To ensure persistence, a scheduled task is added for %APPDATA%\Local\Microsoft\WindowsApps\dfsvc.exe.

Core payload

The DLL dnscfg.dll is a .NET binary named Client.TcpDmtp.dll. The payload is a heavily obfuscated backdoor that generates its logic at runtime and communicates with the command-and-control (C2) server over custom TCP using DMTP (Duplex Message Transport Protocol); it appears to be an updated version of FDMTP, version 3.2.5.1.

Figure 4: InitializeNewDomain.

The payload:

  • Uses cluster-based resolution (GetHostFromCluster)
  • Implements token validation
  • Enters a persistent execution loop (LoopMessage)
  • Supports structured remote tasking over DMTP

Once connected, the malware enters a persistent loop (LoopMessage), enabling it to receive commands from the remote server.

Figure 5: DMTP Connect function.

Rather than referencing values directly, they are retrieved through containers that are resolved at runtime. String values are stored in an encrypted byte array (_0) and decrypted by a custom XOR-based string decryption routine (dcsoft). The lower 16 bits of the provided key are XORed with 0xA61D (42525) to derive the initial XOR key, while subsequent bits define the string length and offset into the encrypted byte array. Each character is reconstructed from two encrypted bytes and XORed with the incrementing key value, producing the plaintext string used by the payload.

Figure 6: Decrypted strings.
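The described routine can be sketched as follows. This is a functional reconstruction from the description above, not the decompiled code: the exact bit layout of the length and offset fields, and the little-endian byte order per character, are assumptions. An inverse function is included so the scheme can be exercised round-trip:

```python
def dcsoft_decrypt(blob: bytes, key: int) -> str:
    """Sketch of the described dcsoft string decryption (bit layout assumed).

    - bits 0..15 of `key`, XORed with 0xA61D, give the initial rolling XOR key
    - bits 16..31 give the string length; bits 32+ give the offset into `blob`
    - each character is rebuilt from two little-endian bytes, then XORed
      with the incrementing key value
    """
    xor_key = (key & 0xFFFF) ^ 0xA61D
    length = (key >> 16) & 0xFFFF
    offset = key >> 32
    chars = []
    for i in range(length):
        lo = blob[offset + 2 * i]
        hi = blob[offset + 2 * i + 1]
        chars.append(chr(((hi << 8) | lo) ^ ((xor_key + i) & 0xFFFF)))
    return "".join(chars)


def dcsoft_encrypt(s: str, xor_key: int) -> bytes:
    """Inverse of the sketch above, used only to verify the round trip."""
    out = bytearray()
    for i, ch in enumerate(s):
        v = ord(ch) ^ ((xor_key + i) & 0xFFFF)
        out += bytes([v & 0xFF, (v >> 8) & 0xFF])
    return bytes(out)
```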

Embedded in the resources section are multiple compressed binaries, the majority of which are library files. The only exceptions are client.core.dll and client.dmtpframe.dll.

Figure 7: Resources.

Modular framework and plugins

The payload embeds multiple compressed libraries, notably:

  • client.core.dll
  • client.dmtpframe.dll

Client.core.dll is a core library used for system profiling, C2 communication and plugin execution. The implant has the functionality to retrieve information including antivirus products, domain name, HWID, CLR version, administrator status, hardware details, network details, operating system, and user.

Figure 8: Client.Core.Info functions.

Additionally, the component is responsible for loading plugins, with support for both binary and JSON-based plugin execution. This allows plugins to receive commands and parameters in different formats depending on the task being performed.

The framework handles details such as plugin hashes, method names, task identifiers, caller tracking, and argument processing, allowing plugins to be executed consistently within the environment. In addition to execution management, the library also provides plugins with access to common runtime functionality such as logging, communication, and process handling.

Figure 9: Client.core functions.

client.dmtpframe.dll handles:

  • DMTP communication
  • Heartbeats and reconnection
  • Plugin persistence via registry:

HKCU\Software\Microsoft\IME\{id}

Client.dmtpframe.dll is built on the TouchSocket DMTP networking library and continues to manage the remote plugins. The DLL implements remote communication features including heartbeat maintenance, reconnection handling, RPC-style messaging, SSL support, and token-based verification. The DLL also has the ability to add plugins to the registry under HKCU/Software/Microsoft/IME/{id} for persistence.

Plugins observed

While the full set of plugins remains unknown, researchers were able to identify four:

  • Persist.WpTask.dll - used to create, remove and trigger scheduled Windows tasks remotely.
  • Persist.registry.dll - used to manage registry persistence with the ability to create, and delete registry values, along with hidden persistence keys.
  • Persist.extra.dll - used to load and persist the main framework.
  • Assist.dll - used to remotely retrieve files or commands, as well as manipulate system processes.

Figure 10: Plugins stored in IME registry.

Figure 11: Obfuscated script in plugin resources.

Persist.extra.dll is a module that uses a script, “setup.log”, to load and persist the main framework. Stored within the resources section of the binary is an obfuscated script that creates a .NET COM object, which is added to the registry key HKCU\Software\Classes\TypeLib\{9E175B61-F52A-11D8-B9A5-505054503030}\1.0\1\Win64 for persistence. Deobfuscating this script reveals another DLL, named “WindowsBase.dll”.

Figure 12: Registry entry for script.

The binary checks in with icloud-cdn[.]net every five minutes, retrieves a version string, downloads an encrypted payload named checksum.bin, saves it locally as C:\ProgramData\USOShared\Logs\checksum.etl, decrypts it with AES using the hardcoded key POt_L[Bsh0=+@0a., and loads the decrypted assembly directly from memory via Assembly.Load(byte[]). The version.txt file acts as an update marker so it only re-downloads when the remote version changes, while the mutex prevents duplicate instances.

Figure 13: USOShared/Logs.

Checksum.etl is decrypted with AES and loaded into memory, loading another .NET DLL named “Client.dll”. This binary is the same as “dnscfg.dll” mentioned at the start and allows the threat actors to update the main framework based on the version.

Conclusion

Across cases, Darktrace consistently observed the following sequence:

  • Retrieval of legitimate executables
  • Retrieval of DLLs for sideloading
  • C2 registration via /GetCluster

This approach is consistent with broader China-nexus tradecraft. As outlined in Darktrace’s Crimson Echo report, the stable feature of this activity is behavioral. Infrastructure rotates and payloads can change, but the execution model persists. For defenders, the implication is straightforward: detection anchored to individual indicators will degrade quickly, while detection anchored to a behavioral sequence offers a far more durable approach.

Credit to Tara Gould (Malware Research Lead), Adam Potter (Senior Cyber Analyst), Emma Foulger (Global Threat Research Operations Lead), Nathaniel Jones (VP, Security & AI Strategy)

Edited by Ryan Traill (Content Manager)


Appendices

A detailed list of detection models and triggered indicators is provided alongside IoCs.

Indicators of Compromise (IoCs)

Test.zip - fc3959ebd35286a82c662dc81ca658cb

Dnscfg.dll - b2c8f1402d336963478f4c5bc36c961a

Client.TcpDmtp.dll - c52b4a16d93a44376f0407f1c06e0b

Browser_host.dll - c17f39d25def01d5c87615388925f45a

Client.DmtpFrame.dll - 482cc72e01dfa54f30efe4fefde5422d

Persist.Extra - 162F69FE29EB7DE12B684E979A446131

Persist.Registry - 067FBAD4D6905D6E13FDC19964C1EA52

Assist - 2CD781AB63A00CE5302ED844CFBECC27

Persist.WpTask - DF3437C88866C060B00468055E6FA146

Microsoft.VisualStudio.HostingProcess.Utilities.Sync.dll - c650a624455c5222906b60aac7e57d48

www.icloud-cdn[.]net

www.yahoo-cdn.it[.]com

154.223.58[.]142

MITRE ATT&CK Techniques

T1106 – Native API

T1053.005 - Scheduled Task

T1546.015 - Component Object Model Hijacking

T1547.001 - Registry Run Keys

T1055.001 - Dynamic-link Library Injection

T1622 – Debugger Evasion

T1140 – Deobfuscate/Decode Files or Information

T1574.001 - Hijack Execution Flow: DLL

T1620 – Reflective Code Loading

T1082 – System Information Discovery

T1007 – System Service Discovery

T1033 – System Owner/User Discovery

T1071.001 - Web Protocols

T1027.007 - Dynamic API Resolution

T1095 – Non-Application Layer Protocol

Darktrace Model Alerts

  • Compromise / Beaconing Activity To External Rare
  • Compromise / HTTP Beaconing to Rare Destination
  • Anomalous File / Script from Rare External Location
  • Compromise / Sustained SSL or HTTP Increase
  • Compromise / Agent Beacon to New Endpoint
  • Anomalous File / EXE from Rare External Location
  • Anomalous File / Multiple EXE from Rare External Locations
  • Compromise / Quick and Regular Windows HTTP Beaconing
  • Compromise / High Volume of Connections with Beacon Score
  • Anomalous File / Anomalous Octet Stream (No User Agent)
  • Compromise / Repeating Connections Over 4 Days
  • Device / Large Number of Model Alerts
  • Anomalous Connection / Multiple Connections to New External TCP Port
  • Compromise / Large Number of Suspicious Failed Connections
  • Anomalous Connection / Multiple Failed Connections to Rare Endpoint
  • Device / Increased External Connectivity

About the author
Tara Gould
Malware Research Lead


May 12, 2026

Resilience at the Speed of AI: Defending the Modern Campus with Darktrace


Why higher education is a different cybersecurity battlefield

After four decades in IT, now serving as both CIO and CISO, I’ve learned one simple truth: cybersecurity is never “done.” It’s a constant game of cat and mouse. Criminals evolve. Technologies advance. Regulations expand. But in higher education, the challenge is uniquely complex.

Unlike a bank or a military installation, we can’t lock down networks to a narrow set of approved applications. Higher education environments are open by design. Students collaborate globally, faculty conduct cutting-edge research, and administrators manage critical operations, all of which require seamless access to the internet, global networks, cloud platforms, and connected systems.

Combine that openness with expanding regulatory mandates and tight budgets, and the balancing act becomes clear.

Threat actors don’t operate under the same constraints. Often well-funded and sponsored by nation-states with significant resources, they’re increasingly organized, strategic, and innovative.

That sophistication shows up in the tactics we face every day, from social engineering and ransomware to AI-driven impersonation attacks. We’re dealing with massive volumes of data, countless signals, and a very small window between detection and damage.

No human team, no matter how talented or how numerous, can manually sift through that noise at the speed required.

Discovering a force multiplier

Nothing in cybersecurity is 100% foolproof. I never “set it and forget it.” But for institutions balancing rising threats and finite resources, the Darktrace ActiveAI Security Platform™ offers something incredibly valuable: peace of mind through speed and scale.

It closes the gap between detection and response in a way humans can’t possibly match. At the speed of light, it can quarantine, investigate, and contain anomalous activity.

I’ve purchased and deployed Darktrace three separate times at three different institutions because I’ve seen firsthand what it can do and what it enables teams like mine to achieve.

I first encountered Darktrace while serving as CIO for a large multi-campus college system. What caught my attention was Darktrace's Self-Learning AI, and its ability to learn what "normal" looked like across our network. Instead of relying solely on static signatures or rigid rules, Darktrace built a behavioral baseline unique to our environment and alerted us in real time when something simply didn’t look right.

In higher education, where strict lockdowns aren’t realistic, that behavioral model made all the difference. We deployed it across five campuses, and the impact was immediate. Operating 24/7, Darktrace surfaced threats in ways our team couldn’t replicate manually.

Over time, the Darktrace platform evolved alongside the changing threat landscape, expanding into intrusion prevention, cloud visibility, and email security. At subsequent institutions, including Washington College, Darktrace was one of my first strategic investments.

Revealing the hidden threat other tools missed

One of the most surprising investigations of my career involved a data leak. Leadership suspected sensitive information from high-level meetings was being exposed, but our traditional tools couldn’t provide any answers.

Using Darktrace’s deep network visibility, down to packet-level data, we traced unusual connections to our CCTV camera system, which had been configured with a manufacturer’s default password. A small group of employees had hacked into the CCTV cameras, accessed audio-enabled recordings from boardroom meetings, and stored copies locally.

No other tool in our environment could have surfaced those connections the way Darktrace did. It was a clear example of why using AI to deeply understand how your organization, systems, and tools normally behave, matters: threats and risks don’t always look the way we expect.

Elevating a D rating into an A-level security program

When I arrived at my last CISO role, the institution had recently experienced a significant ransomware attack. The attackers had located sensitive data and used it to set their ransom demand at an amount they knew would likely result in payment. It was a sobering example of how calculated and strategic modern cybercriminals have become.

Third-party cyber ratings reflected that reality: the institution held a D rating.

To raise the bar, we implemented a comprehensive security program and integrated layered defenses, deploying state-of-the-art tools and methods across the environment, with Darktrace at its core.

After a 90-day learning period to establish our behavioral baseline, we transitioned the platform into fully autonomous mode. In a single 30-day span, Darktrace conducted more than 2,500 investigations and autonomously resolved 92% of all false positives.

For a small team, that’s transformative. Instead of drowning in alerts, my staff focused on fewer than 200 meaningful cases that warranted human review.

Today, we maintain a perfect A rating from third-party assessors and have remained cybersafe.

Peace of mind isn’t about complacency

The effect of Darktrace as a force multiplier has a real human impact.

With the time reclaimed through automation, we expanded community education programs and implemented simulated phishing exercises. Through sustained training and awareness efforts, we reduced social engineering susceptibility from nearly 45% to under 5%.

On a personal level, Darktrace allows me to sleep better at night and take time off knowing we have intelligent systems monitoring and responding around the clock. For any CIO or CISO carrying institutional risk on their shoulders, that matters.

The next era: AI vs. AI

A new chapter in cybersecurity is unfolding as adversaries leverage AI to enhance scale, speed, and believability. Phishing campaigns are more personalized, impersonation attempts are more precise, and deepfake video technology, including live video, is disturbingly authentic. At the same time, organizations are rapidly adopting AI across their own environments, from GenAI assistants to embedded tools to autonomous agents. These systems don’t operate within fixed rules. They act across email, cloud, SaaS, and identity systems, often with broad permissions, and their behavior can evolve over time in ways that are difficult to predict or control.

That creates a new kind of security challenge. It’s not just about defending against AI-powered threats but understanding and governing how AI behaves within your environment, including what it can access, how it acts, and where risk begins to emerge.

From my perspective, this is a natural next step for Darktrace.

Darktrace brings a level of maturity and behavioral understanding uniquely suited to the complexity of AI environments. Self-Learning AI learns the normal patterns of each business to interpret context, uncover subtle intent, and detect meaningful deviations without relying on predefined rules or signatures. Extending into securing AI, bringing real-time visibility and control to GenAI assistants, AI agents, development environments and Shadow AI, feels like the logical evolution of what Darktrace already does so well.

Just as importantly, Darktrace is already built for dynamic, cross-domain environments where risk doesn’t sit in a single tool or control plane. In higher education, activity already spans multiple systems and, with AI, that interconnection only accelerates.

Having deployed Darktrace multiple times, I have confidence it’s uniquely positioned to lead in this space and help organizations adopt AI with greater visibility and control.

---

Since authoring this blog, Irving Bruckstein has transitioned to the role of Chief Executive Officer of the Cyberaigroup.

About the author
Irving Bruckstein
CEO CyberAIgroup