Blog
/
AI
/
October 30, 2023

Exploring AI Threats: Package Hallucination Attacks

Learn how malicious actors exploit errors in generative AI tools to launch packet attacks. Read how Darktrace products detect and prevent these threats!
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Charlotte Thompson
Cyber Analyst
Written by
Tiana Kelly
Senior Cyber Analyst & Team Lead
Default blog imageDefault blog imageDefault blog imageDefault blog imageDefault blog imageDefault blog image
30
Oct 2023

AI tools open doors for threat actors

On November 30, 2022, the free conversational language generation model ChatGPT was launched by OpenAI, an artificial intelligence (AI) research and development company. The launch of ChatGPT was the culmination of development ongoing since 2018 and represented the latest innovation in the ongoing generative AI boom and made the use of generative AI tools accessible to the general population for the first time.

ChatGPT is estimated to currently have at least 100 million users, and in August 2023 the site reached 1.43 billion visits [1]. Darktrace data indicated that, as of March 2023, 74% of active customer environments have employees using generative AI tools in the workplace [2].

However, with new tools come new opportunities for threat actors to exploit and use them maliciously, expanding their arsenal.

Much consideration has been given to mitigating the impacts of the increased linguistic complexity in social engineering and phishing attacks resulting from generative AI tool use, with Darktrace observing a 135% increase in ‘novel social engineering attacks’ across thousands of active Darktrace/Email™ customers from January to February 2023, corresponding with the widespread adoption of ChatGPT and its peers [3].

Less overall consideration, however, has been given to impacts stemming from errors intrinsic to generative AI tools. One of these errors is AI hallucinations.

What is an AI hallucination?

AI “hallucination” is a term which refers to the predictive elements of generative AI and LLMs’ AI model gives an unexpected or factually incorrect response which does not align with its machine learning training data [4]. This differs from regular and intended behavior for an AI model, which should provide a response based on the data it was trained upon.  

Why are AI hallucinations a problem?

Despite the term indicating it might be a rare phenomenon, hallucinations are far more likely than accurate or factual results as the AI models used in LLMs are merely predictive and focus on the most probable text or outcome, rather than factual accuracy.

Given the widespread use of generative AI tools in the workplace employees are becoming significantly more likely to encounter an AI hallucination. Furthermore, if these fabricated hallucination responses are taken at face value, they could cause significant issues for an organization.

Use of generative AI in software development

Software developers may use generative AI for recommendations on how to optimize their scripts or code, or to find packages to import into their code for various uses. Software developers may ask LLMs for recommendations on specific pieces of code or how to solve a specific problem, which will likely lead to a third-party package. It is possible that packages recommended by generative AI tools could represent AI hallucinations and the packages may not have been published, or, more accurately, the packages may not have been published prior to the date at which the training data for the model halts. If these hallucinations result in common suggestions of a non-existent package, and the developer copies the code snippet wholesale, this may leave the exchanges vulnerable to attack.

Research conducted by Vulcan revealed the prevalence of AI hallucinations when ChatGPT is asked questions related to coding. After sourcing a sample of commonly asked coding questions from Stack Overflow, a question-and-answer website for programmers, researchers queried ChatGPT (in the context of Node.js and Python) and reviewed its responses. In 20% of the responses provided by ChatGPT pertaining to Node.js at least one un-published package was included, whilst the figure sat at around 35% for Python [4].

Hallucinations can be unpredictable, but would-be attackers are able to find packages to create by asking generative AI tools generic questions and checking whether the suggested packages exist already. As such, attacks using this vector are unlikely to target specific organizations, instead posing more of a widespread threat to users of generative AI tools.

Malicious packages as attack vectors

Although AI hallucinations can be unpredictable, and responses given by generative AI tools may not always be consistent, malicious actors are able to discover AI hallucinations by adopting the approach used by Vulcan. This allows hallucinated packages to be used as attack vectors. Once a malicious actor has discovered a hallucination of an un-published package, they are able to create a package with the same name and include a malicious payload, before publishing it. This is known as a malicious package.

Malicious packages could also be recommended by generative AI tools in the form of pre-existing packages. A user may be recommended a package that had previously been confirmed to contain malicious content, or a package that is no longer maintained and, therefore, is more vulnerable to hijack by malicious actors.

In such scenarios it is not necessary to manipulate the training data (data poisoning) to achieve the desired outcome for the malicious actor, thus a complex and time-consuming attack phase can easily be bypassed.

An unsuspecting software developer may incorporate a malicious package into their code, rendering it harmful. Deployment of this code could then result in compromise and escalation into a full-blown cyber-attack.

Figure 1: Flow diagram depicting the initial stages of an AI Package Hallucination Attack.

For providers of Software-as-a-Service (SaaS) products, this attack vector may represent an even greater risk. Such organizations may have a higher proportion of employed software developers than other organizations of comparable size. A threat actor, therefore, could utilize this attack vector as part of a supply chain attack, whereby a malicious payload becomes incorporated into trusted software and is then distributed to multiple customers. This type of attack could have severe consequences including data loss, the downtime of critical systems, and reputational damage.

How could Darktrace detect an AI Package Hallucination Attack?

In June 2023, Darktrace introduced a range of DETECT™ and RESPOND™ models designed to identify the use of generative AI tools within customer environments, and to autonomously perform inhibitive actions in response to such detections. These models will trigger based on connections to endpoints associated with generative AI tools, as such, Darktrace’s detection of an AI Package Hallucination Attack would likely begin with the breaching of one of the following DETECT models:

  • Compliance / Anomalous Upload to Generative AI
  • Compliance / Beaconing to Rare Generative AI and Generative AI
  • Compliance / Generative AI

Should generative AI tool use not be permitted by an organization, the Darktrace RESPOND model ‘Antigena / Network / Compliance / Antigena Generative AI Block’ can be activated to autonomously block connections to endpoints associated with generative AI, thus preventing an AI Package Hallucination attack before it can take hold.

Once a malicious package has been recommended, it may be downloaded from GitHub, a platform and cloud-based service used to store and manage code. Darktrace DETECT is able to identify when a device has performed a download from an open-source repository such as GitHub using the following models:

  • Device / Anomalous GitHub Download
  • Device / Anomalous Script Download Followed By Additional Packages

Whatever goal the malicious package has been designed to fulfil will determine the next stages of the attack. Due to their highly flexible nature, AI package hallucinations could be used as an attack vector to deliver a large variety of different malware types.

As GitHub is a commonly used service by software developers and IT professionals alike, traditional security tools may not alert customer security teams to such GitHub downloads, meaning malicious downloads may go undetected. Darktrace’s anomaly-based approach to threat detection, however, enables it to recognize subtle deviations in a device’s pre-established pattern of life which may be indicative of an emerging attack.

Subsequent anomalous activity representing the possible progression of the kill chain as part of an AI Package Hallucination Attack could then trigger an Enhanced Monitoring model. Enhanced Monitoring models are high-fidelity indicators of potential malicious activity that are investigated by the Darktrace analyst team as part of the Proactive Threat Notification (PTN) service offered by the Darktrace Security Operation Center (SOC).

Conclusion

Employees are often considered the first line of defense in cyber security; this is particularly true in the face of an AI Package Hallucination Attack.

As the use of generative AI becomes more accessible and an increasingly prevalent tool in an attacker’s toolbox, organizations will benefit from implementing company-wide policies to define expectations surrounding the use of such tools. It is simple, yet critical, for example, for employees to fact check responses provided to them by generative AI tools. All packages recommended by generative AI should also be checked by reviewing non-generated data from either external third-party or internal sources. It is also good practice to adopt caution when downloading packages with very few downloads as it could indicate the package is untrustworthy or malicious.

As of September 2023, ChatGPT Plus and Enterprise users were able to use the tool to browse the internet, expanding the data ChatGPT can access beyond the previous training data cut-off of September 2021 [5]. This feature will be expanded to all users soon [6]. ChatGPT providing up-to-date responses could prompt the evolution of this attack vector, allowing attackers to publish malicious packages which could subsequently be recommended by ChatGPT.

It is inevitable that a greater embrace of AI tools in the workplace will be seen in the coming years as the AI technology advances and existing tools become less novel and more familiar. By fighting fire with fire, using AI technology to identify AI usage, Darktrace is uniquely placed to detect and take preventative action against malicious actors capitalizing on the AI boom.

Credit to Charlotte Thompson, Cyber Analyst, Tiana Kelly, Analyst Team Lead, London, Cyber Analyst

References

[1] https://seo.ai/blog/chatgpt-user-statistics-facts

[2] https://darktrace.com/news/darktrace-addresses-generative-ai-concerns

[3] https://darktrace.com/news/darktrace-email-defends-organizations-against-evolving-cyber-threat-landscape

[4] https://vulcan.io/blog/ai-hallucinations-package-risk?nab=1&utm_referrer=https%3A%2F%2Fwww.google.com%2F

[5] https://twitter.com/OpenAI/status/1707077710047216095

[6] https://www.reuters.com/technology/openai-says-chatgpt-can-now-browse-internet-2023-09-27/

Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Charlotte Thompson
Cyber Analyst
Written by
Tiana Kelly
Senior Cyber Analyst & Team Lead

More in this series

No items found.

Blog

/

Network

/

March 26, 2026

Phantom Footprints: Tracking GhostSocks Malware

Default blog imageDefault blog image

Why are attackers using residential proxies?

In today's threat landscape, blending in to normal activity is the key to success for attackers and the growing reliance on residential proxies shows a significant shift in how threat actors are attempting to bypass IP detection tools.

The increasing dependency on residential proxies has exposed how prevalent proxy services are and how reliant a diverse range of threat actors are on them. From cybercriminal groups to state‑sponsored actors, the need to bypass IP detection tools is fundamental to the success of these groups. One malware that has quietly become notorious for its ability to avoid anomaly detection is GhostSocks, a malware that turns compromised devices into residential proxies.

What is GhostSocks?

Originally marketed on the Russian underground forum xss[.]is as a Malware‑as‑a‑Service (MaaS), GhostSocks enables threat actors to turn compromised devices into residential proxies, leveraging the victim's internet bandwidth to route malicious traffic through it.

How does Ghostsocks malware work? 

The malware offers the threat actor a “clean” IP address, making it look like it is coming from a household user. This enables the bypassing of geographic restrictions and IP detection tools, a perfect tool for avoiding anomaly detection. It wasn’t until 2024, when a partnership was announced with the infamous information stealer Lumma Stealer, that GhostSocks surged into widespread adoption and alluded to who may be the author of the proxy malware.

Written in GoLang, GhostSocks utilizes the SOCKS5 proxy protocol, creating a SOCKS5 connection on infected devices. It uses a relay‑based C2 implementation, where an intermediary server sits in between the real command-and-control (C2) server and the infected device.

How does Ghostsocks malware evade detection?

To further increase evasion, the Ghostsocks malware wraps its SOCKS5 tunnels in TLS encryption, allowing its malicious traffic to blend into normal network traffic.

Early variants of GhostSocks do not implement a persistence mechanism; however, later versions achieve persistence via registry run keys, ensuring sustained proxy operational time [1].

While proxying is its primary purpose, GhostSocks also incorporates backdoor functionality, enabling malicious actors to run arbitrary commands and download and deploy additional malicious payloads. This was evident with the well‑known ransomware group Black Basta, which reportedly used GhostSocks as a way of maintaining long‑term access to victims’ networks [1].

Darktrace’s detection of GhostSocks Malware

Darktrace observed a steady increase in GhostSocks activity across its customer base from late 2025, with its Threat Research team identifying multiple incidents involving the malware. In one notable case from December 2025, Darktrace detected GhostSocks operating alongside Lumma Stealer, reinforcing that the partnership between Lumma and GhostSocks remains active despite recent attempts to disrupt Lumma’s infrastructure.

Darktrace’s first detection of GhostSocks‑related activity came when a device on the network of a customer in the education sector began making connections to an endpoint with a suspicious self‑signed certificate that had never been seen on the network before.

The endpoint in question, 159.89.46[.]92 with the hostname retreaw[.]click, has been flagged by multiple open‑source intelligence (OSINT) sources as being associated with Lumma Stealer’s C2 infrastructure [2], indicating its likely role in the delivery of malicious payloads.

Darktrace’s detection of suspicious SSL connections to retreaw[.]click, indicating an attempted link to Lumma C2 infrastructure.
Figure 1: Darktrace’s detection of suspicious SSL connections to retreaw[.]click, indicating an attempted link to Lumma C2 infrastructure.

Less than two minutes later, Darktrace observed the same device downloading the executable (.exe) file “Renewable.exe” from the IP 86.54.24[.]29, which Darktrace recognized as 100% rare for this network.

Darktrace’s detection of a device downloading the unusual executable file “Renewable.exe”.
Figure 2: Darktrace’s detection of a device downloading the unusual executable file “Renewable.exe”.

Both the file MD5 hash and the executable itself have been identified by multiple OSINT vendors as being associated with the GhostSocks malware [3], with the executable likely the backdoor component of the GhostSocks malware, facilitating the distribution of additional malicious payloads [4].

Following this detection, Darktrace’s Autonomous Response capability recommended a blocking action for the device in an early attempt to stop the malicious file download. In this instance, Darktrace was configured in Human Confirmation Mode, meaning the customer’s security team was required to manually apply any mitigative response actions. Had Autonomous Response been fully enabled at the time of the attack, the connections to 86.54.24[.]29 would have been blocked, rendering the malware ineffective at reaching its C2 infrastructure and halting any further malicious communication.

 Darktrace’s Autonomous Response capability suggesting blocking the suspicious connections to the unusual endpoint from which the malicious executable was downloaded.
Figure 3: Darktrace’s Autonomous Response capability suggesting blocking the suspicious connections to the unusual endpoint from which the malicious executable was downloaded.

As the attack was able to progress, two days later the device was detected downloading additional payloads from the endpoint www.lbfs[.]site (23.106.58[.]48), including “Setup.exe”, “,.exe”, and “/vp6c63yoz.exe”.

Darktrace’s detection of a malicious payload being downloaded from the endpoint www.lbfs[.]site.
Figure 4: Darktrace’s detection of a malicious payload being downloaded from the endpoint www.lbfs[.]site.

Once again, Darktrace recognized the anomalous nature of these downloads and suggested that a “group pattern of life” be enforced on the offending device in an attempt to contain the activity. By enforcing a pattern of life on a device, Darktrace restricts its activity to connections and behaviors similar to those performed by peer devices within the same group, while still allowing it to carry out its expected activity, effectively preventing deviations indicative of compromise while minimizing disruption. As mentioned earlier, these mitigative actions required manual implementation, so the activity was able to continue. Darktrace proceeded to suggest further actions to contain subsequent malicious downloads, including an attempt to block all outbound traffic to stop the attack from progressing.

An overview of download activity and the Autonomous Response actions recommended by Darktrace to block the downloads.
Figure 5: An overview of download activity and the Autonomous Response actions recommended by Darktrace to block the downloads.

Around the same time, a third executable download was detected, this time from the hostname hxxp[://]d2ihv8ymzp14lr.cloudfront.net/2021-08-19/udppump[.]exe, along with the file “udppump.exe”.While GhostSocks may have been present only to facilitate the delivery of additional payloads, there is no indication that these CloudFront endpoints or files are functionally linked to GhostSocks. Rather, the evidence points to broader malicious file‑download activity.

Shortly after the multiple executable files had been downloaded, Darktrace observed the device initiating a series of repeated successful connections to several rare external endpoints, behavior consistent with early-stage C2 beaconing activity.

Cyber AI Analyst’s investigation

Darktrace’s detection of additional malicious file downloads from malicious CloudFront endpoints.
Figure 7: Darktrace’s detection of additional malicious file downloads from malicious CloudFront endpoints.

Throughout the course of this attack, Darktrace’s Cyber AI Analyst carried out its own autonomous investigation, piecing together seemingly separate events into one wider incident encompassing the first suspicious downloads beginning on December 4, the unusual connectivity to many suspicious IPs that followed, and the successful beaconing activity observed two days later. By analyzing these events in real-time and viewing them as part of the bigger picture, Cyber AI Analyst was able to construct an in‑depth breakdown of the attack to aid the customer’s investigation and remediation efforts.

Cyber AI Analyst investigation detailing the sequence of events on the compromised device, highlighting its extensive connectivity to rare endpoints, the related malicious file‑download activity, and finally the emergence of C2 beaconing behavior.
Figure 8: Cyber AI Analyst investigation detailing the sequence of events on the compromised device, highlighting its extensive connectivity to rare endpoints, the related malicious file‑download activity, and finally the emergence of C2 beaconing behavior.

Conclusion

The versatility offered by GhostSocks is far from new, but its ability to convert compromised devices into residential proxy nodes, while enabling long‑term, covert network access—illustrates how threat actors continue to maximise the value of their victims’ infrastructure. Its growing popularity, coupled with its ongoing partnership with Lumma, demonstrates that infrastructure takedowns alone are insufficient; as long as threat actors remain committed to maintaining anonymity and can rapidly rebuild their ecosystems, related malware activity is likely to persist in some form.

Credit to Isabel Evans (Cyber Analyst), Gernice Lee (Associate Principal Analyst & Regional Consultancy Lead – APJ)
Edited by Ryan Traill (Content Manager)

Appendices

References

1.    https://bloo.io/research/malware/ghostsocks

2.    https://www.virustotal.com/gui/domain/retreaw.click/community

3.    https://synthient.com/blog/ghostsocks-from-initial-access-to-residential-proxy

4.    https://www.joesandbox.com/analysis/1810568/0/html

5. https://www.virustotal.com/gui/url/fab6525bf6e77249b74736cb74501a9491109dc7950688b3ae898354eb920413

Darktrace Model Detections

Real-time Detection Models

Anomalous Connection / Suspicious Self-Signed SSL

Anomalous Connection / Rare External SSL Self-Signed

Anomalous File / EXE from Rare External Location

Anomalous File / Multiple EXE from Rare External Locations

Compromise / Possible Fast Flux C2 Activity

Compromise / Large Number of Suspicious Successful Connections

Compromise / Large Number of Suspicious Failed Connections

Compromise / Sustained SSL or HTTP Increase

Autonomous Response Models

Antigena / Network / Significant Anomaly / Antigena Significant Anomaly from Client Block

Antigena / Network / External Threat / Antigena Suspicious File Block

Antigena / Network / Significant Anomaly / Antigena Controlled and Model Alert

Antigena / Network / External Threat / Antigena File then New Outbound Block

Antigena / Network / Significant Anomaly / Antigena Alerts Over Time Block

Antigena / Network / External Threat / Antigena Suspicious Activity Block

MITRE ATT&CK Mapping

Tactic – Technique – Sub-Technique

Resource Development – T1588 - Malware

Initial Access - T1189 - Drive-by Compromise

Persistence – T1112 – Modify Registry

Command and Control – T1071 – Application Layer Protocol

Command and Control – T1095 – Non-application Layer Protocol

Command and Control – T1071 – Web Protocols

Command and Control – T1571 – Non-Standard Port

Command and Control – T1102 – One-Way Communication

List of Indicators of Compromise (IoCs)

86.54.24[.]29 - IP - Likely GhostSocks C2

http[://]86.54.24[.]29/Renewable[.]exe - Hostname - GhostSocks Distribution Endpoint

http[://]d2ihv8ymzp14lr.cloudfront[.]net/2021-08-19/udppump[.]exe - CDN - Payload Distribution Endpoint

www.lbfs[.]site - Hostname - Likely C2 Endpoint

retreaw[.]click - Hostname - Lumma C2 Endpoint

alltipi[.]com - Hostname - Possible C2 Endpoint

w2.bruggebogeyed[.]site - Hostname - Possible C2 Endpoint

9b90c62299d4bed2e0752e2e1fc777ac50308534 - SHA1 file hash – Likely GhostSocks payload

3d9d7a7905e46a3e39a45405cb010c1baa735f9e - SHA1 file hash - Likely follow-up payload

10f928e00a1ed0181992a1e4771673566a02f4e3 - SHA1 file hash - Likely follow-up payload

Continue reading
About the author
Gernice Lee
Associate Principal Analyst & Regional Consultancy Lead

Blog

/

AI

/

March 27, 2026

State of AI Cybersecurity 2026: 92% of security professionals concerned about the impact of AI agents

Default blog imageDefault blog image

The findings in this blog are taken from Darktrace's annual State of AI Cybersecurity Report 2026.

AI is already embedded in day-to-day enterprise activity, with 78% of participants in one recent survey reporting that their organizations are using generative AI in at least one business function. Generative AI now acts as an always-on assistant, researcher, creator, and coach across an expanding array of departments and functions. Autonomous agents are performing multi-step operational workflows from end to end. AI features have been layered on top of every SaaS application. And vibe coding is making it possible for employees without deep technical expertise to build their own AI-powered automations.

According to Gartner, more than 80% of enterprises will have deployed GenAI models, applications, or APIs in production environments by the end of this year, up from less than 5% in 2023. Companies report a 130% increase in spending on AI over the same period, with 72% of business leaders using AI tools at least weekly. The outsized efficiency and productivity gains that were once a future vision are quickly becoming everyday reality.

AI is currently driving business growth and innovation, and organizations risk falling behind peers if they don’t keep up with the pace of adoption, but it is also quietly expanding the enterprise attack surface. The modern CISO is challenged to both enable innovation and protect the business from these emerging threats.

AI agents introduce new risks and vulnerabilities

AI agents are playing growing roles in enterprise production environments. In many cases, these agents act with broad permissions across multiple software systems and platforms. This means they’re granted far-reaching access – to sensitive data, business-critical applications, tokens and APIs, and IT and security tools. With this access comes risk for security leaders – 92% are concerned about the use of AI agents across the workforce and their impact on security.

These agents must be governed as identities, with least-privilege access and ongoing monitoring. They can’t be thought of as invisible aspects of the application estate. Understanding how AI agents behave, and how to manage their permissions, control their behavior, and limit their data access will be a top security priority throughout 2026.

Generative AI prompts: The next frontier

Prompts are how users – both human and agentic – interact with AI systems, and they’re where natural language gets translated into model behavior. Natural language is infinite in its potential combinations and permutations, making this aspect of the attack surface open-ended and far more complex than traditional CVEs. With carefully crafted prompts, bad actors may be able to coax models into disclosing sensitive data, bypassing guardrails, or initiating undesirable actions.

Among security leaders, the biggest worries about AI usage in their environments all involve ways that systems might be manipulated to bypass traditional controls.

  • 61% are most concerned about the exposure of sensitive data
  • 56% are most concerned about potential data security and policy violations
  • 51% are most concerned about the misuse or abuse of AI tools

The more employees rely on AI in their day-to-day workflows, the more critical it becomes for security teams to understand how prompt behavior determines model behavior – and where that behavior could go wrong.

What does “securing AI” mean in practice?

AI adoption opens new security risks that blur the boundaries between traditional security disciplines. A single malicious interaction with an AI model could involve identity misuse, sensitive data exposure, application logic abuse, and supply chain risk – all within a single workflow. Protecting this dynamic and rapidly evolving attack surface requires an approach that spans identity security, cloud security, application security, data security, software development security, and more.

The task for security leaders is to implement the tools, policies, and frameworks to mitigate these novel, expansive, and cross-disciplinary risks.

However, within most enterprises, AI policy creation remains in its infancy. Just 37% of security leaders report that their organization has a formal AI policy, representing a small but worrisome decrease from last year. Conversations about AI abound: in 52% of organizations, there’s discussion about an AI policy. Still, talk is cheap, and leaders will need to take action if they’re to successfully enable secure AI innovation.

To govern and protect their AI systems, organizations must take a multi-pronged approach. This requires building out policies, but it also demands that they are able to:

  • Monitor the prompts driving GenAI assistants and agents in real time. Organizations must be able to inspect prompts, sessions, and responses across enterprise GenAI tools, low- and high-code environments, and SaaS and SASE so that they can detect clever conversational prompt attacks and malicious chaining.
  • Secure all business AI agent identities. Security teams need to identify all the agents acting within their environment and supply chain, map their connections and interactions via MCP and services like Amazon S3, and audit their behavior across the cloud, SaaS environments, and on the network and endpoint devices.
  • Maintain centralized, comprehensive visibility. Understanding intent, assessing risks, and enforcing policies all require that security teams have a single view that spans AI interactions across the entire business.
  • Discover and control shadow AI. Teams need to be able to identify unsanctioned AI activities, distinguish the misuse of legitimate tools from their appropriate use, and apply policies to protect data, while guiding users towards approved solutions.

Scaling AI safely and responsibly

The approach that most cybersecurity vendors have taken – using historical patterns to predict future threats – doesn’t work well for AI systems. Because AI changes its behavior in response to the information it encounters while taking action, previous patterns don’t indicate what it will do next. Looking at past attacks can’t tell you how complex models will behave in your individual business.

Securing AI requires interpreting ambiguous interactions, uncovering subtleties that reveal intent within extended conversations, understanding how access accumulates over time, and recognizing when behavior – both human and machine – begins to drift towards areas of risk. To do this, you need to understand what “normal” looks like in each unique organization: how users, systems, applications, and AI agents behave, how they communicate, and how data flows between them.

Darktrace has spent more than a decade designing AI-powered solutions that can understand and adapt to evolving behavior in complex environments. This technology learns directly from the environment it protects, identifying malicious actions that deviate from normal operations, so that it can stop AI-related threats on the very first encounter.

As AI adoption reshapes enterprise operations, humans and machines will collaborate more and more often. This collaboration might dramatically expand the attack surface, but it also has the potential to be a force multiplier for defenders.

Explore the full State of AI Cybersecurity 2026 report for deeper insights into how security leaders are responding to AI-driven risks.

Learn more about securing AI in your enterprise.

[related-resource]

Continue reading
About the author
The Darktrace Community
Your data. Our AI.
Elevate your network security with Darktrace AI