October 30, 2023

Exploring AI Threats: Package Hallucination Attacks

Learn how malicious actors exploit errors in generative AI tools to launch package hallucination attacks. Read how Darktrace products detect and prevent these threats!
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Charlotte Thompson
Cyber Analyst
Written by
Tiana Kelly
Deputy Team Lead, London & Cyber Analyst

AI tools open doors for threat actors

On November 30, 2022, OpenAI, an artificial intelligence (AI) research and development company, launched ChatGPT, a free conversational language generation model. The launch was the culmination of development ongoing since 2018; it represented the latest innovation in the generative AI boom and made generative AI tools accessible to the general population for the first time.

ChatGPT is estimated to currently have at least 100 million users, and in August 2023 the site reached 1.43 billion visits [1]. Darktrace data indicated that, as of March 2023, 74% of active customer environments have employees using generative AI tools in the workplace [2].

However, with new tools come new opportunities for threat actors to exploit and use them maliciously, expanding their arsenal.

Much consideration has been given to mitigating the impacts of the increased linguistic complexity in social engineering and phishing attacks resulting from generative AI tool use, with Darktrace observing a 135% increase in ‘novel social engineering attacks’ across thousands of active Darktrace/Email™ customers from January to February 2023, corresponding with the widespread adoption of ChatGPT and its peers [3].

Less overall consideration, however, has been given to impacts stemming from errors intrinsic to generative AI tools. One of these errors is AI hallucinations.

What is an AI hallucination?

AI “hallucination” is a term describing instances in which the predictive elements of generative AI and large language models (LLMs) cause a model to give an unexpected or factually incorrect response that does not align with its machine learning training data [4]. This differs from the regular and intended behavior of an AI model, which should provide a response grounded in the data it was trained upon.

Why are AI hallucinations a problem?

Despite the term suggesting a rare phenomenon, hallucinations occur far more often than one might expect: the AI models used in LLMs are merely predictive, favoring the most probable text or outcome rather than factual accuracy.
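This predictive mechanism can be illustrated with a toy model. The bigram counter below is a deliberately simplistic stand-in for an LLM: it always emits the statistically most frequent continuation of a word, and nothing in the mechanism checks whether that continuation is true.

```python
from collections import Counter, defaultdict

# Deliberately tiny "language model": it can only rank likely
# continuations; nothing here verifies truthfulness.
corpus = "install the requests package pip install the flask package".split()

bigrams = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    bigrams[current][following] += 1

def most_probable_next(word: str) -> str:
    """Return the statistically most frequent continuation of `word`."""
    return bigrams[word].most_common(1)[0][0]
```

Real LLMs are vastly more sophisticated, but the underlying objective is the same: produce a plausible continuation, not a verified fact.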

Given the widespread use of generative AI tools in the workplace, employees are becoming significantly more likely to encounter an AI hallucination. Furthermore, if these fabricated responses are taken at face value, they could cause significant issues for an organization.

Use of generative AI in software development

Software developers may use generative AI for recommendations on how to optimize their scripts or code, or to find packages to import into their code for various uses. When developers ask LLMs how to solve a specific problem, the answer will often point to a third-party package. Packages recommended by generative AI tools could, however, represent AI hallucinations: packages that have never been published, or, more precisely, that had not been published before the cut-off date of the model’s training data. If a non-existent package is repeatedly suggested and a developer copies the code snippet wholesale, this leaves their codebase vulnerable to attack.
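One practical defense follows directly from this: before installing a package an AI tool suggested, check whether it has ever actually been published. The sketch below queries PyPI’s public JSON API (for Python packages; npm and other registries have equivalent endpoints); it is an illustration of the check, not a complete supply-chain control.

```python
import urllib.error
import urllib.request

PYPI_JSON = "https://pypi.org/pypi/{name}/json"

def package_url(name: str) -> str:
    """Build the PyPI JSON API URL for a package name."""
    return PYPI_JSON.format(name=name)

def pypi_package_exists(name: str, timeout: float = 10.0) -> bool:
    """Return True if `name` has ever been published on PyPI.

    A 404 from the JSON API means the name is unpublished -- the
    hallmark of a hallucinated package suggestion.
    """
    try:
        with urllib.request.urlopen(package_url(name), timeout=timeout) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # surface rate limiting and other unexpected errors
```

Note that existence alone is not sufficient: once attackers publish a package under a hallucinated name, this check will pass, so it should be combined with scrutiny of the package itself.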

Research conducted by Vulcan revealed the prevalence of AI hallucinations when ChatGPT is asked coding-related questions. After sourcing a sample of commonly asked coding questions from Stack Overflow, a question-and-answer website for programmers, researchers queried ChatGPT (in the context of Node.js and Python) and reviewed its responses. In 20% of the responses pertaining to Node.js, at least one unpublished package was included, while the figure was around 35% for Python [4].

Hallucinations can be unpredictable, but would-be attackers are able to find packages to create by asking generative AI tools generic questions and checking whether the suggested packages exist already. As such, attacks using this vector are unlikely to target specific organizations, instead posing more of a widespread threat to users of generative AI tools.

Malicious packages as attack vectors

Although AI hallucinations can be unpredictable, and responses given by generative AI tools may not always be consistent, malicious actors can discover them by adopting the approach used by Vulcan. This allows hallucinated packages to be used as attack vectors. Once a malicious actor has discovered a hallucination of an unpublished package, they can create a package with the same name, include a malicious payload, and publish it. This is known as a malicious package.

Malicious packages could also be recommended by generative AI tools in the form of pre-existing packages. A user may be recommended a package that had previously been confirmed to contain malicious content, or a package that is no longer maintained and, therefore, is more vulnerable to hijack by malicious actors.

In such scenarios it is not necessary to manipulate the training data (data poisoning) to achieve the desired outcome for the malicious actor, thus a complex and time-consuming attack phase can easily be bypassed.

An unsuspecting software developer may incorporate a malicious package into their code, rendering it harmful. Deployment of this code could then result in compromise and escalation into a full-blown cyber-attack.

Figure 1: Flow diagram depicting the initial stages of an AI Package Hallucination Attack.

For providers of Software-as-a-Service (SaaS) products, this attack vector may represent an even greater risk. Such organizations may have a higher proportion of employed software developers than other organizations of comparable size. A threat actor, therefore, could utilize this attack vector as part of a supply chain attack, whereby a malicious payload becomes incorporated into trusted software and is then distributed to multiple customers. This type of attack could have severe consequences including data loss, the downtime of critical systems, and reputational damage.

How could Darktrace detect an AI Package Hallucination Attack?

In June 2023, Darktrace introduced a range of DETECT™ and RESPOND™ models designed to identify the use of generative AI tools within customer environments and to autonomously perform inhibitive actions in response to such detections. These models trigger on connections to endpoints associated with generative AI tools; as such, Darktrace’s detection of an AI Package Hallucination Attack would likely begin with the breach of one of the following DETECT models:

  • Compliance / Anomalous Upload to Generative AI
  • Compliance / Beaconing to Rare Generative AI and Generative AI
  • Compliance / Generative AI

Should generative AI tool use not be permitted by an organization, the Darktrace RESPOND model ‘Antigena / Network / Compliance / Antigena Generative AI Block’ can be activated to autonomously block connections to endpoints associated with generative AI, thus preventing an AI Package Hallucination attack before it can take hold.

Once a malicious package has been recommended, it may be downloaded from GitHub, a platform and cloud-based service used to store and manage code. Darktrace DETECT is able to identify when a device has performed a download from an open-source repository such as GitHub using the following models:

  • Device / Anomalous GitHub Download
  • Device / Anomalous Script Download Followed By Additional Packages

The goal the malicious package was designed to fulfil will determine the next stages of the attack. Due to their highly flexible nature, AI package hallucinations could be used as an attack vector to deliver a wide variety of malware types.

As GitHub is a commonly used service by software developers and IT professionals alike, traditional security tools may not alert customer security teams to such GitHub downloads, meaning malicious downloads may go undetected. Darktrace’s anomaly-based approach to threat detection, however, enables it to recognize subtle deviations in a device’s pre-established pattern of life which may be indicative of an emerging attack.

Subsequent anomalous activity representing the possible progression of the kill chain as part of an AI Package Hallucination Attack could then trigger an Enhanced Monitoring model. Enhanced Monitoring models are high-fidelity indicators of potential malicious activity that are investigated by the Darktrace analyst team as part of the Proactive Threat Notification (PTN) service offered by the Darktrace Security Operation Center (SOC).

Conclusion

Employees are often considered the first line of defense in cyber security; this is particularly true in the face of an AI Package Hallucination Attack.

As generative AI becomes more accessible and an increasingly prevalent tool in an attacker’s toolbox, organizations will benefit from implementing company-wide policies that define expectations around the use of such tools. It is simple, yet critical, for example, for employees to fact-check responses provided by generative AI tools. All packages recommended by generative AI should also be verified against non-generated data from external third-party or internal sources. It is also good practice to treat packages with very few downloads with caution, as this can indicate that a package is untrustworthy or malicious.
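Parts of this verification can be automated. The sketch below encodes two simple trust heuristics; the metadata field names (`releases`, `first_upload`) are illustrative placeholders rather than a real registry schema, and the thresholds are arbitrary starting points that would need tuning. Download counts, mentioned above, could be added as a third signal from whatever statistics service a registry offers.

```python
from datetime import datetime, timezone

def trust_warnings(meta: dict, min_releases: int = 3, min_age_days: int = 90) -> list:
    """Return human-readable warnings for a package's registry metadata.

    NOTE: 'releases' (release count) and 'first_upload' (ISO 8601
    timestamp of first upload) are assumed field names for illustration;
    map them from whichever registry API you actually use.
    """
    warnings = []
    if meta.get("releases", 0) < min_releases:
        warnings.append("very few releases")
    first = meta.get("first_upload")
    if first:
        age = datetime.now(timezone.utc) - datetime.fromisoformat(first)
        if age.days < min_age_days:
            warnings.append("published very recently")
    return warnings
```

A brand-new package with a single release is exactly the profile of a package published to squat a hallucinated name, so both warnings firing together is a strong signal to investigate before installing.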

As of September 2023, ChatGPT Plus and Enterprise users were able to use the tool to browse the internet, expanding the data ChatGPT can access beyond the previous training data cut-off of September 2021 [5]. This feature will be expanded to all users soon [6]. ChatGPT providing up-to-date responses could prompt the evolution of this attack vector, allowing attackers to publish malicious packages which could subsequently be recommended by ChatGPT.

It is inevitable that a greater embrace of AI tools in the workplace will be seen in the coming years as the AI technology advances and existing tools become less novel and more familiar. By fighting fire with fire, using AI technology to identify AI usage, Darktrace is uniquely placed to detect and take preventative action against malicious actors capitalizing on the AI boom.

Credit to Charlotte Thompson (Cyber Analyst) and Tiana Kelly (Analyst Team Lead, London & Cyber Analyst)

References

[1] https://seo.ai/blog/chatgpt-user-statistics-facts

[2] https://darktrace.com/news/darktrace-addresses-generative-ai-concerns

[3] https://darktrace.com/news/darktrace-email-defends-organizations-against-evolving-cyber-threat-landscape

[4] https://vulcan.io/blog/ai-hallucinations-package-risk?nab=1&utm_referrer=https%3A%2F%2Fwww.google.com%2F

[5] https://twitter.com/OpenAI/status/1707077710047216095

[6] https://www.reuters.com/technology/openai-says-chatgpt-can-now-browse-internet-2023-09-27/



March 10, 2026

NetSupport RAT: How Legitimate Tools Can Be as Damaging as Malware


What is NetSupport Manager?

NetSupport Manager is a legitimate IT tool used by system administrators for remote support, monitoring, and management. In use since 1989, NetSupport Manager enables users to remotely access and navigate systems across different platforms and operating systems [1].

What is NetSupport RAT?

Although NetSupport Manager is a legitimate tool that can be used by IT and security professionals, there has been a rising number of cases in which it is abused to gain unauthorized access to victim systems. This misuse has become so prevalent that, in recent years, security researchers have begun referring to NetSupport as a Remote Access Trojan (RAT), a term typically used for malware that enables a threat actor to remotely access or control an infected device [2][3][4].

NetSupport RAT activity summary

The initial stages of a NetSupport RAT infection may vary depending on the source of the initial compromise. Using social engineering tactics such as ClickFix, threat actors attempt to trick users into inadvertently executing malicious PowerShell commands under the guise of resolving a non-existent issue or completing a fake CAPTCHA verification [5]. Other attack vectors such as phishing emails, fake browser updates, malicious websites, search engine optimization (SEO) poisoning, malvertising and drive-by downloads are also employed to direct users to fraudulent pages and fake reCAPTCHA verification checks, ultimately inducing them to execute malicious PowerShell commands [5][6][7]. This leads to the successful installation of NetSupport Manager on the compromised device, often placed in non-standard directories such as AppData, ProgramData, or Downloads [3][8].
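When triaging a suspected ClickFix incident, one common pattern worth checking for is a PowerShell command line carrying its payload via `-EncodedCommand` (PowerShell Base64-encodes the script as UTF-16LE). The helper below is a sketch for recovering such payloads for inspection; real ClickFix command lines vary widely and may use other obfuscation entirely.

```python
import base64
import re

# Matches -EncodedCommand and its common abbreviations (-enc, -e)
ENC_FLAG = re.compile(r"-e(?:nc(?:odedcommand)?)?\s+([A-Za-z0-9+/=]+)", re.I)

def decode_encoded_command(cmdline: str):
    """If a PowerShell command line uses -EncodedCommand, decode its
    Base64 payload (PowerShell encodes the script as UTF-16LE)."""
    match = ENC_FLAG.search(cmdline)
    if not match:
        return None
    return base64.b64decode(match.group(1)).decode("utf-16-le")

# Round-trip demonstration with a harmless script
sample = "powershell.exe -nop -w hidden -enc " + base64.b64encode(
    "Write-Host hello".encode("utf-16-le")).decode()
print(decode_encoded_command(sample))  # → Write-Host hello
```

Decoding the payload offline lets an analyst see what the command would have fetched or executed without ever running it.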

Once installed, the adversary is able to gain remote access to the affected machine, monitor user activity, exfiltrate data, communicate with the command-and-control (C2) server, and maintain persistence [5]. External research has also highlighted that post-exploitation of NetSupport RAT has involved the additional download of malicious payloads [2][5].

Figure 1: Attack flow diagram highlighting key events across each phase of the attack [2][5].

Darktrace coverage

In November 2025, suspicious behavior indicative of malicious abuse of NetSupport Manager was observed across multiple customer environments in Europe, the Middle East and Africa (EMEA) and the Americas (AMS).

While open-source intelligence (OSINT) has reported that, in a recent campaign, a threat actor impersonated government entities to trick users in organizations in the Information Technology, Government and Financial Services sectors in Central Asia into downloading NetSupport Manager [8], approximately a third of Darktrace’s affected customers in November were based in the US while the rest were based in EMEA. This contrast underscores how widely NetSupport Manager is leveraged by threat actors and highlights its accessibility as an initial access tool.  

The Darktrace customers affected were in sectors including Information and Communication; Manufacturing; and Arts, Entertainment and Recreation.

The ClickFix social engineering tactic typically used to distribute the NetSupport RAT is known to target multiple industries, including Technology, Manufacturing and Energy sectors [9]. It also reflects activity observed in the campaign targeting Central Asia, where the Information Technology sector was among those affected [8].

The prevalence of affected Education sector customers highlights NetSupport’s marketing focus on the Education sector [10], suggesting that threat actors are aware of this marketing strategy and have exploited the trust it creates to deploy NetSupport Manager and gain access to their targets’ systems.

While the execution of the PowerShell commands that led to the installation of NetSupport Manager fell outside of Darktrace’s purview in the cases identified, Darktrace was still able to identify a pattern of devices making connections to multiple rare external domains and IP addresses associated with the NetSupport RAT, using a wide range of ports over the HTTP protocol. A full list of associated domains and IP addresses is provided in the Appendices of this blog.

Although OSINT identifies multiple malicious domains and IP addresses being used as C2 servers, signature-based detection of NetSupport RAT indicators of compromise (IoCs) may miss broader activity, as new malicious websites linked to the RAT continue to appear.

Darktrace’s anomaly‑based approach allows it to establish a normal ‘pattern of life’ for each device on a network and identify when behavior deviates from this baseline, enabling the detection of unusual activity even when it does not match known IoCs or tactics, techniques and procedures (TTPs).
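The principle can be sketched in a few lines. The toy class below tracks only endpoint novelty per device; a production system weighs many more signals (network-wide rarity of the endpoint, connection timing, user-agent novelty, port usage), so this is purely illustrative of the "pattern of life" idea, not Darktrace's implementation.

```python
from collections import defaultdict

class EndpointBaseline:
    """Track which external endpoints each device has contacted and
    flag first-time connections. Illustrative only: a real baseline
    also models timing, rarity across the fleet, and protocols."""

    def __init__(self):
        self._seen = defaultdict(set)

    def observe(self, device: str, endpoint: str) -> bool:
        """Record a connection; True means it is new for this device."""
        is_new = endpoint not in self._seen[device]
        self._seen[device].add(endpoint)
        return is_new
```

The value of a baseline like this is that it fires on the *first* connection to previously unseen infrastructure, with no need for the endpoint to appear on any threat feed.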

In one customer environment in late 2025, Darktrace / NETWORK detected a device initiating new connections to the rare external endpoint, thetavaluemetrics[.]com (74.91.125[.]57), along with the use of a previously unseen user agent, which it recognized as highly unusual for the network.

Figure 2: Darktrace’s detection of HTTP POST requests to a suspicious URI and new user agent usage.

Darktrace identified that the user agent present in connections to this endpoint was ‘NetSupport Manager/1.3’, initially suggesting legitimate NetSupport Manager activity. Subsequent investigation, however, revealed that the endpoint was in fact a malicious NetSupport RAT C2 endpoint [12]. Shortly after, Darktrace detected the same device performing HTTP POST requests to the URI fakeurl[.]htm. This pattern of activity is consistent with OSINT reporting that details communication between compromised devices and NetSupport Connectivity Gateways functioning as C2 servers [11].
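A simple signature for this specific pattern could be expressed as follows. The log-record field names (`user_agent`, `method`, `uri`) are assumptions for illustration and will differ between log formats. As discussed above, signature checks like this complement rather than replace anomaly detection, since C2 infrastructure changes frequently.

```python
SUSPECT_UA = "NetSupport Manager/1.3"
SUSPECT_URI = "/fakeurl.htm"

def netsupport_indicators(record: dict) -> list:
    """Return the NetSupport RAT C2 indicators present in one HTTP
    log record. Field names are assumed for illustration."""
    hits = []
    if record.get("user_agent") == SUSPECT_UA:
        hits.append("NetSupport user agent")
    if record.get("method") == "POST" and record.get("uri") == SUSPECT_URI:
        hits.append("POST to known C2 URI")
    return hits
```

Either indicator alone warrants triage: the user agent may reflect legitimate administration, but combined with a POST to the known C2 URI it matches the compromise pattern described above.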

Conclusion

As seen not only with NetSupport Manager but with any legitimate or open‑source software used by IT and security professionals, the legitimacy of a tool does not prevent it from being abused by threat actors. Open‑source software, especially tools with free or trial versions such as NetSupport Manager, remains readily accessible for malicious use, including network compromise. In an age where remote work is still prevalent, validating any anomalous use of software and remote management tools is essential to reducing opportunities for unauthorized access.

Darktrace’s anomaly‑based detection enables security teams to identify malicious use of legitimate tools, even when clear signatures or indicators of compromise are absent, helping to prevent further impact on a network.


Credit to George Kim (Analyst Consulting Lead – AMS), Anna Gilbertson (Senior Cyber Analyst)

Edited by Ryan Traill (Analyst Content Lead)

Appendices

Darktrace Model Alerts

  • Compromise / Suspicious HTTP and Anomalous Activity
  • Compromise / New User Agent and POST
  • Device / New User Agent
  • Anomalous Connection / New User Agent to IP Without Hostname
  • Anomalous Connection / Posting HTTP to IP Without Hostname
  • Anomalous Connection / Multiple Failed Connections to Rare Endpoint
  • Anomalous Connection / Application Protocol on Uncommon Port
  • Anomalous Connection / Multiple HTTP POSTs to Rare Hostname
  • Compromise / Beaconing Activity To External Rare
  • Compromise / HTTP Beaconing to Rare Destination
  • Compromise / Agent Beacon (Medium Period)
  • Compromise / Agent Beacon (Long Period)
  • Compromise / Quick and Regular Windows HTTP Beaconing
  • Compromise / Sustained TCP Beaconing Activity To Rare Endpoint
  • Compromise / POST and Beacon to Rare External

Indicators of Compromise (IoCs)

Indicator                 | Type                | Description
/fakeurl.htm              | URI                 | NetSupport RAT C2 URI
thetavaluemetrics[.]com   | Connection hostname | NetSupport RAT C2 endpoint
westford-systems[.]icu    | Connection hostname | NetSupport RAT C2 endpoint
holonisz[.]com            | Connection hostname | NetSupport RAT C2 endpoint
heaveydutyl[.]com         | Connection hostname | NetSupport RAT C2 endpoint
nsgatetest1[.]digital     | Connection hostname | NetSupport RAT C2 endpoint
finalnovel[.]com          | Connection hostname | NetSupport RAT C2 endpoint
217.91.235[.]17           | IP address          | NetSupport RAT C2 endpoint
45.94.47[.]224            | IP address          | NetSupport RAT C2 endpoint
74.91.125[.]57            | IP address          | NetSupport RAT C2 endpoint
88.214.27[.]48            | IP address          | NetSupport RAT C2 endpoint
104.21.40[.]75            | IP address          | NetSupport RAT C2 endpoint
38.146.28[.]242           | IP address          | NetSupport RAT C2 endpoint
185.39.19[.]233           | IP address          | NetSupport RAT C2 endpoint
45.88.79[.]237            | IP address          | NetSupport RAT C2 endpoint
141.98.11[.]224           | IP address          | NetSupport RAT C2 endpoint
88.214.27[.]166           | IP address          | NetSupport RAT C2 endpoint
107.158.128[.]84          | IP address          | NetSupport RAT C2 endpoint
87.120.93[.]98            | IP address          | Rhadamanthys C2 endpoint

References

1.         https://mspalliance.com/netsupport-debuts-netsupport-24-7/

2.         https://blogs.vmware.com/security/2023/11/netsupport-rat-the-rat-king-returns.html

3.          https://redcanary.com/threat-detection-report/threats/netsupport-manager/

4.         https://www.elastic.co/guide/en/security/8.19/netsupport-manager-execution-from-an-unusual-path.html

5.          https://rewterz.com/threat-advisory/netsupport-rat-delivered-through-spoofed-verification-pages-active-iocs

6.           https://thehackernews.com/2025/11/new-evalusion-clickfix-campaign.html

7.         https://corelight.com/blog/detecting-netsupport-manager-abuse

8.         https://thehackernews.com/2025/11/bloody-wolf-expands-java-based.html

9.         https://unit42.paloaltonetworks.com/preventing-clickfix-attack-vector/

10.  https://www.netsupportsoftware.com/education-solutions/

11.  https://www.esentire.com/blog/unpacking-netsupport-rat-loaders-delivered-via-clickfix

12.       https://threatfox.abuse.ch/browse/malware/win.netsupportmanager_rat/

13.       https://www.virustotal.com/gui/url/5fe6936a69c786c9ded9f31ed1242c601cd64e1d90cecd8a7bb03182c47906c2


March 5, 2026

Inside Cloud Compromise: Investigating Attacker Activity with Darktrace / Forensic Acquisition & Investigation


Investigating Cloud Attacks with Forensic Acquisition & Investigation

Darktrace / Forensic Acquisition & Investigation™ is the industry’s first truly automated forensic solution purpose-built for the cloud. This blog will demonstrate how an investigation can be carried out against a compromised cloud server in minutes, rather than hours or days.

The compromised server investigated in this case originates from Darktrace’s Cloudypots system, a global honeypot network designed to observe adversary activity in real time across a wide range of cloud services. Whenever an attacker successfully compromises one of these honeypots, a forensic copy of the virtual server's disk is preserved for later analysis. Using Forensic Acquisition & Investigation, analysts can then investigate further and obtain detailed insights into the compromise including complete attacker timelines and root cause analysis.

Forensic Acquisition & Investigation supports importing artifacts from a variety of sources, including EC2 instances, ECS, S3 buckets, and more. The Cloudypots system produces a raw disk image whenever an attack is detected and stores it in an S3 bucket. This allows the image to be directly imported into Forensic Acquisition & Investigation using the S3 bucket import option.

As Forensic Acquisition & Investigation runs cloud-natively, no additional configuration is required to add a specific S3 bucket. Analysts can browse and acquire forensic assets from any bucket that the configured IAM role is permitted to access. Operators can also add additional IAM credentials, including those from other cloud providers, to extend access across multiple cloud accounts and environments.

Figure 1: Forensic Acquisition & Investigation import screen.

Forensic Acquisition & Investigation then retrieves a copy of the file and automatically begins running the analysis pipeline on the artifact. This pipeline performs a full forensic analysis of the disk and builds a timeline of the activity that took place on the compromised asset. By leveraging Forensic Acquisition & Investigation’s cloud-native analysis system, this process condenses hours of manual work into just minutes.

Figure 2: Successful import of a forensic artifact and initiation of the analysis pipeline.

Once processing is complete, the preserved artifact is visible in the Evidence tab, along with a summary of key information obtained during analysis, such as the compromised asset’s hostname, operating system, cloud provider, and key event count.

Figure 3: The Evidence overview showing the acquired disk image.

Clicking on the “Key events” field in the listing opens the timeline view, automatically filtered to show system-generated alarms.

The timeline provides a chronological record of every event that occurred on the system, derived from multiple sources, including:

  • Parsed log files such as the systemd journal, audit logs, application specific logs, and others.
  • Parsed history files such as .bash_history, allowing executed commands to be shown on the timeline.
  • File-specific events, such as files being created, accessed, or modified, or executables being run.

This approach allows timestamped information and events from multiple sources to be aggregated and parsed into a single, concise view, greatly simplifying the data review process.
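Conceptually, this aggregation step resembles a k-way merge of per-source event streams. The sketch below uses illustrative (timestamp, source, message) tuples with invented example entries; it shows the merging idea, not Darktrace's pipeline.

```python
import heapq

def merge_timeline(*sources):
    """Merge per-source event lists (each already sorted by timestamp)
    into one chronological timeline. Events are (timestamp, source,
    message) tuples; ISO 8601 timestamps compare correctly as strings."""
    return list(heapq.merge(*sources, key=lambda event: event[0]))

# Invented example entries for illustration
journal = [("2026-02-18T09:09:12Z", "systemd-journal", "java: new session created")]
history = [("2026-02-18T09:08:55Z", ".bash_history", "ls -la /tmp")]
timeline = merge_timeline(journal, history)
```

Because each parsed source is already time-ordered, a heap-based merge keeps the combined view chronological without re-sorting the full event set.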

Alarms are created for specific timeline events that match either a built-in system rule, curated by Darktrace’s Threat Research team, or an operator-defined rule created at the project level. These alarms help quickly filter out noise and highlight events of interest, such as the creation of a file containing known malware, access to sensitive files like Amazon Web Services (AWS) credentials, suspicious arguments or commands, and more.

Figure 4: The timeline view filtered to alarm_severity: “1” OR alarm_severity: “3”, showing only events that matched an alarm rule.

In this case, several alarms were generated for suspicious Base64 arguments being passed to Selenium. Examining the event data, it appears the attacker spawned a Selenium Grid session with the following payload:

"request.payload": "[Capabilities {browserName: chrome, goog:chromeOptions: {args: [-cimport base64;exec(base64...], binary: /usr/bin/python3, extensions: []}, pageLoadStrategy: normal}]"

This is a common attack vector for Selenium Grid. The chromeOptions object is intended to specify arguments for how Google Chrome should be launched; however, in this case the attacker has abused the binary field to execute the Python3 binary instead of Chrome. Combined with the option to specify command-line arguments, the attacker can use Python3’s -c option to execute arbitrary Python code, in this instance, decoding and executing a Base64 payload.
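When a full inline script is available, the embedded payload can be recovered by extracting and decoding the Base64 blob. The helper below is a generic sketch (the regex simply grabs the first long Base64 run in the argument); it does not reconstruct the specific truncated payload shown above.

```python
import base64
import re

# A "long enough" run of Base64 characters, optionally padded
B64_RUN = re.compile(r"[A-Za-z0-9+/]{16,}={0,2}")

def recover_inline_payload(arg: str):
    """Extract the first long Base64 run from an inline `python3 -c`
    argument and decode it. Generic recovery helper for triage."""
    match = B64_RUN.search(arg)
    if not match:
        return None
    try:
        return base64.b64decode(match.group(0)).decode("utf-8", errors="replace")
    except ValueError:
        return None
```

As with the PowerShell case, decoding offline reveals what the attacker intended to execute without running any of it.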

Selenium’s logs truncate the Arguments field automatically, so an alternate method is required to retrieve the full payload. To do this, the search bar can be used to find all events that occurred around the same time as this flagged event.

Figure 5: Pivoting off the previous event by filtering the timeline to events within the same window using timestamp: [“2026-02-18T09:09:00Z” TO “2026-02-18T09:12:00Z”].

Within the search results, an entry from Java’s systemd journal can be identified; this log contains the full, unaltered payload. GCHQ’s CyberChef can then be used to decode the Base64 data into the attacker’s script, which will ultimately be executed.

Written by
Nathaniel Bill
Malware Research Engineer