October 30, 2023

Exploring AI Threats: Package Hallucination Attacks

Learn how malicious actors exploit errors in generative AI tools to launch package hallucination attacks. Read how Darktrace products detect and prevent these threats!
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Charlotte Thompson
Cyber Analyst
Written by
Tiana Kelly
Deputy Team Lead, London & Cyber Analyst

AI tools open doors for threat actors

On November 30, 2022, OpenAI, an artificial intelligence (AI) research and development company, launched ChatGPT, a free conversational language generation model. The launch was the culmination of development ongoing since 2018 and the latest milestone in the generative AI boom, making generative AI tools accessible to the general population for the first time.

ChatGPT is estimated to have at least 100 million users, and in August 2023 the site reached 1.43 billion visits [1]. Darktrace data indicated that, as of March 2023, 74% of active customer environments had employees using generative AI tools in the workplace [2].

However, with new tools come new opportunities for threat actors, who can exploit and use them maliciously to expand their arsenal.

Much consideration has been given to mitigating the impacts of the increased linguistic complexity in social engineering and phishing attacks resulting from generative AI tool use, with Darktrace observing a 135% increase in ‘novel social engineering attacks’ across thousands of active Darktrace/Email™ customers from January to February 2023, corresponding with the widespread adoption of ChatGPT and its peers [3].

Less consideration, however, has been given to the impacts stemming from errors intrinsic to generative AI tools. One such error is the AI hallucination.

What is an AI hallucination?

AI “hallucination” is a term that refers to the predictive nature of generative AI and large language models (LLMs): a hallucination occurs when the model gives an unexpected or factually incorrect response that does not align with its machine learning training data [4]. This differs from the regular and intended behavior of an AI model, which should provide a response based on the data it was trained on.

Why are AI hallucinations a problem?

Although the term suggests a rare phenomenon, hallucinations are far more common than one might expect, as the models underpinning LLMs are predictive and optimize for the most probable text or outcome rather than for factual accuracy.

Given the widespread use of generative AI tools in the workplace, employees are significantly more likely to encounter an AI hallucination. Furthermore, if these fabricated responses are taken at face value, they could cause significant issues for an organization.

Use of generative AI in software development

Software developers may use generative AI for recommendations on how to optimize their scripts or code, or to find packages to import into their code for various uses. When developers ask an LLM how to solve a specific problem, the answer will often point to a third-party package. It is possible, however, that a recommended package is itself an AI hallucination: the package may never have been published, or, more accurately, may not have been published before the cut-off date of the model's training data. If a non-existent package is suggested frequently, and a developer copies the accompanying code snippet wholesale, the resulting software, and the organization deploying it, may be left vulnerable to attack.

Research conducted by Vulcan revealed the prevalence of AI hallucinations when ChatGPT is asked questions related to coding. After sourcing a sample of commonly asked coding questions from Stack Overflow, a question-and-answer website for programmers, researchers queried ChatGPT (in the context of Node.js and Python) and reviewed its responses. At least one unpublished package was included in 20% of the responses pertaining to Node.js, while the figure was around 35% for Python [4].

Hallucinations can be unpredictable, but would-be attackers are able to find packages to create by asking generative AI tools generic questions and checking whether the suggested packages exist already. As such, attacks using this vector are unlikely to target specific organizations, instead posing more of a widespread threat to users of generative AI tools.
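
Checking whether a suggested name is actually registered on a public package index is trivial for attackers and defenders alike. The sketch below, written in Python and assuming the third-party requests library is available, queries PyPI's public JSON endpoint for each candidate name; the package names used here are placeholders for illustration, not packages observed in any attack.

```python
import requests

def exists_on_pypi(package_name: str) -> bool:
    """Return True if the package name is registered on PyPI."""
    resp = requests.get(f"https://pypi.org/pypi/{package_name}/json", timeout=10)
    return resp.status_code == 200

# Placeholder names standing in for packages suggested by a generative AI tool.
suggested = ["requests", "totally-made-up-helper-lib"]

for name in suggested:
    status = "exists on PyPI" if exists_on_pypi(name) else "NOT on PyPI (possible hallucination)"
    print(f"{name}: {status}")
```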

Malicious packages as attack vectors

Although AI hallucinations can be unpredictable, and responses given by generative AI tools may not always be consistent, malicious actors are able to discover AI hallucinations by adopting the approach used by Vulcan. This allows hallucinated packages to be used as attack vectors. Once a malicious actor has discovered a hallucination of an un-published package, they are able to create a package with the same name and include a malicious payload, before publishing it. This is known as a malicious package.

Malicious packages could also be recommended by generative AI tools in the form of pre-existing packages. A user may be recommended a package that had previously been confirmed to contain malicious content, or a package that is no longer maintained and, therefore, is more vulnerable to hijack by malicious actors.

In such scenarios, the malicious actor does not need to manipulate the training data (data poisoning) to achieve the desired outcome, so a complex and time-consuming attack phase can be bypassed entirely.

An unsuspecting software developer may incorporate a malicious package into their code, rendering it harmful. Deployment of this code could then result in compromise and escalation into a full-blown cyber-attack.

Figure 1: Flow diagram depicting the initial stages of an AI Package Hallucination Attack.

For providers of Software-as-a-Service (SaaS) products, this attack vector may represent an even greater risk. Such organizations may have a higher proportion of employed software developers than other organizations of comparable size. A threat actor, therefore, could utilize this attack vector as part of a supply chain attack, whereby a malicious payload becomes incorporated into trusted software and is then distributed to multiple customers. This type of attack could have severe consequences including data loss, the downtime of critical systems, and reputational damage.

How could Darktrace detect an AI Package Hallucination Attack?

In June 2023, Darktrace introduced a range of DETECT™ and RESPOND™ models designed to identify the use of generative AI tools within customer environments and to autonomously perform inhibitive actions in response to such detections. These models trigger on connections to endpoints associated with generative AI tools; as such, Darktrace’s detection of an AI Package Hallucination Attack would likely begin with a breach of one of the following DETECT models:

  • Compliance / Anomalous Upload to Generative AI
  • Compliance / Beaconing to Rare Generative AI and Generative AI
  • Compliance / Generative AI

Should generative AI tool use not be permitted by an organization, the Darktrace RESPOND model ‘Antigena / Network / Compliance / Antigena Generative AI Block’ can be activated to autonomously block connections to endpoints associated with generative AI, thus preventing an AI Package Hallucination attack before it can take hold.

Once a malicious package has been recommended, it may be downloaded from GitHub, a platform and cloud-based service used to store and manage code. Darktrace DETECT is able to identify when a device has performed a download from an open-source repository such as GitHub using the following models:

  • Device / Anomalous GitHub Download
  • Device / Anomalous Script Download Followed By Additional Packages

The goal the malicious package was designed to fulfil will determine the next stages of the attack. Due to their highly flexible nature, AI package hallucinations could be used as an attack vector to deliver a wide variety of malware types.

As GitHub is a commonly used service by software developers and IT professionals alike, traditional security tools may not alert customer security teams to such GitHub downloads, meaning malicious downloads may go undetected. Darktrace’s anomaly-based approach to threat detection, however, enables it to recognize subtle deviations in a device’s pre-established pattern of life which may be indicative of an emerging attack.
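
Darktrace's models themselves are proprietary, but the underlying idea of a pattern of life can be illustrated with a toy example: record which external endpoints each device normally contacts, then flag connections to endpoints that device has never used before. The sketch below is only a simplified illustration of that concept, assuming connection records are available as (device, hostname) pairs; it is not a description of Darktrace's implementation.

```python
from collections import defaultdict

# Toy baseline: hostnames each device contacted during a learning period.
baseline: dict[str, set[str]] = defaultdict(set)

learning_period = [
    ("dev-laptop-01", "github.com"),
    ("dev-laptop-01", "pypi.org"),
    ("dev-laptop-02", "github.com"),
]
for device, hostname in learning_period:
    baseline[device].add(hostname)

def is_novel(device: str, hostname: str) -> bool:
    """Flag a connection this device has not made before (a simple 'new for this device' signal)."""
    return hostname not in baseline[device]

# A download from an endpoint the device has never used stands out against its baseline.
print(is_novel("dev-laptop-01", "github.com"))          # False: seen during learning
print(is_novel("dev-laptop-01", "evil-mirror.example")) # True: deviation from pattern of life
```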

Subsequent anomalous activity representing the possible progression of the kill chain as part of an AI Package Hallucination Attack could then trigger an Enhanced Monitoring model. Enhanced Monitoring models are high-fidelity indicators of potential malicious activity that are investigated by the Darktrace analyst team as part of the Proactive Threat Notification (PTN) service offered by the Darktrace Security Operation Center (SOC).

Conclusion

Employees are often considered the first line of defense in cyber security; this is particularly true in the face of an AI Package Hallucination Attack.

As the use of generative AI becomes more accessible and an increasingly prevalent tool in an attacker’s toolbox, organizations will benefit from implementing company-wide policies that define expectations around the use of such tools. It is simple, yet critical, for example, for employees to fact-check responses provided to them by generative AI tools. All packages recommended by generative AI should also be verified against non-generated data from external third-party or internal sources. It is also good practice to exercise caution when downloading packages with very few downloads, as this could indicate the package is untrustworthy or malicious.
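
As a concrete example of checking a recommendation against non-generated data, the sketch below queries PyPI's public JSON API for a package's metadata and reports when it was first published; a very recent first release for a package presented as well established is a warning sign. The field names reflect the public PyPI response at the time of writing, and download statistics, which are served separately (for example by pypistats.org), are deliberately left out.

```python
from datetime import datetime
import requests

def first_release_date(package_name: str) -> datetime | None:
    """Return the earliest file upload time recorded on PyPI, or None if the package is unknown."""
    resp = requests.get(f"https://pypi.org/pypi/{package_name}/json", timeout=10)
    if resp.status_code != 200:
        return None
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in resp.json().get("releases", {}).values()
        for f in files
    ]
    return min(uploads) if uploads else None

published = first_release_date("requests")
print(f"'requests' first published: {published}")  # a long publication history is a (weak) trust signal
```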

As of September 2023, ChatGPT Plus and Enterprise users were able to use the tool to browse the internet, expanding the data ChatGPT can access beyond the previous training data cut-off of September 2021 [5]. This feature will be expanded to all users soon [6]. ChatGPT providing up-to-date responses could prompt the evolution of this attack vector, allowing attackers to publish malicious packages which could subsequently be recommended by ChatGPT.

It is inevitable that AI tools will be embraced ever more widely in the workplace over the coming years as the technology advances and existing tools become less novel and more familiar. By fighting fire with fire, using AI technology to identify AI usage, Darktrace is uniquely placed to detect and take preventative action against malicious actors capitalizing on the AI boom.

Credit to Charlotte Thompson (Cyber Analyst) and Tiana Kelly (Deputy Team Lead, London & Cyber Analyst).

References

[1] https://seo.ai/blog/chatgpt-user-statistics-facts

[2] https://darktrace.com/news/darktrace-addresses-generative-ai-concerns

[3] https://darktrace.com/news/darktrace-email-defends-organizations-against-evolving-cyber-threat-landscape

[4] https://vulcan.io/blog/ai-hallucinations-package-risk?nab=1&utm_referrer=https%3A%2F%2Fwww.google.com%2F

[5] https://twitter.com/OpenAI/status/1707077710047216095

[6] https://www.reuters.com/technology/openai-says-chatgpt-can-now-browse-internet-2023-09-27/


May 16, 2025

Catching a RAT: How Darktrace neutralized AsyncRAT


What is a RAT?

As the proliferation of new and more advanced cyber threats continues, the Remote Access Trojan (RAT) remains a classic tool in a threat actor's arsenal. RATs, whether standardized or custom-built, enable attackers to remotely control compromised devices, facilitating a range of malicious activities.

What is AsyncRAT?

Since its first appearance in 2019, AsyncRAT has become increasingly popular among a wide range of threat actors, including cybercriminals and advanced persistent threat (APT) groups.

Originally available on GitHub as a legitimate tool, its open-source nature has led to widespread exploitation. AsyncRAT has been used in numerous campaigns, including prolonged attacks on essential US infrastructure, and has even reportedly penetrated the Chinese cybercriminal underground market [1] [2].

How does AsyncRAT work?

Analysis of AsyncRAT’s original source code demonstrates that, once installed, it establishes persistence via techniques such as creating scheduled tasks or registry keys, and uses SeDebugPrivilege to gain elevated privileges [3].

Its key features include:

  • Keylogging
  • File search
  • Remote audio and camera access
  • Exfiltration techniques
  • Staging for final payload delivery

These are typical functions found in traditional RATs. However, AsyncRAT also boasts interesting anti-detection capabilities. Because virtual machines (VMs) and sandboxes are popular for dynamic analysis, the RAT checks the system manufacturer via the WMI query 'Select * from Win32_ComputerSystem' and looks for strings containing 'VMware' and 'VirtualBox' [4].
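
AsyncRAT itself is written in C#, but the check described in [4] can be approximated in a few lines of Python on Windows using the third-party wmi package. The snippet below is purely illustrative of what the RAT looks for; defenders can run the same query to see what their analysis sandboxes expose.

```python
# Windows-only illustration; requires the third-party 'wmi' package (pip install wmi).
# Approximates the anti-analysis check described above: inspect the reported
# manufacturer/model and look for virtualization vendors such as VMware or VirtualBox.
import wmi

VM_MARKERS = ("vmware", "virtualbox")

c = wmi.WMI()
for system in c.Win32_ComputerSystem():
    fingerprint = f"{system.Manufacturer} {system.Model}".lower()
    if any(marker in fingerprint for marker in VM_MARKERS):
        print("Virtual machine / sandbox indicators present")
    else:
        print("No obvious virtualization markers")
```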

Darktrace’s coverage of AsyncRAT

In late 2024 and early 2025, Darktrace observed a spike in AsyncRAT activity across various customer environments. Multiple indicators of post-compromise were detected, including devices attempting or successfully connecting to endpoints associated with AsyncRAT.

On several occasions, Darktrace identified a clear association with AsyncRAT through the digital certificates of the highlighted SSL endpoints. Darktrace’s Real-time Detection effectively identified and alerted on suspicious activities related to AsyncRAT. In one notable incident, Darktrace’s Autonomous Response promptly took action to contain the emerging threat posed by AsyncRAT.

AsyncRAT attack overview

On December 20, 2024, Darktrace first identified the use of AsyncRAT, observing a device successfully establishing SSL connections to the uncommon external IP 185.49.126[.]50 (AS199654 Oxide Group Limited) via port 6606. The IP address appears to be associated with AsyncRAT, as flagged by open-source intelligence (OSINT) sources [5]. This activity caused the device to trigger the ‘Anomalous Connection / Rare External SSL Self-Signed' model alert.

Figure 1: Model alert in Darktrace / NETWORK showing the repeated SSL connections to a rare external Self-Signed endpoint, 185.49.126[.]50.

Following these initial connections, the device was observed making a significantly higher number of connections to the same endpoint 185.49.126[.]50 via port 6606 over an extended period. This pattern suggested beaconing activity and triggered the 'Compromise/Beaconing Activity to External Rare' model alert.
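
Beaconing is usually identified from timing: a compromised host calls back to its C2 server at regular or lightly jittered intervals. The sketch below is a simplified illustration of that idea rather than Darktrace's detection logic; it computes the gaps between connection timestamps to a single destination and flags destinations whose connections are frequent and unusually regular.

```python
from datetime import datetime, timedelta
from statistics import mean, pstdev

def looks_like_beaconing(timestamps: list[datetime], max_jitter_ratio: float = 0.2) -> bool:
    """Crude heuristic: many connections whose intervals vary little relative to their mean."""
    if len(timestamps) < 10:
        return False
    ordered = sorted(timestamps)
    gaps = [(b - a).total_seconds() for a, b in zip(ordered, ordered[1:])]
    avg = mean(gaps)
    return avg > 0 and (pstdev(gaps) / avg) < max_jitter_ratio

# Synthetic example: one connection roughly every 60 seconds.
start = datetime(2024, 12, 20, 9, 0, 0)
regular = [start + timedelta(seconds=60 * i) for i in range(30)]
print(looks_like_beaconing(regular))  # True: high count, low jitter
```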

Further analysis of the original source code, available publicly, outlines the default ports used by AsyncRAT clients for command-and-control (C2) communications [6]. It reveals that port 6606 is the default port for creating a new AsyncRAT client. Darktrace identified both the Certificate Issuer and the Certificate Subject as "CN=AsyncRAT Server". This SSL certificate encrypts the packets between the compromised system and the server. These indicators of compromise (IoCs) detected by Darktrace further suggest that the device was successfully connecting to a server associated with AsyncRAT.
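
The certificate attributes Darktrace surfaced can also be verified manually during triage. The sketch below retrieves a server's certificate using Python's standard ssl module and parses it with the third-party cryptography package to print the subject and issuer common names; the host and port are placeholders, and probes of this kind should only be run against infrastructure you are authorized to investigate.

```python
import ssl
from cryptography import x509
from cryptography.x509.oid import NameOID

def certificate_common_names(host: str, port: int) -> tuple[str, str]:
    """Return (subject CN, issuer CN) of the certificate presented at host:port."""
    pem = ssl.get_server_certificate((host, port))  # no chain validation by default, so self-signed certs work
    cert = x509.load_pem_x509_certificate(pem.encode())
    subject_cn = cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)[0].value
    issuer_cn = cert.issuer.get_attributes_for_oid(NameOID.COMMON_NAME)[0].value
    return subject_cn, issuer_cn

# Placeholder host and port; a result of ("AsyncRAT Server", "AsyncRAT Server") would match the IoCs above.
subject, issuer = certificate_common_names("198.51.100.1", 6606)
print(f"subject CN={subject}, issuer CN={issuer}")
```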

Figure 2: Model alert in Darktrace / NETWORK displaying the Digital Certificate attributes, IP address and port number associated with AsyncRAT.
Figure 3: Darktrace’s detection of repeated connections to the suspicious IP address 185.49.126[.]50 over port 6606, indicative of beaconing behavior.
Figure 4: Darktrace's Autonomous Response actions blocking the suspicious IP address,185.49.126[.]50.

A few days later, the same device was detected making numerous connections to a different IP address, 195.26.255[.]81 (AS40021 NL-811-40021), via various ports including 2106, 6606, 7707, and 8808. Notably, ports 7707 and 8808 are also default ports specified in the original AsyncRAT source code [6].

Figure 5: Darktrace’s detection of connections to the suspicious endpoint 195.26.255[.]81, where the default ports (6606, 7707, and 8808) for AsyncRAT were observed.

Similar to the activity observed with the first endpoint, 185.49.126[.]50, the Certificate Issuer for the connections to 195.26.255[.]81 was identified as "CN=AsyncRAT Server". Further OSINT investigation confirmed associations between the IP address 195.26.255[.]81 and AsyncRAT [7].

Figure 6: Darktrace's detection of a connection to the suspicious IP address 195.26.255[.]81 and the domain name identified under the common name (CN) of a certificate as AsyncRAT Server.

Once again, Darktrace's Autonomous Response acted swiftly, blocking the connections to 195.26.255[.]81 throughout the observed AsyncRAT activity.

Figure 7: Darktrace's Autonomous Response actions were applied against the suspicious IP address 195.26.255[.]81.

A day later, Darktrace again alerted to further suspicious activity from the device. This time, connections to the suspicious endpoint 'kashuub[.]com' and IP address 191.96.207[.]246 via port 8041 were observed. Further analysis of port 8041 suggests it is commonly associated with ScreenConnect or Xcorpeon ASIC Carrier Ethernet Transport [8]. ScreenConnect has been observed in recent campaigns where AsyncRAT has been utilized [9]. Additionally, one of the ASNs observed, namely ‘Oxide Group Limited’, was seen in connections to both kashuub[.]com and 185.49.126[.]50.

This could suggest a parallel between the two endpoints, indicating they might be hosting AsyncRAT C2 servers, as inferred from the earlier analysis of the endpoint 185.49.126[.]50 and its association with AsyncRAT [5]. OSINT reporting suggests that the “kashuub[.]com” endpoint may be associated with ScreenConnect scam domains, further supporting the assessment that the endpoint could be a C2 server [10].

Darktrace’s Autonomous Response technology was once again able to support the customer here, blocking connections to “kashuub[.]com”. Ultimately, this intervention halted the compromise and prevented the attack from escalating or any sensitive data from being exfiltrated from the customer’s network into the hands of the threat actors.

Figure 8: Darktrace’s Autonomous Response applied a total of nine actions against the IP address 191.96.207[.]246 and the domain 'kashuub[.]com', successfully blocking the connections.

Due to the popularity of this RAT, it is difficult to determine the motive behind the attack; however, from existing knowledge of what the RAT does, we can assume accessing and exfiltrating sensitive customer data may have been a factor.

Conclusion

For cybercriminals seeking stability and simplicity, openly available RATs like AsyncRAT provide ready-made infrastructure and open the door for even the most amateur threat actors to compromise sensitive networks. As the cyber landscape continually shifts, RATs are now being used in all types of attacks.

Darktrace’s suite of AI-driven tools provides organizations with the infrastructure to achieve complete visibility and control over emerging threats within their network environment. Although AsyncRAT’s lack of concealment allowed Darktrace to quickly detect the developing threat and alert on unusual behaviors, it was ultimately Darktrace Autonomous Response's consistent blocking of suspicious connections that prevented a more disruptive attack.

Credit to Isabel Evans (Cyber Analyst), Priya Thapa (Cyber Analyst) and Ryan Traill (Analyst Content Lead)

Appendices

Real-time Detection Models

  • Compromise / Suspicious SSL Activity
  • Compromise / Beaconing Activity To External Rare
  • Compromise / High Volume of Connections with Beacon Score
  • Anomalous Connection / Suspicious Self-Signed SSL
  • Compromise / Sustained SSL or HTTP Increase
  • Compromise / SSL Beaconing to Rare Destination
  • Compromise / Suspicious Beaconing Behaviour
  • Compromise / Large Number of Suspicious Failed Connections

Autonomous Response Models

  • Antigena / Network / Significant Anomaly / Antigena Controlled and Model Alert
  • Antigena / Network / Significant Anomaly / Antigena Enhanced Monitoring from Client Block

List of IoCs

  • 185.49.126[.]50 – IP – AsyncRAT C2 endpoint
  • 195.26.255[.]81 – IP – AsyncRAT C2 endpoint
  • 191.96.207[.]246 – IP – Likely AsyncRAT C2 endpoint
  • CN=AsyncRAT Server – SSL certificate – AsyncRAT C2 infrastructure
  • kashuub[.]com – Hostname – Likely AsyncRAT C2 endpoint

MITRE ATT&CK Mapping:

Tactic – Technique – Sub-Technique

  • Execution – T1053 – Scheduled Task/Job: Scheduled Task
  • Defence Evasion – T1497 – Virtualization/Sandbox Evasion: System Checks
  • Discovery – T1057 – Process Discovery
  • Discovery – T1082 – System Information Discovery
  • Lateral Movement – T1021.001 – Remote Services: Remote Desktop Protocol
  • Collection / Credential Access – T1056 – Input Capture: Keylogging
  • Collection – T1125 – Video Capture
  • Command and Control – T1105 – Ingress Tool Transfer
  • Command and Control – T1219 – Remote Access Software
  • Exfiltration – T1041 – Exfiltration Over C2 Channel

References

[1]  https://blog.talosintelligence.com/operation-layover-how-we-tracked-attack/

[2] https://intel471.com/blog/china-cybercrime-undergrond-deepmix-tea-horse-road-great-firewall

[3] https://www.attackiq.com/2024/08/01/emulate-asyncrat/

[4] https://www.fortinet.com/blog/threat-research/spear-phishing-campaign-with-new-techniques-aimed-at-aviation-companies

[5] https://www.virustotal.com/gui/ip-address/185.49.126[.]50/community

[6] https://dfir.ch/posts/asyncrat_quasarrat/

[7] https://www.virustotal.com/gui/ip-address/195.26.255[.]81

[8] https://www.speedguide.net/port.php?port=8041

[9] https://www.esentire.com/blog/exploring-the-infection-chain-screenconnects-link-to-asyncrat-deployment

[10] https://scammer.info/t/taking-out-connectwise-sites/153479/518?page=26

About the author
Isabel Evans
Cyber Analyst

May 13, 2025

Revolutionizing OT Risk Prioritization with Darktrace 6.3


Powering smarter protection for industrial systems

In industrial environments, security challenges are deeply operational. Whether you’re running a manufacturing line, a power grid, or a semiconductor fabrication facility (fab), you need to know: What risks can truly disrupt my operations, and what should I focus on first?

Teams need the right tools to shift from reactive defense, constantly putting out fires, to proactively thinking about their security posture. However, most OT teams are stuck using IT-centric tools that don’t speak the language of industrial systems, are consistently overwhelmed with static CVE lists, and offer no understanding of OT-specific protocols. The result? Compliance gaps, siloed insights, and risk models that don’t reflect real-world exposure, making risk prioritization seem like a luxury.

Darktrace / OT 6.3 was built in direct response to these challenges. Developed in close collaboration with OT operators and engineers, this release introduces powerful upgrades that deliver the context, visibility, and automation security teams need, without adding complexity. It’s everything OT defenders need to protect critical operations in one platform that understands the language of industrial systems.

Additions to Darktrace / OT 6.3

Contextual risk modeling with smarter Risk Scoring

Darktrace / OT 6.3 introduces major upgrades to OT Risk Management, helping teams move beyond generic CVE lists with AI-driven risk scoring and attack path modeling.

By factoring in real-world exploitability, asset criticality, and operational context, this release delivers a more accurate view of what truly puts critical systems at risk.

The platform now integrates:

  • CISA’s Known Exploited Vulnerabilities (KEV) database
  • End-of-life status for legacy OT devices
  • Firewall misconfiguration analysis
  • Incident response plan alignment

Most OT environments are flooded with vulnerability data that lacks context. CVE scores often misrepresent risk by ignoring how threats move through the environment or whether assets are even reachable. Firewalls are frequently misconfigured or undocumented, and EOL (End of Life) devices, some of the most vulnerable, often go untracked.

Legacy tools treat these inputs in isolation. Darktrace unifies them, showing teams exactly which attack paths adversaries could exploit, mapped to the MITRE ATT&CK framework, with visibility into where legacy tech increases exposure.

The result: teams can finally focus on the risks that matter most to uptime, safety, and resilience without wasting resources on noise.
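
To illustrate the kind of enrichment involved, the sketch below cross-references CVE identifiers from an asset inventory against CISA's published Known Exploited Vulnerabilities (KEV) catalogue. It is a minimal, standalone example of consuming that data source, not a description of how Darktrace / OT implements its risk scoring; the feed URL and field names are those published by CISA at the time of writing, and the asset CVEs are placeholders from a hypothetical inventory.

```python
import requests

KEV_FEED = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

def known_exploited_cves() -> set[str]:
    """Download CISA's KEV catalogue and return the set of CVE IDs it lists."""
    catalogue = requests.get(KEV_FEED, timeout=30).json()
    return {entry["cveID"] for entry in catalogue.get("vulnerabilities", [])}

# CVEs pulled from a hypothetical OT asset inventory.
asset_cves = {"CVE-2020-1472", "CVE-2015-5374", "CVE-2023-0000"}

actively_exploited = asset_cves & known_exploited_cves()
print(f"Prioritize first: {sorted(actively_exploited)}")
```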

Automating compliance with dynamic IEC-62443 reporting

Darktrace / OT now includes a purpose-built IEC-62443-3-3 compliance module, giving industrial teams real-time visibility into their alignment with regulatory standards. No spreadsheets required!

Industrial environments are among the most heavily regulated. However, for many OT teams, staying compliant is still a manual, time-consuming process.

Darktrace / OT introduces a dedicated IEC-62443-3-3 module designed specifically for industrial environments. Security and operations teams can now map their security posture to IEC standards in real time, directly within the platform. The module automatically gathers evidence across all four security levels, flags non-compliance, and generates structured reports to support audit preparation, all in just a few clicks.

Most organizations rely on spreadsheets or static tools to track compliance, without clear visibility into which controls meet standards like IEC-62443. The result is hidden gaps, resource-heavy audits, and slow remediation cycles.

Even dedicated compliance tools are often built for IT, require complex setup, and overlook the unique devices found in OT environments. This leaves teams stuck with fragmented reporting and limited assurance that their controls are actually aligned with regulatory expectations.

By automating compliance tracking, surfacing what matters most, and being purpose built for industrial environments, Darktrace / OT empowers organizations to reduce audit fatigue, eliminate blind spots, and focus resources where they’re needed most.

Expanding protocol visibility with deep insights for specialized OT operations

Darktrace has expanded its Deep Packet Inspection (DPI) capabilities to support five industry-specific protocols, across healthcare, semiconductor manufacturing, and ABB control systems.

The new protocols build on existing capabilities across all OT industry verticals and protocol types to ensure the Darktrace Self-Learning AI™ can learn intelligently about even more assets in complex industrial environments. By enabling native, AI-driven inspection of these protocols, Darktrace can identify both security threats and operational issues without relying on additional appliances or complex integrations.

Most security platforms lack native support for industry-specific protocols, creating critical visibility gaps in customer environments like healthcare, semiconductor manufacturing, and ABB-heavy industrial automation. Without deep protocol awareness, organizations struggle to accurately identify specialized OT and IoT assets, detect malicious activity concealed within proprietary protocol traffic, and generate reliable device risk profiles due to insufficient telemetry.

These blind spots result in incomplete asset inventories, and ultimately, flawed risk posture assessments which over-index for CVE patching and legacy equipment.

By combining protocol-aware detection with full-stack visibility across IT, OT, and IoT, Darktrace’s AI can correlate anomalies across domains, for example connecting an anomaly from a Medical IoT (MIoT) device with suspicious behavior in IT systems, providing actionable, contextual insights other solutions often miss.

Conclusion

Together, these capabilities take OT security beyond alert noise and basic CVE matching, delivering continuous compliance, protocol-aware visibility, and actionable, prioritized risk insights, all inside a single, unified platform built for the realities of industrial environments.


About the author
Pallavi Singh
Product Marketing Manager, OT Security & Compliance