Blog
/
Email
/
November 27, 2025

Phishing attacks surge by 620% in the lead-up to Black Friday

Black Friday continues to be a prime opportunity for threat actors, with early analysis from Darktrace showing a significant spike in attackers impersonating well-known brands, as well as the brands most frequently impersonated by scammers. Plus, check out our top tips to stay safe while filling your basket with deals.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Carlos Gray
Senior Product Marketing Manager, Email
Default blog imageDefault blog imageDefault blog imageDefault blog imageDefault blog imageDefault blog image
27
Nov 2025

Black Friday deals are rolling in, and so are the phishing scams

As the world gears up for Black Friday and the festive shopping season, inboxes flood with deals and delivery notifications, creating a perfect storm for phishing attackers to strike.

Contributing to the confusion, legitimate brands often rely on similar urgency cues, limited-time offers, and high-volume email campaigns used by scammers, blurring the lines between real deals and malicious lookalikes. While security teams remain extra vigilant during this period, the risk of phishing emails slipping in unnoticed remains high, as does the risk of individuals clicking to take advantage of holiday shopping offers.

Analysis conducted by Darktrace’s global analyst team revealed that phishing attacks taking advantage of Black Friday jumped by 620% in the weeks leading up to the holiday weekend, with the volume of phishing attacks expected to jump a further 20-30% during Black Friday week itself.

First observation: Brand impersonation

Brand impersonation was one of the techniques that stood out, with threat actors creating convincing emails – likely assisted by generative AI – purporting to be from household brands including special offers and promotions.

The week before Thanksgiving (15-21 November) saw 201% more phishing attempts mimicking US retailers than the same week in October, as attackers sought to profit off the back of the busy holiday shopping season. It’s not just about volume, either – attackers are spoofing brands people love to shop with during the holidays. Fake emails that look like they’re from well-known retailers like Macy’s, Walmart, and Target were up by 54% just across last week1. Even so, Amazon is the most impersonated brand, making up 80% of phishing attempts in Darktrace’s analysis of global consumer brands like Apple, Alibaba and Netflix.  

While major brands invest heavily in protecting their organizations and customers from cyber-attacks, impersonation is a complicated area as it falls outside of a brand’s legitimate infrastructure and security remit. Retail brands have a huge attack surface, creating plenty of vectors for impersonation, while fake domains, social profiles, and promotional messages can be created quickly and at scale.

Second observation: Fake marketing domains

One prominent Black Friday phishing campaign observed landing in many inboxes uses fake domains purporting to be from marketing sites, like “Pal.PetPlatz.com” and “Epicbrandmarketing.com”.

These emails tend to operate in one of two ways. Some contain “deals” for luxury items such as Rolex watches or Louis Vuitton handbags, designed to tempt readers into clicking. However, the majority are tied to a made-up brand called Deal Watchdogs, which promotes “can’t-miss” Amazon Black Friday offers – designed to lure readers into acting fast to secure legitimate time-sensitive deals. Any user who clicks a link is taken to a fake Amazon website where they are tricked into inputting sensitive data and payment details.

Third observation: The impact of generative AI

The biggest shift seen in phishing in recent years is how much more convincing scam emails are thanks to generative AI. 27% of phishing emails observed by Darktrace in 2024 contained over 1,000 characters2, suggesting LLM use in their creation. Tools like ChatGPT and Gemini lower the barrier to entry for cyber-criminals, allowing them to create phishing campaigns that humans find it difficult to spot.  

Let’s take a look at a dummy email created by a member of our team without a technical background to illustrate how easy it is to spin up an email that looks and feels like a genuine Black Friday offer. With two prompts, generative AI created a convincing “sale” email that could easily pass as the real thing without requiring any technical skill.

A fake Black Friday deal email created using generative AI, with only two prompts. The image has been pixelated for marketing purposes.

Anyone can now create convincing brand spoofs, and they can do it at scale. That makes it even more important for email users to pause, check the sender, and think before they click.

Why phishing scams hurt consumers and brands

These spoofs don’t just drain shoppers’ bank accounts and grab their personal data. They erode trust, drive people away from real sites, and ultimately hurt brands’ sales. And the fakes keep getting sharper, more convincing, and harder to spot.

Though brands should implement email controls like DMARC to help reduce spoofing, they can’t stop attackers from registering new look-alike domains or using other channels. At the end of the day, human users remain vulnerable to well-crafted scams, particularly when the element of trust from a well-known brand is involved. And while brands can’t prevent all impersonation scams, the fallout can still erode consumer trust and damage their reputation.

In order to limit the impact of these scams, two things need to work together: better education so consumers know when to slow down and look twice, and email security (plus a DMARC solution and an attack surface management tool) that can adapt faster than the attackers – protecting both shoppers and the brands they love.

Tips to stay safe while Black Friday shopping online

On top of retailers implementing robust email security, there are some simple steps shoppers can take to stay safer while shopping this holiday season.

  • Check every website (twice). Scammers make tiny changes you can barely see. They’ll switch Walmart.com for Waimart.com and most people won’t notice. If something looks even slightly off, check the URL carefully and, if you’re unsure, search for reviews of that exact address.
  • Santa keeps the real gifts in the workshop. Don’t just click through from sales emails. Use them as a prompt to log in directly to the official app or site, where any genuine notifications will appear.
  • Look at the payment options. Real retailers usually offer a handful of recognizable ways to pay; if a site pushes only odd methods or upfront transfers, don’t use it.
  • Be skeptical of Christmas miracles. If a deal on a big-ticket item looks too good to be true, it usually is.
  • Leave the rushing to the elves. Countdown timers and “last chance” banners are designed to make you click before you think. Take a breath, double-check the sender and the site, and then decide whether to buy.

Email security you can trust this holiday season

The heightened holiday shopping season shines a spotlight on an uncomfortable reality: now that phishing emails are harder than ever to distinguish from legitimate brand communication, traditional spam filters and Secure Email Gateways struggle to keep up. In order to protect against communication-based attacks, organizations require email security that can evaluate the full context of an email – not just surface-level indicators – and stop malicious messages before they reach inboxes.

Darktrace / EMAIL uses Self-Learning AI to understand the behavior and patterns of every user, so it can detect the subtle inconsistencies that reveal a message isn’t genuine, from shifts in tone and writing style to unexpected links, unfamiliar senders, or off-brand visual cues. By identifying these anomalies automatically – and either holding them entirely, or neutralizing malicious elements – it removes the burden from employees to catch near-imperceptible errors and reinforces protection for the entire organization, from staff to customers to brand reputation.

Join our live broadcast on 9 December, where Darktrace will reveal new, industry-first innovations in email security keeping organizations safe this Christmas – from DMARC to DLP. Sign up to the live launch event now.

For a deeper dive into some specific Black Friday phishing campaigns surfaced by the Darktrace threat analysis team, read the follow-up blog here.

A note on methodology

Insights derive from anonymous live data across 6,500 customers protected by Darktrace / EMAIL. Darktrace created models tracking verified phishing emails that:

  • Explicitly mentioned Black Friday
  • Impersonated US retailers popular during the holiday season (Walmart, Target, Best Buy, Macy's, Old Navy, 1800-Flowers)
  • Impersonated major global brands (Apple, eBay, Netflix, Alibaba and PayPal)

Tracking ran from October 1 to November 21.

References

[1] Based on live tracking of phishing emails spoofing Walmart, Target, Best Buy, Macy's, Old Navy, 1800-Flowers across email inboxes protected by Darktrace.  November 15 – November 21, 2025

[2] Based on analysis of 30.4 million phishing emails between December 21, 2023, and December 18, 2024. Darktrace Annual Threat Report 2024.

[related-resource]

Replace your SEG with context-aware email security

A practical guide for CISOs for replacing outdated SEGs with AI-driven email security, optimized for Microsoft 365.

Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Carlos Gray
Senior Product Marketing Manager, Email

More in this series

No items found.

Blog

/

Network

/

March 26, 2026

Phantom Footprints: Tracking GhostSocks Malware

Default blog imageDefault blog image

Why are attackers using residential proxies?

In today's threat landscape, blending in to normal activity is the key to success for attackers and the growing reliance on residential proxies shows a significant shift in how threat actors are attempting to bypass IP detection tools.

The increasing dependency on residential proxies has exposed how prevalent proxy services are and how reliant a diverse range of threat actors are on them. From cybercriminal groups to state‑sponsored actors, the need to bypass IP detection tools is fundamental to the success of these groups. One malware that has quietly become notorious for its ability to avoid anomaly detection is GhostSocks, a malware that turns compromised devices into residential proxies.

What is GhostSocks?

Originally marketed on the Russian underground forum xss[.]is as a Malware‑as‑a‑Service (MaaS), GhostSocks enables threat actors to turn compromised devices into residential proxies, leveraging the victim's internet bandwidth to route malicious traffic through it.

How does Ghostsocks malware work? 

The malware offers the threat actor a “clean” IP address, making it look like it is coming from a household user. This enables the bypassing of geographic restrictions and IP detection tools, a perfect tool for avoiding anomaly detection. It wasn’t until 2024, when a partnership was announced with the infamous information stealer Lumma Stealer, that GhostSocks surged into widespread adoption and alluded to who may be the author of the proxy malware.

Written in GoLang, GhostSocks utilizes the SOCKS5 proxy protocol, creating a SOCKS5 connection on infected devices. It uses a relay‑based C2 implementation, where an intermediary server sits in between the real command-and-control (C2) server and the infected device.

How does Ghostsocks malware evade detection?

To further increase evasion, the Ghostsocks malware wraps its SOCKS5 tunnels in TLS encryption, allowing its malicious traffic to blend into normal network traffic.

Early variants of GhostSocks do not implement a persistence mechanism; however, later versions achieve persistence via registry run keys, ensuring sustained proxy operational time [1].

While proxying is its primary purpose, GhostSocks also incorporates backdoor functionality, enabling malicious actors to run arbitrary commands and download and deploy additional malicious payloads. This was evident with the well‑known ransomware group Black Basta, which reportedly used GhostSocks as a way of maintaining long‑term access to victims’ networks [1].

Darktrace’s detection of GhostSocks Malware

Darktrace observed a steady increase in GhostSocks activity across its customer base from late 2025, with its Threat Research team identifying multiple incidents involving the malware. In one notable case from December 2025, Darktrace detected GhostSocks operating alongside Lumma Stealer, reinforcing that the partnership between Lumma and GhostSocks remains active despite recent attempts to disrupt Lumma’s infrastructure.

Darktrace’s first detection of GhostSocks‑related activity came when a device on the network of a customer in the education sector began making connections to an endpoint with a suspicious self‑signed certificate that had never been seen on the network before.

The endpoint in question, 159.89.46[.]92 with the hostname retreaw[.]click, has been flagged by multiple open‑source intelligence (OSINT) sources as being associated with Lumma Stealer’s C2 infrastructure [2], indicating its likely role in the delivery of malicious payloads.

Darktrace’s detection of suspicious SSL connections to retreaw[.]click, indicating an attempted link to Lumma C2 infrastructure.
Figure 1: Darktrace’s detection of suspicious SSL connections to retreaw[.]click, indicating an attempted link to Lumma C2 infrastructure.

Less than two minutes later, Darktrace observed the same device downloading the executable (.exe) file “Renewable.exe” from the IP 86.54.24[.]29, which Darktrace recognized as 100% rare for this network.

Darktrace’s detection of a device downloading the unusual executable file “Renewable.exe”.
Figure 2: Darktrace’s detection of a device downloading the unusual executable file “Renewable.exe”.

Both the file MD5 hash and the executable itself have been identified by multiple OSINT vendors as being associated with the GhostSocks malware [3], with the executable likely the backdoor component of the GhostSocks malware, facilitating the distribution of additional malicious payloads [4].

Following this detection, Darktrace’s Autonomous Response capability recommended a blocking action for the device in an early attempt to stop the malicious file download. In this instance, Darktrace was configured in Human Confirmation Mode, meaning the customer’s security team was required to manually apply any mitigative response actions. Had Autonomous Response been fully enabled at the time of the attack, the connections to 86.54.24[.]29 would have been blocked, rendering the malware ineffective at reaching its C2 infrastructure and halting any further malicious communication.

 Darktrace’s Autonomous Response capability suggesting blocking the suspicious connections to the unusual endpoint from which the malicious executable was downloaded.
Figure 3: Darktrace’s Autonomous Response capability suggesting blocking the suspicious connections to the unusual endpoint from which the malicious executable was downloaded.

As the attack was able to progress, two days later the device was detected downloading additional payloads from the endpoint www.lbfs[.]site (23.106.58[.]48), including “Setup.exe”, “,.exe”, and “/vp6c63yoz.exe”.

Darktrace’s detection of a malicious payload being downloaded from the endpoint www.lbfs[.]site.
Figure 4: Darktrace’s detection of a malicious payload being downloaded from the endpoint www.lbfs[.]site.

Once again, Darktrace recognized the anomalous nature of these downloads and suggested that a “group pattern of life” be enforced on the offending device in an attempt to contain the activity. By enforcing a pattern of life on a device, Darktrace restricts its activity to connections and behaviors similar to those performed by peer devices within the same group, while still allowing it to carry out its expected activity, effectively preventing deviations indicative of compromise while minimizing disruption. As mentioned earlier, these mitigative actions required manual implementation, so the activity was able to continue. Darktrace proceeded to suggest further actions to contain subsequent malicious downloads, including an attempt to block all outbound traffic to stop the attack from progressing.

An overview of download activity and the Autonomous Response actions recommended by Darktrace to block the downloads.
Figure 5: An overview of download activity and the Autonomous Response actions recommended by Darktrace to block the downloads.

Around the same time, a third executable download was detected, this time from the hostname hxxp[://]d2ihv8ymzp14lr.cloudfront.net/2021-08-19/udppump[.]exe, along with the file “udppump.exe”.While GhostSocks may have been present only to facilitate the delivery of additional payloads, there is no indication that these CloudFront endpoints or files are functionally linked to GhostSocks. Rather, the evidence points to broader malicious file‑download activity.

Shortly after the multiple executable files had been downloaded, Darktrace observed the device initiating a series of repeated successful connections to several rare external endpoints, behavior consistent with early-stage C2 beaconing activity.

Cyber AI Analyst’s investigation

Darktrace’s detection of additional malicious file downloads from malicious CloudFront endpoints.
Figure 7: Darktrace’s detection of additional malicious file downloads from malicious CloudFront endpoints.

Throughout the course of this attack, Darktrace’s Cyber AI Analyst carried out its own autonomous investigation, piecing together seemingly separate events into one wider incident encompassing the first suspicious downloads beginning on December 4, the unusual connectivity to many suspicious IPs that followed, and the successful beaconing activity observed two days later. By analyzing these events in real-time and viewing them as part of the bigger picture, Cyber AI Analyst was able to construct an in‑depth breakdown of the attack to aid the customer’s investigation and remediation efforts.

Cyber AI Analyst investigation detailing the sequence of events on the compromised device, highlighting its extensive connectivity to rare endpoints, the related malicious file‑download activity, and finally the emergence of C2 beaconing behavior.
Figure 8: Cyber AI Analyst investigation detailing the sequence of events on the compromised device, highlighting its extensive connectivity to rare endpoints, the related malicious file‑download activity, and finally the emergence of C2 beaconing behavior.

Conclusion

The versatility offered by GhostSocks is far from new, but its ability to convert compromised devices into residential proxy nodes, while enabling long‑term, covert network access—illustrates how threat actors continue to maximise the value of their victims’ infrastructure. Its growing popularity, coupled with its ongoing partnership with Lumma, demonstrates that infrastructure takedowns alone are insufficient; as long as threat actors remain committed to maintaining anonymity and can rapidly rebuild their ecosystems, related malware activity is likely to persist in some form.

Credit to Isabel Evans (Cyber Analyst), Gernice Lee (Associate Principal Analyst & Regional Consultancy Lead – APJ)
Edited by Ryan Traill (Content Manager)

Appendices

References

1.    https://bloo.io/research/malware/ghostsocks

2.    https://www.virustotal.com/gui/domain/retreaw.click/community

3.    https://synthient.com/blog/ghostsocks-from-initial-access-to-residential-proxy

4.    https://www.joesandbox.com/analysis/1810568/0/html

5. https://www.virustotal.com/gui/url/fab6525bf6e77249b74736cb74501a9491109dc7950688b3ae898354eb920413

Darktrace Model Detections

Real-time Detection Models

Anomalous Connection / Suspicious Self-Signed SSL

Anomalous Connection / Rare External SSL Self-Signed

Anomalous File / EXE from Rare External Location

Anomalous File / Multiple EXE from Rare External Locations

Compromise / Possible Fast Flux C2 Activity

Compromise / Large Number of Suspicious Successful Connections

Compromise / Large Number of Suspicious Failed Connections

Compromise / Sustained SSL or HTTP Increase

Autonomous Response Models

Antigena / Network / Significant Anomaly / Antigena Significant Anomaly from Client Block

Antigena / Network / External Threat / Antigena Suspicious File Block

Antigena / Network / Significant Anomaly / Antigena Controlled and Model Alert

Antigena / Network / External Threat / Antigena File then New Outbound Block

Antigena / Network / Significant Anomaly / Antigena Alerts Over Time Block

Antigena / Network / External Threat / Antigena Suspicious Activity Block

MITRE ATT&CK Mapping

Tactic – Technique – Sub-Technique

Resource Development – T1588 - Malware

Initial Access - T1189 - Drive-by Compromise

Persistence – T1112 – Modify Registry

Command and Control – T1071 – Application Layer Protocol

Command and Control – T1095 – Non-application Layer Protocol

Command and Control – T1071 – Web Protocols

Command and Control – T1571 – Non-Standard Port

Command and Control – T1102 – One-Way Communication

List of Indicators of Compromise (IoCs)

86.54.24[.]29 - IP - Likely GhostSocks C2

http[://]86.54.24[.]29/Renewable[.]exe - Hostname - GhostSocks Distribution Endpoint

http[://]d2ihv8ymzp14lr.cloudfront[.]net/2021-08-19/udppump[.]exe - CDN - Payload Distribution Endpoint

www.lbfs[.]site - Hostname - Likely C2 Endpoint

retreaw[.]click - Hostname - Lumma C2 Endpoint

alltipi[.]com - Hostname - Possible C2 Endpoint

w2.bruggebogeyed[.]site - Hostname - Possible C2 Endpoint

9b90c62299d4bed2e0752e2e1fc777ac50308534 - SHA1 file hash – Likely GhostSocks payload

3d9d7a7905e46a3e39a45405cb010c1baa735f9e - SHA1 file hash - Likely follow-up payload

10f928e00a1ed0181992a1e4771673566a02f4e3 - SHA1 file hash - Likely follow-up payload

Continue reading
About the author
Gernice Lee
Associate Principal Analyst & Regional Consultancy Lead

Blog

/

AI

/

March 27, 2026

State of AI Cybersecurity 2026: 92% of security professionals concerned about the impact of AI agents

Default blog imageDefault blog image

The findings in this blog are taken from Darktrace's annual State of AI Cybersecurity Report 2026.

AI is already embedded in day-to-day enterprise activity, with 78% of participants in one recent survey reporting that their organizations are using generative AI in at least one business function. Generative AI now acts as an always-on assistant, researcher, creator, and coach across an expanding array of departments and functions. Autonomous agents are performing multi-step operational workflows from end to end. AI features have been layered on top of every SaaS application. And vibe coding is making it possible for employees without deep technical expertise to build their own AI-powered automations.

According to Gartner, more than 80% of enterprises will have deployed GenAI models, applications, or APIs in production environments by the end of this year, up from less than 5% in 2023. Companies report a 130% increase in spending on AI over the same period, with 72% of business leaders using AI tools at least weekly. The outsized efficiency and productivity gains that were once a future vision are quickly becoming everyday reality.

AI is currently driving business growth and innovation, and organizations risk falling behind peers if they don’t keep up with the pace of adoption, but it is also quietly expanding the enterprise attack surface. The modern CISO is challenged to both enable innovation and protect the business from these emerging threats.

AI agents introduce new risks and vulnerabilities

AI agents are playing growing roles in enterprise production environments. In many cases, these agents act with broad permissions across multiple software systems and platforms. This means they’re granted far-reaching access – to sensitive data, business-critical applications, tokens and APIs, and IT and security tools. With this access comes risk for security leaders – 92% are concerned about the use of AI agents across the workforce and their impact on security.

These agents must be governed as identities, with least-privilege access and ongoing monitoring. They can’t be thought of as invisible aspects of the application estate. Understanding how AI agents behave, and how to manage their permissions, control their behavior, and limit their data access will be a top security priority throughout 2026.

Generative AI prompts: The next frontier

Prompts are how users – both human and agentic – interact with AI systems, and they’re where natural language gets translated into model behavior. Natural language is infinite in its potential combinations and permutations, making this aspect of the attack surface open-ended and far more complex than traditional CVEs. With carefully crafted prompts, bad actors may be able to coax models into disclosing sensitive data, bypassing guardrails, or initiating undesirable actions.

Among security leaders, the biggest worries about AI usage in their environments all involve ways that systems might be manipulated to bypass traditional controls.

  • 61% are most concerned about the exposure of sensitive data
  • 56% are most concerned about potential data security and policy violations
  • 51% are most concerned about the misuse or abuse of AI tools

The more employees rely on AI in their day-to-day workflows, the more critical it becomes for security teams to understand how prompt behavior determines model behavior – and where that behavior could go wrong.

What does “securing AI” mean in practice?

AI adoption opens new security risks that blur the boundaries between traditional security disciplines. A single malicious interaction with an AI model could involve identity misuse, sensitive data exposure, application logic abuse, and supply chain risk – all within a single workflow. Protecting this dynamic and rapidly evolving attack surface requires an approach that spans identity security, cloud security, application security, data security, software development security, and more.

The task for security leaders is to implement the tools, policies, and frameworks to mitigate these novel, expansive, and cross-disciplinary risks.

However, within most enterprises, AI policy creation remains in its infancy. Just 37% of security leaders report that their organization has a formal AI policy, representing a small but worrisome decrease from last year. Conversations about AI abound: in 52% of organizations, there’s discussion about an AI policy. Still, talk is cheap, and leaders will need to take action if they’re to successfully enable secure AI innovation.

To govern and protect their AI systems, organizations must take a multi-pronged approach. This requires building out policies, but it also demands that they are able to:

  • Monitor the prompts driving GenAI assistants and agents in real time. Organizations must be able to inspect prompts, sessions, and responses across enterprise GenAI tools, low- and high-code environments, and SaaS and SASE so that they can detect clever conversational prompt attacks and malicious chaining.
  • Secure all business AI agent identities. Security teams need to identify all the agents acting within their environment and supply chain, map their connections and interactions via MCP and services like Amazon S3, and audit their behavior across the cloud, SaaS environments, and on the network and endpoint devices.
  • Maintain centralized, comprehensive visibility. Understanding intent, assessing risks, and enforcing policies all require that security teams have a single view that spans AI interactions across the entire business.
  • Discover and control shadow AI. Teams need to be able to identify unsanctioned AI activities, distinguish the misuse of legitimate tools from their appropriate use, and apply policies to protect data, while guiding users towards approved solutions.

Scaling AI safely and responsibly

The approach that most cybersecurity vendors have taken – using historical patterns to predict future threats – doesn’t work well for AI systems. Because AI changes its behavior in response to the information it encounters while taking action, previous patterns don’t indicate what it will do next. Looking at past attacks can’t tell you how complex models will behave in your individual business.

Securing AI requires interpreting ambiguous interactions, uncovering subtleties that reveal intent within extended conversations, understanding how access accumulates over time, and recognizing when behavior – both human and machine – begins to drift towards areas of risk. To do this, you need to understand what “normal” looks like in each unique organization: how users, systems, applications, and AI agents behave, how they communicate, and how data flows between them.

Darktrace has spent more than a decade designing AI-powered solutions that can understand and adapt to evolving behavior in complex environments. This technology learns directly from the environment it protects, identifying malicious actions that deviate from normal operations, so that it can stop AI-related threats on the very first encounter.

As AI adoption reshapes enterprise operations, humans and machines will collaborate more and more often. This collaboration might dramatically expand the attack surface, but it also has the potential to be a force multiplier for defenders.

Explore the full State of AI Cybersecurity 2026 report for deeper insights into how security leaders are responding to AI-driven risks.

Learn more about securing AI in your enterprise.

[related-resource]

Continue reading
About the author
The Darktrace Community
Your data. Our AI.
Elevate your network security with Darktrace AI