
How Phishing Attacks Are Becoming Harder to Identify

Learn about the cyber risks posed by advanced phishing attacks and how AI can enhance security solutions to defend against them.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
The Darktrace Community
March 20, 2024

The state of email security and phishing attacks

Employees send and receive hundreds of emails a day to keep businesses moving. Unfortunately, it just takes one employee to interact with an undetected phishing email to potentially put an entire organization at risk from cyber disruption. Attackers know this, which is why they continue to develop and improve email phishing attacks.  

Increased attack sophistication makes it harder than ever for traditional cyber security solutions like SEGs, firewalls, and spam filters to detect and mitigate increasingly novel and sophisticated email threats.

When there are tell-tale signs of a threat, these solutions can identify an incoming message as suspicious: an email from an unknown sender, a message riddled with poor spelling and grammar, or one that pushes the recipient to respond to an unexpected but supposedly urgent request.

Ideally, those phishing attacks are blocked by security measures before they ever reach the victim’s inbox. But more and more often they are not: as phishing campaigns become more advanced, attackers are consistently bypassing traditional protections and getting through to exploit victims.

Darktrace email threat reporting

In its End of Year Threat Report, Darktrace analyzed over 10 million phishing emails targeting customer environments between September 1 and December 31, 2023.  Our findings signal that attackers are starting to take advantage of advancements in artificial intelligence (AI), including using Generative AI tools such as Large Language Models (LLMs) to create more convincing and sophisticated phishing messages – and at scale.

LLMs and Phishing

With the right AI prompts, attackers can use these LLMs to help write convincing email messages designed to target specific countries, companies or even individuals – all without the suspicious hallmarks which are traditionally associated with standard phishing attacks. The attackers don’t even need to speak the language of the individuals or groups they’re targeting.  LLMs lower language barriers for attackers; using their native tongue, they can simply ask the Generative AI to write a message in the language of their choosing.  

These techniques are designed to build trust and manipulate recipients into giving up sensitive information like user credentials, intellectual property or bank information or coerce them into downloading malicious payloads which can be used to launch further attacks on business infrastructure.  With the appropriate research, attackers can tailor the messages to increase the chances of being successful, like making them look like a legitimate company email or request.

Social engineering phishing attacks

A year ago, Darktrace shared research which found a 135% increase in ‘novel social engineering attacks’ in the first two months of 2023, corresponding with the widespread adoption of ChatGPT. These novel phishing attacks showed a strong linguistic deviation from other phishing emails, suggesting that Generative AI was already providing an avenue for threat actors to craft sophisticated and targeted attacks at speed and scale.

We’ve seen this trend continue. Our End of Year Threat Report found that 38% of the phishing emails analyzed used novel social engineering techniques.

Attackers are also deploying another technique to make phishing emails look more convincing – they’re making the emails themselves longer and more sophisticated.  

A potential victim might be suspicious of an ‘urgent’ email which prompts them to take action without an explanation - but if there’s additional context in the text, it adds an aura of legitimacy which is difficult to act against.

And threat actors know this: 28% of the phishing emails analyzed by Darktrace over the period contained a “significant” amount of text, meaning over 1,000 characters, which equates to more than 200 words.

It’s a sign that attackers are innovating and bolstering their efforts to craft sophisticated phishing campaigns, potentially leveraging Generative AI tools to automate social engineering activity by creating longer, more convincing phishing emails.  

QR code phishing

But this is far from the only innovative method attackers are using to bypass traditional security defenses. Among the more than 10 million emails analyzed during the reporting period, Darktrace/Email detected over 639,000 malicious QR codes within the messages.

Malicious QR codes placed within emails have become an increasingly common form of phishing attack, especially as QR codes have become a more common way of sharing links to information or to product purchase pages in recent years.

Attackers are deploying QR codes because they provide a way of directing unsuspecting victims to malicious websites or download links without needing to use a traditional phishing URL.  

For attackers, the advantage of embedding QR codes is that while traditional security solutions actively look to identify and mitigate phishing URLs, malicious QR codes are far more difficult for them to detect.
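
From the defender’s side, the countermeasure is conceptually simple: treat the payload behind a QR code like any other link. The sketch below is an illustration of that idea, not Darktrace’s detection logic; it assumes the Pillow and pyzbar libraries are installed and extracts URLs from QR codes found in an image attachment so they can be vetted the same way as links in the email body.

```python
# Illustrative sketch (not Darktrace's detection logic): decode QR codes in an
# email image attachment and surface any embedded URLs for the same reputation
# checks applied to ordinary links. Assumes Pillow and pyzbar are installed.
from PIL import Image
from pyzbar.pyzbar import decode

def urls_from_qr_image(path: str) -> list:
    """Decode every QR code in an image and return payloads that look like URLs."""
    payloads = [symbol.data.decode("utf-8", errors="replace")
                for symbol in decode(Image.open(path))]
    return [p for p in payloads if p.lower().startswith(("http://", "https://"))]

# Example usage with a hypothetical attachment file name.
for url in urls_from_qr_image("attachment.png"):
    print("QR-embedded link to vet:", url)
```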

Applying AI to email security

Traditional security solutions which rely heavily on previously identified malicious emails and known bad senders are struggling to identify and defend against these novel and increasingly sophisticated email threats.

But by using AI that learns the unique digital environment and patterns of each business, Darktrace/Email can recognize the subtle deviations in expected email activity and determine whether any given email could represent a threat to the business. It is then able to make highly accurate decisions to mitigate and neutralize any email attack it faces, helping to keep your organization safe from cyber disruption.
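
As a deliberately tiny stand-in for that idea, and not Darktrace’s actual models, the sketch below scores an inbound message against a mailbox’s previously observed patterns and treats accumulated deviations as grounds for holding the message; every field name and threshold here is a hypothetical simplification.

```python
# Toy anomaly scoring for an inbound email: count deviations from what has
# previously been normal for this mailbox. Field names are illustrative only.
def deviation_score(message: dict, profile: dict) -> int:
    score = 0
    if message["sender_domain"] not in profile["known_sender_domains"]:
        score += 1   # first contact from this domain
    if any(d not in profile["seen_link_domains"] for d in message["link_domains"]):
        score += 1   # links to never-before-seen domains
    if message["sent_hour"] not in profile["usual_hours"]:
        score += 1   # arrives outside the mailbox's usual rhythm
    return score

profile = {"known_sender_domains": {"partner.example", "supplier.example"},
           "seen_link_domains": {"partner.example"},
           "usual_hours": range(7, 19)}
message = {"sender_domain": "invoice-alerts.example",
           "link_domains": ["credential-harvest.example"],
           "sent_hour": 3}
print("deviation score:", deviation_score(message, profile))   # 3 of 3 checks flagged
```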

It’s therefore imperative that in the battle against ever-evolving, ever more sophisticated cyber threats, defenders are also embracing AI to keep businesses safe. By effectively applying AI to cyber security challenges, defenders can take a proactive approach to cyber security, staying one step ahead of malicious attackers, with real-time detection and automated response to known and unknown threats looking to disrupt the business via the inbox.  

Darktrace/Email was recently awarded a 2024 AI Excellence Award for Machine Learning by Business Intelligence Group.

Join Darktrace on 9 April for a virtual event to explore the latest innovations needed to get ahead of the rapidly evolving threat landscape. Register today to hear more about our latest innovations coming to Darktrace’s offerings.



November 20, 2025

Managing OT Remote Access with Zero Trust Control & AI Driven Detection

The shift toward IT-OT convergence

Recently, industrial environments have become more connected and dependent on external collaboration. As a result, truly air-gapped OT systems have become less of a reality, especially when working with OEM-managed assets, legacy equipment requiring remote diagnostics, or third-party integrators who routinely connect in.

This convergence, whether driven by digital transformation mandates or operational efficiency goals, is making OT environments more connected, more automated, and more intertwined with IT systems. While it opens new possibilities, it also exposes these environments to risks that traditional OT architectures were never designed to withstand.

The modernization gap and why visibility alone isn’t enough

The push toward modernization has introduced new technology into industrial environments, deepening the convergence between IT and OT and eroding visibility. However, regaining that visibility is just a starting point. Visibility only tells you what is connected, not how access should be governed. And this is where the divide between IT and OT becomes unavoidable.

Security strategies that work well in IT often fall short in OT, where even small missteps can lead to environmental risk, safety incidents, or costly disruptions. Add in mounting regulatory pressure to secure access, enforce segmentation, and demonstrate accountability, and it becomes clear: visibility alone is no longer sufficient. What industrial environments need now is precision. They need control. And they need to implement both without interrupting operations. All this requires identity-based access controls, real-time session oversight, and continuous behavioral detection.

The risk of unmonitored remote access

This risk becomes most evident during critical moments, such as when an OEM needs urgent access to troubleshoot a malfunctioning asset.

Under that time pressure, access is often provisioned quickly with minimal verification, bypassing established processes. Once inside, there’s little to no real-time oversight of user actions, whether they’re executing commands, changing configurations, or moving laterally across the network. These actions typically go unlogged or unnoticed until something breaks. At that point, teams are stuck piecing together fragmented logs or post-incident forensics, with no clear line of accountability.

In environments where uptime is critical and safety is non-negotiable, this level of uncertainty simply isn’t sustainable.

The visibility gap: Who’s doing what, and when?

The fundamental issue we encounter is the disconnect between who has access and what they are doing with it.  

Traditional access management tools may validate credentials and restrict entry points, but they rarely provide real-time visibility into in-session activity. Even fewer can distinguish between expected vendor behavior and subtle signs of compromise, misuse or misconfiguration.  

As a result, OT and security teams are often left blind to the most critical part of the puzzle: intent and behavior.

Closing the gaps with zero trust controls and AI‑driven detection

Managing remote access in OT is no longer just about granting a connection; it’s about enforcing strict access parameters while continuously monitoring for abnormal behavior. This requires a two-pronged approach: precision access control, and intelligent, real-time detection.

Zero Trust access controls provide the foundation. By enforcing identity-based, just-in-time permissions, OT environments can ensure that vendors and remote users only access the systems they’re explicitly authorized to interact with, and only for the time they need. These controls should be granular enough to limit access down to specific devices, commands, or functions. By applying these principles consistently across the Purdue Model, organizations can eliminate reliance on catch-all VPN tunnels, jump servers, and brittle firewall exceptions that expose the environment to excess risk.
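
As a simplified illustration of what such a grant can look like when expressed in code, the sketch below models a time-boxed, deny-by-default access grant for a vendor engineer; the field names, asset identifiers, and actions are illustrative assumptions rather than any specific product’s API.

```python
# Minimal sketch of a just-in-time, least-privilege remote access grant.
# All identifiers are hypothetical; a real deployment would tie this to
# SSO/MFA and to the asset inventory.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class RemoteAccessGrant:
    user: str                   # verified identity of the remote engineer
    asset_ids: frozenset        # only the devices named in the work order
    allowed_actions: frozenset  # e.g. {"read_diagnostics"}
    expires_at: datetime        # the grant lapses automatically

    def permits(self, user: str, asset_id: str, action: str) -> bool:
        """Deny by default; allow only the named user, asset, action, and window."""
        return (user == self.user
                and asset_id in self.asset_ids
                and action in self.allowed_actions
                and datetime.now(timezone.utc) < self.expires_at)

# Example: a two-hour window to pull diagnostics from one PLC, nothing more.
grant = RemoteAccessGrant(
    user="oem-engineer@vendor.example",
    asset_ids=frozenset({"plc-line3-07"}),
    allowed_actions=frozenset({"read_diagnostics"}),
    expires_at=datetime.now(timezone.utc) + timedelta(hours=2),
)
print(grant.permits("oem-engineer@vendor.example", "plc-line3-07", "read_diagnostics"))  # True
print(grant.permits("oem-engineer@vendor.example", "plc-line3-07", "write_config"))      # False
```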

Access control is only one part of the equation

Darktrace / OT complements zero trust controls with continuous, AI-driven behavioral detection. Rather than relying on static rules or pre-defined signatures, Darktrace uses Self-Learning AI to build a live, evolving understanding of what’s “normal” in the environment, across every device, protocol, and user. This enables real-time detection of subtle misconfigurations, credential misuse, or lateral movement as they happen, not after the fact.

By correlating user identity and session activity with behavioral analytics, Darktrace gives organizations the full picture: who accessed which system, what actions they performed, how those actions compared to historical norms, and whether any deviations occurred. It eliminates guesswork around remote access sessions and replaces it with clear, contextual insight.

Importantly, Darktrace distinguishes between operational noise and true cyber-relevant anomalies. Unlike other tools that lump everything, from CVE alerts to routine activity, into a single stream, Darktrace separates legitimate remote access behavior from potential misuse or abuse. This means organizations can both audit access from a compliance standpoint and be confident that if a session is ever exploited, the misuse will be surfaced as a high-fidelity, cyber-relevant alert. This approach serves as a compensating control, ensuring that even if access is overextended or misused, the behavior is still visible and actionable.

If a session deviates from learned baselines, such as an unusual command sequence, new lateral movement path, or activity outside of scheduled hours, Darktrace can flag it immediately. These insights can be used to trigger manual investigation or automated enforcement actions, such as access revocation or session isolation, depending on policy.
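
The sketch below is a deliberately simplified stand-in for that comparison, using a handful of hypothetical session and baseline fields; Darktrace’s Self-Learning AI models far richer behavior, but the underlying shape of the check (comparing a live session against learned norms) is the same.

```python
# Simplified baseline comparison for a remote OT session. The baseline and
# session structures are hypothetical stand-ins for learned behavior.
from datetime import datetime

def session_anomalies(session: dict, baseline: dict) -> list:
    """Return human-readable reasons a session deviates from its baseline."""
    reasons = []
    hour = datetime.fromisoformat(session["start"]).hour
    if hour not in baseline["usual_hours"]:
        reasons.append(f"activity outside scheduled hours (hour={hour})")
    new_cmds = set(session["commands"]) - set(baseline["seen_commands"])
    if new_cmds:
        reasons.append(f"unusual command sequence: {sorted(new_cmds)}")
    new_hosts = set(session["hosts_reached"]) - set(baseline["usual_hosts"])
    if new_hosts:
        reasons.append(f"new lateral movement path to {sorted(new_hosts)}")
    return reasons

baseline = {"usual_hours": range(8, 18),
            "seen_commands": ["read_diagnostics", "list_tags"],
            "usual_hosts": ["plc-line3-07"]}
session = {"start": "2025-11-20T02:14:00",
           "commands": ["read_diagnostics", "write_config"],
           "hosts_reached": ["plc-line3-07", "eng-workstation-12"]}
for reason in session_anomalies(session, baseline):
    print("flag:", reason)   # each flag could trigger investigation or isolation
```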

This layered approach enables real-time decision-making, supports uninterrupted operations, and delivers complete accountability for all remote activity, without slowing down critical work or disrupting industrial workflows.

Where Zero Trust Access Meets AI‑Driven Oversight:

  • Granular Access Enforcement: Role-based, just-in-time access that aligns with Zero Trust principles and meets compliance expectations.
  • Context-Enriched Threat Detection: Self-Learning AI detects anomalous OT behavior in real time and ties threats to access events and user activity.
  • Automated Session Oversight: Behavioral anomalies can trigger alerting or automated controls, reducing time-to-contain while preserving uptime.
  • Full Visibility Across Purdue Layers: Correlated data connects remote access events with device-level behavior, spanning IT and OT layers.
  • Scalable, Passive Monitoring: Passive behavioral learning enables coverage across legacy systems and air-gapped environments, no signatures, agents, or intrusive scans required.

Complete security without compromise

We no longer have to choose between operational agility and security control, or between visibility and simplicity. A Zero Trust approach, reinforced by real-time AI detection, enables secure remote access that is both permission-aware and behavior-aware, tailored to the realities of industrial operations and scalable across diverse environments.

Because when it comes to protecting critical infrastructure, access without detection is a risk and detection without access control is incomplete.

About the author
Pallavi Singh
Product Marketing Manager, OT Security & Compliance

November 20, 2025

Securing Generative AI: Managing Risk in Amazon Bedrock with Darktrace / CLOUD

Security risks and challenges of generative AI in the enterprise

Generative AI and managed foundation model platforms like Amazon Bedrock are transforming how organizations build and deploy intelligent applications. From chatbots to summarization tools, Bedrock enables rapid agent development by connecting foundation models to enterprise data and services. But with this flexibility comes a new set of security challenges, especially around visibility, access control, and unintended data exposure.

As organizations move quickly to operationalize generative AI, traditional security controls are struggling to keep up. Bedrock’s multi-layered architecture, spanning agents, models, guardrails, and underlying AWS services, creates new blind spots that standard posture management tools weren’t designed to handle. Visibility gaps make it difficult to know which datasets agents can access, or how model outputs might expose sensitive information. Meanwhile, developers often move faster than security teams can review IAM permissions or validate guardrails, leading to misconfigurations that expand risk. In shared-responsibility environments like AWS, this complexity can blur the lines of ownership, making it critical for security teams to have continuous, automated insight into how AI systems interact with enterprise data.

Darktrace / CLOUD provides comprehensive visibility and posture management for Bedrock environments, automatically detecting and proactively scanning agents and knowledge bases, helping teams secure their AI infrastructure without slowing down expansion and innovation.

A real-world scenario: When access goes too far

Consider a scenario where an organization deployed a Bedrock agent to help internal staff quickly answer business questions using company knowledge. The agent was connected to a knowledge base pointing at documents stored in Amazon S3 and given access to internal services via APIs.

To get the system running quickly, developers assigned the agent a broad execution role. This role granted access to multiple S3 buckets, including one containing sensitive customer records. The over-permissioning wasn’t malicious; it stemmed from the complexity of IAM policy creation and the difficulty of identifying which buckets held sensitive data.
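
To make the gap concrete, compare a catch-all S3 grant of the kind a hurried deployment might use with one scoped to the single bucket the agent actually needs. The bucket names and statements below are hypothetical illustrations, not taken from a real environment.

```python
# Hypothetical IAM policy statements, expressed as Python dicts for clarity.
import json

too_broad = {
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": "arn:aws:s3:::*",            # every bucket, including customer records
}

least_privilege = {
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": [
        "arn:aws:s3:::internal-kb-docs",      # only the approved knowledge base bucket
        "arn:aws:s3:::internal-kb-docs/*",
    ],
}

policy = {"Version": "2012-10-17", "Statement": [least_privilege]}
print(json.dumps(policy, indent=2))
```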

The team assumed the agent would only use the intended documents. However, they did not fully consider how employees might interact with the agent or how it might act on the data it processed.  

When an employee asked a routine question about quarterly customer activity, the agent surfaced insights that included regulated data, revealing it to someone without the appropriate access.

This wasn’t a case of prompt injection or model manipulation. The agent simply followed instructions and used the resources it was allowed to access. The exposure was valid under IAM policy, but entirely unintended.

How Darktrace / CLOUD prevents these risks

Darktrace / CLOUD helps organizations avoid scenarios like unintended data exposure by providing layered visibility and intelligent analysis across Bedrock and SageMaker environments. Here’s how each capability works in practice:

Configuration-level visibility

Bedrock deployments often involve multiple components: agents, guardrails, and foundation models, each with its own configuration. Darktrace / CLOUD indexes these configurations so teams can:

  1. Inspect deployed agents and confirm they are connected only to approved data sources.
  2. Track evaluation job setups and their links to Amazon S3 datasets, uncovering hidden data flows that could expose sensitive information.
  3. Maintain full awareness of all AI components, reducing the chance of overlooked assets introducing risk.

By unifying configuration data across Bedrock, SageMaker, and other AWS services, Darktrace / CLOUD provides a single source of truth for AI asset visibility. Teams can instantly see how each component is configured and whether it aligns with corporate security policies. This eliminates guesswork, accelerates audits, and helps prevent misaligned settings from creating data exposure risks.
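
For teams that want to spot-check this inventory by hand, a rough sketch with boto3 might look like the following; it assumes AWS credentials, a configured region, and a boto3 version that includes the Bedrock Agents client, and it only prints raw configuration data that a platform like Darktrace / CLOUD would correlate automatically.

```python
# Rough, do-it-yourself inventory of Bedrock agents and knowledge base data
# sources. Assumes credentials and access to the Bedrock Agents API.
import boto3

bedrock_agent = boto3.client("bedrock-agent")

for agent in bedrock_agent.list_agents().get("agentSummaries", []):
    print(f"agent: {agent.get('agentName')} ({agent.get('agentStatus')})")

for kb in bedrock_agent.list_knowledge_bases().get("knowledgeBaseSummaries", []):
    kb_id = kb.get("knowledgeBaseId")
    print(f"knowledge base: {kb.get('name')} [{kb_id}]")
    for ds in bedrock_agent.list_data_sources(knowledgeBaseId=kb_id).get("dataSourceSummaries", []):
        # Each data source points at backing storage (commonly an S3 bucket)
        # that should be checked against the list of approved locations.
        print(f"  data source: {ds.get('name')} [{ds.get('dataSourceId')}]")
```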

Figure 1: Agents for Bedrock relationship views

Architectural awareness

Complex AI environments can make it difficult to understand how components interact. Darktrace / CLOUD generates real-time architectural diagrams that:

  1. Visualize relationships between agents, models, and datasets.
  2. Highlight unintended data access paths or risk propagation across interconnected services.

This clarity helps security teams spot vulnerabilities before they lead to exposure. By surfacing these relationships dynamically, Darktrace / CLOUD enables proactive risk management, helping teams identify architectural drift, redundant data connections, or unmonitored agents before attackers or accidental misuse can exploit them. This reduces investigation time and strengthens compliance confidence across AI workloads.
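
A toy example of why this view matters: build a relationship graph from (hypothetical) inventory data and walk every path from the agent to its data stores. The unintended route to a customer-records bucket becomes immediately visible.

```python
# Hypothetical edge list: agent -> tools/knowledge bases -> backing data stores.
edges = {
    "agent:internal-helper": ["kb:company-docs", "lambda:ticket-lookup"],
    "kb:company-docs": ["s3:internal-kb-docs", "s3:customer-records"],  # second bucket is unintended
    "lambda:ticket-lookup": ["dynamodb:tickets"],
}

def paths(node, trail=()):
    """Yield every path from `node` down to a terminal data store."""
    trail = trail + (node,)
    children = edges.get(node, [])
    if not children:
        yield trail
    for child in children:
        yield from paths(child, trail)

for p in paths("agent:internal-helper"):
    print(" -> ".join(p))
# agent:internal-helper -> kb:company-docs -> s3:internal-kb-docs
# agent:internal-helper -> kb:company-docs -> s3:customer-records   <- the risky path
# agent:internal-helper -> lambda:ticket-lookup -> dynamodb:tickets
```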

Figure 2: Full Bedrock agent architecture including Lambda and IAM permission mapping

Access & privilege analysis

IAM permissions apply to every AWS service, including Bedrock. When Bedrock agents assume IAM roles that were broadly defined for other workloads, they often inherit excessive privileges. Without strict least-privilege controls, the agent may have access to far more data and services than required, creating avoidable security exposure. Darktrace / CLOUD:

  1. Reviews execution roles and user permissions to identify excessive privileges.
  2. Flags anomalies that could enable privilege escalation or unauthorized API actions.

This ensures agents operate within the principle of least privilege, reducing attack surface. Beyond flagging risky roles, Darktrace / CLOUD continuously learns normal patterns of access to identify when permissions are abused or expanded in real time. Security teams gain context into why an action is anomalous and how it could affect connected assets, allowing them to take targeted remediation steps that preserve productivity while minimizing exposure.
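
As a rough illustration of the kind of review this automates, a hand-rolled check with the standard IAM APIs might flag inline-policy statements that grant wildcard S3 access; the role name below is a hypothetical placeholder, and managed policies would need a similar pass via list_attached_role_policies.

```python
# Hand-rolled least-privilege spot check for an agent's execution role.
# The role name is hypothetical; assumes boto3 and IAM read permissions.
import boto3

iam = boto3.client("iam")
ROLE_NAME = "bedrock-agent-execution-role"   # hypothetical

def overly_broad_s3_statements(role_name: str):
    """Yield (policy_name, statement) pairs allowing S3 actions on wildcard resources."""
    for policy_name in iam.list_role_policies(RoleName=role_name)["PolicyNames"]:
        doc = iam.get_role_policy(RoleName=role_name, PolicyName=policy_name)["PolicyDocument"]
        statements = doc["Statement"]
        statements = [statements] if isinstance(statements, dict) else statements
        for stmt in statements:
            if stmt.get("Effect") != "Allow":
                continue
            actions = stmt.get("Action", [])
            actions = [actions] if isinstance(actions, str) else actions
            resources = stmt.get("Resource", [])
            resources = [resources] if isinstance(resources, str) else resources
            s3_action = any(a == "*" or a.startswith("s3:") for a in actions)
            wildcard_resource = any(r in ("*", "arn:aws:s3:::*") for r in resources)
            if s3_action and wildcard_resource:
                yield policy_name, stmt

for policy_name, stmt in overly_broad_s3_statements(ROLE_NAME):
    print(f"over-broad statement in inline policy {policy_name}: {stmt}")
```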

Misconfiguration detection

Misconfigurations are a leading cause of cloud security incidents. Darktrace / CLOUD automatically detects:

  1. Publicly accessible S3 buckets that may contain sensitive training data.
  2. Missing guardrails in Bedrock deployments, which can allow inappropriate or sensitive outputs.
  3. Other issues such as lack of encryption, direct internet access, and root access to models.  

By surfacing these risks early, teams can remediate before they become exploitable. Darktrace / CLOUD turns what would otherwise be manual reviews into automated, continuous checks, reducing time to discovery and preventing small oversights from escalating into full-scale incidents. This automated assurance allows organizations to innovate confidently while keeping their AI systems compliant and secure by design.
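
For a sense of what one of these checks involves when done by hand, the hedged boto3 sketch below asks whether a hypothetical knowledge base bucket is exposed publicly, using the bucket policy status and public access block settings.

```python
# Manual version of a single misconfiguration check: is this bucket public?
# Bucket name is hypothetical; assumes boto3 and permission to read the
# bucket's policy status and public access block configuration.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
BUCKET = "internal-kb-docs"   # hypothetical knowledge base bucket

def bucket_looks_public(bucket: str) -> bool:
    try:
        block = s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
        fully_blocked = all(block.values())
    except ClientError:
        fully_blocked = False    # no public access block configured at all
    try:
        policy_public = s3.get_bucket_policy_status(Bucket=bucket)["PolicyStatus"]["IsPublic"]
    except ClientError:
        policy_public = False    # no bucket policy attached
    return policy_public or not fully_blocked

status = "review needed" if bucket_looks_public(BUCKET) else "not publicly accessible"
print(f"{BUCKET}: {status}")
```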

Figure 3: Configuration data for an Anthropic foundation model

Behavioral anomaly detection

Even with correct configurations, behavior can signal emerging threats. Using AWS CloudTrail, Darktrace / CLOUD:

  1. Monitors for unusual data access patterns, such as agents querying unexpected datasets.
  2. Detects anomalous training job invocations that could indicate attempts to pollute models.

This real-time behavioral insight helps organizations respond quickly to suspicious activity. Because it learns the “normal” behavior of each Bedrock component over time, Darktrace / CLOUD can detect subtle shifts that indicate emerging risks, before formal indicators of compromise appear. The result is faster detection, reduced investigation effort, and continuous assurance that AI-driven workloads behave as intended.
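
A greatly simplified, baseline-free version of the same idea can be sketched directly against CloudTrail; the allowlist of expected identities below is a hypothetical stand-in for the behavioral profile that Darktrace / CLOUD learns automatically.

```python
# Pull the last 24 hours of Bedrock API activity from CloudTrail and surface
# calls made by identities outside a (hypothetical) expected set. Assumes
# boto3, CloudTrail enabled, and permission to call LookupEvents.
from datetime import datetime, timedelta, timezone
import boto3

cloudtrail = boto3.client("cloudtrail")
EXPECTED_IDENTITIES = {"bedrock-agent-execution-role"}   # hypothetical baseline

response = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventSource",
                       "AttributeValue": "bedrock.amazonaws.com"}],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=24),
)

for event in response.get("Events", []):
    user = event.get("Username", "unknown")
    if user not in EXPECTED_IDENTITIES:
        print(f"unexpected Bedrock activity: {event.get('EventName')} "
              f"by {user} at {event.get('EventTime')}")
```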

Conclusion

Generative AI introduces transformative capabilities but also complex risks that evolve alongside innovation. The flexibility of services like Amazon Bedrock enables new efficiencies and insights, yet even legitimate use can inadvertently expose sensitive data or bypass security controls. As organizations embrace AI at scale, the ability to monitor and secure these environments holistically, without slowing development, is becoming essential.

By combining deep configuration visibility, architectural insight, privilege and behavior analysis, and real-time threat detection, Darktrace gives security teams continuous assurance across AI tools like Bedrock and SageMaker. Organizations can innovate with confidence, knowing their AI systems are governed by adaptive, intelligent protection.


About the author
Adam Stevens
Senior Director of Product, Cloud | Darktrace