February 1, 2021

Explore AI Email Security Approaches with Darktrace

Stay informed on the latest AI approaches to email security. Explore Darktrace's comparisons to find the best solution for your cybersecurity needs!
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Dan Fein
VP, Product

Innovations in artificial intelligence (AI) have fundamentally changed the email security landscape in recent years, but it can often be hard to determine what makes one system different from the next. In reality, under that umbrella term there exists a significant distinction in approach, one that may determine whether the technology provides genuine protection or merely the perception of defense.

One backward-looking approach involves feeding a machine thousands of emails that have already been deemed to be malicious, and training it to look for patterns in these emails in order to spot future attacks. The second approach uses an AI system to analyze the entirety of an organization’s real-world data, enabling it to establish a notion of what is ‘normal’ and then spot subtle deviations indicative of an attack.

Below, we compare the relative merits of each approach, with special consideration to novel attacks that leverage the latest news headlines to bypass machine learning systems trained on historical data sets. Training a machine on previously identified ‘known bads’ is only advantageous in certain, specific contexts that don’t change over time: recognizing the intent behind an email, for example. However, an effective email security solution must also incorporate a self-learning approach that understands ‘normal’ in the context of an organization in order to identify unusual and anomalous emails and catch even novel attacks.

Signatures – a backward-looking approach

Over the past few decades, cyber security technologies have looked to mitigate risk by preventing previously seen attacks from occurring again. In the early days, when the lifespan of a given strain of malware or the infrastructure of an attack was in the range of months and years, this method was satisfactory. But the approach inevitably results in playing catch-up with malicious actors: it always looks to the past to guide detection for the future. With decreasing lifetimes of attacks, where a domain could be used in a single email and never seen again, this historic-looking signature-based approach is now being widely replaced by more intelligent systems.

Training a machine on ‘bad’ emails

The first AI approach we often see in the wild involves harnessing an extremely large data set containing thousands or millions of emails. Once these emails have been collected, an AI is trained to look for common patterns in the malicious ones. The system then updates its models, rule sets, and blacklists based on that data.

This method certainly represents an improvement over traditional rules and signatures, but it does not escape the fact that it is still reactive, and unable to stop new attack infrastructure and new types of email attacks. It simply automates that flawed, traditional approach – only instead of a human updating the rules and signatures, a machine updates them.

Relying on this approach alone has one basic but critical flaw: it does not enable you to stop new types of attacks that it has never seen before. It accepts that there has to be a ‘patient zero’ – or first victim – in order to succeed.

The industry is beginning to acknowledge the challenges with this approach, and huge amounts of resources – both automated systems and security researchers – are being thrown into minimizing its limitations. This includes leveraging a technique called “data augmentation” that involves taking a malicious email that slipped through and generating many “training samples” using open-source text augmentation libraries to create “similar” emails – so that the machine learns not only the missed phish as ‘bad’, but several others like it – enabling it to detect future attacks that use similar wording, and fall into the same category.
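As an illustration of that augmentation idea, the sketch below generates paraphrased ‘training samples’ from a single missed phish. It uses a hand-rolled synonym map rather than a real open-source augmentation library, and the phrases, labels and helper names are purely illustrative.

```python
import random

# Toy synonym map: a real pipeline would use an open-source text augmentation
# library; these words and substitutions are purely illustrative.
SYNONYMS = {
    "urgent": ["immediate", "critical", "time-sensitive"],
    "invoice": ["bill", "payment request", "statement"],
    "verify": ["confirm", "validate", "re-enter"],
    "account": ["profile", "credentials", "login"],
}

def augment(text: str, n_variants: int = 5, swap_prob: float = 0.5) -> list[str]:
    """Generate paraphrased variants of a missed phishing email by randomly
    swapping words for synonyms."""
    variants = []
    for _ in range(n_variants):
        words = []
        for word in text.split():
            key = word.lower().strip(".,:!?")
            if key in SYNONYMS and random.random() < swap_prob:
                words.append(random.choice(SYNONYMS[key]))
            else:
                words.append(word)
        variants.append(" ".join(words))
    return variants

# The one missed phish becomes many labelled 'bad' training samples.
missed_phish = "Urgent: verify your account to view the attached invoice"
training_samples = [(variant, "malicious") for variant in augment(missed_phish)]
```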

But pouring all this time and effort into trying to fix an unsolvable problem is putting your eggs in the wrong basket. Why try to patch a flawed system rather than change the game altogether? To spell out the limitations of this approach, let us look at a situation where the nature of the attack is entirely new.

The rise of ‘fearware’

When the global pandemic hit, and governments began enforcing travel bans and imposing stringent restrictions, there was undoubtedly a collective sense of fear and uncertainty. As explained previously in this blog, cyber-criminals were quick to capitalize on this, taking advantage of people’s desire for information to send out topical emails related to COVID-19 containing malware or credential-grabbing links.

These emails often spoofed the Centers for Disease Control and Prevention (CDC), or later on, as the economic impact of the pandemic began to take hold, the Small Business Administration (SBA). As the global situation shifted, so did attackers’ tactics. And in the process, over 130,000 new domains related to COVID-19 were purchased.

Let’s now consider how the above approach to email security might fare when faced with these new email attacks. The question becomes: how can you train a model to look out for emails containing ‘COVID-19’ when the term hadn’t even been invented yet?

And while COVID-19 is the most salient example of this, the same reasoning follows for every single novel and unexpected news cycle that attackers are leveraging in their phishing emails to evade tools using this approach – and attracting the recipient’s attention as a bonus. Moreover, if an email attack is truly targeted to your organization, it might contain bespoke and tailored news referring to a very specific thing that supervised machine learning systems could never be trained on.

This isn’t to say there’s not a time and a place in email security for looking at past attacks to set yourself up for the future. It just isn’t here.

Spotting intention

Darktrace uses this supervised approach for one specific purpose that is future-proof and not prone to change over time: analyzing the grammar and tone of an email in order to identify intention, asking questions like ‘Does this look like an attempt at inducement? Is the sender trying to solicit some sensitive information? Is this extortion?’ By training a system on an extremely large data set collected over a period of time, you can start to understand what, for instance, inducement looks like. This then enables you to easily spot future instances of inducement based on a common set of characteristics.

Training a system in this way works because, unlike news cycles and the topics of phishing emails, fundamental patterns in tone and language don’t change over time. An attempt at solicitation is always an attempt at solicitation, and will always bear common characteristics.

For this reason, this approach plays only one small part in a much larger engine. It gives an additional indication about the nature of the threat, but is not in itself used to determine anomalous emails.
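As a rough illustration of training on intent rather than topic, here is a minimal sketch of a supervised text classifier, assuming scikit-learn and a tiny hand-labelled corpus. The example emails, the label set and the model choice are invented for illustration and are not Darktrace’s actual model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus: in practice this would be an extremely large,
# hand-labelled data set collected over a long period of time.
emails = [
    "Please send me the login details for the finance portal",
    "Wire the outstanding balance today or the contract is void",
    "Attached is the agenda for Thursday's meeting",
    "I need your password to fix the issue on your mailbox",
    "Pay now or the files we copied will be released",
    "Thanks for the update, see you at the offsite",
]
labels = ["solicitation", "inducement", "benign", "solicitation", "extortion", "benign"]

# Word and phrase features capture tone and phrasing, which stay stable over
# time, unlike news-driven topics.
intent_model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
intent_model.fit(emails, labels)

print(intent_model.predict(["Could you share your VPN credentials with me?"]))
```

Because such a model scores intent rather than topic, it remains useful even as news cycles change; but, as noted above, it contributes only one indicator among many.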

Detecting the unknown unknowns

In addition to using the above approach to identify intention, Darktrace uses unsupervised machine learning, which starts with extracting and extrapolating thousands of data points from every email. Some of these are taken directly from the email itself, while others are only ascertainable by the above intention-type analysis. Additional insights are also gained from observing emails in the wider context of all available data across email, network and the cloud environment of the organization.

Only with this significantly larger and more comprehensive set of indicators – a far more complete description of the email – is the data fed into a topic-indifferent machine learning engine, which questions the data in millions of ways in order to understand whether it belongs, given the wider context of the typical ‘pattern of life’ for the organization. Monitoring all emails in conjunction allows the machine to establish things like the following (a simplified sketch follows the list):

  • Does this person usually receive ZIP files?
  • Does this supplier usually send links to Dropbox?
  • Has this sender ever logged in from China?
  • Do these recipients usually get the same emails together?
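To make the idea concrete, here is a minimal sketch of topic-indifferent anomaly scoring over per-email feature vectors, assuming scikit-learn’s IsolationForest. The handful of features shown is purely illustrative and far simpler than the thousands of indicators described above.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row describes one previously observed email with a few illustrative
# indicators: [has ZIP attachment, link to never-seen domain,
# sender previously seen, hour of day, recipient count]
history = np.array([
    [0, 0, 1, 9, 3],
    [0, 0, 1, 10, 2],
    [0, 1, 1, 14, 5],
    [0, 0, 1, 11, 1],
    [0, 0, 1, 16, 4],
])

# Learn the organization's own 'pattern of life' from its own traffic,
# with no labelled attacks involved at all.
model = IsolationForest(contamination="auto", random_state=0).fit(history)

# A new email: ZIP attachment, never-seen domain, unknown sender, 3am, one recipient.
new_email = np.array([[1, 1, 0, 3, 1]])
print(model.decision_function(new_email))  # lower score = more anomalous
```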

The technology identifies patterns across an entire organization and gains a continuously evolving sense of ‘self’ as the organization grows and changes. It is this innate understanding of what is and isn’t ‘normal’ that allows AI to spot the truly ‘unknown unknowns’ instead of just ‘new variations of known bads.’

This type of analysis brings an additional advantage in that it is language and topic agnostic: because it focuses on anomaly detection rather than finding specific patterns that indicate threat, it is effective regardless of whether an organization typically communicates in English, Spanish, Japanese, or any other language.

By layering both of these approaches, you can understand the intention behind an email and understand whether that email belongs given the context of normal communication. And all of this is done without ever making an assumption or having the expectation that you’ve seen this threat before.

Years in the making

It’s well established now that the legacy approach to email security has failed – and this makes it easy to see why existing recommendation engines are being applied to the cyber security space. At first glance, these solutions may be appealing to a security team, but highly targeted, truly unique spear phishing emails easily skirt these systems. They can’t be relied on to stop email threats on the first encounter, as they depend on known attacks with previously seen topics, domains, and payloads.

An effective, layered AI approach takes years of research and development. There is no single mathematical model to solve the problem of determining malicious emails from benign communication. A layered approach accepts that competing mathematical models each have their own strengths and weaknesses. It autonomously determines the relative weight these models should have and weighs them against one another to produce an overall ‘anomaly score’ given as a percentage, indicating exactly how unusual a particular email is in comparison to the organization’s wider email traffic flow.
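As a rough sketch of how competing models might be combined into a single percentage, consider the following. The model names, scores and weights are invented; in the approach described above, the relative weights would themselves be determined autonomously rather than fixed by hand.

```python
def overall_anomaly_score(model_scores: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-model anomaly scores (each in [0, 1]) into a single
    percentage, weighting each model by its assumed reliability."""
    total_weight = sum(weights[name] for name in model_scores)
    weighted_sum = sum(model_scores[name] * weights[name] for name in model_scores)
    return 100.0 * weighted_sum / total_weight

# Illustrative outputs from three hypothetical models for one email.
scores = {"link_rarity": 0.92, "sender_history": 0.40, "intent_tone": 0.75}
weights = {"link_rarity": 0.5, "sender_history": 0.2, "intent_tone": 0.3}

print(f"{overall_anomaly_score(scores, weights):.0f}% anomalous")
```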

It is time for email security to well and truly drop the assumption that you can look at threats of the past to predict tomorrow’s attacks. An effective AI cyber security system can identify abnormalities with no reliance on historical attacks, enabling it to catch truly unique novel emails on the first encounter – before they land in the inbox.



Compliance
September 5, 2025

Cyber Assessment Framework v4.0 Raises the Bar: 6 Questions every security team should ask about their security posture


What is the Cyber Assessment Framework?

The Cyber Assessment Framework (CAF) acts as a guide for organisations across the UK, specifically those in essential services, critical national infrastructure and regulated sectors, for assessing, managing and improving their cybersecurity, cyber resilience and cyber risk profile.

The guidance in the Cyber Assessment Framework aligns with regulations such as The Network and Information Systems Regulations (NIS), The Network and Information Security Directive (NIS2) and the Cyber Security and Resilience Bill.

What’s new with the Cyber Assessment Framework 4.0?

On 6 August 2025, the UK’s National Cyber Security Centre (NCSC) released Cyber Assessment Framework 4.0 (CAF v4.0), a pivotal update that reflects the increasingly complex threat landscape and the regulatory need for organisations to respond in smarter, more adaptive ways.

The Cyber Assessment Framework v4.0 introduces significant shifts in expectations, including, but not limited to:

  • Understanding threats in terms of the capabilities, methods and techniques of threat actors and the importance of maintaining a proactive security posture (A2.b)
  • The use of secure software development principles and practices (A4.b)
  • Ensuring threat intelligence is understood and utilised - with a focus on anomaly-based detection (C1.f)
  • Performance of proactive threat hunting with automation where appropriate (C2.a)

This blog post will focus on these components of the framework. However, we encourage readers to get the full scope of the framework by visiting the NCSC website, where the full framework is available.

In summary, the changes to the framework send a clear signal: the UK’s technical authority now expects organisations to move beyond static rule-based systems and embrace more dynamic, automated defences. For those responsible for securing critical national infrastructure and essential services, these updates are not simply technical preferences, but operational mandates.

At Darktrace, this evolution comes as no surprise. In fact, it reflects the approach we've championed since our inception.

Why Darktrace? Leading the way since 2013

Darktrace was built on the principle that detecting cyber threats in real time requires more than signatures, thresholds, or retrospective analysis. Instead, we pioneered a self-learning approach powered by artificial intelligence that understands the unique “normal” for every environment and uses this baseline to spot subtle deviations indicative of emerging threats.

From the beginning, Darktrace has understood that rules and lists will never keep pace with adversaries. That’s why we’ve spent over a decade developing AI that doesn't just alert, it learns, reasons, explains, and acts.

With Cyber Assessment Framework v4.0, the bar has been raised to meet this new reality. For technical practitioners tasked with evaluating their organisation’s readiness, there are six essential questions that should guide the selection or validation of anomaly detection capabilities.

6 Questions you should ask about your security posture to align with CAF v4

1. Can your tools detect threats by identifying anomalies?

Cyber Assessment Framework v4.0 principle C1.f has been added in this version and requires that, “Threats to the operation of network and information systems, and corresponding user and system behaviour, are sufficiently understood. These are used to detect cyber security incidents.”

This marks a significant shift from traditional signature-based approaches, which rely on known Indicators of Compromise (IOCs) or predefined rules, to an expectation that normal user and system behaviour is understood well enough to enable the detection of abnormality.

Why this shift?

An overemphasis on threat intelligence alone leaves defenders exposed to novel threats or new variations of existing threats. By including reference to “understanding user and system behaviour” the framework is broadening the methods of threat detection beyond the use of threat intelligence and historical attack data.

While CAF v4.0 places emphasis on understanding normal user and system behaviour, and on using that understanding to detect abnormalities and, as a result, adverse activity, there is a further expectation that threats are understood in terms of industry-specific issues and that monitoring is continually updated.

Darktrace uses an anomaly-based approach to threat detection which involves establishing a dynamic baseline of “normal” for your environment, then flagging deviations from that baseline, even when there are no known IOCs to match against. This allows security teams to surface previously unseen tactics, techniques, and procedures in real time (a simplified sketch follows the list below), whether it’s:

  • An unexpected outbound connection pattern (e.g., DNS tunnelling);
  • A first-time API call between critical services;
  • Unusual calls between services; or  
  • Sensitive data moving outside normal channels or timeframes.
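As a simplified illustration of baseline-then-deviate detection on just one of these signals, the sketch below flags a host whose DNS query volume departs sharply from its own history. Real anomaly detection operates over far richer, continuously updated behaviour; the data and threshold here are invented.

```python
from statistics import mean, stdev

def is_anomalous(history: list[int], observed: int, z_threshold: float = 3.0) -> bool:
    """Flag a deviation from this host's own learned baseline, with no
    reference to known-bad indicators."""
    baseline_mean = mean(history)
    baseline_std = stdev(history) or 1.0  # avoid dividing by zero on flat baselines
    return (observed - baseline_mean) / baseline_std > z_threshold

# Hourly DNS query counts previously observed for one host (illustrative).
dns_history = [42, 38, 51, 45, 40, 47, 44, 39]

# A sudden burst of queries could indicate DNS tunnelling.
print(is_anomalous(dns_history, observed=480))  # True
```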

The requirement that organisations must be equipped to monitor their environment, create an understanding of normal and detect anomalous behaviour aligns closely with Darktrace’s capabilities.

2. Is threat hunting structured, repeatable, and improving over time?

CAF v4.0 introduces a new focus on structured threat hunting to detect adverse activity that may evade standard security controls, or that occurs where such controls cannot be deployed.

Principle C2.a outlines the need for documented, repeatable threat hunting processes and stresses the importance of recording and reviewing hunts to improve future effectiveness. This inclusion acknowledges that reactive threat hunting is not sufficient. Instead, the framework calls for the following (a minimal sketch of such a hunt appears after the list):

  • Pre-determined and documented methods to ensure threat hunts can be deployed at the requisite frequency;
  • Threat hunts to be converted into automated detection and alerting, where appropriate;
  • Maintenance of threat hunt records and post-hunt analysis to drive improvements in the process and overall security posture;
  • Regular review of the threat hunting process to align with updated risks;
  • Leveraging automation for improvement, where appropriate;
  • Focus on threat tactics, techniques and procedures, rather than one-off indicators of compromise.
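To illustrate what a documented, repeatable hunt might look like in practice, here is a minimal sketch in which the hypothesis, technique focus and results are recorded for post-hunt review and can be promoted into automated alerting. The data source, field names and query are hypothetical.

```python
import json
from datetime import datetime, timezone

# Hypothetical hunt definition: focused on a TTP (unusual outbound protocol
# use), not a one-off indicator of compromise.
HUNT = {
    "id": "HUNT-001",
    "hypothesis": "Adversary stages data over rarely used outbound ports",
    "technique": "T1048",  # MITRE ATT&CK: Exfiltration Over Alternative Protocol
}

def matches(event: dict) -> bool:
    return event["direction"] == "outbound" and event["port"] not in (80, 443, 53)

def run_hunt(events: list[dict]) -> dict:
    """Execute the hunt and record the outcome so the process is repeatable,
    reviewable, and a candidate for automated detection."""
    hits = [event for event in events if matches(event)]
    return {
        "hunt_id": HUNT["id"],
        "executed_at": datetime.now(timezone.utc).isoformat(),
        "hypothesis": HUNT["hypothesis"],
        "technique": HUNT["technique"],
        "hit_count": len(hits),
        "promote_to_automated_alert": len(hits) > 0,
    }

events = [{"direction": "outbound", "port": 6667, "host": "hmi-03"}]
print(json.dumps(run_hunt(events), indent=2))
```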

Traditionally, playbook creation has been a manual process — static, slow to amend, and limited by human foresight. Even automated SOAR playbooks tend to be stock templates that can’t cover the full spectrum of threats or reflect the specific context of your organisation.

CAF v4.0 sets the expectation that organisations should maintain documented, structured approaches to incident response. But Darktrace / Incident Readiness & Recovery goes further. Its AI-generated playbooks are bespoke to your environment and updated dynamically in real time as incidents unfold. This continuous refresh of “New Events” means responders always have the latest view of what’s happening, along with an updated understanding of the AI's interpretation based on real-time contextual awareness, and recommended next steps tailored to the current stage of the attack.

The result is far beyond checkbox compliance: a living, adaptive response capability that reduces investigation time, speeds containment, and ensures actions are always proportionate to the evolving threat.

3. Do you have a proactive security posture?

Cyber Assessment Framework v4.0 does not just want organisations to detect threats; it expects them to anticipate and reduce cyber risk before an incident ever occurs. That is why principle A2.b calls for a security posture that moves from reactive detection to predictive, preventative action.

A proactive security posture focuses on reducing the ease of the most likely attack paths in advance and reducing the number of opportunities an adversary has to succeed in an attack.

To meet this requirement, organisations could benefit from looking for solutions that can do the following (a simple prioritisation sketch appears after the list):

  • Continuously map the assets and users most critical to operations;
  • Identify vulnerabilities and misconfigurations in real time;
  • Model likely adversary behaviours and attack paths using frameworks like MITRE ATT&CK; and  
  • Prioritise remediation actions that will have the highest impact on reducing overall risk.
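The sketch below shows one very simple way such prioritisation could work, ranking assets by a combined exploitability and criticality score. The asset inventory and scoring formula are invented for illustration.

```python
# Hypothetical asset inventory with exploitability and criticality scores in [0, 1].
assets = [
    {"name": "historian-01", "exploitability": 0.8, "criticality": 0.9, "internet_exposed": True},
    {"name": "eng-workstation", "exploitability": 0.6, "criticality": 0.7, "internet_exposed": False},
    {"name": "corp-printer", "exploitability": 0.9, "criticality": 0.1, "internet_exposed": False},
]

def risk(asset: dict) -> float:
    """Illustrative scoring: exposed, exploitable, operationally critical
    assets represent the likeliest attack paths and float to the top."""
    exposure_factor = 1.5 if asset["internet_exposed"] else 1.0
    return asset["exploitability"] * asset["criticality"] * exposure_factor

for asset in sorted(assets, key=risk, reverse=True):
    print(f"{asset['name']}: remediation priority {risk(asset):.2f}")
```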

When done well, this approach creates a real-time picture of your security posture, one that reflects the dynamic nature and ongoing evolution of both your internal environment and the external threat landscape. This enables security teams to focus their time elsewhere, such as validating resilience through red teaming exercises or forecasting.

4. Can your team/tools customize detection rules and enable autonomous responses?

CAF v4.0 places greater emphasis on reducing false positives and acting decisively when genuine threats are detected.  

The framework highlights the need for customisable detection rules and, where appropriate, autonomous response actions that can contain threats before they escalate:

The following new requirements are included (a sketch of a tunable custom detection rule appears after the list):

  • C1.c: Alerts and detection rules should be adjustable to reduce false positives and optimise responses. Custom tooling and rules are used in conjunction with off-the-shelf tooling and rules;
  • C1.d: You investigate and triage alerts from all security tools and take action – allowing for improvement and prioritization of activities;
  • C1.e: Monitoring and detection personnel have sufficient understanding of operational context and deal with workload effectively as well as identifying areas for improvement (alert or triage fatigue is not present);
  • C2.a: Threat hunts should be turned into automated detections and alerting where appropriate and automation should be leveraged to improve threat hunting.
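As a concrete, if simplified, illustration of an adjustable custom rule, the sketch below defines a detection whose threshold can be tuned to cut false positives before its output is raised as an alert. The rule, field names and threshold are invented.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DetectionRule:
    """A custom, tunable rule used alongside off-the-shelf detections."""
    name: str
    threshold_mb: float  # adjustable to trade detection coverage against false positives
    severity: str = "medium"

    def evaluate(self, event: dict) -> Optional[dict]:
        if event.get("bytes_out", 0) / 1_000_000 > self.threshold_mb:
            return {"rule": self.name, "severity": self.severity, "event": event}
        return None

# Threshold raised from 50 MB to 200 MB after triage showed backup jobs were
# repeatedly firing the rule.
rule = DetectionRule(name="Large outbound transfer from OT segment", threshold_mb=200)
print(rule.evaluate({"src": "plc-gateway", "bytes_out": 750_000_000}))
```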

Tailored detection rules improve accuracy, while automation accelerates response, both of which help satisfy regulatory expectations. Cyber AI Analyst allows for AI investigation of alerts and can dramatically reduce the time a security team spends on alerts, reducing alert fatigue, allowing more time for strategic initiatives and identifying improvements.

5. Is your software secure and supported?  

CAF v4.0 introduced a new principle which requires software suppliers to leverage an established secure software development framework. Software suppliers must be able to demonstrate:  

  • A thorough understanding of the composition and provenance of software provided;  
  • That the software development lifecycle is informed by a detailed and up to date understanding of threat; and  
  • They can attest to the authenticity and integrity of the software, including updates and patches.  

Darktrace is committed to secure software development and all Darktrace products and internally developed systems are developed with secure engineering principles and security by design methodologies in place. Darktrace commits to the inclusion of security requirements at all stages of the software development lifecycle. Darktrace is ISO 27001, ISO 27018 and ISO 42001 Certified – demonstrating an ongoing commitment to information security, data privacy and artificial intelligence management and compliance, throughout the organisation.  

6. Is your incident response plan built on a true understanding of your environment and does it adapt to changes over time?

CAF v4.0 raises the bar for incident response by making it clear that a plan is only as strong as the context behind it. Your response plan must be shaped by a detailed, up-to-date understanding of your organisation’s specific network, systems, and operational priorities.

The framework’s updates emphasise that:

  • Plans must explicitly cover the network and information systems that underpin your essential functions, because every environment has different dependencies, choke points, and critical assets.
  • They must be readily accessible even when IT systems are disrupted, ensuring critical steps and contact paths aren’t lost during an incident.
  • They should be reviewed regularly to keep pace with evolving risks, infrastructure changes, and lessons learned from testing.

From government expectation to strategic advantage

Cyber Assessment Framework v4.0 signals a powerful shift in cybersecurity best practice. The newest version sets a higher standard for detection performance, risk management, threat hunting, software development and proactive security posture.

For Darktrace, this is validation of the approach we have taken since the beginning: to go beyond rules and signatures to deliver proactive cyber resilience in real-time.

-----

Disclaimer:

This document has been prepared on behalf of Darktrace Holdings Limited. It is provided for information purposes only to provide prospective readers with general information about the Cyber Assessment Framework (CAF) in a cyber security context. It does not constitute legal, regulatory, financial or any other kind of professional advice and it has not been prepared with the reader and/or its specific organisation’s requirements in mind. Darktrace offers no warranties, guarantees, undertakings or other assurances (whether express or implied)  that: (i) this document or its content are  accurate or complete; (ii) the steps outlined herein will guarantee compliance with CAF; (iii) any purchase of Darktrace’s products or services will guarantee compliance with CAF; (iv) the steps outlined herein are appropriate for all customers. Neither the reader nor any third party is entitled to rely on the contents of this document when making/taking any decisions or actions to achieve compliance with CAF. To the fullest extent permitted by applicable law or regulation, Darktrace has no liability for any actions or decisions taken or not taken by the reader to implement any suggestions contained herein, or for any third party products, links or materials referenced. Nothing in this document negates the responsibility of the reader to seek independent legal or other advice should it wish to rely on any of the statements, suggestions, or content set out herein.  

The cybersecurity landscape evolves rapidly, and blog content may become outdated or superseded. We reserve the right to update, modify, or remove any content without notice.

About the author
Mariana Pereira
VP, Field CISO

OT
September 5, 2025

Rethinking Signature-Based Detection for Power Utility Cybersecurity


Lessons learned from OT cyber attacks

Over the past decade, some of the most disruptive attacks on power utilities have shown the limits of signature-based detection and reshaped how defenders think about OT security. Each incident reinforced that signatures are too narrow and reactive to serve as the foundation of defense.

2015: BlackEnergy 3 in Ukraine

According to CISA, on December 23, 2015, Ukrainian power companies experienced unscheduled power outages affecting a large number of customers — public reports indicate that the BlackEnergy malware was discovered on the companies’ computer networks.

2016: Industroyer/CrashOverride

CISA describes CrashOverride malware as an “extensible platform” reported to have been used against critical infrastructure in Ukraine in 2016. It was capable of targeting industrial control systems using protocols such as IEC-101, IEC-104, and IEC-61850, and fundamentally abused legitimate control system functionality to deliver destructive effects. CISA emphasizes that “traditional methods of detection may not be sufficient to detect infections prior to the malware execution” and recommends behavioral analysis techniques to identify precursor activity to CrashOverride.

2017: TRITON Malware

The U.S. Department of the Treasury reports that the Triton malware, also known as TRISIS or HatMan, was “designed specifically to target and manipulate industrial safety systems” in a petrochemical facility in the Middle East. The malware was engineered to control Safety Instrumented System (SIS) controllers responsible for emergency shutdown procedures. During the attack, several SIS controllers entered a failed‑safe state, which prevented the malware from fully executing.

The broader lessons

These events revealed three enduring truths:

  • Signatures have diminishing returns: BlackEnergy showed that while signatures can eventually identify adapted IT malware, they arrive too late to prevent OT disruption.
  • Behavioral monitoring is essential: CrashOverride demonstrated that adversaries abuse legitimate industrial protocols, making behavioral and anomaly detection more effective than traditional signature methods.
  • Critical safety systems are now targets: TRITON revealed that attackers are willing to compromise safety instrumented systems, elevating risks from operational disruption to potential physical harm.

The natural progression for utilities is clear. Static, file-based defenses are too fragile for the realities of OT.  

These incidents showed that behavioral analytics and anomaly detection are far more effective at identifying suspicious activity across industrial systems, regardless of whether the malicious code has ever been seen before.

Strategic risks of overreliance on signatures

  • False sense of security: Believing signatures will block advanced threats can delay investment in more effective detection methods.
  • Resource drain: Constantly updating, tuning, and maintaining signature libraries consumes valuable staff resources without proportional benefit.
  • Adversary advantage: Nation-state and advanced actors understand the reactive nature of signature defenses and design attacks to circumvent them from the start.

Recommended Alternatives (with real-world OT examples)

Figure 1: Alternative strategies for detecting cyber attacks in OT

Behavioral and anomaly detection

Rather than relying on signatures, focusing on behavior enables detection of threats that have never been seen before, even when the activity comes from trusted-looking devices.

Real-world insight:

In one OT setting, a vendor inadvertently left a Raspberry Pi on a customer’s ICS network. After deployment, Darktrace’s system flagged anomalies in its HTTPS and DNS communication despite the absence of any known indicators of compromise. The alerting included sustained SSL increases, agent-beacon activity, and DNS connections to unusual endpoints, revealing a possible supply-chain or insider risk invisible to static tools.
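One behaviour highlighted in that case, regular agent-beacon activity, can be surfaced from connection timing alone. The sketch below is a minimal illustration of that idea and is not Darktrace’s detection logic; the timestamps and threshold are invented.

```python
from statistics import mean, pstdev

def looks_like_beaconing(timestamps: list[float], max_jitter_ratio: float = 0.1) -> bool:
    """Regular, low-jitter connection intervals are characteristic of agent
    beaconing, regardless of whether the destination is 'known bad'."""
    intervals = [later - earlier for earlier, later in zip(timestamps, timestamps[1:])]
    if len(intervals) < 5:
        return False
    return pstdev(intervals) / mean(intervals) < max_jitter_ratio

# Outbound HTTPS connection times (seconds) observed from the unexpected device.
connection_times = [0.0, 60.2, 120.1, 180.3, 240.0, 300.2, 360.1]
print(looks_like_beaconing(connection_times))  # True
```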

Darktrace’s AI-driven threat detection aligns with the zero-trust principle of assuming the risk of a breach. By leveraging AI that learns an organization’s specific patterns of life, Darktrace provides a tailored security approach ideal for organizations with complex supply chains.

Threat intelligence sharing & building toward zero-trust philosophy

Frameworks such as MITRE ATT&CK for ICS provide a common language to map activity against known adversary tactics, helping teams prioritize detections and response strategies. Similarly, information-sharing communities like E-ISAC and regional ISACs give utilities visibility into the latest tactics, techniques, and procedures (TTPs) observed across the sector. This level of intel can help shift the focus away from chasing individual signatures and toward building resilience against how adversaries actually operate.

Real-world insight:

Darktrace’s AI embodies zero-trust by assuming breach potential and continually evaluating all device behavior, even that of devices deemed trusted. This approach allowed the detection of an anomalous SharePoint phishing attempt coming from a trusted supplier, intercepted by spotting subtle patterns rather than predefined rules. If a cloud account is compromised, unauthorized access to sensitive information could lead to extortion and lateral movement into mission-critical systems for more damaging attacks on critical national infrastructure.

This reinforces the need to monitor behavioral deviations across the supply chain, not just known bad artifacts.

Defense-in-Depth with OT context & unified visibility

OT environments demand visibility that spans IT, OT, and IoT layers, supported by risk-based prioritization.

Real-world insight:

Darktrace / OT offers unified AI-led investigations that break down silos between IT and OT. Smaller teams can see unusual outbound traffic or beaconing from unknown OT devices, swiftly investigate across domains, and get clear visibility into device behavior, even when they lack specialized OT security expertise.

Moreover, by integrating contextual risk scoring that considers real-world exploitability, device criticality, firewall misconfiguration, and legacy hardware exposure, utilities can focus on the vulnerabilities that genuinely threaten uptime and safety, rather than being overwhelmed by CVE noise.
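A toy example of that kind of contextual prioritisation is sketched below: raw severity is weighted by exploitation in the wild, device criticality and reachability. The CVE identifiers, fields and weighting are invented.

```python
# Hypothetical CVE findings enriched with OT context.
findings = [
    {"cve": "CVE-2099-0001", "cvss": 9.8, "exploited_in_wild": False,
     "device": "corp-kiosk", "criticality": 0.2, "reachable_from_it": True},
    {"cve": "CVE-2099-0002", "cvss": 7.5, "exploited_in_wild": True,
     "device": "scada-firewall", "criticality": 0.9, "reachable_from_it": True},
    {"cve": "CVE-2099-0003", "cvss": 9.1, "exploited_in_wild": False,
     "device": "air-gapped-rtu", "criticality": 0.9, "reachable_from_it": False},
]

def contextual_priority(finding: dict) -> float:
    """Weight raw severity by real-world exploitability and OT context so the
    remediation queue reflects risk to uptime and safety, not just CVSS."""
    exploit_factor = 2.0 if finding["exploited_in_wild"] else 1.0
    exposure_factor = 1.0 if finding["reachable_from_it"] else 0.2
    return finding["cvss"] * exploit_factor * finding["criticality"] * exposure_factor

for finding in sorted(findings, key=contextual_priority, reverse=True):
    print(f"{finding['cve']} on {finding['device']}: priority {contextual_priority(finding):.1f}")
```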

Regulatory alignment and positive direction

Industry regulations are beginning to reflect this evolution in strategy. NERC CIP-015 requires internal network monitoring that detects anomalies, and the standard references anomalies 15 times. In contrast, signature-based detection is not mentioned once.

This regulatory direction shows that compliance bodies understand the limitations of static defenses and are encouraging utilities to invest in anomaly-based monitoring and analytics. Utilities that adopt these approaches will not only be strengthening their resilience but also positioning themselves for regulatory compliance and operational success.

Conclusion

Signature-based detection retains utility for common IT malware, but it cannot serve as the backbone of security for power utilities. History has shown that major OT attacks are rarely stopped by signatures, since each campaign targets specific systems with customized tools. The most dangerous adversaries, from insiders to nation-states, actively design their operations to avoid detection by signature-based tools.

A more effective strategy prioritizes behavioral analytics, anomaly detection, and community-driven intelligence sharing. These approaches not only catch known threats, but also uncover the subtle anomalies and novel attack techniques that characterize tomorrow’s incidents.

About the author
Daniel Simonds
Director of Operational Technology