Blog
/
AI
/
February 10, 2025

From Hype to Reality: How AI is Transforming Cybersecurity Practices

AI hype is everywhere, but not many vendors are getting specific. Darktrace’s multi-layered AI combines various machine learning techniques for behavioral analytics, real-time threat detection, investigation, and autonomous response.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Nicole Carignan
SVP, Security & AI Strategy, Field CISO

AI is everywhere, predominantly because it has changed the way humans interact with data. AI is a powerful tool for data analytics, predictions, and recommendations, but accuracy, safety, and security are paramount for operationalization.

In cybersecurity, AI-powered solutions are becoming increasingly necessary to keep up with modern business complexity and this new age of cyber-threat, marked by attacker innovation, use of AI, speed, and scale. The emergence of these new threats calls for a varied and layered approach in AI security technology to anticipate asymmetric threats.

While many cybersecurity vendors are adding AI to their products, they are not always communicating the capabilities or data used clearly. This is especially the case with Large Language Models (LLMs). Many products are adding interactive and generative capabilities that do not necessarily improve the efficacy of detection and response; instead, they are aimed at enhancing the analyst and security team experience and data retrieval.

Consequently, many people erroneously conflate generative AI with other types of AI. Meanwhile, only 31% of security professionals report that they are “very familiar” with supervised machine learning, the type of AI most often applied in today’s cybersecurity solutions to identify threats using attack artifacts and facilitate automated responses. This confusion around AI and its capabilities can result in suboptimal cybersecurity measures, overfitting, inaccuracies due to ineffective methods or data, inefficient use of resources, and heightened exposure to advanced cyber threats.

Vendors must cut through the AI market and demystify the technology in their products for safe, secure, and accurate adoption. To that end, let’s discuss common AI techniques in cybersecurity as well as how Darktrace applies them.

Modernizing cybersecurity with AI

Machine learning has presented a significant opportunity to the cybersecurity industry, and many vendors have been using it for years. Despite the high potential benefit of applying machine learning to cybersecurity, not every AI tool or machine learning model is equally effective; effectiveness depends on the technique, how it is applied, and the data the model was trained on.

Supervised machine learning and cybersecurity

Supervised machine learning models are trained on labeled, structured data to automate human-led tasks. Some cybersecurity vendors have been experimenting with supervised machine learning for years, with most automating threat detection based on reported attack data using big data science, shared cyber-threat intelligence, known or reported attack behavior, and classifiers.

In the last several years, however, more vendors have expanded into the behavior analytics and anomaly detection side. In many applications, this method separates the learning, when the behavioral profile is created (baselining), from the subsequent anomaly detection. As such, it does not learn continuously and requires periodic updating and re-training to try to stay up to date with dynamic business operations and new attack techniques. Unfortunately, this opens the door for a high rate of daily false positives and false negatives.
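To make the supervised approach concrete, here is a minimal sketch of a classifier trained on labeled connection features. All feature names and numbers below are synthetic, invented purely for illustration; this is not any vendor's implementation. The key property is the one described above: the model can only flag activity that resembles its labeled training data.

```python
# Hypothetical sketch of supervised threat detection: a classifier
# learns from labeled examples, so it can only recognize patterns
# resembling known, reported attack behavior.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic labeled features: [bytes_out, duration_s, dest_port_rarity]
benign = rng.normal([2_000, 30, 0.1], [500, 10, 0.05], size=(500, 3))
malicious = rng.normal([50_000, 300, 0.9], [10_000, 60, 0.05], size=(500, 3))
X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = known-attack pattern

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)
print(f"holdout accuracy: {clf.score(X_test, y_test):.2f}")
```

A novel attack whose features happened to resemble the benign cluster would be missed entirely, which is why periodic retraining and complementary techniques are needed.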

Unsupervised machine learning and cybersecurity

Unlike supervised approaches, unsupervised machine learning does not require labeled training data or human-led training. Instead, it independently analyzes data to detect compelling patterns without relying on knowledge of past threats. This removes the dependency of human input or involvement to guide learning.

However, it is constrained by its input parameters, requiring thoughtful technique and feature selection to ensure accurate outputs. Additionally, while these anomaly-focused methods can discover unexpected patterns in data, some of those patterns may be irrelevant and distracting.

When using models for behavior analytics and anomaly detection, the outputs come in the form of anomalies rather than classified threats, requiring additional modeling for threat behavior context and prioritization. Anomaly detection performed in isolation can render resource-wasting false positives.
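For contrast, here is an unsupervised sketch using an Isolation Forest (one common outlier-detection technique, chosen for illustration only). No labels are involved, and the output only says an event is statistically unusual, not that it is malicious, which is exactly why the additional contextualization described above is needed.

```python
# Unsupervised anomaly detection sketch: the model learns only the shape
# of the observed traffic and flags outliers; it has no notion of "threat".
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Unlabeled traffic features: [bytes_out, duration_s]
normal_traffic = rng.normal([1_000, 20], [200, 5], size=(300, 2))
oddball = np.array([[9_000, 240]])  # one wildly unusual connection

model = IsolationForest(contamination=0.01, random_state=1)
model.fit(normal_traffic)

# predict() returns -1 for anomalies and 1 for inliers.
print(model.predict(oddball))
```

Whether that outlier is an exfiltration attempt or a legitimate backup job is something the model alone cannot say.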

LLMs and cybersecurity

LLMs are a major aspect of mainstream generative AI, and they can be used in both supervised and unsupervised ways. They are pre-trained on massive volumes of data and can be applied to human language, machine language, and more.

With the recent explosion of LLMs in the market, many vendors are rushing to add generative AI to their products, using it for chatbots, Retrieval-Augmented Generation (RAG) systems, agents, and embeddings. Generative AI in cybersecurity can optimize data retrieval for defenders, summarize reporting, or emulate sophisticated phishing attacks for preventative security.

But because LLMs perform semantic analysis, they can struggle to consistently apply the reasoning necessary for security analysis and detection. If not applied responsibly, generative AI can cause confusion by “hallucinating” (referencing invented data) unless additional post-processing limits the impact, or by providing conflicting responses driven by confirmation bias in the prompts written by different security team members.

Combining techniques in a multi-layered AI approach

Each type of machine learning technique has its own set of strengths and weaknesses, so a multi-layered, multi-method approach is ideal to enhance functionality while overcoming the shortcomings of any one method.

Darktrace’s Self-Learning AI is a multi-layered engine powered by multiple machine learning approaches that operate in combination for cyber defense. This allows Darktrace to protect the entire digital estates of the organizations it secures, including corporate networks, cloud computing services, SaaS applications, IoT, Industrial Control Systems (ICS), and email systems.

Plugged into the organization’s infrastructure and services, our AI engine ingests and analyzes raw data and its interactions within the environment, forming an understanding of normal behavior right down to the granular details of specific users and devices. The system continually revises its understanding of what is normal based on evolving evidence, learning continuously rather than relying on periodic baselining.

This dynamic understanding of normal, partnered with dozens of anomaly detection models, means the AI engine can identify, with a high degree of precision, events or behaviors that are both anomalous and unlikely to be benign. Examining anomalies through the lens of many models, and autonomously fine-tuning each model’s performance, gives us greater confidence in anomaly detection.

The next layer provides event correlation and threat behavior context to assess the risk level of anomalous events. Every anomalous event is investigated by Cyber AI Analyst, which combines unsupervised machine learning models that analyze logs with supervised machine learning trained on how expert analysts investigate. This provides anomaly and risk context along with explainable investigation outcomes.

The ability to identify activity that represents the first footprints of an attacker, without any prior knowledge or intelligence, lies at the heart of the AI system’s efficacy in keeping pace with threat actor innovations and changes in tactics and techniques. It helps the human team detect subtle indicators that can be hard to spot amid the immense noise of legitimate, day-to-day digital interactions. This enables advanced threat detection with full domain visibility.

Digging deeper into AI: Mapping specific machine learning techniques to cybersecurity functions

Visibility and control are vital for the practical adoption of AI solutions, as they build trust between human security teams and their AI tools. That is why we want to share some specific applications of AI across our solutions, moving beyond hype and buzzwords to provide grounded, technical explanations.

Darktrace’s technology helps security teams cover every stage of the incident lifecycle with a range of comprehensive analysis and autonomous investigation and response capabilities.

  1. Behavioral prediction: Our AI understands your unique organization by learning its normal patterns of life. It accomplishes this with multiple clustering algorithms, anomaly detection models, a Bayesian meta-classifier for autonomous fine-tuning, graph theory, and more.
  2. Real-time threat detection: With a true understanding of normal, our AI engine connects anomalous events to risky behavior using probabilistic models. 
  3. Investigation: Darktrace performs in-depth analysis and investigation of anomalies, in particular automating Level 1 of a SOC team and augmenting the rest of the SOC team through prioritization for human-led investigations. Some of these methods include supervised and unsupervised machine learning models, semantic analysis models, and graph theory.
  4. Response: Darktrace calculates the proportional action needed to neutralize in-progress attacks at machine speed. As a result, organizations are protected 24/7, even when the human team is out of the office. By understanding the normal pattern of life of an asset or peer group, the autonomous response engine can isolate the anomalous or risky behavior and surgically block it. The engine can also enforce the peer group’s pattern of life when rare and risky behavior continues.
  5. Customizable model editor: This layer of customizable logic models tailors our AI’s processing to give security teams more visibility as well as the opportunity to adapt outputs, therefore increasing explainability, interpretability, control, and the ability to modify the operationalization of the AI output with auditing.
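As a toy sketch of the pattern-of-life enforcement idea from step 4: connections that match a device's learned normal set are allowed, and everything else is blocked at connection granularity rather than quarantining the whole device. The data and helper below are entirely hypothetical; real autonomous response logic weighs far more signals than this.

```python
# Hypothetical "surgical blocking" sketch: only traffic outside the
# learned pattern of life is stopped; normal business traffic continues.
learned_normal = {            # (destination, port) pairs seen during learning
    ("fileserver.local", 445),
    ("mail.local", 587),
    ("intranet.local", 443),
}

def response_action(device: str, dest: str, port: int) -> str:
    """Return the proportional action for one connection attempt."""
    if (dest, port) in learned_normal:
        return "allow"                            # matches pattern of life
    return f"block {device} -> {dest}:{port}"     # connection-level block

print(response_action("laptop-42", "intranet.local", 443))
print(response_action("laptop-42", "203.0.113.9", 4444))
```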

See the complete AI architecture in the paper “The AI Arsenal: Understanding the Tools Shaping Cybersecurity.”

Figure 1. Alerts can be customized in the model editor in many ways, such as editing the thresholds for the rarity and unusualness scores shown above.

Machine learning is the fundamental ally in cyber defense

Traditional security methods, even those that use a small subset of machine learning, are no longer sufficient, as these tools can neither keep up with all possible attack vectors nor respond fast enough to the variety of machine-speed attacks, given their complexity compared to known and expected patterns.

Security teams require advanced detection capabilities, using multiple machine learning techniques to understand the environment, filter the noise, and take action where threats are identified.

Darktrace’s Self-Learning AI comes together to achieve behavioral prediction, real-time threat detection and response, and incident investigation, all while empowering your security team with visibility and control.

Learn how AI is Applied in Cybersecurity

Discover specifically how Darktrace applies different types of AI to improve cybersecurity efficacy and operations in this technical paper.



February 16, 2026

CVE-2026-1731: How Darktrace Sees the BeyondTrust Exploitation Wave Unfolding


Note: Darktrace's Threat Research team is publishing now to help defenders. We will continue updating this blog as our investigations unfold.

Background

On February 6, 2026, Identity and Access Management vendor BeyondTrust announced patches for CVE-2026-1731, a vulnerability that enables unauthenticated remote code execution via specially crafted requests. The vulnerability affects BeyondTrust Remote Support (RS) and certain older versions of Privileged Remote Access (PRA) [1].

A Proof of Concept (PoC) exploit for this vulnerability was released publicly on February 10, and open-source intelligence (OSINT) reported exploitation attempts within 24 hours [2].

Previous intrusions against BeyondTrust technology have been attributed to nation-state attackers, including a 2024 breach targeting the U.S. Treasury Department. That incident led to emergency directives from the Cybersecurity and Infrastructure Security Agency (CISA) and later revealed that attackers had chained previously unknown vulnerabilities to achieve their goals [3].

Additionally, there appears to be infrastructure overlap with the React2Shell mass exploitation previously observed by Darktrace: the command-and-control (C2) domain avg.domaininfo[.]top was seen in potential post-exploitation activity for BeyondTrust, as well as in a React2Shell exploitation case involving possible EtherRAT deployment.

Darktrace Detections

Darktrace’s Threat Research team has identified highly anomalous activity across several customers that may relate to exploitation of BeyondTrust since February 10, 2026. Observed activities include:

- Outbound connections and DNS requests for endpoints associated with Out-of-Band Application Security Testing; these services are commonly abused by threat actors for exploit validation. Associated Darktrace models include:
  - Compromise / Possible Tunnelling to Bin Services
- Suspicious executable file downloads. Associated Darktrace models include:
  - Anomalous File / EXE from Rare External Location
- Outbound beaconing to rare domains. Associated Darktrace models include:
  - Compromise / Agent Beacon (Medium Period)
  - Compromise / Agent Beacon (Long Period)
  - Compromise / Sustained TCP Beaconing Activity To Rare Endpoint
  - Compromise / Beacon to Young Endpoint
  - Anomalous Server Activity / Rare External from Server
  - Compromise / SSL Beaconing to Rare Destination
- Unusual cryptocurrency mining activity. Associated Darktrace models include:
  - Compromise / Monero Mining
  - Compromise / High Priority Crypto Currency Mining
- Additional model alerts:
  - Compromise / Rare Domain Pointing to Internal IP
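One intuition behind the beaconing models listed above: C2 implants often call home at near-fixed intervals, so the spread of inter-arrival times is unusually small relative to their mean. The sketch below captures that signal; the thresholds are invented for illustration and are not Darktrace's detection logic.

```python
# Illustrative beacon heuristic: a low coefficient of variation of
# connection inter-arrival times suggests machine-driven periodicity.
import statistics

def looks_like_beacon(timestamps, max_cv=0.1, min_events=5):
    """True if connection times (seconds) are suspiciously periodic."""
    if len(timestamps) < min_events:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    cv = statistics.stdev(gaps) / statistics.mean(gaps)
    return cv < max_cv

beacon = [0, 60, 121, 180, 241, 300, 360]    # ~60 s heartbeat with jitter
browsing = [0, 4, 90, 95, 400, 404, 1000]    # bursty human activity
print(looks_like_beacon(beacon), looks_like_beacon(browsing))
```

Real detections layer rarity of the destination, TLS fingerprints, and other context on top of timing alone.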

IT Defenders: As part of best practices, we highly recommend employing an automated containment solution in your environment. For Darktrace customers, please ensure that Autonomous Response is configured correctly. More guidance regarding this activity and suggested actions can be found in the Darktrace Customer Portal.  

Appendices

Potential indicators of post-exploitation behavior:

- 217.76.57[.]78 – IP address – Likely C2 server
- hXXp://217.76.57[.]78:8009/index.js – URL – Likely payload
- b6a15e1f2f3e1f651a5ad4a18ce39d411d385ac7 – SHA1 – Likely payload
- 195.154.119[.]194 – IP address – Likely C2 server
- hXXp://195.154.119[.]194/index.js – URL – Likely payload
- avg.domaininfo[.]top – Hostname – Likely C2 server
- 104.234.174[.]5 – IP address – Possible C2 server
- 35da45aeca4701764eb49185b11ef23432f7162a – SHA1 – Possible payload
- hXXp://134.122.13[.]34:8979/c – URL – Possible payload
- 134.122.13[.]34 – IP address – Possible C2 server
- 28df16894a6732919c650cc5a3de94e434a81d80 – SHA1 – Possible payload
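The indicators above are defanged ("hXXp", "[.]") per common convention so they cannot be clicked accidentally. A small helper to restore them to machine-usable form before loading into a blocklist (refang only in an isolated analysis environment):

```python
# Refang defanged IoCs: restore "hXXp" -> "http" and "[.]" -> "."
import re

def refang(ioc: str) -> str:
    ioc = re.sub(r"hXXp", "http", ioc, flags=re.IGNORECASE)
    return ioc.replace("[.]", ".")

print(refang("hXXp://217.76.57[.]78:8009/index.js"))
print(refang("avg.domaininfo[.]top"))
```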

References:

1. https://nvd.nist.gov/vuln/detail/CVE-2026-1731
2. https://www.securityweek.com/beyondtrust-vulnerability-targeted-by-hackers-within-24-hours-of-poc-release/
3. https://www.rapid7.com/blog/post/etr-cve-2026-1731-critical-unauthenticated-remote-code-execution-rce-beyondtrust-remote-support-rs-privileged-remote-access-pra/

About the author
Emma Foulger
Global Threat Research Operations Lead


February 13, 2026

How AI is redefining cybersecurity and the role of today’s CIO


Why AI is essential to modern security

As attackers use automation and AI to outpace traditional tools and people, our approach to cybersecurity must fundamentally change. That’s why one of my first priorities as Withum's CIO was to elevate cybersecurity from a technical function to a business enabler.

What used to be “IT’s problem” is now a boardroom conversation – and for good reason. Protecting our data, our people, and our clients directly impacts revenue, reputation and competitive positioning.  

As CIOs and CISOs, our responsibility isn’t just keeping systems running, but enabling trust, protecting our organization's reputation, and giving the business confidence to move forward even as the digital world becomes less predictable. To pull that off, we need to know the business inside-out, understand risk, and anticipate what's coming next. That's where AI becomes essential.

Staying ahead when you’re a natural target

With more than 3,100 team members and over 1,000 Certified Public Accountants (CPAs), Withum operates in an industry that naturally attracts attention from attackers. Firms like ours handle highly sensitive financial and personal information, which puts us squarely in the crosshairs for sophisticated phishing, ransomware, and cloud-based attacks.

We’ve built our security program around resilience, visibility, and scale. By using Darktrace’s AI-powered platform, we can defend against both known and unknown threats, across email and network, without slowing our teams down.

Our focus is always on what we’re protecting: our clients’ information, our intellectual property, and the reputation of the firm. With Darktrace, we’re not just keeping up with the massive volume of AI-powered attacks coming our way, we’re staying ahead. The platform defends our digital ecosystem around the clock, detecting potential threats across petabytes of data and autonomously investigating and responding to tens of thousands of incidents every year.

Catching what traditional tools miss

Beyond the sheer scale of attacks, Darktrace ActiveAI Security Platform™ is critical for identifying threats that matter to our business. Today’s attackers don’t use generic techniques. They leverage automation and AI to craft highly targeted attacks – impersonating trusted colleagues, mimicking legitimate websites, and weaving in real-world details that make their messages look completely authentic.

The platform, covering our network, endpoints, inboxes, cloud, and more, is so effective because it continuously learns what’s normal for our business: how our users typically behave, the business- and industry-specific language we use, how systems communicate, and how cloud resources are accessed. It picks up on minute details that would sail right past traditional tools and even highly trained security professionals.

Freeing up our team to do what matters

On average, Darktrace autonomously investigates 88% of all our security events, using AI to connect the dots across email, network, and cloud activity to figure out what matters. That shift has changed how our team works. Instead of spending hours sorting through alerts, we can focus on proactive efforts that actually strengthen our security posture.

For example, we saved 1,850 hours on investigating security issues over a ten-day period. We’ve reinvested the time saved into strengthening policies, refining controls, and supporting broader business initiatives, rather than spending endless hours manually piecing together alerts.

Real confidence, real results

The impact of our AI-driven approach goes well beyond threat detection. Today, we operate from a position of confidence, knowing that threats are identified early, investigated automatically, and communicated clearly across our organization.

That confidence was tested when we withstood a major ransomware attack by a well-known threat group. Not only were we able to contain the incident, but we were also able to trace attacker activity and provide evidence to law enforcement. That was an exhilarating experience! My team did an outstanding job, and moments like that reinforce exactly why we invest in the right technology and the right people.

Internally, this capability has strengthened trust at the executive level. We share security reporting regularly with leadership, translating technical activity into business-relevant insights. That transparency reinforces cybersecurity as a shared responsibility, one that directly supports growth, continuity, and reputation.

Culturally, we’ve embedded security awareness into daily operations through mandatory monthly training, executive communication, and real-world industry examples that keep cybersecurity top of mind for every employee.

The only headlines we want are positive ones: Withum expanding services, Withum growing year over year. Security plays a huge role in making sure that’s the story we get to tell.

What’s next

Looking ahead, we’re expanding our use of Darktrace, including new cloud capabilities that extend AI-driven visibility and investigation into our AWS and Azure environments.

As I continue shaping our security team, I look for people with passion, curiosity, and a genuine drive to solve problems. Those qualities matter just as much as formal credentials in my view. Combined with AI, these attributes help us build a resilient, engaged security function with low turnover and high impact.

For fellow technology leaders, my advice is simple: be forward-thinking and embrace change. We must understand the business, the threat landscape, and how technology enables both. By augmenting human expertise rather than replacing it, AI allows us to move upstream by anticipating risk, advising the business, and fostering stronger collaboration across teams.

About the author
Amel Edmond
Chief Information Officer