March 18, 2025

Survey findings: How is AI Impacting the SOC?

Part 3/4: Darktrace releases insights on the State of AI in cybersecurity. This blog discusses the impact of AI-powered attacks and the capabilities of AI defense on the SOC.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
The Darktrace Community

There’s no question that AI is already impacting the SOC – augmenting, assisting, and filling the gaps left by staff and skills shortages. We surveyed over 1,500 cybersecurity professionals from around the world to uncover their attitudes to AI cybersecurity in 2025. Our findings revealed striking trends in how AI is changing the way security leaders think about hiring and SOC transformation.

Download the full report to explore these findings in depth

The AI-human conundrum

Let’s start with some context. As the cybersecurity sector has rapidly evolved to integrate AI into all elements of cyber defense, the pace of technological advancement is outstripping the development of necessary skills. Given the ongoing challenges in security operations, such as employee burnout, high turnover rates, and talent shortages, recruiting personnel to bridge these skills gaps remains an immense challenge in today’s landscape.

But here, our main findings on this topic seem to contradict each other.

There’s no question over the impact of AI-powered threats – nearly three-quarters (74%) agree that AI-powered threats now pose a significant challenge for their organization.  

When we look at how security leaders are defending against AI-powered threats, over 3 out of 5 (62%) see insufficient personnel to manage tools and alerts as the biggest barrier.  

Yet at the same time, increasing cyber security staff is at the bottom of the priority list for survey participants, with only 11% planning to increase cybersecurity staff in 2025 – less than in 2024. What 64% of stakeholders are committed to, however, is adding new AI-powered tools onto their existing security stacks.

With burnout pervasive, the talent deficit reaching a new peak, and growing numbers of companies unable to fill cybersecurity positions, it may be that stakeholders realize they simply cannot hire enough personnel to solve this problem, no matter how much they may want to. As a result, leaders are looking for methods beyond increasing staff to overcome security obstacles.

Meanwhile, the results show that defensive AI is becoming integral to the SOC as a means of augmenting understaffed teams.

How is AI plugging skills shortages in the SOC?

As explored in our recent white paper, The CISO’s Guide to Navigating the Cybersecurity Skills Shortage, 71% of organizations report unfilled cybersecurity positions, and as a result an estimated less than 10% of alerts are thoroughly vetted. In this scenario, AI has become an essential force multiplier to relieve the burden on security teams.

95% of respondents agree that AI-powered solutions can significantly improve the speed and efficiency of their defenses. But how?

The area where security leaders expect defensive AI to have the biggest impact is improving threat detection, followed by autonomous response to threats and identifying exploitable vulnerabilities.

Interestingly, the areas participants ranked lower (reducing alert fatigue and running phishing simulations) are tasks that AI already does well, and can therefore already be used to relieve the SOC of manual, repetitive work.

Different perspectives from different sides of the SOC

CISOs and SecOps teams aren’t necessarily aligned on the AI defense question – while CISOs tend to see it as a strategic game-changer, SecOps teams on the front lines may be more sceptical, wary of its real-world reliability and integration into workflows.  

From the data, we see that while less than a quarter of executives doubt that AI-powered solutions will block and automatically respond to AI threats, about half of SecOps practitioners aren’t convinced. And only 17% of CISOs lack confidence in their teams’ ability to implement and use AI-powered solutions, whereas over 40% of those on the teams doubt their own ability to do so.

This gap helps explain why executives are enthusiastic about adding AI-driven tools to the stack, while day-to-day users of the tools are more interested in improving security awareness training and cybersecurity tool integration.

Levels of AI understanding in the SOC

AI is only as powerful as the people who use it, and levels of AI expertise in the SOC can make or break its real-world impact. If security leaders want to unlock AI’s full potential, they must bridge the knowledge gap—ensuring teams understand not just the different types of AI, but where it can be applied for maximum value.

Only 42% of security professionals are confident that they fully understand all the types of AI in their organization’s security stack.

This data varies between job roles – executives report higher levels of understanding (60% say they know exactly which types of AI are being used) than participants in other roles. Despite working with the tools day-to-day, SecOps practitioners were more likely to report having only a “reasonable understanding” of the types of AI in use in their organization (42%).  

Whether this reflects general confidence among executives rather than technical proficiency is hard to say, but it speaks to the importance of AI-human collaboration – introducing AI tools for cybersecurity to plug the gaps in human teams will only be effective if security professionals are supported with the correct education and training.  

Download the full report to explore these findings in depth

The full report for Darktrace’s State of AI Cybersecurity is out now. Download the paper to dig deeper into these trends, and see how results differ by industry, region, organization size, and job title.  



March 5, 2026

Inside Cloud Compromise: Investigating Attacker Activity with Darktrace / Forensic Acquisition & Investigation


Investigating Cloud Attacks with Forensic Acquisition & Investigation

Darktrace / Forensic Acquisition & Investigation™ is the industry’s first truly automated forensic solution purpose-built for the cloud. This blog will demonstrate how an investigation can be carried out against a compromised cloud server in minutes, rather than hours or days.

The compromised server investigated in this case originates from Darktrace’s Cloudypots system, a global honeypot network designed to observe adversary activity in real time across a wide range of cloud services. Whenever an attacker successfully compromises one of these honeypots, a forensic copy of the virtual server's disk is preserved for later analysis. Using Forensic Acquisition & Investigation, analysts can then investigate further and obtain detailed insights into the compromise including complete attacker timelines and root cause analysis.
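Preserving a forensic copy typically includes recording a cryptographic hash of the disk image at acquisition time, so its integrity can be verified before and after analysis. A minimal sketch of that step (the function name and chunk size are illustrative, not part of Darktrace's tooling):

```python
import hashlib

def hash_image(path: str, algo: str = "sha256", chunk_size: int = 1 << 20) -> str:
    """Stream a disk image through a hash in 1 MiB chunks so large
    images never need to be loaded fully into memory."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()
```

Recording this digest alongside the artifact lets an analyst later prove that the image examined is bit-for-bit identical to the one acquired.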

Forensic Acquisition & Investigation supports importing artifacts from a variety of sources, including EC2 instances, ECS, S3 buckets, and more. The Cloudypots system produces a raw disk image whenever an attack is detected and stores it in an S3 bucket. This allows the image to be directly imported into Forensic Acquisition & Investigation using the S3 bucket import option.

As Forensic Acquisition & Investigation runs cloud-natively, no additional configuration is required to add a specific S3 bucket. Analysts can browse and acquire forensic assets from any bucket that the configured IAM role is permitted to access. Operators can also add additional IAM credentials, including those from other cloud providers, to extend access across multiple cloud accounts and environments.

Figure 1: Forensic Acquisition & Investigation import screen.

Forensic Acquisition & Investigation then retrieves a copy of the file and automatically begins running the analysis pipeline on the artifact. This pipeline performs a full forensic analysis of the disk and builds a timeline of the activity that took place on the compromised asset. By leveraging Forensic Acquisition & Investigation’s cloud-native analysis system, this process condenses hours of manual work into just minutes.

Figure 2: Successful import of a forensic artifact and initiation of the analysis pipeline.

Once processing is complete, the preserved artifact is visible in the Evidence tab, along with a summary of key information obtained during analysis, such as the compromised asset’s hostname, operating system, cloud provider, and key event count.

Figure 3: The Evidence overview showing the acquired disk image.

Clicking on the “Key events” field in the listing opens the timeline view, automatically filtered to show system-generated alarms.

The timeline provides a chronological record of every event that occurred on the system, derived from multiple sources, including:

  • Parsed log files such as the systemd journal, audit logs, application-specific logs, and others.
  • Parsed history files such as .bash_history, allowing executed commands to be shown on the timeline.
  • File-specific events, such as files being created, accessed, modified, or executables being run, etc.

This approach allows timestamped information and events from multiple sources to be aggregated and parsed into a single, concise view, greatly simplifying the data review process.
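The aggregation described above can be sketched as a merge of timestamped records from heterogeneous sources into a single sorted view (the event schema and source names here are illustrative assumptions, not the product's internal format):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class TimelineEvent:
    timestamp: datetime
    source: str   # e.g. "journal", "bash_history", "filesystem"
    message: str

def build_timeline(*sources):
    """Merge per-source event lists into one chronological timeline."""
    merged = [event for source in sources for event in source]
    return sorted(merged, key=lambda event: event.timestamp)

# Two tiny example sources whose events interleave in time
journal = [TimelineEvent(datetime(2026, 2, 18, 9, 10, tzinfo=timezone.utc),
                         "journal", "sshd: session opened")]
history = [TimelineEvent(datetime(2026, 2, 18, 9, 9, tzinfo=timezone.utc),
                         "bash_history", "command executed")]
timeline = build_timeline(journal, history)  # earliest event first
```

Sorting on a single normalized timestamp is what lets log entries, shell history, and filesystem metadata appear side by side in one view.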

Alarms are created for specific timeline events that match either a built-in system rule, curated by Darktrace’s Threat Research team, or an operator-defined rule created at the project level. These alarms help quickly filter out noise and highlight events of interest, such as the creation of a file containing known malware, access to sensitive files like Amazon Web Services (AWS) credentials, suspicious arguments or commands, and more.

Figure 4: The timeline view filtered to alarm_severity: “1” OR alarm_severity: “3”, showing only events that matched an alarm rule.

In this case, several alarms were generated for suspicious Base64 arguments being passed to Selenium. Examining the event data, it appears the attacker spawned a Selenium Grid session with the following payload:

"request.payload": "[Capabilities {browserName: chrome, goog:chromeOptions: {args: [-cimport base64;exec(base64...], binary: /usr/bin/python3, extensions: []}, pageLoadStrategy: normal}]"

This is a common attack vector for Selenium Grid. The chromeOptions object is intended to specify arguments for how Google Chrome should be launched; however, in this case the attacker has abused the binary field to execute the Python3 binary instead of Chrome. Combined with the option to specify command-line arguments, the attacker can use Python3’s -c option to execute arbitrary Python code – in this instance, decoding and executing a Base64 payload.
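The execution primitive the attacker relies on can be reproduced harmlessly. The sketch below mirrors what `python3 -c "import base64;exec(base64.b64decode(...))"` does, using a benign stand-in payload (real payloads observed in the wild typically fetch and run a second-stage script instead):

```python
import base64

# Benign stand-in for the attacker's Base64 payload
payload = base64.b64encode(b"result = 'decoded and executed'").decode()

# Equivalent of the -c one-liner: decode the payload, then execute it
# in-process; the assignment lands in the supplied namespace dict
namespace = {}
exec(base64.b64decode(payload), namespace)
```

Because Selenium Grid trusts the client-supplied binary and args fields, any code executed this way runs with the privileges of the Grid service itself.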

Selenium’s logs truncate the Arguments field automatically, so an alternate method is required to retrieve the full payload. To do this, the search bar can be used to find all events that occurred around the same time as this flagged event.

Figure 5: Pivoting off the previous event by filtering the timeline to events within the same window using timestamp: [“2026-02-18T09:09:00Z” TO “2026-02-18T09:12:00Z”].

Scrolling through the search results, an entry from Java’s systemd journal can be identified. This log contains the full, unaltered payload. GCHQ’s CyberChef can then be used to decode the Base64 data into the attacker’s script, which will ultimately be executed.

About the author
Nathaniel Bill
Malware Research Engineer

February 19, 2026

CVE-2026-1731: How Darktrace Sees the BeyondTrust Exploitation Wave Unfolding


Note: Darktrace's Threat Research team is publishing now to help defenders. We will continue updating this blog as our investigations unfold.

Background

On February 6, 2026, Identity and Access Management vendor BeyondTrust announced patches for CVE-2026-1731, a vulnerability that enables unauthenticated remote code execution via specially crafted requests. It affects BeyondTrust Remote Support (RS) and certain older versions of Privileged Remote Access (PRA) [1].

A Proof of Concept (PoC) exploit for this vulnerability was released publicly on February 10, and open-source intelligence (OSINT) reported exploitation attempts within 24 hours [2].

Previous intrusions against BeyondTrust technology have been attributed to nation-state attackers, including a 2024 breach targeting the U.S. Treasury Department. That incident prompted emergency directives from the Cybersecurity and Infrastructure Security Agency (CISA), and later investigation showed the attackers had chained previously unknown vulnerabilities to achieve their goals [3].

Additionally, there appears to be infrastructure overlap with the React2Shell mass exploitation previously observed by Darktrace, with the command-and-control (C2) domain avg.domaininfo[.]top seen in potential post-exploitation activity for BeyondTrust, as well as in a React2Shell exploitation case involving possible EtherRAT deployment.

Darktrace Detections

Darktrace’s Threat Research team has identified highly anomalous activity across several customers that may relate to exploitation of BeyondTrust since February 10, 2026. Observed activities include:

Outbound connections and DNS requests for endpoints associated with Out-of-Band Application Security Testing; these services are commonly abused by threat actors for exploit validation.  Associated Darktrace models include:

  • Compromise / Possible Tunnelling to Bin Services

Suspicious executable file downloads. Associated Darktrace models include:

  • Anomalous File / EXE from Rare External Location

Outbound beaconing to rare domains. Associated Darktrace models include:

  • Compromise / Agent Beacon (Medium Period)
  • Compromise / Agent Beacon (Long Period)
  • Compromise / Sustained TCP Beaconing Activity To Rare Endpoint
  • Compromise / Beacon to Young Endpoint
  • Anomalous Server Activity / Rare External from Server
  • Compromise / SSL Beaconing to Rare Destination

Unusual cryptocurrency mining activity. Associated Darktrace models include:

  • Compromise / Monero Mining
  • Compromise / High Priority Crypto Currency Mining

And model alerts for:

  • Compromise / Rare Domain Pointing to Internal IP

IT Defenders: As part of best practices, we highly recommend employing an automated containment solution in your environment. For Darktrace customers, please ensure that Autonomous Response is configured correctly. More guidance regarding this activity and suggested actions can be found in the Darktrace Customer Portal.  

Appendices

Potential indicators of post-exploitation behavior:

  • 217.76.57[.]78 – IP address – Likely C2 server
  • hXXp://217.76.57[.]78:8009/index.js – URL – Likely payload
  • b6a15e1f2f3e1f651a5ad4a18ce39d411d385ac7 – SHA1 – Likely payload
  • 195.154.119[.]194 – IP address – Likely C2 server
  • hXXp://195.154.119[.]194/index.js – URL – Likely payload
  • avg.domaininfo[.]top – Hostname – Likely C2 server
  • 104.234.174[.]5 – IP address – Possible C2 server
  • 35da45aeca4701764eb49185b11ef23432f7162a – SHA1 – Possible payload
  • hXXp://134.122.13[.]34:8979/c – URL – Possible payload
  • 134.122.13[.]34 – IP address – Possible C2 server
  • 28df16894a6732919c650cc5a3de94e434a81d80 – SHA1 – Possible payload
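The indicators above are defanged (hXXp scheme, bracketed dots) so they cannot be clicked or resolved accidentally. Before loading them into a blocklist or SIEM watchlist they must be refanged; a minimal helper covering only the conventions used in this list:

```python
def refang(ioc: str) -> str:
    """Convert a defanged indicator back to its literal form."""
    return (ioc.replace("hXXp", "http")
               .replace("hxxp", "http")
               .replace("[.]", ".")
               .replace("[:]", ":"))

# e.g. refang("avg.domaininfo[.]top") -> "avg.domaininfo.top"
```

Keep the defanged forms in any document meant for human sharing and refang only at the point of machine ingestion.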

References:

1. https://nvd.nist.gov/vuln/detail/CVE-2026-1731

2. https://www.securityweek.com/beyondtrust-vulnerability-targeted-by-hackers-within-24-hours-of-poc-release/

3. https://www.rapid7.com/blog/post/etr-cve-2026-1731-critical-unauthenticated-remote-code-execution-rce-beyondtrust-remote-support-rs-privileged-remote-access-pra/

About the author
Emma Foulger
Global Threat Research Operations Lead