October 30, 2023

Exploring AI Threats: Package Hallucination Attacks

Learn how malicious actors exploit errors in generative AI tools to launch package hallucination attacks. Read how Darktrace products detect and prevent these threats!
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Charlotte Thompson
Cyber Analyst
Written by
Tiana Kelly
Deputy Team Lead, London & Cyber Analyst

AI tools open doors for threat actors

On November 30, 2022, OpenAI, an artificial intelligence (AI) research and development company, launched ChatGPT, a free conversational language generation model. The launch was the culmination of development ongoing since 2018; it represented the latest innovation in the generative AI boom and made generative AI tools accessible to the general population for the first time.

ChatGPT is estimated to currently have at least 100 million users, and in August 2023 the site reached 1.43 billion visits [1]. Darktrace data indicated that, as of March 2023, 74% of active customer environments have employees using generative AI tools in the workplace [2].

However, with new tools come new opportunities for threat actors to exploit and use them maliciously, expanding their arsenal.

Much consideration has been given to mitigating the impacts of the increased linguistic complexity in social engineering and phishing attacks resulting from generative AI tool use. Darktrace observed a 135% increase in ‘novel social engineering attacks’ across thousands of active Darktrace/Email™ customers from January to February 2023, corresponding with the widespread adoption of ChatGPT and its peers [3].

Less overall consideration, however, has been given to impacts stemming from errors intrinsic to generative AI tools. One of these errors is AI hallucinations.

What is an AI hallucination?

AI “hallucination” is a term referring to instances in which the predictive elements of a generative AI tool or large language model (LLM) produce an unexpected or factually incorrect response that does not align with its machine learning training data [4]. This differs from the intended behavior of an AI model, which should provide a response grounded in the data it was trained on.

Why are AI hallucinations a problem?

Despite the term suggesting a rare phenomenon, hallucinations are far more common than one might expect, as the AI models underpinning LLMs are merely predictive: they optimize for the most probable text or outcome rather than factual accuracy.

Given the widespread use of generative AI tools in the workplace, employees are becoming significantly more likely to encounter an AI hallucination. Furthermore, if these fabricated responses are taken at face value, they could cause significant issues for an organization.

Use of generative AI in software development

Software developers may use generative AI for recommendations on how to optimize their scripts or code, or to find packages to import for various uses. Asking an LLM how to solve a specific coding problem will often lead to the recommendation of a third-party package. Packages recommended by generative AI tools, however, could be AI hallucinations: the packages may not have been published, or, more precisely, may not have been published before the cut-off date of the model’s training data. If a non-existent package is suggested frequently, and developers copy the code snippet wholesale, their software could be left vulnerable to attack.
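One practical safeguard is to extract every package a generated snippet imports before running it, so that each name can be checked against the registry. A minimal sketch using Python's standard-library `ast` module (the function name and snippet are illustrative):

```python
import ast

def imported_modules(source: str) -> set[str]:
    """Return the top-level module names imported by a Python snippet."""
    tree = ast.parse(source)
    mods = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                mods.add(alias.name.split(".")[0])
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            mods.add(node.module.split(".")[0])
    return mods

snippet = "import requests\nfrom fastapi.responses import JSONResponse\nimport os"
print(sorted(imported_modules(snippet)))  # ['fastapi', 'os', 'requests']
```

Each extracted name can then be verified against the relevant registry (PyPI, npm, etc.) before the snippet is ever executed.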

Research conducted by Vulcan revealed the prevalence of AI hallucinations when ChatGPT is asked questions related to coding. After sourcing a sample of commonly asked coding questions from Stack Overflow, a question-and-answer website for programmers, researchers queried ChatGPT (in the context of Node.js and Python) and reviewed its responses. In 20% of the responses provided by ChatGPT pertaining to Node.js at least one un-published package was included, whilst the figure sat at around 35% for Python [4].

Hallucinations can be unpredictable, but would-be attackers are able to find packages to create by asking generative AI tools generic questions and checking whether the suggested packages exist already. As such, attacks using this vector are unlikely to target specific organizations, instead posing more of a widespread threat to users of generative AI tools.

Malicious packages as attack vectors

Although AI hallucinations can be unpredictable, and responses given by generative AI tools may not always be consistent, malicious actors are able to discover AI hallucinations by adopting the approach used by Vulcan. This allows hallucinated packages to be used as attack vectors. Once a malicious actor has discovered a hallucination of an un-published package, they are able to create a package with the same name and include a malicious payload, before publishing it. This is known as a malicious package.

Malicious packages could also be recommended by generative AI tools in the form of pre-existing packages. A user may be recommended a package that had previously been confirmed to contain malicious content, or a package that is no longer maintained and, therefore, is more vulnerable to hijack by malicious actors.

In such scenarios it is not necessary to manipulate the training data (data poisoning) to achieve the desired outcome for the malicious actor, thus a complex and time-consuming attack phase can easily be bypassed.

An unsuspecting software developer may incorporate a malicious package into their code, rendering it harmful. Deployment of this code could then result in compromise and escalation into a full-blown cyber-attack.

Figure 1: Flow diagram depicting the initial stages of an AI Package Hallucination Attack.

For providers of Software-as-a-Service (SaaS) products, this attack vector may represent an even greater risk. Such organizations may have a higher proportion of employed software developers than other organizations of comparable size. A threat actor, therefore, could utilize this attack vector as part of a supply chain attack, whereby a malicious payload becomes incorporated into trusted software and is then distributed to multiple customers. This type of attack could have severe consequences including data loss, the downtime of critical systems, and reputational damage.

How could Darktrace detect an AI Package Hallucination Attack?

In June 2023, Darktrace introduced a range of DETECT™ and RESPOND™ models designed to identify the use of generative AI tools within customer environments, and to autonomously perform inhibitive actions in response to such detections. These models will trigger based on connections to endpoints associated with generative AI tools, as such, Darktrace’s detection of an AI Package Hallucination Attack would likely begin with the breaching of one of the following DETECT models:

  • Compliance / Anomalous Upload to Generative AI
  • Compliance / Beaconing to Rare Generative AI and Generative AI
  • Compliance / Generative AI

Should generative AI tool use not be permitted by an organization, the Darktrace RESPOND model ‘Antigena / Network / Compliance / Antigena Generative AI Block’ can be activated to autonomously block connections to endpoints associated with generative AI, thus preventing an AI Package Hallucination attack before it can take hold.

Once a malicious package has been recommended, it may be downloaded from GitHub, a platform and cloud-based service used to store and manage code. Darktrace DETECT is able to identify when a device has performed a download from an open-source repository such as GitHub using the following models:

  • Device / Anomalous GitHub Download
  • Device / Anomalous Script Download Followed By Additional Packages

The next stages of the attack will depend on the goal the malicious package was designed to fulfil. Due to their highly flexible nature, AI package hallucinations could be used as an attack vector to deliver a wide variety of malware types.

As GitHub is a commonly used service by software developers and IT professionals alike, traditional security tools may not alert customer security teams to such GitHub downloads, meaning malicious downloads may go undetected. Darktrace’s anomaly-based approach to threat detection, however, enables it to recognize subtle deviations in a device’s pre-established pattern of life which may be indicative of an emerging attack.
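The idea of flagging deviations from a device's own baseline, rather than matching known signatures, can be illustrated with a deliberately simplified sketch. This is a toy z-score check under assumed inputs, not Darktrace's actual model:

```python
from statistics import mean, stdev

def is_anomalous(history: list[int], today: int, k: float = 3.0) -> bool:
    """Flag today's count if it sits more than k standard deviations
    away from this device's own historical baseline."""
    mu = mean(history)
    sigma = stdev(history) or 1.0  # avoid a zero threshold for flat baselines
    return abs(today - mu) > k * sigma

# A device that usually makes 2-4 repository downloads a day suddenly makes 40
print(is_anomalous([2, 3, 2, 4, 3], 40))  # True
```

The same volume of downloads from a build server with a noisy baseline would not trip this check, which is the point: the threshold is relative to each device, not global.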

Subsequent anomalous activity representing the possible progression of the kill chain as part of an AI Package Hallucination Attack could then trigger an Enhanced Monitoring model. Enhanced Monitoring models are high-fidelity indicators of potential malicious activity that are investigated by the Darktrace analyst team as part of the Proactive Threat Notification (PTN) service offered by the Darktrace Security Operation Center (SOC).

Conclusion

Employees are often considered the first line of defense in cyber security; this is particularly true in the face of an AI Package Hallucination Attack.

As the use of generative AI becomes more accessible and an increasingly prevalent tool in an attacker’s toolbox, organizations will benefit from implementing company-wide policies to define expectations surrounding the use of such tools. It is simple, yet critical, for example, for employees to fact check responses provided to them by generative AI tools. All packages recommended by generative AI should also be checked by reviewing non-generated data from either external third-party or internal sources. It is also good practice to adopt caution when downloading packages with very few downloads as it could indicate the package is untrustworthy or malicious.
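The vetting advice above can be partly automated: before installing a recommended package, confirm that it actually exists on the registry. A hedged sketch against PyPI's public JSON API, where the `fetch` parameter is an illustrative hook so the check can be exercised without network access:

```python
import urllib.request
from urllib.error import HTTPError

def package_exists(name: str, fetch=None) -> bool:
    """Return True if `name` is published on PyPI.

    `fetch` maps a URL to an HTTP status code; by default it performs
    a real request against PyPI's JSON API, which returns 404 for
    unknown package names.
    """
    url = f"https://pypi.org/pypi/{name}/json"
    if fetch is None:
        def fetch(u):
            try:
                with urllib.request.urlopen(u, timeout=10) as resp:
                    return resp.status
            except HTTPError as err:
                return err.code
    return fetch(url) == 200

# With a stubbed fetcher: an unpublished (possibly hallucinated) name fails the check
print(package_exists("surely-hallucinated-pkg", fetch=lambda u: 404))  # False
```

Existence alone is not sufficient, since an attacker may already have squatted the hallucinated name; download counts, maintainer history, and release dates should be reviewed as well.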

As of September 2023, ChatGPT Plus and Enterprise users were able to use the tool to browse the internet, expanding the data ChatGPT can access beyond the previous training data cut-off of September 2021 [5]. This feature will be expanded to all users soon [6]. ChatGPT providing up-to-date responses could prompt the evolution of this attack vector, allowing attackers to publish malicious packages which could subsequently be recommended by ChatGPT.

It is inevitable that a greater embrace of AI tools in the workplace will be seen in the coming years as the AI technology advances and existing tools become less novel and more familiar. By fighting fire with fire, using AI technology to identify AI usage, Darktrace is uniquely placed to detect and take preventative action against malicious actors capitalizing on the AI boom.

Credit to Charlotte Thompson, Cyber Analyst, Tiana Kelly, Analyst Team Lead, London, Cyber Analyst

References

[1] https://seo.ai/blog/chatgpt-user-statistics-facts

[2] https://darktrace.com/news/darktrace-addresses-generative-ai-concerns

[3] https://darktrace.com/news/darktrace-email-defends-organizations-against-evolving-cyber-threat-landscape

[4] https://vulcan.io/blog/ai-hallucinations-package-risk?nab=1&utm_referrer=https%3A%2F%2Fwww.google.com%2F

[5] https://twitter.com/OpenAI/status/1707077710047216095

[6] https://www.reuters.com/technology/openai-says-chatgpt-can-now-browse-internet-2023-09-27/


July 24, 2025

Untangling the web: Darktrace’s investigation of Scattered Spider’s evolving tactics


What is Scattered Spider?

Scattered Spider is a native English-speaking group, also tracked under, or closely associated with, aliases such as UNC3944, Octo Tempest and Storm-0875. The group is primarily financially motivated, with a clear emphasis on social engineering, SIM swapping attacks, and the abuse of legitimate tooling, including Living-off-the-Land (LOTL) techniques [1][2].

In recent years, Scattered Spider has been observed employing a shift in tactics, leveraging Ransomware-as-a-Service (RaaS) platforms in their attacks. This adoption reflects a shift toward more scalable attacks with a lower barrier to entry, allowing the group to carry out sophisticated ransomware attacks without the need to develop it themselves.

While RaaS offerings have been available for purchase on the Dark Web for several years, they have continued to grow in popularity, providing threat actors a way to cause significant impact to critical infrastructure and organizations without requiring highly technical capabilities [12].

This blog focuses on the group’s recent changes in tactics, techniques, and procedures (TTPs) reported by open-source intelligence (OSINT) and how TTPs in a recent Scattered Spider attack observed by Darktrace compare.

How has Scattered Spider been reported to operate?

First observed in 2022, Scattered Spider is known to target various industries globally including telecommunications, technology, financial services, and commercial facilities.

Overview of key TTPs

Scattered Spider has been known to utilize the following methods which cover multiple stages of the Cyber Kill Chain including initial access, lateral movement, evasion, persistence, and action on objective:

Social engineering [1]:

  • Impersonating staff via phone calls, SMS and Telegram messages to obtain employee credentials (MITRE techniques T1598, T1656) or multi-factor authentication (MFA) codes such as one-time passwords, or convincing employees to run commercial remote access tools enabling initial access (MITRE techniques T1204, T1219, T1566)

  • Phishing using specially crafted domains containing the victim name e.g. victimname-sso[.]com
  • MFA fatigue: sending repeated requests for MFA approval with the intention that the victim will eventually accept (MITRE technique T1621)
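MFA fatigue in particular lends itself to simple rate-based detection: many push prompts for one account in a short span is rarely legitimate. A toy sketch of that logic (the five-prompt threshold and five-minute window are illustrative values, not a recommended tuning):

```python
from collections import deque

def mfa_fatigue(prompt_times: list[float], window: float = 300.0,
                threshold: int = 5) -> bool:
    """Return True if more than `threshold` MFA prompts for a single
    account fall within any `window`-second span."""
    recent: deque[float] = deque()
    for t in sorted(prompt_times):
        recent.append(t)
        # Drop prompts that have aged out of the sliding window
        while t - recent[0] > window:
            recent.popleft()
        if len(recent) > threshold:
            return True
    return False

# Seven push prompts thirty seconds apart trip the check
print(mfa_fatigue([30.0 * i for i in range(7)]))  # True
```

In practice such a rule would feed an identity-provider alert rather than run standalone, and attackers who pace their prompts will evade any fixed window, which is why behavioral baselining complements it.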

SIM swapping [1][3]:

  • Includes hijacking phone numbers to intercept 2FA codes
  • This involves the actor migrating the victim's mobile number to a new SIM card without legitimate authorization

Reconnaissance, lateral movement & command-and-control (C2) communication via use of legitimate tools:

  • Examples include Mimikatz, Ngrok, TeamViewer, and Pulseway [1]. A more recently reported example is Teleport [3].

Financial theft through their access to victim networks: Extortion via ransomware, data theft (MITRE technique T1657) [1]

Bring Your Own Vulnerable Driver (BYOVD) techniques [4]:

  • Exploiting vulnerable drivers to evade detection from Endpoint Detection and Response (EDR) security products (MITRE technique T1068) frequently used against Windows devices.

LOTL techniques

LOTL techniques are also closely associated with Scattered Spider actors once they have gained initial access; historically this has allowed them to evade detection until impact starts to be felt. It also means that specific TTPs may vary from case-to-case, making it harder for security teams to prepare and harden defences against the group.

Prominent Scattered Spider attacks over the years

While attribution is sometimes unconfirmed, Scattered Spider have been linked with a number of highly publicized attacks since 2022.

Smishing attacks on Twilio: In August 2022 the group conducted multiple social engineering-based attacks. One example was an SMS phishing (smishing) attack against the cloud communication platform Twilio, which led to the compromise of employee accounts, allowing actors to access internal systems and ultimately target Twilio customers [5][6].

Phishing and social engineering against MailChimp: Another case involved a phishing and social engineering attack against MailChimp. After gaining access to internal systems through compromised employee accounts the group conducted further attacks specifically targeting MailChimp users within cryptocurrency and finance industries [5][7].

Social engineering against Riot Games: In January 2023, the group was linked with an attack on video game developer Riot Games where social engineering was once again used to access internal systems. This time, the attackers exfiltrated game source code before sending a ransom note [8][9].

Attack on Caesars & MGM: In September 2023, Scattered Spider was linked with attacks on Caesars Entertainment and MGM Resorts International, two of the largest casino and gambling companies in the United States. It was reported that the group gathered nearly six terabytes of stolen data from the hotels and casinos, including sensitive information of guests, and made use of the RaaS strain BlackCat [10].

Ransomware against Marks & Spencer: More recently, in April 2025, the group has also been linked to the alleged ransomware incident against the UK-based retailer Marks & Spencer (M&S) making use of the DragonForce RaaS [11].

How a recent attack observed by Darktrace compares

In May 2025, Darktrace observed a Scattered Spider attack affecting one of its customers. While initial access in this attack fell outside of Darktrace’s visibility, information from the affected customer suggests similar social engineering techniques involving abuse of the customer’s helpdesk and voice phishing (vishing) were used for reconnaissance.

Initial access

It is believed the threat actor took advantage of the customer’s third-party Software-as-a-Service (SaaS) applications, such as Salesforce, during the attack.

Such applications are a prime target for data exfiltration due to the sensitive data they hold; customer, personnel, and business data can all prove useful in enabling further access into target networks.

Techniques used by Scattered Spider following initial access to a victim network tend to vary more widely and so details are sparser within OSINT. However, Darktrace is able to provide some additional insight into what techniques were used in this specific case, based on observed activity and subsequent investigation by its Threat Research team.

Lateral movement

Following initial access to the customer’s network, the threat actor was able to pivot into the customer’s Virtual Desktop Infrastructure (VDI) environment.

Darktrace observed the threat actor spinning up new virtual machines and activating cloud inventory management tools to enable discovery of targets for lateral movement.

In some cases, these virtual machines were not monitored or managed by the customer’s security tools, allowing the threat actor to make use of additional tooling such as AnyDesk which may otherwise have been blocked.

Tooling in further stages of the attack sometimes overlapped with previous OSINT reporting on Scattered Spider, with anomalous use of Ngrok and Teleport observed by Darktrace, likely representing C2 communication. Additional tooling was also seen being used on the virtual machines, such as Pastebin.

Figure 1: Cyber AI Analyst’s detection of C2 beaconing to a teleport endpoint with hostname CUSTOMERNAME.teleport[.]sh, likely in an attempt to conceal the traffic.

Leveraging LOTL techniques

Alongside use of third-party tools that may have been unexpected on the network, various LOTL techniques were observed during the incident; this primarily involved the abuse of standard network protocols such as:

  • SAMR requests to alter Active Directory account details
  • Lateral movement over RDP and SSH
  • Data collection over LDAP and SSH
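Because LOTL abuse rides on protocols that are legitimate in themselves, one usable signal is a device speaking a protocol it has never used before. A simplified sketch of that baseline idea (class and method names are illustrative, not Darktrace's implementation):

```python
class ProtocolBaseline:
    """Learn which protocols each device normally uses, then flag first-time use."""

    def __init__(self) -> None:
        self.seen: dict[str, set[str]] = {}

    def observe(self, device: str, protocol: str) -> bool:
        """Record an observation; return True if this protocol is new for the device."""
        known = self.seen.setdefault(device, set())
        is_new = protocol not in known
        known.add(protocol)
        return is_new

baseline = ProtocolBaseline()
baseline.observe("workstation-7", "LDAP")         # learning phase
print(baseline.observe("workstation-7", "SAMR"))  # True: first SAMR from this device
```

A real deployment would age out stale entries and weight the alert by how rare the protocol is across the estate, but the core signal is the same: new-for-this-device, not new-for-the-network.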

Coordinated exfiltration activity linked through AI-driven analysis

Multiple methods of exfiltration were observed following internal data collection. This included SSH transfers to IPs associated with Vultr, alongside significant uploads to an Amazon S3 bucket.

While connections to this endpoint were not deemed unusual for the network at this stage due to the volume of traffic seen, Darktrace’s Cyber AI Analyst was still able to identify the suspiciousness of this behavior and launched an investigation into the activity.

Cyber AI Analyst successfully correlated seemingly unrelated internal download and external upload activity across multiple devices into a single, broader incident for the customer’s security team to review.
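Correlating an internal collection step with a later external upload from the same device is what turns two low-severity events into one incident. A minimal sketch of that pairing logic (the event shape and one-hour window are assumptions for illustration):

```python
def correlate_exfil(events: list[dict], window: float = 3600.0) -> list[tuple[dict, dict]]:
    """Pair each internal download with any later external upload
    from the same device within `window` seconds."""
    downloads = [e for e in events if e["type"] == "internal_download"]
    uploads = [e for e in events if e["type"] == "external_upload"]
    return [
        (d, u)
        for d in downloads
        for u in uploads
        if d["device"] == u["device"] and 0 <= u["time"] - d["time"] <= window
    ]

events = [
    {"type": "internal_download", "device": "vdi-3", "time": 100.0},
    {"type": "external_upload", "device": "vdi-3", "time": 900.0},
    {"type": "external_upload", "device": "laptop-1", "time": 950.0},
]
print(len(correlate_exfil(events)))  # 1: only vdi-3's download-then-upload pair
```

Neither event alone need be anomalous; it is the download-then-upload sequence on one device that warrants an incident.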

Figure 2: Cyber AI Analyst Incident summary showing a clear outline of the observed activity, including affected devices and the anomalous behaviors detected.
Figure 3: Cyber AI Analyst’s detection of internal data downloads and subsequent external uploads to an Amazon S3 bucket.

Exfiltration and response

Unfortunately, as Darktrace was not configured in Autonomous Response mode at the time, the attack was able to proceed without interruption, ultimately escalating to the point of data exfiltration.

Despite this, Darktrace was still able to recommend several Autonomous Response actions, aimed at containing the attack by blocking the internal data-gathering activity and the subsequent data exfiltration connections.

These actions required manual approval by the customer’s security team and as shown in Figure 3, at least one of the recommended actions was subsequently approved.

Had Darktrace been enabled in Autonomous Response mode, these measures would have been applied immediately, effectively halting the data exfiltration attempts.

Figure 4: Further recommendations for Autonomous Response actions in Darktrace‘s Incident Interface, with surgical response targeting both the internal data collection and subsequent exfiltration.

Scattered Spider’s use of RaaS

In this recent Scattered Spider incident observed by Darktrace, exfiltration appears to have been the primary impact. While no signs of ransomware deployment were observed here, it is possible that this was the threat actors’ original intent, consistent with other recent Scattered Spider attacks involving RaaS platforms like DragonForce.

DragonForce emerged towards the end of 2023, operating by offering their platform and capabilities on a wide scale. They also launched a program which offered their affiliates 80% of the eventual ransom, along with tools for further automation and attack management [13].

The rise of RaaS and attacker customization is fragmenting TTPs and indicators, making it harder for security teams to anticipate and defend against each unique intrusion.

While DragonForce appears to be the latest RaaS used by Scattered Spider, it is not the first, showcasing the ongoing evolution of tactics used by the group.

In addition, the BlackCat RaaS strain was reportedly used by Scattered Spider for their attacks against Caesars Entertainment and MGM Resorts International [10].

In 2024, the group was also seen making use of additional RaaS strains: RansomHub and Qilin [15].

What security teams and CISOs can do to defend against Scattered Spider

The ongoing changes in tactics used by Scattered Spider, reliance on LOTL techniques, and continued adoption of evolving RaaS providers like DragonForce make it harder for organizations and their security teams to prepare their defenses against such attacks.

CISOs and security teams should implement best practices such as MFA, Single Sign-On (SSO), notifications for suspicious logins, forward logging, and ethical phishing tests.

Also, given Scattered Spider’s heavy focus on social engineering, at times using their native English fluency to their advantage, it is critical that IT and help desk teams are reminded that they are possible targets.

Beyond social engineering, the threat actor is also adept at taking advantage of third-party SaaS applications in use by victims to harvest common SaaS data, such as PII and configuration data, that enable the threat actor to take on multiple identities across different domains.

With Darktrace’s Self-Learning AI, anomaly-based detection, and Autonomous Response inhibitors, businesses can halt malicious activities in real-time, whether attackers are using known TTPs or entirely new ones. Offerings such as Darktrace /Attack Surface Management enable security teams to proactively identify signs of malicious activity before it can cause an impact, while more generally Darktrace’s ActiveAI Security Platform can provide a comprehensive view of an organization’s digital estate across multiple domains.

Credit to Justin Torres (Senior Cyber Analyst), Emma Foulger (Global Threat Research Operations Lead), Zaki Al-Dhamari (Cyber Analyst), Nathaniel Jones (VP, Security & AI Strategy, FCISO), and Ryan Traill (Analyst Content Lead)

---------------------

The information provided in this blog post is for general informational purposes only and is provided "as is" without any representations or warranties, express or implied. While Darktrace makes reasonable efforts to ensure the accuracy and timeliness of the content related to cybersecurity threats such as Scattered Spider, we make no warranties or guarantees regarding the completeness, reliability, or suitability of the information for any purpose.

This blog post does not constitute professional cybersecurity advice, and should not be relied upon as such. Readers should seek guidance from qualified cybersecurity professionals or legal counsel before making any decisions or taking any actions based on the content herein.

No warranty of any kind, whether express or implied, including, but not limited to, warranties of performance, merchantability, fitness for a particular purpose, or non-infringement, is given with respect to the contents of this post.

Darktrace expressly disclaims any liability for any loss or damage arising from reliance on the information contained in this blog.

Appendices

References

[1] https://www.cisa.gov/news-events/cybersecurity-advisories/aa23-320a

[2] https://attack.mitre.org/groups/G1015/

[3] https://www.rapid7.com/blog/post/scattered-spider-rapid7-insights-observations-and-recommendations/

[4] https://www.crowdstrike.com/en-us/blog/scattered-spider-attempts-to-avoid-detection-with-bring-your-own-vulnerable-driver-tactic/

[5] https://krebsonsecurity.com/2024/06/alleged-boss-of-scattered-spider-hacking-group-arrested/?web_view=true

[6] https://www.cxtoday.com/crm/uk-teenager-accused-of-hacking-twilio-lastpass-mailchimp-arrested/

[7] https://mailchimp.com/newsroom/august-2022-security-incident/

[8] https://techcrunch.com/2023/02/02/0ktapus-hackers-are-back-and-targeting-tech-and-gaming-companies-says-leaked-report/

[9] https://www.pcmag.com/news/hackers-behind-riot-games-breach-stole-league-of-legends-source-code

[10] https://www.bbrown.com/us/insight/a-look-back-at-the-mgm-and-caesars-incident/

[11] https://cyberresilience.com/threatonomics/scattered-spider-uk-retail-attacks/

[12] https://www.crowdstrike.com/en-us/cybersecurity-101/ransomware/ransomware-as-a-service-raas/

[13] https://www.group-ib.com/blog/dragonforce-ransomware/

[14] https://blackpointcyber.com/wp-content/uploads/2024/11/DragonForce.pdf

[15] https://x.com/MsftSecIntel/status/1812932749314978191?lang=en

Select MITRE tactics associated with Scattered Spider

Tactic - Technique - Technique Name

Reconnaissance - T1598 - Phishing for Information

Initial Access - T1566 - Phishing

Execution - T1204 - User Execution

Privilege Escalation - T1068 - Exploitation for Privilege Escalation

Defense Evasion - T1656 - Impersonation

Credential Access - T1621 - Multi-Factor Authentication Request Generation

Lateral Movement - T1021 - Remote Services

Command and Control - T1102 - Web Service

Command and Control - T1219 - Remote Access Tools

Command and Control - T1572 - Protocol Tunneling

Exfiltration - T1567 - Exfiltration Over Web Service

Impact - T1657 - Financial Theft

Select MITRE tactics associated with DragonForce

Tactic - Technique - Technique Name

Initial Access, Defense Evasion, Persistence, Privilege Escalation - T1078 - Valid Accounts

Initial Access, Persistence - T1133 - External Remote Services

Initial Access - T1190 - Exploit Public-Facing Application

Initial Access - T1566 - Phishing

Execution - T1047 - Windows Management Instrumentation

Privilege Escalation - T1068 - Exploitation for Privilege Escalation

Lateral Movement - T1021 - Remote Services

Impact - T1486 - Data Encrypted for Impact

Impact - T1657 - Financial Theft

Select Darktrace models

Compliance / Internet Facing RDP Server

Compliance / Incoming Remote Access Tool

Compliance / Remote Management Tool on Server

Anomalous File / Internet Facing System File Download

Anomalous Server Activity / New User Agent from Internet Facing System

Anomalous Connection / Callback on Web Facing Device

Device / Internet Facing System with High Priority Alert

Anomalous Connection / Unusual Admin RDP

Anomalous Connection / High Priority DRSGetNCChanges

Anomalous Connection / Unusual Internal SSH

Anomalous Connection / Active Remote Desktop Tunnel

Compliance / Pastebin

Anomalous Connection / Possible Tunnelling to Rare Endpoint

Compromise / Beaconing Activity to External Rare

Device / Long Agent Connection to New Endpoint

Compromise / SSH to Rare External AWS

Compliance / SSH to Rare External Destination

Anomalous Server Activity / Outgoing from Server

Anomalous Connection / Large Volume of LDAP Download

Unusual Activity / Internal Data Transfer on New Device

Anomalous Connection / Download and Upload

Unusual Activity / Enhanced Unusual External Data Transfer

Compromise / Ransomware/Suspicious SMB Activity



July 23, 2025

Closing the Cloud Forensics and Incident Response Skills Gap


Every alert that goes uninvestigated is a calculated risk — and teams are running out of room for error

Security operations today are stretched thin. SOCs face an overwhelming volume of alerts, and the shift to cloud has only made triage more complex.

Our research suggests that 23% of cloud alerts are never investigated, leaving risk on the table.

The rapid migration to cloud resources has security teams playing catch up. As teams attempt to apply traditional on-prem tools to the cloud, it is becoming increasingly clear that those tools are not fit for purpose. Especially in the context of forensics and incident response, the cloud presents unique complexities that demand cloud-specific solutions.

Organizations are increasingly adopting services from multiple cloud platforms (in fact, recent studies suggest 89% of organizations now operate multi-cloud environments), and container-based and serverless setups have become the norm. Security analysts already have enough on their plates; it’s unrealistic to expect them to be cloud experts too.

Why Digital Forensics and Incident Response (DFIR) roles are so hard to fill

Compounding these issues of alert fatigue and cloud complexity, there is a lack of DFIR talent. The cybersecurity skills gap is a well-known problem.

According to the 2024 ISC2 Cybersecurity Workforce Study, there is a global shortage of 4.8 million cybersecurity workers, up 19% from the previous year.

Why is this such an issue?

  • Highly specialized skill set: DFIR professionals need to have a deep understanding of various operating systems, network protocols, and security architectures, even more so when working in the cloud. They also need to be proficient in using a wide range of forensic tools and techniques. This level of expertise takes a lot of time and effort to develop.
  • Rapid technological changes: The cloud landscape is constantly changing and evolving with new services, monitoring tools, security mechanisms, and threats emerging regularly. Keeping up with these changes and staying current requires continuous learning and adaptation.
  • Lack of formal education and training: There are limited educational programs dedicated specifically to DFIR, and the discipline of cloud DFIR has yet to be formally defined. While some universities and institutions offer courses or certifications in digital forensics, they may not cover the full range of knowledge required in real-world incident response scenarios, especially for cloud-based environments.
  • High-stress nature of the job: DFIR professionals often work under tight deadlines in high-pressure situations, especially when handling security incidents. This can lead to burnout and high turnover rates in the profession.

Bridging the skills gap with usable cloud digital forensics and incident response tools  

To help organizations close the DFIR skills gap, it's critical to modernize DFIR approaches and tooling for the cloud era. Modern cloud forensics and incident response platforms must prioritize usability in order to up-level security teams. A platform that is easy to use has the power to:

  • Make experienced analysts more efficient, so they can take on more cases
  • Uplevel novice analysts to perform more advanced tasks than ever before
  • Eliminate cloud complexity, such as the complexity introduced by multi-cloud environments and container-based and serverless setups

What to look for in cloud forensics and incident response solutions

The following features greatly improve the impact of cloud forensics and incident response:

Data enrichment: Automated correlation of collected data with threat intelligence feeds, both external and proprietary, delivers immediate insight into suspicious or malicious activities. Data enrichment expedites investigations, enabling analysts to seamlessly pivot from key events and delve deeper into the raw data.
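
As a rough illustration of this kind of enrichment, the sketch below tags collected events whose destination appears in a threat-intelligence feed. The event fields and feed contents are hypothetical examples, not Darktrace data or APIs.

```python
# Minimal sketch of data enrichment: attach a threat-intel verdict to each
# collected event whose destination indicator is known bad.
# All indicators and event shapes here are illustrative assumptions.
THREAT_INTEL = {
    "198.51.100.23": "known C2 infrastructure",
    "evil-updates.example": "malware distribution domain",
}

events = [
    {"ts": "2025-07-01T10:02:11Z", "src": "10.0.4.7", "dst": "198.51.100.23"},
    {"ts": "2025-07-01T10:05:43Z", "src": "10.0.4.7", "dst": "203.0.113.9"},
]

def enrich(events, intel):
    """Label each event with a feed verdict, or 'no match' if unknown."""
    for event in events:
        event["verdict"] = intel.get(event["dst"], "no match")
    return events

for e in enrich(events, THREAT_INTEL):
    print(e["ts"], e["dst"], "->", e["verdict"])
```

In practice the feed lookup would be an external or proprietary intelligence service rather than an in-memory dictionary, but the pivot it enables (from a flagged event straight into the raw data) is the same.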

Single timeline view: A unified perspective across various cloud platforms and data sources is crucial. A single timeline view empowers security teams to seamlessly navigate evidence based on timestamps, events, users, and more, enhancing investigative efficiency. Pulling together a timeline has historically been a time-consuming task with traditional approaches.
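
The core of such a timeline is normalizing differently shaped events onto one time axis. A minimal sketch, assuming illustrative event shapes for two providers (the field names are examples, not any vendor's actual schema):

```python
# Minimal sketch of a unified timeline: normalize events from different
# cloud sources into one record shape, then sort by timestamp.
from datetime import datetime

aws_events = [{"eventTime": "2025-07-01T10:02:11Z", "eventName": "ConsoleLogin"}]
azure_events = [{"time": "2025-07-01T09:58:40Z", "operationName": "Create VM"}]

def normalize(event, source):
    """Map a provider-specific event onto a common (ts, source, action) shape."""
    raw_ts = event.get("eventTime") or event.get("time")
    ts = datetime.fromisoformat(raw_ts.replace("Z", "+00:00"))
    action = event.get("eventName") or event.get("operationName")
    return {"ts": ts, "source": source, "action": action}

timeline = sorted(
    [normalize(e, "aws") for e in aws_events]
    + [normalize(e, "azure") for e in azure_events],
    key=lambda item: item["ts"],
)
for item in timeline:
    print(item["ts"].isoformat(), item["source"], item["action"])
```

Once everything shares one clock, pivoting by user, event type, or time window becomes a filter over a single list instead of a manual cross-reference between consoles.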

Saved search: Preserving queries during investigations allows analysts to re-execute complex searches or share them with colleagues, increasing efficiency and collaboration.
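
One way to sketch the idea, assuming a deliberately simple key-value query format (purely illustrative, not a real query language):

```python
# Minimal sketch of saved searches: store named queries so an analyst can
# re-run them later or hand them to a colleague.
saved = {}

def save_search(name, query):
    """Persist a query under a shareable name."""
    saved[name] = query

def run_search(name, records):
    """Re-execute a saved query: keep records matching every field in it."""
    query = saved[name]
    return [r for r in records if all(r.get(k) == v for k, v in query.items())]

save_search("failed-logins", {"event": "login", "outcome": "failure"})
records = [
    {"event": "login", "outcome": "failure", "user": "alice"},
    {"event": "login", "outcome": "success", "user": "bob"},
]
print(run_search("failed-logins", records))  # only alice's failed login
```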

Faceted search: Faceted search options provide analysts with quick insights into core data attributes, facilitating efficient dataset refinement.
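
Under the hood, a facet is just a value count over a core attribute. A minimal sketch with illustrative field names:

```python
# Minimal sketch of faceted search: count the distinct values of selected
# attributes so an analyst can see at a glance how to refine the dataset.
from collections import Counter

records = [
    {"user": "alice", "event": "login", "region": "us-east-1"},
    {"user": "bob", "event": "login", "region": "eu-west-1"},
    {"user": "alice", "event": "s3_get", "region": "us-east-1"},
]

def facets(records, fields):
    """Return value counts for each requested facet field."""
    return {f: Counter(r[f] for r in records) for f in fields}

print(facets(records, ["user", "region"]))
# e.g. the user facet shows alice twice and bob once, suggesting
# "user = alice" as a useful filter to apply next
```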

Cross-cloud investigations: Analyzing evidence acquired from multiple cloud providers in a single platform is crucial for security teams. A unified view and timeline across clouds is critical to streamlining investigations.

How Darktrace can help

Darktrace’s cloud offerings have been bolstered with the acquisition of Cado Security Ltd., which enables security teams to gain immediate access to forensic-level data in multi-cloud, container, serverless, SaaS, and on-premises environments.

Not only does Darktrace offer centralized automation solutions for cloud forensics and investigation, but it also delivers a proactive approach to Cloud Detection and Response (CDR). Darktrace / CLOUD is built with advanced AI to make cloud security accessible to all security teams and SOCs. By using multiple machine learning techniques, Darktrace brings unprecedented visibility, threat detection, investigation, and incident response to hybrid and multi-cloud environments.

