July 9, 2019

Insights on Shamoon 3 Data-Wiping Malware

Gain insights into Shamoon 3 and learn how to protect your organization from its destructive capabilities.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Max Heinemeyer
Global Field CISO

Responsible for some of the “most damaging cyber-attacks in history” since 2012, the Shamoon malware wipes compromised hard drives and overwrites key system processes, intending to render infected machines unusable. During a trial period in the network of a global company, Darktrace observed a Shamoon-powered cyber-attack on December 10, 2018 — when several Middle Eastern firms were impacted by a new variant of the malware.

While there has been detailed reporting on the malware files and wiper modules that these latest Shamoon attacks employed, the complete cyber kill chain remains poorly understood, and the intrusions that led to the malware’s eventual “detonation” last December have received far less coverage. This blog post therefore focuses on the insights that Darktrace’s cyber AI generated regarding (a) the activity of the infected devices during the “detonation” and (b) the indicators of compromise that most likely represent lateral movement activity during the weeks prior.

A high-level overview of major events leading up to the detonation on December 10th.

In what follows, we dive into that timeline in reverse chronological order, going back in time to trace the origins of the attack. Let’s begin with zero hour.

December 10: 42 devices “detonate”

A bird's-eye perspective of how Darktrace identified the alerts in December 2018.

What immediately strikes the analyst’s eye is the fact that a large accumulation of alerts, indicated by the red rectangle above, took place on December 10, followed by complete network silence over the subsequent four days.

These highlighted alerts represent Darktrace’s detection of unusual network scans on remote port 445 that were conducted by 42 infected devices. These devices proceeded to scan more machines — none of which were among those already infected. Such behavior indicates that the compromised devices started scanning and were wiped independently from each other, instead of conducting worming-style activity during the detonation of the malware. The initial scanning device started its scan at 12:56 p.m. UTC, while the last scanning device started its scan at 2:07 p.m. UTC.
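To make the kind of detection involved more concrete, here is a minimal, hypothetical sketch of a volumetric detector for this behavior: it flags any device that suddenly contacts a large number of previously unseen internal hosts on TCP port 445 within a short window. The record fields (src, dst, dst_port, ts), the threshold, and the window size are all assumptions for illustration; this is not Darktrace’s implementation, which relies on learned per-device behavior rather than fixed thresholds.

```python
from collections import defaultdict
from datetime import timedelta

SCAN_THRESHOLD = 50            # new port-445 peers inside the window to call it a scan
WINDOW = timedelta(minutes=10)

def find_smb_scanners(connections):
    """Flag devices that contact many previously unseen hosts on TCP 445.

    `connections` is an iterable of records with .src, .dst, .dst_port and
    .ts (a datetime), assumed to be sorted by timestamp. Illustrative only.
    """
    known_peers = defaultdict(set)   # src -> hosts it has already talked to on 445
    recent_new = defaultdict(list)   # src -> timestamps of connections to new hosts
    scanners = {}

    for conn in connections:
        if conn.dst_port != 445:
            continue
        if conn.dst in known_peers[conn.src]:
            continue
        known_peers[conn.src].add(conn.dst)
        window = [t for t in recent_new[conn.src] if conn.ts - t <= WINDOW]
        window.append(conn.ts)
        recent_new[conn.src] = window
        if len(window) >= SCAN_THRESHOLD:
            scanners.setdefault(conn.src, conn.ts)   # first time the device tripped

    return scanners   # {device: time the scan was first detected}
```

Applied to the December 10 traffic, a check like this would surface each of the 42 devices independently, consistent with the observation that they scanned and were wiped without coordinating with one another.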

Not only was this activity readily apparent from the bird’s-eye perspective shown above, but the detonating devices also generated the highest-priority Darktrace alerts over a several-day period: “Device / Network Scan” and “Device / Expanded Network Scan”:

Moreover, when investigating “Devices — Overall Score,” the detonating devices rank as the most critical assets for the time period December 8–11:

Darktrace’s AI generated all of the above alerts because they represented significant anomalies from the normal ‘pattern of life’ it had learned for each user and device on the company’s network. Crucially, none of the alerts were the product of predefined ‘rules and signatures’, the mechanism that conventional security tools rely on to detect cyber-threats. Rather, the AI revealed the activity because the scans were unusual for those devices given their precise nature and timing, demonstrating the value of such a nuanced approach in catching elusive threats like Shamoon. Also important is that the company’s network consists of around 15,000 devices, meaning a rules-based approach without the ability to prioritize the most serious threats would have drowned the Shamoon alerts in noise.
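As a rough illustration of the ‘pattern of life’ idea, and emphatically not a description of Darktrace’s actual models, the toy baseline below scores how unusual a connection is for a device by comparing it against the destination ports and hours of day that device has historically used. A real system would model many more dimensions (peers, protocols, data volumes, credential use) and combine them probabilistically.

```python
from collections import defaultdict

class DeviceBaseline:
    """Toy per-device 'pattern of life': which destination ports a device
    uses and at which hours of the day. Purely illustrative."""

    def __init__(self):
        self.port_counts = defaultdict(int)
        self.hour_counts = defaultdict(int)
        self.total = 0

    def observe(self, dst_port, hour):
        """Update the baseline with one observed connection."""
        self.port_counts[dst_port] += 1
        self.hour_counts[hour] += 1
        self.total += 1

    def anomaly_score(self, dst_port, hour):
        """Return a score between 0 (entirely normal for this device)
        and 1 (never seen anything like it)."""
        if self.total == 0:
            return 1.0
        port_rarity = 1.0 - self.port_counts[dst_port] / self.total
        hour_rarity = 1.0 - self.hour_counts[hour] / self.total
        return (port_rarity + hour_rarity) / 2
```

A midday burst of port-445 scanning from a device that has never touched SMB would score close to 1 even under this crude scheme, which is the intuition behind why the December 10 activity stood out so sharply.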

Now that we’ve seen how cyber AI sounded the alarms during the detonation itself, let’s investigate the various indicators of suspicious lateral movement that precipitated the events of December 10. Most of this activity happened in brief bursts, each of which could have been spotted and remediated if Darktrace had been closely monitored.

November 19: Unusual Remote PowerShell Usage (WinRM)

One such burst of unusual activity occurred on November 19, when Darktrace detected 14 devices, desktops and servers alike, successfully using the WinRM protocol. None of these devices had previously used WinRM, and the protocol is unusual for the organization’s environment as a whole. At the same time, Remote PowerShell over WinRM is frequently abused for lateral movement during intrusions. The devices involved were not traditional administrative devices, making their use of WinRM even more suspicious.

Note the clustering of the WinRM activity as indicated by the timestamp on the left.
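A simplified version of this “first time a device uses a protocol” logic might look like the sketch below, keyed on the default WinRM ports 5985 and 5986. The record shape and state handling are assumptions for illustration only, not how Darktrace models protocol usage.

```python
WINRM_PORTS = {5985, 5986}   # default WinRM HTTP/HTTPS listener ports

def first_time_winrm(connections, seen_winrm=None):
    """Yield (device, timestamp) the first time each device is observed
    initiating WinRM. `connections` needs .src, .dst_port and .ts fields;
    pass `seen_winrm` to carry state across batches. Illustrative only."""
    seen_winrm = set() if seen_winrm is None else seen_winrm
    for conn in connections:
        if conn.dst_port in WINRM_PORTS and conn.src not in seen_winrm:
            seen_winrm.add(conn.src)
            yield conn.src, conn.ts
```

Fourteen distinct, non-administrative devices tripping such a check within a short interval is exactly the clustering visible in the timestamps above.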

October 29–31: Scanning, Unusual PsExec & RDP Brute Forcing

Another burst of likely lateral movement occurred between October 29 and 31, when two servers were seen using PsExec in an unusual fashion. No PsExec activity had been observed in the network before or after these detections, prompting Darktrace to flag the behavior. One of the servers conducted an ICMP ping sweep shortly before the lateral movement. Not only did both servers start using PsExec on the same day, but they also used SMBv1, which, again, was very unusual for the network.

Most legitimate administrative activity involving PsExec these days uses SMBv2. The graphic below shows several Darktrace alerts on one of the servers involved; take note of the chronology of detections at the bottom of the graphic. It reads like an attacker’s diary: ICMP scan, SMBv1 usage, and unusual PsExec usage, followed by new remote service controls. This server was among the top five highest-ranking devices during the analyzed time period and was easy to identify.

Following the PsExec use, the servers also started an anomalous number of remote services via the srvsvc and svcctl pipes over SMB. They did so by starting services on remote devices with which they usually did not communicate, again over SMBv1. Some of the attempted communication failed due to access violation and access permission errors, both of which are often seen during malicious lateral movement.

Additional context around the SMBv1 and remote srvsvc pipe activity. Note the access failure.

Thanks to Darktrace’s deep packet inspection, we can see exactly what happened on the application layer. Darktrace highlights any unusual or new activity in italics below the connections. We can easily see that the SMB activity is unusual not only because SMBv1 was used, but also because this server had never performed this type of remote SMB activity against those particular destinations before. We can also observe remote access to the winreg pipe, likely indicating further lateral movement and the establishment of persistence mechanisms.
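For readers who want to hunt for this pattern themselves, the hypothetical helper below combines the three signals described above: access to the svcctl, srvsvc or winreg pipes, the use of SMBv1, and a source/destination pair with no prior history. The event fields and the historical-pair set are assumptions for illustration; they are not a Darktrace API.

```python
SUSPICIOUS_PIPES = {"svcctl", "srvsvc", "winreg"}

def flag_remote_service_control(smb_events, historical_pairs):
    """Flag SMB named-pipe activity matching the lateral movement pattern
    described above.

    `smb_events` needs .src, .dst, .smb_version and .pipe_name fields;
    `historical_pairs` is the set of (src, dst) pairs seen during a
    learning period. Hypothetical sketch for illustration only."""
    alerts = []
    for ev in smb_events:
        pipe = ev.pipe_name.lower().lstrip("\\")
        if pipe not in SUSPICIOUS_PIPES:
            continue
        reasons = [f"access to {pipe} pipe"]
        if ev.smb_version == 1:
            reasons.append("SMBv1 in use")
        if (ev.src, ev.dst) not in historical_pairs:
            reasons.append("new source/destination pair")
        if len(reasons) >= 2:            # require more than the pipe name alone
            alerts.append((ev.src, ev.dst, reasons))
    return alerts
```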

The other server conducted some targeted address scanning on the network on October 29, employing typical lateral movement ports 135, 139 and 445:

Another device, a desktop, was observed conducting RDP brute forcing on October 29, around the same time as the address scan above. It made an unusually high number of RDP connections to another internal server.

A clear plateau in increased internal connections (blue) can be seen. Every colored dot on top represents an RDP brute force detection. This was again a clear-cut detection not drowned in other noise — these were the only RDP brute force detections for a several-month monitoring time window.
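A crude version of this volumetric RDP check is sketched below: it counts connections to TCP 3389 per source/destination pair inside a sliding window. The threshold and window are placeholder values, and a real detector would also weigh failed authentications and the pair’s connection history, in line with the anomaly-based approach described in this post.

```python
from collections import defaultdict
from datetime import timedelta

RDP_PORT = 3389
BRUTE_FORCE_THRESHOLD = 100    # connections to one host inside the window
WINDOW = timedelta(minutes=30)

def detect_rdp_bruteforce(connections):
    """Flag (source, destination) pairs with an unusually high rate of RDP
    connections. Assumes time-ordered records with .src, .dst, .dst_port
    and .ts fields. Illustrative only."""
    attempts = defaultdict(list)   # (src, dst) -> timestamps inside the window
    flagged = set()
    for conn in connections:
        if conn.dst_port != RDP_PORT:
            continue
        key = (conn.src, conn.dst)
        attempts[key] = [t for t in attempts[key] if conn.ts - t <= WINDOW]
        attempts[key].append(conn.ts)
        if len(attempts[key]) >= BRUTE_FORCE_THRESHOLD:
            flagged.add(key)
    return flagged
```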

October 9–11: Unusual Credential Usage

Darktrace identifies the unusual use of credentials, for instance when administrative credentials are used on a client device on which they are not commonly seen. This can indicate lateral movement in which service accounts or local admin accounts have been compromised.

Darktrace identified another cluster of activity that likely represents lateral movement, this time involving unusual credential usage. Between October 9 and 11, Darktrace identified 17 cases of new administrative credentials being used on client devices. While new administrative credentials appeared from time to time on devices as part of normal administrative activity, this strong clustering of unusual admin credential usage stood out. Darktrace also identified the source devices of some of these credentials as unusual.
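Conceptually, the check reduces to comparing each privileged logon against a per-account baseline of hosts, as in the hypothetical sketch below. The event shape (for example, fields parsed from Windows 4624 logon records) and the baseline structure are assumptions for illustration.

```python
def new_admin_credential_use(logon_events, baseline, admin_accounts):
    """Flag administrative accounts appearing on hosts where they were not
    seen during the baseline period.

    `logon_events` needs .account, .host and .ts fields (for example,
    parsed Windows 4624 logon records); `baseline` maps account -> set of
    hosts seen historically; `admin_accounts` is the set of privileged
    accounts. Hypothetical helper for illustration only."""
    findings = []
    for ev in logon_events:
        if ev.account not in admin_accounts:
            continue
        if ev.host not in baseline.get(ev.account, set()):
            findings.append((ev.ts, ev.account, ev.host))
    return findings
```

Seventeen such findings clustered into a three-day window is the kind of grouping that stands out against the occasional legitimate hit a check like this would otherwise produce.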

Conclusion

Having observed a live Shamoon infection through Darktrace, we can draw a few key takeaways. While the actual detonation on December 10 was automated, the intrusion that built up to it was most likely manual. The fact that all detonating devices started their malicious activity at roughly the same time, without scanning each other, indicates that the payload went off based on a trigger like a scheduled task. This is in line with other reporting on Shamoon 3.

In the weeks leading up to December 10, there were various significant signs of lateral movement that occurred in disparate bursts — indicating a ‘low-and-slow’ manual intrusion.

The adversaries used classic lateral movement techniques like RDP brute forcing, PsExec, WinRM usage, and the abuse of stolen administrative credentials.

While the organization in question had a robust security posture, an attacker only needs to exploit one vulnerability to bring down an entire system. During the lifecycle of the attack, the Darktrace Enterprise Immune System identified the threatening activity in real time and provided numerous suggested actions that could have stopped the Shamoon attack at various stages. However, human action was not taken, and the organization had yet to activate Antigena, Darktrace’s autonomous response solution, which could have acted in the security team’s stead.

Despite having limited scope during the trial period, the Enterprise Immune System was able to detect the lateral movement and the detonation of the payload, both indicative of malicious Shamoon activity. A junior analyst could easily have identified the activity, as high-severity alerts were consistently generated and the likely infected devices sat at the top of the suspicious devices list.

Darktrace Antigena would have blocked the lateral movement responsible for the malware’s spread, while also sending high-severity alerts to the security team to investigate the activity. Even the scanning on port 445 from the detonating devices would have been shut down, as it represented a significant deviation from the known behavior of those devices. This would have further limited the spread and, ultimately, spared the company and its devices from the attack.


Cloud

August 14, 2025

How Organizations are Addressing Cloud Investigation and Response


Why cloud investigation and response needs to evolve

As organizations accelerate their move to the cloud, they’re confronting two interrelated pressures: a rapidly expanding attack surface and rising regulatory scrutiny. This dual pressure is forcing security practitioners to evolve their strategies in the cloud, particularly around investigation and response, where analysts spend the most time. This work is especially difficult in the cloud, often requiring experienced analysts to manually stitch together evidence across fragmented systems, unfamiliar platforms, and short-lived assets.

However, adapting isn’t easy. Many teams are operating with limited budgets and face a shortage of cloud-specific security talent. That’s why more organizations are now prioritizing tools that not only deliver deep visibility and rapid response in the cloud, but also help upskill their analysts to keep pace with threats and compliance demands.

Our 2024 survey report highlights how organizations are recognizing gaps in their cloud security, feeling the heat from regulators, and making significant investments to bolster their cloud investigation capabilities.

In this blog post, we’ll explore the current challenges, approaches, and strategies organizations are employing to enhance their cloud investigation and incident response.

Recognizing the gaps in current cloud investigation and response methods

Complex environments & static tools

Given the dynamic nature of cloud infrastructure, with its ephemeral assets, autoscaling environments, and multi-cloud complexity, traditional investigation and response methods that rely on static snapshots and point-in-time data are fundamentally mismatched. And because cloud provider APIs require deep provider knowledge and scripting skills to extract much-needed evidence, it is unrealistic to expect one person to master every aspect of cloud incident response.

Analysts are still stitching together logs from fragmented systems, manually correlating events, and relying on post-incident forensics that often arrive too late to drive meaningful response. These approaches were built for environments that rarely changed. In the cloud, where assets may only exist for minutes and attacker movement can span regions or accounts in seconds, point-in-time visibility simply can’t keep up. As a result, critical evidence is missed, timelines are incomplete, and investigations drag on longer than they should.

Even some modern approaches still depend heavily on static configurations, delayed snapshots, or siloed visibility that can’t keep pace with real-time attacker movement.

There is also the problem of identifying which cloud data sources hold the information needed to investigate in the first place. With AWS alone offering over 200 products, each with its own security practices and data sources, it can be challenging to know where to look.

To truly secure the cloud, investigation and response must be continuous, automated, and context-rich. Tools should be able to surface the signal from the noise and support analysts at every step, even without deep forensics expertise.

Increasing compliance pressure

With the rise of data privacy regulations and incident reporting mandates worldwide, organizations face heightened scrutiny. Noncompliance can lead to severe penalties, making it crucial to have robust cloud investigation and response mechanisms in place. 74% of organizations surveyed reported that data privacy regulations complicate incident response, underscoring the urgency to adapt to regulatory requirements.

In addition, a majority of organizations surveyed (89%) acknowledged that they suffer damage before they can fully contain and investigate incidents, particularly in cloud environments, highlighting the need for enhanced cloud capabilities.  

Enhancing cloud investigation and response

To address these challenges, organizations are actively growing their capabilities to perform investigations in the cloud. Key steps include:

Allocating and increasing budgets:  

Recognizing the importance of cloud-specific investigation tools, many organizations have started to allocate dedicated budgets for cloud forensics. 83% of organizations have budgeted for cloud forensics, with 77% expecting this budget to increase. This reflects a strong commitment to improving cloud security.

Implementing automation that understands cloud behavior

Automation isn’t just about speeding up tasks. Modern threats demand speed and efficiency from defenders, and automation delivers this by enabling consistent decision-making across unique and dynamic environments. Traditional SOAR platforms, often designed for static on-prem environments, struggle to keep pace with the dynamic and ephemeral nature of the cloud, where resources can disappear before a human analyst even has a chance to look at them. Cloud-native automation, designed to act on transient infrastructure and integrate seamlessly with cloud APIs, is rapidly emerging as the more effective approach for real-time investigation and response. Automation can cover the collection, processing, and storage of incident evidence without waiting for human intervention, so the evidence is ready and waiting in one place, whether it consists of cloud-provider logs, disk images, or memory dumps. With the right tools, you can go further and automate the full process end to end, covering acquisition, processing, analysis, and response.
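To illustrate what automating just the acquisition step can look like, the sketch below uses boto3 to snapshot every EBS volume attached to an EC2 instance and tag the snapshots with a case identifier, so the evidence outlives the instance itself. The instance ID, case ID, region, and tagging scheme are placeholders, and this is not how Darktrace’s product is implemented; IAM permissions and error handling are omitted for brevity.

```python
import boto3

def snapshot_instance_volumes(instance_id, case_id, region="us-east-1"):
    """Create EBS snapshots of every volume attached to a (possibly
    short-lived) EC2 instance so the evidence survives the instance.
    Placeholder identifiers; error handling and permissions omitted."""
    ec2 = boto3.client("ec2", region_name=region)
    reservations = ec2.describe_instances(InstanceIds=[instance_id])["Reservations"]
    snapshot_ids = []
    for reservation in reservations:
        for instance in reservation["Instances"]:
            for mapping in instance.get("BlockDeviceMappings", []):
                volume_id = mapping["Ebs"]["VolumeId"]
                snapshot = ec2.create_snapshot(
                    VolumeId=volume_id,
                    Description=f"IR evidence for case {case_id} ({instance_id})",
                    TagSpecifications=[{
                        "ResourceType": "snapshot",
                        "Tags": [{"Key": "case", "Value": case_id}],
                    }],
                )
                snapshot_ids.append(snapshot["SnapshotId"])
    return snapshot_ids   # hand these to the processing/analysis stage
```

Scripts like this are the building blocks; the value of end-to-end automation comes from chaining acquisition into processing and analysis so nothing waits on an analyst being available.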

Artificial Intelligence (AI) that augments analysts’ intuition, not just their speed

While many vendors tout AI’s ability to “analyze large volumes of data,” that’s table stakes. The real differentiator is how AI understands the narrative of an incident, surfacing high-fidelity alerts, correlating attacker movement across cloud and hybrid environments, and presenting findings in a way that upskills rather than overwhelms analysts.  

In this space, AI isn’t just accelerating investigations, it’s democratizing them by reducing the reliance on highly specialized forensic expertise.  

Strategies for effective cloud investigation and response

Organizations are also exploring various strategies to optimize their cloud investigation and response capabilities:

Enhancing visibility and control:

  • Unified platforms: Implementing platforms that provide a unified view across multiple cloud environments can help organizations achieve better visibility and control. This consolidation reduces the complexity of managing disparate tools and data sources.
  • Improved integration: Ensuring that all security tools and platforms are seamlessly integrated is critical. This integration facilitates better data sharing and cohesive incident management.
  • Cloud-specific expertise: Investing in training programs to develop cloud-specific skills among existing staff, and recruiting experts with cloud security knowledge, can bridge the skill gap.
  • Continuous learning: Given the constantly evolving nature of cloud threats, continuous learning and adaptation are essential for maintaining effective security measures.

Leveraging automation and AI:

  • Automation solutions: Automation built for cloud environments can significantly speed up and simplify incident response. These solutions can handle repetitive tasks, allowing security teams to focus on more complex issues.
  • AI-powered analysis: AI can assist in rapidly analyzing incident data, identifying anomalies, and predicting potential threats. This proactive approach can help prevent incidents before they escalate.

Cloud investigation and response with Darktrace

Darktrace’s forensic acquisition and investigation capabilities help organizations address the complexities of cloud investigations and incident response with ease. The product seamlessly integrates with AWS, GCP, and Azure, consolidating data from multiple cloud environments into one unified platform. This integration enhances visibility and control, making it easier to manage and respond to incidents across diverse cloud infrastructures.

By leveraging machine learning and automation, Forensic Acquisition & Investigation accelerates the investigation process by quickly analyzing vast amounts of data, identifying patterns, and providing actionable insights. Automation reduces manual effort and response times, allowing your security team to focus on the most pressing issues.

Forensic Acquisition & Investigation helps you stay ahead of threats while also meeting regulatory requirements, so you can maintain a robust cloud security posture.

About the author
Calum Hall
Technical Content Researcher

Compliance

August 13, 2025

ISO/IEC 42001:2023: A milestone in AI standards at Darktrace


Darktrace announces ISO/IEC 42001 accreditation

Darktrace is thrilled to announce that we are one of the first cybersecurity companies to achieve ISO/IEC 42001 accreditation for the responsible management of AI systems. This isn’t just a milestone for us, it’s a sign of where the AI industry is headed. ISO/IEC 42001 is quickly emerging as the global benchmark for separating vendors who truly innovate with AI from those who simply market it.

For customers, it’s more than a badge, it’s assurance that a vendor’s AI is built responsibly, governed with rigor, and backed by the expertise of real AI teams, keeping your data secure while driving meaningful innovation.

This is a critical milestone for Darktrace as we continue to strengthen our offering, mature our governance and compliance frameworks for AI management, expand our research and development capabilities, and further our commitment to the development of responsible AI.  

It cements our commitment to providing secure, trustworthy and proactive cybersecurity solutions that our customers can rely on and complements our existing compliance framework, consisting of certifications for:

  • ISO/IEC 27001:2022 – Information Security Management System
  • ISO/IEC 27018:2019 – Protection of Personally Identifiable Information in Public Cloud Environments
  • Cyber Essentials – A UK Government-backed certification scheme for cybersecurity baselines

What is ISO/IEC 42001:2023?

In response to the unique challenges that AI poses, the International Organization for Standardization (ISO) introduced the ISO/IEC 42001:2023 framework in December 2023 to help organizations providing or utilizing AI-based products or services to demonstrate responsible development and use of AI systems. To achieve the accreditation, organizations are required to establish, implement, maintain, and continually improve their Artificial Intelligence Management System (AIMS).

ISO/IEC 42001:2023 is the first of its kind, providing valuable guidance for this rapidly changing field of technology. It addresses the unique ethical and technical challenges AI poses by setting out a structured way to manage risks such as transparency, accuracy and misuse without losing opportunities. By design, it balances the benefits of innovation against the necessity of a proper governance structure.

Being certified means the organization has met the requirements of the ISO/IEC 42001 standard, is conforming to all applicable regulatory and legislative requirements, and has implemented thorough processes to address AI risks and opportunities.

What is the ISO/IEC 42001:2023 accreditation process?

Darktrace partnered with BSI over an 11-month period to undertake the accreditation. The process involved developing and implementing a comprehensive AI management system that builds on our existing certified frameworks, addresses the risks and opportunities of using and developing cutting-edge AI systems, underpins our AI objectives and policies, and meets our regulatory and legal compliance requirements.

The AI Management System, which encompasses our people, processes, and products, was extensively audited by BSI against the requirements of the standard, covering the design of our AI, the use of AI within the organization, and our competencies, resources, and HR processes. It is an in-depth process that we’re thrilled to have undertaken, making us one of the first in our industry to achieve certification against a globally recognized AI standard.

The scope of Darktrace’s certification is particularly wide due to our unique Self-Learning approach to AI for cybersecurity, which uses multi-layered AI systems consisting of varied AI techniques to address distinct cybersecurity tasks. The certification encompasses production and provision of AI systems based on anomaly detection, clustering, classifiers, regressors, neural networks, proprietary and third-party large language models for proactive, detection, response and recovery cybersecurity applications. Darktrace additionally elected to adopt all Annex A controls present in the ISO/IEC 42001 standard.

What are the benefits of an AI Management System?

While AI is not a new or novel concept, the AI industry has accelerated at an unprecedented rate in the past few years, increasing operational efficiency, driving innovation, and automating cumbersome processes in the workplace.

At the same time, the data privacy, security and bias risks created by rapid innovation in AI have been well documented.

An AI Management System therefore enables organizations to confidently establish and adhere to governance that conforms to best practice and aligns with current and emerging regulatory standards.

Not only is this vital in a unique and rapidly evolving field like AI, it also helps organizations balance the drive for innovation against the risks the technology can present, helping them get the best out of their AI development and usage.

What are the key components of ISO/IEC 42001?

The Standard puts an emphasis on responsible AI development and use, requiring organizations to:

  • Establish and implement an AI Management System
  • Commit to the responsible development of AI against established, measurable objectives
  • Have in place a process to manage, monitor and adapt to risks in an effective manner
  • Commit to continuous improvement of their AI Management System

The AI Standard is similar in composition to other ISO standards, such as ISO/IEC 27001:2022, which many organizations may already be familiar with. Further information as to the structure of ISO/IEC 42001 can be found in Annex A.

What it means for Darktrace’s customers

Our certification against ISO/IEC 42001 demonstrates Darktrace’s commitment to delivering industry-leading Self-Learning AI in the name of cybersecurity resilience. Our stakeholders, customers and partners can be confident that Darktrace is responsibly, ethically and securely developing its AI systems, and is managing the use of AI in our day-to-day operations in a compliant, secure and ethical manner. It means:

  • You can trust our AI: We can demonstrate our AI is developed responsibly, in a transparent manner and in accordance with ethical rules. For more information and to learn about Darktrace's responsible AI in cybersecurity approach, please see here.
  • Our products are backed by innovation and integrity: Darktrace drives cutting edge AI innovation with ethical governance and customer trust at its core.
  • You are partnering with an organization which stays ahead of regulatory changes: In an evolving AI landscape, partnering with Darktrace helps you to stay prepared for emerging compliance and regulatory demands in your supply chain.

Achieving ISO/IEC 42001:2023 certification is not just a checkpoint for us. It represents our unwavering commitment to setting a higher standard for AI in cybersecurity. It reaffirms our leadership in building and implementing responsible AI and underscores our mission to continuously innovate and lead the way in the industry.

Why ISO/IEC 42001 matters for every AI vendor you trust

In a market where “AI” can mean anything from a true, production-grade system to a thin marketing layer, ISO/IEC 42001 acts as a critical differentiator. Vendors who have earned this certification aren’t just claiming they build responsible AI, they’ve proven it through an independent, rigorous audit of how they design, deploy, and manage their systems.

For you as a customer, that means:

You know their AI is real: Certified vendors have dedicated, skilled AI teams building and maintaining systems that meet measurable standards, not just repackaging off-the-shelf tools with an “AI” label.

Your data is safeguarded: Compliance with ISO/IEC 42001 includes stringent governance over data use, bias, transparency, and risk management.

You’re partnering with innovators: The certification process encourages continuous improvement, meaning your vendor is actively advancing AI capabilities while keeping ethics and security in focus.

In short, ISO/IEC 42001 is quickly becoming the global badge of credible AI development. If your vendor can’t show it, it’s worth asking how they manage AI risk, whether their governance is mature enough, and how they ensure innovation doesn’t outpace accountability.

Annex A: The Structure of ISO/IEC 42001

ISO/IEC 42001 sets out seven requirements to which an organization seeking to obtain or maintain its certification must adhere:

  • Context of the organization – organizations need to demonstrate an understanding of the internal and external factors influencing the organization’s AI Management System.
  • Leadership – senior leadership teams need to be committed to implementing AI governance within their organizations, providing direction and support across all aspects of the AI Management System lifecycle.
  • Planning – organizations need to put meaningful and manageable processes in place to identify risks and opportunities related to the AI Management System to achieve responsible AI objectives and mitigate identified risks.
  • Support – demonstrating a commitment to provisioning of adequate resources, information, competencies, awareness and communication for the AI Management System is a must to ensure that proper oversight and management of the system and its risks can be achieved.
  • Operation – establishing processes necessary to support the organization’s AI system development and usage, in conformance with the organization’s AI policy, objectives and requirements of the standard. Correcting the course of any deviations within good time is paramount.
  • Performance evaluation – the organization must be able to demonstrate that it has the capability and willingness to regularly monitor and evaluate the performance of the AI Management System effectively, including actioning any corrections and introducing new processes where relevant.
  • Improvement – relying on an existing process will not be sufficient to ensure compliance with the AI Standard. Organizations must commit to monitoring of existing systems and processes to ensure that the AI Management System is continually enhanced and improved.

To assist organizations in achieving the above, four annexes are included within the AI Standard, outlining the objectives and measures an organization may wish to implement, through normative controls, to address risks related to the design and operation of its AI Management System. While these annexes are not prescriptive, Darktrace has implemented their requirements to demonstrate the effectiveness of its AI Management System. We have placed a heavy emphasis on Annex A, which contains the normative controls that we, and other organizations seeking certification, can align with to address the objectives and measures, such as:

  • Enforcement of policies related to AI.
  • Setting responsibilities within the organization, and expectation of roles and responsibilities.
  • Creating processes and guidelines for escalating and handling AI concerns.
  • Making resources for AI systems available to users.
  • Assessing impacts of AI systems internally and externally.
  • Implementing processes across the entire AI system life cycle.
  • Understanding the treatment of data for AI systems.
  • Defining what information is, and should be, available for AI systems.
  • Considering and defining use cases for the AI systems.
  • Considering the impact of the AI System on third-party and customer relationships.

The remaining annexes provide guidance on implementing Annex A’s controls, objectives and primary risk sources of AI implementation, and considering how the AI Management System can be used across domains or sectors responsibly.


About the author
William Booth
Director of Cybersecurity Compliance