March 12, 2024

Cloud Migration Strategies, Services and Risks

Explore the strategies, services, and risks associated with cloud migration, including the hybrid cloud model, its benefits, and the phases of a migration.
Written by
Adam Stevens
Director of Product, Cloud Security

What is cloud migration?

Cloud migration, in its simplest form, refers to the process of moving digital assets, such as data, applications, and IT resources, from on-premises infrastructure or legacy systems to cloud computing environments. There are various flavours of migration and utilization, but according to a survey conducted by IBM, one of the most common is the hybrid approach, with around 77% of businesses adopting a hybrid cloud model.

There are three key components of a hybrid cloud migration model:

  1. On-Premises (On-Prem): Physical location with some amount of hardware and networking, traditionally a data centre.
  2. Public Cloud: Third-party providers like AWS, Azure, and Google, who offer multiple services such as Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS).
  3. Private Cloud: A cloud computing environment where resources are isolated for one customer.

Why does cloud migration matter for enterprises?

Cloud adoption provides many benefits to businesses, including:

  1. Scalability: Cloud environments allow enterprises to scale resources up or down based on demand, enabling them to quickly adapt to changing business requirements.
  2. Flexibility and Agility: Cloud platforms provide greater flexibility and agility, enabling enterprises to innovate and deploy new services more rapidly compared to traditional on-premises infrastructure.
  3. Cost Efficiency: A pay-as-you-go model allows enterprises to reduce capital expenditure on hardware and infrastructure.
  4. Enhanced Security: Cloud service providers invest heavily in security measures to protect data and infrastructure, offering advanced security features and compliance certifications.

The combination of these benefits provides significant potential for businesses to innovate and move quickly, ultimately allowing them to be flexible and adapt to changing market conditions, customer demands, and technological advancements with greater agility and efficiency.

Cloud migration strategy

There are multiple migration strategies a business can adopt, including:

  1. Rehosting (Lift-and-shift): Quickly completed but may lead to increased costs for running workloads.
  2. Refactoring (Cloud Native): Designed specifically for the cloud but requires a steep learning curve and staff training on new processes.
  3. Hybrid Cloud: Mix of on-premises and public cloud use, offering flexibility and scalability while keeping data secure on-premises. This can introduce complexities in setup and management overhead and requires ensuring security and compliance in both environments.

It is important to note that each strategy has its trade-offs; there is no single, one-size-fits-all cloud migration strategy. Different businesses will prioritize and leverage different benefits. Some might prefer a rehosting strategy because it gets them migrated the fastest, but it typically ends up being the most costly, as a "lift-and-shift" does not take advantage of many of the key benefits the cloud has to offer. Conversely, refactoring is optimized for making the most of what cloud providers offer, but redesigning applications requires cloud expertise, and depending on the scale of the applications to be refactored it may not be the quickest way to move workloads from on-premises hosting to the cloud.

Phases of a cloud migration

At the highest level, there are four main steps in a successful migration:

  1. Discover: Identify and categorize IT assets, applications, and critical dependencies.
  2. Plan: Develop a detailed migration plan, including timelines, resource allocation, and risk management strategies.
  3. Migrate: Execute the migration plan, minimizing disruption to business operations.
  4. Optimize: Continuously optimize the cloud environment using automation, performance monitoring, and cost management tools to improve efficiency, performance, and scalability.

While it is natural to race towards the end goals of a cloud migration, most successful cloud migration strategies allocate the appropriate timelines to each phase.  

The “Discover” phase in particular is where most businesses can set themselves up for success. Building a complete picture of the assets, applications, services, and dependencies involved in a migration, however, is much easier said than done. Given the pace of change and how laborious an inventory is to build, manage, and maintain, mistakes made at this stage tend to propagate and amplify throughout the migration journey.
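To make the Discover phase slightly more concrete, the sketch below shows what the very first step of an inventory might look like for an AWS estate, using the boto3 SDK to enumerate EC2 instances and S3 buckets. The regions and tag names are assumptions for illustration, and a real discovery exercise would also have to map applications, services, and their dependencies across every environment.

# Minimal asset-discovery sketch, assuming an AWS estate and the boto3 SDK.
# A real "Discover" phase would also need application and dependency mapping.
import boto3

def inventory_ec2(region):
    """Return (instance_id, instance_type, tags) for every EC2 instance in a region."""
    ec2 = boto3.client("ec2", region_name=region)
    assets = []
    for reservation in ec2.describe_instances()["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            assets.append((instance["InstanceId"], instance["InstanceType"], tags))
    return assets

def inventory_s3():
    """Return the names of all S3 buckets visible to the caller."""
    s3 = boto3.client("s3")
    return [bucket["Name"] for bucket in s3.list_buckets()["Buckets"]]

if __name__ == "__main__":
    for region in ["us-east-1", "eu-west-1"]:  # assumed regions for the example
        for instance_id, instance_type, tags in inventory_ec2(region):
            print(region, instance_id, instance_type, tags.get("Owner", "untagged"))
    print("S3 buckets:", inventory_s3())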

Risks and challenges of cloud migration

Though cloud migration offers a wealth of benefits, it also introduces new risks that need to be accounted for and managed effectively. Security should be considered a fundamental part of the process, not an additional measure that can be ‘bolted’ on at the end.

Let’s consider the most popular migration strategy, the hybrid cloud. A recent report by the industry analyst group Forrester noted that Cloud Security Posture Management (CSPM) tools are just one facet of security, stating:

"No matter how good it is, using a CSPM solution alone will not provide you with full visibility, detection, and effective remediation capabilities for all threats. Your adversaries are also targeting operating systems, existing on-prem network infrastructure, and applications in their quest to steal valuable data".

Unpacking some of the risks here, it’s clear they fall into a range of categories, including:

  1. Security Concerns: Ensuring security across both on-premises and cloud environments, addressing potential misconfigurations and vulnerabilities.
  2. Contextual Understanding: Effective security requires a deep understanding of the organization's business processes and the context in which data and applications operate.
  3. Threat Detection and Response: Identifying and responding to threats in real-time requires advanced capabilities such as AI and anomaly detection.
  4. Platform Approach: Deploying integrated security solutions that provide end-to-end visibility, centralized management, and automated responses across hybrid infrastructure.

Since the cloud doesn’t operate in a vacuum, businesses will always have a myriad of 3rd party applications, users, endpoints, external services, and partners connecting and interacting with their cloud environments. From this perspective, being able to correlate and understand behaviors and activity both within the cloud and its surroundings becomes imperative.

It then follows that business-wide context is necessary. This has two distinct implications: the first is application- or workload-specific context (i.e. where the assets, services, and functions that were alerted on reside within the cloud application), and the second is wider business context. Given the volume of alerts that security practitioners need to manage, findings that lack the context required to fully understand and resolve an issue place additional strain on teams already facing a difficult challenge.

Conclusion

With that in mind, Darktrace’s approach to security, with its existing and new advances in Cloud Detection and Response, anomaly detection across SaaS applications, and the native ability to apply many AI techniques to understand business context across your dynamic cloud environment and on-premises infrastructure, provides the integrated building blocks for the 360-degree view required to detect and respond to threats before, during, and long after your enterprise migrates to the cloud.

References

IBM Transformation Index: State of Cloud. https://www.ibm.com/blog/hybrid-cloud-use-cases/

Forrester: The Top Trends Shaping Cloud Security Posture Management (CSPM) In 2024. https://www.forrester.com/report/the-top-trends-shaping-cloud-security-posture-management-cspm-in-2024/RES180379



April 24, 2025

The Importance of NDR in Resilient XDR


As threat actors become more adept at targeting and disabling EDR agents, relying solely on endpoint detection leaves critical blind spots.

Network detection and response (NDR) offers the visibility and resilience needed to catch what EDR can’t, especially in environments with unmanaged devices or advanced threats that evade local controls.

This blog explores how threat actors can disable or bypass EDR-based XDR solutions and demonstrates how Darktrace’s approach to NDR closes the resulting security gaps with Self-Learning AI that enables autonomous, real-time detection and response.

Threat actors see local security agents as targets

Recent research by security firms has highlighted ‘EDR killers’: tools that deliberately target EDR agents to disable or damage them. These include the known malicious tool EDRKillShifter, the open source EDRSilencer, EDRSandblast and variants of Terminator, and even the legitimate business application HRSword.

The attack surface of any endpoint agent is inevitably large, whether the software is challenged directly, by contesting its local visibility and access mechanisms, or by targeting the operating system it relies upon. Additionally, threat actors can readily obtain and analyze EDR tools, and because those tools are uniform across environments, an exploit proven in a lab setting will likely succeed elsewhere.

Sophos have performed deep research into the EDRKillShifter tool, which ESET have separately shown became accessible to multiple threat actor groups. Cisco Talos, as reported by The Register, have observed significant success rates when ransomware actors attempt to kill the EDR agent.

With the local EDR agent silently disabled or evaded, how will the threat be discovered?

What are the limitations of relying solely on EDR?

Cyber attackers will inevitably break through boundary defences, whether through innovation, trickery, or the exploitation of zero-days. Preventive measures can reduce but not completely stop this. Attackers will then want to expand beyond their initial access point to achieve persistence and to discover and reach high-value targets within the business. This is the primary domain of network activity monitoring and NDR, which also carries responsibility for securing the many devices that cannot run endpoint agents.

In a CISA Red Team assessment of a US critical national infrastructure (CNI) organization, the Red Team was able to maintain access over the course of months and achieve its target outcomes. The top lesson learned in the report was:

“The assessed organization had insufficient technical controls to prevent and detect malicious activity. The organization relied too heavily on host-based endpoint detection and response (EDR) solutions and did not implement sufficient network layer protections.”

This proves that partial, isolated viewpoints are not sufficient to track and analyze what is fundamentally a connected problem – and without the added visibility and detection capabilities of NDR, any downstream SIEM or MDR services still have nothing to work with.

Why is network detection & response (NDR) critical?

An effective NDR finds threats that disable, or cannot be seen by, local security agents. It generally operates out-of-band, acquiring data from infrastructure such as traffic mirrors on physical or virtual switches, which keeps the security system largely out of a threat actor’s reach at any stage of an attack.

An advanced NDR such as Darktrace / NETWORK is fully capable of detecting even high-end novel and unknown threats.

Detecting exploitation of Ivanti CS/PS with Darktrace / NETWORK

On January 9th 2025, two new vulnerabilities were disclosed in Ivanti Connect Secure and Policy Secure appliances that were under malicious exploitation. Perimeter devices, like Ivanti VPNs, are designed to keep threat actors out of a network, so it's quite serious when these devices are vulnerable.

An NDR solution is critical because it provides network-wide visibility for detecting lateral movement and other threats an EDR might miss, such as command-and-control (C2) sessions and data exfiltration, even when these are hidden within encrypted traffic.

Darktrace initially detected suspicious activity connected with the exploitation of CVE-2025-0282 on December 29, 2024 – 11 days before the public disclosure of the vulnerability. This early detection highlights the benefit of an anomaly-based approach to network detection.
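To illustrate the general idea behind anomaly-based network detection, the toy sketch below learns which destination ports each internal device normally uses and flags connections that fall outside that baseline. It is a deliberately simplified illustration of the concept, not a representation of Darktrace’s actual models.

# Toy illustration of anomaly-based detection over connection metadata.
# Deliberately simplified: real systems model far richer behavioral features.
from collections import defaultdict

class PortBaseline:
    def __init__(self):
        # For each internal device, the set of destination ports seen while learning
        self.baseline = defaultdict(set)

    def learn(self, connections):
        """connections: iterable of (device, dest_ip, dest_port) from a learning period."""
        for device, _dest_ip, dest_port in connections:
            self.baseline[device].add(dest_port)

    def score(self, device, dest_ip, dest_port):
        """Flag a connection to a port this device has never been seen using."""
        if dest_port not in self.baseline[device]:
            return f"ANOMALY: {device} -> {dest_ip}:{dest_port} (new port for this device)"
        return None

# Example usage with made-up connection records
model = PortBaseline()
model.learn([("vpn-appliance", "10.0.0.5", 443), ("vpn-appliance", "10.0.0.9", 53)])
print(model.score("vpn-appliance", "203.0.113.7", 4444))  # flagged: port never seen before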

Throughout the campaign and based on the network telemetry available to Darktrace, a wide range of malicious activities were identified, including the malicious use of administrative credentials, the download of suspicious files, and network scanning in the cases investigated.

Darktrace / NETWORK’s autonomous response capabilities played a critical role in containment by autonomously blocking suspicious connections and enforcing normal behavior patterns. At the same time, Darktrace Cyber AI Analyst™ automatically investigated and correlated the anomalous activity into cohesive incidents, revealing the full scope of the compromise.

This case highlights the importance of real-time, AI-driven network monitoring to detect and disrupt stealthy post-exploitation techniques targeting unmanaged or unprotected systems.

Unlocking adaptive protection for evolving cyber risks

Darktrace / NETWORK uses unique AI engines that learn what is normal behavior for an organization’s entire network, continuously analyzing, mapping and modeling every connection to create a full picture of your devices, identities, connections, and potential attack paths.

With its ability to uncover previously unknown threats as well as detect known threats using signatures and threat intelligence, Darktrace is an essential layer of the security stack. Darktrace has helped secure customers against attacks including 2024 threat actor campaigns against Fortinet’s FortiManager, Palo Alto firewall devices, and more.

Stay tuned for part II of this series which dives deeper into the differences between NDR types.

Credit to Nathaniel Jones, VP of Security & AI Strategy, Field CISO, and Ashanka Iddya, Senior Director of Product Marketing, for their contributions to this blog.

About the author
Nathaniel Jones
VP, Security & AI Strategy, Field CISO

April 22, 2025

Obfuscation Overdrive: Next-Gen Cryptojacking with Layers


Out of all the services honeypotted by Darktrace, Docker is the most commonly attacked, with new strains of malware emerging daily. This blog will analyze a novel malware campaign with a unique obfuscation technique and a new cryptojacking technique.

What is obfuscation?

Obfuscation is a common technique employed by threat actors to prevent signature-based detection of their code and to make analysis more difficult. This novel campaign uses an interesting technique to obfuscate its payload.

Docker image analysis

The attack begins with a request to launch a container from Docker Hub, specifically the kazutod/tene:ten image. Using Docker Hub’s layer viewer, an analyst can quickly identify what the container is designed to do. In this case, the container is designed to run the ten.py script, which is built into the image.

Figure 1: Docker Hub Image Layers, referencing the script ten.py.

To gain more information on the Python file, Docker’s built-in tooling can be used to download the image (docker pull kazutod/tene:ten) and then save it into a format that is easier to work with (docker image save kazutod/tene:ten -o tene.tar). It can then be extracted as a regular tar file for further investigation.
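The same collection step can also be scripted. The sketch below uses the Docker SDK for Python (docker-py) to pull the image and write it out as a tarball, assuming the SDK is installed and a Docker daemon is reachable; the docker CLI commands above are equivalent.

# Pull the image and export it to a tar archive using the Docker SDK for Python.
# Assumes docker-py is installed and a local Docker daemon is available.
import docker

client = docker.from_env()
image = client.images.pull("kazutod/tene", tag="ten")

with open("tene.tar", "wb") as f:
    for chunk in image.save(named=True):  # stream the image as a tar archive
        f.write(chunk)

print("Saved image to tene.tar")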

Figure 2: Extraction of the resulting tar file.

The Docker image uses the OCI format, which is a little different to a regular file system. Instead of having a static folder of files, the image consists of layers. Indeed, when running the file command over the sha256 directory, each layer is shown as a tar file, along with a JSON metadata file.

Figure 3: Output of the file command over the sha256 directory.

As the individual layers are not needed for this analysis, a single command can be used to extract all of them into a single directory, recreating what the container file system would look like:

find blobs/sha256 -type f -exec sh -c 'file "{}" | grep -q "tar archive" && tar -xf "{}" -C root_dir' \;

Figure 4: Result of running the command above.

The find command can then be used to quickly locate where the ten.py script is.

find root_dir -name ten.py

root_dir/app/ten.py

Figure 5: Details of the above ten.py script.

This may look complicated at first glance, but after breaking it down it is fairly simple. The script defines a lambda function (effectively a variable that contains executable code) which reverses its input, Base64-decodes the result, and then zlib-decompresses that. The script runs the lambda function over the embedded Base64 string and passes the output to exec, which runs the decoded string as Python code.

To help illustrate this, the code can be cleaned up to this simplified function:

import base64
import zlib

def decode(blob):
    # Undo the three obfuscation steps: reverse, Base64-decode, zlib-decompress
    reversed_blob = blob[::-1]
    decoded = base64.b64decode(reversed_blob)
    return zlib.decompress(decoded)

decoded_string = decode(the_big_text_blob)  # the_big_text_blob is the embedded Base64 string
exec(decoded_string)  # run the decoded string as Python code
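
For completeness, the obfuscation itself is just the same three steps applied in reverse order. A quick way to sanity-check the decode logic above is to rebuild a payload and confirm it round-trips; the payload string here is a stand-in for testing, not the attacker’s code.

# Inverse of decode(): zlib-compress, Base64-encode, then reverse the string.
# Uses the decode() function defined above for a round-trip check.
import base64
import zlib

def encode(payload):
    compressed = zlib.compress(payload.encode())
    encoded = base64.b64encode(compressed).decode()
    return encoded[::-1]

sample = 'print("hello from the payload")'  # stand-in payload for testing
assert decode(encode(sample)) == sample.encode()  # round-trip succeeds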

This can then be set up as a recipe in Cyberchef, an online tool for data manipulation, to decode it.

Figure 6: Use of Cyberchef to decode the ten.py script.

The decoded payload calls the decode function again and passes the output to exec. Copying the new payload back into the input shows that it does this yet another time. Instead of copy-pasting the output into the input all day, a quick script can be used to automate the unwrapping.

The script below uses the decode function from earlier to decode the Base64 data, then uses some simple string manipulation to isolate the next payload. The script repeats this over and over until something interesting happens.

# Decode the initial base64
decoded = decode(initial)

# Remove the first 11 characters and last 3
# so we just have the next base64 string
clamped = decoded[11:-3]

for i in range(1, 100):
    # Decode the new payload
    decoded = decode(clamped)

    # Print it with the current step so we
    # can see what’s going on
    print(f"Step {i}")
    print(decoded)

    # Fetch the next base64 string from the
    # output, so the next loop iteration will
    # decode it
    clamped = decoded[11:-3]

Figure 7: Result of the 63rd iteration of this script.

After 63 iterations, the script returns actual code, accompanied by an error from the decode function because a stopping condition was never defined. It is not clear what motivated the attacker to apply so many layers of obfuscation, as one round of obfuscation versus several likely makes no meaningful difference to bypassing signature analysis. It is possible this is an attempt to stop analysts or other hackers from reverse engineering the code. However, it took a matter of minutes to thwart their efforts.
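
For reference, a slightly more robust version of the loop simply treats a failed decode as the stopping condition rather than relying on an unhandled exception. It assumes the same decode() helper and initial blob as the script above.

# Variant of the unwrapping loop with an explicit stopping condition:
# stop once the remaining data no longer decodes as another layer.
import binascii
import zlib

layer = decode(initial)
step = 1
while True:
    try:
        next_layer = decode(layer[11:-3])
    except (binascii.Error, zlib.error):
        break  # 'layer' now holds the final, de-obfuscated code
    layer = next_layer
    step += 1

print(f"De-obfuscated after {step} layers:")
print(layer.decode())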

Cryptojacking 2.0?

Figure 8: Cleaned up version of the de-obfuscated code.

The cleaned-up code indicates that the malware attempts to set up a connection to teneo[.]pro, which appears to belong to a Web3 startup company.

Teneo appears to be a legitimate company, with Crunchbase reporting that they have raised USD 3 million as part of their seed round [1]. Their service allows users to join a decentralized network, to “make sure their data benefits you” [2]. Practically, their node functions as a distributed social media scraper. In exchange for doing so, users are rewarded with “Teneo Points”, which are a private crypto token.

The malware script simply connects to the websocket and sends keep-alive pings in order to gain more points from Teneo; it does not do any actual scraping. Based on the website, most of the rewards are gated behind the number of heartbeats performed, which is likely why this works [2].
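
To make that behaviour concrete, the sketch below shows roughly what such a ping-only client looks like, written here against the Python websockets library. The URL, message format, and interval are placeholders for illustration; this is not the attacker’s script or Teneo’s actual protocol.

# Illustrative sketch of a websocket client that only sends keep-alive pings.
# Endpoint, message format, and interval are placeholders, not the real protocol.
import asyncio
import json
import websockets

WS_URL = "wss://example.invalid/socket"  # placeholder endpoint
PING_INTERVAL = 15                       # seconds between keep-alives

async def ping_forever():
    async with websockets.connect(WS_URL) as ws:
        while True:
            await ws.send(json.dumps({"type": "PING"}))  # placeholder message
            await asyncio.sleep(PING_INTERVAL)           # no scraping ever happens

if __name__ == "__main__":
    asyncio.run(ping_forever())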

The attacker’s Docker Hub profile suggests this sort of attack is their modus operandi: the most recent container runs an instance of the Nexus network client, a project that performs distributed zero-knowledge compute tasks in exchange for cryptocurrency.

Traditional cryptojacking attacks typically rely on XMRig to mine cryptocurrency directly; however, as XMRig is widely detected, attackers are shifting to alternative methods of generating crypto. Whether this is more profitable remains to be seen. There is currently no easy way to determine the attackers’ earnings due to the more “closed” nature of the private tokens: translating a user ID to a wallet address does not appear to be possible, and there is limited public information about the tokens themselves. For example, the Teneo token is listed as “preview only” on CoinGecko, with no price information available.

Conclusion

This blog explores an example of Python obfuscation and how to unravel it. Obfuscation remains a ubiquitous technique employed by the majority of malware to aid in detection and defense evasion, and being able to de-obfuscate code is an important skill for analysts to possess.

We have also seen this new avenue for cryptomining being deployed, demonstrating that attackers’ techniques are still evolving, even in tried and tested fields. The illegitimate use of legitimate tools to obtain rewards is an increasingly common vector. For example, as has been previously documented, 9hits has been used maliciously to earn rewards for attackers in a similar fashion.

Docker remains a highly targeted service, and system administrators need to take steps to ensure it is secure. In general, Docker should never be exposed to the wider internet unless absolutely necessary, and if it is necessary both authentication and firewalling should be employed to ensure only authorized users are able to access the service. Attacks happen every minute, and even leaving the service open for a short period of time may result in a serious compromise.

References

1. https://www.crunchbase.com/funding_round/teneo-protocol-seed--a8ff2ad4

2. https://teneo.pro/

About the author
Nate Bill
Threat Researcher