Blog / June 6, 2022

Unraveling Disinformation Tactics in Uncertain Times

Learn about the impact of disinformation and how Darktrace AI tackles this pressing issue.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Taisiia Garkava
Security Analyst
Written by
Justin Frank
Security Analyst

Since the beginning of the internet, we have seen a near-exponential surge in information sharing among users in cyberspace. Not long after, the emergence of social media ushered in access to public online platforms where internet users worldwide could share, discuss, promote, and consume information, whether by deliberate choice or not.

These platforms, now rich in users, have enabled the effective sharing of a wide range of information and have facilitated the emergence of online communities, forums, webpages, and blogs - where anyone can create content and share it with other users, leading to a near-infinite number of sources.

Public and private organisations have been able to leverage these platforms to communicate directly with the public, share relevant knowledge with their audiences, and expand users’ exposure to their online presence – often by providing users with a direct link to websites and domains containing supplementary information about their organisations. However, there are some issues that organisations and users face when using such platforms.

Misinformation vs Disinformation

The ever-growing catalogue of informational sources and contributing users has introduced an old challenge with a more complex twist: distinguishing which information is truth and which is not. Two terms are used to describe inaccurate information – misinformation and disinformation.

Misinformation is “false information that is spread, regardless of whether there is intent to mislead”. For example, someone can read a compelling story on social media and share it with others without checking whether the story is, in fact, true.

During the COVID-19 pandemic, many people were rightfully concerned and anxious about their health, so they wanted to inform themselves as much as possible about the looming health risk. However, when they went looking for answers, they were so overloaded with varying opinions and ‘fake facts’ that it became increasingly difficult to distinguish fact from fiction.

Subsequently, a social media post - or two - containing false information was at times shared by a friend, relative, or acquaintance who had good intentions in passing on what they had learned but was, unfortunately, misinformed.

Disinformation, by contrast, means “deliberately misleading or biased information; manipulated narrative or facts; propaganda”, which can be interpreted as the intentional spreading of misinformation.

The main difference between misinformation and disinformation is the presence of clear intent in the latter. For example, during political conflicts – or even wars – it is not uncommon for one, or both, opposing parties to broadcast news narratives to their own domestic audiences in a way that portrays them as either the righteous liberator or the unsuspecting victim.

Disinformation and Geopolitics

During turbulent times – such as (geo)political conflicts, national strife, digital revolutions, and pandemics – massive disinformation campaigns are frequently orchestrated by nation-state actors, independent threat actors, and other ideologically driven actors. Such campaigns target businesses, governments, and individuals alike.

One of the most common channels used to spread disinformation is social media. In essence, any piece of information shared on social media can spread rapidly to all kinds of audiences across the globe. This is amplified by maliciously motivated actors’ use of “bots” to accelerate the speed at which disinformation spreads.

A bot is a “computer program that operates as an agent for a user or other program to simulate a human activity”. It is used to perform specific tasks repeatedly and autonomously. A plethora of these bots are actively used to spread disinformation across the most popular social platforms, including Facebook, Twitter and Instagram.

Impact of Disinformation on Organizations

When organisations are targeted by disinformation campaigns, malicious actors aim to leverage discord and uncertainty around topics shrouded in controversy. Online scammers, for instance, exploit this induced discord by crafting phishing emails that are more compelling to recipients who are simply trying to work out what is real and what is not.

For example, a campaign claiming that data held by a large telecommunications company has been breached can be used to craft emails in which scammers prompt recipients to check whether their personal data was also affected by this ‘breach’.

Regardless of whether the claim is correct, the flood of news circulating online makes it increasingly difficult for a person to decide whether any given piece of information is accurate.

In parallel, the recipient may be experiencing anxiety and uncertainty about the breach – and the news surrounding it – which often pushes them to react immediately to new information on the topic. Since scammers use domains carefully crafted to look legitimate to an untrained eye – for example, domains bearing an uncanny resemblance to the organisation’s official domain – recipients become even more susceptible to trusting dubious sources. This increases the likelihood that recipients of phishing emails will, for instance, click on a link in an email to verify whether their data was also leaked.

The Future of Disinformation

Organisations that are already dealing with the social strains created by disinformation campaigns now face an additional risk: their audiences may be more susceptible to phishing campaigns in times of widespread uncertainty. To make a phishing campaign convincing, malign actors often use compromised domains, or attempt to mimic legitimate domains through a method called ‘typo squatting’.

Typo squatting is the act of registering domains with intentionally misspelled names of popular or official web presences and often filling these with untrustworthy content – to give their victims a false sense of legitimacy surrounding the source.
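
To illustrate how such look-alike domains might be surfaced in practice, the short Python sketch below compares newly observed domains against a brand domain using a simple edit-distance check. The domain names and threshold are hypothetical, and this is only a minimal illustration of the idea rather than a description of any particular product's detection logic.

```python
# Minimal sketch: flag newly observed domains that closely resemble a brand domain.
# The brand name, candidate domains, and threshold below are hypothetical.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def looks_like_typosquat(candidate: str, brand: str, max_distance: int = 2) -> bool:
    """Flag domains that are near-identical to the brand but not an exact match."""
    return candidate != brand and edit_distance(candidate, brand) <= max_distance

if __name__ == "__main__":
    brand_domain = "examplebank.com"          # hypothetical brand
    newly_observed = ["examp1ebank.com", "exammplebank.com", "unrelated-site.org"]
    for domain in newly_observed:
        if looks_like_typosquat(domain, brand_domain):
            print(f"Possible typosquat of {brand_domain}: {domain}")
```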

Once this false sense of legitimacy has been established – the attacker’s source on one side and the victim’s willingness to trust it on the other – it is almost entirely up to the victim to avoid being misled. Consequently, an organisation’s attack surface grows as fast as disinformation and false domains can be created and shared with its audience.

Combatting Disinformation with Attack Surface Management

Organisations trying to protect their audiences from being misled by false domains will need to gain better visibility of domains associated with their brand. A brand-centric approach to discovering domains can shed light on:

  • The state of existing domains currently managed by your organisation – whether they are well maintained and properly secured (a basic check is sketched after this list).
  • The influx of ‘new’ domains that are attempting to impersonate your organisation’s brand.
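
As a basic example of the first point, even a short script can confirm whether a managed domain still presents a valid, unexpired TLS certificate. The sketch below uses only Python's standard library; the domain is a placeholder, and a dedicated attack surface management tool would of course go far beyond this.

```python
# Minimal sketch: check whether an organisation-managed domain still serves a valid,
# soon-to-expire TLS certificate. The domain below is a placeholder.
import socket
import ssl
import time

def days_until_cert_expiry(host: str, port: int = 443, timeout: float = 5.0) -> float:
    """Connect over TLS and return the number of days before the certificate expires."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires_at = ssl.cert_time_to_seconds(cert["notAfter"])
    return (expires_at - time.time()) / 86400

if __name__ == "__main__":
    for domain in ["www.example.com"]:           # replace with your managed domains
        try:
            days_left = days_until_cert_expiry(domain)
            status = "OK" if days_left > 30 else "renew soon"
            print(f"{domain}: certificate expires in {days_left:.0f} days ({status})")
        except (OSError, ssl.SSLError) as err:
            print(f"{domain}: TLS check failed ({err})")
```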

Visibility of these types of domains, and of how your audience interacts with them, enables an organisation to be more vigilant and responsive to malign actors attempting to manipulate, hijack or impersonate its brand. Since an organisation’s brand pervades all sorts of publicly accessible assets – like domains – it has become increasingly important to include them in your organisation’s attack surface management regimen. A brand-centric approach to attack surface management gives your organisation a clearer view of its attack surface from a reputational risk perspective.

An attack surface management solution bolstered by such an approach will help your organisation’s security team to efficiently determine which domains – or other external facing digital assets – are posing a risk to your audience and reputation. It will help remove the repetitive work needed to identify these domains (and other assets), detect the risks associated with them, and help you manage any changes or actions required to protect both your audience and your organisation.


Blog / OT / September 4, 2025

Rethinking Signature-Based Detection for Power Utility Cybersecurity


Lessons learned from OT cyber attacks

Over the past decade, some of the most disruptive attacks on power utilities have shown the limits of signature-based detection and reshaped how defenders think about OT security. Each incident reinforced that signatures are too narrow and reactive to serve as the foundation of defense.

2015: BlackEnergy 3 in Ukraine

According to CISA, on December 23, 2015, Ukrainian power companies experienced unscheduled power outages affecting a large number of customers — public reports indicate that the BlackEnergy malware was discovered on the companies’ computer networks.

2016: Industroyer/CrashOverride

CISA describes CrashOverride malware as an “extensible platform” reported to have been used against critical infrastructure in Ukraine in 2016. It was capable of targeting industrial control systems using protocols such as IEC‑101, IEC‑104, and IEC‑61850, and fundamentally abused legitimate control system functionality to deliver destructive effects. CISA emphasizes that “traditional methods of detection may not be sufficient to detect infections prior to the malware execution” and recommends behavioral analysis techniques to identify precursor activity to CrashOverride.

2017: TRITON Malware

The U.S. Department of the Treasury reports that the Triton malware, also known as TRISIS or HatMan, was “designed specifically to target and manipulate industrial safety systems” in a petrochemical facility in the Middle East. The malware was engineered to control Safety Instrumented System (SIS) controllers responsible for emergency shutdown procedures. During the attack, several SIS controllers entered a failed‑safe state, which prevented the malware from fully executing.

The broader lessons

These events revealed three enduring truths:

  • Signatures have diminishing returns: BlackEnergy showed that while signatures can eventually identify adapted IT malware, they arrive too late to prevent OT disruption.
  • Behavioral monitoring is essential: CrashOverride demonstrated that adversaries abuse legitimate industrial protocols, making behavioral and anomaly detection more effective than traditional signature methods.
  • Critical safety systems are now targets: TRITON revealed that attackers are willing to compromise safety instrumented systems, elevating risks from operational disruption to potential physical harm.

The natural progression for utilities is clear. Static, file-based defenses are too fragile for the realities of OT.  

These incidents showed that behavioral analytics and anomaly detection are far more effective at identifying suspicious activity across industrial systems, regardless of whether the malicious code has ever been seen before.
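
To make the contrast with signatures concrete, the sketch below shows the simplest possible form of behavioral baselining: each device's activity is compared against its own history, and large deviations are flagged regardless of whether any known-bad indicator is present. The device names, counts, and threshold are invented for illustration, and real products model far richer behavior than a single daily count.

```python
# Simplified illustration of baseline-and-deviation ("anomaly") detection for OT devices.
# Historical counts and device names are hypothetical; real systems model many more features.
from statistics import mean, pstdev

def is_anomalous(history: list[int], observed: int, z_threshold: float = 3.0) -> bool:
    """Flag an observation that deviates strongly from the device's own baseline."""
    mu = mean(history)
    sigma = pstdev(history) or 1.0   # avoid division by zero for perfectly flat baselines
    return abs(observed - mu) / sigma > z_threshold

# Daily outbound connection counts learned per device (hypothetical baseline data).
baselines = {
    "rtu-substation-12": [14, 15, 13, 16, 14, 15],
    "hmi-control-room":  [120, 118, 125, 119, 121, 122],
}

todays_counts = {"rtu-substation-12": 210, "hmi-control-room": 120}

for device, observed in todays_counts.items():
    if is_anomalous(baselines[device], observed):
        print(f"{device}: {observed} connections deviates from its learned baseline")
```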

Strategic risks of overreliance on signatures

  • False sense of security: Believing signatures will block advanced threats can delay investment in more effective detection methods.
  • Resource drain: Constantly updating, tuning, and maintaining signature libraries consumes valuable staff resources without proportional benefit.
  • Adversary advantage: Nation-state and advanced actors understand the reactive nature of signature defenses and design attacks to circumvent them from the start.

Recommended Alternatives (with real-world OT examples)

Figure 1: Alternative strategies for detecting cyber attacks in OT

Behavioral and anomaly detection

Rather than relying on signatures, focusing on behavior enables detection of threats that have never been seen before, even when they come from trusted-looking devices.

Real-world insight:

In one OT setting, a vendor inadvertently left a Raspberry Pi on a customer’s ICS network. After deployment, Darktrace’s system flagged anomalies in its HTTPS and DNS communication despite the absence of any known indicators of compromise. The alerting included sustained increases in SSL connections, agent‑beacon activity, and DNS connections to unusual endpoints, revealing a possible supply‑chain or insider risk invisible to static tools.
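
One of the signals behind agent-beacon activity is timing regularity: automated check-ins tend to fire at near-fixed intervals, while human-driven traffic is bursty. The sketch below illustrates that idea with a coefficient-of-variation check over connection timestamps; the timestamps and threshold are hypothetical, and this is not a description of Darktrace's actual detection logic.

```python
# Illustrative heuristic: near-constant gaps between outbound connections can indicate beaconing.
# Timestamps (seconds) and the threshold are hypothetical values for demonstration only.
from statistics import mean, pstdev

def looks_like_beaconing(timestamps: list[float], max_cv: float = 0.1, min_events: int = 5) -> bool:
    """Low coefficient of variation across inter-connection gaps suggests automated check-ins."""
    if len(timestamps) < min_events:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(gaps)
    return avg > 0 and (pstdev(gaps) / avg) < max_cv

# Connections every ~60 seconds with little jitter: characteristic of a beacon.
beacon_times = [0, 60.2, 119.8, 180.1, 240.0, 299.9]
print(looks_like_beaconing(beacon_times))  # True
```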

Darktrace’s AI-driven threat detection aligns with the zero-trust principle of assuming the risk of a breach. By leveraging AI that learns an organization’s specific patterns of life, Darktrace provides a tailored security approach ideal for organizations with complex supply chains.

Threat intelligence sharing & building toward zero-trust philosophy

Frameworks such as MITRE ATT&CK for ICS provide a common language to map activity against known adversary tactics, helping teams prioritize detections and response strategies. Similarly, information-sharing communities like E-ISAC and regional ISACs give utilities visibility into the latest tactics, techniques, and procedures (TTPs) observed across the sector. This level of intel can help shift the focus away from chasing individual signatures and toward building resilience against how adversaries actually operate.

Real-world insight:

Darktrace’s AI embodies zero‑trust by assuming breach potential and continually evaluating the behavior of all devices, even those deemed trusted. This approach allowed the detection of an anomalous SharePoint phishing attempt coming from a trusted supplier, intercepted by spotting subtle patterns rather than predefined rules. If a cloud account is compromised, unauthorized access to sensitive information could lead to extortion and lateral movement into mission-critical systems for more damaging attacks on critical national infrastructure.

This reinforces the need to monitor behavioral deviations across the supply chain, not just known bad artifacts.

Defense-in-Depth with OT context & unified visibility

OT environments demand visibility that spans IT, OT, and IoT layers, supported by risk-based prioritization.

Real-world insight:

Darktrace / OT offers unified AI‑led investigations that break down silos between IT and OT. Smaller teams can see unusual outbound traffic or beaconing from unknown OT devices, swiftly investigate across domains, and get clear visibility into device behavior, even when they lack specialized OT security expertise.  

Moreover, by integrating contextual risk scoring – which considers real-world exploitability, device criticality, firewall misconfiguration, and legacy hardware exposure – utilities can focus on the vulnerabilities that genuinely threaten uptime and safety, rather than being overwhelmed by CVE noise.
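
As a rough illustration of contextual risk scoring, the sketch below blends vulnerability severity with operational context into a single priority score. The weights, attributes, and assets are invented for demonstration purposes and do not represent Darktrace's scoring model; the point is simply that a lower-severity CVE on a critical, exposed, legacy device can outrank a critical CVE on a well-contained workstation.

```python
# Hypothetical contextual risk score: weights, attributes, and devices are illustrative only.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    cvss: float                # base severity of the worst known vulnerability (0-10)
    exploit_available: bool    # public exploit code observed in the wild
    criticality: float         # operational importance of the device (0-1)
    internet_exposed: bool     # reachable through a firewall misconfiguration
    legacy_hardware: bool      # unsupported or unpatchable platform

def contextual_risk(asset: Asset) -> float:
    """Blend vulnerability severity with real-world context to prioritise remediation."""
    score = asset.cvss / 10.0
    score *= 1.5 if asset.exploit_available else 1.0
    score *= 0.5 + asset.criticality          # scale by how much uptime/safety depends on it
    score *= 1.4 if asset.internet_exposed else 1.0
    score *= 1.2 if asset.legacy_hardware else 1.0
    return round(score, 2)

assets = [
    Asset("engineering-workstation", cvss=9.8, exploit_available=False,
          criticality=0.4, internet_exposed=False, legacy_hardware=False),
    Asset("substation-gateway", cvss=7.5, exploit_available=True,
          criticality=0.9, internet_exposed=True, legacy_hardware=True),
]

for a in sorted(assets, key=contextual_risk, reverse=True):
    print(f"{a.name}: {contextual_risk(a)}")
```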

Regulatory alignment and positive direction

Industry regulations are beginning to reflect this evolution in strategy. NERC CIP-015 requires internal network monitoring that detects anomalies, and the standard references anomalies 15 times. In contrast, signature-based detection is not mentioned once.

This regulatory direction shows that compliance bodies understand the limitations of static defenses and are encouraging utilities to invest in anomaly-based monitoring and analytics. Utilities that adopt these approaches will not only be strengthening their resilience but also positioning themselves for regulatory compliance and operational success.

Conclusion

Signature-based detection retains utility for common IT malware, but it cannot serve as the backbone of security for power utilities. History has shown that major OT attacks are rarely stopped by signatures, since each campaign targets specific systems with customized tools. The most dangerous adversaries, from insiders to nation-states, actively design their operations to avoid detection by signature-based tools.

A more effective strategy prioritizes behavioral analytics, anomaly detection, and community-driven intelligence sharing. These approaches not only catch known threats, but also uncover the subtle anomalies and novel attack techniques that characterize tomorrow’s incidents.

About the author
Daniel Simonds
Director of Operational Technology

Blog / Identity / August 21, 2025

From VPS to Phishing: How Darktrace Uncovered SaaS Hijacks through Virtual Infrastructure Abuse


What is a VPS and how are they abused?

A Virtual Private Server (VPS) is a virtualized server that provides dedicated resources and control to users on a shared physical device. VPS providers, long used by developers and businesses, are increasingly misused by threat actors to launch stealthy, scalable attacks. While not a novel tactic, VPS abuse has seen an increase in Software-as-a-Service (SaaS)-targeted campaigns, as it enables attackers to bypass geolocation-based defenses by mimicking local traffic, evade IP reputation checks with clean, newly provisioned infrastructure, and blend into legitimate behavior [3].

VPS providers like Hyonix and Host Universal offer rapid setup and minimal open-source intelligence (OSINT) footprint, making detection difficult [1][2]. These services are not only fast to deploy but also affordable, making them attractive to attackers seeking anonymous, low-cost infrastructure for scalable campaigns. Such attacks tend to be targeted and persistent, often timed to coincide with legitimate user activity, a tactic that renders traditional security tools largely ineffective.

Darktrace’s investigation into Hyonix VPS abuse

In May 2025, Darktrace’s Threat Research team investigated a series of incidents across its customer base involving VPS-associated infrastructure. The investigation began with a fleet-wide review of alerts linked to Hyonix (ASN AS931), revealing a noticeable spike in anomalous behavior from this ASN in March 2025. The alerts included brute-force attempts, anomalous logins, and phishing campaign-related inbox rule creation.

Darktrace identified suspicious activity across multiple customer environments around this time, but two networks stood out. In one instance, two internal devices exhibited mirrored patterns of compromise, including logins from rare endpoints, manipulation of inbox rules, and the deletion of emails likely used in phishing attacks. Darktrace traced the activity back to IP addresses associated with Hyonix, suggesting a deliberate use of VPS infrastructure to facilitate the attack.

On the second customer network, the attack was marked by coordinated logins from rare IPs linked to multiple VPS providers, including Hyonix. This was followed by the creation of inbox rules with obfuscated names and attempts to modify account recovery settings, indicating a broader campaign that leveraged shared infrastructure and techniques.

Darktrace’s Autonomous Response capability was not enabled in either customer environment during these attacks. As a result, no automated containment actions were triggered, allowing the attack to escalate without interruption. Had Autonomous Response been active, Darktrace would have automatically blocked connections from the unusual VPS endpoints upon detection, effectively halting the compromise in its early stages.

Case 1

Figure 1: Timeline of activity for Case 1 - Unusual VPS logins and deletion of phishing emails.

Initial Intrusion

On May 19, 2025, Darktrace observed two internal devices in one customer environment initiating logins from rare external IPs associated with VPS providers, namely Hyonix and Host Universal (via Proton VPN). Darktrace recognized that these logins had occurred within minutes of legitimate user activity from distant geolocations, indicating improbable travel and reinforcing the likelihood of session hijacking. This triggered the Darktrace / IDENTITY model “Login From Rare Endpoint While User Is Active”, which highlights potential credential misuse when simultaneous logins occur from both familiar and rare sources.
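
To see why such logins stand out, consider a simple improbable-travel check: if two logins from the same account imply a travel speed no airliner could achieve, the session is worth investigating. The sketch below is a hypothetical illustration using the haversine formula; the coordinates, timestamps, and speed threshold are invented and this is not the logic of the Darktrace model referenced above.

```python
# Hypothetical "improbable travel" check: flags login pairs whose implied speed is unrealistic.
# Coordinates, times, and the speed threshold are invented for illustration.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2) -> float:
    """Great-circle distance between two points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def improbable_travel(login_a, login_b, max_kmh: float = 1000.0) -> bool:
    """True if the two logins would require travelling faster than max_kmh."""
    (lat1, lon1, t1), (lat2, lon2, t2) = login_a, login_b
    hours = abs(t2 - t1) / 3600 or 1 / 3600          # guard against identical timestamps
    return haversine_km(lat1, lon1, lat2, lon2) / hours > max_kmh

# Legitimate user in New York at t=0 seconds, VPS login from Singapore five minutes later.
print(improbable_travel((40.71, -74.01, 0), (1.35, 103.82, 300)))  # True
```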

Shortly after these logins, Darktrace observed the threat actor deleting emails referring to invoice documents from the user’s “Sent Items” folder, suggesting an attempt to hide phishing emails that had been sent from the now-compromised account. Though not directly observed, initial access in this case was likely achieved through a similar phishing or account hijacking method.

Figure 2: Darktrace / IDENTITY model "Login From Rare Endpoint While User Is Active", which detects simultaneous logins from both a common and a rare source to highlight potential credential misuse.

Case 2

Figure 3: Timeline of activity for Case 2 – Coordinated inbox rule creation and outbound phishing campaign.

In the second customer environment, Darktrace observed similar login activity originating from Hyonix, as well as other VPS providers like Mevspace and Hivelocity. Multiple users logged in from rare endpoints, with Multi-Factor Authentication (MFA) satisfied via token claims, further indicating session hijacking.

Establishing control and maintaining persistence

Following the initial access, Darktrace observed a series of suspicious SaaS activities, including the creation of new email rules. These rules were given minimal or obfuscated names, a tactic often used by attackers to avoid drawing attention during casual mailbox reviews by the SaaS account owner or automated audits. By keeping rule names vague or generic, attackers reduce the likelihood of detection while quietly redirecting or deleting incoming emails to maintain access and conceal their activity.

One of the newly created inbox rules targeted emails with subject lines referencing a document shared by a VIP at the customer’s organization. These emails would be automatically deleted, suggesting an attempt to conceal malicious mailbox activity from legitimate users.
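
A rough way to hunt for this tactic is to score mailbox rules on how hidden and how destructive they are, for example short or punctuation-only names combined with delete or forward actions. The sketch below applies such a heuristic to generic rule records; the records and fields are invented for illustration and are not drawn from any specific mail API or from Darktrace's models.

```python
# Hypothetical heuristic for spotting attacker-style inbox rules.
# The rule records below are invented examples, not output from a real mail API.

def suspicious_rule(rule: dict) -> bool:
    """Flag rules whose names look obfuscated and whose actions hide or reroute mail."""
    name = rule.get("name", "").strip()
    obfuscated_name = len(name) <= 2 or not any(ch.isalnum() for ch in name)
    hides_mail = rule.get("delete", False) or rule.get("move_to_deleted", False)
    reroutes_mail = bool(rule.get("forward_to"))
    return obfuscated_name and (hides_mail or reroutes_mail)

rules = [
    {"name": ".",  "delete": True,  "forward_to": None},                  # likely malicious
    {"name": "Invoices to folder", "delete": False, "forward_to": None},  # routine user rule
    {"name": "..", "delete": False, "forward_to": "attacker@example.com"},
]

for r in rules:
    if suspicious_rule(r):
        print(f"Review inbox rule named {r['name']!r}")
```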

Mirrored activity across environments

While no direct lateral movement was observed, mirrored activity across multiple user devices suggested a coordinated campaign. Notably, three users had near-identical inbox rules created, while another user had a different rule related to fake invoices, reinforcing the likelihood of a shared infrastructure and technique set.

Privilege escalation and broader impact

On one account, Darktrace observed “User registered security info” activity shortly after anomalous logins, indicating attempts to modify account recovery settings. On another, the user reset passwords or updated security information from rare external IPs. In both cases, the attacker’s actions—including creating inbox rules, deleting emails, and maintaining login persistence—suggested an intent to remain undetected while potentially setting the stage for data exfiltration or spam distribution.

On a separate account, outbound spam was observed, featuring generic finance-related subject lines such as 'INV#. EMITTANCE-1'. At the network level, Darktrace / NETWORK detected DNS requests from a device to a suspicious domain, which began prior to the observed email compromise. The domain showed signs of domain fluxing, a tactic involving frequent changes in IP resolution, commonly used by threat actors to maintain resilient infrastructure and evade static blocklists. Around the same time, Darktrace detected another device writing a file named 'SplashtopStreamer.exe', associated with the remote access tool Splashtop, to a domain controller. While typically used in IT support scenarios, its presence here may suggest that the attacker leveraged it to establish persistent remote access or facilitate lateral movement within the customer’s network.
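
Domain fluxing can be approximated in DNS logs by counting how many distinct IP addresses a domain resolves to within a short window. The sketch below illustrates the idea with invented records and an arbitrary threshold; note that legitimate domains behind content delivery networks also rotate IPs, so a hit here is an investigative lead rather than a verdict.

```python
# Hypothetical DNS-log heuristic: many distinct resolved IPs in a short window can indicate fluxing.
# The records and threshold are invented; CDNs can look similar, so treat hits as leads only.
from collections import defaultdict

def fluxing_domains(dns_records, min_distinct_ips: int = 8):
    """Return domains observed resolving to an unusually large set of IPs."""
    resolved = defaultdict(set)
    for domain, ip in dns_records:
        resolved[domain].add(ip)
    return [d for d, ips in resolved.items() if len(ips) >= min_distinct_ips]

records = [("suspicious-domain.example", f"203.0.113.{i}") for i in range(1, 12)]
records += [("intranet.example", "10.0.0.5")] * 20

print(fluxing_domains(records))  # ['suspicious-domain.example']
```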

Conclusion

This investigation highlights the growing abuse of VPS infrastructure in SaaS compromise campaigns. Threat actors are increasingly leveraging these affordable and anonymous hosting services to hijack accounts, launch phishing attacks, and manipulate mailbox configurations, often bypassing traditional security controls.

Despite the stealthy nature of this campaign, Darktrace detected the malicious activity early in the kill chain through its Self-Learning AI. By continuously learning what is normal for each user and device, Darktrace surfaced subtle anomalies, such as rare login sources, inbox rule manipulation, and concurrent session activity, that likely evade traditional static, rule-based systems.

As attackers continue to exploit trusted infrastructure and mimic legitimate user behavior, organizations should adopt behavioral-based detection and response strategies. Proactively monitoring for indicators such as improbable travel, unusual login sources, and mailbox rule changes, and responding swiftly with autonomous actions, is critical to staying ahead of evolving threats.

Credit to Rajendra Rushanth (Cyber Analyst), Jen Beckett (Cyber Analyst) and Ryan Traill (Analyst Content Lead)

References

1: https://cybersecuritynews.com/threat-actors-leveraging-vps-hosting-providers/

2: https://threatfox.abuse.ch/asn/931/

3: https://www.cyfirma.com/research/vps-exploitation-by-threat-actors/

Appendices

Darktrace Model Detections

•   SaaS / Compromise / Unusual Login, Sent Mail, Deleted Sent

•   SaaS / Compromise / Suspicious Login and Mass Email Deletes

•   SaaS / Resource / Mass Email Deletes from Rare Location

•   SaaS / Compromise / Unusual Login and New Email Rule

•   SaaS / Compliance / Anomalous New Email Rule

•   SaaS / Resource / Possible Email Spam Activity

•   SaaS / Unusual Activity / Multiple Unusual SaaS Activities

•   SaaS / Unusual Activity / Multiple Unusual External Sources For SaaS Credential

•   SaaS / Access / Unusual External Source for SaaS Credential Use

•   SaaS / Compromise / High Priority Login From Rare Endpoint

•   SaaS / Compromise / Login From Rare Endpoint While User Is Active

List of Indicators of Compromise (IoCs)

Format: IoC – Type – Description

•   38.240.42[.]160 – IP – Associated with Hyonix ASN (AS931)

•   103.75.11[.]134 – IP – Associated with Host Universal / Proton VPN

•   162.241.121[.]156 – IP – Rare IP associated with phishing

•   194.49.68[.]244 – IP – Associated with Hyonix ASN

•   193.32.248[.]242 – IP – Used in suspicious login activity / Mullvad VPN

•   50.229.155[.]2 – IP – Rare login IP / AS 7922 (COMCAST-7922)

•   104.168.194[.]248 – IP – Rare login IP / AS 54290 (HOSTWINDS)

•   38.255.57[.]212 – IP – Hyonix IP used during MFA activity

•   103.131.131[.]44 – IP – Hyonix IP used in login and MFA activity

•   178.173.244[.]27 – IP – Hyonix IP

•   91.223.3[.]147 – IP – Mevspace Poland, used in multiple logins

•   2a02:748:4000:18:0:1:170b[:]2524 – IPv6 – Hivelocity VPS, used in multiple logins and MFA activity

•   51.36.233[.]224 – IP – Saudi ASN, used in suspicious login

•   103.211.53[.]84 – IP – Excitel Broadband India, used in security info update

MITRE ATT&CK Mapping

Tactic – Technique – Sub-Technique

•   Initial Access – T1566 – Phishing

                       T1566.001 – Spearphishing Attachment

•   Execution – T1078 – Valid Accounts

•   Persistence – T1098 – Account Manipulation

                       T1098.002 – Exchange Email Rules

•   Command and Control – T1071 – Application Layer Protocol

                       T1071.001 – Web Protocols

•   Defense Evasion – T1036 – Masquerading

•   Defense Evasion – T1562 – Impair Defenses

                       T1562.001 – Disable or Modify Tools

•   Credential Access – T1556 – Modify Authentication Process

                       T1556.004 – MFA Bypass

•   Discovery – T1087 – Account Discovery

•   Impact – T1531 – Account Access Removal

The content provided in this blog is published by Darktrace for general informational purposes only and reflects our understanding of cybersecurity topics, trends, incidents, and developments at the time of publication. While we strive to ensure accuracy and relevance, the information is provided “as is” without any representations or warranties, express or implied. Darktrace makes no guarantees regarding the completeness, accuracy, reliability, or timeliness of any information presented and expressly disclaims all warranties.

Nothing in this blog constitutes legal, technical, or professional advice, and readers should consult qualified professionals before acting on any information contained herein. Any references to third-party organizations, technologies, threat actors, or incidents are for informational purposes only and do not imply affiliation, endorsement, or recommendation.

Darktrace, its affiliates, employees, or agents shall not be held liable for any loss, damage, or harm arising from the use of or reliance on the information in this blog.

The cybersecurity landscape evolves rapidly, and blog content may become outdated or superseded. We reserve the right to update, modify, or remove any content without notice.

About the author
Rajendra Rushanth
Cyber Analyst