Unraveling Disinformation Tactics in Uncertain Times

Learn about the impact of disinformation and how Darktrace AI tackles this pressing issue.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Taisiia Garkava
Security Analyst
Written by
Justin Frank
Security Analyst
06 Jun 2022

Since the beginning of the internet, information sharing among users in cyberspace has grown at a near-exponential rate. Not long after, the emergence of social media opened access to public online platforms where internet users worldwide could share, discuss, promote, and consume information, whether by deliberate choice or not.

These platforms, now rich in users, enabled the effective sharing of a wide range of information and facilitated the emergence of online communities, forums, webpages, and blogs - where anyone could create content and share it with other users, leading to a near-infinite number of sources.

Public and private organisations have been able to leverage these platforms to communicate directly with the public, share relevant knowledge with their audiences, and expand users’ exposure to their organisation’s online presence – often by providing the users a direct link to websites and domains containing supplementary information on their organisations. However, there are some issues that organisations and users face when using such platforms.

Misinformation vs Disinformation

The ever-growing catalogue of informational sources and contributing users has introduced an old challenge with a more complex twist: distinguishing which information is truth and which is not. Two terms are used to describe inaccurate information – misinformation and disinformation.

Misinformation is “false information that is spread, regardless of whether there is intent to mislead”. For example, someone can read a compelling story on social media and share it with others without checking whether this story is, in fact, true.

During the COVID-19 pandemic, many people were rightfully concerned and anxious about their health, so they wanted to inform themselves as much as possible about the looming health risk. However, when they went looking for answers, they were so overloaded with varying opinions and ‘fake facts’ that it became increasingly difficult to distinguish fact from fiction.

At times, a social media post - or two - containing false information was shared by a friend, relative, or acquaintance who had good intentions in sharing what they had learned but was, unfortunately, misinformed.

Disinformation instead means “deliberately misleading or biased information; manipulated narrative or facts; propaganda”, which can be interpreted as the intentional spreading of misinformation.

The main difference between misinformation and disinformation is the presence of clear intent in the latter. For example, during political conflict – or even war – it is not uncommon for one, or both, opposing parties to broadcast news narratives to their own domestic audiences in a way that portrays them as either the righteous liberator or the unsuspecting victim.

Disinformation and Geopolitics

During turbulent times – such as (geo)political conflicts, national strife, digital revolutions, and pandemics – massive disinformation campaigns are arranged by nation-state actors, independent threat actors and other ideologically driven actors. Such campaigns target businesses, governments, and individuals alike.

One of the most common channels used to spread disinformation is social media. In essence, any piece of information shared on social media can spread rapidly to all kinds of audiences across the globe. This is amplified by maliciously motivated actors’ use of “bots” to accelerate the speed at which disinformation spreads.

A bot is a “computer program that operates as an agent for a user or another program to simulate human activity”. Bots are used to perform specific tasks repeatedly and autonomously, and a plethora of them are actively deployed to spread disinformation across the most popular social platforms, including Facebook, Twitter and Instagram.

Impact of Disinformation on Organizations

When organisations are targeted by disinformation campaigns, malicious actors aim to leverage the discord and uncertainty surrounding controversial topics. Malicious actors like online scammers exploit this induced discord by, e.g., creating phishing emails that are more compelling to recipients who are just trying to navigate between what is real and what is not.

For example, a campaign stating that data held by a big telecommunications company was breached can be used to craft emails in which scammers prompt recipients to check whether their personal data was also affected by this ‘breach’.

Regardless of whether this information is correct or not, the flux of news floating around the internet makes it increasingly difficult for a person to decide whether this information is accurate.

In parallel, the recipient may be experiencing anxiety and uncertainty regarding the breach – and the news about the breach – which often pushes them to react immediately to new information on the topic. Since scammers use domains that are carefully crafted to seem legitimate to an untrained eye – e.g., domains bearing an uncanny resemblance to the organisation’s official domain – recipients become even more susceptible to trusting dubious sources, and more likely to, e.g., click on a link attached to an email to verify whether their data was also leaked.

The Future of Disinformation

Organisations who are already dealing with the social strains created by disinformation campaigns are now facing an additional risk: their audiences may be more susceptible to phishing campaigns in times of widespread uncertainty. To make a convincing phishing campaign, malign actors often use compromised domains, or attempt to mimic legitimate domains through a method called ‘typo squatting’.

Typo squatting is the act of registering domains with intentionally misspelled names of popular or official web presences and often filling these with untrustworthy content – to give their victims a false sense of legitimacy surrounding the source.
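To illustrate why typo squatting is effective, a handful of simple edit operations already produce many convincing lookalikes. The sketch below – with purely illustrative domain names – generates omission, duplication, and transposition variants of a brand domain, the same candidates a defender might pre-register or monitor:

```python
def typo_variants(domain: str) -> set[str]:
    """Generate simple typosquat candidates for a second-level domain.

    Illustrative only: real typosquat generators also cover keyboard
    adjacency, homoglyphs, added hyphens, and alternate TLDs.
    """
    name, _, tld = domain.partition(".")
    variants = set()
    for i in range(len(name)):
        # Character omission: "example" -> "eample"
        variants.add(name[:i] + name[i + 1:] + "." + tld)
        # Character duplication: "example" -> "eexample"
        variants.add(name[:i] + name[i] + name[i:] + "." + tld)
        if i < len(name) - 1:
            # Adjacent transposition: "example" -> "xeample"
            swapped = name[:i] + name[i + 1] + name[i] + name[i + 2:]
            variants.add(swapped + "." + tld)
    variants.discard(domain)  # drop the legitimate name itself
    return variants

print(len(typo_variants("example.com")), "candidate lookalike domains")
```

Even a seven-letter name yields over a dozen plausible misspellings, which is why attackers can reliably find an unregistered variant to squat on.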

Once this false sense of legitimacy has been established between the attacker’s source and the victim, avoiding being misled is almost entirely up to the victim. Consequently, an organisation’s attack surface grows as fast as disinformation and false domains can be created and shared with its audience.

Combatting Disinformation with Attack Surface Management

Organisations trying to protect their audiences from being misled by false domains will need to gain better visibility of the domains associated with their brand. A brand-centric approach to discovering domains can shine light on:

  • The state of existing domains that are currently managed by your organisation – if they are being well maintained and properly secured.
  • The influx of ‘new’ domains that are attempting to impersonate your organisation’s brand.
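One simple way to approximate the second point is to score newly observed domain labels against the brand name using a similarity measure. The sketch below uses Python’s standard-library SequenceMatcher; the threshold and domain names are illustrative assumptions, not a production detection rule:

```python
from difflib import SequenceMatcher


def lookalike_score(candidate: str, brand: str) -> float:
    """Similarity ratio between a candidate domain label and a brand name.
    1.0 is identical; values near 1.0 but not equal warrant review."""
    return SequenceMatcher(None, candidate.lower(), brand.lower()).ratio()


def triage(domains: list[str], brand: str, threshold: float = 0.8) -> list[str]:
    """Flag newly observed domains that closely resemble the brand."""
    flagged = []
    for d in domains:
        label = d.split(".")[0]  # compare the second-level label only
        if lookalike_score(label, brand) >= threshold and label.lower() != brand.lower():
            flagged.append(d)
    return flagged


# Hypothetical brand and newly registered domains
print(triage(["examp1e.com", "weather.org"], "example"))  # → ['examp1e.com']
```

Note that pure similarity scoring misses compound impersonations such as “example-login.net”; a real brand-monitoring pipeline would also apply substring and homoglyph checks.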

Visibility of these types of domains, and of how your audience interacts with them, enables an organisation to be more vigilant and responsive to malign actors attempting to manipulate, hijack or impersonate its brand. Since an organisation’s brand pervades all sorts of publicly accessible assets – like domains – it has become critically important to include them in your organisation’s attack surface management regimen. Utilising a brand-centric approach to attack surface management will give your organisation a clearer view of your attack surface from a reputation risk perspective.

An attack surface management solution bolstered by such an approach will help your organisation’s security team to efficiently determine which domains – or other external facing digital assets – are posing a risk to your audience and reputation. It will help remove the repetitive work needed to identify these domains (and other assets), detect the risks associated with them, and help you manage any changes or actions required to protect both your audience and your organisation.



March 11, 2026

NetSupport RAT: How Legitimate Tools Can Be as Damaging as Malware


What is NetSupport Manager?

NetSupport Manager is a legitimate IT tool used by system administrators for remote support, monitoring, and management. In use since 1989, NetSupport Manager enables users to remotely access and navigate systems across different platforms and operating systems [1].

What is NetSupport RAT?

Although NetSupport Manager is a legitimate tool that can be used by IT and security professionals, there has been a rising number of cases in which it is abused to gain unauthorized access to victim systems. This misuse has become so prevalent that, in recent years, security researchers have begun referring to NetSupport as a Remote Access Trojan (RAT), a term typically used for malware that enables a threat actor to remotely access or control an infected device [2][3][4].

NetSupport RAT activity summary

The initial stages of a NetSupport RAT infection may vary depending on the source of the initial compromise. Using social engineering tactics such as ClickFix, threat actors attempt to trick users into inadvertently executing malicious PowerShell commands under the guise of resolving a non-existent issue or completing a fake CAPTCHA verification [5]. Other attack vectors such as phishing emails, fake browser updates, malicious websites, search engine optimization (SEO) poisoning, malvertising and drive-by downloads are also employed to direct users to fraudulent pages and fake reCAPTCHA verification checks, ultimately inducing them to execute malicious PowerShell commands [5][6][7]. This leads to the successful installation of NetSupport Manager on the compromised device, often placed in non-standard directories such as AppData, ProgramData, or Downloads [3][8].
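One consequence of this delivery chain is visible on disk: the RAT is typically dropped in user-writable directories rather than standard installation locations. A minimal heuristic along those lines might look like the following sketch – the directory list and file paths are illustrative, not an exhaustive rule:

```python
from pathlib import PureWindowsPath

# Locations reported for NetSupport RAT drops, per the reporting above:
# AppData, ProgramData, and Downloads rather than Program Files.
SUSPICIOUS_PARENTS = {"appdata", "programdata", "downloads"}


def suspicious_install_path(path: str) -> bool:
    """Heuristic: flag binaries running from non-standard directories."""
    parts = {p.lower() for p in PureWindowsPath(path).parts}
    return bool(parts & SUSPICIOUS_PARENTS)


print(suspicious_install_path(r"C:\Users\alice\AppData\Roaming\client32.exe"))      # True
print(suspicious_install_path(r"C:\Program Files\NetSupport Manager\client32.exe")) # False
```

A path check like this is trivially evaded on its own, which is why the detections discussed below lean on network behavior rather than file locations.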

Once installed, the adversary is able to gain remote access to the affected machine, monitor user activity, exfiltrate data, communicate with the command-and-control (C2) server, and maintain persistence [5]. External research has also highlighted that post-exploitation of NetSupport RAT has involved the additional download of malicious payloads [2][5].

Figure 1: Attack flow diagram highlighting key events across each phase of the attack [2][5].

Darktrace coverage

In November 2025, suspicious behavior indicative of the malicious abuse of NetSupport Manager was observed across multiple customer environments in Europe, the Middle East, and Africa (EMEA) and the Americas (AMS).

While open-source intelligence (OSINT) has reported that, in a recent campaign, a threat actor impersonated government entities to trick users in organizations in the Information Technology, Government and Financial Services sectors in Central Asia into downloading NetSupport Manager [8], approximately a third of Darktrace’s affected customers in November were based in the US while the rest were based in EMEA. This contrast underscores how widely NetSupport Manager is leveraged by threat actors and highlights its accessibility as an initial access tool.  

The Darktrace customers affected were in sectors including Information and Communication; Manufacturing; and Arts, Entertainment and Recreation.

The ClickFix social engineering tactic typically used to distribute the NetSupport RAT is known to target multiple industries, including Technology, Manufacturing and Energy sectors [9]. It also reflects activity observed in the campaign targeting Central Asia, where the Information Technology sector was among those affected [8].

The prevalence of affected Education customers highlights NetSupport’s marketing focus on the Education sector [10]. This suggests that threat actors are aware of this marketing strategy and have exploited the trust it creates to deploy NetSupport Manager and gain access to their targets’ systems. While the execution of the PowerShell commands that led to the installation of NetSupport Manager fell outside of Darktrace’s purview in the cases identified, Darktrace was still able to identify a pattern of devices making connections to multiple rare external domains and IP addresses associated with the NetSupport RAT, using a wide range of ports over the HTTP protocol. A full list of associated domains and IP addresses is provided in the Appendices of this blog.

Although OSINT identifies multiple malicious domains and IP addresses used as C2 servers, signature-based detection of NetSupport RAT indicators of compromise (IoCs) may miss broader activity, as new malicious websites linked to the RAT continue to appear.

Darktrace’s anomaly‑based approach allows it to establish a normal ‘pattern of life’ for each device on a network and identify when behavior deviates from this baseline, enabling the detection of unusual activity even when it does not match known IoCs or tactics, techniques and procedures (TTPs).
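As a rough sketch of this idea – not Darktrace’s actual implementation – even a per-device baseline of observed (hostname, user agent) pairs is enough to surface a first-ever connection of the kind described below:

```python
from collections import defaultdict


class PatternOfLife:
    """Toy per-device baseline: flags the first time a device contacts a
    hostname with a given user agent. A real system would also weigh
    endpoint rarity across the whole network, timing, and ports."""

    def __init__(self):
        self.seen = defaultdict(set)

    def observe(self, device: str, hostname: str, user_agent: str) -> bool:
        """Return True if this (hostname, user agent) pair is new for the device."""
        key = (hostname, user_agent)
        is_new = key not in self.seen[device]
        self.seen[device].add(key)
        return is_new


pol = PatternOfLife()
pol.observe("ws-01", "update.example.com", "Mozilla/5.0")           # builds the baseline
print(pol.observe("ws-01", "update.example.com", "Mozilla/5.0"))    # False: known behavior
print(pol.observe("ws-01", "rare-endpoint.test", "NetSupport Manager/1.3"))  # True: anomalous
```

The point of the baseline approach is that the second alert fires whether or not “rare-endpoint.test” appears on any IoC list.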

In one customer environment in late 2025, Darktrace / NETWORK detected a device initiating new connections to the rare external endpoint, thetavaluemetrics[.]com (74.91.125[.]57), along with the use of a previously unseen user agent, which it recognized as highly unusual for the network.

Figure 2: Darktrace’s detection of HTTP POST requests to a suspicious URI and new user agent usage.

Darktrace identified that the user agent present in connections to this endpoint was ‘NetSupport Manager/1.3’, initially suggesting legitimate NetSupport Manager activity. Subsequent investigation, however, revealed that the endpoint was in fact a malicious NetSupport RAT C2 endpoint [12]. Shortly after, Darktrace detected the same device performing HTTP POST requests to the URI /fakeurl.htm. This pattern of activity is consistent with OSINT reporting that details communication between compromised devices and NetSupport Connectivity Gateways functioning as C2 servers [11].

Conclusion

As seen not only with NetSupport Manager but with any legitimate or open‑source software used by IT and security professionals, the legitimacy of a tool does not prevent it from being abused by threat actors. Open‑source software, especially tools with free or trial versions such as NetSupport Manager, remains readily accessible for malicious use, including network compromise. In an age where remote work is still prevalent, validating any anomalous use of software and remote management tools is essential to reducing opportunities for unauthorized access.

Darktrace’s anomaly‑based detection enables security teams to identify malicious use of legitimate tools, even when clear signatures or indicators of compromise are absent, helping to prevent further impact on a network.


Credit to George Kim (Analyst Consulting Lead – AMS), Anna Gilbertson (Senior Cyber Analyst)

Edited by Ryan Traill (Analyst Content Lead)

Appendices

Darktrace Model Alerts

  • Compromise / Suspicious HTTP and Anomalous Activity
  • Compromise / New User Agent and POST
  • Device / New User Agent
  • Anomalous Connection / New User Agent to IP Without Hostname
  • Anomalous Connection / Posting HTTP to IP Without Hostname
  • Anomalous Connection / Multiple Failed Connections to Rare Endpoint
  • Anomalous Connection / Application Protocol on Uncommon Port
  • Anomalous Connection / Multiple HTTP POSTs to Rare Hostname
  • Compromise / Beaconing Activity To External Rare
  • Compromise / HTTP Beaconing to Rare Destination
  • Compromise / Agent Beacon (Medium Period)
  • Compromise / Agent Beacon (Long Period)
  • Compromise / Quick and Regular Windows HTTP Beaconing
  • Compromise / Sustained TCP Beaconing Activity To Rare Endpoint
  • Compromise / POST and Beacon to Rare External

Indicators of Compromise (IoCs)

Indicator                   Type      Description
/fakeurl.htm                URI       NetSupport RAT C2 URI
thetavaluemetrics[.]com     Hostname  NetSupport RAT C2 endpoint
westford-systems[.]icu      Hostname  NetSupport RAT C2 endpoint
holonisz[.]com              Hostname  NetSupport RAT C2 endpoint
heaveydutyl[.]com           Hostname  NetSupport RAT C2 endpoint
nsgatetest1[.]digital       Hostname  NetSupport RAT C2 endpoint
finalnovel[.]com            Hostname  NetSupport RAT C2 endpoint
217.91.235[.]17             IP        NetSupport RAT C2 endpoint
45.94.47[.]224              IP        NetSupport RAT C2 endpoint
74.91.125[.]57              IP        NetSupport RAT C2 endpoint
88.214.27[.]48              IP        NetSupport RAT C2 endpoint
104.21.40[.]75              IP        NetSupport RAT C2 endpoint
38.146.28[.]242             IP        NetSupport RAT C2 endpoint
185.39.19[.]233             IP        NetSupport RAT C2 endpoint
45.88.79[.]237              IP        NetSupport RAT C2 endpoint
141.98.11[.]224             IP        NetSupport RAT C2 endpoint
88.214.27[.]166             IP        NetSupport RAT C2 endpoint
107.158.128[.]84            IP        NetSupport RAT C2 endpoint
87.120.93[.]98              IP        Rhadamanthys C2 endpoint

References

  1. https://mspalliance.com/netsupport-debuts-netsupport-24-7/
  2. https://blogs.vmware.com/security/2023/11/netsupport-rat-the-rat-king-returns.html
  3. https://redcanary.com/threat-detection-report/threats/netsupport-manager/
  4. https://www.elastic.co/guide/en/security/8.19/netsupport-manager-execution-from-an-unusual-path.html
  5. https://rewterz.com/threat-advisory/netsupport-rat-delivered-through-spoofed-verification-pages-active-iocs
  6. https://thehackernews.com/2025/11/new-evalusion-clickfix-campaign.html
  7. https://corelight.com/blog/detecting-netsupport-manager-abuse
  8. https://thehackernews.com/2025/11/bloody-wolf-expands-java-based.html
  9. https://unit42.paloaltonetworks.com/preventing-clickfix-attack-vector
  10. https://www.netsupportsoftware.com/education-solutions
  11. https://www.esentire.com/blog/unpacking-netsupport-rat-loaders-delivered-via-clickfix
  12. https://threatfox.abuse.ch/browse/malware/win.netsupportmanager_rat/
  13. https://www.virustotal.com/gui/url/5fe6936a69c786c9ded9f31ed1242c601cd64e1d90cecd8a7bb03182c47906c2



March 5, 2026

Inside Cloud Compromise: Investigating Attacker Activity with Darktrace / Forensic Acquisition & Investigation


Investigating Cloud Attacks with Forensic Acquisition & Investigation

Darktrace / Forensic Acquisition & Investigation™ is the industry’s first truly automated forensic solution purpose-built for the cloud. This blog will demonstrate how an investigation can be carried out against a compromised cloud server in minutes, rather than hours or days.

The compromised server investigated in this case originates from Darktrace’s Cloudypots system, a global honeypot network designed to observe adversary activity in real time across a wide range of cloud services. Whenever an attacker successfully compromises one of these honeypots, a forensic copy of the virtual server's disk is preserved for later analysis. Using Forensic Acquisition & Investigation, analysts can then investigate further and obtain detailed insights into the compromise including complete attacker timelines and root cause analysis.

Forensic Acquisition & Investigation supports importing artifacts from a variety of sources, including EC2 instances, ECS, S3 buckets, and more. The Cloudypots system produces a raw disk image whenever an attack is detected and stores it in an S3 bucket. This allows the image to be directly imported into Forensic Acquisition & Investigation using the S3 bucket import option.

As Forensic Acquisition & Investigation runs cloud-natively, no additional configuration is required to add a specific S3 bucket. Analysts can browse and acquire forensic assets from any bucket that the configured IAM role is permitted to access. Operators can also add additional IAM credentials, including those from other cloud providers, to extend access across multiple cloud accounts and environments.

Figure 1: Forensic Acquisition & Investigation import screen.

Forensic Acquisition & Investigation then retrieves a copy of the file and automatically begins running the analysis pipeline on the artifact. This pipeline performs a full forensic analysis of the disk and builds a timeline of the activity that took place on the compromised asset. By leveraging Forensic Acquisition & Investigation’s cloud-native analysis system, this process condenses hours of manual work into just minutes.

Figure 2: Successful import of a forensic artifact and initiation of the analysis pipeline.

Once processing is complete, the preserved artifact is visible in the Evidence tab, along with a summary of key information obtained during analysis, such as the compromised asset’s hostname, operating system, cloud provider, and key event count.

Figure 3: The Evidence overview showing the acquired disk image.

Clicking on the “Key events” field in the listing opens the timeline view, automatically filtered to show system-generated alarms.

The timeline provides a chronological record of every event that occurred on the system, derived from multiple sources, including:

  • Parsed log files such as the systemd journal, audit logs, application specific logs, and others.
  • Parsed history files such as .bash_history, allowing executed commands to be shown on the timeline.
  • File-specific events, such as files being created, accessed, modified, or executables being run, etc.

This approach allows timestamped information and events from multiple sources to be aggregated and parsed into a single, concise view, greatly simplifying the data review process.
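The aggregation described above can be sketched as a simple merge of timestamped records from each parsed source into one sorted view. The event fields and source names below are illustrative, not the product’s actual schema:

```python
from datetime import datetime, timezone


def merge_timeline(*sources):
    """Merge timestamped events from several parsed sources (journal,
    shell history, filesystem metadata) into one chronological view."""
    events = [e for source in sources for e in source]
    return sorted(events, key=lambda e: e["timestamp"])


journal = [{"timestamp": datetime(2026, 2, 18, 9, 10, tzinfo=timezone.utc),
            "source": "systemd-journal", "message": "selenium session started"}]
history = [{"timestamp": datetime(2026, 2, 18, 9, 9, tzinfo=timezone.utc),
            "source": ".bash_history", "message": "curl -O payload.sh"}]
fs_meta = [{"timestamp": datetime(2026, 2, 18, 9, 11, tzinfo=timezone.utc),
            "source": "filesystem", "message": "payload.sh created"}]

for ev in merge_timeline(journal, history, fs_meta):
    print(ev["timestamp"].isoformat(), ev["source"], "-", ev["message"])
```

Interleaving sources this way is what lets an analyst see a shell command, the log line it produced, and the file it wrote as three adjacent rows.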

Alarms are created for specific timeline events that match either a built-in system rule curated by Darktrace’s Threat Research team or an operator-defined rule created at the project level. These alarms help quickly filter out noise and highlight events of interest, such as the creation of a file containing known malware, access to sensitive files like Amazon Web Services (AWS) credentials, suspicious arguments or commands, and more.

Figure 4: The timeline view filtered to alarm_severity: “1” OR alarm_severity: “3”, showing only events that matched an alarm rule.
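A minimal model of such rule matching is a list of patterns applied to each timeline event’s text. The rules below are hypothetical, chosen to mirror the examples given above (sensitive-file access and encoded command arguments):

```python
import re

# Hypothetical rule set; real rules would carry richer metadata and logic.
RULES = [
    {"name": "aws-credentials-access", "severity": 1,
     "pattern": re.compile(r"\.aws/credentials")},
    {"name": "base64-exec-argument", "severity": 1,
     "pattern": re.compile(r"import base64;exec\(base64")},
]


def match_alarms(event_message: str) -> list[str]:
    """Return the names of all rules whose pattern appears in the event."""
    return [r["name"] for r in RULES if r["pattern"].search(event_message)]


print(match_alarms("open('/root/.aws/credentials')"))                        # → ['aws-credentials-access']
print(match_alarms("python3 -c 'import base64;exec(base64.b64decode(p))'"))  # → ['base64-exec-argument']
```

Running every event through a rule list like this is cheap, which is why alarm filtering is an effective first pass over an otherwise overwhelming timeline.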

In this case, several alarms were generated for suspicious Base64 arguments being passed to Selenium. Examining the event data, it appears the attacker spawned a Selenium Grid session with the following payload:

"request.payload": "[Capabilities {browserName: chrome, goog:chromeOptions: {args: [-cimport base64;exec(base64...], binary: /usr/bin/python3, extensions: []}, pageLoadStrategy: normal}]"

This is a common attack vector for Selenium Grid. The chromeOptions object is intended to specify arguments for how Google Chrome should be launched; however, in this case the attacker has abused the binary field to execute the Python3 binary instead of Chrome. Combined with the option to specify command-line arguments, the attacker can use Python3’s -c option to execute arbitrary Python code, in this instance, decoding and executing a Base64 payload.
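From the detection side, the same abuse can be spotted by inspecting a session’s capabilities for a binary override or encoded command arguments. The following sketch simplifies the capabilities handling and uses illustrative names; it is not a complete Selenium Grid hardening check:

```python
BROWSER_BINARIES = {"chrome", "chromium", "google-chrome", "chrome.exe"}


def suspicious_capabilities(caps: dict) -> list[str]:
    """Flag Selenium session capabilities abusing goog:chromeOptions."""
    findings = []
    opts = caps.get("goog:chromeOptions", {})
    binary = opts.get("binary", "")
    # The 'binary' field should point at a browser, not an interpreter.
    if binary and binary.rsplit("/", 1)[-1] not in BROWSER_BINARIES:
        findings.append(f"non-browser binary: {binary}")
    for arg in opts.get("args", []):
        # Arguments carrying inline code or Base64 blobs are a strong tell.
        if "base64" in arg or arg.startswith("-c"):
            findings.append(f"suspicious argument: {arg[:40]}")
    return findings


caps = {"browserName": "chrome",
        "goog:chromeOptions": {"binary": "/usr/bin/python3",
                               "args": ["-cimport base64;exec(base64.b64decode(p))"]}}
print(suspicious_capabilities(caps))
```

Against the payload quoted above, both tells fire: the interpreter substituted for Chrome, and the inline Base64-decoding argument.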

Selenium’s logs truncate the Arguments field automatically, so an alternate method is required to retrieve the full payload. To do this, the search bar can be used to find all events that occurred around the same time as this flagged event.

Figure 5: Pivoting off the previous event by filtering the timeline to events within the same window using timestamp: [“2026-02-18T09:09:00Z” TO “2026-02-18T09:12:00Z”].

Scrolling through the search results, an entry from Java’s systemd journal can be identified. This log contains the full, unaltered payload. GCHQ’s CyberChef can then be used to decode the Base64 data into the attacker’s script, which will ultimately be executed.
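The CyberChef step can equally be scripted. The sketch below recovers a script from a Base64 blob; a harmless placeholder stands in for the real payload, which is not reproduced here:

```python
import base64

# Stand-in payload: in the real investigation this would be the Base64
# blob recovered from the Java systemd journal entry.
encoded = base64.b64encode(b"print('attacker script placeholder')").decode()


def decode_payload(blob: str) -> str:
    """Decode a Base64 payload recovered from a log entry for analyst review."""
    return base64.b64decode(blob).decode("utf-8", errors="replace")


print(decode_payload(encoded))  # → print('attacker script placeholder')
```

The decoded script should of course only ever be reviewed, never executed, outside an isolated analysis environment.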

About the author
Nathaniel Bill
Malware Research Engineer