
Understanding Email Security & the Psychology of Trust

We explore how psychological research into the nature of trust relates to our relationship with technology - and what that means for AI solutions.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Hanah Darley
Director of Threat Research
July 18, 2023

When security teams discuss the possibility of phishing attacks targeting their organization, often the first reaction is to assume it is inevitable because of the users. Users are typically referenced in cyber security conversations as organizations’ greatest weaknesses, cited as the causes of many grave cyber-attacks because they click links, open attachments, or allow multi-factor authentication bypass without verifying the purpose.

While for many, the weakness of the user may feel like a fact rather than a theory, there is significant evidence to suggest that users are psychologically incapable of protecting themselves from exploitation by phishing attacks, with or without regular cyber awareness training. The psychology of trust and the nature of human reliance on technology make it very difficult, if not impossible, to prepare users for the exploitation of that trust in technology.

This Darktrace long read will highlight principles of psychological and sociological research regarding the nature of trust, the elements of trust that relate to technology, and how the human brain is wired to rely on implicit trust. These principles all point to the same outcome: humans cannot be relied upon to identify phishing. Email security driven by machine augmentation, such as AI anomaly detection, is the clearest solution to that challenge.

What is the psychology of trust?

Psychological and sociological theories on trust largely centre around the importance of dependence and a two-party system: the trustor and the trustee. Most research has studied the impacts of trust decisions on interpersonal relationships, and the characteristics which make those relationships more or less likely to succeed. In behavioural terms, the elements most frequently referenced in trust decisions are emotional characteristics such as benevolence, integrity, competence, and predictability.1

Most behavioural evaluations of trust decisions survey why someone chooses to trust another person, how they make that decision, and how quickly they arrive at their choice. However, these micro-choices about trust need to be understood in the context that trust is essential to human survival. Trust decisions are rooted in many of the same survival instincts which require the brain to categorize information and determine possible dangers. More broadly, successful trust relationships are essential in maintaining the fabric of human society, critical to every element of human life.

Trust can be compared to dark matter (Rotenberg, 2018): the extensive but difficult-to-observe material that binds planets and earthly matter. In the same way, trust is an integral but often silent component of human life, connecting people and enabling social functioning.2

Defining implicit and routine trust

As briefly mentioned earlier, dependence is an essential element of the trusting relationship. A routine of trust, based on maintaining rather than repeatedly re-establishing trust, becomes implicit within everyday life. For example, speaking to a friend about personal issues and life developments is often a subconscious reaction to the events occurring, rather than an explicit choice to trust said friend each time one has new experiences.

Active and passive levels of cognition are important to recognize in decision-making, such as trust choices. Decision-making is often an active cognitive process requiring significant cognitive resources. However, many decisions occur passively, especially if they are not new choices, e.g. habits or routines. The brain’s focus turns to immediate tasks while relegating habitual choices to subconscious thought processes: passive cognition. Passive cognition leaves the brain open to impacts from inattentional blindness, wherein the individual may be abstractly aware of the choice but it is not the focus of their thought processes or actively acknowledged as a decision. These levels of cognition are mostly referenced as “attention” within the brain’s cognition and processing.3

This idea is essentially a concept of implicit trust, meaning trust which is occurring as background thought processes rather than active decision-making. This implicit trust extends to multiple areas of human life, including interpersonal relationships, but also habitual choice and lifestyle. When combined with the dependence on people and services, this implicit trust creates a haze of cognition where trust is implied and assumed, rather than actively chosen across a myriad of scenarios.

Trust and technology

As researchers at the University of Cambridge highlight in their research into trust and technology, ‘In a fundamental sense, all technology depends on trust.’4 The same implicit trust systems which allow us to navigate social interactions by subconsciously choosing to trust also govern our interactions with technology. The implied trust in technology and services is perhaps most easily explained by a metaphor.

Most people have a favourite brand of soda. People will routinely purchase that soda and drink it without testing it for chemicals or bacteria and without reading reviews to ensure the companies that produce it have not changed their quality standards. This is a helpful, representative example of routine trust: the trust choice is made implicitly through habitual action, without the person actively considering the ramifications of continuing to use and trust the product.

The principle of dependence is especially important in discussions of trust and technology, because the modern human is entirely reliant on technology and so has no way to avoid trusting it.5 This is particularly true in workplace scenarios: employees are given a mandatory set of technologies, from programs to devices and services, which they must interact with on a daily basis. Over time, the same implicit trust that would form between two people forms between the user and the technology. The key difference between interpersonal trust and technological trust is that deception is often much more difficult to identify.

The implicit trust in workplace technology

To provide a bit of workplace-specific context, organizations rely on technology providers for the operation (and often the security) of their devices. The organizations also rely on the employees (users) to use those technologies within the accepted policies and operational guidelines. The employees rely on the organization to determine which products and services are safe or unsafe.

Within this context, implicit trust is occurring at every layer of the organization and its technological holdings, but often the trust choice is only made annually by a small security team rather than continually evaluated. Systems and programs remain in place for years and are used because “that’s the way it’s always been done.” Within that context, the exploitation of that trust by threat actors impersonating or compromising those technologies or services is extremely difficult for a human to identify.

For example, many organizations utilize email communications to promote software updates for employees. Typically, these emails prompt employees to update software directly from the vendor or from public marketplaces, such as the App Store on Mac or the Microsoft Store on Windows. If that kind of email were to be impersonated, spoofing an update and including a malicious link or attachment, there would be no reason for the employee to question that email, given the implicit trust reinforced through habitual use of that service and program.

Inattentional blindness: How the brain ignores change

Users are psychologically predisposed to trust routinely used technologies and services, with most of those trust choices continuing subconsciously. Changes to these technologies would often be subject to inattentional blindness, a psychological phenomenon wherein the brain overwrites sensory information with what it expects to see rather than what is actually perceived.

A great example of inattentional blindness6 is the following experiment, which asks individuals to count the number of times a ball is passed between multiple people. While that is occurring, something else is going on in the background which, statistically, those tested will not see. The shocking part of this experiment comes afterwards, when the researchers reveal that the event occurring in the background, unseen by participants, was a person in a gorilla suit moving back and forth through the group. This highlights how significant details can be overlooked by the brain and “overwritten” with other sensory information. When applied to technology, inattentional blindness and implicit trust make spotting changes in behaviour, or indicators that a trusted technology or service has been compromised, nearly impossible for most humans.

With all this in mind, how can you prepare users to correctly anticipate or identify a violation of that trust when their brains subconsciously make trust decisions and unintentionally ignore cues to suggest a change in behaviour? The short answer is, it’s difficult, if not impossible.

How threats exploit our implicit trust in technology

Most cyber threats are built around the idea of exploiting the implicit trust humans place in technology. Whether it’s techniques like “living off the land”, wherein programs normally associated with expected activities are leveraged to execute an attack, or through more overt psychological manipulation like phishing campaigns or scams, many cyber threats are predicated on the exploitation of human trust, rather than simply avoiding technological safeguards and building backdoors into programs.

In the case of phishing, it is easy to identify the attempts to leverage users’ trust in technology and services. The most common example of this would be spoofing, which is one of the most common tactics observed by Darktrace/Email. Spoofing mimics a trusted user or service and can be accomplished through a variety of mechanisms, from creating a fake domain that mirrors a trusted link to creating an email account which appears to belong to a Human Resources, Internal Technology or Security service.

In the case of a falsified internal service, often dubbed a “Fake Support Spoof”, the user is exploited by following instructions from an accepted organizational authority figure and service provider, whose instructions would normally be followed. These cases are often difficult to spot when studying the sender’s address or text of the email alone, and are made even more difficult to detect if an account from one of those services is compromised and the sender’s address is legitimate and expected for correspondence. Especially given the context of implicit trust, detecting deception in these cases would be extremely difficult.
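To make the spoofing mechanics above concrete, the sketch below shows one simple way lookalike sender domains can be screened against an organization’s trusted domains. It is an illustrative heuristic only, not Darktrace’s method, and the trusted-domain list and threshold are hypothetical.

```python
# Illustrative heuristic (not Darktrace's method): flag sender domains that closely
# resemble, but do not exactly match, domains the organization already trusts.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"example.com", "microsoft.com", "darktrace.com"}  # hypothetical allow-list


def closest_trusted(domain: str) -> tuple[str, float]:
    """Return the most similar trusted domain and the similarity ratio (0-1)."""
    best = max(TRUSTED_DOMAINS, key=lambda t: SequenceMatcher(None, domain, t).ratio())
    return best, SequenceMatcher(None, domain, best).ratio()


def is_lookalike(sender_domain: str, threshold: float = 0.8) -> bool:
    if sender_domain in TRUSTED_DOMAINS:
        return False  # exact match to a trusted domain: not a lookalike
    _, score = closest_trusted(sender_domain)
    return score >= threshold  # near-miss of a trusted domain, e.g. "rnicrosoft.com"


print(is_lookalike("rnicrosoft.com"))  # True: visually mimics microsoft.com
print(is_lookalike("unrelated.org"))   # False
```

A check like this only covers one narrow spoofing technique; it is included here purely to show how a trust assumption (the sender’s domain looks familiar) can be turned into an explicit, machine-checkable signal.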

How email security solutions can solve the problem of implicit trust

How can an organization prepare for this exploitation? How can it mitigate threats which are designed to exploit implicit trust? The answer is by using email security solutions that leverage behavioural analysis via anomaly detection, rather than traditional email gateways.

Expecting humans to identify the exploitation of their own trust is a high-risk low-reward endeavour, especially when it takes different forms, affects different users or portions of the organization differently, and doesn’t always have obvious red flags to identify it as suspicious. Cue email security using anomaly detection as the key answer to this evolving problem.

Anomaly detection enabled by machine learning and artificial intelligence (AI) removes the inattentional blindness that plagues human users and security teams, and enables the identification of departures from the norm, even those designed to mimic expected activity. Using anomaly detection mitigates multiple human cognitive biases which might prevent teams from identifying evolving threats. Anomaly detection does mean that security teams may be alerted to benign anomalous activity, but it also ensures that threats, no matter how novel or cleverly packaged, are identified and raised to the human security team.

Utilizing machine learning, especially unsupervised machine learning, mimics the benefits of human decision making and enables the identification of patterns and categorization of information without the framing and biases which allow trust to be leveraged and exploited.
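As a toy illustration of that idea, the sketch below applies an off-the-shelf unsupervised model (scikit-learn’s IsolationForest) to a handful of invented email metadata features. The features, values, and contamination rate are hypothetical and far simpler than a production email security model.

```python
# Toy illustration of unsupervised anomaly detection over email metadata.
# Features, values, and contamination rate are hypothetical; real models are far richer.
import numpy as np
from sklearn.ensemble import IsolationForest

# Columns: sender rarity (0-1), links in body, attachments, hours since sender last seen
history = np.array([
    [0.10, 1, 0, 4], [0.20, 0, 1, 8], [0.10, 2, 0, 2], [0.15, 1, 0, 6],
    [0.05, 0, 0, 12], [0.20, 1, 1, 5], [0.10, 1, 0, 3], [0.12, 0, 0, 7],
])

model = IsolationForest(contamination=0.1, random_state=0).fit(history)

# A message from a never-before-seen sender with an unusual profile
new_email = np.array([[0.95, 3, 0, 720]])
print(model.predict(new_email))        # [-1] means the message is flagged as anomalous
print(model.score_samples(new_email))  # lower score = more anomalous
```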

For example, say a cleverly written email is sent from an address which appears to be a Microsoft affiliate, suggesting to the user that they need to patch their software due to the discovery of a new vulnerability. The sender’s address appears legitimate and there are news stories circulating on major media providers that a new Microsoft vulnerability is causing organizations a lot of problems. The link, if clicked, forwards the user to a login page to verify their Microsoft credentials before downloading the new version of the software. After logging in, the program is available for download, and only requires a few minutes to install. Whether this email was created by a service like ChatGPT (generative AI) or written by a person, if acted upon it would give the threat actor(s) access to the user’s credentials as well as activate malware on the device and possibly the broader network if the software is downloaded.

If we are relying on users to identify this as unusual, there are many cues reinforcing their implicit trust in Microsoft services that encourage them to comply with the email rather than question it. Comparatively, anomaly detection-driven email security would flag the unusualness of the source: the email would likely not be coming from a Microsoft-owned IP address, and the sender would be unusual for the organization, which does not normally receive mail from that address. The language might indicate solicitation, an attempt to entice the user to act, and the link could be flagged because it contains a hidden redirect or tailored information which the user cannot see, whether hidden beneath text like “Click Here” or behind link shortening. All of this information is present and discoverable in the phishing email, but often invisible to human users due to the trust decisions made months or even years ago for known products and services.
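The signals described above can also be expressed as simple checks. The sketch below extracts a few of them (unknown sender, non-Microsoft sending domain, shortened links, destinations hidden behind generic text) from a single email; the domain lists, regex, and heuristics are hypothetical and purely illustrative, not Darktrace’s logic.

```python
# Illustrative extraction of the red flags described above from a single email.
# Domain lists, regex, and heuristics are hypothetical, not Darktrace's logic.
import re
from urllib.parse import urlparse

KNOWN_SHORTENERS = {"bit.ly", "t.co", "tinyurl.com"}                  # hypothetical list
MICROSOFT_SENDING_DOMAINS = {"microsoft.com", "microsoftonline.com"}  # hypothetical list


def email_red_flags(sender_domain: str, html_body: str, seen_before: bool) -> list[str]:
    flags = []
    if not seen_before:
        flags.append("sender never previously seen for this organization")
    if "microsoft" in html_body.lower() and sender_domain not in MICROSOFT_SENDING_DOMAINS:
        flags.append("claims Microsoft affiliation but sent from a non-Microsoft domain")
    # Links whose visible text hides the real destination, e.g. "Click Here"
    for href, text in re.findall(r'<a href="([^"]+)"[^>]*>([^<]+)</a>', html_body, re.I):
        host = urlparse(href).hostname or ""
        if host in KNOWN_SHORTENERS:
            flags.append(f"shortened link: {href}")
        if "click here" in text.lower():
            flags.append(f"destination hidden behind generic text: {href}")
    return flags


body = '<p>A new Microsoft vulnerability requires patching.</p><a href="https://bit.ly/x1">Click Here</a>'
print(email_red_flags("updates-msft.example", body, seen_before=False))
```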

AI-driven Email Security: The Way Forward

Email security solutions employing anomaly detection are critical weapons for security teams in the fight to stay ahead of evolving threats and varied kill chains, which are growing more complex year on year. The intertwining nature of technology, coupled with massive social reliance on it, guarantees that implicit trust will be exploited more and more, giving threat actors a variety of avenues to penetrate an organization. The changing nature of phishing and social engineering made possible by generative AI is just a drop in the ocean of the possible threats organizations face, and most will involve a trusted product or service being leveraged as an access point or attack vector. Anomaly detection and AI-driven email security are the most practical solution for security teams aiming to prevent, detect, and mitigate attacks that target users and technology through the exploitation of trust.

References

1. https://www.kellogg.northwestern.edu/trust-project/videos/waytz-ep-1.aspx

2. Rotenberg, K.J. (2018). The Psychology of Trust. Routledge.

3. https://www.cognifit.com/gb/attention

4. https://www.trusttech.cam.ac.uk/perspectives/technology-humanity-society-democracy/what-trust-technology-conceptual-bases-common

5. Tyler, T.R. and Kramer, R.M. (2001). Trust in Organizations: Frontiers of Theory and Research. Thousand Oaks, CA: Sage Publications, pp. 39–49.

6. https://link.springer.com/article/10.1007/s00426-006-0072-4



Adapting to new USCG cybersecurity mandates: Darktrace for ports and maritime systems

May 20, 2025

What is the Marine Transportation System (MTS)?

The Marine Transportation System (MTS) plays a substantial role in U.S. commerce, military readiness, and economic security. Defined as critical national infrastructure, the MTS encompasses all aspects of maritime transportation, from ships and ports to the inland waterways and the rail and roadways that connect them.

These interconnected systems include:

  • Waterways: Coastal and inland rivers, shipping channels, and harbors
  • Ports: Terminals, piers, and facilities where cargo and passengers are transferred
  • Vessels: Commercial ships, barges, ferries, and support craft
  • Intermodal Connections: Railroads, highways, and logistics hubs that tie maritime transport into national and global supply chains

The Coast Guard plays a central role in ensuring the safety, security, and efficiency of the MTS, which handles over $5.4 trillion in annual economic activity. As digital systems increasingly support operations across the MTS, from crane control to cargo tracking, cybersecurity has become essential to protecting this lifeline of U.S. trade and infrastructure.

The MTS also enables international trade, making it a prime target for cyber threats from ransomware gangs to nation-state actors.

To defend against growing threats, the United States Coast Guard (USCG) has moved from encouraging cybersecurity best practices to enforcing them, culminating in a new mandate that goes into effect on July 16, 2025. These regulations aim to secure the digital backbone of the maritime industry.

Why maritime ports are at risk

Modern ports are a blend of legacy and modern OT, IoT, and IT technologies, digitally connected to enable crane operations, container tracking, terminal storage, logistics, and remote maintenance.

Many of these systems were never designed with cybersecurity in mind, making them vulnerable to lateral movement and to spillover from disruptive ransomware attacks.

The convergence of business IT networks and operational infrastructure further expands the attack surface, especially with the rise of cloud adoption and unmanaged IoT and IIoT devices.

Cyber incidents in recent years have demonstrated how ransomware or other malicious activity can halt crane operations, disrupt logistics, and compromise safety at scale, threatening not only port operations but also national security and economic stability.

Relevant cyber-attacks on maritime ports

Maersk & Port of Los Angeles (2017 – NotPetya):
A ransomware attack crippled A.P. Moller-Maersk, the world’s largest shipping company. Operations at 17 ports, including the Port of Los Angeles, were halted due to system outages, causing weeks of logistical chaos.

Port of San Diego (2018 – Ransomware Attack):
A ransomware attack targeted the Port of San Diego, disrupting internal IT systems including public records, business services, and dockside cargo operations. While marine traffic was unaffected, commercial activity slowed significantly during recovery.

Port of Houston (2021 – Nation-State Intrusion):
A suspected nation-state actor exploited a known vulnerability in a Port of Houston web application to gain access to its network. While the attack was reportedly thwarted, it triggered a federal investigation and highlighted the vulnerability of maritime systems.

Jawaharlal Nehru Port Trust, India (2022 – Ransomware Incident):
India’s largest container port experienced disruptions due to a ransomware attack affecting operations and logistics systems. Container handling and cargo movement slowed as IT systems were taken offline during recovery efforts.

A regulatory shift: From guidance to enforcement

Since the Maritime Transportation Security Act (MTSA) of 2002, ports have been required to develop and maintain security plans. Cybersecurity formally entered the regulatory fold in 2020 with revisions to 33 CFR Parts 105 and 106, requiring port authorities to assess and address computer system vulnerabilities.

In January 2025, the USCG finalized new rules to enforce cybersecurity practices across the MTS. Key elements include (but are not limited to):

  • A dedicated cyber incident response plan (PR.IP-9)
  • Routine cybersecurity risk assessments and exercises (ID.RA)
  • Designation of a cybersecurity officer and regular workforce training (section 3.1)
  • Controls for access management, segmentation, logging, and encryption (PR.AC-1:7)
  • Supply chain risk management (ID.SC)
  • Incident reporting to the National Response Center

Port operators are encouraged to align their programs with the NIST Cybersecurity Framework (CSF 2.0) and NIST SP 800-82r3, which provide comprehensive guidance for IT and OT security in industrial environments.
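For teams tracking their plan against these rule elements, a lightweight mapping of requirements to the CSF identifiers cited above can help expose gaps. The structure below is a hypothetical illustration, not an official crosswalk, and the status-tracking logic is invented.

```python
# Hypothetical helper mapping the USCG rule elements above to the NIST CSF
# identifiers cited in the rule; the structure and status tracking are illustrative.
USCG_TO_NIST_CSF = {
    "Cyber incident response plan": ["PR.IP-9"],
    "Risk assessments and exercises": ["ID.RA"],
    "Cybersecurity officer and workforce training": ["Section 3.1"],
    "Access management, segmentation, logging, encryption": [f"PR.AC-{i}" for i in range(1, 8)],
    "Supply chain risk management": ["ID.SC"],
    "Incident reporting to the National Response Center": [],  # reporting duty, not a CSF control
}


def coverage_gaps(implemented: set[str]) -> dict[str, list[str]]:
    """Return, per rule element, any referenced controls not yet implemented."""
    return {
        element: missing
        for element, controls in USCG_TO_NIST_CSF.items()
        if (missing := [c for c in controls if c not in implemented])
    }


print(coverage_gaps({"PR.IP-9", "ID.RA", "ID.SC"}))
```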

How Darktrace can support maritime & ports

Unified IT + OT + Cloud coverage

Maritime ports operate in hybrid environments spanning business IT systems (finance, HR, ERP), industrial OT (cranes, gates, pumps, sensors), and an increasing array of cloud and SaaS platforms.

Darktrace is the only vendor that provides native visibility and threat detection across OT/IoT, IT, cloud, and SaaS environments — all in a single platform. This means:

  • Cranes and other physical process control networks are monitored in the same dashboard as Active Directory and Office 365.
  • Threats that start in the cloud (e.g., phishing, SaaS token theft) and pivot or attempt to pivot into OT are caught early — eliminating blind spots that siloed tools miss.

This unification is critical to meeting USCG requirements for network-wide monitoring, risk identification, and incident response.

AI that understands your environment. Not just known threats

Darktrace’s AI doesn’t rely on rules or signatures. Instead, it uses Self-Learning AI™ that builds a unique “pattern of life” for every device, protocol, user, and network segment, whether it’s a crane router, PLC, SCADA server, workstation, or Linux file server.

  • No predefined baselines or manual training
  • Real-time anomaly detection for zero-days, ransomware, and supply chain compromise
  • Continuous adaptation to new devices, configurations, and operations

This approach is critical in diverse, distributed OT environments where change and anomalous activity on the network are more frequent. It also dramatically reduces the time and expertise needed to classify and inventory assets, even for unknown or custom-built systems.
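As a highly simplified illustration of the “pattern of life” idea (not how Darktrace’s Self-Learning AI is actually implemented), the sketch below keeps running per-device statistics and flags values that fall far outside each device’s learned norm; the device names, features, and threshold are invented.

```python
# Highly simplified "pattern of life" sketch: maintain running per-device statistics
# and flag values far outside the learned norm. Device names, features, and the
# threshold are invented; this is not how Darktrace's Self-Learning AI is implemented.
from collections import defaultdict
from math import sqrt


class DeviceBaseline:
    """Online mean/variance (Welford's algorithm) for one (device, feature) pair."""

    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x: float) -> None:
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def zscore(self, x: float) -> float:
        if self.n < 2 or self.m2 == 0:
            return 0.0
        return (x - self.mean) / sqrt(self.m2 / (self.n - 1))


baselines = defaultdict(DeviceBaseline)


def observe(device: str, feature: str, value: float, threshold: float = 4.0) -> bool:
    """Update the baseline and return True if the value is anomalous for this device."""
    key = (device, feature)
    anomalous = abs(baselines[key].zscore(value)) > threshold
    baselines[key].update(value)
    return anomalous


for count in [2, 3, 2, 4, 3, 2, 3]:  # a crane PLC's normal external connections per hour
    observe("crane-plc-01", "ext_conns_per_hour", count)
print(observe("crane-plc-01", "ext_conns_per_hour", 250))  # True: far outside the norm
```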

Supporting incident response requirements

A key USCG requirement is that cybersecurity plans must support effective incident response.

Key expectations include:

  • Defined response roles and procedures: Personnel must know what to do and when (RS.CO-1).
  • Timely reporting: Incidents must be reported and categorized according to established criteria (RS.CO-2, RS.AN-4).
  • Effective communication: Information must be shared internally and externally, including voluntary collaboration with law enforcement and industry peers (RS.CO-3 through RS.CO-5).
  • Thorough analysis: Alerts must be investigated, impacts understood, and forensic evidence gathered to support decision-making and recovery (RS.AN-1 through RS.AN-5).
  • Swift mitigation: Incidents must be contained and resolved efficiently, with newly discovered vulnerabilities addressed or documented (RS.MI-1 through RS.MI-3).
  • Ongoing improvement: Organizations must refine their response plans using lessons learned from past incidents (RS.IM-1 and RS.IM-2).

That means detections need to be clear, accurate, and actionable.

Darktrace cuts through the noise using AI that prioritizes only high-confidence incidents and provides natural-language narratives and investigative reports that explain:

  • What’s happening, where it’s happening, when it’s happening
  • Why it’s unusual
  • How to respond

Result: Port security teams, which are often lean and multi-tasked, can meet USCG response-time expectations and reporting needs without needing to scale headcount or triage hundreds of alerts.

Built-for-edge deployment

Maritime environments are constrained: many traditional SaaS deployment models are unsuitable for tugboats, cranes, or air-gapped terminal systems.

Darktrace builds and maintains its own ruggedized, purpose-built appliances and unique virtual deployment options that:

  • Deploy directly into crane networks or terminal enclosures
  • Require no configuration or tuning, drop-in ready
  • Support secure over-the-air updates and fleet management
  • Operate without cloud dependency, supporting isolated and air-gapped systems

Use case: Multiple ports have been able to deploy Darktrace directly into the crane’s switch enclosure, securing lateral movement paths without interfering with the crane control software itself.

Segmentation enforcement & real-time threat containment

Darktrace visualizes real-time connectivity and attack pathways across IT, OT, and IoT, and integrates with firewalls (e.g., Fortinet, Cisco, Palo Alto) to enforce segmentation using AI insights, alongside Darktrace’s own native autonomous and human-confirmed response capabilities.

Benefits of autonomous and human-confirmed response:

  • Auto-isolate rogue devices before the threat can escalate
  • Quarantine suspicious connections with confidence that operations won’t be halted
  • Autonomously buy time for human responders during off-hours or holidays
  • Ensure segmentation isn’t just documented: if it fails or is exploited, response actions serve as a compensating control
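As a hypothetical illustration of this kind of integration, the sketch below pushes a containment rule to a firewall’s REST API when a high-confidence detection fires. The endpoint, token, and payload schema are invented for illustration and do not represent Darktrace’s or any firewall vendor’s actual API.

```python
# Hypothetical glue code: push a containment rule to a firewall's REST API when a
# high-confidence detection fires. The URL, token, and payload schema are invented
# and do not represent Darktrace's or any firewall vendor's actual API.
import requests

FIREWALL_API = "https://firewall.example.local/api/v1/block"  # hypothetical endpoint
API_TOKEN = "REDACTED"                                        # hypothetical credential


def contain_device(device_ip: str, reason: str, duration_minutes: int = 60) -> bool:
    """Ask the firewall to isolate a device; returns True if the rule was accepted."""
    resp = requests.post(
        FIREWALL_API,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={
            "source_ip": device_ip,
            "action": "isolate",
            "reason": reason,
            "ttl_minutes": duration_minutes,
        },
        timeout=10,
    )
    return resp.status_code == 200


# Example: isolate a crane-network device exhibiting beaconing, pending human review
# contain_device("10.20.30.40", "beaconing to rare external endpoint")
```

A time-limited rule like this illustrates the compensating-control idea above: containment buys responders time without permanently altering the segmentation design.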

No reliance on 3rd parties or external connectivity

Darktrace’s supply chain integrity is a core part of its value to critical infrastructure customers. Unlike solutions that rely on indirect data collection or third-party appliances, Darktrace:

  • Uses in-house engineered sensors and appliances
  • Does not require transmission of data to or from the cloud

This ensures confidence in both your cyber visibility and the security of the tools you deploy.


Readiness for USCG and Beyond

With a self-learning system that adapts to each unique port environment, Darktrace helps maritime operators not just comply but build lasting cyber resilience in a high-threat landscape.

Cybersecurity is no longer optional for U.S. ports; it is operationally and nationally critical. Darktrace delivers the intelligence, automation, and precision needed to meet USCG requirements and protect the digital lifeblood of the modern port.

About the author
Daniel Simonds
Director of Operational Technology

Catching a RAT: How Darktrace Neutralized AsyncRAT

May 20, 2025

What is a RAT?

As the proliferation of new and more advanced cyber threats continues, the Remote Access Trojan (RAT) remains a classic tool in a threat actor's arsenal. RATs, whether standardized or custom-built, enable attackers to remotely control compromised devices, facilitating a range of malicious activities.

What is AsyncRAT?

Since its first appearance in 2019, AsyncRAT has become increasingly popular among a wide range of threat actors, including cybercriminals and advanced persistent threat (APT) groups.

Originally released on GitHub as a legitimate tool, AsyncRAT’s open-source nature has led to widespread exploitation. It has been used in numerous campaigns, including prolonged attacks on essential US infrastructure, and has even reportedly penetrated the Chinese cybercriminal underground market [1] [2].

How does AsyncRAT work?

Analysis of AsyncRAT’s original source code demonstrates that, once installed, it establishes persistence via techniques such as creating scheduled tasks or registry keys, and uses SeDebugPrivilege to gain elevated privileges [3].

Its key features include:

  • Keylogging
  • File search
  • Remote audio and camera access
  • Exfiltration techniques
  • Staging for final payload delivery

These are generally typical functions found in traditional RATs. However, AsyncRAT also boasts interesting anti-detection capabilities. Due to the popularity of Virtual Machines (VMs) and sandboxes for dynamic analysis, the RAT checks the system manufacturer via the WMI query 'Select * from Win32_ComputerSystem' and looks for strings containing 'VMware' and 'VirtualBox' [4].
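To show what this evasion check looks for, the sketch below is a benign reproduction of a WMI-based VM check in Python (using the third-party wmi package on Windows); it is illustrative only and is not AsyncRAT’s actual code.

```python
# Benign reproduction of the WMI-based VM check described above (illustrative only,
# not AsyncRAT's actual code). Requires Windows and the third-party 'wmi' package.
import wmi


def looks_like_vm() -> bool:
    conn = wmi.WMI()
    # Equivalent to the query 'SELECT * FROM Win32_ComputerSystem'
    for system in conn.Win32_ComputerSystem():
        fingerprint = f"{system.Manufacturer} {system.Model}".lower()
        # The evasion logic looks for virtualization vendors in these fields
        if any(marker in fingerprint for marker in ("vmware", "virtualbox")):
            return True
    return False


if __name__ == "__main__":
    print("Virtualized environment detected:", looks_like_vm())
```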

Darktrace’s coverage of AsyncRAT

In late 2024 and early 2025, Darktrace observed a spike in AsyncRAT activity across various customer environments. Multiple indicators of post-compromise were detected, including devices attempting or successfully connecting to endpoints associated with AsyncRAT.

On several occasions, Darktrace identified a clear association with AsyncRAT through the digital certificates of the highlighted SSL endpoints. Darktrace’s Real-time Detection effectively identified and alerted on suspicious activities related to AsyncRAT. In one notable incident, Darktrace’s Autonomous Response promptly took action to contain the emerging threat posed by AsyncRAT.

AsyncRAT attack overview

On December 20, 2024, Darktrace first identified the use of AsyncRAT, noting a device successfully establishing SSL connections to the uncommon external IP 185.49.126[.]50 (AS199654 Oxide Group Limited) via port 6606. The IP address appears to be associated with AsyncRAT, as flagged by open-source intelligence (OSINT) sources [5]. This activity caused the device to trigger the ‘Anomalous Connection / Rare External SSL Self-Signed’ model alert.

Figure 1: Model alert in Darktrace / NETWORK showing the repeated SSL connections to a rare external Self-Signed endpoint, 185.49.126[.]50.

Following these initial connections, the device was observed making a significantly higher number of connections to the same endpoint 185.49.126[.]50 via port 6606 over an extended period. This pattern suggested beaconing activity and triggered the 'Compromise/Beaconing Activity to External Rare' model alert.
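Beaconing of this kind is typically characterized by repeated connections at regular intervals. The sketch below scores interval regularity as a simplified illustration of the concept; it does not reflect how Darktrace’s models actually work, and the sample timestamps are invented.

```python
# Simplified beaconing heuristic: flag an external endpoint when a device makes
# many connections to it at suspiciously regular intervals. Illustrative only;
# Darktrace's models are not implemented this way.
from statistics import mean, pstdev


def beaconing_score(timestamps: list[float]) -> float:
    """Return a 0-1 score; higher means more regular (beacon-like) timing."""
    if len(timestamps) < 10:
        return 0.0
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(intervals)
    if avg == 0:
        return 0.0
    # Low coefficient of variation => highly regular intervals => beacon-like
    cv = pstdev(intervals) / avg
    return max(0.0, 1.0 - cv)


# Example: connections roughly every 60 seconds to one rare external endpoint
times = [i * 60 + jitter for i, jitter in enumerate([0, 2, -1, 3, 0, 1, -2, 2, 0, 1, -1, 2])]
print(round(beaconing_score(times), 2))  # close to 1.0 => likely beaconing
```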

Further analysis of the original source code, available publicly, outlines the default ports used by AsyncRAT clients for command-and-control (C2) communications [6]. It reveals that port 6606 is the default port for creating a new AsyncRAT client. Darktrace identified both the Certificate Issuer and the Certificate Subject as "CN=AsyncRAT Server". This SSL certificate encrypts the packets between the compromised system and the server. These indicators of compromise (IoCs) detected by Darktrace further suggest that the device was successfully connecting to a server associated with AsyncRAT.
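For defenders who retain network metadata, indicators like this certificate common name can be hunted retrospectively. The sketch below assumes Zeek ssl.log records in JSON format and searches them for the 'CN=AsyncRAT Server' string and the default AsyncRAT ports; the field names and log path are assumptions about a typical Zeek deployment, not part of Darktrace’s product.

```python
# Hunting sketch over Zeek ssl.log (JSON format assumed): search for the
# "CN=AsyncRAT Server" certificate indicator and the default AsyncRAT ports.
# Field names reflect a typical Zeek deployment; the log path is illustrative.
import json

SUSPICIOUS_CN = "CN=AsyncRAT Server"
DEFAULT_ASYNCRAT_PORTS = {6606, 7707, 8808}  # defaults from the public source code


def hunt(ssl_log_path: str) -> None:
    with open(ssl_log_path) as fh:
        for line in fh:
            entry = json.loads(line)
            issuer = entry.get("issuer") or ""
            subject = entry.get("subject") or ""
            if SUSPICIOUS_CN in issuer or SUSPICIOUS_CN in subject:
                port = entry.get("id.resp_p")
                note = " (default AsyncRAT port)" if port in DEFAULT_ASYNCRAT_PORTS else ""
                print(f"{entry.get('id.orig_h')} -> {entry.get('id.resp_h')}:{port}{note}")


# hunt("/opt/zeek/logs/current/ssl.log")  # illustrative path
```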

Figure 2: Model alert in Darktrace / NETWORK displaying the Digital Certificate attributes, IP address and port number associated with AsyncRAT.
Figure 3: Darktrace’s detection of repeated connections to the suspicious IP address 185.49.126[.]50 over port 6606, indicative of beaconing behavior.
Figure 4: Darktrace's Autonomous Response actions blocking the suspicious IP address, 185.49.126[.]50.

A few days later, the same device was detected making numerous connections to a different IP address, 195.26.255[.]81 (AS40021 NL-811-40021), via various ports including 2106, 6606, 7707, and 8808. Notably, ports 7707 and 8808 are also default ports specified in the original AsyncRAT source code [6].

Figure 5: Darktrace’s detection of connections to the suspicious endpoint 195.26.255[.]81, where the default ports (6606, 7707, and 8808) for AsyncRAT were observed.

Similar to the activity observed with the first endpoint, 185.49.126[.]50, the Certificate Issuer for the connections to 195.26.255[.]81 was identified as "CN=AsyncRAT Server". Further OSINT investigation confirmed associations between the IP address 195.26.255[.]81 and AsyncRAT [7].

Figure 6: Darktrace's detection of a connection to the suspicious IP address 195.26.255[.]81 and the domain name identified under the common name (CN) of a certificate as AsyncRAT Server.

Once again, Darktrace's Autonomous Response acted swiftly, blocking the connections to 195.26.255[.]81 throughout the observed AsyncRAT activity.

Figure 7: Darktrace's Autonomous Response actions were applied against the suspicious IP address 195.26.255[.]81.

A day later, Darktrace again alerted on further suspicious activity from the device, this time observing connections to the suspicious endpoint 'kashuub[.]com' and the IP address 191.96.207[.]246 via port 8041. Further analysis of port 8041 suggests it is commonly associated with ScreenConnect or Xcorpeon ASIC Carrier Ethernet Transport [8]. ScreenConnect has been observed in recent campaigns in which AsyncRAT was utilized [9]. Additionally, one of the ASNs observed, Oxide Group Limited, was seen in connections to both kashuub[.]com and 185.49.126[.]50.

This could suggest a parallel between the two endpoints, indicating they might be hosting AsyncRAT C2 servers, as inferred from the previous analysis of the endpoint 185.49.126[.]50 and its association with AsyncRAT [5]. OSINT reporting suggests that the “kashuub[.]com” endpoint may be associated with ScreenConnect scam domains [10], further supporting the assumption that the endpoint could be a C2 server.

Darktrace’s Autonomous Response technology was once again able to support the customer here, blocking connections to “kashuub[.]com”. Ultimately, this intervention halted the compromise and prevented the attack from escalating or any sensitive data from being exfiltrated from the customer’s network into the hands of the threat actors.

Figure 8: Darktrace’s Autonomous Response applied a total of nine actions against the IP address 191.96.207[.]246 and the domain 'kashuub[.]com', successfully blocking the connections.

Due to the popularity of this RAT, it is difficult to determine the motive behind the attack; however, based on existing knowledge of the RAT’s capabilities, we can assume that accessing and exfiltrating sensitive customer data may have been a factor.

Conclusion

For cybercriminals seeking stability and simplicity, openly available RATs like AsyncRAT provide ready-made infrastructure and open the door for even the most amateur threat actors to compromise sensitive networks. As the cyber landscape continually shifts, RATs are now being used in all manner of attacks.

Darktrace’s suite of AI-driven tools provides organizations with the infrastructure to achieve complete visibility and control over emerging threats within their network environment. Although AsyncRAT’s lack of concealment allowed Darktrace to quickly detect the developing threat and alert on unusual behaviors, it was ultimately Darktrace Autonomous Response's consistent blocking of suspicious connections that prevented a more disruptive attack.

Credit to Isabel Evans (Cyber Analyst), Priya Thapa (Cyber Analyst) and Ryan Traill (Analyst Content Lead)

Appendices

  • Real-time Detection Models
    • Compromise / Suspicious SSL Activity
    • Compromise / Beaconing Activity To External Rare
    • Compromise / High Volume of Connections with Beacon Score
    • Anomalous Connection / Suspicious Self-Signed SSL
    • Compromise / Sustained SSL or HTTP Increase
    • Compromise / SSL Beaconing to Rare Destination
    • Compromise / Suspicious Beaconing Behaviour
    • Compromise / Large Number of Suspicious Failed Connections

  • Autonomous Response Models
    • Antigena / Network / Significant Anomaly / Antigena Controlled and Model Alert
    • Antigena / Network / Significant Anomaly / Antigena Enhanced Monitoring from Client Block

List of IoCs

  • 185.49.126[.]50 – IP – AsyncRAT C2 endpoint
  • 195.26.255[.]81 – IP – AsyncRAT C2 endpoint
  • 191.96.207[.]246 – IP – Likely AsyncRAT C2 endpoint
  • CN=AsyncRAT Server – SSL certificate – AsyncRAT C2 infrastructure
  • kashuub[.]com – Hostname – Likely AsyncRAT C2 endpoint

MITRE ATT&CK Mapping:

Tactic – Technique – Sub-Technique

  • Execution – T1053 – Scheduled Task/Job: Scheduled Task
  • Defence Evasion – T1497 – Virtualization/Sandbox Evasion: System Checks
  • Discovery – T1057 – Process Discovery
  • Discovery – T1082 – System Information Discovery
  • Lateral Movement – T1021.001 – Remote Services: Remote Desktop Protocol
  • Collection / Credential Access – T1056 – Input Capture: Keylogging
  • Collection – T1125 – Video Capture
  • Command and Control – T1105 – Ingress Tool Transfer
  • Command and Control – T1219 – Remote Access Software
  • Exfiltration – T1041 – Exfiltration Over C2 Channel

References

[1]  https://blog.talosintelligence.com/operation-layover-how-we-tracked-attack/

[2] https://intel471.com/blog/china-cybercrime-undergrond-deepmix-tea-horse-road-great-firewall

[3] https://www.attackiq.com/2024/08/01/emulate-asyncrat/

[4] https://www.fortinet.com/blog/threat-research/spear-phishing-campaign-with-new-techniques-aimed-at-aviation-companies

[5] https://www.virustotal.com/gui/ip-address/185.49.126[.]50/community

[6] https://dfir.ch/posts/asyncrat_quasarrat/

[7] https://www.virustotal.com/gui/ip-address/195.26.255[.]81

[8] https://www.speedguide.net/port.php?port=8041

[9] https://www.esentire.com/blog/exploring-the-infection-chain-screenconnects-link-to-asyncrat-deployment

[10] https://scammer.info/t/taking-out-connectwise-sites/153479/518?page=26

About the author
Isabel Evans
Cyber Analyst