Blog
/
Email
/
July 18, 2023

Understanding Email Security & the Psychology of Trust

We explore how psychological research into the nature of trust relates to our relationship with technology - and what that means for AI solutions.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Hanah Darley
Director of Threat Research

When security teams discuss the possibility of phishing attacks targeting their organization, the first reaction is often to assume compromise is inevitable because of the users. Users are typically referenced in cyber security conversations as organizations’ greatest weakness, cited as the cause of many serious cyber-attacks because they click links, open attachments, or approve multi-factor authentication requests without verifying their purpose.

While for many, the weakness of the user may feel like fact rather than theory, there is significant evidence to suggest that users are psychologically incapable of protecting themselves from exploitation by phishing attacks, with or without regular cyber awareness training. The psychology of trust and the nature of human reliance on technology make it very difficult, if not impossible, to prepare users for the exploitation of that trust in technology.

This Darktrace long read will highlight principles of psychological and sociological research on the nature of trust, the elements of trust that relate to technology, and how the human brain is wired to rely on implicit trust. These principles all point to the same conclusion: humans cannot be relied upon to identify phishing. Email security driven by machine augmentation, such as AI anomaly detection, is the clearest solution to that challenge.

What is the psychology of trust?

Psychological and sociological theories on trust largely centre around the importance of dependence and a two-party system: the trustor and the trustee. Most research has studied the impacts of trust decisions on interpersonal relationships, and the characteristics which make those relationships more or less likely to succeed. In behavioural terms, the elements most frequently referenced in trust decisions are emotional characteristics such as benevolence, integrity, competence, and predictability.1

Most behavioural evaluations of trust decisions survey why someone chooses to trust another person, how they made that decision, and how quickly they arrived at their choice. However, these micro-choices about trust need to be understood in a wider context: trust is essential to human survival. Trust decisions are rooted in many of the same survival instincts which require the brain to categorize information and determine possible dangers. More broadly, successful trust relationships are essential to maintaining the fabric of human society and critical to every element of human life.

Trust can be compared to dark matter (Rotenberg, 2018), the extensive but often difficult-to-observe material that binds planets and earthly matter. In the same way, trust is an integral but often silent component of human life, connecting people and enabling social functioning.2

Defining implicit and routine trust

As briefly mentioned earlier, dependence is an essential element of a trusting relationship. A routine of trust, based on maintaining rather than repeatedly establishing trust, becomes implicit within everyday life. For example, speaking to a friend about personal issues and life developments is often a subconscious reaction to the events occurring, rather than an explicit choice to trust that friend each time one has a new experience.

Active and passive levels of cognition are important to recognize in decision-making, including trust choices. Decision-making is often an active cognitive process requiring significant resources from the brain. However, many decisions occur passively, especially if they are not new choices, e.g. habits or routines. The brain’s focus turns to immediate tasks, relegating habitual choices to subconscious thought processes: passive cognition. Passive cognition leaves the brain open to inattentional blindness, wherein the individual may be abstractly aware of a choice, but it is not the focus of their thought processes or actively acknowledged as a decision. These levels of cognition are mostly referenced as “attention” within the brain’s cognition and processing.3

This idea is essentially a concept of implicit trust, meaning trust which is occurring as background thought processes rather than active decision-making. This implicit trust extends to multiple areas of human life, including interpersonal relationships, but also habitual choice and lifestyle. When combined with the dependence on people and services, this implicit trust creates a haze of cognition where trust is implied and assumed, rather than actively chosen across a myriad of scenarios.

Trust and technology

As researchers at the University of Cambridge highlight in their research into trust and technology, ‘In a fundamental sense, all technology depends on trust.’4 The same implicit trust systems which allow us to navigate social interactions by subconsciously choosing to trust also govern our interactions with technology. The implied trust in technology and services is perhaps most easily explained by a metaphor.

Most people have a favourite brand of soda. People will routinely purchase that soda and drink it without testing it for chemicals or bacteria and without reading reviews to ensure the companies that produce it have not changed their quality standards. This is a helpful, representative example of routine trust, wherein the trust choice is implicit through habitual action and does not mean the person is actively thinking about the ramifications of continuing to use a product and trust it.

The principle of dependence is especially important in discussions of trust and technology, because the modern human is entirely reliant on technology and so has no way to avoid trusting it.5 This is particularly relevant in workplace scenarios, where employees are given a mandatory set of technologies, from programs to devices and services, which they must interact with on a daily basis. Over time, the same implicit trust that would form between two people forms between the user and the technology. The key difference between interpersonal trust and technological trust is that deception is often much more difficult to identify.

The implicit trust in workplace technology

To provide a bit of workplace-specific context, organizations rely on technology providers for the operation (and often the security) of their devices. The organizations also rely on the employees (users) to use those technologies within the accepted policies and operational guidelines. The employees rely on the organization to determine which products and services are safe or unsafe.

Within this context, implicit trust is occurring at every layer of the organization and its technological holdings, but often the trust choice is made only annually by a small security team rather than continually evaluated. Systems and programs remain in place for years and are used because “that’s the way it’s always been done.” Within that context, the exploitation of that trust by threat actors impersonating or compromising those technologies or services is extremely difficult for a human to identify.

For example, many organizations use email communications to promote software updates to employees. Typically, these consist of emails prompting employees to update software versions, either from the vendors directly or via public marketplaces such as the App Store on macOS or the Microsoft Store on Windows. If such an email were impersonated, spoofing an update and including a malicious link or attachment, there would be no reason for the employee to question it, given the implicit trust built through habitual use of that service and program.

Inattentional blindness: How the brain ignores change

Users are psychologically predisposed to trust routinely used technologies and services, with most of those trust choices made subconsciously. Changes to these technologies are often subject to inattentional blindness: a psychological phenomenon wherein the brain overwrites sensory information with what it expects to see rather than what is actually perceived.

A great example of inattentional blindness6 is the following experiment, which asks individuals to count the number of times a ball is passed between multiple people. While that is occurring, something else is going on in the background which, statistically, those tested will not see. The shocking part of this experiment comes afterwards, when the researcher reveals that the event occurring in the background, unseen by participants, was a person in a gorilla suit moving back and forth between the group. This highlights how significant details can be overlooked by the brain and “overwritten” with other sensory information. When applied to technology, inattentional blindness and implicit trust make spotting changes in behaviour, or indicators that a trusted technology or service has been compromised, nearly impossible for most humans.

With all this in mind, how can you prepare users to correctly anticipate or identify a violation of that trust when their brains subconsciously make trust decisions and unintentionally ignore cues to suggest a change in behaviour? The short answer is, it’s difficult, if not impossible.

How threats exploit our implicit trust in technology

Most cyber threats are built around the idea of exploiting the implicit trust humans place in technology. Whether it’s techniques like “living off the land”, wherein programs normally associated with expected activities are leveraged to execute an attack, or through more overt psychological manipulation like phishing campaigns or scams, many cyber threats are predicated on the exploitation of human trust, rather than simply avoiding technological safeguards and building backdoors into programs.

In the case of phishing, it is easy to identify attempts to leverage users’ trust in technology and services. The most common example of this is spoofing, one of the most frequent tactics observed by Darktrace/Email. Spoofing mimics a trusted user or service and can be accomplished through a variety of mechanisms, be it the creation of a fake domain meant to mirror a trusted one, or the creation of an email account which appears to belong to a Human Resources, Internal Technology or Security service.
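To make the spoofed-domain idea concrete, here is a minimal, hypothetical sketch in Python of one narrow signal: flagging sender domains that closely resemble, but do not exactly match, a trusted domain. The allowlist, threshold, and example domains are invented for illustration; this is not Darktrace’s detection logic.

```python
# Minimal sketch: flag sender domains that closely resemble, but do not match,
# a list of trusted domains (hypothetical allowlist, not a production algorithm).
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"example-corp.com", "microsoft.com", "hr.example-corp.com"}

def lookalike_score(candidate: str, trusted: str) -> float:
    """Similarity ratio between 0.0 and 1.0 for two domain strings."""
    return SequenceMatcher(None, candidate.lower(), trusted.lower()).ratio()

def is_suspicious_sender(sender_domain: str, threshold: float = 0.85) -> bool:
    """A domain is suspicious if it is *almost* a trusted domain but not identical."""
    for trusted in TRUSTED_DOMAINS:
        score = lookalike_score(sender_domain, trusted)
        if sender_domain.lower() != trusted and score >= threshold:
            return True
    return False

if __name__ == "__main__":
    print(is_suspicious_sender("rnicrosoft.com"))   # True: visually mimics microsoft.com
    print(is_suspicious_sender("microsoft.com"))    # False: exact match to a trusted domain
    print(is_suspicious_sender("unrelated.org"))    # False: not similar to anything trusted
```

A heuristic like this catches only the crudest lookalikes; the point is simply that a machine can compare every sender against a baseline, where a human relying on implicit trust will not.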

In the case of a falsified internal service, often dubbed a “Fake Support Spoof”, the user is exploited by following instructions from an accepted organizational authority figure and service provider, whose actions should normally be adhered to. These cases are often difficult to spot when studying the sender’s address or text of the email alone, but are made even more difficult to detect if an account from one of those services is compromised and the sender’s address is legitimate and expected for correspondence. Especially given the context of implicit trust, detecting deception in these cases would be extremely difficult.

How email security solutions can solve the problem of implicit trust

How can an organization prepare for this exploitation? How can it mitigate threats which are designed to exploit implicit trust? The answer is by using email security solutions that leverage behavioural analysis via anomaly detection, rather than traditional email gateways.

Expecting humans to identify the exploitation of their own trust is a high-risk, low-reward endeavour, especially when it takes different forms, affects different users or portions of the organization differently, and doesn’t always have obvious red flags to identify it as suspicious. Cue email security using anomaly detection as the key answer to this evolving problem.

Anomaly detection enabled by machine learning and artificial intelligence (AI) removes the inattentional blindness that plagues human users and security teams, enabling the identification of departures from the norm, even those designed to mimic expected activity. Using anomaly detection mitigates multiple human cognitive biases which might prevent teams from identifying evolving threats. Anomaly detection does mean that security teams may be alerted to benign anomalous activity, but the trade-off is that threats, no matter how novel or cleverly packaged, can be identified and raised to the human security team.

Utilizing machine learning, especially unsupervised machine learning, mimics the benefits of human decision making and enables the identification of patterns and categorization of information without the framing and biases which allow trust to be leveraged and exploited.
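As an illustration of that idea, the sketch below fits an unsupervised anomaly detector to a set of invented per-email features and scores a new message against that baseline. It is a toy example assuming scikit-learn and NumPy are available; the feature choices and values are hypothetical and this is not a description of Darktrace’s models.

```python
# Illustrative sketch: unsupervised anomaly detection over simple, hypothetical
# per-email features, so unusual messages stand out without labelled phishing data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical features per email: [sender rarity, link count, domain age (days), hour sent].
# Simulate a baseline of ordinary internal mail; a real system would use far richer features.
normal_emails = np.column_stack([
    rng.uniform(0.0, 0.05, 500),      # senders the organization hears from often
    rng.integers(0, 3, 500),          # few links per email
    rng.uniform(1000, 5000, 500),     # long-registered sending domains
    rng.integers(8, 18, 500),         # sent during working hours
])

model = IsolationForest(contamination="auto", random_state=0).fit(normal_emails)

# A new email: never-before-seen sender, many links, week-old domain, sent at 3am.
new_email = np.array([[0.95, 9, 7, 3]])
print(model.predict(new_email))           # [-1] => flagged as anomalous
print(model.decision_function(new_email)) # negative score => anomalous relative to baseline
```

The detector never needs to know what phishing looks like; it only needs to know what the organization’s normal mail looks like, which is exactly the framing the paragraph above describes.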

For example, say a cleverly written email is sent from an address which appears to be a Microsoft affiliate, suggesting to the user that they need to patch their software due to the discovery of a new vulnerability. The sender’s address appears legitimate, and news stories are circulating on major media outlets that a new Microsoft vulnerability is causing organizations a lot of problems. The link, if clicked, forwards the user to a login page to verify their Microsoft credentials before downloading the new version of the software. After logging in, the program is available for download and only requires a few minutes to install. Whether this email was created by a service like ChatGPT (generative AI) or written by a person, if acted upon it would give the threat actor(s) the user’s credentials and, if the software is downloaded, activate malware on the device and possibly the broader network.

If we rely on users to identify this as unusual, there are many evidence points reinforcing their implicit trust in Microsoft services that make them want to comply with the email rather than question it. By comparison, anomaly detection-driven email security would flag the unusualness of the source: it would likely not come from a Microsoft-owned IP address, and the sender would be new to an organization that does not normally receive mail from that address. The language might indicate solicitation, an attempt to entice the user to act, and the link could be flagged because it contains a hidden redirect or tailored information the user cannot see, whether hidden beneath display text like “Click Here” or behind a shortened link. All of this information is present and discoverable in the phishing email, but often invisible to human users because of trust decisions made months or even years ago for known products and services.
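The kind of link inspection described above can be sketched as follows. The parser walks the HTML body and surfaces what a reader rarely checks: anchors whose visible text hides or contradicts the real destination, and links routed through known shorteners. The shortener list and the “Click Here” heuristics are hypothetical simplifications, not a real product’s rule set.

```python
# Hypothetical sketch: surface link details a reader rarely inspects - anchors whose
# visible text hides or contradicts the real destination, and known URL shorteners.
from html.parser import HTMLParser
from urllib.parse import urlparse

SHORTENERS = {"bit.ly", "tinyurl.com", "t.co", "goo.gl"}

class LinkAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.findings = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data.strip())

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            text = " ".join(t for t in self._text if t)
            host = urlparse(self._href).netloc.lower()
            if host in SHORTENERS:
                self.findings.append(f"shortened link: {self._href}")
            if text.lower() in {"click here", "update now"} or (
                "." in text and text not in self._href
            ):
                self.findings.append(f"display text '{text}' hides destination {self._href}")
            self._href = None

auditor = LinkAuditor()
auditor.feed('<p>Please <a href="http://bit.ly/x7q">Click Here</a> to update.</p>')
print(auditor.findings)
```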

AI-driven Email Security: The Way Forward

Email security solutions employing anomaly detection are critical weapons for security teams in the fight to stay ahead of evolving threats and varied kill chains, which grow more complex year on year. The intertwining nature of technology, coupled with massive social reliance on it, guarantees that implicit trust will be exploited more and more, giving threat actors a variety of avenues into an organization. The changing nature of phishing and social engineering made possible by generative AI is just a drop in the ocean of possible threats organizations face, and most will involve a trusted product or service being leveraged as an access point or attack vector. Anomaly detection and AI-driven email security are the most practical solution for security teams aiming to prevent, detect, and mitigate attacks that exploit trust in users and technology.

References

1. https://www.kellogg.northwestern.edu/trust-project/videos/waytz-ep-1.aspx

2. Rotenberg, K.J. (2018). The Psychology of Trust. Routledge.

3. https://www.cognifit.com/gb/attention

4. https://www.trusttech.cam.ac.uk/perspectives/technology-humanity-society-democracy/what-trust-technology-conceptual-bases-common

5. Tyler, T.R. and Kramer, R.M. (2001). Trust in Organizations: Frontiers of Theory and Research. Thousand Oaks, CA: Sage Publications, pp. 39–49.

6. https://link.springer.com/article/10.1007/s00426-006-0072-4


Blog
/
January 13, 2026

Runtime Is Where Cloud Security Really Counts: The Importance of Detection, Forensics and Real-Time Architecture Awareness


Introduction: Shifting focus from prevention to runtime

Cloud security has spent the last decade focused on prevention: tightening configurations, scanning for vulnerabilities, and enforcing best practices through Cloud Native Application Protection Platforms (CNAPP). These capabilities remain essential, but they are not where cloud attacks happen.

Attacks happen at runtime: the dynamic, ephemeral, constantly changing execution layer where applications run, permissions are granted, identities act, and workloads communicate. This is also the layer where defenders traditionally have the least visibility and the least time to respond.

Today’s threat landscape demands a fundamental shift. Reducing cloud risk now requires moving beyond static posture and CNAPP-only approaches and embracing real-time behavioral detection across workloads and identities, paired with the ability to automatically preserve forensic evidence. Defenders need a continuous, real-time understanding of what “normal” looks like in their cloud environments, and AI capable of processing massive data streams to surface deviations that signal emerging attacker behavior.
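One way to picture a “continuous, real-time understanding of normal” is a per-identity baseline that adapts as events stream in. The sketch below, a deliberately minimal illustration rather than Darktrace’s approach, keeps an exponentially weighted average of API call rates per identity and emits a deviation score; the identity name and traffic figures are invented, and real systems model far more than call rates.

```python
# Minimal sketch: an exponentially weighted per-identity baseline of "normal"
# API call rates, with a simple deviation score for each new observation.
from collections import defaultdict

class IdentityBaseline:
    def __init__(self, alpha: float = 0.1):
        self.alpha = alpha                      # how quickly the baseline adapts
        self.mean = defaultdict(float)          # EWMA of call rate per identity
        self.var = defaultdict(lambda: 1.0)     # EWMA of squared deviation

    def score(self, identity: str, calls_per_min: float) -> float:
        """Return a z-like deviation score, then fold the observation into the baseline."""
        mu, var = self.mean[identity], self.var[identity]
        z = abs(calls_per_min - mu) / (var ** 0.5 + 1e-6)
        self.mean[identity] = (1 - self.alpha) * mu + self.alpha * calls_per_min
        self.var[identity] = (1 - self.alpha) * var + self.alpha * (calls_per_min - mu) ** 2
        return z

baseline = IdentityBaseline()
for minute in range(60):
    baseline.score("build-role", 5.0)           # an hour of steady automation traffic
print(baseline.score("build-role", 250.0))      # sudden enumeration burst -> large score
```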

Runtime: The layer where attacks happen

Runtime is the cloud in motion: containers starting and stopping, serverless functions being called, IAM roles being assumed, workloads auto-scaling, and data flowing across hundreds of services. It’s also where attackers:

  • Weaponize stolen credentials
  • Escalate privileges
  • Pivot programmatically
  • Deploy malicious compute
  • Manipulate or exfiltrate data

The challenge is complex because runtime evidence is ephemeral. Containers vanish; critical process data disappears in seconds. By the time a human analyst begins investigating, the detail required to understand and respond to the alert is often already gone. This volatility makes runtime the hardest layer to monitor, and the most important one to secure.

What Darktrace / CLOUD Brings to Runtime Defence

Darktrace / CLOUD is purpose-built for the cloud execution layer. It unifies the capabilities required to detect, contain, and understand attacks as they unfold, not hours or days later. Four elements define its value:

1. Behavioral, real-time detection

The platform learns normal activity across cloud services, identities, workloads, and data flows, then surfaces anomalies that signify real attacker behavior, even when no signature exists.

2. Automated forensic-level artifact collection

The moment Darktrace detects a threat, it can automatically capture volatile forensic evidence: disk state, memory, logs, and process context, including from ephemeral resources. This preserves the truth of what happened before workloads terminate and the evidence disappears (a minimal sketch of this idea follows after this list).

3. AI-led investigation

Cyber AI Analyst assembles cloud behaviors into a coherent incident story, correlating identity activity, network flows, and cloud workload behavior. Analysts no longer need to pivot across dashboards or reconstruct timelines manually.

4. Live architectural awareness

Darktrace continuously maps your cloud environment as it operates, including services, identities, connectivity, and data pathways. This real-time visibility makes anomalies clearer and investigations dramatically faster.

Together, these capabilities form a runtime-first security model.
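To ground the forensic-capture idea (element 2 above), here is a hedged sketch of what automated evidence preservation can look like on AWS: snapshotting the EBS volumes of an instance the moment an alert names it. It covers disk state only, assumes boto3 credentials are configured, and the alert payload and incident ID are hypothetical; it is not Darktrace’s implementation.

```python
# Sketch: when an alert fires for an EC2 instance, snapshot its volumes before the
# workload can terminate and the evidence disappears. Assumes configured boto3 credentials.
import boto3

def preserve_instance_volumes(instance_id: str, incident_id: str) -> list[str]:
    """Snapshot every EBS volume attached to the instance and tag it to the incident."""
    ec2 = boto3.client("ec2")
    reservations = ec2.describe_instances(InstanceIds=[instance_id])["Reservations"]
    snapshot_ids = []
    for reservation in reservations:
        for instance in reservation["Instances"]:
            for mapping in instance.get("BlockDeviceMappings", []):
                volume_id = mapping["Ebs"]["VolumeId"]
                snap = ec2.create_snapshot(
                    VolumeId=volume_id,
                    Description=f"Forensic capture for {incident_id}",
                    TagSpecifications=[{
                        "ResourceType": "snapshot",
                        "Tags": [{"Key": "incident", "Value": incident_id}],
                    }],
                )
                snapshot_ids.append(snap["SnapshotId"])
    return snapshot_ids

# Example (hypothetical IDs): preserve_instance_volumes("i-0abc123def4567890", "INC-2026-0042")
```

The value is in the timing: because the capture is triggered by the detection itself, the evidence is preserved before a human analyst has even opened the alert.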

Why CNAPP alone isn’t enough

CNAPP platforms excel at pre-deployment checks, all the way down to developer workstations: identifying misconfigurations, concerning permission combinations, vulnerable images, and risky infrastructure choices. But CNAPP’s breadth is also its limitation. CNAPP is about posture. Runtime defense is about behavior.

CNAPP tells you what could go wrong; runtime detection highlights what is going wrong right now.

CNAPP cannot preserve ephemeral evidence, correlate active behaviors across domains, or contain unfolding attacks with the precision and speed required during a real incident. Prevention remains essential, but prevention alone cannot stop an attacker who is already operating inside your cloud environment.

Real-world AWS Scenario: Why Runtime Monitoring Wins

A recent incident detected by Darktrace / CLOUD highlights how cloud compromises unfold, and why runtime visibility is non-negotiable. Each step below reflects detections that occur only when monitoring behavior in real time.

1. External Credential Use

Detection: Unusual external source for credential use: An attacker logs into a cloud account from a never-before-seen location, the earliest sign of account takeover.

2. AWS CLI Pivot

Detection: Unusual CLI activity: The attacker switches to programmatic access, issuing commands from a suspicious host to gain automation and stealth.

3. Credential Manipulation

Detection: Rare password reset: They reset or assign new passwords to establish persistence and bypass existing security controls.

4. Cloud Reconnaissance

Detection: Burst of resource discovery: The attacker enumerates buckets, roles, and services to map high value assets and plan next steps.

5. Privilege Escalation

Detection: Anomalous IAM update: Unauthorized policy updates or role changes grant the attacker elevated access or a backdoor.

6. Malicious Compute Deployment

Detection: Unusual EC2/Lambda/ECS creation: The attacker deploys compute resources for mining, lateral movement, or staging further tools.

7. Data Access or Tampering

Detection: Unusual S3 modifications: They alter S3 permissions or objects, often a prelude to data exfiltration or corruption.

Only some of these actions would appear in a posture scan, and crucially only after the fact. Every one of these detections is visible only through real-time behavioral monitoring while the attack is in progress; a rough hunting sketch along these lines follows below.
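For readers who want to approximate a few of these runtime signals by hand, the hunting sketch below queries CloudTrail for some of the event types involved (console logins, IAM policy changes, new compute) and flags unfamiliar source IPs. The watched event names are real CloudTrail events, but the “known IP” baseline is a hypothetical in-memory set, and this is not how Darktrace / CLOUD works internally.

```python
# Rough hunting sketch: pull 24 hours of selected CloudTrail events and flag
# any issued from a source IP not in a (hypothetical) baseline of expected IPs.
import json
from datetime import datetime, timedelta, timezone
import boto3

WATCHED_EVENTS = ["ConsoleLogin", "PutUserPolicy", "AttachRolePolicy", "RunInstances"]
known_source_ips = {"203.0.113.10"}  # hypothetical baseline; a real system would persist this

cloudtrail = boto3.client("cloudtrail")
end = datetime.now(timezone.utc)
start = end - timedelta(hours=24)

for event_name in WATCHED_EVENTS:
    pages = cloudtrail.get_paginator("lookup_events").paginate(
        LookupAttributes=[{"AttributeName": "EventName", "AttributeValue": event_name}],
        StartTime=start,
        EndTime=end,
    )
    for page in pages:
        for event in page["Events"]:
            detail = json.loads(event["CloudTrailEvent"])
            source_ip = detail.get("sourceIPAddress", "unknown")
            if source_ip not in known_source_ips:
                actor = detail.get("userIdentity", {}).get("arn", "unknown")
                print(f"{event_name} from unfamiliar source {source_ip} by {actor}")
```

A script like this is point-in-time and manual; the argument of this post is precisely that these checks need to run continuously, against a learned baseline, while the attack is still unfolding.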

The future of cloud security is runtime-first

Cloud defense can no longer revolve solely around prevention. Modern attacks unfold at runtime, across a fast-changing mesh of workloads, services, and, critically, identities. To reduce risk, organizations must be able to detect, understand, and contain malicious activity as it happens, before ephemeral evidence disappears and before attackers pivot across identity layers.

Darktrace / CLOUD delivers this shift by turning runtime, the most volatile and consequential layer in the cloud, into a fully defensible control point through unified visibility across behavior, workloads, and identities. It does this by providing:

  • Real-time behavior detection across workloads and identity activity
  • Autonomous response actions for rapid containment
  • Automated forensic-level artifact preservation the moment events occur
  • AI-driven investigation that separates weak signals from true attacker patterns
  • Live cloud environment insight to understand context and impact instantly

Cloud security must evolve from securing what might go wrong to continuously understanding what is happening: at runtime, across identities, and at the speed attackers operate. Unifying runtime and identity visibility is how defenders regain the advantage.


About the author
Adam Stevens
Senior Director of Product, Cloud | Darktrace

Blog
/
Network
/
January 12, 2026

Maduro Arrest Used as a Lure to Deliver Backdoor


Introduction

Threat actors frequently exploit ongoing world events to trick users into opening and executing malicious files. Darktrace security researchers recently identified a threat group using reports around the arrest of Venezuelan President Nicolás Maduro on January 3, 2025, as a lure to deliver backdoor malware.

Technical Analysis

While the exact initial access method is unknown, it is likely that a spear-phishing email was sent to victims, containing a zip archive titled “US now deciding what’s next for Venezuela.zip”. This file included an executable named “Maduro to be taken to New York.exe” and a dynamic-link library (DLL), “kugou.dll”.  

The binary “Maduro to be taken to New York.exe” is a legitimate binary (albeit with an expired signature) related to KuGou, a Chinese streaming platform. Its function is to load the DLL “kugou.dll” via the DLL search order. In this instance, the expected DLL has been replaced with a malicious one of the same name, which the legitimate binary loads in its place.

Figure 1: DLL called with LoadLibraryW.

Once the DLL is executed, the directory C:\ProgramData\Technology360NB is created, and the DLL is copied into it along with the executable, which is renamed “DataTechnology.exe”. A registry value is created for persistence at “HKCU\Software\Microsoft\Windows\CurrentVersion\Run\Lite360” to run “DataTechnology.exe --DATA” on logon.

Figure 2: Registry key added for persistence.
Figure 3: Folder “Technology360NB” created.
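Defenders who want to hunt for this persistence mechanism on a Windows host could enumerate the current user’s Run key, as in the minimal sketch below. The ProgramData heuristic and the “Lite360” value name come from this sample; the script itself is a hypothetical triage aid, not a detection product.

```python
# Hunting sketch: enumerate the current user's Run key and flag entries pointing
# into ProgramData, such as the "Lite360" value created by this sample. Windows-only.
import winreg

RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

def suspicious_run_entries():
    findings = []
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY) as key:
        index = 0
        while True:
            try:
                name, value, _type = winreg.EnumValue(key, index)
            except OSError:          # no more values to enumerate
                break
            if "programdata" in str(value).lower() or name == "Lite360":
                findings.append((name, value))
            index += 1
    return findings

for name, value in suspicious_run_entries():
    print(f"Run key entry '{name}' -> {value}")
```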

During execution, a dialog box appears with the caption “Please restart your computer and try again, or contact the original author.”

Figure 4: Message box prompting user to restart.

Restarting causes the malware to run from the registry key with the --DATA argument; if the user does not restart, a forced restart is triggered. Once the system has restarted, the malware begins periodic TLS connections to the command-and-control (C2) server 172.81.60[.]97 on port 443. While the encrypted traffic prevents direct inspection of commands or data, the regular beaconing and response traffic strongly imply that the malware can poll a remote server for instructions, configuration, or tasking.
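The beaconing behaviour described here, regular connections to a single destination, can be approximated from connection logs by measuring how uniform the gaps between connections are. The timestamps and thresholds in this sketch are illustrative, and real beacons often add deliberate jitter, so this is a teaching example rather than a reliable detector.

```python
# Simplified sketch: if the gaps between outbound TLS connections to one destination
# are numerous and nearly constant, the traffic looks automated rather than human.
from statistics import mean, pstdev

def looks_like_beaconing(timestamps, max_jitter_ratio=0.1, min_connections=6):
    """Flag a destination if inter-connection gaps are numerous and nearly constant."""
    if len(timestamps) < min_connections:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(gaps)
    return avg > 0 and (pstdev(gaps) / avg) < max_jitter_ratio

# Connections roughly every 300 seconds (epoch seconds, illustrative values)
beacon_times = [1000, 1300, 1601, 1899, 2200, 2502, 2800]
browsing_times = [1000, 1012, 1450, 1460, 3000, 5200, 5203]

print(looks_like_beaconing(beacon_times))    # True: near-constant ~300s interval
print(looks_like_beaconing(browsing_times))  # False: irregular, human-like gaps
```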

Conclusion

Threat groups have long used geopolitical issues and other high-profile events to make malicious content appear more credible or urgent. Since the onset of the war in Ukraine, organizations have been repeatedly targeted with spear-phishing emails using subject lines related to the ongoing conflict, including references to prisoners of war [1]. Similarly, the Chinese threat group Mustang Panda frequently uses this tactic to deploy backdoors, using lures related to the Ukrainian war, conventions on Tibet [2], the South China Sea [3], and Taiwan [4].  

The activity described in this blog shares similarities with previous Mustang Panda campaigns, including the use of a current-events-themed archive, a directory created in ProgramData containing a legitimate executable used to load a malicious DLL, and Run registry keys used for persistence. While there is an overlap of tactics, techniques and procedures (TTPs), there is insufficient information available to confidently attribute this activity to a specific threat group. Users should remain vigilant, especially when opening email attachments.

Credit to Tara Gould (Malware Research Lead)
Edited by Ryan Traill (Analyst Content Lead)

Indicators of Compromise (IoCs)

172.81.60[.]97
8f81ce8ca6cdbc7d7eb10f4da5f470c6 - US now deciding what's next for Venezuela.zip
722bcd4b14aac3395f8a073050b9a578 - Maduro to be taken to New York.exe
aea6f6edbbbb0ab0f22568dcb503d731  - kugou.dll

References

[1] https://cert.gov.ua/article/6280422  

[2] https://www.ibm.com/think/x-force/hive0154-mustang-panda-shifts-focus-tibetan-community-deploy-pubload-backdoor

[3] https://www.ibm.com/think/x-force/hive0154-targeting-us-philippines-pakistan-taiwan

[4] https://www.ibm.com/think/x-force/hive0154-targeting-us-philippines-pakistan-taiwan

About the author
Tara Gould
Malware Research Lead