August 4, 2020

How to Prevent Spear Phishing Attacks Post Twitter Hack

Twitter confirmed spear phishing as the cause of last month's attack. Learn about the limits of current defenses against spear phishing and how AI can stop it.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Dan Fein
VP, Product

Twitter has now confirmed that it was a “phone spear phishing attack” targeting a small number of their employees that allowed hackers to access 130 high-profile user accounts and fool thousands of people into giving away money via bitcoin.

Spear phishing involves targeted texts or emails aimed at individuals in an attempt to ‘hook’ them into opening an attachment or malicious link. This attack highlights the limitations in the security controls adopted by even some of the largest and most tech-savvy organizations out there, who continue to fall victim to this well-known attack technique.

The incident has been described by Twitter as a “coordinated social engineering attack” that “successfully targeted employees with access to internal systems and tools.”

Though the specific nature of the attack remains unclear, it likely followed a similar pattern to the series of threat finds detailed elsewhere on the Darktrace Blog: impersonating trusted colleagues or platforms, such as WeTransfer, Microsoft Teams or even Twitter itself, with an urgent message coaxing an employee into clicking on a disguised URL and inputting their credentials on a fake login page.

When an employee inputs their credentials, that data is recorded and beaconed back to the attacker, who will then use these login details to access internal systems — which, in this case, allowed them to subsequently take control of celebrities’ Twitter accounts and send out the damaging Tweets that left thousands out of pocket.
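One common ingredient of the fake-login lure described above is a lookalike domain that is a character or two away from a trusted one. A minimal, hypothetical sketch of spotting such domains with an edit-distance check (the trusted-domain list and threshold are illustrative assumptions, not any vendor's actual method):

```python
# Flag lookalike domains of the kind used in credential phishing
# (e.g. "twltter.com" impersonating "twitter.com").

TRUSTED_DOMAINS = ["twitter.com", "wetransfer.com", "teams.microsoft.com"]

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def is_lookalike(domain: str, max_edits: int = 2) -> bool:
    """True if `domain` is suspiciously close to, but not equal to, a trusted domain."""
    return any(0 < edit_distance(domain, t) <= max_edits for t in TRUSTED_DOMAINS)

print(is_lookalike("twltter.com"))   # one substitution away from twitter.com
print(is_lookalike("twitter.com"))   # exact match, so not flagged
```

Real attackers also use homoglyphs and entirely unrelated but plausible-sounding domains, which is why distance checks alone are not sufficient.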

Training the workforce is not enough

Twitter said in a statement that the incident has forced it to “accelerate several of [their] pre-existing security workstreams.” But the suggestion that the company will continue to organize “ongoing company-wide phishing exercises throughout the year” indicates an over-reliance on the ability of humans to identify malicious email attacks that are growing ever more advanced and harder to distinguish from genuine communication.

Cyber-criminals are now using AI to create fake profiles, personalize messages and replicate communication patterns, at a speed and scale that no human ever could. In this threat landscape, there can no longer be a reliance solely on educating the workforce, as the difference between a malicious email and legitimate communication becomes almost imperceptible. This has led to an acceptance that we must rely on technology to help us catch the subtle signs of attack, when humans alone fail to do so.

The legacy approach: no playbook for new attacks

The majority of communications security systems are not where they need to be, and this is particularly true for the email realm. Most tools in use today rely on static blacklists of rules and signatures that analyze emails in isolation, against known ‘bads’. Methods like looking for IP addresses or file hashes associated with phishing have had limited success in stopping attackers, who have devised simple techniques to bypass them.
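The static-blacklist approach described above reduces to exact-match lookups, which is precisely why it is so easy to bypass. A minimal sketch (the blocklist values are invented for illustration):

```python
# Signature-based filtering in miniature: the check is exact-match,
# so any new domain, fresh IP, or recompiled payload sails through.
import hashlib

KNOWN_BAD_IPS = {"203.0.113.7"}  # documentation-range example IP
KNOWN_BAD_HASHES = {hashlib.sha256(b"old phishing kit").hexdigest()}

def legacy_verdict(sender_ip: str, attachment: bytes) -> str:
    if sender_ip in KNOWN_BAD_IPS:
        return "block"
    if hashlib.sha256(attachment).hexdigest() in KNOWN_BAD_HASHES:
        return "block"
    return "allow"   # anything unseen is implicitly trusted

# A one-byte change to the payload and a fresh sending IP evade both signatures:
print(legacy_verdict("203.0.113.7", b"old phishing kit"))    # block
print(legacy_verdict("198.51.100.9", b"old phishing kit!"))  # allow
```

The "allow by default for anything unseen" branch is the structural weakness: the attacker, not the defender, controls whether an indicator has been seen before.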

As we have explored previously, attackers are constantly changing their approach, purchasing new domains en masse, experimenting with novel strains of malware, and manipulating headers to get around common validation checks. It is due to these developments that Secure Email Gateways (SEGs) become antiquated almost the moment they are updated.

The mean lifetime of an attack has reduced from 2.1 days in 2018 to 0.5 days in 2020. As soon as an SEG identifies a domain or a file hash as malicious, cyber-criminals change their attack infrastructure and launch a new wave of fresh attacks. Their fundamental means of operation renders legacy security tools incapable of evolving with the threat landscape, and it is for this reason that over 94% of cyber-attacks today start with an email.

How Cyber AI catches the threats others miss

However, one area where email security has seen great progress even in the last two years is the application of AI to spot the subtle features of advanced email attacks, even those that leverage novel malware. This approach allows security tools to move away from the binary decision-making of asking “Is this email ‘bad’?” and toward the far more useful question: “Does this belong?”

This form of what we’re calling ‘layered AI’ combines supervised and unsupervised machine learning, enabling it to spot the subtle deviations from learned ‘patterns of life’ that are indicative of a cyber-threat.

Supervised machine learning models can be trained on millions of emails to find subtle patterns undetectable by humans and detect new variations of known threat types. These models are able to find the real-world intentions behind an email: by training on millions of spear phishing emails, for example, a system can find patterns associated with this type of email attack and accurately classify a future email as spear phishing.
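As a toy illustration of this supervised step, here is a word-level Naive Bayes model trained on labelled emails. Real systems train on millions of messages with far richer features; the four-email corpus and the vocabulary below are invented purely for the sketch.

```python
# Tiny Naive Bayes spear-phishing classifier (illustrative only).
import math
from collections import Counter

def train(emails):
    """emails: list of (text, label) pairs with label in {'phish', 'ham'}."""
    counts = {"phish": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in emails:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def score(text, counts, totals):
    """Log-odds that the email is phishing (Laplace-smoothed); > 0 means phishy."""
    log_odds = math.log((totals["phish"] + 1) / (totals["ham"] + 1))
    vocab = set(counts["phish"]) | set(counts["ham"])
    for w in text.lower().split():
        p = (counts["phish"][w] + 1) / (sum(counts["phish"].values()) + len(vocab) + 1)
        h = (counts["ham"][w] + 1) / (sum(counts["ham"].values()) + len(vocab) + 1)
        log_odds += math.log(p / h)
    return log_odds

corpus = [
    ("urgent verify your account credentials now", "phish"),
    ("your account is locked click to verify", "phish"),
    ("meeting notes attached for tomorrow", "ham"),
    ("lunch tomorrow at noon", "ham"),
]
counts, totals = train(corpus)
print(score("urgent click to verify your credentials", counts, totals) > 0)  # True
```

The value of training at scale is that patterns like urgency language plus a credential request become statistically separable from routine correspondence, without anyone hand-writing a rule for each phrasing.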

In addition, unsupervised machine learning models can be trained on all available email data for an organization to find unknown variations of unknown threat types — that is, the ‘unknown unknowns,’ the combinations never before seen. Ultimately this is what enables a system to ask that critical question “does this belong?” and spot genuine anomalies that fall outside of the norm.
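To make the “does this belong?” idea concrete, here is an illustrative sketch of the unsupervised step: score how far an email falls outside an organization's learned ‘pattern of life’. The pattern here is simply how often each sender and each linked domain has been seen before; real models learn far richer behavioral features, and the scoring formula is an assumption made for this example.

```python
# Rarity-based anomaly scoring over observed email traffic (illustrative only).
from collections import Counter

class PatternOfLife:
    def __init__(self):
        self.senders = Counter()
        self.link_domains = Counter()
        self.total = 0

    def observe(self, sender, link_domain):
        """Learn from one benign email seen in normal traffic."""
        self.senders[sender] += 1
        self.link_domains[link_domain] += 1
        self.total += 1

    def anomaly_score(self, sender, link_domain):
        """0.0 = routine; 1.0 = never-before-seen sender AND linked domain."""
        if self.total == 0:
            return 1.0
        s = self.senders[sender] / self.total
        d = self.link_domains[link_domain] / self.total
        return 1.0 - (s + d) / 2

pol = PatternOfLife()
for _ in range(100):
    pol.observe("colleague@example.com", "example.com")

print(pol.anomaly_score("colleague@example.com", "example.com"))  # 0.0: routine
print(pol.anomaly_score("ceo@examp1e.com", "bit.ly"))             # 1.0: out of pattern
```

Note that this model needs no labelled attacks at all, which is exactly what lets it flag the ‘unknown unknowns’ a signature or classifier has never seen.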

Layering both of these applications of AI allows us to make determinations such as: ‘this is a phishing email and it doesn’t belong’, dramatically improving the system’s accuracy and allowing it to interrupt only the malicious emails – since there could be phishy-looking emails that are legitimate! It also enables us to act in proportion to the threat identified: locking links and attachments in some cases, or holding back emails entirely in others.
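The layered decision above can be sketched as a simple policy over the two scores: only when a message both looks like phishing (supervised) and falls outside the learned pattern of life (unsupervised) is it held back entirely. The thresholds and action names are illustrative assumptions, not a real product's policy.

```python
# Combining supervised and unsupervised verdicts into a proportionate action.
def layered_action(phish_score: float, anomaly_score: float) -> str:
    looks_phishy = phish_score > 0.5      # supervised: resembles known attack types
    out_of_pattern = anomaly_score > 0.5  # unsupervised: doesn't belong here
    if looks_phishy and out_of_pattern:
        return "hold email"                      # phishing, and it doesn't belong
    if looks_phishy or out_of_pattern:
        return "lock links and attachments"      # suspicious but possibly legitimate
    return "deliver"

print(layered_action(0.9, 0.9))  # hold email
print(layered_action(0.9, 0.1))  # phishy-looking but routine: lock only
print(layered_action(0.1, 0.1))  # deliver
```

The middle branch is what prevents the false-positive problem the text describes: a phishy-looking email from a well-established correspondent gets a lighter touch than a phishy-looking email from nowhere.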

This form of ‘layered AI’ requires an advanced understanding of mathematics and machine learning that takes years of research and development. With that experience, Cyber AI has proven itself capable of catching the full range of advanced attacks targeting the inbox, from spear phishing and impersonation attempts, to account takeovers and supply chain attacks. Once implemented, it takes only a week before any new organization can derive value, and thousands of customers now rely on Cyber AI to protect both their email realm and wider network.

Plenty more phish in the sea

This will not be the last time this year that a cyber-attack caused by spear phishing makes the headlines. Just this week, it was revealed that Russian-backed cyber-criminals stole sensitive documents on US-UK trade talks after successful spear phishing, and the technique may well have played a part in ongoing vaccine research espionage that surfaced in July.

With the US presidential race heating up, it was recently revealed that fewer than 3 out of 10 election administrators have basic controls to prevent phishing. This attack method may come to not only damage organizations and their reputation, but also to undermine the trust that serves as the bedrock of democracy. Now is the time to start recognizing the very real threat that email attackers represent, and to prepare our defenses accordingly.

Learn more about AI email security



March 2, 2026

What the Darktrace Annual Threat Report 2026 Means for Security Leaders


The challenge for today’s CISOs

At the broadest level, the defining characteristic of cybersecurity in 2026 is the sheer pace of change shaping the environments we protect. Organizations are operating in ecosystems that are larger, more interconnected, and more automated than ever before – spanning cloud platforms, distributed identities, AI-driven systems, and continuous digital workflows.  

The velocity of this expansion has outstripped the slower, predictable patterns security teams once relied on. What used to be a stable backdrop is now a living, shifting landscape where technology, risk, and business operations evolve simultaneously. From this vantage point, the central challenge for security leaders isn’t reacting to individual threats, but maintaining strategic control and clarity as the entire environment accelerates around them.

Strategic takeaways from the Annual Threat Report

The Darktrace Annual Threat Report 2026 reinforces a reality every CISO feels: the center of gravity isn’t the perimeter, vulnerability management, or malware, but trust abused via identity. For example, our analysis found that nearly 70% of incidents in the Americas region begin with stolen or misused accounts, reflecting the global shift toward identity‑led intrusions.

Mass adoption of AI agents, cloud-native applications, and machine decision-making means CISOs now oversee systems that act on their own. This creates an entirely new responsibility: ensuring those systems remain safe, predictable, and aligned to business intent, even under adversarial pressure.

Attackers increasingly exploit trust boundaries, not firewalls – leveraging cloud entitlements, SaaS identity transitions, supply-chain connectivity, and automation frameworks. The rise of non-human identities intensifies this: credentials, tokens, and agent permissions now form the backbone of operational risk.

Boards are now evaluating CISOs on business continuity, operational recovery, and whether AI systems and cloud workloads can fail safely without cascading or causing catastrophic impact.

In this environment, detection accuracy, autonomous response, and blast radius minimization matter far more than traditional control coverage or policy checklists.

Every organization will face setbacks; resilience is measured by how quickly security teams can rise, respond, and resume momentum. In 2026, success will belong to those that adapt fastest.

Managing business security in the age of AI

CISO accountability in 2026 has expanded far beyond controls and tooling. Whether we asked for it or not, we now own outcomes tied to business resilience, AI trust, cloud assurance, and continuous availability. The role is less about certainty and more about recovering control in an environment that keeps accelerating.

Every major 2026 initiative – AI agents, third-party risk, cloud, or comms protection – connects to a single board-level question: Are we still in control as complexity and automation scale faster than humans?

Attackers are not just getting more sophisticated; they are becoming more automated. AI changes the economics of attack, lowering cost and increasing speed. That asymmetry is what CISOs are being measured against.

CISOs are no longer evaluated on tool coverage, but on the ability to assure outcomes – trust in AI adoption, resilience across cloud and identity, and being able to respond to unknown and unforeseen threats.

Boards are now explicitly asking whether we can defend against AI-driven threats. No one can predict every new behavior – survival depends on detecting malicious deviations from normal fast and responding autonomously.  

Agents introduce decision-making at machine speed. Governance, CI/CD scanning, posture management, red teaming, and runtime detection are no longer differentiators but the baseline.

Cloud security is no longer architectural, it is operational. Identity, control planes, and SaaS exposure now sit firmly with the CISO.

AI-speed threats already reshaping security in 2026

We’re already seeing clear examples of how quickly the threat landscape has shifted in 2026. Darktrace’s work on React2Shell exposed just how unforgiving the new tempo is: a honeypot stood up with an exposed React service was hit in under two minutes. There was no recon phase, no gradual probing – just immediate, automated exploitation the moment the code appeared publicly. Exposure now equals compromise unless defenses can detect, interpret, and act at machine speed. Traditional operational rhythms simply don’t map to this reality.

We’re also facing the first wave of AI-authored malware, where LLMs generate code that mutates on demand. This removes the historic friction from the attacker side: no skill barrier, no time cost, no limit on iteration. Malware families can regenerate themselves, shift structure, and evade static controls without a human operator behind the keyboard. This forces CISOs to treat adversarial automation as a core operational risk and ensure that autonomous systems inside the business remain predictable under pressure.

The CVE-2026-1731 BeyondTrust exploitation wave reinforced the same pattern. The gap between disclosure and active, global exploitation compressed into hours. Automated scanning, automated payload deployment, coordinated exploitation campaigns, all spinning up faster than most organizations can push an emergency patch through change control. The vulnerability-to-exploit window has effectively collapsed, making runtime visibility, anomaly detection, and autonomous containment far more consequential than patching speed alone.

These cases aren’t edge scenarios; they represent the emerging norm. Complexity and automation have outpaced human-scale processes, and attackers are weaponizing that asymmetry.  

The real differentiator for CISOs in 2026 is less about knowing everything and more about knowing immediately when something shifts – and having systems that can respond at the same speed.


About the author
Mike Beck
Global CISO

March 2, 2026

CVE-2026-1731: How Darktrace Sees the BeyondTrust Exploitation Wave Unfolding


Note: Darktrace's Threat Research team is publishing now to help defenders. We will continue updating this blog as our investigations unfold.

Background

On February 6, 2026, Identity and Access Management (IAM) vendor BeyondTrust announced patches for CVE-2026-1731, a vulnerability that enables unauthenticated remote code execution via specially crafted requests. The vulnerability affects BeyondTrust Remote Support (RS) and certain older versions of Privileged Remote Access (PRA) [1].

A Proof of Concept (PoC) exploit for this vulnerability was released publicly on February 10, and open-source intelligence (OSINT) reported exploitation attempts within 24 hours [2].

Previous intrusions involving BeyondTrust technology have been linked to nation-state attacks, including a 2024 breach targeting the U.S. Treasury Department. That incident prompted emergency directives from the Cybersecurity and Infrastructure Security Agency (CISA), and later analysis showed attackers had chained previously unknown vulnerabilities to achieve their goals [3].

Additionally, there appears to be infrastructure overlap with the React2Shell mass exploitation previously observed by Darktrace: the command-and-control (C2) domain avg.domaininfo[.]top was seen in potential post-exploitation activity for BeyondTrust, as well as in a React2Shell exploitation case involving possible EtherRAT deployment.

Darktrace Detections

Darktrace’s Threat Research team has identified highly anomalous activity across several customers that may relate to exploitation of BeyondTrust since February 10, 2026. Observed activities include:

Outbound connections and DNS requests for endpoints associated with Out-of-Band Application Security Testing; these services are commonly abused by threat actors for exploit validation.  Associated Darktrace models include:

  • Compromise / Possible Tunnelling to Bin Services

Suspicious executable file downloads. Associated Darktrace models include:

  • Anomalous File / EXE from Rare External Location

Outbound beaconing to rare domains. Associated Darktrace models include:

  • Compromise / Agent Beacon (Medium Period)
  • Compromise / Agent Beacon (Long Period)
  • Compromise / Sustained TCP Beaconing Activity To Rare Endpoint
  • Compromise / Beacon to Young Endpoint
  • Anomalous Server Activity / Rare External from Server
  • Compromise / SSL Beaconing to Rare Destination

Unusual cryptocurrency mining activity. Associated Darktrace models include:

  • Compromise / Monero Mining
  • Compromise / High Priority Crypto Currency Mining

And model alerts for:

  • Compromise / Rare Domain Pointing to Internal IP
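Beaconing of the kind the models above target shows up in traffic data as near-regular outbound connections to a single endpoint. One simple illustrative heuristic (not Darktrace's actual detection logic, and with invented thresholds) is to flag a connection series whose inter-arrival times have a low coefficient of variation:

```python
# Flag machine-regular connection timing suggestive of C2 beaconing.
from statistics import mean, stdev

def looks_like_beaconing(timestamps, max_cv=0.1, min_events=5):
    """timestamps: sorted connection times (seconds) to one external host."""
    if len(timestamps) < min_events:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    cv = stdev(gaps) / mean(gaps)  # low coefficient of variation = clockwork timing
    return cv <= max_cv

regular = [i * 60.0 for i in range(20)]              # every 60 s, beacon-like
human = [0, 4, 95, 100, 340, 350, 900, 1800, 1805]   # bursty, human-driven
print(looks_like_beaconing(regular))  # True
print(looks_like_beaconing(human))    # False
```

Real beacons often add jitter precisely to defeat this kind of check, which is why production models also weigh endpoint rarity, destination age, and payload characteristics alongside timing.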

IT Defenders: As part of best practices, we highly recommend employing an automated containment solution in your environment. For Darktrace customers, please ensure that Autonomous Response is configured correctly. More guidance regarding this activity and suggested actions can be found in the Darktrace Customer Portal.  

Appendices

Potential indicators of post-exploitation behavior:

  • 217.76.57[.]78 – IP address – Likely C2 server
  • hXXp://217.76.57[.]78:8009/index.js – URL – Likely payload
  • b6a15e1f2f3e1f651a5ad4a18ce39d411d385ac7 – SHA1 – Likely payload
  • 195.154.119[.]194 – IP address – Likely C2 server
  • hXXp://195.154.119[.]194/index.js – URL – Likely payload
  • avg.domaininfo[.]top – Hostname – Likely C2 server
  • 104.234.174[.]5 – IP address – Possible C2 server
  • 35da45aeca4701764eb49185b11ef23432f7162a – SHA1 – Possible payload
  • hXXp://134.122.13[.]34:8979/c – URL – Possible payload
  • 134.122.13[.]34 – IP address – Possible C2 server
  • 28df16894a6732919c650cc5a3de94e434a81d80 – SHA1 – Possible payload
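The indicators above are published ‘defanged’ (hXXp, [.]) so they cannot be clicked or auto-resolved. A small helper to refang a subset of them and sweep proxy-log lines for matches; the log line below is invented for the example, and a real sweep would cover the full indicator list:

```python
# Refang defanged IOCs and check log lines for matches (illustrative only).
def refang(ioc: str) -> str:
    return ioc.replace("hXXp", "http").replace("[.]", ".")

IOCS = [
    "217.76.57[.]78",
    "195.154.119[.]194",
    "avg.domaininfo[.]top",
    "hXXp://195.154.119[.]194/index.js",
]

def matching_iocs(log_line: str):
    """Return every IOC whose refanged form appears in the log line."""
    return [ioc for ioc in IOCS if refang(ioc) in log_line]

log = "GET http://195.154.119.194/index.js 200 1337 srcip=10.0.0.5"
print(matching_iocs(log))  # both the bare IP and the full URL match
```

Substring matching like this is deliberately naive; production matching should normalize URLs and anchor IPs on token boundaries to avoid false hits inside longer strings.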

References:

1. https://nvd.nist.gov/vuln/detail/CVE-2026-1731
2. https://www.securityweek.com/beyondtrust-vulnerability-targeted-by-hackers-within-24-hours-of-poc-release/
3. https://www.rapid7.com/blog/post/etr-cve-2026-1731-critical-unauthenticated-remote-code-execution-rce-beyondtrust-remote-support-rs-privileged-remote-access-pra/

About the author
Emma Foulger
Global Threat Research Operations Lead