August 4, 2020

How to Prevent Spear Phishing Attacks Post Twitter Hack

Twitter confirmed spear phishing as the cause of last month's attack. Learn about the limits of current defenses against spear phishing and how AI can stop it.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Dan Fein
VP, Product

Twitter has now confirmed that it was a “phone spear phishing attack” targeting a small number of their employees that allowed hackers to access 130 high-profile user accounts and fool thousands of people into giving away money via bitcoin.

Spear phishing involves targeted texts or emails aimed at specific individuals in an attempt to ‘hook’ them into opening a malicious attachment or link. This attack highlights the limitations of the security controls adopted by even some of the largest and most tech-savvy organizations, which continue to fall victim to this well-known attack technique.

The incident has been described by Twitter as a “coordinated social engineering attack” that “successfully targeted employees with access to internal systems and tools.”

Though the specific nature of the attack remains unclear, it likely followed a similar pattern to the series of threat finds detailed elsewhere on the Darktrace Blog: impersonating trusted colleagues or platforms, such as WeTransfer, Microsoft Teams or even Twitter itself, with an urgent message coaxing an employee into clicking on a disguised URL and inputting their credentials on a fake login page.

When an employee inputs their credentials, that data is recorded and beaconed back to the attacker, who will then use these login details to access internal systems — which, in this case, allowed them to subsequently take control of celebrities’ Twitter accounts and send out the damaging Tweets that left thousands out of pocket.

Training the workforce is not enough

Twitter said in a statement that this incident has forced them to “accelerate several of [their] pre-existing security workstreams.” But the suggestion that they will continue to organize “ongoing company-wide phishing exercises throughout the year” indicates an over-reliance on the ability of humans to identify malicious email attacks that are becoming ever more advanced and harder to distinguish from genuine communication.

Cyber-criminals are now using AI to create fake profiles, personalize messages and replicate communication patterns, at a speed and scale that no human ever could. In this threat landscape, there can no longer be a reliance solely on educating the workforce, as the difference between a malicious email and legitimate communication becomes almost imperceptible. This has led to an acceptance that we must rely on technology to help us catch the subtle signs of attack, when humans alone fail to do so.

The legacy approach: no playbook for new attacks

The majority of communications security systems are not where they need to be, and this is particularly true for the email realm. Most tools in use today rely on static blacklists of rules and signatures that analyze emails in isolation, against known ‘bads’. Methods like looking for IP addresses or file hashes associated with phishing have had limited success in stopping attackers, who have devised simple techniques to bypass them.
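To make the brittleness concrete, here is a minimal sketch of that legacy approach (illustrative only, not any particular gateway's implementation; the blocklists are invented). A signature check can only match indicators it has already seen, so an attacker who re-hosts on a fresh IP and repacks the payload slips straight through:

```python
# Illustrative legacy signature check: flag mail only on known-bad indicators.
import hashlib

# Hypothetical blocklists of previously observed indicators
BAD_IPS = {"203.0.113.7", "198.51.100.23"}
BAD_ATTACHMENT_HASHES = {
    hashlib.sha256(b"known-malware-sample").hexdigest(),
}

def is_blocked(sender_ip: str, attachment: bytes) -> bool:
    """Flag an email only if it matches a known-bad IP or file hash."""
    digest = hashlib.sha256(attachment).hexdigest()
    return sender_ip in BAD_IPS or digest in BAD_ATTACHMENT_HASHES

# The previously seen attack is caught...
print(is_blocked("203.0.113.7", b"known-malware-sample"))      # True
# ...but a re-hosted, repacked variant of the same campaign is not.
print(is_blocked("192.0.2.99", b"repacked-malware-sample"))    # False
```

Both misses stem from the same limitation: the check encodes *what* past attacks looked like, not *how* attackers behave.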

As we have explored previously, attackers are constantly changing their approach, purchasing new domains en masse, experimenting with novel strains of malware, and manipulating headers to get around common validation checks. It is due to these developments that Secure Email Gateways (SEGs) become antiquated almost the moment they are updated.

The mean lifetime of an attack has fallen from 2.1 days in 2018 to 0.5 days in 2020. As soon as an SEG identifies a domain or a file hash as malicious, cyber-criminals change their attack infrastructure and launch a fresh wave of attacks. This mode of operation renders legacy security tools incapable of evolving with the threat landscape, and it is for this reason that over 94% of cyber-attacks today start with an email.

How Cyber AI catches the threats others miss

However, one area where email security has seen great progress, even in the last two years, is the application of AI to spot the subtle features of advanced email attacks, including those that leverage novel malware. This approach allows security tools to move away from the binary decision-making of asking “Is this email ‘bad’?” toward the far more useful question: “Does this belong?”

This form of what we’re calling ‘layered AI’ combines supervised and unsupervised machine learning, enabling it to spot the subtle deviations from learned ‘patterns of life’ that are indicative of a cyber-threat.

Supervised machine learning models can be trained on millions of emails to find subtle patterns undetectable by humans and detect new variations of known threat types. These models are able to find the real-world intentions behind an email: by training on millions of spear phishing emails, for example, a system can find patterns associated with this type of email attack and accurately classify a future email as spear phishing.
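As a toy sketch of the supervised side (the training snippets, word-level features, and log-odds model here are illustrative assumptions, not a production classifier, which would train on millions of emails with far richer features):

```python
# Toy supervised phishing scorer: learn per-word log-odds from labeled mail.
import math
from collections import Counter

def train(labeled):
    """labeled: list of (text, is_phish). Returns per-word log-odds weights."""
    phish, ham = Counter(), Counter()
    for text, is_phish in labeled:
        (phish if is_phish else ham).update(text.lower().split())
    vocab = set(phish) | set(ham)
    # Laplace smoothing keeps unseen words from producing infinite weights
    p_total = sum(phish.values()) + len(vocab)
    h_total = sum(ham.values()) + len(vocab)
    return {w: math.log((phish[w] + 1) / p_total)
               - math.log((ham[w] + 1) / h_total)
            for w in vocab}

def score(weights, text):
    """Positive = leans phishing, negative = leans legitimate."""
    return sum(weights.get(w, 0.0) for w in text.lower().split())

data = [
    ("urgent verify your password now", True),
    ("your account is locked click here", True),
    ("minutes from today's project meeting", False),
    ("lunch menu for the week", False),
]
w = train(data)
print(score(w, "urgent click to verify password") > 0)  # True
```

The real-world point survives the simplification: the model generalizes from patterns across many labeled examples, so it can flag a *new* email that resembles known spear phishing without any exact signature match.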

In addition, unsupervised machine learning models can be trained on all available email data for an organization to find unknown variations of unknown threat types — that is, the ‘unknown unknowns,’ the combinations never before seen. Ultimately this is what enables a system to ask that critical question “does this belong?” and spot genuine anomalies that fall outside of the norm.
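A minimal sketch of the unsupervised side, under stated assumptions: the two features (known recipients, usual sending hours) and the scoring rule are invented for illustration and are far simpler than any production 'pattern of life' model:

```python
# Toy 'pattern of life': learn each sender's normal behavior, score deviation.
from collections import defaultdict

class PatternOfLife:
    def __init__(self):
        self.seen = defaultdict(lambda: {"recipients": set(), "hours": set()})

    def observe(self, sender, recipient, hour):
        """Record one historical email's metadata as normal behavior."""
        profile = self.seen[sender]
        profile["recipients"].add(recipient)
        profile["hours"].add(hour)

    def anomaly_score(self, sender, recipient, hour):
        """0.0 = fits the learned pattern, 1.0 = entirely novel behavior."""
        profile = self.seen[sender]
        novel = 0
        novel += recipient not in profile["recipients"]
        novel += hour not in profile["hours"]
        return novel / 2

pol = PatternOfLife()
for h in (9, 10, 14):  # alice normally mails bob during working hours
    pol.observe("alice@corp.example", "bob@corp.example", h)

print(pol.anomaly_score("alice@corp.example", "bob@corp.example", 10))      # 0.0
print(pol.anomaly_score("alice@corp.example", "evil@attacker.example", 3))  # 1.0
```

Because the baseline is learned per organization and per sender, no prior knowledge of the attack is needed: a 3am email to a never-before-seen address simply does not belong.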

Layering both of these applications of AI allows us to make determinations such as: ‘this is a phishing email and it doesn’t belong’, dramatically improving the system’s accuracy and allowing it to interrupt only the malicious emails – since there could be phishy-looking emails that are legitimate! It also enables us to act in proportion to the threat identified: locking links and attachments in some cases, or holding back emails entirely in others.
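The layering logic can be sketched as a simple decision table (thresholds and action names here are illustrative assumptions, not Darktrace's actual policy):

```python
# Combine the supervised and unsupervised verdicts into a proportionate action.
def decide(phish_score: float, anomaly_score: float) -> str:
    looks_phishy = phish_score > 0.5     # supervised: resembles known attacks
    out_of_pattern = anomaly_score > 0.5  # unsupervised: doesn't belong
    if looks_phishy and out_of_pattern:
        return "hold"        # both layers agree: withhold the email entirely
    if looks_phishy or out_of_pattern:
        return "lock_links"  # suspicious but plausible: neutralize payloads only
    return "deliver"         # includes phishy-looking emails that fit the pattern

print(decide(0.9, 0.9))  # hold
print(decide(0.9, 0.1))  # lock_links
print(decide(0.1, 0.1))  # deliver
```

The middle branch is the key design choice: a legitimate email that merely *looks* phishy stays usable with its risky elements disarmed, rather than being lost to quarantine.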

This form of ‘layered AI’ requires an advanced understanding of mathematics and machine learning that takes years of research and development. With that experience, Cyber AI has proven itself capable of catching the full range of advanced attacks targeting the inbox, from spear phishing and impersonation attempts, to account takeovers and supply chain attacks. Once implemented, it takes only a week for a new organization to begin deriving value, and thousands of customers now rely on Cyber AI to protect both their email realm and wider network.

Plenty more phish in the sea

This will not be the last time this year that a cyber-attack caused by spear phishing makes the headlines. Just this week, it was revealed that Russian-backed cyber-criminals stole sensitive documents on US-UK trade talks after successful spear phishing, and the technique may well have played a part in ongoing vaccine research espionage that surfaced in July.

With the US presidential race heating up, it was recently revealed that fewer than 3 out of 10 election administrators have basic controls to prevent phishing. This attack method may come to not only damage organizations and their reputation, but also to undermine the trust that serves as the bedrock of democracy. Now is the time to start recognizing the very real threat that email attackers represent, and to prepare our defenses accordingly.

Learn more about AI email security



July 3, 2025

Top Eight Threats to SaaS Security and How to Combat Them


The latest on the identity security landscape

Following the mass adoption of remote and hybrid working patterns, more critical data than ever resides in cloud applications – from Salesforce and Google Workspace, to Box, Dropbox, and Microsoft 365.

On average, a single organization uses 130 different Software-as-a-Service (SaaS) applications, and 45% of organizations reported experiencing a cybersecurity incident through a SaaS application in the last year.

As SaaS applications look set to remain an integral part of the digital estate, organizations are being forced to rethink how they protect their users and data in this area.

What is SaaS security?

SaaS security is the protection of cloud applications. It includes securing the apps themselves as well as the user identities that engage with them.

Below are the top eight threats that target SaaS security and user identities.

1. Account Takeover (ATO)

Attackers gain unauthorized access to a user’s SaaS or cloud account by stealing credentials through phishing, brute-force attacks, or credential stuffing. Once inside, they can exfiltrate data, send malicious emails, or escalate privileges to maintain persistent access.
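One ATO precursor, credential stuffing, has a telltale shape: a single source failing logins across many distinct accounts. A hedged sketch of detecting it (the threshold and event format are assumptions for illustration, not a production detector):

```python
# Flag source IPs whose failed logins span many distinct accounts.
from collections import defaultdict

def stuffing_sources(failed_logins, min_accounts=5):
    """failed_logins: iterable of (source_ip, username) failure events.
    Returns IPs that failed against at least min_accounts distinct accounts."""
    targets = defaultdict(set)
    for ip, user in failed_logins:
        targets[ip].add(user)
    return {ip for ip, users in targets.items() if len(users) >= min_accounts}

events = [("198.51.100.9", f"user{i}") for i in range(8)]        # 8 accounts hit
events += [("203.0.113.5", "alice"), ("203.0.113.5", "alice")]   # normal retry
print(stuffing_sources(events))  # {'198.51.100.9'}
```

Counting *distinct accounts* rather than raw failures is what separates stuffing from an ordinary user mistyping their own password.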

2. Privilege escalation

Cybercriminals exploit misconfigurations, weak access controls, or vulnerabilities to increase their access privileges within a SaaS or cloud environment. Gaining admin or superuser rights allows attackers to disable security settings, create new accounts, or move laterally across the organization.

3. Lateral movement

Once inside a network or SaaS platform, attackers move between accounts, applications, and cloud workloads to expand their foothold. Compromised OAuth tokens, session hijacking, or exploited API connections can enable adversaries to escalate access and exfiltrate sensitive data.

4. Multi-Factor Authentication (MFA) bypass and session hijacking

Threat actors bypass MFA through SIM swapping, push bombing, or exploiting session cookies. By stealing an active authentication session, they can access SaaS environments without needing the original credentials or MFA approval.

5. OAuth token abuse

Attackers exploit OAuth authentication mechanisms by stealing or abusing tokens that grant persistent access to SaaS applications. This allows them to maintain access even if the original user resets their password, making detection and mitigation difficult.

6. Insider threats

Malicious or negligent insiders misuse their legitimate access to SaaS applications or cloud platforms to leak data, alter configurations, or assist external attackers. Over-provisioned accounts and poor access control policies make it easier for insiders to exploit SaaS environments.

7. Application Programming Interface (API)-based attacks

SaaS applications rely on APIs for integration and automation, but attackers exploit insecure endpoints, excessive permissions, and unmonitored API calls to gain unauthorized access. API abuse can lead to data exfiltration, privilege escalation, and service disruption.

8. Business Email Compromise (BEC) via SaaS

Adversaries compromise SaaS-based email platforms (e.g., Microsoft 365 and Google Workspace) to send phishing emails, conduct invoice fraud, or steal sensitive communications. BEC attacks often involve financial fraud or data theft by impersonating executives or suppliers.

BEC heavily uses social engineering techniques, tailoring messages for a specific audience and context. And with the growing use of generative AI by threat actors, BEC is becoming even harder to detect. By adding ingenuity and machine speed, generative AI tools give threat actors the ability to create more personalized, targeted, and convincing attacks at scale.

Protecting against these SaaS threats

Traditionally, security leaders relied on tools that were focused on the attack, reliant on threat intelligence, and confined to a single area of the digital estate.

However, these tools have limitations, and often prove inadequate for contemporary situations, environments, and threats. For example, they may lack advanced threat detection, have limited visibility and scope, and struggle to integrate with other tools and infrastructure, especially cloud platforms.

AI-powered SaaS security stays ahead of the threat landscape

New, more effective approaches involve AI-powered defense solutions that understand the digital business, reveal subtle deviations that indicate cyber-threats, and action autonomous, targeted responses.


About the author
Carlos Gray
Senior Product Marketing Manager, Email

July 2, 2025

Pre-CVE Threat Detection: 10 Examples Identifying Malicious Activity Prior to Public Disclosure of a Vulnerability


Vulnerabilities are weaknesses in a system that can be exploited by malicious actors to gain unauthorized access or to disrupt normal operations. Common Vulnerabilities and Exposures (or CVEs) are a list of publicly disclosed cybersecurity vulnerabilities that can be tracked and mitigated by the security community.

When a vulnerability is discovered, the standard practice is to report it to the vendor or the responsible organization, allowing them to develop and distribute a patch or fix before the details are made public. This is known as responsible disclosure.

With a record-breaking 40,000 CVEs reported for 2024, and the Forum of Incident Response and Security Teams (FIRST) predicting an even higher number for 2025 [1], anomaly detection is essential for identifying these potential risks. The gap between exploitation of a zero-day and disclosure of the vulnerability can sometimes be considerable, and retroactively attempting to identify successful exploitation on your network can be challenging, particularly when taking a signature-based approach.

Detecting threats without relying on CVE disclosure

Abnormal behaviors in networks or systems, such as unusual login patterns or data transfers, can indicate attempted cyber-attacks, insider threats, or compromised systems. Since Darktrace does not rely on rules or signatures, it can detect malicious activity that is anomalous even without full context of the specific device or asset in question.
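As one illustration of this kind of behavioral baselining (a minimal sketch, assuming daily outbound volume as the feature and a common 3-sigma cutoff; this is not Darktrace's method), a device can be measured against its own history rather than against any signature:

```python
# Flag a device whose outbound transfer is far outside its own baseline.
import statistics

def is_anomalous(history_mb, today_mb, sigmas=3.0):
    """history_mb: past daily outbound volumes (MB) for one device."""
    mean = statistics.fmean(history_mb)
    stdev = statistics.pstdev(history_mb)
    if stdev == 0:
        return today_mb != mean  # no variance observed: any change is novel
    return abs(today_mb - mean) > sigmas * stdev

baseline = [120, 95, 110, 130, 105, 115, 125]  # a typical week, in MB
print(is_anomalous(baseline, 118))   # False: within the device's norm
print(is_anomalous(baseline, 5000))  # True: possible exfiltration
```

Nothing in this check needs to know which vulnerability was exploited, which is precisely why such detections can precede a CVE's disclosure.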

For example, during the Fortinet exploitation late last year, the Darktrace Threat Research team was investigating exploitation of a different Fortinet vulnerability, CVE-2024-23113, when Mandiant released a security advisory for CVE-2024-47575, which aligned closely with Darktrace’s findings.

Retrospective analysis like this is used by Darktrace’s threat researchers to better understand detections across the threat landscape and to add additional context.

Below are ten examples from the past year where Darktrace detected malicious activity days or even weeks before a vulnerability was publicly disclosed.


Trends in pre-CVE exploitation

Often, the disclosure of an exploited vulnerability follows an incident response investigation into a compromise by an advanced threat actor using a zero-day. Once the vulnerability is registered and publicly disclosed as having been exploited, a race begins between attacker and defender: attack versus patch.

Nation-state actors, highly skilled and well resourced, are known to use a range of capabilities to reach their targets, including zero-days. Pre-CVE activity is often “low and slow,” lasting for months and conducted with high operational security. After CVE disclosure, the barriers to entry drop, allowing less skilled and less resourced attackers, such as some ransomware gangs, to exploit the vulnerability and cause harm. This is why two distinct types of activity are often seen: pre- and post-disclosure of an exploited vulnerability.

Darktrace saw this consistent storyline play out during several of last year’s Fortinet and PAN-OS threat actor campaigns highlighted above, where nation-state actors were seen exploiting vulnerabilities first, followed by ransomware gangs impacting organizations [2].

The same applies to the recent SAP NetWeaver exploitations, tied to a China-based threat actor earlier this spring, with subsequent ransomware incidents observed [3].

Autonomous Response

Anomaly-based detection offers the benefit of identifying malicious activity even before a CVE is disclosed; however, security teams still need to quickly contain and isolate the activity.

For example, during the Ivanti chaining exploitation in the early part of 2025, a customer had Darktrace’s Autonomous Response capability enabled on their network. As a result, Darktrace was able to contain the compromise and shut down any ongoing suspicious connectivity by blocking internal connections and enforcing a “pattern of life” on the affected device.

This pre-CVE detection and response by Darktrace occurred 11 days before any public disclosure, demonstrating the value of an anomaly-based approach.

In some cases, customers have even reported that Darktrace stopped malicious exploitation of devices several days before a public disclosure of a vulnerability.

For example, during the ConnectWise exploitation, a customer informed the team that Darktrace had detected malicious software being installed via remote access. Upon further investigation, four servers were found to be impacted, while Autonomous Response had blocked outbound connections and enforced patterns of life on the impacted devices.

Conclusion

By continuously analyzing behavioral patterns, systems can spot unusual activities and patterns from users, systems, and networks to detect anomalies that could signify a security breach.

Through ongoing monitoring and learning from these behaviors, anomaly-based security systems can detect threats that traditional signature-based solutions might miss, while also providing detailed insights into threat tactics, techniques, and procedures (TTPs). This type of behavioral intelligence supports pre-CVE detection, allows for a more adaptive security posture, and enables systems to evolve with the ever-changing threat landscape.

Credit to Nathaniel Jones (VP, Security & AI Strategy, Field CISO), Emma Fougler (Global Threat Research Operations Lead), Ryan Traill (Analyst Content Lead)

References and further reading:

  1. https://www.first.org/blog/20250607-Vulnerability-Forecast-for-2025
  2. https://cloud.google.com/blog/topics/threat-intelligence/fortimanager-zero-day-exploitation-cve-2024-47575
  3. https://thehackernews.com/2025/05/china-linked-hackers-exploit-sap-and.html

