Off the hook: How AI catches phishing emails even if we take the bait

Dave Palmer, Director of Technology | Friday September 6, 2019

From social media apps to collaborative cloud services, new methods of communication arise almost daily. Yet workplaces around the world still rely on good old-fashioned email, with more than 100 trillion messages sent in 2018 alone. The average office worker receives 121 emails per day and, as most of us can attest, has just a moment to decide whether each one merits a reply. Given this barrage, it is hardly surprising that 90% of malware originates in the inbox, disguised within phishing emails whose senders impersonate trusted colleagues.

Of course, long-time internet users have learned to be wary of messages from foreign princes asking for help transporting their gold. Yet nearly three-quarters of targeted cyber-attacks today involve “spear-phishing” emails: a personalized form of phishing wherein attackers employ online reconnaissance or physical eavesdropping to produce convincing forgeries. Both humans and conventional email security tools have proven ineffective at spotting such subtle threats. One prominent study found that — among 150,000 phishing emails sent for the experiment — almost half of recipients clicked on the emails’ scam link within the first hour.

Detecting spear-phishing campaigns requires a platform approach to cyber defense, as opposed to siloed, email-specific solutions. Powered by unsupervised machine learning, Cyber AI platforms come to understand how individual users work and collaborate across the digital infrastructure, from the email service to the cloud to the on-premises network. This contextualizing knowledge is imperative when looking for the slightest signs of something “phishy,” since activity that is malicious for one user under one circumstance could well be benign in other cases. And crucially — because motivated attackers may still find a way inside an organization’s protective skin — such all-encompassing AI platforms can autonomously respond to minimize the damage, no matter where the infection occurs.

Learning from patient zero

Consider a sophisticated but nevertheless commonplace attack against a global enterprise. The attack begins, unsurprisingly, with a spear-phishing campaign targeting employees across the business. The emails use a phishing tactic called domain spoofing, which involves registering a seemingly legitimate domain that resembles the sender address of a familiar contact. More often than not, the attacker will seek to impersonate a high-level executive and make an urgent request — hoping the employee will comply before spotting the forged domain.
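One way this kind of lookalike domain can be caught mechanically is by measuring how close a sender's domain is to a known contact's domain: an exact match is legitimate, but a near-miss is a red flag. The sketch below illustrates the idea with a plain Levenshtein edit distance; the trusted-domain list, threshold, and domain names are hypothetical examples, not any product's actual detection logic.

```python
# Illustrative sketch: flag sender domains that closely resemble, but do not
# exactly match, a list of trusted contact domains (domain-spoofing lookalikes).
# The trusted domains and the distance threshold are hypothetical.

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming (two-row variant)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,        # deletion
                            curr[j - 1] + 1,    # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def looks_spoofed(sender_domain: str, trusted: list[str], max_dist: int = 2) -> bool:
    """True if the domain is a near-miss of a trusted domain (but not exact)."""
    return any(
        0 < edit_distance(sender_domain.lower(), t.lower()) <= max_dist
        for t in trusted
    )

trusted_domains = ["example-corp.com"]  # hypothetical contact list
print(looks_spoofed("examp1e-corp.com", trusted_domains))  # lookalike: True
print(looks_spoofed("example-corp.com", trusted_domains))  # exact match: False
```

Real mail filters typically combine this with homoglyph normalization (e.g., treating "1" and "l" as equivalent) and checks on domain registration age, but the edit-distance core captures why "examp1e-corp.com" is suspicious while the genuine domain is not.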

In this instance, the attackers, who have spied on the company’s CEO via her tweets, have emulated her writing style in order to trick recipients into opening the emails’ attachment. Because the spoofed domain does not appear on the IP blacklists used by the company’s native email controls, the messages make their way into the inboxes of more than 200 employees, ready to infect the firm with a fast-acting strain of ransomware after a single click. To make matters worse, the multinational firm has offices on four continents, so when “patient zero”, a saleswoman in London, gets to the email first, the firm’s US-based security team is still asleep halfway around the world.

The company’s Cyber AI platform, meanwhile, analyzes the emails and correlates their attributes with each employee’s typical online behavior, leveraging its knowledge of the entire digital infrastructure. This analysis reveals the emails to be suspicious, and although the AI does not yet intervene, it primes its Autonomous Response capability to take immediate action. Back in London, patient zero skims the email and inadvertently downloads its ransomware payload, which begins to move laterally, identify file shares, and encrypt company documents at machine speed. For most organizations, it’s already too late.
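The idea of correlating an email's attributes with a recipient's typical behavior can be sketched very simply: build a per-user profile from past activity, then score each incoming message by how many of its attributes deviate from that profile. The features, weights, and threshold below are purely illustrative assumptions for the sketch, not any vendor's real model, which would learn these signals rather than hard-code them.

```python
# Hedged sketch of behavioral email scoring: compare an incoming email's
# attributes against a per-recipient profile of past activity. Features,
# weights, and field names are illustrative assumptions only.

from dataclasses import dataclass, field

@dataclass
class UserProfile:
    known_senders: set = field(default_factory=set)           # domains seen before
    common_attachment_types: set = field(default_factory=set)  # e.g. {"pdf"}

def anomaly_score(email: dict, profile: UserProfile) -> float:
    """Sum simple indicator features into a rough anomaly score in [0, 1]."""
    score = 0.0
    if email["sender_domain"] not in profile.known_senders:
        score += 0.4   # recipient has never corresponded with this domain
    if (email.get("attachment_type")
            and email["attachment_type"] not in profile.common_attachment_types):
        score += 0.4   # unusual attachment type for this recipient
    if email.get("urgent_language"):
        score += 0.2   # urgency cues are a common phishing tell
    return score

profile = UserProfile(known_senders={"example-corp.com"},
                      common_attachment_types={"pdf"})
email = {"sender_domain": "examp1e-corp.com",
         "attachment_type": "exe",
         "urgent_language": True}
print(anomaly_score(email, profile))  # high score: treat as suspicious
```

The point of the sketch is the context-dependence the article describes: the same attachment type that is anomalous for this recipient would score zero for a colleague who exchanges executables every day.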

But within seconds, the Cyber AI platform flags the unusual nature of the ransomware’s activity and, given the urgency of the threat, determines that an autonomous response is necessary. It surgically neutralizes just the anomalous lateral movement and encryption, restricting infected devices to their normal behavior. However, the platform doesn’t stop there. After performing a root cause analysis, the AI traces the attack to the phishing email — information that prompts it to sanitize the other emails in the campaign before they deceive additional victims. The saleswoman continues working, unaware that the AI is also hard at work behind the scenes, saving the company from a major compromise.

AI attacks the inbox

It isn’t just defenders who have artificial intelligence at their disposal. As I discussed in a 2017 post, AI promises to supercharge spear-phishing by rendering these emails more realistic and far more scalable, automating what is, for human attackers, quite a labor-intensive process. One notable experiment in 2016 found that an AI-powered toolkit, which studied the social media behavior of its targets in order to send them personalized spear-phishing tweets, put a human attacker to shame by luring 275 victims into its trap in a mere two hours. The human, over the same duration, made only 129 attempts.

Compared to large-scale, standard phishing campaigns with compromise rates of 5–14%, such automated spear-phishing has been found to succeed between 30% and 66% of the time, and the underlying AI technology continues to improve at an exponential pace. There is no silver bullet for countering this next wave of AI attacks, no matter how robust perimeter-oriented protections become. Rather, we must employ our own AI platforms to secure our digital assets from the inside out. By uniting email security with enterprise security in this way, we can autonomously fight back against phishing attacks, even those we fall for hook, line, and sinker.

To find out how Cyber AI intelligently detects and autonomously neutralizes phishing emails, check out our data sheet: Darktrace Antigena: Cyber AI for Email.

Dave Palmer

Dave is the Director of Technology at Darktrace, overseeing the mathematics and engineering teams and project strategies. With over 19 years of experience at the forefront of government intelligence operations, Dave has worked across the UK intelligence agencies GCHQ and MI5, where he was responsible for delivering mission-critical infrastructure services, including replacing and securing entire global networks, developing operational internet capabilities, and managing critical disaster-recovery incidents. He advises cyber security start-ups and growth-stage companies from the UK Government’s Cyber Security Accelerator and CyLon, and his insights on AI and the future of cyber security are regularly featured in the UK media. He holds a first-class degree in Computer Science and Software Engineering from the University of Birmingham.