How state-sponsored attackers took colleges to school

Max Heinemeyer, Director of Threat Hunting

Securing higher education institutions poses a unique challenge compared to just about any other type of organization, from governments to nonprofits to private companies. Higher education institutions typically prioritize an open-access IT environment, which must balance an interest in facilitating the free exchange of ideas with the imperative of robust cyber defenses to protect against intellectual property theft. Indeed, the marked rise in cyber-attacks over the past three years has not been exclusive to the private sector. Universities are, in fact, a high-value target for any cyber-criminal, as their networks hold a wealth of sensitive information, ranging from student financial details to social security numbers to cutting-edge research. And crucially, the nature of this research makes universities a prime target for advanced, state-sponsored attackers, as this information can be highly valuable to foreign governments.

This week’s news of attacks by Chinese hackers on more than two dozen universities — including MIT, the University of Hawaii, and the University of Washington — demonstrates just how serious such attacks can be. The group allegedly responsible for these attacks, Mudcarp, attempted to steal highly sensitive military research on submarine missiles using targeted phishing emails. Ostensibly written by trusted colleagues at partner universities and research institutions, these emails delivered malicious payloads by exploiting macros in Microsoft Word and Excel documents, thereby gaining access to sensitive collegiate networks. Given the value of the technology in question, experts have speculated that these attacks were state-sponsored, part of China’s efforts to advance its naval capabilities.
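One common mail-gateway defense against this delivery technique is to flag attachments in formats that can carry VBA macros before they ever reach an inbox. The sketch below is a minimal, hypothetical filter (the function name, verdict strings, and extension lists are illustrative assumptions, not any particular product's policy); note that legacy binary formats like .doc and .xls can embed macros without a telltale "m" suffix, so they warrant deeper scanning rather than a simple allow.

```python
import email
from email import policy

# Modern macro-enabled Office formats announce themselves in the extension.
MACRO_CAPABLE = {".docm", ".xlsm", ".pptm", ".dotm", ".xltm"}
# Legacy binary formats can hide VBA macros, so flag them for inspection.
LEGACY_OFFICE = {".doc", ".xls", ".ppt"}

def flag_attachments(raw_message: bytes) -> list:
    """Return (filename, verdict) pairs for each attachment in an email."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    findings = []
    for part in msg.iter_attachments():
        name = part.get_filename() or ""
        ext = "." + name.rsplit(".", 1)[-1].lower() if "." in name else ""
        if ext in MACRO_CAPABLE:
            findings.append((name, "block: macro-enabled format"))
        elif ext in LEGACY_OFFICE:
            findings.append((name, "quarantine: legacy format, scan for VBA"))
        else:
            findings.append((name, "allow"))
    return findings
```

A filter like this is only a first pass: a determined attacker can rename files or use container formats, which is why extension checks are usually paired with content inspection.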

A defensive nightmare

Despite containing lucrative IP and vast quantities of personal information, university networks are among the most difficult to secure. For one, the high number of both students and staff connecting to the network each day means universities must deploy hundreds, if not thousands, of access points, and in contrast to private businesses, it is nearly impossible to guarantee that these access points are tightly secured. This reality makes gaining an initial foothold in the network — one of the most difficult and time-consuming stages of the attack life cycle — far easier. Moreover, as higher education institutions often run high-traffic networks, they must deploy a decentralized system, with different faculties responsible for the security of their specific portion of the network. While enforcing a uniform set of security policies is common practice in the private sector, it proves difficult in a university environment.

To make matters worse, an increasing number of students are now connecting multiple personal (BYOD) devices to the network, and as a consequence, higher education institutions typically have a far greater attack surface than private businesses. At the same time, the continuous stream of students on campus increases the difficulty of distinguishing between genuine security threats and benign — albeit undesirable — activity, such as video torrenting. This open-access culture also shapes users’ attitude toward risk: students are less likely to feel responsible for their network activity than employees of a private business. In other words, users of higher education networks are more likely to click on suspicious links or email attachments, while the high volume of emails sent among students and staff using institutional addresses makes universities an ideal target for phishing campaigns. Social engineering methods — such as delivering malware via illegitimate Facebook and Twitter accounts — are especially effective at universities, given these services’ near-ubiquitous use among students.

Finally, the widespread integration of poorly secured Internet of Things (IoT) devices within universities opens even more avenues into the network. For instance, in 2017, attackers used readily available brute-forcing tools to exploit default passwords on more than 5,000 IoT devices at an undisclosed U.S. university, conscripting these devices into a botnet that attacked the university’s own network. Incidents such as this not only wreak havoc on daily activity but also inflict lasting harm on a school’s reputation.
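The flip side of that attack is that defenders can run the same check themselves: audit the fleet for factory-default credentials before an attacker does. The sketch below is a minimal, hypothetical audit loop — `try_login` is an assumed callable (in practice it would wrap an HTTP or Telnet login attempt against each device) injected so the logic stays self-contained, and the credential list is a tiny illustrative sample.

```python
# A short sample of factory-default credential pairs; real audit
# tools ship with lists of thousands, often keyed by device vendor.
DEFAULT_CREDS = [
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "root"),
    ("user", "1234"),
]

def audit_device(host, try_login, creds=DEFAULT_CREDS):
    """Return the first default credential pair that works, or None."""
    for username, password in creds:
        if try_login(host, username, password):
            return (username, password)
    return None

def audit_fleet(hosts, try_login):
    """Map each host to its weak credential pair (None if clean)."""
    return {host: audit_device(host, try_login) for host in hosts}
```

Any host that comes back with a non-None result is exactly the kind of device the 2017 attackers enlisted, and should have its password rotated or be segmented off the main network.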

Turning the tide with AI

Rather than focusing on building perimeter walls around campus networks, security teams should instead concentrate on tracking and monitoring network devices, ensuring that they are immediately alerted whenever an incident occurs. Indeed, with such an expansive attack surface to safeguard, and so many poorly secured IoT and BYOD devices online at all times, attackers will inevitably breach network perimeters. The key, therefore, is to attain the ability to see inside the network, and ultimately, to neutralize attacks that have already infiltrated.

Unfortunately, this internal network visibility is where the traditional security tools employed by most universities are most limited. By searching only for known threats at the perimeter using fixed rules and signatures, conventional tools alone are likely to miss the next novel attack on the world’s universities — making it all the more imperative that these institutions learn their lesson before it’s too late. By contrast, AI security systems learn to differentiate between normal and abnormal behavior for each user, device, and network, enabling them to autonomously detect and respond to the subtle anomalies that indicate an in-progress cyber-attack.
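The core idea — learn a per-device baseline, then flag deviations rather than match signatures — can be illustrated with a deliberately simple statistical toy. The class below (a hypothetical sketch, not a description of any vendor's actual algorithm) tracks one traffic metric per device, such as bytes sent per hour, and flags observations more than a few standard deviations from that device's own history.

```python
import statistics

class DeviceBaseline:
    """Toy per-device anomaly detector: learn the normal range of one
    traffic metric, then flag values far outside it by z-score."""

    def __init__(self, threshold=3.0):
        self.threshold = threshold  # z-score cut-off for "anomalous"
        self.history = []

    def observe(self, value):
        """Record a normal observation into the baseline."""
        self.history.append(value)

    def is_anomalous(self, value):
        """True if value deviates from the learned baseline."""
        if len(self.history) < 2:
            return False  # too little data to judge
        mean = statistics.fmean(self.history)
        stdev = statistics.stdev(self.history)
        if stdev == 0:
            return value != mean
        return abs(value - mean) / stdev > self.threshold
```

A device that normally sends around 100 MB per hour would sail past a signature-based tool while exfiltrating data, but a sudden 1,000 MB hour stands out immediately against its own baseline — which is the detection principle, however much more sophisticated the models, that the passage above describes.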

The primary goal of a university network is to provide a highly accessible learning environment on the most secure platform possible. Universities should embrace cyber AI to protect valuable research and IP without compromising the interconnectivity that we’ve come to expect on campus.

Max Heinemeyer

Max is a cyber security expert with over a decade of experience in the field, specializing in areas such as penetration testing, red-teaming, SIEM and SOC consulting, and hunting Advanced Persistent Threat (APT) groups. At Darktrace, Max oversees global threat hunting efforts, working with strategic customers to investigate and respond to cyber-threats. He works closely with the R&D team at Darktrace’s Cambridge, UK headquarters, leading research into new AI innovations and their various defensive and offensive applications. Max’s insights are regularly featured in international media outlets such as the BBC, Forbes, and WIRED. While living in Germany, he was an active member of the Chaos Computer Club. Max holds an MSc from the University of Duisburg-Essen and a BSc in International Business Information Systems from the Cooperative State University Stuttgart.