A security analyst’s view: Detecting and investigating lateral movement with Darktrace

Tyler Fornes, Senior Security Analyst at Expel (Guest Contributor) | Tuesday March 12, 2019

The following guest-authored blog post examines an advanced cyber-threat discovered by Darktrace on a customer’s network.

At Expel — a managed security provider — our analysts get to use a lot of really cool technologies every day, including Darktrace. Given its popularity among our customers, we thought it would be useful to demonstrate how Darktrace helps us identify and triage potential security threats.

Here’s an example of how our team investigated a remote file copy over SMB.

Investigating a Darktrace alert

Take a look at this alert. It was triggered by a breach of the pre-packaged model Device / AT Service Scheduled Task.

To triage this specific alert, we need to answer the following questions:

  1. What were the triggers that caused the model to alert?
  2. Which host was the Scheduled Task created on?
  3. Were any files transferred?
  4. Is this activity commonly seen between these hosts?

By answering these questions, we can determine whether or not this alert is related to malicious activity. First, we need to gather additional evidence using the Darktrace console.

At this point, we know the model breach Device / AT Service Scheduled Task was triggered. But what does that mean? Let’s view the model and explore the logic.

Looking at the logic behind this model, we see that any message containing the strings “atsvc” and “IPC$” will match it. And because the frequency has been set to “> 0 in 60 mins,” we can assume that a single occurrence of this activity is enough to trigger an alert. Understanding this logic tells us exactly what conditions caused the model to fire.
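To make the model logic concrete, here is a minimal sketch of how a rule like this evaluates traffic. This is not Darktrace's implementation; the function names, the message strings, and the way the 60-minute window is represented are all assumptions for illustration.

```python
# Illustrative sketch (not Darktrace's actual engine): a model that fires
# when any message in a 60-minute window contains both keywords.

def matches_model(message: str) -> bool:
    """Match any message containing both 'atsvc' and 'IPC$'."""
    return "atsvc" in message and "IPC$" in message

def should_alert(messages, threshold=0):
    """Frequency '> 0 in 60 mins': alert once the hit count exceeds the
    threshold; `messages` is assumed to span a 60-minute window."""
    hits = sum(1 for m in messages if matches_model(m))
    return hits > threshold

window = [
    "DCE-RPC bind success",
    r"SMB write success \\HOST\IPC$ atsvc",  # this one matches the model
]
print(should_alert(window))  # True: a single matching message is enough
```

The "> 0" threshold is what makes this a high-signal model: there is no tolerance for repeated occurrences before alerting.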

Next, let’s grab some data. We opened the Model Breach Event Log to see the related events observed for this model breach. There was a successful DCE-RPC bind, followed by SMB Write/Read success containing the keywords “atsvc” and “IPC$.”

We turned to the “View advanced search for this event” feature of the Model Breach Event Log for even more information.

The advanced search results for this model breach revealed two distinct messages. There’s a successful NTLM authentication message for the account “appadmin.” Since NTLM is commonly used with SMB for authentication, this is likely the account being used by the source machine to establish the SMB session.

Immediately after this authentication, we see the following DCE-RPC message for a named pipe being created involving atsvc:

We see that the RPC bind was created referencing the SASec interface. Based on a quick online search, we learned that the SASec interface “only includes methods for manipulating account information, because most SASec-created task configuration is stored in the file system using the .JOB file format.”

One possible explanation for this connection is that it was made to query information about a scheduled task defined within the .JOB format, rather than a new scheduled task being created on the host. However, Darktrace doesn’t show any messages mentioning a file with the extension “.JOB” within this model breach. So we kept digging for answers.

By querying “*.JOB AND SMB” within the timeframe of the activity we’ve already observed, some promising results appeared:

We observed three unique .JOB files being accessed over SMB during the exact time of our previous observations. Considering the hosts and the timeframe, we correlated this activity to the original model breach.

At this point, the evidence answered our first three investigative questions; only the last remained.

To answer the last investigative question, we used the query “AV.job AND SMB” over the past 60 days. This query returned daily entries for identical activity dating back several months. The activity occurred around the same time each day, involving the same hosts and file paths.
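A rough way to confirm that activity like this recurs on a schedule is to bucket the event timestamps by hour of day; a tight cluster suggests a scheduled task rather than interactive, attacker-driven activity. The timestamps and log format below are invented for illustration.

```python
from collections import Counter
from datetime import datetime

# Hypothetical search results: timestamps of 'AV.job AND SMB' hits
# across several days (values invented for demonstration).
hits = [
    "2019-03-01 06:02:11",
    "2019-03-02 06:01:58",
    "2019-03-03 06:02:40",
    "2019-03-04 06:03:05",
]

# Bucket by hour of day: recurring activity lands in the same bucket.
hours = Counter(datetime.strptime(t, "%Y-%m-%d %H:%M:%S").hour for t in hits)
print(hours.most_common(1))  # [(6, 4)]: every hit falls in the 06:00 hour
```

Daily hits at the same hour, between the same hosts, on the same file path is exactly the pattern that pointed us toward legitimate scheduled activity here.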

This was starting to smell like legitimate activity, but we still wanted to analyze the contents of the requested file AV.job. We created a packet capture for a five-minute window around the timeframe of the activity, filtered on the source IP address observed in the model breach.

Once we collected the PCAP, we downloaded and analyzed it in Wireshark, and then extracted the transferred files using the Export Objects feature.
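The same extraction can be scripted outside the Wireshark GUI with tshark, Wireshark's command-line counterpart, which mirrors the File > Export Objects > SMB feature. The capture and output directory names here are assumptions:

```shell
# Extract files transferred over SMB from a capture (illustrative filenames).
mkdir -p exported_objects
tshark -r lateral_movement.pcap --export-objects smb,exported_objects

# If the transfer was captured, the scheduled-task file should appear here.
ls exported_objects
```

Scripting the extraction is handy when the same hunt needs to be repeated across many captures.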

The contents of this file refer to an executable in the location C:\Program Files\Sophos\Sophos Anti-Virus\BackgroundScanClient.exe. Judging by the name of the .JOB file in which it was found, this was likely a legitimate scheduled task created to run an antivirus scan on the endpoint each morning.

Reviewing our original analysis questions, we could now confidently answer all four and close the alert out as benign, scheduled antivirus activity.

Darktrace’s cyber defense platform allowed our analysts to quickly confirm and scope potential threat activity and identify network-based indicators (NBIs) related to an attack. It can also generate additional host-based indicators (HBIs) to supplement an investigation. In summary, Darktrace AI enables our Expel analysts to quickly and efficiently scope an incident or hunt for threats across the entire organization, without the need for exhaustive data collection and offline parsing by an analyst.

How state-sponsored attackers took colleges to school

Max Heinemeyer, Director of Threat Hunting | Friday March 8, 2019

Securing higher education institutions poses a unique challenge compared to just about any other type of organization, from governments to nonprofits to private companies. Higher education institutions typically prioritize an open-access IT environment, which must balance an interest in facilitating the free exchange of ideas with the imperative of robust cyber defenses to protect against intellectual property theft. Indeed, the marked rise in cyber-attacks over the past three years has not been exclusive to the private sector. Universities are, in fact, a high-value target for any cyber-criminal, as their networks hold a wealth of sensitive information, ranging from student financial details to social security numbers to cutting-edge research. And crucially, the nature of this research makes universities a prime target for advanced, state-sponsored attackers, as this information can be highly valuable to foreign governments.

This week’s news of attacks by Chinese hackers on more than two dozen universities, including MIT, the University of Hawaii, and the University of Washington, demonstrates just how serious such attacks can be. The group allegedly responsible, known as Mudcarp, attempted to steal highly sensitive military research on submarine missile technology using targeted phishing emails. Ostensibly written by trusted colleagues at partner universities and research institutions, these emails delivered malicious payloads by exploiting macros in Microsoft Word and Excel documents, thereby gaining access to sensitive collegiate networks. Given the value of the technology in question, experts have speculated that these attacks were likely state-sponsored, part of China’s efforts to advance its naval operations.

A defensive nightmare

Despite containing lucrative IP and vast quantities of personal information, university networks are among the most difficult to secure. For one, the high number of students and staff connecting each day means universities must deploy hundreds, if not thousands, of access points, and in contrast to private businesses, it is nearly impossible to guarantee that every access point is tightly secured. This makes gaining an initial foothold in the network, one of the most difficult and time-consuming stages of the attack life cycle, far easier. Moreover, because higher education institutions run such high-traffic networks, they tend to adopt a decentralized security model, with individual faculties responsible for their own portion of the network. While deploying a uniform set of security policies is common practice in the private sector, it proves difficult in a university environment.

To make matters worse, an increasing number of students are now connecting multiple BYOD devices to the network, and as a consequence, higher education institutions typically have a far greater attack surface than private businesses. And at the same time, the continuous stream of students on campus also increases the difficulty of distinguishing between genuine security threats and benign — albeit undesirable — activity, such as video torrenting. This open-access culture also negatively impacts users’ attitude towards risk, with students less likely to feel responsible for their network activity compared to employees of a private business. In other words, users of higher education networks are more likely to click on suspicious links or mail attachments, while the high volume of emails sent amongst students and staff using institutional addresses makes universities an ideal target for phishing campaigns. Social engineering methods — such as delivering malware via illegitimate Facebook and Twitter accounts — are especially effective at universities, due to these services’ more or less ubiquitous use among students.

Finally, the widespread integration of poorly secured Internet of Things (IoT) devices within universities opens even more avenues into the network. For instance, in 2017, attackers used readily available brute-forcing tools to exploit default passwords on more than 5,000 IoT devices at an undisclosed U.S. university, conscripting these devices into a botnet that attacked the university’s own network. Incidents such as this not only wreak havoc on daily activity but also inflict lasting harm on a school’s reputation.

Turning the tide with AI

Rather than focusing on building perimeter walls around campus networks, security teams should instead concentrate on tracking and monitoring network devices, ensuring that they are immediately alerted whenever an incident occurs. Indeed, with such an expansive attack surface to safeguard, and so many poorly secured IoT and BYOD devices online at all times, attackers will inevitably breach network perimeters. The key, therefore, is to attain the ability to see inside the network, and ultimately, to neutralize attacks that have already infiltrated.

Unfortunately, this internal network visibility is where the traditional security tools employed by most universities are most limited. By searching only for known threats at the perimeter using fixed rules and signatures, conventional tools alone are likely to miss the next novel attack on the world’s universities — making it all the more imperative that these institutions learn their lesson before it’s too late. By contrast, AI security systems learn to differentiate between normal and abnormal behavior for each user, device, and network, enabling them to autonomously detect and respond to the subtle anomalies that indicate an in-progress cyber-attack.
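The behavior-based approach described above can be illustrated with a toy baseline model. This is not Darktrace's algorithm; the traffic figures, the single metric, and the z-score threshold are all invented for demonstration of the general idea.

```python
import statistics

# Toy illustration of anomaly-based detection: learn a per-device baseline
# of daily outbound data volume, then flag days that deviate sharply.
# All numbers are invented.
baseline_mb = [120, 130, 115, 125, 118, 122, 128]  # a week of normal traffic

mean = statistics.mean(baseline_mb)
stdev = statistics.stdev(baseline_mb)

def is_anomalous(observed_mb, z_threshold=3.0):
    """Flag an observation more than z_threshold deviations from baseline."""
    return abs(observed_mb - mean) / stdev > z_threshold

print(is_anomalous(124))   # an ordinary day: False
print(is_anomalous(900))   # a sudden exfiltration-sized spike: True
```

The point of learning "normal" per device is that no signature of the attack is needed: the spike is suspicious simply because it is unlike that device's past behavior.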

The primary goal of a university network is to provide a highly accessible learning environment on the most secure platform possible. Universities should embrace cyber AI to protect valuable research and IP without impacting the interconnectivity that we’ve come to expect on campus.

Max Heinemeyer

Max is a cyber security expert with over nine years’ experience in the field, specializing in network monitoring and offensive security. At Darktrace, Max works with strategic customers to help them investigate and respond to threats, and he oversees the cyber security analyst team at the Cambridge, UK headquarters. Prior to his current role, Max led the Threat and Vulnerability Management department for Hewlett-Packard in Central Europe, where he worked as a white hat hacker, leading penetration tests and red team engagements. He was also a member of the Chaos Computer Club while living in Germany. Max holds an MSc from the University of Duisburg-Essen and a BSc in International Business Information Systems from the Cooperative State University Stuttgart.