Darktrace Cyber AI Research Centre
“Research unlocks the unknowns; it also helps shed light on what we are collectively up against.” - Jack Stockdale OBE, CTO


The Darktrace AI Research Centre is foundational to our continued innovation.
Teams of mathematicians and other experts examine how AI can be applied to real-world problems.
Their aim is to discover new paths forward to augment human capabilities.
With more than 200 granted and pending patents, the AI Research Centre comprises more than 200 R&D employees, including experts holding around 100 master's degrees and 20 doctorates in disciplines ranging from astrophysics to linguistics to data science.

Founded by mathematicians and cyber defense experts in 2013, Darktrace is a global leader in cybersecurity AI, delivering complete AI-powered solutions in its mission to free the world of cyber disruption. We protect more than 9,000 customers from the world’s most complex threats, including ransomware, cloud, and SaaS attacks.
Our roots lie deep in innovation. The Darktrace AI Research Centre, based in Cambridge, UK, has conducted research that sets new benchmarks in cybersecurity, with technology innovations backed by over 125 patents and pending applications. The company's European R&D centre is located in The Hague, Netherlands.
Rather than being confined to product roadmaps, researchers are free to experiment and explore, guided by the creative insight that arises from pure research. Yet we have never shied away from the real challenges facing the industry, and the AI Research Centre operates with a common goal of solving these challenges with AI and mathematics.
The AI Research Centre has produced numerous award-winning breakthroughs, which have gone on to form the AI capabilities that today power the products in the Darktrace ActiveAI Security Platform. Below is a selection of research abstracts stemming from these research initiatives.
Featured research
DEMIST-2 is Darktrace's latest language model tailored for the cybersecurity domain. With just 95 million parameters, it delivers powerful text analysis for threat detection and classification, while remaining lightweight enough for local deployment. Trained on both natural language and security-specific data, DEMIST-2 excels in embedded environments, using custom LoRA adapters to specialize in diverse tasks with minimal memory and compute overhead. This report details its architecture, training process, and performance evaluations. It also highlights DEMIST-2's improvements over earlier models and its role in enhancing the accuracy and speed of cybersecurity operations within Darktrace's product ecosystem.
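DEMIST-2 itself is not publicly available, but the adapter pattern the abstract describes can be illustrated with open-source tooling. The sketch below uses Hugging Face Transformers and PEFT to attach a LoRA adapter to a small classification model; the base model name, adapter settings, and three-way label set are illustrative assumptions, not DEMIST-2's actual configuration.

```python
# Minimal sketch of the LoRA-adapter pattern described above: a compact base
# model stays frozen, and a small low-rank adapter is trained per task so that
# specialization adds little memory or compute overhead.
# Assumption: "distilbert-base-uncased" is a stand-in base model, not DEMIST-2.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE_MODEL = "distilbert-base-uncased"  # placeholder model

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForSequenceClassification.from_pretrained(BASE_MODEL, num_labels=3)

# Low-rank adapter: only the small A/B matrices are trained; base weights stay frozen.
lora_cfg = LoraConfig(
    r=8,                                 # rank of the low-rank update
    lora_alpha=16,                       # scaling factor
    lora_dropout=0.05,
    target_modules=["q_lin", "v_lin"],   # attention projections in DistilBERT
    task_type="SEQ_CLS",
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()       # typically well under 1% of the base parameters

# Inference with the adapted model on a single text snippet.
inputs = tokenizer("suspicious powershell download detected on host-42", return_tensors="pt")
scores = model(**inputs).logits.softmax(dim=-1)
```

Swapping adapters per task, rather than fine-tuning the whole model, is what keeps the memory and compute overhead low in this style of deployment.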
DIGEST is Darktrace’s latest machine learning model designed to assess the severity of cybersecurity incidents using graph and recurrent neural networks. Built on real-world incident data, DIGEST analyzes dynamic graphs formed by interactions between users, devices, and resources. By assigning severity scores, it enables security teams to prioritize investigations and respond faster to critical threats. This brief outlines the model’s development—from dataset creation and training to deployment—highlighting how DIGEST enhances Darktrace’s Cyber AI Analyst capabilities and improves incident triage through advanced, automated threat evaluation.
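The general pattern the DIGEST abstract describes, encoding snapshots of a dynamic graph with a graph neural network and scoring the resulting sequence with a recurrent network, can be sketched with PyTorch and PyTorch Geometric. The architecture, dimensions, and toy data below are assumptions for illustration, not DIGEST's actual design or weights.

```python
# Illustrative sketch: encode each snapshot of a user/device/resource graph with
# a GNN, feed the sequence of snapshot embeddings through a GRU, and map the
# final state to a severity score in [0, 1]. Not Darktrace's implementation.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, global_mean_pool
from torch_geometric.data import Data


class SeverityScorer(nn.Module):
    def __init__(self, in_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, hidden_dim)
        self.gru = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def encode_snapshot(self, data: Data) -> torch.Tensor:
        # Two rounds of message passing, then mean-pool to one vector per graph.
        h = self.conv1(data.x, data.edge_index).relu()
        h = self.conv2(h, data.edge_index).relu()
        batch = torch.zeros(h.size(0), dtype=torch.long)  # single-graph batch
        return global_mean_pool(h, batch)

    def forward(self, snapshots: list[Data]) -> torch.Tensor:
        # Stack per-snapshot embeddings into a (1, T, hidden_dim) sequence.
        seq = torch.stack([self.encode_snapshot(s) for s in snapshots], dim=1)
        _, last = self.gru(seq)
        return torch.sigmoid(self.head(last[-1]))  # severity score in [0, 1]


# Toy usage: three snapshots of a 4-node graph with 8-dimensional node features.
snapshots = [
    Data(x=torch.randn(4, 8), edge_index=torch.tensor([[0, 1, 2], [1, 2, 3]]))
    for _ in range(3)
]
score = SeverityScorer(in_dim=8)(snapshots)
print(f"incident severity: {score.item():.2f}")
```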
Read our second Darktrace Discourse paper, authored by John C. Allen, MBA, CRISC, VP of Cyber Risk & Compliance at Darktrace. The paper reviews the current state of incident recovery and sets out seven areas of innovation to improve recovery from cyber incidents, built around a core capability: Real-Time Adaptive Incident Response.
This paper outlines Darktrace's Attack Path Modeling (APM) capabilities, exploring how real-time, automated, dual-aspect, multi-data-source, end-to-end APM can give blue teams a comprehensive view of realistic, risk-prioritized attack paths, so that resources can be allocated where they best defend key assets.
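Darktrace's APM implementation is not shown here, but the core idea of risk-prioritized attack paths can be sketched with a standard graph library. In the illustrative example below, each edge carries an assumed probability of successful compromise, and the most likely end-to-end path is found by minimizing the sum of negative log-probabilities; the asset names and probabilities are invented for the example.

```python
# Illustrative sketch of risk-prioritized attack path computation, not
# Darktrace's APM implementation. Each edge is an assumed lateral-movement or
# exploitation step with an estimated probability of success; the most likely
# path from an entry point to a key asset minimizes the sum of -log(p).
import math
import networkx as nx

G = nx.DiGraph()
edges = [
    # (source, target, probability of successful compromise) -- invented values
    ("internet", "vpn-gateway", 0.30),
    ("internet", "mail-server", 0.60),
    ("mail-server", "workstation", 0.50),
    ("vpn-gateway", "workstation", 0.40),
    ("workstation", "domain-controller", 0.20),
    ("workstation", "file-server", 0.70),
    ("domain-controller", "crown-jewel-db", 0.80),
    ("file-server", "crown-jewel-db", 0.10),
]
for src, dst, p in edges:
    G.add_edge(src, dst, prob=p, cost=-math.log(p))

# Dijkstra over -log(p) gives the highest-probability path to the key asset.
path = nx.shortest_path(G, "internet", "crown-jewel-db", weight="cost")
cost = nx.shortest_path_length(G, "internet", "crown-jewel-db", weight="cost")
print(" -> ".join(path), f"(estimated likelihood {math.exp(-cost):.2f})")
```

Ranking such paths by likelihood, and by the value of the assets they reach, is one simple way to decide where defensive resources matter most.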
Read our Darktrace Discourse paper, authored by one of our researchers based at the Centre. The author has seven years' experience automating complex cyber-centric processes, with a specialization in the offensive domain, and holds a PhD in Astrophysics from the University of Cambridge.

To close the gaps left by traditional adversarial assessments, the security industry must rethink how those assessments are conducted. Automating certain aspects of red teaming could help organizations reduce costs and scale their efforts without compromising effectiveness. However, automation is no simple task: cyber adversaries continuously evolve, requiring human expertise to research and develop new attack simulations. This paper explores potential approaches to automating adversarial testing while addressing the psychological barriers that prevent broader adoption. By embracing the mindset of an attacker, organizations can strengthen their defenses and proactively mitigate risks before real-world threats strike. Download the full paper to learn how automation and strategic adversarial thinking can improve your organization's security posture.