Why higher education is a different cybersecurity battlefield
After four decades in IT, now serving as both CIO and CISO, I’ve learned one simple truth: cybersecurity is never “done.” It’s a constant game of cat and mouse. Criminals evolve. Technologies advance. Regulations expand. But in higher education, the challenge is uniquely complex.
Unlike a bank or a military installation, we can’t lock down networks to a narrow set of approved applications. Higher education environments are open by design. Students collaborate globally, faculty conduct cutting-edge research, and administrators manage critical operations, all of which require seamless access to the internet, global networks, cloud platforms, and connected systems.
Combine that openness with expanding regulatory mandates and tight budgets, and the balancing act becomes clear.
Threat actors don’t operate under the same constraints. Often well-funded and sponsored by nation-states with significant resources, they’re increasingly organized, strategic, and innovative.
That sophistication shows up in the tactics we face every day, from social engineering and ransomware to AI-driven impersonation attacks. We’re dealing with massive volumes of data, countless signals, and a very small window between detection and damage.
No human team, no matter how talented or how numerous, can manually sift through that noise at the speed required.
Discovering a force multiplier
Nothing in cybersecurity is 100% foolproof. I never “set it and forget it.” But for institutions balancing rising threats and finite resources, the Darktrace ActiveAI Security Platform™ offers something incredibly valuable: peace of mind through speed and scale.
It closes the gap between detection and response in a way humans can’t possibly match. At machine speed, it can quarantine, investigate, and contain anomalous activity.
I’ve purchased and deployed Darktrace three separate times at three different institutions because I’ve seen firsthand what it can do and what it enables teams like mine to achieve.
I first encountered Darktrace while serving as CIO for a large multi-campus college system. What caught my attention was Darktrace's Self-Learning AI, and its ability to learn what "normal" looked like across our network. Instead of relying solely on static signatures or rigid rules, Darktrace built a behavioral baseline unique to our environment and alerted us in real time when something simply didn’t look right.
In higher education, where strict lockdowns aren’t realistic, that behavioral model made all the difference. We deployed it across five campuses, and the impact was immediate. Operating 24/7, Darktrace surfaced threats in ways our team couldn’t replicate manually.
Over time, the Darktrace platform evolved alongside the changing threat landscape, expanding into intrusion prevention, cloud visibility, and email security. At subsequent institutions, including Washington College, Darktrace was one of my first strategic investments.
Revealing the hidden threat other tools missed
One of the most surprising investigations of my career involved a data leak. Leadership suspected sensitive information from high-level meetings was being exposed, but our traditional tools couldn’t provide any answers.
Using Darktrace’s deep network visibility, down to packet-level data, we traced unusual connections to our CCTV camera system, which had been configured with a manufacturer’s default password. A small group of employees had hacked into the CCTV cameras, accessed audio-enabled recordings from boardroom meetings, and stored copies locally.
No other tool in our environment could have surfaced those connections the way Darktrace did. It was a clear example of why it matters to use AI that deeply understands how your organization, systems, and tools normally behave: threats and risks don’t always look the way we expect.
Elevating a D rating into an A-level security program
When I arrived at my last CISO role, the institution had recently experienced a significant ransomware attack. The attackers had located sensitive data and used it to calibrate a ransom demand they knew would likely be paid. It was a sobering example of how calculated and strategic modern cybercriminals have become.
Third-party cyber ratings reflected that reality, with a D rating.
To raise the bar, we implemented a comprehensive security program and integrated layered defenses, deploying state-of-the-art tools and methods across the environment, with Darktrace at its core.
After a 90-day learning period to establish our behavioral baseline, we transitioned the platform into fully autonomous mode. In a single 30-day span, Darktrace conducted more than 2,500 investigations and autonomously resolved 92% of all false positives.
For a small team, that’s transformative. Instead of drowning in alerts, my staff focused on fewer than 200 meaningful cases that warranted human review.
Today, we maintain a perfect A rating from third-party assessors and have suffered no further incidents.
Peace of mind isn’t about complacency
Darktrace’s force-multiplier effect has a real human impact.
With the time reclaimed through automation, we expanded community education programs and implemented simulated phishing exercises. Through sustained training and awareness efforts, we reduced social engineering susceptibility from nearly 45% to under 5%.
On a personal level, Darktrace allows me to sleep better at night and take time off knowing we have intelligent systems monitoring and responding around the clock. For any CIO or CISO carrying institutional risk on their shoulders, that matters.
The next era: AI vs. AI
A new chapter in cybersecurity is unfolding as adversaries leverage AI to enhance scale, speed, and believability. Phishing campaigns are more personalized, impersonation attempts are more precise, and deepfake video technology, including live video, is disturbingly authentic. At the same time, organizations are rapidly adopting AI across their own environments, from GenAI assistants to embedded tools to autonomous agents. These systems don’t operate within fixed rules. They act across email, cloud, SaaS, and identity systems, often with broad permissions, and their behavior can evolve over time in ways that are difficult to predict or control.
That creates a new kind of security challenge. It’s not just about defending against AI-powered threats but understanding and governing how AI behaves within your environment, including what it can access, how it acts, and where risk begins to emerge.
From my perspective, this is a natural next step for Darktrace.
Darktrace brings a level of maturity and behavioral understanding uniquely suited to the complexity of AI environments. Self-Learning AI learns the normal patterns of each business to interpret context, uncover subtle intent, and detect meaningful deviations without relying on predefined rules or signatures. Extending into securing AI itself, bringing real-time visibility and control to GenAI assistants, AI agents, development environments, and Shadow AI, feels like the logical evolution of what Darktrace already does so well.
Just as importantly, Darktrace is already built for dynamic, cross-domain environments where risk doesn’t sit in a single tool or control plane. In higher education, activity already spans multiple systems and, with AI, that interconnection only accelerates.
Having deployed Darktrace multiple times, I have confidence it’s uniquely positioned to lead in this space and help organizations adopt AI with greater visibility and control.
---
Since authoring this blog, Irving Bruckstein has transitioned to the role of Chief Executive Officer of the Cyberaigroup.
![Darktrace’s detection of the unusual external connection to 142.11[.]206[.]73 via port 8000.](https://cdn.prod.website-files.com/626ff4d25aca2edf4325ff97/69fa6d343ee828935a4a00f5_Screenshot%202026-05-05%20at%203.20.33%E2%80%AFPM.png)