Blog
October 30, 2023

Exploring AI Threats: Package Hallucination Attacks

Learn how malicious actors exploit errors in generative AI tools to launch package hallucination attacks. Read how Darktrace products detect and prevent these threats!
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Charlotte Thompson
Cyber Analyst
Written by
Tiana Kelly
Deputy Team Lead, London & Cyber Analyst

AI tools open doors for threat actors

On November 30, 2022, the free conversational language generation model ChatGPT was launched by OpenAI, an artificial intelligence (AI) research and development company. The launch of ChatGPT was the culmination of development ongoing since 2018; it represented the latest innovation in the generative AI boom and made generative AI tools accessible to the general population for the first time.

ChatGPT is estimated to currently have at least 100 million users, and in August 2023 the site reached 1.43 billion visits [1]. Darktrace data indicated that, as of March 2023, 74% of active customer environments have employees using generative AI tools in the workplace [2].

However, with new tools come new opportunities for threat actors to exploit and use them maliciously, expanding their arsenal.

Much consideration has been given to mitigating the impacts of the increased linguistic complexity in social engineering and phishing attacks resulting from generative AI tool use, with Darktrace observing a 135% increase in ‘novel social engineering attacks’ across thousands of active Darktrace/Email™ customers from January to February 2023, corresponding with the widespread adoption of ChatGPT and its peers [3].

Less overall consideration, however, has been given to impacts stemming from errors intrinsic to generative AI tools. One of these errors is AI hallucinations.

What is an AI hallucination?

An AI “hallucination” occurs when the predictive elements of a generative AI or large language model (LLM) produce an unexpected or factually incorrect response that does not align with the model’s training data [4]. This differs from the regular and intended behavior of an AI model, which should provide a response grounded in the data it was trained upon.

Why are AI hallucinations a problem?

Despite the term suggesting a rare phenomenon, hallucinations are far more common than many users assume, as the AI models used in LLMs are merely predictive and focus on the most probable text or outcome rather than factual accuracy.

Given the widespread use of generative AI tools in the workplace, employees are significantly more likely to encounter an AI hallucination. Furthermore, if these fabricated responses are taken at face value, they could cause significant issues for an organization.

Use of generative AI in software development

Software developers may use generative AI for recommendations on how to optimize their scripts or code, or to find packages to import for various uses. Asking an LLM how to solve a specific coding problem will often lead to a third-party package recommendation. Some of these recommendations may themselves be AI hallucinations: the suggested packages may never have been published, or, more accurately, may not have been published prior to the date at which the model’s training data halts. If a non-existent package is suggested consistently, and a developer copies the associated code snippet wholesale, their codebase is left vulnerable to attack.

Research conducted by Vulcan revealed the prevalence of AI hallucinations when ChatGPT is asked questions related to coding. After sourcing a sample of commonly asked coding questions from Stack Overflow, a question-and-answer website for programmers, researchers queried ChatGPT (in the context of Node.js and Python) and reviewed its responses. In 20% of the responses provided by ChatGPT pertaining to Node.js at least one un-published package was included, whilst the figure sat at around 35% for Python [4].

Hallucinations can be unpredictable, but would-be attackers are able to find packages to create by asking generative AI tools generic questions and checking whether the suggested packages exist already. As such, attacks using this vector are unlikely to target specific organizations, instead posing more of a widespread threat to users of generative AI tools.

Malicious packages as attack vectors

Although AI hallucinations can be unpredictable, and responses given by generative AI tools may not always be consistent, malicious actors are able to discover AI hallucinations by adopting the approach used by Vulcan. This allows hallucinated packages to be used as attack vectors. Once a malicious actor has discovered a hallucination of an un-published package, they are able to create a package with the same name and include a malicious payload, before publishing it. This is known as a malicious package.

Malicious packages could also be recommended by generative AI tools in the form of pre-existing packages. A user may be recommended a package that had previously been confirmed to contain malicious content, or a package that is no longer maintained and, therefore, is more vulnerable to hijack by malicious actors.

In such scenarios it is not necessary to manipulate the training data (data poisoning) to achieve the desired outcome for the malicious actor, thus a complex and time-consuming attack phase can easily be bypassed.

An unsuspecting software developer may incorporate a malicious package into their code, rendering it harmful. Deployment of this code could then result in compromise and escalation into a full-blown cyber-attack.

Figure 1: Flow diagram depicting the initial stages of an AI Package Hallucination Attack.

For providers of Software-as-a-Service (SaaS) products, this attack vector may represent an even greater risk. Such organizations may have a higher proportion of employed software developers than other organizations of comparable size. A threat actor, therefore, could utilize this attack vector as part of a supply chain attack, whereby a malicious payload becomes incorporated into trusted software and is then distributed to multiple customers. This type of attack could have severe consequences including data loss, the downtime of critical systems, and reputational damage.

How could Darktrace detect an AI Package Hallucination Attack?

In June 2023, Darktrace introduced a range of DETECT™ and RESPOND™ models designed to identify the use of generative AI tools within customer environments, and to autonomously perform inhibitive actions in response to such detections. These models will trigger based on connections to endpoints associated with generative AI tools, as such, Darktrace’s detection of an AI Package Hallucination Attack would likely begin with the breaching of one of the following DETECT models:

  • Compliance / Anomalous Upload to Generative AI
  • Compliance / Beaconing to Rare Generative AI and Generative AI
  • Compliance / Generative AI

Should generative AI tool use not be permitted by an organization, the Darktrace RESPOND model ‘Antigena / Network / Compliance / Antigena Generative AI Block’ can be activated to autonomously block connections to endpoints associated with generative AI, thus preventing an AI Package Hallucination attack before it can take hold.

Once a malicious package has been recommended, it may be downloaded from GitHub, a platform and cloud-based service used to store and manage code. Darktrace DETECT is able to identify when a device has performed a download from an open-source repository such as GitHub using the following models:

  • Device / Anomalous GitHub Download
  • Device / Anomalous Script Download Followed By Additional Packages

The next stages of the attack depend on the goal the malicious package was designed to fulfil. Due to their highly flexible nature, AI package hallucinations could be used as an attack vector to deliver a large variety of different malware types.

As GitHub is a commonly used service by software developers and IT professionals alike, traditional security tools may not alert customer security teams to such GitHub downloads, meaning malicious downloads may go undetected. Darktrace’s anomaly-based approach to threat detection, however, enables it to recognize subtle deviations in a device’s pre-established pattern of life which may be indicative of an emerging attack.

Subsequent anomalous activity representing the possible progression of the kill chain as part of an AI Package Hallucination Attack could then trigger an Enhanced Monitoring model. Enhanced Monitoring models are high-fidelity indicators of potential malicious activity that are investigated by the Darktrace analyst team as part of the Proactive Threat Notification (PTN) service offered by the Darktrace Security Operation Center (SOC).

Conclusion

Employees are often considered the first line of defense in cyber security; this is particularly true in the face of an AI Package Hallucination Attack.

As the use of generative AI becomes more accessible and an increasingly prevalent tool in an attacker’s toolbox, organizations will benefit from implementing company-wide policies to define expectations surrounding the use of such tools. It is simple, yet critical, for example, for employees to fact check responses provided to them by generative AI tools. All packages recommended by generative AI should also be checked by reviewing non-generated data from either external third-party or internal sources. It is also good practice to adopt caution when downloading packages with very few downloads as it could indicate the package is untrustworthy or malicious.
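The package-vetting advice above can be automated. The following Python sketch (illustrative only, not Darktrace tooling; the helper names and the injectable `exists` callback are this example's own) queries PyPI's public JSON API, where a 404 response means no project with that name has been published, to separate LLM-suggested package names into published projects and possible hallucinations:

```python
import urllib.request
from urllib.error import HTTPError

# Public PyPI JSON API endpoint; a 404 means no project with that name exists.
PYPI_URL = "https://pypi.org/pypi/{name}/json"


def package_exists_on_pypi(name: str, timeout: float = 10.0) -> bool:
    """Return True if `name` is a published PyPI project, False on a 404."""
    try:
        with urllib.request.urlopen(PYPI_URL.format(name=name), timeout=timeout):
            return True
    except HTTPError as err:
        if err.code == 404:  # unknown project -> possible hallucination
            return False
        raise  # other HTTP errors give no clean answer


def vet_suggestions(names, exists=package_exists_on_pypi):
    """Partition LLM-suggested package names into published vs unknown."""
    published, unknown = [], []
    for name in names:
        (published if exists(name) else unknown).append(name)
    return published, unknown
```

A name landing in the `unknown` list is not proof of a hallucination, and a published name is not proof of safety (it may already be a squatted malicious package), so the check complements, rather than replaces, reviewing the package's maintainers, release history, and download counts.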

As of September 2023, ChatGPT Plus and Enterprise users were able to use the tool to browse the internet, expanding the data ChatGPT can access beyond the previous training data cut-off of September 2021 [5]. This feature will be expanded to all users soon [6]. ChatGPT providing up-to-date responses could prompt the evolution of this attack vector, allowing attackers to publish malicious packages which could subsequently be recommended by ChatGPT.

It is inevitable that a greater embrace of AI tools in the workplace will be seen in the coming years as the AI technology advances and existing tools become less novel and more familiar. By fighting fire with fire, using AI technology to identify AI usage, Darktrace is uniquely placed to detect and take preventative action against malicious actors capitalizing on the AI boom.

Credit to Charlotte Thompson, Cyber Analyst, and Tiana Kelly, Deputy Team Lead, London & Cyber Analyst

References

[1] https://seo.ai/blog/chatgpt-user-statistics-facts

[2] https://darktrace.com/news/darktrace-addresses-generative-ai-concerns

[3] https://darktrace.com/news/darktrace-email-defends-organizations-against-evolving-cyber-threat-landscape

[4] https://vulcan.io/blog/ai-hallucinations-package-risk?nab=1&utm_referrer=https%3A%2F%2Fwww.google.com%2F

[5] https://twitter.com/OpenAI/status/1707077710047216095

[6] https://www.reuters.com/technology/openai-says-chatgpt-can-now-browse-internet-2023-09-27/


Blog

OT

September 4, 2025

Rethinking Signature-Based Detection for Power Utility Cybersecurity


Lessons learned from OT cyber attacks

Over the past decade, some of the most disruptive attacks on power utilities have shown the limits of signature-based detection and reshaped how defenders think about OT security. Each incident reinforced that signatures are too narrow and reactive to serve as the foundation of defense.

2015: BlackEnergy 3 in Ukraine

According to CISA, on December 23, 2015, Ukrainian power companies experienced unscheduled power outages affecting a large number of customers — public reports indicate that the BlackEnergy malware was discovered on the companies’ computer networks.

2016: Industroyer/CrashOverride

CISA describes the CrashOverride malware as an “extensible platform” reported to have been used against critical infrastructure in Ukraine in 2016. It was capable of targeting industrial control systems using protocols such as IEC-101, IEC-104, and IEC-61850, and fundamentally abused legitimate control system functionality to deliver destructive effects. CISA emphasizes that “traditional methods of detection may not be sufficient to detect infections prior to the malware execution” and recommends behavioral analysis techniques to identify precursor activity to CrashOverride.

2017: TRITON Malware

The U.S. Department of the Treasury reports that the Triton malware, also known as TRISIS or HatMan, was “designed specifically to target and manipulate industrial safety systems” in a petrochemical facility in the Middle East. The malware was engineered to control Safety Instrumented System (SIS) controllers responsible for emergency shutdown procedures. During the attack, several SIS controllers entered a failed‑safe state, which prevented the malware from fully executing.

The broader lessons

These events revealed three enduring truths:

  • Signatures have diminishing returns: BlackEnergy showed that while signatures can eventually identify adapted IT malware, they arrive too late to prevent OT disruption.
  • Behavioral monitoring is essential: CrashOverride demonstrated that adversaries abuse legitimate industrial protocols, making behavioral and anomaly detection more effective than traditional signature methods.
  • Critical safety systems are now targets: TRITON revealed that attackers are willing to compromise safety instrumented systems, elevating risks from operational disruption to potential physical harm.

The natural progression for utilities is clear. Static, file-based defenses are too fragile for the realities of OT.  

These incidents showed that behavioral analytics and anomaly detection are far more effective at identifying suspicious activity across industrial systems, regardless of whether the malicious code has ever been seen before.

Strategic risks of overreliance on signatures

  • False sense of security: Believing signatures will block advanced threats can delay investment in more effective detection methods.
  • Resource drain: Constantly updating, tuning, and maintaining signature libraries consumes valuable staff resources without proportional benefit.
  • Adversary advantage: Nation-state and advanced actors understand the reactive nature of signature defenses and design attacks to circumvent them from the start.

Recommended Alternatives (with real-world OT examples)

Figure 1: Alternative strategies for detecting cyber attacks in OT

Behavioral and anomaly detection

Rather than relying on signatures, focusing on behavior enables detection of threats that have never been seen before, even those originating from trusted-looking devices.

Real-world insight:

In one OT setting, a vendor inadvertently left a Raspberry Pi on a customer’s ICS network. After deployment, Darktrace’s system flagged anomalies in its HTTPS and DNS communication despite the absence of any known indicators of compromise. The alerting included sustained SSL increases, agent-beacon activity, and DNS connections to unusual endpoints, revealing a possible supply-chain or insider risk invisible to static tools.
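The core idea, learning a per-device baseline and flagging sharp deviations, can be illustrated with a deliberately simple sketch. This is a toy rolling z-score monitor, not Darktrace's algorithm; the window size, warm-up count, and threshold are arbitrary assumptions:

```python
from collections import deque
from statistics import mean, pstdev


class BaselineMonitor:
    """Flag metric values that deviate sharply from a device's recent baseline."""

    def __init__(self, window: int = 30, threshold: float = 3.0):
        self.history = deque(maxlen=window)  # rolling window of recent samples
        self.threshold = threshold           # z-score above which we alert

    def observe(self, value: float) -> bool:
        """Record one sample (e.g. hourly HTTPS connection count);
        return True if it is anomalous versus the learned baseline."""
        anomalous = False
        if len(self.history) >= 5:  # need a few samples before judging
            mu, sigma = mean(self.history), pstdev(self.history)
            if sigma == 0:
                anomalous = value != mu
            else:
                anomalous = abs(value - mu) / sigma > self.threshold
        self.history.append(value)
        return anomalous
```

Fed the hourly connection counts of a quiet embedded device, such a monitor stays silent on normal jitter but fires on the kind of sustained traffic increase the rogue Raspberry Pi produced, with no prior signature required.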

Darktrace’s AI-driven threat detection aligns with the zero-trust principle of assuming the risk of a breach. By leveraging AI that learns an organization’s specific patterns of life, Darktrace provides a tailored security approach ideal for organizations with complex supply chains.

Threat intelligence sharing & building toward zero-trust philosophy

Frameworks such as MITRE ATT&CK for ICS provide a common language to map activity against known adversary tactics, helping teams prioritize detections and response strategies. Similarly, information-sharing communities like E-ISAC and regional ISACs give utilities visibility into the latest tactics, techniques, and procedures (TTPs) observed across the sector. This level of intel can help shift the focus away from chasing individual signatures and toward building resilience against how adversaries actually operate.

Real-world insight:

Darktrace’s AI embodies zero-trust by assuming breach potential and continually evaluating all device behavior, even for devices deemed trusted. This approach allowed the detection of an anomalous SharePoint phishing attempt coming from a trusted supplier, intercepted by spotting subtle patterns rather than predefined rules. If a cloud account is compromised, unauthorized access to sensitive information could lead to extortion and lateral movement into mission-critical systems for more damaging attacks on critical national infrastructure.

This reinforces the need to monitor behavioral deviations across the supply chain, not just known bad artifacts.

Defense-in-Depth with OT context & unified visibility

OT environments demand visibility that spans IT, OT, and IoT layers, supported by risk-based prioritization.

Real-world insight:

Darktrace / OT offers unified AI‑led investigations that break down silos between IT and OT. Smaller teams can see unusual outbound traffic or beaconing from unknown OT devices, swiftly investigate across domains, and get clear visibility into device behavior, even when they lack specialized OT security expertise.  

Moreover, by integrating contextual risk scoring, considering real-world exploitability, device criticality, firewall misconfiguration, and legacy hardware exposure, utilities can focus on the vulnerabilities that genuinely threaten uptime and safety, rather than being overwhelmed by CVE noise.  

Regulatory alignment and positive direction

Industry regulations are beginning to reflect this evolution in strategy. NERC CIP-015 requires internal network monitoring that detects anomalies, and the standard references anomalies 15 times. In contrast, signature-based detection is not mentioned once.

This regulatory direction shows that compliance bodies understand the limitations of static defenses and are encouraging utilities to invest in anomaly-based monitoring and analytics. Utilities that adopt these approaches will not only be strengthening their resilience but also positioning themselves for regulatory compliance and operational success.

Conclusion

Signature-based detection retains utility for common IT malware, but it cannot serve as the backbone of security for power utilities. History has shown that major OT attacks are rarely stopped by signatures, since each campaign targets specific systems with customized tools. The most dangerous adversaries, from insiders to nation-states, actively design their operations to avoid detection by signature-based tools.

A more effective strategy prioritizes behavioral analytics, anomaly detection, and community-driven intelligence sharing. These approaches not only catch known threats, but also uncover the subtle anomalies and novel attack techniques that characterize tomorrow’s incidents.

About the author
Daniel Simonds
Director of Operational Technology

Blog

Identity

August 21, 2025

From VPS to Phishing: How Darktrace Uncovered SaaS Hijacks through Virtual Infrastructure Abuse


What is a VPS and how are they abused?

A Virtual Private Server (VPS) is a virtualized server that provides dedicated resources and control to users on a shared physical device. VPS providers, long used by developers and businesses, are increasingly misused by threat actors to launch stealthy, scalable attacks. While not a novel tactic, VPS abuse has seen an increase in Software-as-a-Service (SaaS)-targeted campaigns, as it enables attackers to bypass geolocation-based defenses by mimicking local traffic, evade IP reputation checks with clean, newly provisioned infrastructure, and blend into legitimate behavior [3].

VPS providers like Hyonix and Host Universal offer rapid setup and minimal open-source intelligence (OSINT) footprint, making detection difficult [1][2]. These services are not only fast to deploy but also affordable, making them attractive to attackers seeking anonymous, low-cost infrastructure for scalable campaigns. Such attacks tend to be targeted and persistent, often timed to coincide with legitimate user activity, a tactic that renders traditional security tools largely ineffective.

Darktrace’s investigation into Hyonix VPS abuse

In May 2025, Darktrace’s Threat Research team investigated a series of incidents across its customer base involving VPS-associated infrastructure. The investigation began with a fleet-wide review of alerts linked to Hyonix (ASN AS931), revealing a noticeable spike in anomalous behavior from this ASN in March 2025. The alerts included brute-force attempts, anomalous logins, and phishing campaign-related inbox rule creation.

Darktrace identified suspicious activity across multiple customer environments around this time, but two networks stood out. In one instance, two internal devices exhibited mirrored patterns of compromise, including logins from rare endpoints, manipulation of inbox rules, and the deletion of emails likely used in phishing attacks. Darktrace traced the activity back to IP addresses associated with Hyonix, suggesting a deliberate use of VPS infrastructure to facilitate the attack.

On the second customer network, the attack was marked by coordinated logins from rare IPs linked to multiple VPS providers, including Hyonix. This was followed by the creation of inbox rules with obfuscated names and attempts to modify account recovery settings, indicating a broader campaign that leveraged shared infrastructure and techniques.

Darktrace’s Autonomous Response capability was not enabled in either customer environment during these attacks. As a result, no automated containment actions were triggered, allowing the attack to escalate without interruption. Had Autonomous Response been active, Darktrace would have automatically blocked connections from the unusual VPS endpoints upon detection, effectively halting the compromise in its early stages.

Case 1

Figure 1: Timeline of activity for Case 1 - Unusual VPS logins and deletion of phishing emails.

Initial Intrusion

On May 19, 2025, Darktrace observed two internal devices on one customer environment initiating logins from rare external IPs associated with VPS providers, namely Hyonix and Host Universal (via Proton VPN). Darktrace recognized that these logins had occurred within minutes of legitimate user activity from distant geolocations, indicating improbable travel and reinforcing the likelihood of session hijacking. This triggered Darktrace / IDENTITY model “Login From Rare Endpoint While User Is Active”, which highlights potential credential misuse when simultaneous logins occur from both familiar and rare sources.  
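The improbable-travel logic described above reduces to a speed check between two geolocated logins. Here is a minimal illustration, not Darktrace's implementation; the 900 km/h threshold (roughly airliner speed) and the login tuple layout are this example's own assumptions:

```python
from math import asin, cos, radians, sin, sqrt


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = mean Earth radius


def improbable_travel(login_a, login_b, max_kmh: float = 900.0) -> bool:
    """Each login is (epoch_seconds, lat, lon).
    True if the implied travel speed between them exceeds max_kmh."""
    (t1, lat1, lon1), (t2, lat2, lon2) = sorted([login_a, login_b])
    distance = haversine_km(lat1, lon1, lat2, lon2)
    hours = max((t2 - t1) / 3600.0, 1e-6)  # guard against same-second logins
    return distance / hours > max_kmh
```

Two logins minutes apart from London and a VPS exit node in another hemisphere fail this check immediately, which is why attackers favor VPS ranges that mimic the victim's locale: the geographic tell disappears, and only rarity-based signals like the model above remain.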

Shortly after these logins, Darktrace observed the threat actor deleting emails referring to invoice documents from the user’s “Sent Items” folder, suggesting an attempt to hide phishing emails that had been sent from the now-compromised account. Though not directly observed, initial access in this case was likely achieved through a similar phishing or account hijacking method.

Figure 2: Darktrace / IDENTITY model "Login From Rare Endpoint While User Is Active", which detects simultaneous logins from both a common and a rare source to highlight potential credential misuse.

Case 2

Figure 3: Timeline of activity for Case 2 – Coordinated inbox rule creation and outbound phishing campaign.

In the second customer environment, Darktrace observed similar login activity originating from Hyonix, as well as other VPS providers like Mevspace and Hivelocity. Multiple users logged in from rare endpoints, with Multi-Factor Authentication (MFA) satisfied via token claims, further indicating session hijacking.

Establishing control and maintaining persistence

Following the initial access, Darktrace observed a series of suspicious SaaS activities, including the creation of new email rules. These rules were given minimal or obfuscated names, a tactic often used by attackers to avoid drawing attention during casual mailbox reviews by the SaaS account owner or automated audits. By keeping rule names vague or generic, attackers reduce the likelihood of detection while quietly redirecting or deleting incoming emails to maintain access and conceal their activity.

One of the newly created inbox rules targeted emails with subject lines referencing a document shared by a VIP at the customer’s organization. These emails would be automatically deleted, suggesting an attempt to conceal malicious mailbox activity from legitimate users.
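A heuristic for the rule traits described above can be sketched in a few lines of Python. The rule schema (`name`, `actions`, `conditions` keys) and the trait list are hypothetical, chosen to mirror the behaviors seen in this campaign rather than any particular mail platform's API:

```python
import re


def suspicious_inbox_rule(rule: dict) -> list:
    """Score a mailbox rule for traits common in account-takeover campaigns.

    Hypothetical schema: 'name' (str), 'actions' (list of str),
    'conditions' (list of str). Returns the list of reasons flagged."""
    reasons = []

    # Trait 1: minimal or obfuscated rule names (".", "..", whitespace, symbols)
    name = (rule.get("name") or "").strip()
    if len(name) <= 2 or re.fullmatch(r"[\W_]*", name):
        reasons.append("minimal or obfuscated rule name")

    # Trait 2: actions that hide incoming mail from the mailbox owner
    destructive = {"delete", "move_to_rss", "move_to_archive", "mark_as_read"}
    if destructive & {a.lower() for a in rule.get("actions", [])}:
        reasons.append("rule hides incoming mail")

    # Trait 3: conditions targeting finance-themed subjects (fake invoices)
    if any("invoice" in c.lower() or "payment" in c.lower()
           for c in rule.get("conditions", [])):
        reasons.append("targets finance-themed subjects")

    return reasons
```

A static checklist like this would miss the next campaign's variation, which is why the behavioral framing matters: the anomaly is a rule appearing at all on an account that has never created one, minutes after a login from a rare endpoint.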

Mirrored activity across environments

While no direct lateral movement was observed, mirrored activity across multiple user devices suggested a coordinated campaign. Notably, three users had near-identical inbox rules created, while another user had a different rule related to fake invoices, reinforcing the likelihood of a shared infrastructure and technique set.

Privilege escalation and broader impact

On one account, Darktrace observed “User registered security info” activity shortly after anomalous logins, indicating attempts to modify account recovery settings. On another, the user reset passwords or updated security information from rare external IPs. In both cases, the attacker’s actions—including creating inbox rules, deleting emails, and maintaining login persistence—suggested an intent to remain undetected while potentially setting the stage for data exfiltration or spam distribution.

On a separate account, outbound spam was observed, featuring generic finance-related subject lines such as 'INV#. EMITTANCE-1'. At the network level, Darktrace / NETWORK detected DNS requests from a device to a suspicious domain, which began prior to the observed email compromise. The domain showed signs of domain fluxing, a tactic involving frequent changes in IP resolution, commonly used by threat actors to maintain resilient infrastructure and evade static blocklists. Around the same time, Darktrace detected another device writing a file named 'SplashtopStreamer.exe', associated with the remote access tool Splashtop, to a domain controller. While typically used in IT support scenarios, its presence here may suggest that the attacker leveraged it to establish persistent remote access or facilitate lateral movement within the customer’s network.
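The domain-fluxing signal mentioned above can be approximated crudely: count how many distinct IPs a single domain resolves to and alert past a threshold. This is a minimal sketch under stated assumptions (the class, the threshold of 5, and the absence of any time window are all simplifications; real fast-flux detection also weighs DNS TTL values and resolution timing):

```python
from collections import defaultdict


class FluxTracker:
    """Track distinct resolved IPs per domain; many IPs for one domain in a
    short span is a crude signal of fast-flux style infrastructure."""

    def __init__(self, max_distinct_ips: int = 5):
        self.max_distinct_ips = max_distinct_ips
        self.seen = defaultdict(set)  # domain -> set of resolved IPs

    def record(self, domain: str, ip: str) -> bool:
        """Record one DNS answer; True once the domain exceeds the threshold."""
        self.seen[domain].add(ip)
        return len(self.seen[domain]) > self.max_distinct_ips
```

Legitimate CDN-fronted domains also rotate IPs, so a counter like this only becomes useful when combined with rarity context: in the incident above, what mattered was that the fluxing domain was one the device had never contacted before the compromise began.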

Conclusion

This investigation highlights the growing abuse of VPS infrastructure in SaaS compromise campaigns. Threat actors are increasingly leveraging these affordable and anonymous hosting services to hijack accounts, launch phishing attacks, and manipulate mailbox configurations, often bypassing traditional security controls.

Despite the stealthy nature of this campaign, Darktrace detected the malicious activity early in the kill chain through its Self-Learning AI. By continuously learning what is normal for each user and device, Darktrace surfaced subtle anomalies, such as rare login sources, inbox rule manipulation, and concurrent session activity, that likely evade traditional static, rule-based systems.

As attackers continue to exploit trusted infrastructure and mimic legitimate user behavior, organizations should adopt behavioral-based detection and response strategies. Proactively monitoring for indicators such as improbable travel, unusual login sources, and mailbox rule changes, and responding swiftly with autonomous actions, is critical to staying ahead of evolving threats.

Credit to Rajendra Rushanth (Cyber Analyst), Jen Beckett (Cyber Analyst) and Ryan Traill (Analyst Content Lead)

References

[1] https://cybersecuritynews.com/threat-actors-leveraging-vps-hosting-providers/

[2] https://threatfox.abuse.ch/asn/931/

[3] https://www.cyfirma.com/research/vps-exploitation-by-threat-actors/

Appendices

Darktrace Model Detections

•   SaaS / Compromise / Unusual Login, Sent Mail, Deleted Sent

•   SaaS / Compromise / Suspicious Login and Mass Email Deletes

•   SaaS / Resource / Mass Email Deletes from Rare Location

•   SaaS / Compromise / Unusual Login and New Email Rule

•   SaaS / Compliance / Anomalous New Email Rule

•   SaaS / Resource / Possible Email Spam Activity

•   SaaS / Unusual Activity / Multiple Unusual SaaS Activities

•   SaaS / Unusual Activity / Multiple Unusual External Sources For SaaS Credential

•   SaaS / Access / Unusual External Source for SaaS Credential Use

•   SaaS / Compromise / High Priority Login From Rare Endpoint

•   SaaS / Compromise / Login From Rare Endpoint While User Is Active

List of Indicators of Compromise (IoCs)

Format: IoC – Type – Description

•   38.240.42[.]160 – IP – Associated with Hyonix ASN (AS931)

•   103.75.11[.]134 – IP – Associated with Host Universal / Proton VPN

•   162.241.121[.]156 – IP – Rare IP associated with phishing

•   194.49.68[.]244 – IP – Associated with Hyonix ASN

•   193.32.248[.]242 – IP – Used in suspicious login activity / Mullvad VPN

•   50.229.155[.]2 – IP – Rare login IP / AS 7922 (COMCAST-7922)

•   104.168.194[.]248 – IP – Rare login IP / AS 54290 (HOSTWINDS)

•   38.255.57[.]212 – IP – Hyonix IP used during MFA activity

•   103.131.131[.]44 – IP – Hyonix IP used in login and MFA activity

•   178.173.244[.]27 – IP – Hyonix IP

•   91.223.3[.]147 – IP – Mevspace Poland, used in multiple logins

•   2a02:748:4000:18:0:1:170b[:]2524 – IPv6 – Hivelocity VPS, used in multiple logins and MFA activity

•   51.36.233[.]224 – IP – Saudi ASN, used in suspicious login

•   103.211.53[.]84 – IP – Excitel Broadband India, used in security info update

MITRE ATT&CK Mapping

Tactic – Technique – Sub-Technique

•   Initial Access – T1566 – Phishing

                       T1566.001 – Spearphishing Attachment

•   Execution – T1078 – Valid Accounts

•   Persistence – T1098 – Account Manipulation

                       T1098.002 – Exchange Email Rules

•   Command and Control – T1071 – Application Layer Protocol

                       T1071.001 – Web Protocols

•   Defense Evasion – T1036 – Masquerading

•   Defense Evasion – T1562 – Impair Defenses

                       T1562.001 – Disable or Modify Tools

•   Credential Access – T1556 – Modify Authentication Process

                       T1556.004 – MFA Bypass

•   Discovery – T1087 – Account Discovery

•   Impact – T1531 – Account Access Removal

The content provided in this blog is published by Darktrace for general informational purposes only and reflects our understanding of cybersecurity topics, trends, incidents, and developments at the time of publication. While we strive to ensure accuracy and relevance, the information is provided “as is” without any representations or warranties, express or implied. Darktrace makes no guarantees regarding the completeness, accuracy, reliability, or timeliness of any information presented and expressly disclaims all warranties.

Nothing in this blog constitutes legal, technical, or professional advice, and readers should consult qualified professionals before acting on any information contained herein. Any references to third-party organizations, technologies, threat actors, or incidents are for informational purposes only and do not imply affiliation, endorsement, or recommendation.

Darktrace, its affiliates, employees, or agents shall not be held liable for any loss, damage, or harm arising from the use of or reliance on the information in this blog.

The cybersecurity landscape evolves rapidly, and blog content may become outdated or superseded. We reserve the right to update, modify, or remove any content without notice.

About the author
Rajendra Rushanth
Cyber Analyst