Google Threat Report Links AI-powered Malware to DPRK Crypto Theft

Decrypt
8 hours ago

Google has warned that several new malware families now use large language models during execution to modify or generate code, marking a new phase in how state-linked and criminal actors are deploying artificial intelligence in live operations.


In a report released this week, the Google Threat Intelligence Group (GTIG) said it has tracked at least five distinct strains of AI-enabled malware, some of which have already been deployed in active attacks.


The newly identified malware families "dynamically generate malicious scripts, obfuscate their own code to evade detection," while also using AI models "to create malicious functions on demand" rather than having those functions hard-coded into the malware packages, the threat intelligence group stated.


Each variant leverages an external model such as Gemini or Qwen2.5-Coder at runtime to generate or obfuscate code, a method GTIG dubbed "just-in-time code creation."


The technique represents a shift from traditional malware design, where the malicious logic is typically hard-coded into the binary.


By outsourcing parts of its functionality to an AI model, the malware can continually rewrite its own code, hardening itself against detection systems that rely on static signatures.
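The general pattern GTIG describes can be sketched in a deliberately harmless form: a program asks an external model for a fresh code snippet at each run and executes whatever comes back, so no two runs share a stable byte signature. The sketch below is an illustration only, not the actual malware's code; `query_model` is a hypothetical stand-in for a real hosted-model API call, and the "generated" snippet is a benign arithmetic one-liner.

```python
def query_model(prompt: str) -> str:
    """Hypothetical stand-in for an external LLM API call.

    A real caller would send `prompt` to a hosted model endpoint and
    receive source code as text; here we return a fixed benign snippet.
    """
    return "result = sum(range(10))"


def run_generated(prompt: str) -> int:
    # Fetch freshly generated code at runtime rather than shipping it
    # hard-coded in the binary -- the "just-in-time" step.
    source = query_model(prompt)
    namespace: dict = {}
    # Execute the model-supplied code; because the snippet can differ on
    # every run, signature-based scanners have no fixed payload to match.
    exec(source, namespace)
    return namespace["result"]


print(run_generated("write code that computes a number"))  # prints 45
```

The point of the sketch is the structure, not the payload: the decision logic lives in the prompt and the remote model, leaving very little static code on disk for defenders to fingerprint.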


Two of the malware families, PROMPTFLUX and PROMPTSTEAL, demonstrate how attackers are integrating AI models directly into their operations.


GTIG’s technical brief describes how PROMPTFLUX runs a “Thinking Robot” process that calls Gemini’s API every hour to rewrite its own VBScript code, while PROMPTSTEAL, linked to Russia’s APT28 group, uses the Qwen model hosted on Hugging Face to generate Windows commands on demand.


The group also identified activity from a North Korean group known as UNC1069 (Masan) that misused Gemini.


Google’s research unit describes the group as “a North Korean threat actor known to conduct cryptocurrency theft campaigns leveraging social engineering,” with notable use of “language related to computer maintenance and credential harvesting.”


Per Google, the group’s queries to Gemini included instructions for locating wallet application data, generating scripts to access encrypted storage, and composing multilingual phishing content aimed at crypto exchange employees.


These activities, the report added, appeared to be part of a broader attempt to build code capable of stealing digital assets.


Google said it had already disabled the accounts tied to these activities and introduced new safeguards to limit model abuse, including refined prompt filters and tighter monitoring of API access.


The findings could point to a new attack surface where malware queries LLMs at runtime to locate wallet storage, generate bespoke exfiltration scripts, and craft highly credible phishing lures.


Decrypt has asked Google how this new attack model could change approaches to threat modeling and attribution, but has yet to receive a response.


Disclaimer: This article represents only the personal views of its author and does not represent the position or views of this platform. It is provided for informational purposes only and does not constitute investment advice to anyone. Any dispute between users and the author is unrelated to this platform. If any article or image on this page infringes copyright, please send the relevant proof of rights and proof of identity by email to support@aicoin.com, and platform staff will review it.
