Google Confirms First AI-Powered Malware Attacks

Google confirms first AI-powered malware attacks, marking a shift in cybersecurity as adversaries leverage AI for dynamic, hard-to-detect threats.

In a landmark development for cybersecurity, Google’s Threat Intelligence Group (GTIG) has confirmed the first documented cases of hackers deploying AI-powered malware in active cyberattacks. The discovery, announced on November 5, 2025, marks a turning point in the digital threat landscape, as adversaries are now leveraging large language models (LLMs) to dynamically generate, mutate, and conceal malicious code during attacks.

The new malware families—PromptFlux, PromptSteal, PromptLock, QuietVault, and FruitShell—represent a significant evolution in cybercrime. Unlike traditional malware, these strains use AI to adapt their behavior in real time, making them far more difficult to detect and defend against. Google’s findings, based on analysis of uploads to VirusTotal and direct observations of ongoing attacks, confirm that AI is no longer just a tool for phishing or reconnaissance but is now embedded directly into malware execution.

Key Features of AI-Powered Malware

PromptFlux: The “Thinking Robot” Malware

PromptFlux is a Visual Basic Script (VBScript) malware that interacts with Google’s Gemini API to rewrite its own source code every hour. This “just-in-time” self-modification allows it to evade static signature-based detection systems. The malware uses a hard-coded API key to send highly specific prompts to Gemini, requesting new obfuscation and evasion techniques. The regenerated code is then saved to the Windows Startup folder for persistence and attempts to spread via removable drives and network shares.
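One defensive takeaway from this design: because PromptFlux persists as a script with a hard-coded API key and must reach a hosted LLM endpoint, those artifacts are themselves detectable even when the surrounding code mutates. The sketch below is a minimal triage scanner for startup-folder scripts; the indicator hostnames and key pattern are illustrative assumptions, not indicators published in Google's report.

```python
import re
from pathlib import Path

# Example indicator strings for hosted-LLM endpoints of the kind the
# reported families contact (PromptFlux -> Gemini API, PromptSteal ->
# Hugging Face API). Real samples may use different hosts; treat these
# as placeholders for your own threat-intel feed.
LLM_ENDPOINTS = [
    "generativelanguage.googleapis.com",  # Gemini API
    "api-inference.huggingface.co",       # Hugging Face Inference API
]

# Loose pattern for a hard-coded Google-style API key ("AIza" prefix).
API_KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def scan_text(text: str) -> list[str]:
    """Return the LLM-related indicators found in a script's contents."""
    hits = [ep for ep in LLM_ENDPOINTS if ep in text]
    if API_KEY_RE.search(text):
        hits.append("hard-coded API key pattern")
    return hits

def scan_startup_folder(folder: Path) -> dict[str, list[str]]:
    """Scan script files in a folder (e.g. the Windows Startup folder,
    where PromptFlux persists) and map filenames to indicator hits."""
    results: dict[str, list[str]] = {}
    for path in folder.glob("*"):
        if path.suffix.lower() in {".vbs", ".ps1", ".bat", ".js"}:
            hits = scan_text(path.read_text(errors="ignore"))
            if hits:
                results[path.name] = hits
    return results
```

A string scan like this is deliberately crude: it will not catch samples that encrypt their prompts or proxy their API traffic, but it costs almost nothing to run and flags exactly the static residue that self-rewriting code cannot easily hide.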

PromptSteal: AI-Driven Data Exfiltration

PromptSteal, attributed to the Russian state-sponsored group APT28 (Fancy Bear), uses the Hugging Face API to query the open-source model Qwen2.5-Coder-32B-Instruct. The malware generates one-line Windows commands on demand, which it executes to collect and exfiltrate sensitive data from Ukrainian entities. This approach allows attackers to adapt their data collection tactics in real time, based on the target environment.

Other Notable Strains

  • PromptLock: A proof-of-concept ransomware that uses an LLM to dynamically generate and execute malicious Lua scripts at runtime.
  • QuietVault: A credential stealer that leverages AI prompts and on-host AI CLI tools to search for and exfiltrate secrets from infected systems.
  • FruitShell: A reverse shell that contains hard-coded prompts designed to bypass LLM-powered security systems.

Industry Impact and Implications

A New Era of Autonomous Malware

Google’s report highlights that these AI-powered malware families are still largely experimental, but their emergence signals a shift toward autonomous, self-modifying malware. Traditional antivirus and endpoint detection systems, which rely on static signatures and behavioral patterns, are increasingly ineffective against threats that can rewrite themselves in real time.

  • Dynamic Code Generation: Malware can now generate new attack capabilities on demand, making it harder for defenders to predict and block malicious activity.
  • Evasion Techniques: By leveraging LLMs, malware can obfuscate its code, bypass detection, and adapt to different environments.
  • Lower Barrier to Entry: The maturation of underground marketplaces for AI tools means even less sophisticated actors can now access advanced AI-powered malware.

Underground AI Marketplaces

Google’s researchers have observed a growing ecosystem of illicit AI offerings on underground forums. These marketplaces provide tools for phishing, malware development, and vulnerability research, further democratizing access to AI-powered cybercrime.

Context and Future Outlook

Google’s findings underscore a critical shift in the cybersecurity landscape. As AI becomes more accessible, attackers are moving beyond using it for productivity gains and are now integrating it directly into their offensive operations. This trend is likely to accelerate, with future malware potentially combining AI reasoning, automation, and real-time adaptation to outpace traditional defenses.

  • Defensive Challenges: Security teams must now contend with threats that can evolve during an attack, requiring more advanced, AI-driven detection and response systems.
  • Regulatory and Ethical Concerns: The misuse of AI in cyberattacks raises questions about the responsibility of AI developers and the need for stronger safeguards.
  • Global Impact: The use of AI-powered malware by state-sponsored actors, as seen with APT28, highlights the growing intersection of cybercrime and geopolitical conflict.

Conclusion

The discovery of AI-powered malware in the wild is a wake-up call for the cybersecurity community. As Google’s report makes clear, the era of static, predictable malware is over. The future belongs to adaptive, autonomous threats that can rewrite themselves in real time. Defenders must respond with equally advanced, AI-driven solutions to stay ahead of this evolving threat landscape.

For more details, refer to Google’s full report on the Google Cloud Threat Intelligence blog.

Tags

AI-powered malware, Google, cybersecurity, APT28, large language models, PromptFlux, PromptSteal

Published on November 5, 2025 at 02:00 PM UTC