Sunday, March 1, 2026

Google Says State-sponsored Hackers Used Gemini and Other AI Tools Across the Attack Lifecycle

Google’s Threat Intelligence Group (GTIG), in collaboration with Google DeepMind, released a report on February 12, 2026, describing how state-backed actors from China, Iran, North Korea, and Russia are integrating mainstream AI tools, most notably Gemini, into day-to-day cyber operations. The core message is straightforward: AI is no longer a novelty in threat tradecraft; it is actively compressing attacker timelines and expanding the scale of credible targeting.

GTIG’s findings indicate the misuse spans multiple phases of the intrusion lifecycle, from early reconnaissance and social engineering through to malware development and large-scale attempts to extract model behavior. By positioning commercial AI as both a productivity layer and an operational enabler, the report reframes AI tooling as a frontline risk domain for defenders and vendors alike.

How AI is accelerating attacker workflows

One of the clearest patterns is AI-assisted reconnaissance and persona building, where adversaries use models to collect, summarize, and operationalize open-source intelligence at speed. In the report’s examples, North Korean activity tracked as UNC2970 used Gemini to profile defense and cybersecurity firms, showing how AI can industrialize target development with fewer human cycles.

The same AI capabilities are being used to improve engagement rates in phishing and influence operations by removing language and cultural friction. GTIG highlighted the Iranian actor APT42, which used elaborate personas and translation to make outreach feel culturally native and therefore harder to dismiss as “generic” phishing. In practical terms, this elevates the baseline quality of social engineering, which increases the probability of initial access even when technical controls are strong.

AI is also lowering the barrier to malware development and iterative refinement, which changes the throughput of malicious tooling. Google observed malware such as HONESTCUE using Gemini APIs to generate second-stage C# source code that executes in memory, and a phishing kit dubbed COINBAIT that mimicked a cryptocurrency exchange and appears to have been accelerated by AI code generation. The operational implication is that defenders should expect faster mutation cycles and more variation across campaigns, which can erode the effectiveness of static indicators.

More concerning, GTIG described novel malware families that invoke large language models during execution to create or obfuscate scripts on demand. The report’s reference to PROMPTFLUX and PROMPTSTEAL signals a shift toward runtime “LLM-in-the-loop” behavior that can weaken signature-based detections and increase operational tempo. That design pattern forces a pivot toward behavioral detection, access control hardening, and tighter monitoring of suspicious model and API interactions.
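One concrete way to act on that pivot is to watch for processes calling LLM API endpoints that have no business reason to do so. The sketch below is illustrative only and not from the GTIG report: the domain list, the allowlist, and the event schema are all assumptions a defender would tailor to their own telemetry.

```python
# Hypothetical heuristic: flag process-level network events where an
# unexpected binary reaches out to a known LLM API endpoint at runtime,
# the "LLM-in-the-loop" pattern described for PROMPTFLUX/PROMPTSTEAL.

# Assumed endpoint list -- extend with whatever model APIs matter locally.
LLM_API_DOMAINS = {
    "generativelanguage.googleapis.com",  # Gemini API
    "api.openai.com",
}

# Hypothetical allowlist of processes expected to call model APIs.
ALLOWED_PROCESSES = {"chrome.exe", "approved_ai_agent.exe"}

def flag_suspicious_llm_calls(events):
    """events: iterable of dicts with 'process' and 'dest_domain' keys.

    Returns the events where a non-allowlisted process contacted an
    LLM API domain -- candidates for analyst review, not verdicts.
    """
    return [
        e for e in events
        if e["dest_domain"] in LLM_API_DOMAINS
        and e["process"] not in ALLOWED_PROCESSES
    ]
```

In practice this would run over EDR or proxy logs; the point is that the signal is behavioral (who is talking to a model API, and when) rather than a static signature of the generated payload.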

Model extraction turns into an IP and resilience issue

Beyond intrusion enablement, the report flags model extraction as a direct business-model threat, not just a security nuisance. Google detected organized prompting activity—exceeding 100,000 prompts in some campaigns—aimed at distilling Gemini’s reasoning and replicating its behavior, which the report frames as a competitive and intellectual-property risk. If adversaries can replicate advanced model behavior at materially lower cost, the downstream impact is reduced differentiation and higher systemic risk across the AI supply chain.
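The defining feature of the extraction campaigns GTIG describes is sheer prompt volume per account. A minimal sketch of the corresponding detection idea, with an assumed per-day threshold and log schema (neither comes from the report), might look like:

```python
# Hypothetical volume heuristic: surface accounts whose daily prompt
# counts look like systematic model-extraction runs rather than normal use.
from collections import Counter

EXTRACTION_THRESHOLD = 10_000  # assumed per-account, per-day cutoff

def flag_extraction_candidates(prompt_log, threshold=EXTRACTION_THRESHOLD):
    """prompt_log: iterable of (account_id, day) tuples, one per prompt.

    Returns account IDs that met or exceeded the threshold on any day.
    """
    counts = Counter(prompt_log)  # (account, day) -> prompt count
    return {acct for (acct, day), n in counts.items() if n >= threshold}
```

A production version would layer on prompt-similarity and coverage signals (extraction runs tend to sweep a model's behavior space methodically), but even raw volume makes campaigns on the scale of 100,000-plus prompts stand out.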

Google also outlined a defensive posture aimed at disrupting misuse and strengthening safeguards. The report states that Google has disabled accounts and assets associated with abuse and has hardened classifiers and safety mechanisms, while also integrating AI into its own security operations to improve detection and response. In governance terms, that’s a dual-track strategy: reduce attacker access where possible and raise the cost of misuse where prevention is incomplete.

For defenders, the GTIG publication effectively reprioritizes what “AI readiness” means inside security programs. Security teams are being pushed to treat commercial AI platforms as both a source of threat intelligence and an abuse surface, and to incorporate model-aware controls into incident response and monitoring. The report also calls out heightened exposure for crypto-adjacent environments, where social engineering can translate directly into custody compromise, exchange staff targeting, or developer-account takeover.

At a market level, the report’s model-extraction section adds regulatory and commercial pressure around access controls and abuse prevention. By placing model behavior theft on the same risk board as phishing and malware enablement, GTIG frames an inflection point that will require coordinated defensive upgrades, clearer vendor controls, and sustained monitoring as adversary playbooks mature.
