Q&A: How Adversaries Are Weaponizing AI – Insights from Google's Threat Intelligence Report
Since February 2026, Google Threat Intelligence Group (GTIG) has tracked a significant shift in how adversaries use AI—moving from experimental projects to large-scale, industrial applications. Their latest report reveals that AI now powers everything from zero-day exploit creation to autonomous malware and sophisticated disinformation campaigns. Below, we break down the key findings into an accessible Q&A format.
1. Are threat actors really using AI to discover new vulnerabilities?
Yes, and for the first time, GTIG identified a criminal group that developed a zero-day exploit with the help of AI. The group planned to use it in a mass exploitation event, but Google's proactive discovery may have prevented that attack. Beyond this incident, state-sponsored groups from the People's Republic of China (PRC) and the Democratic People's Republic of Korea (DPRK) have also shown strong interest in using AI to find software flaws. AI accelerates vulnerability research by analyzing code patterns and flagging likely weaknesses, which makes it a powerful tool for both offensive and defensive teams. Adversaries, however, are now leveraging it to produce working exploits faster than ever, shrinking the window defenders have to patch.
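To make the dual-use point concrete, here is a minimal sketch of an LLM-assisted triage loop over a codebase. Everything in it is illustrative: the `query_llm` helper is a hypothetical stand-in for a real chat-completion client, and the prompt wording is invented, not taken from the GTIG report.

```python
# Hypothetical sketch: LLM-assisted triage of source files for likely weaknesses.
from pathlib import Path

VULN_PROMPT = (
    "Review the following C function for memory-safety issues "
    "(buffer overflow, use-after-free, integer overflow). "
    "Reply CLEAN, or one line per suspected flaw.\n\n{code}"
)

def query_llm(prompt: str) -> str:
    """Stand-in for a real chat-completion call (an OpenAI- or Gemini-style client)."""
    raise NotImplementedError("wire up your LLM client here")

def triage(source_dir: str) -> dict[str, str]:
    """Ask the model about every .c file and collect the non-CLEAN verdicts."""
    findings = {}
    for path in Path(source_dir).rglob("*.c"):
        verdict = query_llm(VULN_PROMPT.format(code=path.read_text()))
        if verdict.strip() != "CLEAN":
            findings[str(path)] = verdict
    return findings
```

The same loop serves a defender auditing their own code and an adversary scanning a target's; only the follow-through differs, which is exactly the dual-use dynamic the report highlights.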

2. How does AI help adversaries evade defenses?
AI-driven coding allows adversaries to build sophisticated defense evasion tools much more quickly. For example, threat actors linked to Russia have used AI to generate polymorphic malware—code that constantly changes its structure to avoid detection. They also create obfuscation networks and integrate AI-generated decoy logic into their malware, making it harder for security tools to recognize malicious behavior. By automating these development cycles, attackers can continuously produce new variants and infrastructure, staying one step ahead of signature-based defenses. This marks a shift from manual coding to AI-assisted automation, significantly increasing the speed and scale of their operations.
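Why this defeats signature matching is easy to show with entirely benign code: any meaning-preserving mutation changes a file's cryptographic fingerprint, so each AI-generated variant looks brand new to a hash- or byte-pattern scanner. The toy below is an illustration of that brittleness, not attacker tooling; it just appends a junk comment and rehashes.

```python
# Toy demonstration: semantically identical payloads, different fingerprints.
import hashlib
import random
import string

PAYLOAD = 'print("hello")  # benign stand-in for any program body\n'

def mutate(src: str) -> str:
    """Apply a meaning-preserving change -- the simplest analogue of the
    renaming and reordering a polymorphic engine performs."""
    junk = "".join(random.choices(string.ascii_letters, k=12))
    return src + f"# {junk}\n"

for i in range(3):
    digest = hashlib.sha256(mutate(PAYLOAD).encode()).hexdigest()
    print(f"variant {i}: {digest[:16]}...")  # a new signature every time
```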
3. What is PROMPTSPY and why is it important?
PROMPTSPY is a new type of autonomous malware that represents a major evolution in attack orchestration. Unlike traditional malware that follows fixed instructions, PROMPTSPY uses AI models to interpret the victim's system state and dynamically generate commands. This allows it to adapt in real time, manipulate environments, and carry out complex tasks without constant human oversight. GTIG's analysis uncovered previously unreported capabilities, showing how attackers can offload operational decisions to AI for faster, more scalable attacks. Essentially, PROMPTSPY turns the AI model into the brain of the malware, enabling it to learn and react on the fly—a worrying development for defenders.
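Architecturally, this matches the now-standard agent loop: observe state, ask a model for the next step, act, repeat. The skeleton below is a conceptual sketch of that pattern, not PROMPTSPY's actual code; `query_llm` is again a hypothetical stub, and the human-approval gate is the line autonomous malware omits.

```python
# Conceptual agent loop: the model, not a hard-coded script, picks each command.
import platform
import subprocess

def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion call."""
    raise NotImplementedError

def observe() -> str:
    """Summarize local system state for the model (trivially, here)."""
    return f"os={platform.system()} release={platform.release()}"

def agent_loop(goal: str, max_steps: int = 5) -> None:
    history: list[tuple[str, int]] = []
    for _ in range(max_steps):
        prompt = (f"Goal: {goal}\nState: {observe()}\n"
                  f"History: {history}\nNext shell command, or DONE:")
        cmd = query_llm(prompt).strip()
        if cmd == "DONE":
            break
        # Approval gate: an admin assistant asks; autonomous malware does not.
        if input(f"run `{cmd}`? [y/N] ").lower() != "y":
            break
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        history.append((cmd, result.returncode))
```

One practical implication for defenders: an implant that consults a model between actions produces beaconing to LLM API endpoints, a traffic pattern that traditional C2 detection may not yet watch for.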
4. How are adversaries using AI for information operations?
In influence campaigns, adversaries employ AI as a high-speed content engine that supports every phase of the operation. Information operations (IO) actors use it to fabricate digital consensus, generating synthetic media and deepfake content at scale. A notable example is the pro-Russia campaign "Operation Overload," which used AI to flood platforms with misleading content, create fake personas, and amplify divisive narratives. These tools let IO actors produce vast amounts of convincing material quickly, making it harder for audiences to distinguish fact from fiction. As AI models grow more sophisticated, such operations will become even more realistic and harder to counter.

5. How do attackers get access to premium AI models?
Threat actors are using obfuscated LLM access techniques to bypass usage limits and remain anonymous. They employ professionalized middleware and automated registration pipelines to sign up for premium-tier model access without revealing their identity. This infrastructure enables large-scale misuse of services like ChatGPT or Gemini, often by cycling through trial accounts or programmatically creating new ones. By subsidizing their operations through trial abuse, attackers can run thousands of queries daily for tasks like generating phishing emails, writing malicious code, or fine-tuning propaganda. This underground market for AI access poses a significant challenge for providers trying to enforce responsible use policies.
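From the provider side, one first-line countermeasure is behavioral: automated registration pipelines leave statistical fingerprints, such as bursts of signups from a single network block. The heuristic below is a deliberately simplified, hypothetical illustration of that idea (the event format and thresholds are invented); production abuse detection layers on far richer signals.

```python
# Simplified, hypothetical detector for registration-pipeline abuse:
# flag /24 network blocks with a burst of signups inside a short window.
from collections import defaultdict

SignupEvent = tuple[float, str]  # (unix_timestamp, ip_address)

def flag_bursts(events: list[SignupEvent],
                window_s: float = 600.0,
                threshold: int = 20) -> set[str]:
    by_block = defaultdict(list)
    for ts, ip in events:
        block = ".".join(ip.split(".")[:3]) + ".0/24"
        by_block[block].append(ts)

    flagged = set()
    for block, stamps in by_block.items():
        stamps.sort()
        left = 0
        for right, ts in enumerate(stamps):  # sliding window over timestamps
            while ts - stamps[left] > window_s:
                left += 1
            if right - left + 1 >= threshold:
                flagged.add(block)
                break
    return flagged
```

A sliding window per /24 block catches naive pipelines; determined actors rotate proxies, which is presumably why providers pair heuristics like this with device fingerprinting and payment signals.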
6. What role do supply chain attacks play in targeting AI environments?
Supply chain attacks are becoming a key initial access vector for compromising AI environments. Groups like "TeamPCP" (aka UNC6780) target AI software dependencies, such as libraries, frameworks, or cloud services, to inject malicious code. Once inside, they can steal training data, manipulate models, or use the compromised AI infrastructure to stage further attacks. These compromises are particularly dangerous because a single trusted dependency can reach thousands of downstream users. As organizations rush to adopt AI, they often overlook the security of their software supply chains, handing adversaries a ready-made entry point. GTIG warns that this trend will likely accelerate, requiring stronger vetting of AI-related components.
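On the defensive side, the baseline mitigation that warning implies is unglamorous: pin and verify what you install. Below is a minimal sketch, assuming a simple lockfile of `artifact-name sha256` pairs; that format is hypothetical, standing in for mechanisms like pip's `--require-hashes` mode or an SBOM check.

```python
# Minimal dependency-vetting sketch: verify downloaded artifacts against
# pinned SHA-256 digests before anything gets installed or imported.
import hashlib
import sys
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(lockfile: Path, artifact_dir: Path) -> bool:
    """Lockfile lines look like: `somepkg-1.2.3.whl <sha256 hex>`
    (a hypothetical format, for illustration only)."""
    ok = True
    for line in lockfile.read_text().splitlines():
        name, expected = line.split()
        if sha256_of(artifact_dir / name) != expected:
            print(f"MISMATCH: {name}", file=sys.stderr)
            ok = False
    return ok
```

Real toolchains build in the same idea: pip's `--require-hashes`, npm's lockfile integrity fields, and artifact signing all bind a dependency to a known-good digest so a tampered component fails closed.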