The latest wave of artificial intelligence tools can significantly boost productivity, and, unfortunately, that applies to scammers and spammers too, who are now using AI to make their attacks more convincing, more compelling, and more harmful.
Google has outlined some of these evolving tactics in its latest Threat Intelligence Group report, detailing the methods that it’s seeing scammers adopt to dupe unwitting victims.
As explained by Google: “Over the last few months, Google Threat Intelligence Group (GTIG) has observed threat actors using AI to gather information, create super-realistic phishing scams and develop malware. While we haven’t observed direct attacks on frontier models or generative AI products from advanced persistent threat (APT) actors, we have seen and mitigated frequent model extraction attacks (a type of corporate espionage) from private sector entities all over the world - a threat other businesses with AI models will likely face in the near future.”
Google says that these scammers are using AI to “accelerate the attack lifecycle,” with AI tools helping them refine and adapt their approaches in response to threat detection, making scammers even more effective.
Which makes sense. AI tools improve productivity for legitimate and malicious work alike, and if scammers can find a way to systematically refine their approaches, they will.
And it’s not just low-level scammers either.
“For government-backed threat actors, large language models have become essential tools for technical research, targeting, and the rapid generation of nuanced phishing lures. Our quarterly report highlights how threat actors from the Democratic People's Republic of Korea (DPRK), Iran, the People's Republic of China (PRC), and Russia operationalized AI in late 2025 and improves our understanding of how adversarial misuse of generative AI shows up in campaigns we disrupt in the wild.”
Though Google does note that current use of AI tools by threat actors doesn’t “fundamentally alter the threat landscape.”
At least not yet.
Google says that these threat actors are utilizing AI tools in a variety of ways:
- Model Extraction Attacks: "Distillation attacks" have risen over the last year as a method of intellectual property theft.
- AI-Augmented Operations: Real-world case studies demonstrate how groups are using AI to streamline reconnaissance and build rapport with phishing targets.
- Agentic AI: Threat actors are beginning to show interest in building agentic AI capabilities to support malware and tooling development.
- AI-Integrated Malware: There are new malware families, such as HONESTCUE, that experiment with using Gemini's application programming interface (API) to generate code that enables download and execution of second-stage malware.
- Underground "Jailbreak" Ecosystem: Malicious services like Xanthorox are emerging in the underground, claiming to be independent models while actually relying on jailbroken commercial APIs and open-source Model Context Protocol (MCP) servers.
It’s no surprise that AI tools are being used for these kinds of attacks, but it’s worth noting that scammers are becoming more sophisticated, and more effective, as they sharpen their approaches with the latest generative AI models.
Essentially, this is a warning that you need to be careful about the links that you click, and the material you engage with, because scammers are getting much better at stealing information.
You can read Google’s full threat report for Q4, which includes more detail on AI-assisted attacks, here.