
For the first time, Google claims to have detected and stopped the exploitation of a zero-day vulnerability developed with the help of AI. According to a report from Google Threat Intelligence Group (GTIG), “well-known cybercriminals” were preparing to exploit a zero-day flaw that would have allowed them to bypass two-factor authentication on an “open source, web-based security tool.”
Google researchers found clues in the Python script used for the exploit that suggest AI assistance, such as embedded CVSS scoring references and boilerplate comments consistent with LLM-generated output. The exploit takes advantage of a high-level semantic flaw in the 2FA platform, the kind of bug a developer would have had to reason about carefully to catch. This follows weeks of hand-wringing about the offensive cybersecurity potential of AI models, including Anthropic’s own reporting on misuse of its models and a recently disclosed Linux vulnerability that was discovered with the help of AI.
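To illustrate the idea, here is a minimal sketch of how an analyst might flag the kinds of tell-tale artifacts described above in a suspect script. The marker patterns and the sample text are hypothetical assumptions for demonstration; this is not Google’s actual detection logic.

```python
import re

# Hypothetical "tells" that a script may have been LLM-generated.
# These patterns are illustrative assumptions, not GTIG's real heuristics.
LLM_TELL_PATTERNS = [
    r"CVSS[:\s]*\d+\.\d+",          # CVSS scores pasted into comments
    r"(?i)step \d+:",               # numbered step-by-step comments
    r"(?i)as an ai|language model",  # leftover chat-style boilerplate
]

def llm_tells(script_text: str) -> list[str]:
    """Return the patterns that match, as rough evidence of AI assistance."""
    return [p for p in LLM_TELL_PATTERNS if re.search(p, script_text)]

# Hypothetical snippet of a suspect exploit script's comments.
sample = """
# Step 1: locate the 2FA session-validation endpoint
# CVSS: 8.1 (High) -- semantic flaw in session handling
"""
print(llm_tells(sample))  # matches the CVSS pattern and the "Step N:" pattern
```

Real attribution is far messier than pattern matching, of course; this only shows why stray artifacts like CVSS annotations can stand out during analysis.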
This is the first time Google has found evidence that AI was involved in such an attack, although its researchers say they “do not believe Gemini was used.” Google says it was able to “disrupt” the operation, and that hackers are increasingly using AI to find and exploit security vulnerabilities. The report also flags AI itself as a target, noting that GTIG has seen adversaries increasingly turn their attention to the components that support AI systems, such as autonomous agent capabilities and third-party integrations.
Google’s report also describes how hackers use jailbreaking prompts to coax security information out of AI models, for example by instructing a model to role-play as a security expert. Attackers are also feeding models entire vulnerability databases, and using OpenClaw in ways that suggest an interest in refining AI-generated payloads to make them more reliable before deployment.