For years, the cybersecurity industry has warned that AI would eventually be weaponized by hackers. That theoretical future just became the present.
Google’s threat intelligence team has identified what it describes as likely the first documented case of cybercriminals using a large language model to discover and exploit a zero-day vulnerability in the wild. The target: a flaw in a widely used open-source system administration tool that allowed attackers to bypass two-factor authentication.
What happened
The vulnerability was found in a Python script within a popular open-source login platform. Attackers identified a flaw that, when exploited, could circumvent the 2FA protections that millions of users and organizations rely on as a critical second layer of security.
Here’s what sets this case apart from previous attacks: the exploit code itself appears to have been generated by an AI model. Google’s researchers identified telltale signs of LLM output in the code, including unusually verbose inline comments and coding patterns more characteristic of AI-generated text than of human-written scripts.
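Google has not published its detection method, but one of the cited signals, unusually verbose inline comments, can be approximated with a simple heuristic. The sketch below (the 0.4 threshold and the sample snippet are illustrative assumptions, not Google's methodology) measures how comment-heavy a piece of Python code is:

```python
# Illustrative heuristic only: a high comment-to-code ratio is one weak
# signal of LLM-generated code. The 0.4 threshold is an arbitrary
# assumption for demonstration, not a published detection rule.

def comment_density(source: str) -> float:
    """Return the fraction of non-blank lines containing a comment."""
    lines = [ln.strip() for ln in source.splitlines() if ln.strip()]
    if not lines:
        return 0.0
    commented = sum(1 for ln in lines if ln.startswith("#") or "  #" in ln)
    return commented / len(lines)

def looks_unusually_verbose(source: str, threshold: float = 0.4) -> bool:
    return comment_density(source) >= threshold

# A hypothetical snippet with the over-commented style often seen in
# LLM output: every trivial statement gets its own explanation.
sample = """\
# Initialize the counter variable to zero
count = 0
# Iterate over every item in the list
for item in items:
    # Increment the counter by one
    count += 1
"""
```

A real classifier would weigh many more signals (naming patterns, boilerplate phrasing, structural regularity); this illustrates only the single signal the researchers mentioned.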
Google coordinated with the affected vendor to patch the vulnerability before any confirmed damage occurred.
Why AI-assisted exploitation changes the game
Zero-day vulnerabilities, by definition, are flaws that the software vendor doesn’t know about yet. Finding them has traditionally required deep technical expertise, patience, and significant time investment. That’s what made zero-days rare and expensive: a single zero-day exploit can sell for hundreds of thousands of dollars on underground markets precisely because such flaws are so hard to find.
Google’s researchers have noted that state actors in China and North Korea are reportedly using AI to explore potential exploits at scale.
What this means for crypto
The specific vulnerability in this case involved bypassing two-factor authentication, which is one of the foundational security measures used across cryptocurrency exchanges, DeFi platforms, and wallet providers.
Exchanges and DeFi protocols commonly rely on open-source tools and libraries for authentication, access control, and transaction signing. If AI can systematically probe these codebases for vulnerabilities that human auditors have missed, the attack surface for the entire industry expands.
DeFi platforms face a related but distinct risk. Many decentralized protocols integrate with open-source components at various layers of their stack. Smart contract audits have become standard practice, but the security of surrounding infrastructure, including login systems, admin panels, and API gateways, doesn’t always receive the same scrutiny. AI-discovered vulnerabilities in those layers could provide attackers with indirect paths to funds that smart contract audits would never catch.
Projects and exchanges that rely heavily on open-source authentication tools should be conducting immediate reviews of their dependencies. The patch for this specific vulnerability was deployed before exploitation caused confirmed damage, but the next AI-discovered zero-day might not come with a warning from Google’s threat intelligence team.
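As a starting point for such a dependency review, even a trivial check over a Python requirements list can surface packages that aren't pinned to an exact version and could silently pull in a newly compromised release. A minimal sketch (the package names are placeholders; a real review should also run a vulnerability scanner such as pip-audit against a known-CVE database):

```python
# Illustrative sketch: flag requirements.txt-style entries that lack an
# exact '==' version pin. Unpinned dependencies can resolve to a newer,
# potentially compromised release at install time.

def unpinned(requirements: list[str]) -> list[str]:
    """Return requirement lines without an exact '==' version pin."""
    flagged = []
    for line in requirements:
        line = line.split("#", 1)[0].strip()   # drop comments and blanks
        if line and "==" not in line:
            flagged.append(line)
    return flagged

# Placeholder dependency list for demonstration.
reqs = [
    "flask==2.3.2",
    "requests>=2.0",       # range pin: resolves to latest matching release
    "some-auth-lib",       # no pin at all
    "# build tooling below",
]
```

Pinning alone doesn't protect against a zero-day in the pinned version, of course; it only ensures you know exactly which code you're running when a patch advisory lands.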
The post Google warns of first known case of AI-assisted hacking appeared first on azeritimes.com.
