A small, unknown band of hackers pulled off history’s first recorded, truly AI-directed cyberattack earlier this year, stealing troves of data from the government of Mexico in the process. Yet when the enterprising ne’er-do-wells tried to bridge the gap from IT to OT systems, the AI had no luck.
Between December 2025 and February 2026, the mysterious hackers targeted at least nine entities of the Mexican government, including its federal tax authority (Servicio de Administración Tributaria), National Electoral Institute, the Mexico City civil registry, and a handful of state governments, according to Gambit Security. But how could a few people, seemingly unaffiliated with any nation-state or known advanced persistent threat (APT) group, compromise so many high-value organizations?
With AI, of course.
The group leaned more heavily on Claude Code than any group before it, using the bot to generate a hefty exploitation framework from scratch and to guide them, step by step, through exploiting each system they came across. It worked: even the weakest of jailbreak attempts sufficed to bypass the model's guardrails. They ended up with access to millions of tax records, property records, and more.
A new report from Dragos summarizes a unique episode in the campaign, when the bad guys reached a technically different sort of target: the water and drainage utility for the city of Monterrey in northeastern Mexico. After rampaging through a national government, their progress was suddenly stymied when — even buoyed as they were by the wonders of AI — they failed to leverage their IT network access into OT network access. They left with superficial loot, having caused no serious damage.
IT-OT (Non-)Convergence
The hackers first entered the utility’s information network through a Web portal, probably using stolen credentials. They established a foothold, then they asked their AI for the lay of the land.
Claude looked around, then came back with the results. In particular, it took the liberty of pointing out one server hosting a gateway called vNode. Industrial gateways like vNode connect sensitive operational networks, where control systems run valuable and dangerous machinery, with enterprise IT networks, where employees watch the machinery but also email and scroll TikTok. The “most promising next step” in their attack, the robot suggested, was to attack that gateway via its Web interface, with the potential for “MASSIVE impact if you commit.”
Though vNode may be bidirectional out of the box, it offers careful OT operators a data diode module that ensures data can travel only one way: from the OT network out to IT, never in reverse.
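Hardware data diodes enforce that one-way rule at the physical layer. Purely as a conceptual model (the class and method names below are invented for illustration, not vNode's API), the policy amounts to a relay with no return path:

```python
# Conceptual model of a data diode's one-way policy (hypothetical names;
# real diodes enforce this in hardware, not application code).

class DataDiode:
    """Relays OT telemetry out to IT; drops everything inbound."""

    def __init__(self):
        self.it_side_log = []  # messages delivered to the IT network

    def forward_ot_to_it(self, message: str) -> None:
        # Allowed direction: OT -> IT, e.g., sensor readings for dashboards.
        self.it_side_log.append(message)

    def forward_it_to_ot(self, message: str) -> None:
        # Forbidden direction: with no return path, commands from the IT
        # network (or an attacker sitting on it) can never reach OT.
        raise PermissionError("data diode: IT -> OT traffic is not possible")
```

With that module in place, an attacker who owns the IT side can read what the plant publishes but cannot push anything back toward the machinery.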
Assuming it wasn’t hiding a data diode, Claude helped the attackers identify a Web interface used for authentication and suggested they spray it with login attempts. It researched vendor documentation and other public resources to generate a list of login combos with relatively high probabilities of success: default credentials and credentials swiped earlier in the campaign from other government systems, for example.
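The prioritization logic described here can be sketched in a few lines. This is a hypothetical reconstruction, not code from the campaign, and every username and password below is invented: vendor defaults go first, previously stolen credentials second, with duplicates removed while preserving that priority order.

```python
# Hypothetical sketch of building a prioritized credential list for spraying.
# All credentials below are invented placeholders for illustration.

DEFAULT_CREDS = [("admin", "admin"), ("admin", "password")]      # from vendor docs
STOLEN_CREDS = [("jlopez", "Verano2025!"), ("admin", "admin")]   # earlier campaign loot

def build_spray_list(defaults, stolen):
    """Merge candidate credentials, defaults first, dropping duplicates."""
    seen, ordered = set(), []
    for pair in defaults + stolen:
        if pair not in seen:
            seen.add(pair)
            ordered.append(pair)
    return ordered
```

The point of the ordering is efficiency under a lockout budget: the highest-probability guesses are spent before any account-lockout or rate-limit threshold is reached.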
Claude orchestrated one round of password spraying. No luck. It tried again. Still, nothing. After that, it gave up. In place of OT network access, it provided the attackers with a summary of events titled “What Didn’t Work (Well-Protected Infrastructure).” The attackers exited the utility with a relative pittance: some procurement and vendor records, stolen from the IT network.
How Good Is AI at Cyberattacking? Now We Know
It took the malicious underground precisely three years to pull off a properly AI-guided cyberattack campaign.
Between December 2022 and December 2025, threat actors used commercial AI tools and cheap ripoffs to inform their research and targeting. They used ChatGPT to generate malware and to support phishing attempts. If terms like “AI-driven” were used to describe any cyberattacks in that three-year window, they were used too loosely.
What happened in Mexico is, by all accounts, the first broadly successful, significant campaign in which the threat actors were not at the wheel. This was AI showing what it could do, for hackers not talented enough to do it themselves.
The attack was “quite impressive [but] there is a ceiling on what large language models (LLMs) can do,” says Eyal Sela, the author of that report. That the attackers in this case glided so successfully through government agency databases, only to be stumped by a gateway login screen, perfectly illustrates Sela’s point. “When you give them a task, they can go quite far nowadays, but they cannot solve any problem. The AI does not solve the problem that a professional does not know how to solve. And even with Mythos, I bet that’s the case,” Sela says.
Dragos associate principal adversary hunter Jay Deen adds, “AI primarily reduced the time, effort, and expertise required to identify and leverage existing IT weaknesses, rather than bypassing mature security controls.”
It follows, then, that diligent cybersecurity hygiene — even on its own — is a significant moat against AI-driven attacks. “The activity observed in this case reinforces the importance of fundamental OT security controls at the network perimeter, such as network segmentation, secure remote access, asset visibility, and monitoring within OT networks,” Deen says.
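As a minimal illustration of the monitoring Deen mentions (a sketch, not drawn from the report), a perimeter detector might flag any source that fails logins across many distinct usernames in a short window, which is the classic signature of a password spray:

```python
# Minimal password-spray detector sketch: flag source IPs whose failed
# logins span many distinct usernames (threshold is an assumed tuning knob).

def detect_spray(failed_logins, threshold=5):
    """failed_logins: iterable of (source_ip, username) tuples from an auth log.

    Returns the source IPs that attempted at least `threshold` distinct
    usernames, in first-seen order.
    """
    users_per_ip = {}
    for ip, user in failed_logins:
        users_per_ip.setdefault(ip, set()).add(user)
    return [ip for ip, users in users_per_ip.items() if len(users) >= threshold]
```

A real deployment would add time windows and alerting, but even this crude version would have lit up during the two spray rounds described above.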
