COMMENTARY
Gartner recently recommended that enterprises ban AI browsers. It’s an understandable impulse for cybersecurity practitioners. These tools have built-in AI sidebars that can leak sensitive data, backend connections to unknown third-party services, and prompt injection vulnerabilities that manipulate browser behavior. CISOs are rightfully wary.
However, employees are enamored of AI browsers. It’s understandable; AI browsers can help corporate workers book airline tickets, make hotel reservations, or compare items on Amazon.
Banning something people want to use won’t make it go away. Just like with the Prohibition laws in the United States in the 1920s, it will just push usage underground.
The browser has become the fundamental corporate user interface, with more than 85% of the workday now taking place in a browser, accessing software-as-a-service and web applications. Employees aren't asking IT for permission to enhance their productivity with AI tools; they're simply installing them and getting to work. LayerX research shows that 20% of enterprise users already have a GenAI browser extension installed.
And they're popular: Claude in Chrome (released in late 2025) has already reached 800,000 downloads on the Chrome Web Store, while Perplexity's Comet browser has surpassed 1 million downloads on Google Play.
Why Digital Prohibition Won’t Work
The challenge with blocking AI browsers is practical and strategic, and history offers an instructive parallel. When the United States banned alcohol in 1920, consumption didn’t stop; it just became harder to control and far more dangerous. Bootleggers filled the gap left by legitimate breweries. Speakeasies replaced regulated bars. Without oversight, people drank bathtub gin that could blind or kill them. The government lost both visibility into what people were drinking and any ability to regulate quality or safety.
The same dynamic plays out with AI browser bans. Users working from home, in coffee shops, and on personal devices will continue finding ways to access the tools that make them more productive. Banning AI browsers will not limit the risk they pose, but it will likely impede visibility into real cyber risks as they unfold.
Blanket bans overlook the larger transformation in how people work and why they're drawn to these tools in the first place. AI browsers genuinely help users code faster, write better, and research more efficiently. But the harder CISOs push prohibition, the more creative their users become at circumventing it, often in ways that create even greater security risks than the behavior the ban was meant to prevent.
The Last Mile Problem
What makes AI browsers particularly challenging is that they operate in the “last mile” of enterprise security: the final interface between users and the Internet. This is precisely where traditional security tools have their biggest blind spots. Think of the digital equivalent of the alley behind the speakeasy where the real business happens. For example, network solutions can’t see or control anything happening inside locally deployed browsers (of any kind), and traditional endpoint DLP can’t differentiate between “good” and “bad” browsing activity.
This means when a user pastes proprietary code into an AI sidebar, traditional security controls often can’t see it, let alone stop it. By implementing a ban that is difficult — if not impossible — to fully enforce, organizations aren’t eliminating the risk. Instead, they’re just making it invisible, operating in the shadows where the worst outcomes tend to happen.
Regulation Over Prohibition
When the US repealed Prohibition in 1933, it didn’t mean a free-for-all. In fact, it was quite the opposite. It led to the establishment of frameworks for licensing, quality control, and responsible consumption. The result was a system that balanced individual freedom with public safety and, critically, one that actually worked because it acknowledged reality.
Rather than prohibition, enterprises need controlled enablement, recognizing that AI browsers are part of the modern workspace. It will also require controls that can actually monitor and manage the risk. This might mean context-aware DLP policies that can detect when sensitive data is being shared with AI services. Or it could involve identity-based access controls that adjust permissions based on user behavior and risk profiles, or browser-layer security that provides visibility into what is actually happening in that last mile.
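To make the first of those concrete, a context-aware DLP rule can be sketched in a few lines of Python. This is a minimal illustration only: the detection patterns, the AI-service domain list, and the function name are assumptions made for the sketch, not any vendor's actual ruleset or API.

```python
import re

# Illustrative patterns for sensitive content (assumptions, not a real ruleset).
SENSITIVE_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

# Hypothetical destinations treated as external AI services in this sketch.
AI_SERVICE_DOMAINS = {"chat.example-ai.com", "sidebar.example-llm.io"}

def evaluate_paste(content: str, destination_domain: str) -> dict:
    """Return a verdict for a paste event: block only when sensitive
    content is headed to an external AI service, otherwise allow."""
    matches = [name for name, pattern in SENSITIVE_PATTERNS.items()
               if pattern.search(content)]
    risky_destination = destination_domain in AI_SERVICE_DOMAINS
    return {
        "blocked": bool(matches) and risky_destination,
        "matched": matches,
        "destination": destination_domain,
    }

# An AWS-style key pasted into an AI sidebar is blocked; ordinary
# text pasted to the same destination is allowed.
print(evaluate_paste("key=AKIAABCDEFGHIJKLMNOP", "chat.example-ai.com"))
print(evaluate_paste("quarterly roadmap notes", "chat.example-ai.com"))
```

The point of the sketch is the "context-aware" part: the same clipboard content gets different treatment depending on where it is going, which is exactly the visibility that network-layer and traditional endpoint tools lack in the last mile.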
Learn from History
The lesson from every major technology shift — from BYOD to cloud to shadow SaaS — is that users will adopt tools that make them more productive, with or without IT approval. Security teams that acknowledge this reality and work with it are far more effective than those that fight it.
Gartner’s recommendation to ban AI browsers isn’t “wrong”; the risks are indeed real. But based on our research and the lessons of history, a blanket ban without effective controls is unenforceable and counterproductive. The better approach is to meet your users where they are, with what they need, and implement controls that reflect how corporate workforces actually behave.
Prohibition failed because it fought human nature. Let’s not make the same mistake with AI browsers.
