OPINION
On March 10, 2026, Microsoft patched CVE-2026-26144, a cross-site scripting (XSS) vulnerability in Excel. XSS in Office isn’t anything new, but what makes this XSS different is what happens after the script executes.
The vulnerability chains with Copilot Agent mode. An attacker embeds a malicious payload in an Excel file. After a user opens it, the XSS fires without the user ever clicking anything. However, unlike most XSS attacks, which aim to steal a session cookie or redirect the user to a phishing site, this attack hijacks the Copilot Agent and silently exfiltrates data from the spreadsheet to an attacker-controlled endpoint: no user interaction, no visual prompt to indicate that anything had happened. The AI does the exfiltration for you.
Zero Day Initiative’s Dustin Childs called it “a fascinating bug” and warned that this attack scenario will become more common. While that is true, it is an understatement. This is not merely a single bug; it marks the start of a new wave of exploits that leverage AI agents’ capabilities.
For 30 years, we have categorized vulnerabilities by type, such as XSS, SQL injection, buffer overflow, and path traversal. Based on those classifications, we build detection rules, set patch priorities, and train developers on them. The mental model is that the vulnerability category determines the impact: an XSS steals cookies, an SSRF leaks internal data, and a command injection grants shell access.
AI agents have broken this model. When an AI agent operates inside the application, every traditional vulnerability gains a new capability: autonomous action. The XSS that previously stole a cookie can now instruct Copilot to read every cell in the workbook and post the contents to an external URL. The potential damage is no longer bounded by what the exploit code can do. It is bounded by the permissions granted to the AI agent.
The hardest lesson I learned from production is that the trust boundary between an application and its AI agent is effectively non-existent. Copilot Agent in Excel can read, analyze, and transmit data because that is what Excel does. There is no separate permission layer between “what Excel can access” and “what Copilot can do with that access.” When the application is compromised, the AI inherits the compromise automatically.
This concept is what I call “privilege amplification.” The bug serves as the entry point, while AI acts as the weapon. The blast radius is determined by the AI agent’s access scope rather than the exploit’s technical capabilities.
What to Do Beyond Patching
You should patch CVE-2026-26144. That’s the minimum required to close the hole, but the architectural problem persists across every application that embeds an AI agent or assistant.
Restrict outbound network access from AI-enabled applications. If Excel with Copilot Agent does not require the ability to make arbitrary HTTP requests, block all egress traffic at the network layer to prevent unknown endpoints from being contacted. This single control would limit the exfiltration path for CVE-2026-26144.
Monitor AI-initiated network activity as a distinct detection category. Your DLP and network monitoring tools probably treat user-initiated file uploads and AI-initiated data transfers as the same thing. They should not. Any Excel process that makes HTTP POST requests to unfamiliar endpoints is worth alerting on, especially if the request originated from the AI subsystem rather than a user action.
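The egress restriction and the monitoring rule described above can be combined into one check, shown here as an added example that is not from the original article or any vendor tool. The event schema, the `initiator` field, and the allowlisted domains are assumptions, and the sample events are constructed.

```python
import json
from urllib.parse import urlsplit

# Hypothetical egress allowlist; replace with the endpoints your deployment needs.
ALLOWED_SUFFIXES = ("approved-saas.example", "corp.example")
WATCHED_PROCESSES = {"excel.exe"}

# Constructed events in an assumed schema. "initiator" is a hypothetical tag that
# telemetry would need to supply to separate agent traffic from user actions.
SAMPLE_EVENTS = """\
{"host":"fin-ws-07","process":"EXCEL.EXE","method":"GET","url":"https://cdn.approved-saas.example/addin.js","bytes_out":312,"initiator":"user"}
{"host":"fin-ws-07","process":"EXCEL.EXE","method":"POST","url":"https://api.corp.example/save","bytes_out":20480,"initiator":"user"}
{"host":"fin-ws-07","process":"EXCEL.EXE","method":"POST","url":"https://corp.example.collector.test/u","bytes_out":184220,"initiator":"agent"}
{"host":"fin-ws-12","process":"EXCEL.EXE","method":"POST","url":"https://notcorp.example/in","bytes_out":9120}
{"host":"fin-ws-12","process":"chrome.exe","method":"POST","url":"https://files.unknown.test/up","bytes_out":5000,"initiator":"user"}
"""

def host_allowed(url):
    """True only for an exact allowlisted host or a true subdomain of one."""
    host = (urlsplit(url).hostname or "").lower().rstrip(".")
    return any(host == s or host.endswith("." + s) for s in ALLOWED_SUFFIXES)

def detect(lines):
    """Return alerts for watched processes that POST outside the allowlist."""
    alerts = []
    for line in lines.splitlines():
        if not line.strip():
            continue
        ev = json.loads(line)
        if ev.get("process", "").lower() not in WATCHED_PROCESSES:
            continue
        if ev.get("method", "").upper() != "POST" or host_allowed(ev["url"]):
            continue
        # Agent-initiated transfers are a separate category, ranked above
        # user uploads; a missing tag is reported as "unknown", not dropped.
        initiator = ev.get("initiator", "unknown")
        severity = "high" if initiator == "agent" else "medium"
        alerts.append({"host": ev["host"], "dest": urlsplit(ev["url"]).hostname,
                       "initiator": initiator, "severity": severity,
                       "bytes_out": ev["bytes_out"]})
    return alerts

if __name__ == "__main__":
    for alert in detect(SAMPLE_EVENTS):
        print(alert)
```

The suffix match is deliberately strict: `corp.example.collector.test` and `notcorp.example` both contain an allowlisted string but are not allowlisted hosts, which is the mistake a substring or prefix match would make.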
Reassess AI Assistant permissions in your threat model. When you assessed the risks of installing Copilot, you likely evaluated it as a productivity tool. Look at it again as a privileged agent with both read and network access to everything the host application can access. If this application is compromised, what can the AI agent do with the attacker’s commands? If you can’t answer that question, your threat model has a gap.
Modify your prioritization for AI-enabled application vulnerabilities. An XSS in Excel might score as medium severity under traditional CVSS. An XSS that can commandeer an AI agent to exfiltrate your entire financial database is a completely different risk. Unless your scoring models are updated to account for AI amplification, security teams will need to increase the priority of vulnerabilities in AI-enabled applications manually.
CVE-2026-26144 will get patched. People will move on. The pattern won’t. Every application shipping an embedded AI agent is creating a new class of post-exploitation capability that our taxonomies, detection rules, and risk models were not designed to address. The Agentic AI era did not create new types of vulnerabilities; instead, it amplified all existing ones. The security teams that recognize this trend will reprioritize accordingly. The ones that don’t will keep triaging AI-amplified exploits as medium-severity XSS.
