Artificial intelligence has shifted from an experiment to an expectation. Boards press CEOs on ROI. CEOs launch enterprise rollouts. Leaders invest in tools, platforms, and governance. Yet adoption still stalls. Work-arounds spread. Risk grows. Value lags.
The failure rarely lies with the technology. It lies in adoption design. Many organizations treat AI as an IT rollout or a standard change initiative. Tools gain approval. Policies circulate. Training launches. What’s missing is the rigor leaders apply to external products. Employees receive tools without a clear value proposition. Managers face delivery pressure without added capacity. Governance favors control over learning.
The result is predictable. Hesitation rises. Burnout grows. Execution fragments, especially in the middle of the organization.
Dana, a VP leading AI enablement at a global business-to-business services firm, lived this firsthand. The mandate was clear: deploy approved AI tools across marketing, sales, and customer success within eight months. Legal and PR signed off. Training sessions launched, along with dashboards to track usage.
On paper, the rollout looked disciplined. Usage dashboards showed logins, prompts, and license activity. In practice, teams struggled to use the tools in live client work. Approved platforms added steps, limited outputs, or failed to match real workflows. Under delivery pressure, some teams tested briefly and moved on. Others complied superficially. Many shifted core work to external tools that felt faster and more flexible, while using approved systems only enough to register activity.
Dana ran into what we call the “mandate trap.” Leaders mandate AI from the top. The work of making it usable lands in the middle.
“We didn’t have a resistance problem,” Dana reflected. “We had a design problem.”
Her experience reflects what we see across organizations and in AI adoption workshops with C-suite and senior leaders. Teams revert to familiar workflows. Learning time disappears as daily delivery targets crowd out capability building. Worse, leaders often label this gap as resistance to AI rather than identifying and solving the underlying problems.
Through our advisory work and research (Jenny as an executive coach and learning and development expert, Noam as an AI strategist), we see three practices that separate the organizations able to scale AI from those whose rollouts stall.
Reframe ‘Resistance’ as a Workflow Problem
Leaders often label hesitation as a mindset issue. In reality, hesitation reflects risk. Employees disengage when expectations are misaligned, outputs feel unattainable, or policies are unclear. Under delivery pressure, people choose speed and safety. When AI complicates execution rather than simplifying it, adoption stalls.
Middle managers absorb the strain. They must deliver faster, coach new behaviors, manage risk, and hold uncertainty, without changes to incentives, capacity, or decision rights. Adoption breaks where pressure concentrates. The issue is not motivation. It is an internal product-market fit problem.
Internal product-market fit exists when a tool solves a real workflow problem well enough that teams keep using it under real constraints. This insight shifted Dana’s rollout. She stopped pushing for compliance and paused deployment to focus on solving the problems internal teams were running into.
What leaders can do:
- Diagnose hesitation: Identify where trust breaks. Unreliable outputs. Unclear revision paths. Slow approvals. Fix friction before pushing usage.
- Start small: Focus on one workflow, one outcome, one team learning together.
- Name the fear: Address job loss concerns directly. Clarify what stays human-led and how AI fits workforce plans. Psychological safety creates engagement.
- Relieve pressure: Protect learning time. Reset targets or adoption stays surface level.
When leaders treat resistance as a design signal, adoption moves from compliance to progress.
Treat Employees as ‘Customer Zero’
Leaders who succeed stop deploying AI and start selling it internally. Strong AI adoption follows a different playbook: leaders anchor change in outcomes, redesign workflows, involve employees as cocreators, and invest in learning as a core capability. Teams receive a clear value proposition tied to real workflow friction, not feature lists or policy decks. Trust grows when people understand how outputs form, how risks are managed, and where human judgment remains essential.
Early wins rarely show up as profit. They show up as faster cycles, higher-quality work, fewer errors, and less rework. Tools gain traction when they simplify work.
Dana pulled in platform teams, product marketing, communications, and functional leaders, then ran short discovery sprints with marketing, sales, and operations. She stopped asking whether teams used the tools and started asking where work slowed, where rework piled up, and where judgment mattered most.
What leaders can do:
- Anchor on outcomes: Define what should feel faster, easier, or more reliable.
- Build trust early: Set clear governance and human-in-the-loop guardrails.
- Reimagine workflows: Integrate AI into existing systems and execution moments.
- Cocreate with employees: Involve teams in discovery and testing.
- Treat learning as core work: Protect time to experiment and build confidence.
When leaders treat employees as “customer zero,” adoption shifts from compliance to sustained change.
Protect the Middle to Unlock Learning
AI adoption breaks most often in the middle. Managers must change how work gets done while hitting the same targets. They also drive most of their teams’ engagement while carrying the heaviest strain. When learning competes with delivery, delivery wins.
Effective leaders redesign these conditions. They reset expectations to protect time to learn. They reward experiments that reduce risk over time. Before scaling, they ask two questions: Does this remove real workflow friction? Do people trust it enough to use it?
Dana acted on this insight by narrowing focus instead of widening it. Teams submitted real workflow tests; she selected only those with clear impact and protected a full quarter for managers to run them end to end and share findings.
Some tools removed friction and earned trust. Others added noise. Dana scaled the winners, turned them into simple playbooks, and retired the rest. Managers moved from firefighting to coaching. Governance shifted from gatekeeping to enablement.
What leaders can do:
- Spot what works: Identify teams that are already using AI to reduce friction. Turn those efforts into repeatable practices.
- Reward learning: Recognize managers for building capability and sharing insights, not for tool usage.
- Run disciplined experiments: Require clear hypotheses, small pilots, and documented learning.
- Hold the bar high: Reward honest reporting of failures so scale stays credible.
AI transformation is an organizational design challenge, not an IT rollout. The mandate trap is avoidable. Leaders escape it when they stop pushing adoption and start earning it.
