When a global financial services firm sought Sam’s guidance, the problem seemed familiar. The firm had deployed AI tools across its business. Adoption was uneven, and the gap between teams was growing.
In some corners of the organization, people were already using AI to draft client materials, summarize research, and speed up analysis. In others, they avoided it entirely: unsure what was permitted, worried about quality, or skeptical that leadership really meant it. Managers were fielding questions they weren't equipped to answer: "If my team uses AI, what changes in our standards? What happens to accountability?"
The leadership team quickly realized the problem wasn't the technology. It was the people around it. The evidence is clear. BCG's 2024 research finds that the top AI performers invest 70% of their transformation resources in people and processes, not technology. Mercer's Global Talent Trends 2026 finds that employee concern about AI-driven job loss has surged from 28% to 40% in two years—anxiety that impedes value creation unless leaders address it directly. The World Economic Forum's Future of Jobs Report 2025 projects that 39% of core workforce skills will change by 2030. AI has not made human development less important. It has made it the primary lever for competitive advantage.
Based on our work with senior executives—Jenny as an executive coach and leadership development expert, Sam as a global transformation leader who helps organizations redesign how they develop and deploy talent—we have identified four strategies for building the learning culture that makes AI investments work.
1. Make It Safe to Try
The first capability is cultural, not technical. Mercer's research finds that for innovation to succeed, employees must feel safe to experiment, ideate, and face potential failure. McKinsey's research on psychological safety finds that a positive team climate is the single most critical driver of willingness to experiment. Yet the same research finds that fewer than half of employees report working in one. That gap is where most AI adoption efforts quietly die.
“Michael,” a senior marketing and sales leader at a global consumer packaged goods company whom Jenny coached, worked with his team to define what good experimentation looked like, named the behaviors that signaled progress, and made clear that early mistakes were expected, not penalized. Within six months, voluntary AI tool usage across his team had increased by more than 40%, and managers who had previously avoided AI began openly sharing what they were testing in team meetings—modeling the curiosity the culture needed. “We can buy the best AI on the market,” he told Jenny. “But if our managers don’t know how to lead differently, the tools are just expensive noise.”
Four practices make experimentation safe:
- Provide access to tools, focused training, and human–AI coaching at every level
- Model the right behaviors from the top: leaders who use AI openly and share what didn’t work give others permission to do the same
- Make AI fluency visible in promotion and talent decisions
- Treat adoption as a change management effort, not an IT rollout
Pro tip: Run a “psychological safety audit” before your AI rollout. Ask managers: Do your team members feel safe admitting they don’t know how to use a new tool? If the honest answer is no, address the culture first. No training or tooling will overcome a team that’s afraid to try.
2. Build Capability That Matches the Work
Once people are willing to try, the second barrier appears: they don’t know how to use AI well for their specific work. Generic training rarely closes this gap. The organizations making real progress have moved from one-size-fits-all workshops to role-based enablement: practical tools, prompt playbooks, communities of practice, and coaching anchored in the work they actually do.
This was the friction Michael’s team encountered. Employees weren’t resistant—they were underprepared. They hadn’t been shown what “good” looked like for their role: how to draft a compliant client summary with AI, how to validate AI-generated segmentation analysis, or how to build a prompt that produced usable output. Without that guidance, the tool felt risky, not helpful.
The 70-20-10 learning model holds that 70% of adult learning comes from on-the-job experience, 20% from coaching and social interaction, and only 10% from formal training. Yet most AI training programs default to exactly the kind of formal instruction—mandatory modules, certification courses—that the model treats as the smallest contributor to how people actually learn. The most effective programs embed AI into real workflows first, then surround that experience with coaching and peer learning—using formal training as a foundation, not the primary event.
Michael assigned “AI Coach” responsibilities across key projects and launched “AI Office Hours” so employees could experiment and learn together in real workflows rather than in isolation. AI Coaches became peer resources, not gatekeepers—colleagues who could demonstrate what a strong prompt looked like for a client brief or walk someone through validating AI-generated analysis before it went external. Within three months, the sessions had become standing fixtures, with attendance doubling as word spread that the learning was practical and immediately applicable. Employees who had been hesitant began bringing their own use cases, and the team’s output quality on AI-assisted work measurably improved.
Pro tip: Start with the tasks your team already does repeatedly. Identify two or three high-frequency, low-risk workflows and build role-specific AI guidance around those. Competence built in context spreads faster than training delivered in a classroom.
3. Govern for Speed, Not Just Safety
As AI usage expands, a governance gap opens. Managers start asking questions no one has answered: What data can we use? Who reviews AI-generated client materials? What happens if the output is wrong? Without clear answers, even willing employees hesitate.
Effective leaders treat governance as the condition that makes adoption sustainable, not a constraint on it. McKinsey finds that companies investing in trust-enabling activities—codified ethics policies, clear data governance, consistent follow-through—are nearly twice as likely to see revenue growth exceeding 10%. Short policy documents outperform long compliance frameworks that no one reads.
Michael built this in parallel with capability development. His team created a one-page “AI use framework” defining three zones: tasks where AI could be used independently, tasks requiring human review (a human in the loop) before going external, and tasks that remained human-only. That clarity didn’t slow adoption. It accelerated it. Before the framework existed, managers were making individual judgment calls about what was safe to use—and defaulting to caution. Once the three zones were defined and shared, the cognitive load of every AI decision dropped significantly. Employees stopped asking for permission on routine tasks and started spending that energy on learning how to do them well. Adoption in the “use independently” zone nearly doubled in the quarter after the framework launched, and the volume of questions escalating to legal and compliance dropped by more than half.
Pro tip: Build a one-page AI use framework before you launch any tools. Define three zones—use independently, use with review, human-only—specific enough for a manager to apply in a team meeting. Clarity about what’s allowed is the fastest way to remove the hesitation that stalls adoption.
4. Redesign the Division of Labor
The fourth capability is the most consequential: defining clearly where AI creates value, what work belongs to humans, and how those boundaries translate into redesigned workflows and decision rights.
Eighteen months into his initiative, Michael’s team had mapped the workflows where AI could draft, organize, and synthesize, and deliberately protected work that required human judgment: reading a retailer relationship, coaching a team through a difficult quarter, making a positioning call competitors couldn’t reverse-engineer. The division wasn’t about what AI could technically do. It was about what the business needed humans to own.
The business case is clear. Over three years, BCG found AI leaders achieved 1.5x higher revenue growth and 1.6x greater shareholder returns. The differentiating factor wasn’t model sophistication—it was the deliberateness of work redesign. Mercer’s Global Talent Trends 2026 finds that 63% of C-suite leaders say redesigning work for AI will deliver the highest people-related ROI. Yet only one-third feel their workforce is ready to make it work.
Pro tip: Map your team’s highest-frequency workflows before deciding where AI fits. For each, ask: Is this where speed and consistency are the primary value? Or where judgment and accountability matter most? Build the division of labor from that answer and revisit it every six months.
AI Becomes Normal—and That Is the Point
Eighteen months after Michael launched his people development initiative—in parallel with the technology deployment, not after it—his business unit was outperforming peers across every AI-linked productivity metric. Not because it had better software. Because it had better-prepared leaders.
The leaders who drove that shift weren’t the ones who knew the most about AI. They were the ones who redesigned work, built trust, and helped people adapt. AI stopped being a special initiative and became part of the professional toolkit.
Enabling a workforce to benefit from AI is not a software rollout. It is a leadership shift. The best leaders in the AI era are not waiting for the technology to prove itself. They are investing in the people who will make it matter. Continuous development is not a benefit you offer your people. It is the strategy.
