Opinions expressed by Entrepreneur contributors are their own.
Key Takeaways
- If you build inclusive research into your product cycle, you design with intention instead of accidentally shutting customers out.
- The customers you’re losing won’t show up in a support ticket. They’re invisible right up until you build research that makes their experience visible before you ship.
There’s a usability session from a couple of years ago that I keep coming back to. It involved an enterprise collaboration tool with tens of millions of users. A blind user navigated the interface using a screen reader.
Halfway through, the screen reader skipped an important notification entirely. A consent disclosure was right there on screen, but the user had no idea they’d agreed to anything.
For sighted users, the flow worked fine. For a blind user, the product quietly broke a promise it didn’t even know it was making.
Eight years of leading UX research across telecommunications, financial services and enterprise software have given me more moments like that one than I'd like.
After enough of them, you stop thinking of accessibility as something separate from the rest of the work. Building accessible products is the main work.
If you’re not designing with the full range of people in mind, you’re accidentally building something that shuts some of them out. And when teams commit to fixing that, the product gets better for everyone.
Stop treating accessibility as a phase two problem
Product teams often think about accessibility as a “nice-to-have”. Ship it, get it working for the majority, circle back for assistive technology users if there’s time. Except there’s never time. Or maybe there is, a year later, but by then every design decision has calcified around the assumption that your users can see the screen and use a mouse.
Good luck untangling that. It’s expensive, it’s painful and you always end up with gaps.
Here’s what should bother teams more than it does. The WHO states that 1.3 billion people globally live with some form of disability. Working-age adults with disabilities in the U.S. control roughly $490 billion in disposable income. And these are not people who call your support line.
They try the product, can’t get through a critical flow, and leave. No review, no feedback form, no signal in your analytics. Just gone.
What does inclusive research look like in practice?
Product teams assume this means spinning up a separate research track with its own budget. It doesn’t. It means changing who is included in the studies you’re already running. Scoping a usability evaluation for a big feature? Think about who’s actually out there using your product.
Some of those people navigate by screen reader. Some are primarily keyboard users. Some are neurodivergent and process information differently than your team assumed. They belong in the same study as your power users; that's what an inclusive panel actually means.
Where it really matters is in what you test. Running someone through a full workflow with assistive technology is completely different from spot-checking components in isolation. That consent notification? The team tested individual UI elements, and everything looked fine. Nobody ran the end-to-end flow with a screen reader.
As soon as a real user tried tabbing through that interface, the failure was obvious. These breakdowns hide in the seams between components, and they only come out when someone uses the product the way they’d really use it.
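A screen-reader run through the live flow is the only real test, but even a static pass over a flow's rendered HTML can surface seam failures like that missed consent notice before a participant ever hits them. Here's a minimal sketch in Python; the element and attribute checks are illustrative examples of common screen-reader traps, not a full audit, and the sample markup is hypothetical:

```python
from html.parser import HTMLParser

class FlowAudit(HTMLParser):
    """Collect interactive elements in a flow's HTML and flag ones a
    screen reader may skip. A static sketch only -- real coverage needs
    an actual screen reader (NVDA, JAWS, VoiceOver) run end to end."""

    INTERACTIVE = {"a", "button", "input", "select", "textarea"}

    def __init__(self):
        super().__init__()
        self.issues = []
        self._open = None   # interactive tag whose visible text we're reading
        self._text = ""

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag not in self.INTERACTIVE:
            return
        # aria-hidden interactive elements vanish for screen readers
        if attrs.get("aria-hidden") == "true":
            self.issues.append(f"<{tag}> is aria-hidden but interactive")
        # positive tabindex values scramble keyboard order between components
        if int(attrs.get("tabindex", 0) or 0) > 0:
            self.issues.append(f"<{tag}> uses positive tabindex")
        if tag == "input" and not (attrs.get("aria-label") or attrs.get("id")):
            self.issues.append("<input> has no label hook (id/aria-label)")
        if tag in {"a", "button"} and not attrs.get("aria-label"):
            self._open, self._text = tag, ""

    def handle_data(self, data):
        if self._open:
            self._text += data

    def handle_endtag(self, tag):
        if self._open and tag == self._open:
            if not self._text.strip():
                self.issues.append(f"<{tag}> has no accessible name")
            self._open = None

audit = FlowAudit()
# Hypothetical fragment: a consent button hidden from assistive tech,
# next to a link with no accessible name.
audit.feed('<button aria-hidden="true">OK</button><a href="/consent"></a>')
for issue in audit.issues:
    print(issue)
```

Each element here might pass a component-level review on its own; it's only when they're scanned together, as one flow, that the gaps line up into a broken experience.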
Neurodivergent participants, specifically, taught me something I didn't expect. They pick up on the friction that every user is experiencing, but that neurotypical participants may just live with. Confusing labels. Patterns that aren't consistent from one screen to the next.
Too many elements competing for your eye at once. In survey data, this shows up as mild dissatisfaction with no clear cause. Neurodivergent participants will tell you exactly what's wrong. And every time my team acted on what they said, satisfaction improved across the whole user base.
I used to write up accessibility findings in a separate report. Stakeholders treated it exactly how you'd expect: as something that could be postponed. When I started folding these insights into the same readout where I presented task completion and satisfaction data, the dynamic flipped.
They carried weight because they were sitting next to every other finding. If you want accessibility bugs to get engineering time, they need to be in the same conversation where every other usability issue gets prioritized.
AI is compounding the risk
My current work is leading research for AI-powered calling and collaboration products used by over 80 million people, and I can see a new wave of accessibility failures forming. Voice agents that fall apart with speech differences. AI-generated summaries that screen readers miss. Interfaces that rearrange themselves and destroy keyboard navigation. AI products ship fast, and every release that skips assistive technology testing quietly pushes more users out.
Flip side, though: most AI products today don't work well for people with disabilities. Teams that crack this won't just deliver for an underserved audience; they'll set the bar that regulators and enterprise procurement teams measure everybody else against.
Where to begin
Before you plan your next research study, try this: Pull up your product's most critical user flows and run them against the Web Content Accessibility Guidelines (WCAG). Published by the W3C, WCAG is the global standard for how digital content should work for people with disabilities. It asks four questions about your content: can people perceive it, operate it and understand it, and is it robust enough to work across different assistive technologies?
Most enterprise contracts and government procurement already expect Level AA conformance. Mapping your key flows against that bar will show you where the cracks are and which workflows need real testing with assistive technology users next.
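One lightweight way to make that mapping visible is to score each critical flow against the four WCAG principles and let the worst flows surface first. A minimal sketch; the flow names and pass/fail results below are hypothetical placeholders, not real audit data:

```python
# Score each critical flow against the four WCAG principles
# (Perceivable, Operable, Understandable, Robust) from a manual
# Level AA spot-check, then rank flows by how many principles fail.

POUR = ("perceivable", "operable", "understandable", "robust")

# True = the flow passed the spot-check for that principle (hypothetical data)
flows = {
    "sign-up":      {"perceivable": True,  "operable": False, "understandable": True,  "robust": True},
    "consent":      {"perceivable": False, "operable": False, "understandable": True,  "robust": False},
    "send-message": {"perceivable": True,  "operable": True,  "understandable": False, "robust": True},
}

def gaps(flows):
    """Return {flow: [failed principles]}, worst flows first."""
    out = {f: [p for p in POUR if not checks[p]] for f, checks in flows.items()}
    return dict(sorted(out.items(), key=lambda kv: -len(kv[1])))

for flow, failed in gaps(flows).items():
    where = ", ".join(failed) or "nothing -- passes the AA spot-check"
    print(f"{flow}: test with assistive-tech users on {where}")
```

The point isn't the scoring mechanics; it's that a ranked list like this turns "we should do more accessibility work" into a concrete queue of flows that need real testing with assistive technology users.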
The customers you’re losing won’t show up in a support ticket. They won’t file a churn report. They’re invisible right up until you build research that makes their experience visible before you ship. That’s not compliance. That’s a strategic decision about who your product is actually for.
