AI, Accessibility, and the Future of Public Systems

By Chinmay Bhaise • November 2025

Introduction: Beyond Interfaces, Toward Equity

Public systems—healthcare, transit, and government services—touch millions of lives every day. When design overlooks inclusion, it doesn’t just annoy; it builds invisible walls. Thoughtful design, by contrast, can turn these systems into ladders for equity. The World Health Organization estimates that 1.3 billion people worldwide—about one in six of us—experience significant disability. With AI now embedded in core public services—from voice assistants helping citizens navigate forms to automated systems that screen eligibility for benefits—the stakes are enormous. Wielded responsibly, AI can break down barriers. Neglected, it risks building them taller and thicker than ever.

Service Design: The Blueprint for Accessibility

True accessibility isn’t a “nice-to-have” feature—it’s the foundation. Service design tools like journey mapping and service blueprints reveal where users stumble: a digital form that assumes everyone owns a smartphone, or a hospital sign that contradicts online instructions. These seemingly small gaps erode trust and exclude users who already face barriers. A striking example comes from the UK’s Government Digital Service. By mapping the entire patient journey for the NHS, they redesigned the online booking flow to better serve older adults and people with visual impairments—incremental changes that widened access at scale.

AI’s Dual Role: Promise and Peril

AI is a double-edged sword: a powerful force for equity—if designed for it—or a silent amplifier of bias.

The Promise:

Automated captioning, image descriptions, and real-time translation are already unlocking education and public information for millions. Adaptive interfaces now personalize reading levels, contrast, and input modes, tailoring digital experiences to individual needs—a glimpse of real inclusion powered by AI.
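To make the idea of adaptive interfaces concrete, here is a minimal sketch of how stated accessibility needs might map to presentation settings. All names, fields, and values here are hypothetical illustrations, not any real product’s API:

```python
from dataclasses import dataclass

@dataclass
class UserNeeds:
    """Illustrative accessibility preferences a user might declare."""
    low_vision: bool = False
    prefers_plain_language: bool = False
    motor_impairment: bool = False

def adapt_ui(needs: UserNeeds) -> dict:
    """Map declared needs to interface settings (assumed values)."""
    return {
        "contrast": "high" if needs.low_vision else "default",
        "font_scale": 1.5 if needs.low_vision else 1.0,
        "reading_level": "plain" if needs.prefers_plain_language else "standard",
        "input_mode": "voice" if needs.motor_impairment else "touch",
    }

print(adapt_ui(UserNeeds(low_vision=True)))
```

Real systems are far richer—preferences interact, and users should be able to override any automatic choice—but the core pattern is the same: the interface bends to the person, not the other way around.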

The Peril:

Algorithmic bias is disturbingly real. The landmark Gender Shades study exposed how commercial facial recognition software misidentified women of colour at disproportionate rates; what’s true for race and gender holds doubly true for disability if data isn’t representative. Meanwhile, a recent systematic review found that most AI accessibility tools focus on vision, leaving people with motor, auditory, or cognitive disabilities behind—an imbalance that reinforces the status quo instead of challenging it. And when AI takes on critical functions like benefits eligibility, many users don’t realize an algorithm—not a human—is making the call. Civil rights experts warn that these “black box” systems can silently erode public trust if there is no way to question or appeal a decision.

The Responsible Path Forward

How do we avoid AI becoming just another gatekeeper? Credible global standards and research point to a few core principles:

  • Co-design with People with Disabilities: Real inclusion means involving people with lived experience from day one. Participatory design is how we build solutions that actually work for the people who need them.
  • Mandate Representative Data: If training data only covers a subset of abilities, ages, races, or genders, bias is baked in from the start.
  • Ensure Transparency and Accountability: Public systems must be upfront about when and how AI is used and always provide a clear, human appeal mechanism.
  • Test Across All Disabilities: Move beyond vision-focused features—broader testing is critical to close the accessibility gap.
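The “representative data” principle can be made operational. The sketch below audits a labelled dataset for coverage across disability categories and flags any group falling under a minimum share; the category names and the 10% threshold are assumptions for illustration, not a standard:

```python
from collections import Counter

# Assumed category labels for the example; a real audit would use
# whatever demographic or ability annotations the dataset provides.
REQUIRED_GROUPS = {"vision", "hearing", "motor", "cognitive", "none"}

def coverage_gaps(labels, min_share=0.10):
    """Return (group, share) pairs for under-represented groups."""
    counts = Counter(labels)
    total = len(labels)
    gaps = []
    for group in sorted(REQUIRED_GROUPS):
        share = counts.get(group, 0) / total if total else 0.0
        if share < min_share:
            gaps.append((group, round(share, 3)))
    return gaps

# A skewed sample: heavy on vision, nothing on motor or cognitive.
sample = ["vision"] * 70 + ["hearing"] * 15 + ["none"] * 15
print(coverage_gaps(sample))  # → [('cognitive', 0.0), ('motor', 0.0)]
```

A check like this won’t guarantee fairness—representation in the data is necessary, not sufficient—but it makes the gap visible before a model ships, which is exactly when it is cheapest to fix.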

Conclusion: Guardians of Trust

Public systems should embody our highest values: dignity, accessibility, and equity. AI can open new doors—or shut them forever. Designers aren’t just making things look beautiful; they are guardians of public trust.

By making accessibility foundational and holding AI accountable, public systems become places where every person—including those with disabilities or less technological confidence—can participate fully. The most impactful technology isn’t the flashiest; it’s the one that quietly disappears, leaving only access and dignity in its wake.

About Chinmay Bhaise

Chinmay Bhaise is a UX and Service Designer focused on accessibility, public systems, and ethical applications of AI. He explores how design can bridge equity gaps and make technology more human-centred.
