
Inclusive AI by Design: A Global Push to Rethink Who AI Is Built For

By Jamal D. Sakara Hamidu, Author of Realistic Optimism: The Greatest Threat in AI Isn’t the Tech Itself – It’s How We Choose to Adopt It

LONDON, ENGLAND, UNITED KINGDOM, June 27, 2025 /EINPresswire.com/ -- As governments, businesses, and communities race to adopt artificial intelligence, a critical question is surfacing: Who is AI really built for—and who’s being left behind?

The principle of “Inclusive AI by Design” is gaining traction across policy and industry circles, urging that inclusion must be built into AI systems from the outset—not added later as an ethical patch. This shift in thinking challenges long-held assumptions in the tech world and calls for a redesign of systems, teams, and data practices to better serve diverse users—especially in underrepresented regions and communities.

At its core, Inclusive AI by Design argues that inclusion is not an accessory. It is not a feature to be toggled on after deployment. Rather, it is a foundational decision—one that must be made at the start of the AI lifecycle. When inclusion is left to chance, exclusion becomes systemic.

This emerging paradigm views inclusion as infrastructure, not intention. Exclusion, after all, is not just philosophical. It is material. It appears in the assumptions that shape AI systems: access to high-speed internet, default English-language prompts, and training data drawn almost exclusively from dominant markets. If an AI tool assumes stable electricity, the latest devices, and high digital literacy, entire communities are already excluded—long before they interact with the system.

These issues are no longer hypothetical. A growing body of evidence shows how gaps in design and data result in measurable harms—from misdiagnoses in healthcare algorithms to discrimination in hiring tools and credit scoring systems. These outcomes are not anomalies. They are signals of a deeper design failure.

This concern formed the core of a recent conversation I had with Xraised, where we explored how exclusion in AI often begins before a user ever encounters a system. From data gaps to access assumptions, the discussion underscored how the design stage itself often sets the boundaries of who gets seen, heard, or served by AI tools.

Experts are now identifying four recurring dimensions of AI-driven exclusion:

1. Access – When stable internet, modern devices, or electricity are assumed, individuals in under-resourced settings are left out before engagement begins.

2. Representation – If lived experiences are missing from the training data, the system does not merely misinterpret—it renders entire communities invisible.

3. Usability – Interfaces that presume digital fluency, high literacy, or neurotypical behaviour exclude users who navigate the world differently.

4. Voice – When design decisions are made without affected communities at the table, the outcome reflects a narrow worldview.

This silent architecture of exclusion is already shaping who gets seen, served, and supported by AI. And once embedded in algorithms, exclusion scales quickly—quietly, efficiently, and often invisibly.

Consider language. Most AI models are trained predominantly on English-language data. Speakers of Swahili, Hindi, Hausa, or Bengali often find themselves filtered through systems not designed for them. Or take neurodivergent users. Systems trained on normative behavioural patterns often flag their usage as errors or anomalies. These are not fringe use cases—they are the real-world edge cases that AI must learn to serve.

The implications extend globally. In many parts of the Global South, AI systems are deployed with little adaptation to local context. Communities become end-users of tools built elsewhere—with limited say in how those tools are imagined, built, or governed.

The Inclusive AI by Design approach flips that dynamic. Rather than building for the "average user," it urges design for the edges—for those most often excluded. By doing so, organisations not only widen reach but often unlock innovations that benefit all users. In sectors like healthcare, education, and public services, inclusive design is now seen not just as ethical best practice, but as strategic advantage.

Inclusive systems are more adaptable, more trusted, and more resilient. They perform better across diverse use cases, reduce reputational risk, and open pathways to underserved markets. Inclusion, in this view, is not just a value—it is a driver of performance.

But making inclusion real requires more than intent. It demands new capabilities and governance models. Leading organisations are taking steps to operationalise inclusion by:

• Auditing datasets for structural gaps (a simple sketch of this step follows below)
• Testing systems under real-world constraints
• Embedding inclusion into KPIs, governance frameworks, and design reviews
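To make the first of these steps concrete, the sketch below shows one way a team might flag representation gaps in a dataset. It is a minimal illustration only: the pandas-based approach, the "language" column, the toy data, and the 5% threshold are assumptions chosen for this example rather than details drawn from the article, and a real audit would use criteria defined together with the affected communities.

```python
# Minimal, hypothetical sketch of a dataset representation audit.
# Column names, toy data, and the coverage threshold are illustrative assumptions.
import pandas as pd


def audit_representation(df: pd.DataFrame, column: str, min_share: float = 0.05) -> pd.DataFrame:
    """Flag groups in `column` whose share of the dataset falls below `min_share`."""
    shares = df[column].value_counts(normalize=True).rename("share").to_frame()
    shares["underrepresented"] = shares["share"] < min_share
    return shares


if __name__ == "__main__":
    # Toy metadata standing in for a training corpus: heavily skewed towards English.
    data = pd.DataFrame({
        "language": ["en"] * 90 + ["sw"] * 4 + ["ha"] * 3 + ["bn"] * 3,
    })
    report = audit_representation(data, "language")
    print(report)  # Swahili, Hausa, and Bengali rows are flagged as underrepresented
```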

Crucially, these efforts must be cross-functional. While leadership sets the tone, product owners must take accountability for use case impact; design teams must ensure usability across different access needs and cognitive profiles; and governance functions must translate inclusion into enforceable standards. Without executive sponsorship through budgets, timelines, and metrics, these actions struggle to gain traction.

The stakes are high. As AI systems expand their influence, from hiring decisions to healthcare delivery, the cost of exclusion grows. Left unaddressed, today’s design gaps become tomorrow’s systemic failures.

Inclusive AI by Design offers a timely response. It is a growing global call—across sectors, disciplines, and borders—to embed inclusion where it matters most: at the start.


About Jamal Hamidu

Jamal D. Sakara Hamidu is the author of Realistic Optimism: The Greatest Threat in AI Isn’t the Tech Itself – It’s How We Choose to Adopt It. A Strategic Change Leader and former EY advisor, he previously served as a Culture & Capability Lead at the Chief Data Office of Deutsche Bank. He is the architect of The RAPP Way™—a strategic change methodology for responsible AI adoption—and the creator of changeportal.io. As Founder and Managing Principal of North Sakara Consulting, Jamal helps organisations navigate AI and digital change through people-centred, governance-led strategies.


About Xraised

Xraised explores innovation through in-depth conversations with pioneers, founders, and thought leaders building the future across industries.

Gianmarco Giordaniello
Xraised
