Transforming AI for Development: A People-Centered Approach


I’ve lost count of how many AI-for-development projects I’ve seen crash and burn. Not because the technology wasn’t impressive or the intentions weren’t noble, but because teams fundamentally misunderstood what makes Generative AI work in real-world development contexts.

The latest evidence comes from Dalberg Data Insights’ People-Centered AI Playbook, which methodically dismantles the Silicon Valley mindset that’s been imported wholesale into development contexts.

Dalberg’s experience working with NGOs, social enterprises, and governments across health, agriculture, education, and financial inclusion yields a consistent message: organizations need practical support to move from theory to action, and they don’t want to reinvent the wheel.

The Problem with Tech-First Thinking

The development sector has fallen into the same trap that plagued early ICT4D initiatives: assuming that importing methodologies from high-resource contexts will somehow work in environments with completely different constraints. The 18 AI applications highlighted earlier show what’s possible, but they don’t explain why so many similar initiatives stumble.

Dalberg’s framework starts with a radical premise: before considering any technology, teams must ground their ambitions in real user needs, organizational realities, and workflow challenges. Their six-phase approach (Discover, Define, Design, Develop, Pilot, Scale) deliberately front-loads the human research that most teams treat as an afterthought.

Consider their “Discover” phase, which can take weeks of user interviews, workflow mapping, and organizational assessment before a single line of code gets written. This is a fundamental rejection of the “build first, find users later” mentality that dominates mainstream AI development.

3 Critical Insights Challenging Conventional Wisdom

This playbook isn’t just another framework. It’s a direct challenge to how we think about AI adoption in low-resource settings. The core argument is provocative: most AI projects fail because teams skip the human work that makes technology sustainable.

1. AI Readiness Is About People Systems

Most AI readiness assessments focus on technical infrastructure: bandwidth, devices, data pipelines. Dalberg flips this by emphasizing what they call “people readiness”: the extent to which intended users, staff, and partners are willing, skilled, and motivated to adopt and sustain an AI solution.

The playbook references diagnostic tools that assess strategy, data maturity, ethical considerations, and organizational culture as primary determinants of success. Microsoft’s AI Readiness Assessment and GSMA’s AI Ethics framework get mentions, but Dalberg’s own DART assessment is specifically built for social impact in low-resource settings.
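A diagnostic of this kind can be pictured as a simple multi-dimension scorecard. The sketch below is purely illustrative: the dimension names paraphrase the assessment areas mentioned above and are not the official rubric of DART or any other tool.

```python
from typing import Dict

# Illustrative dimension names only, paraphrasing the assessment areas
# described above -- not the official DART (or any vendor's) rubric.
DIMENSIONS = ["strategy", "data_maturity", "ethics", "people_readiness", "culture"]

def readiness_summary(scores: Dict[str, int]) -> dict:
    """Summarize 1-5 self-assessment scores and flag weak dimensions."""
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"missing dimensions: {missing}")
    return {
        "average": sum(scores.values()) / len(scores),
        "weakest": min(scores, key=scores.get),
        # Dimensions scoring 2 or below need investment before any build starts.
        "flags": [d for d, s in scores.items() if s <= 2],
    }

example = {"strategy": 4, "data_maturity": 2, "ethics": 3,
           "people_readiness": 2, "culture": 4}
print(readiness_summary(example))
```

The point of such a scorecard is not the arithmetic but the conversation it forces: a low score on people readiness or culture is a reason to pause, however strong the data pipeline looks.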

This people-first approach explains why government-led initiatives with existing infrastructure integration consistently outperform standalone digital solutions, as observed in the analysis of AI governance challenges.

2. Problem Definition Beats Solution Innovation

The playbook’s most contrarian element is its Define phase, which systematically tests whether AI is even the right tool for identified challenges. It includes a decision framework asking whether tasks are high-volume, repetitive, or pattern-based, and whether simpler tools could solve them just as effectively.

This represents a fundamental philosophical shift. Instead of starting with AI capabilities and seeking applications, teams begin with specific workflow challenges and test whether AI offers measurable performance gains over alternatives like workflow redesign, basic digital tools, training, or policy changes.

The framework includes explicit guidance to flag challenges for non-AI approaches and avoid building AI for its own sake. In resource-constrained contexts, deploying AI without clear fit can waste time, introduce risk, or make systems more fragile.
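The triage logic described above can be sketched as a short function. This is a hypothetical rendering of the Define-phase questions in the spirit of the playbook, not its actual wording; the field names are assumptions.

```python
def ai_fit_check(task: dict) -> str:
    """Crude AI-fit triage inspired by the Define-phase questions.

    Field names ("high_volume", "repetitive", "pattern_based",
    "simpler_tool_suffices") are illustrative assumptions.
    """
    # First gate: is the task even the shape AI is good at?
    if not (task.get("high_volume") or task.get("repetitive")
            or task.get("pattern_based")):
        return "flag for non-AI approach"
    # Second gate: could a cheaper intervention deliver the same result?
    if task.get("simpler_tool_suffices"):
        return "use simpler tool (workflow redesign, training, policy change)"
    return "candidate for AI: test measurable gains against alternatives"

print(ai_fit_check({"high_volume": True, "simpler_tool_suffices": False}))
```

Note that even the final branch is conditional: a task that passes both gates is a candidate to be tested against alternatives, not a green light to build.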

3. Scaling Means Building Robust Systems

The final insight challenges how we think about successful AI deployment. Dalberg’s Scale phase isn’t about user acquisition; it’s about institutionalization, continuous development, and contextual adaptation.

Their framework recognizes that what works in one setting may not work in another, and it requires teams to re-examine assumptions, language, and data flows as solutions expand across geographies or user groups. This adaptive approach stands in stark contrast to platform thinking that assumes universal applicability.

The playbook emphasizes that scaling requires shifting from activity tracking to impact evaluation using proportionate, credible methods to validate performance, equity, and cost-effectiveness. This evidence-based approach to expansion explains why USAID’s AI implementation guidance emphasized continuous learning and iterative development.

Cross-Cutting Enablers: Where Real Work Happens

Perhaps most importantly, the playbook identifies three cross-cutting enablers that run through all six phases: People, Equity & Inclusion, and Data Governance. These aren’t add-on considerations—they’re fundamental design requirements.

  • The People dimension recognizes that success depends on building trust, aligning leadership, and equipping teams with the skills and confidence to use AI responsibly. People must remain at the center: engaged, trained, and supported throughout adoption.
  • The Equity & Inclusion framework requires teams to examine who is represented in your data, who participates in testing, and who faces barriers such as limited connectivity, literacy, language, or access to devices. This systematic attention to inclusion helps prevent unintended harms and ensures AI delivers value across different needs and contexts.
  • Data Governance encompasses data quality, access, privacy, security, and compliance throughout all phases, ensuring AI systems are ethical, reliable, and contextually appropriate.

Implementation Reality Check

The playbook acknowledges what practitioners already know: few teams have every skill in-house.

The pragmatic approach suggests partnerships with universities, local tech-for-good groups, or global networks for specialized support, while outsourcing short-term tasks like data labeling and training internal teams for core functions.

This collaborative model aligns with the observation that successful AI initiatives require interdisciplinary approaches that combine technical expertise with domain knowledge. As demonstrated by Stanford’s Human-Centered AI Institute, bringing together computer scientists, ethicists, social scientists, and domain experts produces more robust and sustainable solutions.

The framework also provides practical templates for user persona development, problem statement framing, use case definition, and feasibility assessment. These tools translate abstract methodologies into actionable workflows that teams can implement immediately.
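Templates like these are essentially structured records, which is easy to show in code. The sketch below is a minimal, assumed rendering of two of them; the field names, the persona “Amina”, and all example values are hypothetical illustrations, not the playbook’s actual schemas.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch of two of the playbook's templates (user persona,
# use case definition). All field names and values are illustrative.

@dataclass
class UserPersona:
    name: str
    role: str
    constraints: List[str]  # e.g. connectivity, language, device access

@dataclass
class UseCase:
    problem_statement: str
    persona: UserPersona
    value_driver: str            # the measurable gain AI must beat alternatives on
    non_ai_alternatives: List[str]
    feasibility_notes: str = ""

uc = UseCase(
    problem_statement="Extension workers spend hours triaging crop-disease photos.",
    persona=UserPersona("Amina", "agricultural extension worker",
                        ["intermittent connectivity", "shared smartphone"]),
    value_driver="triage time per case",
    non_ai_alternatives=["decision-tree job aid", "peer referral workflow"],
)
print(uc.value_driver)
```

The useful discipline here is that `non_ai_alternatives` is a required field: a use case isn’t complete until the team has named what AI must outperform.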

What This Means for Our Development Practice

This playbook matters because it offers a methodologically rigorous alternative to both AI evangelism and AI skepticism.

It neither dismisses AI’s potential nor accepts uncritical adoption. Instead, it provides a systematic approach to determine when, where, and how AI can create meaningful value in development contexts.

The framework’s emphasis on iteration and evidence-based decision-making reflects what is known about successful technology adoption in resource-constrained settings: solutions must be designed for local conditions, validated through real-world testing, and adapted based on user feedback.

Most significantly, the playbook positions AI as one tool among many, not as an inherent good. By requiring teams to justify AI solutions against alternatives and measure impact against defined value drivers, it promotes responsible innovation that serves user needs rather than technological possibilities.

For development organizations considering AI initiatives, this framework offers both a roadmap and reality check.

The future of AI in development won’t be determined by algorithm advances or funding announcements. It will be shaped by whether we’re willing to do the human-centered work that makes technology truly useful. This playbook shows us how.
