Decentralized Solutions to AI Bias


As AI rapidly scales, society faces an impasse over how to manage this transformative technology. The choices are stark: allow governments and corporations to dominate AI training and policy-making, or advocate for new governance models grounded in transparency, regeneration, and the public good.

Network States: A New Governance Model

Network states, digital communities that leverage blockchain technology to form borderless societies, present a promising approach to harmonizing AI with human well-being. As digital augmentation advances, a new category of AI-development governance is needed, one focused on serving people rather than concentrating power.

The Bias Problem: Data and Governance Issues

Today’s generative AI systems are trained on narrow datasets and governed by centralized entities with limited public accountability, such as xAI and OpenAI. Training large language models on restricted datasets produces biases that fail to reflect diverse perspectives and undermine equitable outcomes. Grok, for instance, faced backlash for extremist responses after an update.

Network states can address these issues through community governance, enabling a new approach to training and democratizing AI. By shifting the foundational philosophy toward consensus, ownership, privacy, and community, many of the harms in current AI discourse may be mitigated. Decentralized communities within network states would define their own goals and datasets, training AI models that align with their specific needs.

The Role of Decentralized Autonomous Organizations (DAOs)

Impact DAOs can democratize AI by employing blockchain technology for social good. They could collectively fund open-source AI tools, facilitate inclusive data collection, and provide ongoing public oversight. This approach transitions governance from mere gatekeeping to a model of stewardship, ensuring that AI development benefits all of humanity. Shared responsibility can ensure that the needs of vulnerable populations are included, fostering greater stakeholder buy-in for AI’s advantages.

Centralization: A Threat to the AI Commons

Over 60% of leading AI development is concentrated in California, a dangerous centralization of influence that is not only geographical but also political and economic. For example, xAI faced legal action over its environmental impact after using gas turbines to power its data centers, exemplifying the misalignment between local government actions and community demands for environmental regulation. Such concentrated power can extract societal value while externalizing harm, particularly through AI’s high energy demands, which disproportionately affect certain communities.

Network states offer an alternative model: decentralized communities unbound by borders where digital citizens collaborate to create AI governance frameworks. Impact DAOs embedded within these systems would enable participants to propose, vote on, and implement safeguards and incentives, transforming AI from a tool of control into a commons-oriented infrastructure. Expanding the representation of AI will inform its best applications for positive societal impact.
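The propose, vote, and implement loop described above can be pictured as a simple quorum-based voting mechanism. The sketch below is purely illustrative: the class names, membership model, and thresholds are assumptions for the example, not the API of any specific DAO framework.

```python
from dataclasses import dataclass, field

@dataclass
class Proposal:
    """A safeguard or incentive a DAO member puts forward."""
    description: str
    votes_for: int = 0
    votes_against: int = 0
    voters: set = field(default_factory=set)

class ImpactDAO:
    """Toy propose -> vote -> implement model; quorum is an illustrative choice."""
    def __init__(self, members, quorum=0.5):
        self.members = set(members)
        self.quorum = quorum          # fraction of members who must vote
        self.proposals = []

    def propose(self, description):
        proposal = Proposal(description)
        self.proposals.append(proposal)
        return proposal

    def vote(self, proposal, member, support):
        # Only members may vote, and each member votes at most once.
        if member not in self.members or member in proposal.voters:
            return False
        proposal.voters.add(member)
        if support:
            proposal.votes_for += 1
        else:
            proposal.votes_against += 1
        return True

    def passes(self, proposal):
        turnout = len(proposal.voters) / len(self.members)
        return turnout >= self.quorum and proposal.votes_for > proposal.votes_against

dao = ImpactDAO(members={"ana", "ben", "chen", "dee"}, quorum=0.5)
p = dao.propose("Fund an open-source bias-audit tool")
dao.vote(p, "ana", True)
dao.vote(p, "ben", True)
dao.vote(p, "chen", False)
print(dao.passes(p))  # True: 3/4 turnout meets quorum, 2 for vs 1 against
```

Real on-chain implementations add token weighting, timelocks, and execution hooks, but the core accountability loop is the same: anyone can propose, every vote is recorded, and outcomes are mechanically enforced.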

Toward Transparent, Regenerative AI Management

Most AI systems currently operate as algorithmic black boxes, making decisions that affect people without adequate human input or oversight. From biased hiring algorithms to opaque healthcare triage systems, individuals are increasingly subjected to automated decisions with no influence over their formation.

Network states disrupt this model by allowing on-chain governance and transparent public records. Individuals can observe rule-making processes, participate in their formulation, and exit if they disagree.
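One way to picture such a transparent public record is a hash-chained, append-only log that any citizen can re-verify. This is a simplified stand-in for on-chain storage; the class and function names here are illustrative assumptions, not a real blockchain API.

```python
import hashlib
import json

def chain_hash(entry: dict, prev_hash: str) -> str:
    """Hash an entry together with the previous hash, linking the records."""
    payload = json.dumps(entry, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

class GovernanceLog:
    """Append-only record of rule-making: tampering anywhere breaks the chain."""
    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []  # list of (entry, hash) pairs

    def append(self, entry: dict) -> str:
        prev = self.entries[-1][1] if self.entries else self.GENESIS
        digest = chain_hash(entry, prev)
        self.entries.append((entry, digest))
        return digest

    def verify(self) -> bool:
        prev = self.GENESIS
        for entry, digest in self.entries:
            if chain_hash(entry, prev) != digest:
                return False  # a modified entry no longer matches its hash
            prev = digest
        return True

log = GovernanceLog()
log.append({"action": "adopt-rule", "rule": "open model audits"})
log.append({"action": "vote", "result": "passed"})
print(log.verify())  # True: the chain is intact
log.entries[0][0]["rule"] = "tampered"
print(log.verify())  # False: the altered entry breaks verification
```

The exit option follows naturally from this design: because the full rule-making history is public and verifiable, citizens can audit it independently and leave with evidence in hand if they disagree.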

Impact DAOs build on this vision by mitigating harm and incentivizing the replenishment of public goods. They invest in the long-term sustainability of fair, auditable systems, creating open, transparent developments for the community that may also invite external parties to contribute resources.

The Next Phase of AI Development

Legacy nation-states struggle to regulate AI effectively due to lawmakers' outdated digital contexts, fragmented policies, and an overreliance on traditional tech leadership. In contrast, network states are constructing models from the ground up, utilizing blockchain-native tools, decentralized coordination, and programmable governance. Impact DAOs, as open, purpose-driven digital communities, can usher in a new era of AI development, aligning incentives and integrating decentralized governance with generative AI to foster participatory, representative, and regenerative systems.

Building Foundations for Collective Good

AI should be viewed as a public good, not merely as a tool for efficiency. New governance systems must be open, transparent, and community-led to foster smart, fair innovation and development planning. By embracing the inclusive, technological, and philosophical aspects of network states and impact DAOs, we can construct these systems today. Prioritizing investments in infrastructure that supports digital sovereignty and collective care is essential for designing an AI future that benefits people rather than just profits.
