Funding the Future: Philanthropy’s Role in AI Regulation and Safety

Who’s Funding AI Regulation and Safety?

Philanthropy’s relationship with AI is complex, reflecting both the potential benefits and the risks associated with this transformative technology. On one hand, AI has made significant advancements in various fields, particularly in science and medicine, such as the development of COVID-19 vaccines. Philanthropic funders have actively supported AI-driven research and development in these areas.

Moreover, funders have begun to support AI implementation in the nonprofit sector, focusing on equitable access, training, and education. Tech funders have led these efforts; the newly established OpenAI Foundation, for instance, awarded $40.5 million to over 200 organizations through its People-First AI Fund.

However, the civic sector cannot overlook the harms that AI can cause. With ongoing legal and ethical dilemmas surrounding the training of large language models, as well as issues like mass firings and job losses, the potential downsides of unchecked AI usage — particularly in generative AI — are significant. For example, Elon Musk’s AI company xAI has faced backlash due to its chatbot Grok being used to generate explicit images.

Philanthropic Efforts for AI Safety and Regulation

Despite the challenges, some philanthropic funders are stepping up to support AI safety and regulation. Their efforts range from researching AI's potential risks to backing advocacy groups pushing for regulatory legislation. Still, funding for AI safety remains dwarfed by the financial might of tech companies.

Humanity AI

One significant initiative is Humanity AI, launched last fall as a five-year, $500 million effort to ensure that people can shape the future of AI for public benefit. Initial funders include the Doris Duke Foundation, Ford Foundation, and Kapor Foundation. The initiative aims to broaden who has a say in AI design, development, and governance, while strengthening organizations focused on public goods and access to AI.

Omidyar Network

The Omidyar Network, founded by eBay's Pierre Omidyar, focuses on the intersection of technology and society. Its efforts include supporting advocacy for social media warning labels, addressing the impact of AI on children and teens, and promoting consumer privacy protections. The network aims to rethink online safety and digital rights, particularly for vulnerable populations.

Schmidt Sciences

Schmidt Sciences, founded by Eric and Wendy Schmidt, has a dedicated program for advancing AI safety. The organization holds that the field lacks robust, verifiable safety measures, and it has launched a $10 million AI Safety Science program to develop methods for evaluating large language models and mitigating their potential harms.

Patrick J. McGovern Foundation

The Patrick J. McGovern Foundation works to direct technology, including AI, toward positive social impact, with funding spanning areas like digital health and climate change. It has awarded grants to organizations such as Thorn, to combat AI-generated abusive materials, and the Institute for Security and Technology, to evaluate risks associated with large language models.

Coefficient Giving

Coefficient Giving, formerly known as Open Philanthropy, supports AI safety through its Navigating Transformative AI fund, which aims to reduce the risk of AI-related catastrophes. The fund has made over 440 grants to enhance the trustworthiness and governance of AI systems.

Current AI

Current AI, an international collaboration founded last year, combines efforts from philanthropy, governments, researchers, and industries. Its focus areas include audit and accountability, trust and safety infrastructure, and safeguarding children from potential AI harms.

AI Safety Fund

The AI Safety Fund, launched by the Meridian Institute in 2023, emphasizes research for the responsible development of frontier AI models. Supported by tech companies and philanthropic funders, this fund aims to minimize risks associated with AI while ensuring standardized evaluations of AI capabilities.

Jaan Tallinn

Jaan Tallinn, a founding engineer of Skype, is a strong proponent of AI safety. His contributions include support for various AI safety initiatives and organizations dedicated to studying technology-related existential risks.

Heising-Simons Foundation

The Heising-Simons Foundation addresses the societal impacts of technology, including generative AI and AI-powered surveillance. Its work focuses on understanding how these technologies affect privacy and autonomy, with the aim of promoting justice in digital spaces.

In conclusion, while philanthropy’s involvement in AI regulation and safety faces significant challenges, a growing coalition of funders is committed to addressing the ethical and societal implications of this powerful technology.
