Who’s Funding AI Regulation and Safety?
Philanthropy’s relationship with AI is complex, reflecting both the technology’s potential benefits and its risks. On one hand, AI has driven significant advances in fields such as science and medicine, including the development of COVID-19 vaccines, and philanthropic funders have actively supported AI-driven research and development in these areas.
Moreover, funders have begun to support AI implementation in the nonprofit sector, focusing on equitable access, training, and education. Tech funders have led these efforts; the newly established OpenAI Foundation, for example, awarded $40.5 million to more than 200 organizations through its People-First AI Fund.
However, the civic sector cannot overlook the harms AI can cause. Between the ongoing legal and ethical dilemmas surrounding the training of large language models and issues like mass layoffs and job losses, the potential downsides of unchecked AI use, particularly generative AI, are significant. Elon Musk’s AI company xAI, for example, has faced backlash after its chatbot Grok was used to generate explicit images.
Philanthropic Efforts for AI Safety and Regulation
Despite the challenges, some philanthropic funders are stepping up to support AI safety and regulation. Their efforts range from researching AI’s potential risks to backing advocacy groups pushing for regulatory legislation. Even so, funding for AI safety remains dwarfed by the financial might of tech companies.
Humanity AI
One significant initiative is Humanity AI, launched last fall as a five-year, $500 million effort to ensure that people can shape the future of AI for public benefit. Initial funders include the Doris Duke Foundation, the Ford Foundation, and the Kapor Foundation. The initiative aims to broaden who has a say in AI design, development, and governance, while strengthening organizations focused on public goods and access to AI.
Omidyar Network
The Omidyar Network, founded by eBay’s Pierre Omidyar, focuses on the intersection of technology and society. Its efforts include supporting advocacy for social media warning labels, addressing AI’s impact on children and teens, and promoting consumer privacy protections. The network aims to rethink online safety and digital rights, particularly for vulnerable populations.
Schmidt Sciences
Schmidt Sciences, founded by Eric and Wendy Schmidt, has a dedicated program for advancing AI safety. Arguing that the field lacks robust, verifiable safety measures, the organization launched a $10 million AI Safety Science program to develop methods for evaluating large language models and mitigating their potential harms.
Patrick J. McGovern Foundation
The Patrick J. McGovern Foundation works to channel technology, including AI, toward positive social impact, with funding spanning areas like digital health and climate change. Its grantees include Thorn, which combats AI-generated abusive material, and the Institute for Security and Technology, which evaluates risks associated with large language models.
Coefficient Giving
Coefficient Giving, formerly known as Open Philanthropy, supports AI safety through its Navigating Transformative AI fund, which aims to reduce the risk of AI-related catastrophes. The funder has made more than 440 grants to improve the trustworthiness and governance of AI systems.
Current AI
Current AI, an international collaboration founded last year, brings together philanthropy, governments, researchers, and industry. Its focus areas include audit and accountability, trust and safety infrastructure, and safeguarding children from potential AI harms.
AI Safety Fund
The AI Safety Fund, launched by the Meridian Institute in 2023, supports research into the responsible development of frontier AI models. Backed by both tech companies and philanthropic funders, the fund aims to minimize AI-related risks while promoting standardized evaluations of AI capabilities.
Jaan Tallinn
Jaan Tallinn, a founding engineer of Skype, is a strong proponent of AI safety, backing a range of AI safety initiatives and organizations dedicated to studying technology-related existential risks.
Heising-Simons Foundation
The Heising-Simons Foundation addresses the societal impacts of technology, including generative AI and AI-powered surveillance, focusing on how these technologies affect privacy and autonomy and aiming to promote justice in digital spaces.
In conclusion, while philanthropy’s involvement in AI regulation and safety faces significant challenges, a growing coalition of funders is committed to addressing the ethical and societal implications of this powerful technology.