Guiding Enterprises Towards Responsible AI Implementation

Partnership on AI Launches New Initiative for Responsible AI Adoption

On May 5, 2025, the Partnership on AI (PAI) announced the formation of its Enterprise AI Steering Committee, a strategic initiative aimed at promoting the responsible use of AI systems within enterprise and organizational environments.

Recent findings indicate that 80 percent of business leaders identify the absence of standards regarding AI ethics, explainability, trust, and bias as a major barrier to the adoption of generative AI. As organizations increasingly incorporate both traditional and generative AI technologies into their workflows, it is imperative to ensure these systems are used responsibly to foster beneficial innovation while mitigating potential risks.

Leadership and Collaboration

Rebecca Finlay, CEO of Partnership on AI, expressed her enthusiasm for the new initiative, stating, “Since our founding, PAI has recognized that there are many actors in the AI value chain that influence technologies’ impact. Today, I am delighted to announce a new group of Partners focused on the influential leadership of large organizations and enterprises in the responsible deployment of AI-based systems and solutions.”

Finlay emphasized the diverse expertise brought by the members of the Enterprise AI Steering Committee and expressed her eagerness to work together in promoting positive outcomes for individuals and society.

Addressing the Critical Need for Responsible AI

Despite existing guidance on responsible AI development and deployment, there remains a pressing need to assist enterprise organizations in responsibly adopting and employing AI systems. The Enterprise AI Steering Committee aims to unite leaders from private sector companies, civil society organizations, academia, and philanthropic organizations to cultivate a shared understanding of responsible AI adoption.

Paula Goldman, Chief Ethical and Humane Use Officer at Salesforce, highlighted the unique challenges enterprises face, stating, “Salesforce is deeply committed to trusted and ethical AI, especially for the unique challenges enterprises face in deploying these technologies. While much focus is on consumer AI or frontier models, holistic, trustworthy AI solutions for enterprises require dedicated attention.”

Workshops and Reports Informing the Initiative

The Enterprise AI Steering Committee builds upon insights gained from two workshops co-hosted by PAI and Salesforce, which focused on responsible AI adoption readiness. The outcomes of these workshops contributed to PAI’s recently published report: Responsibly Navigating the Enterprise AI Landscape: Promises, Challenges, and Opportunities. This report addresses:

  • Challenges: fostering responsible AI adoption readiness; evaluation, monitoring, and compliance; and building trust and collaboration across the AI value chain.
  • Opportunities for Research and Collaboration: aligning knowledge and terminology, establishing governance structures, and developing measurement and monitoring frameworks.

Inaugural Members of the Committee

The inaugural members of the Enterprise AI Steering Committee include:

  • Daniel Berrick, Future of Privacy Forum
  • Paula Goldman, Salesforce
  • Athmeya Jayaram, Hastings Center
  • Liza Levitt, Intuit
  • Daren Orzechowski, A&O Shearman
  • Andrew Reiskind, Mastercard
  • Kip Wainscott, JPMorgan Chase
  • Abigail Gilbert, Institute for the Future of Work
  • Shing Suiter, Mozilla
  • Emily McReynolds, Adobe
  • Ruchika Joshi, Center for Democracy & Technology

This initiative marks a significant step towards ensuring that AI technologies are not only advanced but also implemented in a manner that is ethical, responsible, and beneficial to society.
