Revamped Federal AI Policies: A New Era of Innovation and Efficiency

OMB Issues Revised Policies on AI Use and Procurement by Federal Agencies

On April 3, 2025, the White House’s Office of Management and Budget (OMB) released two revised policies concerning federal agencies’ use and procurement of artificial intelligence (AI). The memos, titled M-25-21 and M-25-22, aim to facilitate the responsible adoption of AI technologies across the federal government. These policies are part of a broader strategy to enhance public services and national competitiveness in AI innovation.

These revised memos replace earlier directives issued during the Biden Administration, including M-24-10, and align with Executive Order 14179, "Removing Barriers to American Leadership in Artificial Intelligence," signed on January 23, 2025. The Executive Order directs agencies to remove obstacles to American leadership in AI by promoting faster, responsible technology adoption.

Key Differences in Revised Memos

The newly issued memos emphasize a forward-leaning and pro-innovation approach, encouraging federal agencies to reduce bureaucratic hurdles in AI adoption. Key provisions in the revised policies include:

  • Empowering agency leadership to implement AI governance and risk management.
  • Increasing transparency with the public through clear communication about AI use and its efficacy.
  • Permitting waivers of certain risk management requirements for high-impact AI use cases when justified.
  • Expressing a strong preference for American-made AI tools and the development of domestic AI talent.

OMB Memorandum M-25-21: Accelerating Federal Use of AI

OMB Memo M-25-21 outlines a framework aimed at accelerating the adoption of innovative AI technologies within federal agencies. The memo highlights three priorities: innovation, governance, and public trust.

Scope

The memo applies to both new and existing AI developed, used, or acquired by covered agencies. It excludes AI used as a component of a National Security System.

Key Provisions

  • Streamlining AI Adoption: Agencies are urged to minimize unnecessary requirements and maximize resource efficiency. CFO Act agencies must publish strategies for removing barriers to AI use within 180 days.
  • Designating Chief AI Officers: Agencies must appoint Chief AI Officers (CAIOs) within 60 days to lead AI governance efforts.
  • Establishing AI Governance Boards: Within 90 days, agencies must form governance boards to ensure cross-functional oversight.
  • Workforce Readiness: Agencies are encouraged to offer AI training programs that build employees' skills in AI technology.
  • Implementing Oversight for High-Impact AI: Agencies must adopt risk management practices for AI use cases that significantly affect rights or safety.
  • Mandating Transparency: Agencies must publicly report their AI use cases and risk assessments annually.

OMB Memorandum M-25-22: Driving Efficient Acquisition of AI

OMB Memo M-25-22 complements Memo M-25-21 by outlining how federal agencies can responsibly acquire AI technologies. This memo focuses on fostering a competitive marketplace for AI while safeguarding taxpayer interests.

Scope

The memo applies to AI systems acquired by covered agencies, with specific exemptions for those used in National Security Systems.

Key Provisions

  • Investing in the American Marketplace: Agencies are encouraged to prioritize U.S.-developed AI solutions and invest in building local AI talent.
  • Protecting Privacy and IP Rights: Agencies must ensure compliance with existing privacy laws and prevent vendors from misusing government data.
  • Ensuring Competitive Procurement: Contracts should prevent vendor lock-in and promote transparent performance assessments.
  • Assessing AI Risks: Agencies must include monitoring provisions in contracts to evaluate AI performance effectively.
  • Contributing to a Best Practices Repository: GSA, in collaboration with OMB, will create an online repository for responsible AI procurement best practices.
  • Unanticipated Vendor AI Use: Agencies should require vendors to disclose when AI is used in delivering their services in ways the agency did not anticipate.

These revised policies signify a critical step towards integrating AI technologies into federal operations, aiming to enhance efficiency, transparency, and public trust in government services.
