GSA’s AI Clause: Key Changes and Implications for Contractors

UPDATE: March 20, 2026 Deadline for Comments on GSA’s Proposed AI Clause Extended to April 3, 2026

On March 6, 2026, the General Services Administration (GSA) proposed GSAR 552.239-7001, Basic Safeguarding of Artificial Intelligence Systems—a first-of-its-kind contract clause aimed at imposing dedicated, AI-specific safeguarding requirements through procurement vehicles. This clause is intended for inclusion in GSA Schedule Contracts through Multiple Award Schedule (MAS) Refresh 32, marking a significant departure from existing federal acquisition practices, as no other agency has implemented a comparable, stand-alone AI governance clause.

Rather than proceeding through traditional notice-and-comment rulemaking, GSA issued the clause via the MAS refresh comment process, resulting in a highly compressed timeline for stakeholder feedback. Comments on the proposed clause were initially due by March 20, 2026, but on March 19, 2026, GSA announced that the deadline was extended to April 3, 2026.

If adopted in its current form, the clause would impose contractually binding obligations governing the development, deployment, and management of artificial intelligence (AI) systems used in or supplied under federal contracts.

Substantive Overview of GSAR 552.239-7001

GSAR 552.239-7001 is designed to advance federal objectives by emphasizing AI control, transparency, and accountability. This reflects growing concern within the federal government regarding data security, supply chain risk, and the opaque nature of many commercial AI systems.

Contractors who rely on AI—whether for data analysis, content generation, automation, or decision support—will need to ensure their practices align with these priorities. The proposed clause imposes far-reaching disclosure requirements, strict limitations on data use, broad Government rights to utilize information, and proactive compliance obligations for contractors delivering Artificial Intelligence capabilities.

Notably, the term “Artificial Intelligence capabilities” is left undefined in the proposed language, creating ambiguity. Industry stakeholders are likely to raise this issue during the comment period, anticipating clearer guidance in the final version of the clause.

Key Obligations Under the Proposed Clause

  • Exclusively using “American AI Systems” in contract performance, i.e., those developed and produced in the United States (foreign systems are expressly prohibited).
  • Disclosing all AI systems used in the performance of a contract throughout the supply chain within 30 days of award unless requested earlier by the contracting officer.
  • Ensuring mechanisms for Government oversight, intervention, and feedback.
  • Prohibiting the use of Government data to train, fine-tune, or improve AI models or offerings.
  • Providing the Government with ownership rights in AI outputs and developments.
  • Reporting security or performance incidents promptly (within 72 hours) and providing daily updates as needed.
  • Maintaining and providing documentation related to compliance, AI System decision-making processes, privacy controls, and known biases.
  • Ensuring data portability and interoperability using open and standard data formats and APIs.
  • Making efforts to ensure AI systems adhere to unbiased AI principles, namely being truthful, historically accurate, neutral, and nonpartisan.
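
The data-portability obligation above can be illustrated with a minimal sketch. This is purely hypothetical: the clause does not prescribe formats or field names, so the record schema and file names here are illustrative assumptions. The point is simply that exporting the same data in open, standard formats (here JSON and CSV) avoids locking the Government into proprietary tooling.

```python
import csv
import json

# Hypothetical AI-output records; field names are illustrative, not from the clause.
records = [
    {"system_id": "doc-classifier-01", "output": "approved", "confidence": 0.97},
    {"system_id": "doc-classifier-01", "output": "rejected", "confidence": 0.88},
]

# Export the same data in two open, standard formats so the Government
# (or a successor contractor) can consume it without proprietary software.
with open("outputs.json", "w") as f:
    json.dump(records, f, indent=2)

with open("outputs.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["system_id", "output", "confidence"])
    writer.writeheader()
    writer.writerows(records)
```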

Scope and Definitions

The proposed clause would broadly apply to AI use in contract performance, not just to AI systems provided to the Government. It defines key terms such as “AI System,” “American AI System,” and “Service Provider”; the last of these extends beyond traditional subcontractors to include third-party vendors and platforms that support AI functionality.

Disclosure Requirements

Contractors must comply with Government-requested disclosures, including information about their AI systems used in connection with contract performance. This obligation may require contractors to inventory and track AI usage across business units and functions, including third-party tools embedded in workflows.
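
One way to operationalize this inventory obligation is a simple structured register of AI tools. The sketch below is a minimal, hypothetical example, assuming the contractor tracks each system's vendor, business unit, and whether it touches contract performance; the fields are illustrative and not a schema prescribed by the clause.

```python
from dataclasses import dataclass

# Hypothetical inventory entry; fields reflect the kinds of information the
# clause's disclosure obligations suggest, not a prescribed format.
@dataclass
class AISystemRecord:
    name: str
    vendor: str
    business_unit: str
    used_in_contract_performance: bool
    country_of_origin: str
    notes: str = ""

inventory = [
    AISystemRecord("ContractDraftAssist", "Acme AI", "Legal", True, "US"),
    AISystemRecord("ChatHelper", "Foreign LLM Co", "IT", False, "DE"),
]

# Systems that would need to be disclosed within 30 days of award
# (or earlier, if the contracting officer so requests).
disclosable = [r for r in inventory if r.used_in_contract_performance]
```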

Data Use and Protection Restrictions

The clause limits the use of Government data in connection with AI systems. Contractors are prohibited from using Government-furnished or generated data to train or improve AI models. It requires segregating Government data from other datasets and implementing strict data handling procedures.
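
The segregation requirement implies that data pipelines must be able to distinguish Government data from other sources. A minimal sketch, assuming the contractor tags each dataset with its origin (the names and tagging scheme are illustrative assumptions, not part of the clause):

```python
# Hypothetical dataset registry; each entry is tagged with its source so that
# Government-furnished data can be excluded from model-training pipelines.
datasets = [
    {"name": "gov_task_orders_2026", "source": "government", "path": "/secure/gov/"},
    {"name": "public_web_corpus", "source": "commercial", "path": "/data/commercial/"},
]

def training_eligible(ds):
    # Under the proposed clause, Government-furnished or -generated data may
    # not be used to train, fine-tune, or improve AI models.
    return ds["source"] != "government"

train_sets = [ds for ds in datasets if training_eligible(ds)]
```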

Safeguarding and Security Requirements

Consistent with its title, the clause creates safeguarding obligations for the use of AI systems, including establishing controls to protect against unauthorized access and ensuring system integrity. These requirements align with existing federal cybersecurity frameworks while adding AI-specific controls.

Government Rights and Oversight

The clause grants the Government full ownership of all Government data and any Custom Developments, while contractors receive only a limited, revocable license to use such data during the contract term. The clause also provides for Government oversight of AI use, allowing evaluation of AI system performance and effectiveness.

Flowdown and Supply Chain Obligations

The clause mandates that contractors flow down its requirements to subcontractors and other service providers involved in AI system development or operation. Because “Service Provider” is defined broadly, these compliance obligations extend across a complex ecosystem of vendors.

Next Steps and Implications

While the proposed clause may evolve after considering contractor feedback, it signifies a meaningful shift in how the federal government intends to govern AI. Contractors should prepare for potential changes to the clause during the MAS refresh process.

Contractors should also anticipate operational impacts, including establishing an internal AI governance framework, developing inventories of AI systems, assessing whether the AI systems they use qualify as American AI Systems, and updating internal policies and procedures related to AI use.

Conclusion

This clause likely signals a new direction for federal procurement policy concerning AI. Contractors who proactively assess and implement these new requirements will be better positioned to stay compliant and competitive in the evolving landscape of artificial intelligence.
