GSA’s New AI Contract Clause: Key Implications for Contractors

The General Services Administration (GSA) on March 6, 2026, released a draft of a significant new contract clause, GSAR 552.239-7001, titled “Basic Safeguarding of Artificial Intelligence Systems.” This proposed clause, if adopted, will impose substantial and wide-ranging obligations on contractors providing artificial intelligence (AI) solutions to the government. This analysis provides a summary of the key provisions and their practical implications for the industry.

Executive Summary

The proposed GSAR clause seeks to create a uniform set of rules governing the acquisition and use of AI systems across GSA contracts. It introduces several consequential obligations with the potential to reshape the government AI marketplace. Notably, it grants the government expansive ownership of all data inputs, data outputs, and any Custom Developments; prohibits contractors from using this government data to train or improve AI models; and requires the use of only American AI Systems.

Furthermore, the clause imposes an aggressive 72-hour incident reporting requirement, holds prime contractors directly responsible for the compliance of their downstream commercial AI “Service Providers”, and codifies a set of “Unbiased AI Principles”. In addition, the government would retain authority to independently evaluate AI systems and suspend their use for non-compliance.

Scope and Applicability

The new clause is slated for inclusion in all solicitations and contracts “for Artificial Intelligence capabilities.” The term “AI capabilities” is not defined, leaving some ambiguity as to the clause’s full scope. The draft defines several critical terms that clarify its reach:

  • AI System: This adopts the definition from the Advancing American AI Act; the related term “American AI System” refers to an AI system developed and produced in the United States.
  • Government Data: This broadly encompasses both Data Inputs (e.g., user prompts, source data) and Data Outputs (e.g., system responses, analyses, metadata, and synthetic data).
  • Custom Development: This covers any modifications, enhancements, or configurations made specifically for the government.
  • Service Provider: This refers to any entity that provides, operates, or licenses an AI system used in contract performance but is not a party to the prime contract.

A notable ambiguity arises from the term “American AI Systems.” The clause provides no further test for what constitutes “produced,” which may create compliance challenges for systems built with global data, open-source components, or international talent.

Intellectual Property (IP) and Data Rights

The proposed clause establishes an IP and data rights regime heavily favoring the government. Under its terms, the government will own all “Government Data” and “Custom Developments.” Contractors and their Service Providers receive only a limited, revocable license to use this data for the sole purpose of performing the contract. Any IP rights that a contractor might otherwise obtain in Government Data or its derivatives are automatically assigned to the government upon creation.

The clause specifically reaches rights in “improvements” and “derivative works” derived from Government Data, contemplating broad assignment of any patentable inventions or copyrightable works a contractor creates from that data using the AI System.

Though contractors and Service Providers retain ownership of their underlying AI systems and base models, they must grant the government an irrevocable, non-exclusive, royalty-free license to use the system for any lawful government purpose.

Security, Privacy and Incident-Reporting Requirements

The draft clause mandates a comprehensive security framework. Contractors must implement and maintain “reasonable technical, administrative, physical, and organizational safeguards” to protect Government Data from unauthorized access, loss, or alteration. A key requirement is the implementation of “eyes off” data-handling procedures, which restrict human review of Government Data to instances that are strictly necessary and logged for government visibility.

The clause also requires the logical segregation of Government Data from other customer data and allows ordering agencies to specify data localization requirements. Upon contract completion, all Government Data must be securely deleted, and the deletion certified in writing to the Contracting Officer.

For security incidents, the clause imposes a strict 72-hour reporting deadline. Upon discovery of a confirmed or suspected incident, the contractor must notify the Cybersecurity and Infrastructure Security Agency, the contracting officer, and other designated points of contact. Daily updates are required until the incident is resolved.

Contractor Responsibilities and Flowdown to Service Providers

The proposed clause reaches well beyond prime contractors, effectively regulating subcontractors, cloud providers, and commercial AI vendors. Critically, it makes prime contractors directly liable for their Service Providers’ compliance with all of its terms. This flowdown responsibility extends to the commercial AI platforms and models that contractors often integrate into their solutions.

The clause mandates that contractors disclose all AI systems used in performance of the contract, including any modifications made to comply with foreign or commercial regulatory frameworks. It also institutes a strict “American AI Systems” requirement, prohibiting the use of “foreign AI systems” in contract performance.

Change Management, Portability and Interoperability

The proposed clause aims to prevent vendor lock-in and ensure government flexibility through strict change management and data portability rules. Contractors must provide the government with concurrent access to new versions of an AI model for an evaluation period of 30 days for major versions and 15 days for minor versions before discontinuing the old model.

To ensure interoperability, all AI systems, data outputs, and custom developments must use open and standard formats and APIs. The clause prohibits the use of proprietary technologies that would require additional licensing or create dependencies.

Performance Standards, Evaluation and Remedies

A central feature of the clause is the mandate for contractors to adhere to a set of “Unbiased AI Principles”. These principles require that the AI system be “truthful,” prioritize “historical accuracy, scientific inquiry, and objectivity,” and operate as a “neutral, nonpartisan tool.”

The government reserves the right to conduct its own automated assessments of the AI system at any time to test for bias, truthfulness, and other factors using its own benchmarks. If the government identifies non-compliance, it has the right to suspend use of the AI system until the “performance issues” are satisfactorily addressed.

Practical Implications for Contractors and Industry

If implemented as drafted, the proposed GSAR clause will have profound practical implications for the government contracting industry. The expansive government ownership of data and custom developments, combined with the prohibition on using that data for model training, fundamentally challenges the business models of many commercial AI providers.

Contractors will face increased risk and compliance costs, which will need to be factored into contract pricing. The proposed clause will also reshape contractors’ approach to IP rights, prompting companies to adopt clear protocols for handling data that constitutes Government Data.

Recommended Immediate Actions and Comment Topics for Stakeholders

Time is short for contractors and other stakeholders seeking to comment: The proposed clause was issued on March 6, 2026, and GSA is accepting public and industry input through March 20, 2026. Immediate internal actions should include conducting a gap analysis of current AI offerings and compliance frameworks against the clause’s requirements.

Conclusion and Next Steps

The proposed GSAR clause represents a landmark effort by the government to regulate its procurement of AI. Its provisions on data rights, security, and performance are among the most prescriptive seen in federal contracting. Active engagement and detailed comments will be essential to help shape a final rule that balances government needs with commercial and practical realities.
