Regulating AI Use: A Path to Safer Innovation

Don’t Regulate AI Models. Regulate AI Use

As discussions surrounding the regulation of artificial intelligence (AI) gain momentum globally, navigating the myriad proposed frameworks can feel overwhelming. Countries are implementing their own AI governance systems, each with a distinct focus and methodology.

Global Perspectives on AI Regulation

In China, the first AI-specific regulations were issued in 2021, focusing on providers and content governance through stringent platform controls and record-keeping requirements. The European Union adopted its AI Act in 2024 and is still debating amendments and simplification as enforcement is phased in. India, meanwhile, tasked its senior technical advisors with drafting an AI governance framework, released in November 2025. The United States, by contrast, has a growing patchwork of state-level legislation, even as the federal government weighs standardizing or loosening the rules.

The Case for Regulating AI Use

This leads to a pivotal question: what can the U.S. realistically enforce to mitigate real-world harm? The argument presented here is that we should focus on regulating AI use, rather than the underlying models.

Why Model-Centric Regulation Fails

Proposals to license “frontier” training runs or restrict open weights promise control but deliver little. Once model weights are publicly released, they can be copied and fine-tuned at minimal cost, rendering attempts to claw them back futile. The result is the worst of both worlds: compliant firms get bogged down in bureaucracy while reckless actors route around the rules offshore or underground.

Moreover, licensing model publication may conflict with First Amendment protections, as U.S. courts have recognized software source code as protected expression, most notably in the encryption-export litigation of the 1990s. Any regulatory framework that restricts the publication of model weights is therefore likely to face constitutional challenge.

A Practical Alternative: Regulate Use Proportionate to Risk

A more effective approach would classify AI deployments by risk levels, requiring scaled obligations based on the potential impact of their use.

Baseline: General-Purpose Consumer Interaction

For general interactions (e.g., chatbots, creative writing), regulatory adherence would involve:

  • Clear AI disclosure at the point of interaction.
  • Published acceptable use policies.
  • Technical guardrails to prevent escalation into higher-risk categories.
  • A mechanism for users to flag problematic outputs.

Low-Risk Assistance

For applications like drafting and summarization:

  • Simple disclosure.
  • Baseline data hygiene standards.

Moderate-Risk Decision Support

For applications affecting individuals, such as hiring or loan pre-qualification:

  • Documented risk assessments.
  • Meaningful human oversight.
  • An “AI bill of materials” documenting model lineage and key evaluations.
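To make the “AI bill of materials” idea concrete, here is a minimal sketch of what such a record might contain. There is no standard schema for this; every field name below is illustrative, not drawn from any existing regulation.

```python
from dataclasses import dataclass, asdict

@dataclass
class AIBillOfMaterials:
    """Hypothetical record of model lineage and key evaluations.

    Field names are illustrative only; a real filing would follow
    whatever schema a regulator specifies.
    """
    model_name: str
    version: str
    base_model: str                  # upstream model this system derives from
    training_data_sources: list[str]
    evaluations: dict[str, float]    # evaluation suite name -> headline score

    def to_record(self) -> dict:
        """Serialize the BOM for filing with a regulator or auditor."""
        return asdict(self)
```

The point of such a record is traceability: when a deployed system causes harm, an auditor can walk back through its lineage and see which evaluations it passed before deployment.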

High-Impact Uses

In critical contexts (e.g., clinical decision support):

  • Rigorous pre-deployment testing.
  • Continuous monitoring and incident reporting.
  • Authorization linked to validated performance.

Hazardous Dual-Use Functions

For particularly dangerous applications, such as biometric voiceprint fabrication:

  • Confine to licensed facilities and verified operators.
  • Prohibit functionalities aimed at unlawful purposes.
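The tiered scheme above is, at bottom, a mapping from deployment category to a set of obligations. A compliance tool could encode it roughly as follows; the tier names and obligation labels are illustrative paraphrases of the lists above, not terms from any statute.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers mirroring the categories described above."""
    BASELINE = 1      # general-purpose consumer interaction
    LOW = 2           # drafting, summarization
    MODERATE = 3      # decision support affecting individuals
    HIGH_IMPACT = 4   # e.g. clinical decision support
    HAZARDOUS = 5     # dual-use functions, licensed operators only

# Hypothetical mapping from tier to required obligations.
OBLIGATIONS = {
    RiskTier.BASELINE: {"disclosure", "acceptable_use_policy",
                        "guardrails", "user_flagging"},
    RiskTier.LOW: {"disclosure", "data_hygiene"},
    RiskTier.MODERATE: {"risk_assessment", "human_oversight", "ai_bom"},
    RiskTier.HIGH_IMPACT: {"pre_deployment_testing", "monitoring",
                           "incident_reporting", "authorization"},
    RiskTier.HAZARDOUS: {"licensed_facility", "operator_verification"},
}

def required_obligations(tier: RiskTier) -> set[str]:
    """Return the compliance obligations for a given deployment tier."""
    return OBLIGATIONS[tier]
```

The key design property is that obligations attach to the *use*, so the same underlying model can sit in different tiers depending on how it is deployed.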

Enforcement Points and Compliance

Regulators should focus on enforcement at key distribution points, including:

  • App stores and enterprise marketplaces.
  • Cloud and AI platforms.
  • Payment systems and ad networks.
  • Insurers and contract counterparties.

For high-risk applications, identity verification, capability gating, and tamper-evident logging should be mandated. Companies must demonstrate compliance, maintain incident-response plans, and be liable for any harm caused by their AI systems.
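“Tamper-evident logging” has a standard construction: chain each log record to a cryptographic hash of the one before it, so that altering any earlier record breaks every hash that follows. A minimal sketch, with illustrative field names:

```python
import hashlib
import json

GENESIS_HASH = "0" * 64  # placeholder hash for the first record

def _digest(event: dict, prev_hash: str) -> str:
    """Deterministic SHA-256 digest of an event plus the previous hash."""
    payload = json.dumps({"event": event, "prev_hash": prev_hash},
                         sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_entry(log: list[dict], event: dict) -> list[dict]:
    """Append an event, chaining it to the hash of the previous record."""
    prev_hash = log[-1]["hash"] if log else GENESIS_HASH
    log.append({"event": event, "prev_hash": prev_hash,
                "hash": _digest(event, prev_hash)})
    return log

def verify(log: list[dict]) -> bool:
    """Recompute the chain; any altered record makes this return False."""
    prev_hash = GENESIS_HASH
    for record in log:
        if record["prev_hash"] != prev_hash:
            return False
        if record["hash"] != _digest(record["event"], prev_hash):
            return False
        prev_hash = record["hash"]
    return True
```

An auditor holding only the final hash can detect after-the-fact edits to any earlier entry, which is exactly the property incident investigations need.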

Aligning with the EU AI Act

This approach aligns with the EU AI Act by centering risk at the point of impact, addressing high-risk categories such as employment and critical infrastructure. However, the U.S. must also ensure constitutional durability, focusing on what AI operators can do in sensitive settings rather than imposing restrictions on model publication.

Learning from Global Examples

While the U.S. should avoid China’s state-directed approach, there are useful takeaways: mandatory labeling of synthetic media, for example, and requiring operators of high-risk services to file their methodologies with a regulator can improve transparency and safety without compromising civil liberties.

In conclusion, meaningful regulation of AI cannot focus solely on the development of models; it must prioritize use cases, enforce compliance at strategic chokepoints, and apply obligations that scale with risk to effectively mitigate potential harms.
