Don’t Regulate AI Models. Regulate AI Use
As debate over regulating artificial intelligence (AI) gains momentum worldwide, navigating the myriad proposed frameworks can feel overwhelming. Countries are building their own AI governance regimes, each with its own focus and methodology.
Global Perspectives on AI Regulation
China issued its first AI-specific regulations in 2021, focusing on providers and content governance through stringent platform controls and record-keeping requirements. The European Union’s AI Act entered into force in 2024, though proposals to amend and simplify it are still being debated. India, meanwhile, tasked its senior technical advisers with creating an AI governance framework, released in November 2025. The United States, by contrast, faces a patchwork of state-level legislation as the federal government weighs whether to standardize, or even loosen, those rules.
The Case for Regulating AI Use
This raises a pivotal question: what can the U.S. realistically enforce to mitigate real-world harm? The argument here is that regulation should target how AI is used, not the underlying models.
Why Model-Centric Regulation Fails
Proposals to license “frontier” training runs or restrict open weights promise control but deliver little. Once model weights are publicly released, they can be copied at near-zero cost, making after-the-fact restriction futile. The result is the worst of both worlds: compliant firms drown in paperwork while reckless actors route around the rules offshore or underground.
Licensing model publication may also collide with the First Amendment: U.S. courts have recognized software source code as protected expression, so any framework that restricts publishing models invites constitutional challenge.
A Practical Alternative: Regulate Use Proportionate to Risk
A more effective approach would classify AI deployments by risk tier and scale obligations to the potential impact of each use; a brief code sketch of the scheme follows the tier descriptions below.
Baseline: General-Purpose Consumer Interaction
For general interactions (e.g., chatbots, creative writing), compliance would require:
- Clear AI disclosure at the point of interaction.
- Published acceptable use policies.
- Technical guardrails to prevent escalation into higher-risk categories.
- A mechanism for users to flag problematic outputs.
Low-Risk Assistance
For applications like drafting and summarization:
- Simple disclosure.
- Baseline data hygiene standards.
Moderate-Risk Decision Support
For applications affecting individuals, such as hiring or loan pre-qualification:
- Documented risk assessments.
- Meaningful human oversight.
- An “AI bill of materials” documenting model lineage and key evaluations (a minimal sketch follows this list).
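To make the idea concrete, an AI bill of materials is essentially a structured provenance record. Below is a minimal sketch in Python; the field names are hypothetical, paraphrasing the lineage and evaluation items described here rather than following any published standard.

```python
from dataclasses import dataclass

# Hypothetical schema for an "AI bill of materials" record.
# Field names are illustrative, not drawn from any published standard.
@dataclass
class AIBillOfMaterials:
    deployed_system: str             # name of the product or service
    base_model: str                  # identifier of the foundation model
    model_version: str               # exact version or checkpoint deployed
    fine_tuning_datasets: list[str]  # datasets used to adapt the base model
    upstream_lineage: list[str]      # parent models this one derives from
    evaluations: dict[str, float]    # key benchmark/audit results, by name
    human_oversight: str             # description of the review process

bom = AIBillOfMaterials(
    deployed_system="resume-screening-assistant",
    base_model="example-foundation-model",  # placeholder identifier
    model_version="2025-06-01",
    fine_tuning_datasets=["internal-hiring-corpus-v3"],
    upstream_lineage=["example-foundation-model-base"],
    evaluations={"demographic_parity_gap": 0.03, "accuracy": 0.91},
    human_oversight="recruiter reviews every automated rejection",
)
```

A regulator could require such a record to be filed alongside the deployment and refreshed whenever the underlying model changes.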
High-Impact Uses
In critical contexts (e.g., clinical decision support):
- Rigorous pre-deployment testing.
- Continuous monitoring and incident reporting.
- Authorization linked to validated performance.
Hazardous Dual-Use Functions
For particularly dangerous applications, such as biometric voiceprint fabrication:
- Confine to licensed facilities and verified operators.
- Prohibit functionalities aimed at unlawful purposes.
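As promised above, the tiering can be expressed as a simple policy table. The sketch below is illustrative only: the tier names and obligation strings paraphrase the categories in this piece, and no statutory text is implied.

```python
from enum import Enum

class RiskTier(Enum):
    BASELINE = "general-purpose consumer interaction"
    LOW = "low-risk assistance"
    MODERATE = "moderate-risk decision support"
    HIGH = "high-impact uses"
    HAZARDOUS = "hazardous dual-use functions"

# Obligations scale with tier; entries paraphrase the lists above.
OBLIGATIONS: dict[RiskTier, list[str]] = {
    RiskTier.BASELINE: [
        "AI disclosure at point of interaction",
        "published acceptable-use policy",
        "guardrails against drift into higher-risk uses",
        "user mechanism to flag problematic outputs",
    ],
    RiskTier.LOW: ["simple disclosure", "baseline data hygiene"],
    RiskTier.MODERATE: [
        "documented risk assessment",
        "meaningful human oversight",
        "AI bill of materials",
    ],
    RiskTier.HIGH: [
        "rigorous pre-deployment testing",
        "continuous monitoring and incident reporting",
        "authorization tied to validated performance",
    ],
    RiskTier.HAZARDOUS: [
        "licensed facilities and verified operators only",
        "prohibition of functionality aimed at unlawful purposes",
    ],
}

def required_obligations(tier: RiskTier) -> list[str]:
    """Return the compliance obligations for a deployment's risk tier."""
    return OBLIGATIONS[tier]
```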
Enforcement Points and Compliance
Regulators should focus on enforcement at key distribution points, including:
- App stores and enterprise marketplaces.
- Cloud and AI platforms.
- Payment systems and ad networks.
- Insurers and contract counterparties.
For high-risk applications, regulators should mandate identity verification, capability gating, and tamper-evident logging (sketched below). Companies must demonstrate compliance, maintain incident-response plans, and bear liability for harms their AI systems cause.
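Of these mandates, tamper-evident logging is the most concrete to illustrate. One standard construction is a hash chain, in which each log entry commits to its predecessor so retroactive edits become detectable. The sketch below is a minimal version under stated assumptions (in-memory storage, SHA-256, JSON serialization), not a production audit system.

```python
import hashlib
import json
import time

class TamperEvidentLog:
    """Append-only log where each entry hashes its predecessor,
    so any retroactive modification breaks the chain."""

    def __init__(self) -> None:
        self.entries: list[dict] = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, event: dict) -> None:
        record = {
            "timestamp": time.time(),
            "event": event,
            "prev_hash": self._last_hash,
        }
        # Hash a deterministic serialization of the record body.
        serialized = json.dumps(record, sort_keys=True)
        record["hash"] = hashlib.sha256(serialized.encode()).hexdigest()
        self._last_hash = record["hash"]
        self.entries.append(record)

    def verify(self) -> bool:
        """Recompute the chain; returns False if any entry was altered."""
        prev = "0" * 64
        for record in self.entries:
            body = {k: v for k, v in record.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if record["hash"] != expected:
                return False
            prev = record["hash"]
        return True

log = TamperEvidentLog()
log.append({"operator": "op-123", "capability": "voice-synthesis", "allowed": False})
assert log.verify()
```

In practice, an operator would anchor the chain head externally (for example, with a third-party timestamping service) so that even wholesale log replacement is detectable.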
Aligning with the EU AI Act
This approach aligns with the EU AI Act by centering risk at the point of impact, addressing high-risk categories such as employment and critical infrastructure. However, the U.S. must also ensure constitutional durability, focusing on what AI operators can do in sensitive settings rather than imposing restrictions on model publication.
Learning from Global Examples
While the U.S. should avoid China’s state-directed approach, there are useful takeaways. For example, implementing mandatory labeling for synthetic media and requiring operators to file their methodologies for high-risk services can enhance transparency and safety without compromising freedoms.
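Labeling synthetic media, in particular, need not be burdensome. A minimal sketch of a machine-readable disclosure record appears below; the field names are hypothetical, and a real deployment would more likely adopt an emerging provenance standard such as C2PA content credentials.

```python
import hashlib
import json
from datetime import datetime, timezone

def label_synthetic_media(media_bytes: bytes, generator_id: str) -> dict:
    """Produce a machine-readable disclosure label for a generated artifact.
    Field names are illustrative, not a published standard."""
    return {
        "synthetic": True,
        "generator": generator_id,  # who or what produced the artifact
        "created_at": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
    }

label = label_synthetic_media(b"<rendered audio bytes>", "example-voice-model-v2")
print(json.dumps(label, indent=2))
```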
In conclusion, meaningful AI regulation cannot focus solely on model development; it must prioritize use cases, enforce compliance at strategic chokepoints, and apply obligations that scale with risk to mitigate real-world harm.