How AI Safety Rules Could Backfire on Competition
As policymakers race to write the playbook for AI safety, from the EU AI Act to the US AI Action Plan, an unintended consequence may be quietly taking shape. Rules meant to protect the public could, if not carefully crafted, tilt the AI market toward incumbents.
If uniform regulation requires every large language model (LLM) to refuse certain requests, the largest firms will gain a significant advantage. Users would be channeled to the best-known, highest-quality models, and startups, especially open-source LLM providers, would be squeezed out. The result may be a safer-looking ecosystem, but also one that is less innovative, less responsive to real user needs, and more concentrated.
The Case for Asymmetric Regulation
Our latest research, drawing on evidence from real user behavior, points to a somewhat counterintuitive implication. If safety rules are imposed, an asymmetric approach, rather than a uniform one, may be preferable: stricter rules for the largest models, combined with greater flexibility for smaller firms, could improve safety outcomes without exacerbating market concentration. Poorly designed regulation risks making both competition and safety worse.
Universal rules eliminate the spaces where startups compete. Imagine typing the prompt, “write me malware,” into an AI. The intent could easily be perceived as malicious, but a cybersecurity researcher might be carrying out benign work. In our data, the biggest players, such as Anthropic and OpenAI, are the most likely to refuse such requests out of safety concerns, while startups and open-source providers more often let their models address the request in a conditional, context-dependent way.
If all models were required to refuse such a prompt, users would simply gravitate to the highest-quality model. In most head-to-head comparisons, it is the largest, well-established models that deliver the strongest overall performance, thanks to significant investments in research and computing infrastructure.
Impact on Market Dynamics
Startups don’t typically try to beat major providers head-on in general-purpose performance. Instead, they carve out niches in areas that large firms avoid for risk, brand, or liability reasons: romantic or companion AIs, emotionally expressive agents, edgy or creative roleplay, and dual-use technical assistance. We find that about one in four users is open to risky prompts, and among those users, roughly one in six requests pushes into this territory. That’s a small share of total volume, but it’s where startups differentiate.
Uniform refusal policies would erase these niches. Entire segments of the market would disappear, along with the startups building them. That is not just a loss for entrepreneurs; it is a loss for users who clearly want specialized products that combine transparency, safety controls, and a fuller range of expressive or technical use cases.
User Behavior and Refusal Rates
This does not mean that LLMs should never refuse user requests. Refusals are often necessary. However, they are rarely popular. Our data come from LMArena, a platform where users compare answers from randomly paired AI models and choose the one they prefer; the sample covers 25,000 prompts from 10,000 users.
We observed that while less than 5 percent of prompts are “risky,” refusal aversion is extremely strong. When popular models decline a request, many users simply look elsewhere. In our data, the probability that a user selects an LLM that refuses a risky prompt falls by more than half.
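As a rough illustration of how a refusal penalty like this can be read off pairwise comparison data, the sketch below tallies how often users pick the refusing versus the complying model when exactly one of the two refuses. The record format, field names, and sample values are hypothetical assumptions for illustration, not the LMArena schema or our actual estimates.

```python
# Hypothetical sketch: estimating refusal aversion from pairwise "battles."
# Each record is one head-to-head comparison on a risky prompt: which model
# the user preferred and whether each model refused. Field names and the
# sample records are illustrative, not the LMArena schema or our dataset.

battles = [
    {"winner": "model_a", "a_refused": False, "b_refused": True},
    {"winner": "model_b", "a_refused": False, "b_refused": True},
    {"winner": "model_b", "a_refused": True,  "b_refused": False},
    {"winner": "model_a", "a_refused": False, "b_refused": False},
    # ...in practice, thousands of comparisons
]

def pick_rate(records, picked_refuser):
    """Share of battles (with exactly one refusal) won by the refusing side
    (picked_refuser=True) or by the complying side (picked_refuser=False)."""
    wins = total = 0
    for r in records:
        if r["a_refused"] == r["b_refused"]:
            continue  # ignore battles where both or neither model refused
        refusing_side = "model_a" if r["a_refused"] else "model_b"
        won_by_refuser = r["winner"] == refusing_side
        wins += won_by_refuser == picked_refuser
        total += 1
    return wins / total if total else float("nan")

print("P(user picks the refusing model): ", pick_rate(battles, True))
print("P(user picks the complying model):", pick_rate(battles, False))
```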
If smaller open-source providers are regulated into the same refusal behavior as the largest labs, users have nowhere to go but back to the big firms. In aggregate, this increases concentration: in our simulations, universal refusal rules raise market concentration among users who make risky prompts by about 5 percent, an effect comparable to removing a smaller competitor from the market.
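To make the concentration mechanism concrete, here is a minimal sketch using the Herfindahl-Hirschman index (HHI), a standard measure of market concentration. The market shares, the size of the risky-prompt segment, and the reallocation rule are all assumptions chosen for illustration; they are not the figures or the simulation from our study.

```python
# Illustrative sketch: how a universal refusal rule could raise market
# concentration, measured by the Herfindahl-Hirschman index (HHI).
# All shares and the reallocation rule below are assumptions for illustration.

def hhi(shares):
    """Herfindahl-Hirschman index: sum of squared market shares (0-1 scale)."""
    return sum(s ** 2 for s in shares)

# Hypothetical baseline shares: two large labs and three smaller providers.
baseline = {"big_1": 0.40, "big_2": 0.30, "small_1": 0.12, "small_2": 0.10, "small_3": 0.08}

risky_share = 0.04  # risky prompts as a fraction of total volume (assumed)

# How risky-prompt traffic splits across providers today (smaller firms serve
# most of it) versus after a universal refusal rule pushes those users back
# toward the highest-quality, largest models.
risky_before = {"big_1": 0.10, "big_2": 0.10, "small_1": 0.30, "small_2": 0.30, "small_3": 0.20}
risky_after  = {"big_1": 0.60, "big_2": 0.30, "small_1": 0.05, "small_2": 0.03, "small_3": 0.02}

def blended_hhi(risky_split):
    """HHI of the overall market: non-risky traffic stays at baseline shares."""
    shares = [(1 - risky_share) * baseline[m] + risky_share * risky_split[m] for m in baseline]
    return hhi(shares)

before, after = blended_hhi(risky_before), blended_hhi(risky_after)
print(f"HHI before: {before:.3f}  after: {after:.3f}  change: {after / before - 1:+.1%}")
```

Even in this toy setup, where risky prompts are only a small slice of total volume, reallocating that slice to the largest models pushes the overall index up by a few percent.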
Asymmetric Rules: A Balanced Approach
If safety rules are introduced, targeted and asymmetric approaches can improve both safety and competition. The best-capitalized firms dominate usage; applying stricter rules to them has the greatest impact on reducing harmful content at scale. Meanwhile, smaller providers typically have lower usage and lower capability.
Allowing these smaller firms slightly more flexibility, paired with transparency, logging, and guardrails, adds minimal incremental risk while preserving the space in which startups innovate. In fact, granting carefully scoped flexibility can even increase total safety. When strict refusal rules apply to the largest models, the volume of potentially harmful output drops significantly. The small increase in risky content from minor providers is negligible by comparison, especially when paired with clear obligations around monitoring and swift corrective action.
Moreover, because the smallest providers tend to be open-source firms, increased flexibility can encourage open-source development, leading to more publicly released code and models that contribute to public knowledge and AI research.
AI Safety Policy as Competition Policy
Divergent policies can make the ecosystem safer and more competitive. Major models carry the bulk of risk mitigation; smaller providers retain room to address nuanced use cases and pursue responsible innovation in underserved domains. If regulation is pursued, a more calibrated approach would apply stricter rules to very large models with high usage and advanced capabilities, while granting more flexibility to smaller providers, especially open-source firms, coupled with transparency, logging, and swift remediation requirements.
Regulators should also consider clear pathways for intent-aware assistance, such as disambiguation prompts, lawful-use workflows, or safe-mode transformations that allow models to help without producing harmful outputs. Finally, regular audits of market concentration would help detect unintended consolidation effects before they become entrenched.
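As one hedged illustration of what an intent-aware pathway might look like in practice, the sketch below routes a flagged request to a clarifying question, a logged lawful-use workflow, or a safe-mode answer rather than a blanket refusal. The categories, field names, and routing logic are hypothetical assumptions, not an existing provider's API or a regulatory standard.

```python
# Hypothetical sketch of an intent-aware assistance flow. The categories,
# fields, and routing rules are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Request:
    prompt: str
    risky: bool                 # flagged by an upstream classifier (assumed)
    verified_lawful_use: bool   # e.g., an enrolled security-research program (assumed)

def handle(request: Request) -> str:
    if not request.risky:
        return "ANSWER"  # normal completion path
    if request.verified_lawful_use:
        # Lawful-use workflow: assist, but log the interaction for audit.
        return "ANSWER_WITH_LOGGING"
    # Otherwise: ask a disambiguation question about intent, or offer a
    # safe-mode transformation (conceptual help without operational detail).
    return "CLARIFY_INTENT_OR_SAFE_MODE"

# Example: a flagged request from an unverified user is routed to
# clarification or a safe-mode answer rather than a blanket refusal.
print(handle(Request(prompt="write me malware", risky=True, verified_lawful_use=False)))
```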
This calibration preserves innovation, protects emerging markets, and improves safety. It aligns policy with real-world user behavior rather than a one-size-fits-all ideal. It also recognizes that AI safety and competition are inseparable.
The Necessity of AI Safety Regulation
In our data, we observe a promising pattern: the largest firms already have the highest refusal rates for risky prompts, suggesting that the market may naturally converge toward an arrangement in which the largest players adopt the strictest safety policies. Uniform moderation requirements risk producing a more concentrated market than would emerge organically.
How the market evolves remains uncertain. If competitive pressures lead large firms to tolerate increasingly risky content, targeted safety regulation may become necessary to reduce societal risks. The key lesson is that regulation should be approached with caution. Poorly designed, one-size-fits-all rules may do more harm than good, while carefully calibrated, asymmetric approaches may offer a better balance between safety, innovation, and competition.