Balancing Innovation and Regulation in AI Development

We Need to Stop Pretending That AI Regulation Stifles Innovation

A lack of commonsense AI regulation opens the door to real harm and risks squandering a potential $10.3 trillion opportunity to develop and use generative AI. We need guidelines that encourage innovators to explore new technologies without risking public trust.

Efforts to ensure responsible AI innovation have veered off course, with regulation increasingly framed as the enemy of progress. In fact, regulations such as the EU AI Act are crucial to unlocking AI’s most powerful use cases. Every AI innovation we’ve seen so far is a direct result of collecting and using data for model training, and when users feel protected through transparent data policies and consent measures, they’re more likely to contribute the high-quality data necessary to advance AI.

However, if companies neglect to safeguard users, they will run out of “fuel”: the data needed to advance any further.

Are Guardrails Excessive?

The global AI community has made real progress in the past several years on how to govern AI technology so that innovation doesn’t sacrifice user trust and safety. The Institute of Electrical and Electronics Engineers set the stage in 2016 with its Ethically Aligned Design principles, and the Organization for Economic Cooperation and Development adopted its own AI principles three years later. The White House’s 2022 Blueprint for an AI Bill of Rights was followed by an executive order on AI in 2023. In March 2024, the EU passed its watershed AI Act.

Remarks from political leaders suggesting that “excessive regulation of the AI sector could kill a transformative industry” pit AI opportunity against safety. Why should we have to choose between the two? We shouldn’t: user-first privacy programs build lasting trust between consumers and companies.

Products and services that prioritize consumer trust and empower user choice can accelerate innovation rather than block it. Take biometric data and smart devices; these technologies continue to raise privacy questions. Regulations such as Illinois’ Biometric Information Privacy Act, the EU’s General Data Protection Regulation, and California’s IoT Security Law have established clearer rules around data collection, storage, and sharing, helping reassure the public and leading to greater adoption of technologies such as digital identity verification, wearable health devices, and smart home assistants.

Regulating Known Harms

Under-regulating technology has troubling consequences. Self-driving cars from adversarial countries have traversed millions of miles, gathering vast amounts of information on US citizens, because the US lacks laws specifically governing such technologies.

In contrast, the EU AI Act places necessary guardrails around AI technologies with known harms, such as biometric surveillance in public spaces, predictive policing, or emotion-recognition systems in workplaces or schools. The risk of racial discrimination by facial recognition technology is well-documented, with instances of wrongful arrests highlighting the urgent need for regulation.

Rather than stifling innovation, regulations push companies to continue improving their products. Privacy laws have forced companies to rethink data collection and usage and to innovate in areas such as encryption, data minimization, and user consent management, leading to stronger security, better consumer trust, and new business models.

For example, Apple has introduced Advanced Data Protection, which extends end-to-end encryption to iCloud data categories beyond passwords and health data. User consent regulations have given rise to new technologies that govern the entire lifecycle of user permissions, from initial capture and storage to the handling of granular data access requests and deletion.
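
To make that permission lifecycle concrete, here is a minimal sketch in Python of how such a consent ledger might work. It is an illustration under assumed names, not any vendor’s actual product: ConsentLedger, ConsentRecord, and the purpose strings are all hypothetical.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class ConsentStatus(Enum):
    GRANTED = "granted"
    WITHDRAWN = "withdrawn"

@dataclass
class ConsentRecord:
    """One user's consent for one processing purpose, with an audit trail."""
    user_id: str
    purpose: str  # hypothetical purposes, e.g. "model_training", "analytics"
    status: ConsentStatus
    history: list = field(default_factory=list)

    def log(self, event: str) -> None:
        # Every state change is timestamped so the record can be audited.
        self.history.append((datetime.now(timezone.utc).isoformat(), event))

class ConsentLedger:
    """Hypothetical store covering capture, lookup, withdrawal, and deletion."""

    def __init__(self) -> None:
        self._records: dict = {}  # keyed by (user_id, purpose)

    def capture(self, user_id: str, purpose: str) -> ConsentRecord:
        # Initial capture: consent is recorded explicitly, per purpose.
        record = ConsentRecord(user_id, purpose, ConsentStatus.GRANTED)
        record.log("granted")
        self._records[(user_id, purpose)] = record
        return record

    def is_permitted(self, user_id: str, purpose: str) -> bool:
        # Processing checks consent at use time, not just at signup.
        record = self._records.get((user_id, purpose))
        return record is not None and record.status is ConsentStatus.GRANTED

    def withdraw(self, user_id: str, purpose: str) -> None:
        record = self._records[(user_id, purpose)]
        record.status = ConsentStatus.WITHDRAWN
        record.log("withdrawn")

    def access_request(self, user_id: str) -> list:
        # Granular access request: return everything held about one user.
        return [r for (uid, _), r in self._records.items() if uid == user_id]

    def delete_user(self, user_id: str) -> None:
        # Deletion request: all of the user's records are dropped.
        for key in [k for k in self._records if k[0] == user_id]:
            del self._records[key]

Used end to end, the flow looks like this:

ledger = ConsentLedger()
ledger.capture("user-42", "model_training")
assert ledger.is_permitted("user-42", "model_training")
ledger.withdraw("user-42", "model_training")
assert not ledger.is_permitted("user-42", "model_training")

Real consent platforms add persistence, versioned policy text, and regulator-facing audit exports on top of this pattern; the design point the sketch captures is that withdrawal and deletion are first-class operations rather than afterthoughts.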

Innovation and Trust

Regulations such as the EU AI Act that seek to address the most critical risks of emerging technology are not examples of government overreach. No regulation is perfect, but these laws can serve as blueprints for meaningful legislation to promote public trust and AI adoption. The future of US AI leadership hinges on forging a path where innovation and responsible governance coexist.

Trust—built through transparent data practices and practical guardrails—is the currency of progress. Frameworks that prioritize safety will reassure users and empower innovators to experiment.
