AI Regulation: A Call for Industry Leadership

GUEST COLUMN: AI Needs Regulation, Industry Should Lead the Way

To my fellow chief executives, board members, and industry associations who shape the direction of AI development in Canada and globally: The era of vague safety frameworks has passed; the era of enforceable standards must begin.

The families of Tumbler Ridge deserve more than a meeting in Ottawa and press statements expressing concern. They deserve to know that the companies operating AI platforms and technologies have committed — publicly and with enforceable consequences — to standards that will prevent the same failure from occurring again.

Let me be clear: many concerns about the negative impacts of AI are overblown and unfounded. The long-term value that AI brings to society substantially outweighs the risks, provided those risks are managed. Given its power and reach, this technology will deliver benefits that many cannot yet imagine.

The Need for Regulation

That is precisely why serious efforts must be made to control and regulate it — and regulation, by definition, means addressing the companies behind the AI technologies. The federal government, under Minister Evan Solomon, is right to demand changes that increase safety for Canadians.

However, these rules and regulations should not come from the government, as a recent op-ed in the Globe and Mail suggested. The companies designing these systems understand them better than any regulator. They are best positioned to build multi-layered guardrails that go far beyond simple filters, without stalling the technological progress of recent years.

Establishing Standards

This includes defining the technical thresholds for what constitutes a credible threat of violence, establishing escalation protocols that are both operationally sound and respectful of privacy, and determining the exact points where automated detection must transition to mandatory human review. By investing in these systemic safeguards, companies can ensure that safety is an architectural feature of the technology, not an afterthought.

And it makes business sense. Companies that operate in regulatory vacuums invite the kind of blunt, reactive legislation that tends to follow tragedies. They also invite liability exposure, reputational damage, and erosion of the public trust that is, ultimately, the foundation on which their products depend.

OpenAI’s handling of the Tumbler Ridge shooter’s account, and its silence before B.C. officials at the meeting held the day after the shooting, has generated exactly the kind of scrutiny that no company seeking to expand its presence in Canada can afford.

The Risks of Reacting to Crisis

Moreover, reacting to crisis creates negative consequences for the entire industry and, counterintuitively, for the safety of society itself. Blunt, performative legislation passed in the wake of tragedy tends to prioritize signaling action over solving problems. Rash decisions that disregard technical nuance don’t just stifle our most transformative sector; they create a false sense of security while leaving the actual, complex loopholes wide open.

Creating a Code of Conduct

A serious, industry-designed code of conduct for AI safety — one that carries genuine force rather than serving as a public relations document — would need to address several core questions.

First, it must set industry-wide standards that remove ambiguity and define clear, multi-layered guardrails. Those standards should make impossible one of the most troubling revelations from Tumbler Ridge: the decision not to contact police, made over the objections of employees within the company who believed the content warranted it.

Second, clear and strict reporting structures and real accountability are essential. In practice, this means that when an automated system flags content, humans must step in and review it according to consistent criteria. Violations should trigger serious investigations with transparent, consequential outcomes.

Cross-Border Coordination

Lastly, any such framework must be built through genuine cross-border coordination. The internet does not recognize national boundaries, and a Canadian-only framework will be incomplete so long as major AI platforms are headquartered and governed elsewhere. This may be the most difficult step, but Canada has signaled repeatedly — one need look no further than Mr. Carney’s celebrated speech in Davos — its hunger to lead.

A Call to Action

Pursuing such meaningful changes and safety requirements demands a level of cooperation that, historically, only dark and sorrowful events like Tumbler Ridge can inspire. Let us respond to this tragedy with appropriate speed, but also with seriousness and intellectual honesty.

The question is not whether AI companies bear sole responsibility for what happened in Tumbler Ridge. They do not. The question is whether the industry has adequate, binding, and consistently applied standards for what to do when credible evidence of planned violence surfaces. It does not.

The private sector has an opportunity here that it would be unwise to squander: to demonstrate that technological innovation and public safety are not competing values, and that industry is capable of governing itself with the seriousness this moment demands. If the industry does not seize that opportunity, governments will act — on a timeline and in a manner over which the technology sector will have far less influence, and in ways that could keep Canadians from the full benefit of this transformative technology.

Lead now or be led. The choice belongs to us.
