AI Regulation: A Call for Action on Governance

Brown University Professor Discusses AI Regulation

On September 18, 2025, Professor Suresh Venkatasubramanian, an AI safety researcher at Brown University, delivered a lecture at Carnegie Mellon University on artificial intelligence (AI) regulation and evaluating fairness in AI applications.

As the Co-Director of the newly established AI Research Institute on Interaction for AI Assistants (ARIA), Venkatasubramanian brings a wealth of experience from his previous role as Assistant Director for Science and Justice in the White House Office of Science and Technology Policy during the administration of former President Joe Biden. In that capacity, he co-authored the Blueprint for an AI Bill of Rights.

Venkatasubramanian noted a significant shift in the government’s stance on AI regulation, moving from an approach centered on scrutiny and risk mitigation to an aggressively deregulatory, pro-development one. He expressed a sense of “profound depression and existential dread” about the future of AI governance, stemming from the rapid developments in the field and their implications.

Regulatory Landscape Changes

Under Biden’s 2023 executive orders, the administration imposed stricter scrutiny on AI development, particularly in terms of risk assessment and addressing structural discrimination. That stands in stark contrast to the current administration, which has rolled back many of those regulatory efforts while maintaining supportive provisions for data center infrastructure projects.

Currently, the federal government views AI as a critical arena of global competition and has adopted a deregulatory, pro-innovation approach to expedite AI development. In the absence of clear regulatory guidance, researchers like Venkatasubramanian have had to develop their own methodologies for assessing the impact of AI technologies.

Triangular Inquiry Model

Venkatasubramanian advocates for a triangular inquiry model that encompasses framing, tools, and measurement. While much of the scientific community focuses on developing technical tools, he emphasizes the need for policymakers and researchers to consider how problems are framed in the first place. Failing to recognize the different frames in play, he warns, can leave researchers and policymakers working within a framing that serves someone else’s goals.

“If we are not aware of the different frames that are in place when talking about tech and tech governance, we will be trapped into a frame that someone else wants rather than a frame that we have chosen ourselves,” he remarked.

Bottom-Up Approaches

Venkatasubramanian argues that meaningful advancements in AI governance will not emerge from rigid top-down frameworks but rather from practical, bottom-up approaches that address specific issues within communities. He acknowledges that while scaling solutions is often viewed as ideal, it can sometimes hinder progress.

“I think in computer science, as a gospel, that scaling is what we want to do. We have to understand that sometimes, scaling is preventing us from doing the thing we want,” he cautioned.

Focus on Policy Targets

Regarding the formulation of AI regulations, Venkatasubramanian urges policymakers to concentrate on policy targets rather than relying on technical specifications, which can quickly become outdated in the fast-evolving landscape of AI technology.

“What you should be doing is focusing on identifying targets based on application and consequences to insulate yourself from some of the issues with generative AI that haven’t changed rapidly,” he advised.

A Critical Moment for AI Governance

Venkatasubramanian believes that now is a pivotal time to establish a foundation for effective AI governance. “We don’t have the luxury to spend three years figuring out the best vehicles. We have to work on this right now because if not, something will get set in place and will be very hard to change,” he stated.

He concluded by encouraging students and emerging professionals in the field, emphasizing that collective action is essential: “None of us have to change the world by ourselves. It’s going to be a group effort … just mind a small piece of it.”

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...