AI Regulation: Alaska’s Urgent Call to Action

Opinion: AI is Moving Faster Than Our Laws—Alaska Should Pay Attention

Among the least recognized—but potentially most consequential—of President Trump’s deregulatory policies are his efforts to roll back oversight of artificial intelligence (AI). Though his actions weakening environmental, health care, and civil rights protections draw more attention, his push to deregulate AI could prove far more detrimental in coming decades.

Most significantly, Trump rescinded the previous administration’s “Blueprint for an AI Bill of Rights,” which was designed to guide the development and deployment of AI in ways that safeguarded the rights of the American public. Instead, Trump issued an executive order to accelerate AI development by stripping away key regulatory and oversight provisions. His proposed “Big Beautiful Bill” even included a 10-year moratorium on state and local AI regulations. While the House supported this measure, the Senate did not—leaving states like Alaska free to regulate AI use at the state level. However, in December, Trump issued another executive order prohibiting such regulation. We should ask our congressional delegation to work toward restoring our right to regulate AI.

The Benefits of AI

First, let's acknowledge the many benefits of AI. It's beginning to transform medicine, personal health care, climate and environmental research, energy technologies, finance, education, and more. AI is revolutionizing how we solve problems and understand the world. For these reasons, the United States should strive to lead the world in AI innovation. But that leadership must be balanced by an equally ambitious commitment to AI safety, regulation, and accountability.

The Rapid Development of AI

That commitment is urgent because AI is developing exponentially, at a pace far beyond our human capacity to fully grasp how it works and where it is heading. What we are witnessing today is only "baby AI" compared to what AI researchers and CEOs predict lies ahead. When AI fully merges with other breakthrough technologies like quantum computing, nanotechnology, robotics, molecular engineering, and genetic editing, its societal impacts could become destabilizing.

Already we are approaching a pivotal shift in human history: the emergence of AI systems capable of recursive self-improvement—systems that can learn to improve themselves. By crawling the internet and absorbing its vast content, deep learning models are becoming more adept at optimizing their own algorithms. More concerning is the growing possibility that these systems could create entirely new, self-generating algorithms without human involvement. In short, AI is on the path toward autonomy—evolving the capacity to evolve itself.

The Threat of Artificial General Intelligence (AGI)

This trajectory leads us toward Artificial General Intelligence (AGI)—AI with capabilities equal to or exceeding human intelligence. AGI would not only learn, reason, and communicate at a high level, it could operate independently, make decisions, and potentially form its own goals. This is not science fiction—it’s a path we are already on.

How soon? Predictions vary, but recent years have seen experts and industry leaders significantly shorten their timelines. According to Google’s Gemini Advanced, “a cautious synthesis might place the median expectation for AGI arrival in the late 2030s to the 2040s.” That leaves little time for society to prepare.

Potential Consequences of AGI

The most sobering concern is that once AGI surpasses human intelligence and gains autonomy, it may no longer be controllable. A self-programming system could override any safety mechanisms built by human programmers. AGI could alter or destabilize every sector of society. The labor market may face mass job displacements, fueling economic uncertainty. Inequality would deepen as access to the most powerful AI systems becomes limited to the wealthy. Advanced AI will enable surveillance capabilities and authoritarian control measures that make China’s look modest by comparison. Cyberattacks will become more sophisticated, and quantum AI may eventually circumvent encryption methods. AGI will also accelerate the development of autonomous lethal and miniaturized weapons. Psychologically and socially, humans could experience a sense of powerlessness in a world shaped by forces they neither understand nor control.

And all of this could unfold at a pace too fast for individuals and institutions to adapt.

The Need for Regulation

That's why we must inform ourselves about AI's advancement and prepare to control, regulate, and contain it now. Comprehensive federal and international policies are what's most needed, but they won't come under the current administration. Many states have stepped up, passing laws addressing AI's impacts on election integrity, bias, deepfakes, consumer rights, and child protection. Though those laws are now in limbo, the states' efforts have drawn public attention to these issues.

Last year, the Alaska Senate Affairs Committee introduced House Concurrent Resolution 3 (HCR 3), which would establish a Joint Legislative Task Force on AI. The task force would study AI's use in state government and education, bring public attention to it, and make recommendations on legal and ethical concerns such as data privacy, bias, and misinformation. It would also consider AI-driven economic opportunities, including establishing more AI data centers in Alaska. AI and data centers should be considered together, as each facilitates the other.

Even though the task force's recommendations wouldn't have the force of law, we should support its establishment. It would enhance public awareness of AI issues and prepare us to influence how this transformative technology affects Alaskans' lives.
