North Carolina Leads National Effort to Regulate AI

Artificial intelligence (AI) holds great potential to enhance societal functions and improve daily life. However, it also poses significant risks, particularly in empowering malicious actors such as scammers and child sex offenders. This duality has prompted North Carolina Attorney General Jeff Jackson to initiate a national movement aimed at developing regulations to mitigate the darker aspects of AI technology.

Collaboration Across States

Attorney General Jackson is working with Utah Attorney General Derek Brown to spearhead the initiative, which aims to help attorneys general in other states develop and enforce regulations against AI misuse. Their focus is on consumer protection, since many state AGs already devote significant time to combating scams.

Jackson emphasized the need for rules governing AI usage, stating, “If AI can continue to be used with essentially no rules, that would likely lead to bigger, more sophisticated scams.” He cited voice cloning, deepfakes, and AI robocalls as examples of how the technology can be abused.

Addressing Serious Concerns

Brown echoed these sentiments, highlighting the troubling phenomenon of AI-generated fake pornographic images, including those of children. He remarked, “This just highlights the need to create guardrails.” The collaboration now includes attorneys general from several states, including Alabama, Idaho, Illinois, and Massachusetts.

A Rejection of Federal Intervention

This initiative stands in stark contrast to an executive order recently signed by President Donald Trump that threatens to cut federal funding from states that impose new AI regulations. The order followed Congress’s rejection of his proposed ban on state-level AI regulation. Trump argues that economic development could be hampered if companies face varying rules from state to state, and he advocates federal control over AI regulation.

Despite the federal pushback, Jackson believes that individual states are better positioned to address complex issues like AI regulation than Congress. He stated, “My experience with Congress… is that they’re just very unlikely to act, at least until something becomes a major problem.” This perspective reinforces the role of states as “laboratories of democracy,” as articulated by Brown, who noted that state-level legislation can be more agile and effective.

Involvement of Industry Insiders

During their discussions, Jackson and Brown also included industry insiders, emphasizing the importance of collaboration between state leaders and AI professionals. Jackson pointed out that preventing deepfake child pornography is a shared objective among both the industry and state officials. He believes that insights from industry experts can be instrumental in crafting effective regulations.

As the dialogue progresses, both Jackson and Brown express optimism that a task force will emerge to address these pressing concerns, aiming to create a framework that aligns the interests of the public, the industry, and regulatory bodies.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...