Legislators Should Assert States’ Rights and Regulate AI
In a recent move, President Donald Trump issued an executive order aimed at preempting new state legislation regulating artificial intelligence (AI). The measure, which includes some exceptions, rests on the belief that the United States must maintain dominance in AI for the sake of national and economic security.
Trump is calling on Congress to establish a federal law that would set a single national standard, effectively replacing the individual state regulations he claims hinder necessary innovation. States that do not comply with the executive order risk legal challenges from the U.S. Department of Justice and the potential loss of federal funding.
Challenges for the South Carolina Legislature
Against this backdrop, a pressing question arises: How should the South Carolina Legislature respond? There is bipartisan consensus on the need to protect children from online exploitation, with threats such as sextortion and AI chatbots dispensing mental health counseling posing significant dangers. Alarmingly, statistics suggest that as many as one in five children may have fallen victim to these dangers, which have contributed to well over 50 suicides nationally.
Despite the urgency, Congress has passed no substantial AI regulation, aside from a law addressing intimate imagery involving deepfakes; its last major effort in this area was the Communications Decency Act of 1996, which primarily serves as a civil liability shield for AI companies.
Moreover, AI-generated threats extend beyond children. Adults struggling with mental health issues are also at risk. A recent headline from the Washington Post highlights a disturbing incident: “A former tech executive killed his mother. Her family says ChatGPT made her a target.”
State-Level Legislative Action
Over 250 bills have been introduced in 47 states to regulate AI in health care, and more than 40 bills have been filed addressing AI algorithms that enable anti-competitive and discriminatory practices in the housing industry, leading to inflated rents. Algorithms crafted by tech companies also contribute to the severe divisions within the nation: social media platforms exploit user data to maximize engagement, often by promoting messages that elicit strong emotional responses.
According to Cindy Shen, a communications professor at the University of California at Davis, “outrage and hostility tend to drive engagement,” ultimately resulting in “online echo chambers where extreme views go unchallenged.” This dynamic adversely impacts the political landscape, as candidates feel compelled to cater to the most extreme opinions within their voter base, making bipartisan cooperation increasingly difficult.
Given the current climate, it is unlikely that Congress will pass effective legislation to regulate AI, especially given the billions that AI companies are willing to spend to shape the debate. A national Gallup poll indicates that 80% of Americans support more stringent AI safety regulations, yet the influence of corporate interests complicates the legislative process.
The Call for State Regulations
State Rep. Brandon Guffey, a Republican from York County, advocates for South Carolina to move forward with its own AI regulations to safeguard citizens from both intentional and unintentional harms. Guffey has emerged as a national leader in protecting children from online threats, asserting that if federal efforts to limit states’ rights take effect, he will strive to pass as many AI regulations as possible to challenge them in court.
While the executive order suggests exemptions for state AI regulations focused on child safety, it remains uncertain whether those promises will hold against resistance from tech companies. The need for AI regulation that protects society is clear, especially given a Congress perceived as ineffective and beholden to tech billionaires. As such, the responsibility to regulate AI may fall to the states.