Category: Artificial Intelligence Regulation

Regulating the Power of Artificial Intelligence

Dr. Rejoice Malisa-van der Walt emphasizes the importance of developing South Africa’s artificial intelligence (AI) policy by integrating elements from international frameworks while addressing the country’s unique needs. The goal is regulation that fosters innovation and ensures AI serves all citizens, taking account of the country’s past challenges and current development goals.


Texas Takes the Lead in AI Regulation with New Governance Act

Texas has passed the Responsible Artificial Intelligence Governance Act (TRAIGA), which aims to regulate AI use in the workplace without imposing significant new burdens on employers. If signed by Governor Abbott, the law will take effect on January 1, 2026, making Texas one of a growing number of states to establish AI regulations.


Congress’ Hidden AI Regulation Ban: A Decade of Unchecked Power

The letter expresses concern over a clause in H.R. 1 that would prohibit state and local governments from regulating artificial intelligence for the next 10 years. It warns that this moratorium could allow unelected officials to deploy AI systems without public accountability, a risk that would persist across future administrations.


Federal Ban on State AI Regulations Sparks Controversy

The U.S. House of Representatives has passed a budget bill that includes a controversial 10-year ban on states regulating artificial intelligence, which now moves to the Senate for consideration. The proposed ban has faced significant opposition, with experts arguing it undermines state efforts to protect residents from potential harms associated with AI.


Neurotechnologies and the EU AI Act: Legal Implications and Challenges

The article discusses the implications of the EU Artificial Intelligence Act for neurotechnologies, particularly in the context of neurorights and the regulation of AI systems. It examines the Act’s definition of an AI system, its prohibition of subliminal techniques, and its categorization of high-risk applications in areas such as emotion recognition and the use of biometric data.


Building Trust in AI Through Effective Governance

Ulla Coester emphasizes the importance of adaptable governance in building trust in AI, noting that ill-defined threats make it harder to establish global confidence in the technology. She advocates cross-disciplinary collaboration and the use of frameworks such as the EU AI Act to align AI development with societal values and expectations.


AI Compliance: Copyright Challenges in the EU AI Act

The EU AI Act emphasizes copyright compliance for generative AI models, particularly regarding the vast datasets used for training. It requires providers of general-purpose AI to implement policies that respect copyright protections and to be transparent about the content used to train their models.
