Category: AI Regulation

Shaping the Future: The EU AI Act Explained

The European Union’s Artificial Intelligence Act (AI Act) aims to create a balanced framework for the responsible development and deployment of AI systems, addressing ethical considerations, transparency, and safety. This legislation represents a commitment to fostering a human-centric AI ecosystem that protects fundamental rights while promoting innovation.

Read More »

North Carolina Appoints First AI Governance Leader

The N.C. Department of Information Technology (NCDIT) has appointed I-Sah Hsieh as its first artificial intelligence governance and policy executive to promote responsible AI use in the state. Hsieh brings over 25 years of expertise in AI governance and ethics, aiming to enhance efficiency while ensuring digital safety for residents, businesses, and visitors.

Read More »

EU Commission’s Contingency Plans for AI Standards Delays

The European Commission is prepared to offer alternative solutions if the technical standards supporting the EU's AI Act are delayed; the main standardization bodies have announced that the standards will now be ready in 2026 rather than by August 2025 as originally planned. The Commission emphasizes that while these standards are not mandatory, they will significantly ease compliance efforts for providers of high-risk AI systems.

Read More »

AI Accountability: Adapting to the EU Act’s New Standards

The EU AI Act introduces new regulations that hold developers and tech builders accountable for their AI systems, shifting the focus from pure innovation to compliance. In practice, this means demonstrating the integrity of training data and thoroughly testing AI system performance before deployment.

Read More »

Hungary’s Biometric Surveillance: A Threat to Rights and EU Law

Hungary’s recent amendments to its surveillance laws allow police to use facial recognition technology for all types of infractions, including minor ones, posing significant risks to fundamental rights such as freedom of assembly and expression. Critics view these changes as a clear violation of the EU’s AI Act that undermines public trust in democracy and warrants urgent scrutiny from the European Union.

Read More »

AI Agents: Balancing Innovation with Accountability

Companies across industries are rapidly adopting AI agents: generative AI systems designed to act autonomously and make decisions without constant human input. However, the increased autonomy of these agents raises significant risks, including misalignment with developer intentions and unpredictable behavior that could cause a range of harms.

Read More »

Big Tech’s Vision for AI Regulations in the U.S.

Big Tech companies, AI startups, and financial institutions have expressed their priorities for the U.S. AI Action Plan, emphasizing the need for unified regulations, energy infrastructure, and workforce development. As the White House finalizes this plan, companies like Amazon, Meta, and Microsoft outline their visions for AI growth and innovation.

Read More »

Czechia’s Path to Complying with EU AI Regulations

The European Union’s Artificial Intelligence Act introduces significant regulations for the use of AI, particularly in high-risk areas such as critical infrastructure and medical devices. Czechia is preparing to implement these regulations, emphasizing the need for transparency and for AI literacy among users within organizations.

Read More »