Category: AI Regulation

Guidelines for AI Models with Systemic Risks Under EU Regulations

The European Commission has issued guidelines to help providers of AI models deemed to carry systemic risks comply with the EU’s AI Act, whose obligations for such models take effect on August 2. The guidelines aim to clarify what businesses must do to avoid substantial fines for non-compliance while addressing concerns about regulatory burden.

States Lead the Charge in AI Regulation

States across the U.S. are rapidly enacting their own AI regulations after a proposed federal moratorium on state AI laws was dropped, leaving a fragmented patchwork that businesses must navigate. Key states such as California and Colorado are focusing on issues like algorithmic discrimination and data transparency, presenting both challenges and opportunities for enterprises in the AI sector.

AI Compliance: Harnessing Benefits While Mitigating Risks

AI is transforming compliance functions by enhancing detection capabilities and automating routine tasks, but it also introduces significant risks that organizations must manage. To deploy AI responsibly, compliance leaders need to balance innovation with accountability, addressing key risk areas such as bias, fraud, and data privacy.

EU’s New AI Regulations: A Threat to Free Speech and Innovation

The EU’s new “safety and security” standards require tech companies to moderate content on general-purpose AI models to prevent “hate” and “discrimination.” Critics anticipate that the regulation will increase censorship across major tech platforms, potentially undermining democratic processes and fundamental rights.

EU Launches AI Advisory Forum to Shape Future Regulation

The European Commission is inviting experts to apply for its newly established AI Act Advisory Forum, which will provide guidance on implementing the EU’s AI Act and ensuring responsible AI use. The forum seeks professionals from a diverse range of backgrounds to ensure balanced representation and to address the multifaceted challenges of AI regulation.

Bridging the AI Confidence Gap: Insights for CEOs

EY’s study reveals a significant disconnect between CEOs’ perceptions of AI concerns and actual public sentiment, with consumers expressing greater worry than executives assume about issues such as data privacy and misinformation. To bridge this gap, EY proposes a nine-point framework for responsible AI governance that addresses consumer apprehensions.

Confronting the Risks of Shadow AI in the Enterprise

IBM has introduced tools to help organizations manage AI systems they may be unaware of, addressing the growing challenge of shadow AI. With a significant number of employees using unapproved AI tools, the company aims to unify governance and security to mitigate associated risks.

Utah Lawmaker to Lead National AI Policy Task Force

Utah State Rep. Doug Fiefia has been appointed to co-chair a national task force aimed at shaping state-level artificial intelligence policies. The task force, organized by the Future Caucus, intends to counteract partisan gridlock and provide lawmakers with the necessary resources for effective AI governance.
