AI Act Compliance: Strategic Insights for Businesses

The EU Artificial Intelligence Act (AI Act) is the first comprehensive legal framework regulating artificial intelligence across the European Union, establishing obligations for companies both inside and outside the Union. It adopts a risk-based approach, requiring compliance frameworks that address the legal, technical, and ethical dimensions of deploying AI systems.

Harnessing AI: The Role of LLMs, SLMs, and NLP in Legal Innovation

The integration of Artificial Intelligence (AI) into the legal field holds significant promise for enhancing efficiency and improving access to justice. Combining Large Language Models (LLMs), Small Language Models (SLMs), and traditional Natural Language Processing (NLP) techniques is essential for building responsible AI solutions in law.

Building Responsible AI: A Comprehensive Risk Assessment Toolkit

The Responsible AI Question Bank serves as a comprehensive framework designed to support organizations in assessing and managing risks associated with AI systems. By integrating key principles of AI ethics into structured questions, it aims to facilitate compliance with emerging regulations and enhance overall AI governance.

Colorado AI Act Amendments: Key Changes and Implications

The Colorado legislature is considering significant amendments to the nation’s first algorithmic discrimination law, introduced by Senator Robert Rodriguez and Representative Brianna Titone. These amendments aim to redefine algorithmic discrimination and narrow the definition of consequential decisions within the Colorado AI Act.

Transforming Clinical Trials Through AI Regulation

Regulatory changes in the EU are reshaping the clinical trial landscape, emphasizing the need for greater data transparency and compliance. Biopharma companies that leverage AI and adapt to these changes will be better positioned to drive innovation and improve patient outcomes in clinical research.

First Step in Combating AI-Driven Deepfake Abuse

On April 28, the House of Representatives passed the Take It Down Act, the first major federal legislation addressing AI-driven harm by criminalizing the publication of non-consensual intimate imagery, including AI-generated deepfakes. The bipartisan bill requires platforms to remove such content within 48 hours of notification, aiming to protect victims from further trauma.

AI Transparency Framework Proposed for Utah’s New Office

The Aspen Institute has introduced a new framework aimed at guiding Utah’s Office of Artificial Intelligence Policy (OAIP) in standardizing evaluation processes for AI initiatives. This framework emphasizes transparency and seeks to improve engagement between the state government and the community regarding the use of AI technologies.
