March 24, 2025

AI-Generated Art: No Copyright Protection Granted

A recent court ruling has determined that AI-generated art does not qualify for copyright protection, meaning that those who generate works solely with AI cannot claim ownership rights in them. The ruling underscores that only human authors can hold copyright, leaving purely AI-generated creations free for anyone to use without attribution or compensation.

Read More »

Essential Model Contractual Clauses for AI Procurement in the EU

The European Commission has released updated Model Contractual Clauses for AI Procurement (MCC-AI) to assist public-sector buyers in navigating AI procurement under the EU AI Act. These clauses serve as a practical tool for both public and private organizations to meet legal obligations when providing or procuring AI systems, especially high-risk solutions.

Read More »

California’s Bold Move Against AI in the Workplace

California state Senator Jerry McNerney has proposed the “No Robo Bosses” Act, which would regulate the use of artificial intelligence in employment decisions by mandating human oversight and transparency. The bill seeks to prevent employers from relying solely on automated decision systems for hiring and other critical workplace choices, marking a significant step in AI regulation.

Read More »

Preparing for the EU AI Act: Essential Steps for CIOs

The European Union’s AI Act takes effect in phases through 2026, and many organizations are already falling behind in their compliance efforts. CIOs play a crucial role in guiding their organizations through the transition, making early action and collaboration across departments essential.

Read More »

Risk Pyramid: Assessing AI Compliance for Medical Devices

Medical device manufacturers should use a risk pyramid to assess whether their products are classified as high-risk and therefore require conformity assessments under the EU’s Artificial Intelligence Act. The act introduces a risk-based system for classifying AI applications, and high-risk devices will be subject to stricter obligations starting in August 2026.

Read More »

Huawei Scandal Exposes Corporate Influence on EU AI Standards

A recent corruption scandal involving Huawei highlights ongoing failures by European institutions to protect democratic processes from influence operations. With major tech corporations, including Huawei, engaged in the standard-setting work for the EU’s AI Act, concerns are growing that these processes lack transparency and may prioritize corporate interests over public welfare.

Read More »

Taming General-Purpose AI: Safety, Security, and Ethical Safeguards

General-purpose AI offers vast potential, but ensuring its safety, security, and ethical use is crucial. Developers face hurdles such as persistent harmful behaviors, easy circumvention of safeguards, and difficulty quantifying risks. Monitoring and intervention, including AI content detection and multi-layered defenses, are vital for preventing malfunctions. Protecting privacy involves data scrubbing, privacy-enhancing technologies, and user-centric controls. Balancing safety with innovation, legal frameworks, and business incentives remains a key challenge for building trustworthy AI.

Read More »