GSA Delays AI Procurement Terms to Enhance Industry Feedback

The U.S. General Services Administration (GSA) has postponed the rollout of proposed terms and conditions for AI procurement to allow more time for industry feedback, extending the comment period to April 3, 2026. The proposed AI Clause includes significant provisions regarding intellectual property rights, data handling, and requirements for “American AI Systems.”

AI Policy Framework: Congress Faces Critical Questions Ahead

On March 20, 2026, the White House revealed its National Policy Framework for Artificial Intelligence, outlining legislative recommendations and urging Congress to create a unified federal standard. The framework focuses on seven core pillars, including protecting children, safeguarding communities, and promoting AI innovation, while acknowledging gaps in regulatory enforcement and data privacy.

Colorado’s AI Law: Preparing for Compliance and Governance Challenges

Colorado’s SB 24-205, effective June 30, 2026, requires businesses to assess their use of AI in high-risk areas such as hiring and lending and to maintain robust risk management programs and human review processes. Companies should begin inventorying their AI systems now to ensure compliance and guard against algorithmic discrimination; failing to prepare could create significant operational and compliance challenges.

Court Imposes Record Sanctions for AI-Generated Legal Misrepresentation

The Sixth Circuit Court of Appeals has imposed sanctions totaling $116,315.09 on two lawyers for citing fictional cases in their appellate briefs, acting under both Federal Rule of Appellate Procedure 38 and the Court’s inherent authority. The ruling underscores the dangers of relying on unverified AI-generated content, which can include fabricated citations known as “hallucinations.”

Mastering EU AI Act Compliance for Security Leaders

The EU AI Act establishes a comprehensive legal framework for artificial intelligence, imposing enforceable oversight requirements on organizations that develop or deploy AI systems within the EU. Compliance requires organizations to inventory their AI systems, classify their risk levels, and implement governance processes that ensure ongoing adherence to the regulation.

Experts Call for Urgent Action on AI Regulation in Canada

Federal MPs are working to address the regulation of artificial intelligence, focusing on its implications for jobs, cybersecurity, and data sovereignty. Experts emphasize the need for better public consultation and express concern over the growing trust gap regarding AI technology.

AI Governance: Building Trust and Accountability in Enterprises

AI adoption in large organizations has outpaced the establishment of governance frameworks, resulting in 65% of AI programs failing to scale beyond pilots. To address this, companies need a centralized inventory of AI systems for effective governance and risk management, ensuring accountability and oversight throughout the AI lifecycle.

Korea’s AI Basic Act: A New Era for Technology Regulation

South Korea’s new AI Basic Act, effective January 2026, aims to regulate high-impact AI systems while promoting technology development and safety for users. It introduces a unique framework that encourages voluntary measures for AI safety, contrasting with the more stringent regulations found in the EU AI Act.

Transforming AI Risk Governance: A Sociotechnical Approach

This paper discusses the importance of risk management in AI governance, highlighting the need for frameworks that focus on preventing harms rather than merely reducing hazards. It advocates for a sociotechnical approach to risk assessment, emphasizing the integration of various expertise and interventions to effectively mitigate AI-related risks.
