Date: March 25, 2026

AI as the New Frontier of Compliance Risk

Thoropass has released its 2026 State of Audit and Compliance Report, highlighting that AI adoption is now the leading source of compliance risk, with 69% of security leaders stating it is outpacing their controls. The report indicates that while compliance programs have matured, operational inefficiencies persist, particularly in evidence collection during audits.

AI-Driven RegTech: Transforming Compliance in 2026

In 2026, AI-powered RegTech is revolutionizing compliance by shifting from reactive to predictive systems that can identify potential breaches before they occur. The RegTech market is expected to surge to USD 82.8 billion by 2032, making compliance a critical competitive advantage for FinTech firms.

Court Ruling: AI Conversations Lack Legal Privilege

On February 13, 2026, Judge Jed S. Rakoff ruled that communications with generative AI platforms are not protected by attorney-client privilege, warning users to treat anything typed into these tools as potentially public. This ruling highlights the risks of using AI for legal analysis, particularly for non-lawyers, and emphasizes the importance of safeguarding privileged information.

Governance in the Age of AI: Future Challenges and Narratives

A recent study by the Universitat Oberta de Catalunya explores who will set the rules for the future of artificial intelligence, focusing on governance models tied to private digital identities and biometric systems. The research highlights projects like World, co-founded by OpenAI’s Sam Altman, which propose alternative governance frameworks that could undermine democratic legitimacy while promoting narratives of security and inclusion.

Enhancing Data Governance for AI Agents

BigID has announced an expansion of its Data Access Governance (DAG) capabilities to include AI agents, ensuring better oversight and security for non-human entities operating within enterprise environments. The new features include agent identity discovery, access right-sizing, and real-time activity monitoring to enhance data governance and mitigate insider risks associated with AI.

Accessibility Obligations in the AI Act

The AI Act introduces a harmonized legal framework for the development and use of AI systems in the EU, focusing on accessibility obligations for information and interfaces. These requirements apply especially to high-risk AI systems and those subject to transparency obligations, ensuring that all users, including those with disabilities, can access critical information.

Strategic AI Regulation for America’s Future

The White House has introduced a national legislative framework for artificial intelligence, favoring a single set of federal regulations over a patchwork of conflicting state laws. This unified approach aims to foster innovation while positioning the U.S. for an AI arms race with China.

Accessibility in AI: Meeting Compliance under the New EU Regulations

The AI Act establishes a legal framework for artificial intelligence systems in the EU, emphasizing the importance of accessibility for information and interfaces, especially for high-risk systems. It mandates that these systems comply with existing European accessibility directives and standards to ensure that they are perceivable, operable, understandable, and robust for all users, including those with disabilities.
